\section{Introduction} The phenomenon of the domino wave - the propagation of toppling along a row of equal-sized, equispaced rectangular blocks, which we will further refer to as dominoes - has attracted the attention of many researchers. Probably the first call to consider the mechanics behind domino waves belongs to Daykin (1971) \cite{Daykin_1971}, who suggested formulating a set of reasonable assumptions that would allow the velocity of the domino wave to be found with mathematical rigor. McLachlan et al. \cite{McLachlan_1983} used dimensional analysis to establish the scaling law for the wave propagation velocity - it was shown that the propagation velocity has the form \begin{equation} \label{eq1} v = \sqrt{gl}G\left(\frac{d}{l}\right), \end{equation} where $g$ is the acceleration of free fall, $l$ is the length of the domino, $d$ is the spacing between dominoes and $G$ is some unknown function. The work \cite{McLachlan_1983} also demonstrated that experiments confirm the suggested scaling. The work of Bert \cite{Bert_1986} presented a complete solution for the dynamics of colliding dominoes in terms of elliptic integrals. Stronge \cite{Stronge_1987} was the first to demonstrate the existence of an analytical limit velocity of domino wave propagation that does not depend on the initial perturbation, and also discussed the importance of friction between colliding dominoes. A question from the Dutch national science quiz of 2003 motivated the work of van Leeuwen \cite{vanLeeuwen_2010}, which was published as a pre-print nearly concurrently with the paper by Efthimiou and Johnson \cite{Johnson_2007} (hereafter - EJ). These two works, carried out independently of the earlier works \cite{Bert_1986, Stronge_1987}, sparked a new wave of interest in the problem of domino collisions. Following the earlier work of Shaw \cite{Shaw_1978}, van Leeuwen considers the collisions between dominoes as inelastic, which leads him to the concept of ``domino trains'', or solitons.
In contrast, the EJ model \cite{Johnson_2007}, as well as the earlier work \cite{Bert_1986}, considers the propagation of the domino wave as a series of pair-wise elastic collisions. Both approaches are valid in a certain range of domino properties - the theory of van Leeuwen appears more suitable for slow processes with large dominoes and inelastic collisions (e.g. bricks or large concrete blocks), whereas the theory of Efthimiou and Johnson better suits fast processes with small and thin dominoes (e.g. centimeter-size pieces of glass). Intermediate regimes between the two theories are complex (see the discussion in section 3.2 below); it is therefore hard to derive one theory as a limit case of the other. These two works inspired a large number of subsequent developments on the problem, exploring its different aspects (see, e.g., the recent work \cite{Song_2021} and references therein). Surprisingly, however, there have been no attempts to look deeper into another important feature of the problem - contact interactions between dominoes. Starting from the scaling law (\ref{eq1}), it was assumed that the domino wave propagation velocity should be independent of the density and the Young's modulus of the domino's material. This assumption is justified when the collision time is negligibly small compared to the free fall time of the domino. However, the assumption breaks down for small separations between dominoes, which results in a singular behavior of the domino wave propagation velocity according to \cite{Johnson_2007}. Of course, no mechanical interaction between dominoes can propagate faster than the speed of the P-wave in the domino's material. This deficiency of the EJ theory was pointed out in \cite{Larham_2008} - it was noted that the theoretical predictions diverge significantly from the experimental observations, especially in the region of small separations between dominoes.
Another important consequence of the assumption of ``instant'' elastic collisions is the infinite contact forces and torques acting on dominoes at the moment of collision. This explicitly contradicts the assumption of a ``sufficient'' friction force to exclude slip at the domino's support point, since ``sufficient'' in this case implies an infinite coefficient of friction, or a hinge-like connection of the domino with the foundation. This makes it hard to relate the predictions of the theory to real domino-like systems. In our work, we suggest extending the EJ theory by incorporating a finite contact stiffness between dominoes. It appears that this modification leads to a more complex scaling of the domino wave velocity than the one predicted by McLachlan. In our model, the domino wave does not exceed the wave velocity in a chain of dominoes in elastic contact. Our theoretical prediction of the domino wave velocity agrees with the results of Discrete Element Method (DEM) numerical modeling and with experimental observations - given that the assumptions behind the model are satisfied. The paper is organized as follows. In the next section, we describe our analytical model and highlight some of its properties. The third section compares the predictions of our model with the results of experiments and DEM numerical simulations, and establishes the borders of applicability of the modified theory. The concluding section discusses our findings. \section{Finite collision time domino wave theory} \subsection{The case of perfectly rigid dominoes} Let us first concisely overview the major results of the model presented by Efthimiou and Johnson \cite{Johnson_2007}, which we have chosen as the baseline for our derivations. They modeled a row of dominoes as a system of initially vertical, infinitely thin rigid rods of height $l$, equispaced at distance $d$ apart, standing on a frictional horizontal foundation (Fig. 1(A)).
The inertial properties of a domino are represented by a point mass $m$ at the upper tip of the massless rod.\footnote{Other mass distributions can be straightforwardly considered (see, e.g., \cite{Bert_1986}); the simple concentrated-mass case is chosen in \cite{Johnson_2007} for brevity of the analytical expressions.} The friction between the dominoes and the foundation is assumed to be sufficient to exclude slip; therefore, the domino toppling is viewed as a purely rotational motion. The friction between dominoes is neglected. The following notations are used (our notations mostly follow \cite{Johnson_2007}): \begin{itemize} \item $\Omega_n$ is the angular velocity of the domino $n$ immediately after its collision with the domino $n-1$ \item $\Omega_{fn}$ is the angular velocity of the domino $n$ immediately before its collision with the domino $n+1$ \item $\Omega_{bn}$ is the angular velocity of the domino $n$ immediately after its collision with the domino $n+1$ \item $A_{n}$ is the point of support (center of rotation) of the domino $n$ \item $\beta = \arcsin(d/l)$ is the angle of inclination of the domino $n$ at the moment of its collision with the domino $n+1$ (Fig. 1(B)) \item $I = m l^2$ is the moment of inertia of the domino $n$ with respect to $A_{n}$ \end{itemize} \begin{figure} \begin{center} \includegraphics[width=14.5cm]{fig1} \protect\caption{Domino model under consideration. The configuration of the dominoes at the initial moment (A) and at the moment of collision of the $n$-th and $(n+1)$-th dominoes (B).} \end{center} \end{figure} Assuming exact conservation of energy and angular momentum during the collision, we can write down the following system of equations linking $\Omega_n$, $\Omega_{fn}$ and $\Omega_{bn}$:
\begin{equation} \label{eq2} \begin{cases} \frac{1}{2}I\Omega_{fn}^2 = \frac{1}{2}I\Omega_{bn}^2 + \frac{1}{2}I\Omega_{n+1}^2\\ \Omega_{fn}\cos^2{\beta} = \Omega_{bn}\cos^2{\beta} + \Omega_{n+1} \end{cases} \end{equation} Its solution can be given as: \begin{equation} \label{eq3} \begin{split} \Omega_{n+1} & = f_{+}\Omega_{fn} \\ \Omega_{bn} & = \frac{\Omega_{n+1}}{ f_{-}} \end{split} \end{equation} where \begin{equation} \label{eq4} f_{\pm} = \frac{2}{\cos^2 \beta \pm 1/\cos^2\beta} \end{equation} Further, considering the energy balance for the domino falling in the gravity field, one can write: \begin{equation} \label{eq5} \frac{1}{2}I\Omega_{n}^2 + mgl = \frac{1}{2}I\Omega_{fn}^2 + mgl \cos \beta \end{equation} Combining (\ref{eq3}) and (\ref{eq5}) we have \begin{equation} \label{eq6} \Omega_{n+1}^2 = f_{+}^2 \Omega_{n}^2 + b \end{equation} where \begin{equation} \label{eq7} b = \frac{2g}{l}f_{+}^2(1-\cos \beta) \end{equation} Solving (\ref{eq6}), as in \cite{Johnson_2007}, for the $n$-th term of the mixed progression, we have \begin{equation} \label{eq8} \Omega_{n}^2 = f_{+}^{2(n-1)} \Omega_{1}^2 + b \frac{1-f_{+}^{2(n-1)}}{1-f_{+}^{2}} \end{equation} A nice property of this solution is that it predicts that the limit angular velocity at $n \rightarrow \infty$ does not depend on the angular velocity $\Omega_1$ caused by the initial external push: \begin{equation} \label{eq9} \lim_{n \rightarrow \infty} \Omega_{n}^2 = \frac{2g}{l}(1-\cos \beta) \frac{f_{+}^2}{1- f_{+}^2} \end{equation} The speed of the domino wave is defined by the time period between two subsequent domino collisions.
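The convergence of the progression (\ref{eq8}) to the limit (\ref{eq9}) is easy to check numerically. A minimal sketch (the domino length and the two initial pushes below are assumed values, chosen only for illustration) shows that very different initial angular velocities yield the same limit:

```python
import math

def f_plus(beta):
    # Collision transfer factor f_+ = 2 / (cos^2(beta) + 1/cos^2(beta)), eq. 4
    c2 = math.cos(beta) ** 2
    return 2.0 / (c2 + 1.0 / c2)

def omega_n_sq(n, omega1_sq, beta, g=9.81, l=0.044):
    # n-th term of the mixed progression, eq. 8
    fp2 = f_plus(beta) ** 2
    b = (2 * g / l) * fp2 * (1 - math.cos(beta))
    return fp2 ** (n - 1) * omega1_sq + b * (1 - fp2 ** (n - 1)) / (1 - fp2)

def omega_limit_sq(beta, g=9.81, l=0.044):
    # Limit angular velocity squared, eq. 9
    fp2 = f_plus(beta) ** 2
    return (2 * g / l) * (1 - math.cos(beta)) * fp2 / (1 - fp2)

beta = math.asin(0.5)            # d/l = 0.5
for w1_sq in (1.0, 100.0):       # two very different initial pushes
    print(omega_n_sq(200, w1_sq, beta), omega_limit_sq(beta))
```

Both printed values approach the same limit, illustrating the insensitivity to the initial push.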
This time period can be derived by integrating the equation of energy balance for the domino inclined to an arbitrary angle $\theta \in (0, \beta)$, \begin{equation} \label{eq10} \frac{1}{2}I\Omega_{n}^2 + mgl = \frac{1}{2}I \left({\frac{d\theta}{dt}}\right)^2 + mgl \cos \theta \end{equation} or \begin{equation} \label{eq11} \int_{0}^{T_{n}}dt = \int_{0}^{\beta} \frac{d \theta}{\sqrt{\Omega_{n}^2+ \frac{2g}{l} - \frac{2g}{l} \cos \theta}} \end{equation} The closed-form solution is given in terms of elliptic integrals of the first kind: \begin{equation} \label{eq12} T_{n} = \frac{2}{\sqrt{a_n+c}}\left[ K(\kappa_n) - F\left( \frac{\pi - \beta}{2}, \kappa_n \right) \right] \end{equation} where $a_n = \Omega_n^2+2g/l$, $c = 2g/l$, and $\kappa_n = \sqrt{\frac{2c}{a_n+c}}$. In the limit of large $n$, the time $T_n$ approaches a limiting value \begin{equation} \label{eq13} T = \frac{2}{\sqrt{a+c}}\left[ K(\kappa) - F\left( \frac{\pi - \beta}{2}, \kappa \right) \right] \end{equation} where $a = \Omega^2+2g/l$, $\kappa = \sqrt{\frac{2c}{a+c}}$. This result straightforwardly leads to the expression for the domino wave velocity: \begin{equation} \label{eq14} v = \frac{d}{T} = \frac{d\sqrt{a+c}}{2\left[K(\kappa)-F\left( \frac{\pi - \beta}{2}, \kappa \right)\right]} \end{equation} This expression can be explicitly re-written in McLachlan's form $v = \sqrt{gl}G(\frac{d}{l})$ (see \cite{Johnson_2007} for the closed-form expression for $G$). \subsection{The case of compliant dominoes} The EJ theory presented above does not capture the behavior of realistic domino chains for small separations between dominoes - the theory predicts singular behavior of the velocity with the scaling $1/\beta$, or $l/d$, which is not observed in experiment. In order to explain this discrepancy, we suggest generalizing the EJ model by taking into account the finite collision time between dominoes.
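Before introducing compliance, the rigid-theory velocity (\ref{eq14}) can be evaluated by direct midpoint quadrature of (\ref{eq11}); a sketch with hypothetical domino sizes, illustrating the $l/d$-type divergence at small separations:

```python
import math

def rigid_wave_velocity(d, l, g=9.81, n_steps=20000):
    # Rigid-domino wave velocity v = d / T, with the fall time T obtained
    # by midpoint quadrature of eq. 11 (equivalent to the
    # elliptic-integral form, eqs. 12-14)
    beta = math.asin(d / l)
    c2 = math.cos(beta) ** 2
    f_plus = 2.0 / (c2 + 1.0 / c2)
    omega_sq = (2 * g / l) * (1 - math.cos(beta)) * f_plus**2 / (1 - f_plus**2)
    h = beta / n_steps
    T = sum(h / math.sqrt(omega_sq + (2 * g / l) * (1 - math.cos((i + 0.5) * h)))
            for i in range(n_steps))
    return d / T

# hypothetical sizes, meters: velocity grows sharply as d shrinks
print(rigid_wave_velocity(d=0.022, l=0.044))
print(rigid_wave_velocity(d=0.002, l=0.044))
```

The second value is much larger than the first, reproducing the singular small-separation behavior discussed above.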
Our analysis is based on the following assumptions: \begin{itemize} \item Interactions between dominoes remain perfectly elastic, but the contact stiffness is no longer infinite. This leads to finite overlaps between dominoes and a finite collision time, comparable with the limit time of the domino's free fall given by (\ref{eq13}) - see the diagram in Fig. 2. \item The collision between dominoes is represented by the unconstrained head-on collision of two equal spherical particles in translational motion. The particles are assumed to have a finite radius $R$, which is considered to be much less than $l$ and $d$, and therefore does not affect the angle $\beta$. However, the maximum overlap $\delta$ between particles is assumed to be much less than $R$. \item The contact stiffness is constant (the assumption known in the DEM community as a ``linear'' contact model \cite{Cundall_1979}). This stiffness is defined based on the radius of the particles in contact and their Young's modulus (see below). \item The collision time is defined as the half-period of free vibration of the equivalent undamped spring-mass system, meaning that the work of the external force (gravity) is neglected during the collision. \end{itemize} \begin{figure} \begin{center} \includegraphics[width=10.0cm]{fig2} \protect\caption{Comparative time diagram of one period of rigid (instantaneous) and compliant (finite time) collision.} \end{center} \end{figure} In order to establish the wave propagation velocity in the system of compliant dominoes, let us have a closer look at the diagram in Fig. 2, comparing the motion in the systems of rigid and compliant dominoes.
Consider the situation when rigid and compliant chains with otherwise identical parameters are synchronized at the initial moment $t_0$, corresponding to the moment of detachment of the compliant dominoes: \begin{equation} \label{eq15} \theta^r_n(t_0) = \theta^c_n(t_0) \end{equation} The motion in both chains is identical until the moment $t_1$, corresponding to the instant of collision in the rigid chain (or the beginning of the collision process in the compliant chain). Let us denote the end of the collision in the compliant chain as $t_3$, and define the moment $t_2$ such that \begin{equation} \label{eq16} \theta^r_{n+1}(t_2) = \theta^c_{n+1}(t_3) \end{equation} It is easy to see that the period between sequential collisions in the rigid system, $T^{rig} = t_2 - t_0$, differs from the similar period in the compliant system, $T^{compl} = t_3 - t_0$, by the term $\Delta t = t_3 - t_2$. Assuming a constant-stiffness collision and harmonic acceleration, one can establish the relation between $\Delta t$ and the collision time $T_c = t_3 - t_1$. The position reached by the compliant domino $n+1$ at the moment of detachment can be expressed in this case as \begin{equation} \label{eq17} \theta^c_{n+1}(t_3) = \theta^c_{n+1}(t_1) + \frac{\Omega_{n+1}}{2} \int_{0}^{T_c} \left(1 - \cos{\frac{\pi}{T_c} t}\right)dt \end{equation} The same position is expressed via the angular velocity of the rigid domino as \begin{equation} \label{eq18} \theta^r_{n+1}(t_2) = \theta^r_{n+1}(t_1) + \Omega_{n+1} \Delta t \end{equation} Given that $\theta^r_{n+1}(t_1) = \theta^c_{n+1}(t_1)$, we immediately get \begin{equation} \label{eq19} \Delta t = \frac{T_c}{2} \end{equation} The domino wave velocity in the system of compliant dominoes can therefore be written as \begin{equation} \label{eq20} v = \frac{d}{T + T_c/2} \end{equation} where $T$ is given by (\ref{eq13}) and $\Delta t = T_c/2$ as derived above.
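The cancellation behind (\ref{eq19}) can be verified numerically: with the harmonic velocity profile of (\ref{eq17}), the angle swept during the collision equals $\Omega_{n+1}T_c/2$, so the compliant chain lags the rigid one by exactly half the collision time. In the sketch below, the values of $T_c$ and $\Omega_{n+1}$ are arbitrary:

```python
import math

T_c = 1e-3       # arbitrary collision time, s
omega = 5.0      # arbitrary post-collision angular velocity Omega_{n+1}, rad/s
N = 10000
dt = T_c / N

# Angle swept by the compliant domino n+1 over the collision, eq. 17
# (midpoint quadrature of the harmonic velocity profile)
swept = sum(0.5 * omega * (1 - math.cos(math.pi * (i + 0.5) * dt / T_c)) * dt
            for i in range(N))

# The rigid domino covers the same angle in time swept/omega after t_1,
# so the lag of the compliant chain is Delta_t = T_c - swept/omega
delta_t = T_c - swept / omega
print(delta_t, T_c / 2)
```

The two printed values coincide, confirming $\Delta t = T_c/2$ independently of $\Omega_{n+1}$.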
Based on the assumptions listed above, we can express the collision time as: \begin{equation} \label{eq21} T_c = \pi \sqrt{\frac{m_{r}}{k_{r}}} \end{equation} where $m_{r}, k_{r}$ are the equivalent mass and stiffness of the spring-mass system representing the collision of dominoes $n$ and $n+1$. Note that $T_c$ does not depend on $n$; therefore, the corresponding limits are omitted here and below. Let us have a closer look at the parameters $m_{r}, k_{r}$. In the case of a head-on collision of two unconstrained particles with masses $m_1,m_2$ and contact stiffnesses $k_1, k_2$ we can write down: \begin{equation} \label{eq22} \begin{split} m_{r} = \frac{m_1m_2}{m_1+m_2} \\ k_{r} = \frac{k_1k_2}{k_1+k_2} \end{split} \end{equation} The stiffnesses $k_1,k_2$ are defined as \begin{equation} \label{eq23} \begin{split} k_1 = \frac{E_1 \pi R^2}{R} = \pi R E_1 \\ k_2 = \frac{E_2 \pi R^2}{R} = \pi R E_2, \end{split} \end{equation} Given $E_1 = E_2 = E$, the contact stiffness is \begin{equation} \label{eq24} k_{r} = \frac{\pi R E}{2} \end{equation} Assuming that the rotations are negligibly small during the collision, we can consider the collision in terms of the translational dynamics of concentrated masses (Fig. 3(A)). The equivalent mass of the first domino is $m$. In order to preserve the moment of inertia $I = ml^2$ of the domino $n+1$ with respect to its support point $A_{n+1}$, we need to assume the collision with a concentrated mass $m'$ placed at the contact point, at distance $l\cos\beta$ from $A_{n+1}$ (Fig. 3(B)), such that \begin{equation} \label{eq25} I = m l^2 = m' (l \cos \beta)^2 \end{equation} The equivalent mass of the second domino is therefore \begin{equation} \label{eq26} m' = \frac{m}{\cos^2 \beta} \end{equation} \begin{figure} \begin{center} \includegraphics[width=12.5cm]{fig3} \protect\caption{Derivation of the collision time. (A) Initial problem, (B) representation as a head-on constrained collision of two point masses.
(C,D) Illustrations of the derivation of the effective stiffness (C) and mass (D) of the unconstrained collision.} \end{center} \end{figure} We now have a head-on soft collision of two concentrated masses. However, both masses are constrained - the mass $n$ can move only along $\xi$, whereas the mass $n+1$ moves only along $x$. Such constraints are usually called holonomic (geometric and integrable) in theoretical mechanics. Computing the collision time requires integration of the motion of the system shown in Fig. 3(B), which, in the general case, is possible only numerically. However, under the assumptions discussed above, we can estimate the collision time based on similarity with the head-on (1D) collision in a system of two unconstrained masses. Below we derive the equivalent mass and stiffness of such a system. Consider the constrained system of two masses shown in Fig. 3(B). The mass $n$ hits the initially resting mass $n+1$ and bounces back along the $\xi$ axis, while the mass $n+1$ starts to move along the $x$ axis. Consider two distinct moments of time during the collision: $t_1$, when the two particles are head-on at the intercenter distance $l_1$, and $t_2$, when the particle $n+1$ has displaced slightly along $x$ relative to the initial head-on position. The vector difference between $\vec{l_1}$ and $\vec{l_2}$ is $\vec{\Delta l}$. As discussed above, overlaps between particles are considered to be much smaller than the particle radius. It is easy to see (Fig. 3(C)) that the intercenter distance between the two particles in this case is: \begin{equation} \label{eq27} \begin{split} l_2 = \sqrt{(l_1 + \Delta l \cos \beta)^2 + (\Delta l \sin \beta)^2} = \sqrt{l_1^2 + 2 l_1 \Delta l \cos \beta + \Delta l^2} \\ = l_1 \sqrt{1 + \frac {2 \Delta l}{l_1} \cos \beta + O \left( \left( \frac {\Delta l}{l_1}\right)^2\right)} \approxeq l_1 \left( 1 + \frac{\Delta l}{l_1} \cos \beta \right) = l_1 + \Delta \xi \end{split} \end{equation} Therefore, if $\Delta l \ll R$, then, up to the leading terms, the effective contact stiffness along $\xi$, as well as the time instances of contact formation and breakage (and therefore the collision time), is not perturbed by the motion transversal to $\xi$. We can therefore conclude that the stiffness along $\xi$ should be taken the same as in the case of unconstrained motion. Let us then have a look at the effect of the constraint on the dynamics along $\xi$. It is easy to see (Fig. 3(D)) that, due to the presence of the constraint, the spring force $f_{\xi}$ can only cause an acceleration along $x$ ($a_x$), causing, in turn, the projected acceleration along $\xi$: \begin{equation} \label{eq28} a_{\xi} = a_{x} \cos \beta = \frac{f_{x}}{m'} \cos \beta = \frac{f_{\xi} \cos \beta}{m'} \cos \beta = \frac{f_{\xi}} {m''} \end{equation} where \begin{equation} \label{eq29} m'' = \frac{m'}{\cos^2 \beta} = \frac{m}{\cos^4 \beta} \end{equation} Note that in the case $\beta = 0$ the effective mass $m''$ exactly coincides with $m$, while for $\beta \rightarrow \frac{\pi}{2}$ we have $m'' \rightarrow \infty$. Therefore, under the aforementioned assumptions, the collision between two dominoes can be viewed as a head-on collision of two unconstrained masses $m_1 = m$ and $m_2 = m''$.
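The reduction above is easy to package; a short sketch (with assumed mass, modulus and particle radius) collecting eqs. (\ref{eq21})-(\ref{eq29}) into the equivalent unconstrained pair:

```python
import math

def equivalent_pair(m, beta, E, R):
    # Equivalent unconstrained head-on pair for the constrained collision:
    # m1 = m, m2 = m'' = m / cos^4(beta)  (eq. 29);
    # per-particle stiffness k = pi*R*E (eq. 23), two in series (eq. 24)
    m2 = m / math.cos(beta) ** 4
    m_r = m * m2 / (m + m2)          # reduced mass
    k_r = math.pi * R * E / 2.0      # series contact stiffness
    return m_r, k_r

# assumed parameters: 10 g domino, beta for d/l = 0.5, plastic-like modulus
m_r, k_r = equivalent_pair(m=0.01, beta=math.asin(0.5), E=2e9, R=1e-3)
T_c = math.pi * math.sqrt(m_r / k_r)  # collision time (half-period), eq. 21
print(m_r, k_r, T_c)
```

Note that the reduced mass collapses to $m/(1+\cos^4\beta)$, the form used in the next subsection.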
A pairwise interaction of dominoes can thus be reduced to a single spring-mass system with the parameters \begin{equation} \label{eq30} \begin{split} k_{r} &= \frac{\pi R E}{2}, \\ m_{r} &= \frac{m_1 m_2}{m_1 + m_2} = \frac{m}{1+\cos^4 \beta} \end{split} \end{equation} Therefore we can express the collision time as \begin{equation} \label{eq31} T_c(k,m,\beta) = \pi \sqrt{ \frac{ 2 m }{k (1+\cos^4 \beta)} } \end{equation} where $k = \pi R E$ is the stiffness of a single contact, or, in terms of the particle radius $R$, the mass $m$ and the material Young's modulus $E$: \begin{equation} \label{eq32} T_c(R,E,m,\beta) = \sqrt{ \frac{2 \pi m }{R E (1+\cos^4 \beta)} } \end{equation} \begin{figure} \begin{center} \includegraphics[width=12.5cm]{fig4} \protect\caption{Velocities of domino wave propagation as functions of the angle $\beta$, for infinitely thin (A, $v(k)$) and finite-thickness (B, $v^s(k, s)$) dominoes. The plots are given for infinitely stiff dominoes (red lines) and three different values of stiffness, and are compared with the corresponding P-wave velocities.} \end{center} \end{figure} One can see that, due to the finite quantity $T_c/2$ in the denominator of (\ref{eq20}), the wave propagation velocity cannot be singular for small separations. It is important to note that the expression (\ref{eq20}) is applicable only as long as the collision events are distinct, meaning that the collision of dominoes $n$ and $n+1$ is not initiated before the complete detachment of dominoes $n-1$ and $n$. It is easy to demonstrate that this condition can be re-written as $T>T_{c}/2$; otherwise, the equations of harmonic pair-wise collisions are no longer valid. For $0<T<T_{c}/2$, one can expect complex patterns associated with the emergence and disappearance of interactions, while for $T = 0$ we end up with the classical equations for wave propagation in a dispersive spring-mass chain, with the velocity of propagation dependent on the frequency of the initial perturbation. Fig.
4(A) illustrates the dependence of the domino wave velocity on the angle $\beta$. The plots are given for three different values of stiffness: $k_1 = k^*$, $k_2 = 10^4 k^*$, $k_3 = 10^8 k^*$, where $k^*$ is the value of stiffness defined by $l/g = m/k^*$ for a given mass $m$. The curve $v(k = \infty)$ is provided for reference. The domino wave velocities are compared with the acoustic P-wave velocity in a spring-mass chain with masses $m_r$, stiffnesses $k_r$ and spring length $d$: \begin{equation} \label{eq33} v_w = d \sqrt{\frac{k_r}{m_r}} \end{equation} One can see that the domino wave predicted by our theory cannot be faster than the corresponding P-wave velocity in the system. \subsection{Dominoes of finite thickness} It is useful to consider the generalization to dominoes of finite thickness $s$. Here and below, $s/l$ is considered to be small. The effect of the finite thickness of a domino is two-fold. First, the finite thickness creates a potential energy minimum that provides the domino's vertical stability in a certain range of inclination angles. This change in the potential energy relief also affects the integral (\ref{eq11}) defining the free fall time. For the purpose of wave velocity estimation, we neglect this effect as quadratic with respect to $s/l$. Second, the finite thickness of a perfectly rigid domino effectively increases the velocity, since the wave travels the distance $d+s$ per toppling period: \begin{equation} \label{eq34} v^s = \frac{d+s}{T+T_c/2} \end{equation} When comparing our results with the experiments and DEM simulations, we use the thickness-adjusted expression (\ref{eq34}) for the velocity. The acoustic wave velocity in such a structure is evaluated as \begin{equation} \label{eq35} v_w^s = (d+s) \sqrt{\frac{k_r}{m_r}} \end{equation} Fig. 4(B) illustrates the effect of finite domino thickness ($s/l = 0.1$) on the curves depicted in Fig. 4(A).
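A compact way to evaluate the compliant-domino prediction is to correct the rigid free-fall time $T$ of (\ref{eq13}) by the half collision time of (\ref{eq32}); the sketch below (all parameter values are assumed, including the rigid-theory fall time passed in as $T$) also enforces the applicability condition $T > T_c/2$:

```python
import math

def compliant_velocity(T, d, s, m, E, R, beta):
    # Finite-stiffness, finite-thickness wave velocity, eqs. 32 and 34
    T_c = math.sqrt(2 * math.pi * m / (R * E * (1 + math.cos(beta) ** 4)))
    assert T > T_c / 2, "collisions overlap: pair-wise theory inapplicable"
    return (d + s) / (T + T_c / 2)

# assumed values: rigid-theory fall time T, 22 mm spacing, 7 mm thickness
v = compliant_velocity(T=0.012, d=0.022, s=0.007, m=0.01, E=2e9, R=1e-3,
                       beta=math.asin(0.5))
print(v)
```

The result is always below the rigid-theory value $(d+s)/T$, since the finite collision time only lengthens the toppling period.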
\subsection{Range of applicability of the theory} Real experiments and DEM simulations (see below) feature more complex mechanical behavior than the one predicted by the theory above. Assuming the validity of the assumptions of 2D motion of dominoes and their sequential pair-wise interactions, the key factor defining the applicability of the extended EJ theory is the friction between the domino and the foundation. If the friction is sufficient, the domino rotates around the support point; if not, the support point can slide along the foundation. Fig. 5(A) illustrates the possible types of domino motion that can be initiated in this case. In the analysis below we establish the bounds within which sliding of the domino's support is impossible. It is convenient to give the bounds in terms of the magnitude of the contact force emerging during the harmonic collision. This force can be related to the limit angular velocity of the domino $n+1$ in the following way: \begin{equation} \label{eq36} \int_{0}^{T_c} F_x dt = \cos{\beta} \int_{0}^{T_c} F_m \sin{\left( \frac{\pi}{T_c}t \right)}dt = m'v = m' \Omega l \cos{\beta} \end{equation} This leads to \begin{equation} \label{eq37} F_m = \frac{\pi m \Omega l}{2 T_c \cos^2 \beta} \end{equation} where $\Omega$ is given by (\ref{eq9}), and $T_c$ is defined according to (\ref{eq32}). The domino can exhibit initiation of forward or backward rotation when the moment created by the contact force at the collision point is not compensated by the moments produced by the frictional force and (in the case of finite domino thickness) the reaction of the support to the gravitational force. This condition can be written as: \begin{equation} \label{eq38} \frac{F_m}{m g} > \frac{\mu+s/l}{2 \cos{\beta}(1-\cos{\beta})} \end{equation} Note that this form is valid for both collisions above and below the level of the domino's center of mass.
Straightforward considerations allow us to conclude that translational sliding of the domino along the foundation can be initiated in one of the two following cases: \begin{itemize} \item Case 1: 1) The horizontal projection of $F_m$ exceeds the frictional force $\mu m g$, which leads to: \begin{equation} \label{eq39} \frac{F_m}{ m g} > \frac{\mu}{\cos{\beta}} \end{equation} 2) The collision is below the level of the resting domino's center of mass: \begin{equation} \label{eq40} \beta > \arccos{\frac{1}{2}} = \frac{\pi}{3} \end{equation} 3) The moment acting on the domino exceeds the one exerted by the frictional force but is lower than the sum of the moments exerted by the frictional and gravitational forces: \begin{equation} \label{eq41} \frac{\mu}{2 \cos{\beta}(1-\cos{\beta})}< \frac{F_m}{m g} < \frac{\mu+s/l}{2 \cos{\beta}(1-\cos{\beta})} \end{equation} \item Case 2: 1) Condition (\ref{eq39}) is met. 2) The toppling moment exerted by the contact force does not exceed the stabilizing moment exerted by gravity: \begin{equation} \label{eq42} \frac{F_m}{m g} < \frac{s}{2 l \cos{\beta}} \end{equation} \end{itemize} \begin{figure} \begin{center} \includegraphics[width=12.5cm]{fig5} \protect\caption{(A) Possible mechanisms of domino sliding, (B) diagram detailing the boundaries between these mechanisms. Blue lines indicate the magnitudes of contact forces, given by (\ref{eq37}), for a few different contact stiffnesses ($k_1 = k^*$, $k_2 = 10 k^*$, $k_3 = 100 k^*$; $k^*$ is defined above). The plots are given for $s/l = 0.1$, $\mu = 0.3$.} \end{center} \end{figure} Fig. 5(B) gives the ranges of angles where scenarios I-IV, depicted in Fig. 5(A), can occur. It is worth noting that the expressions (\ref{eq38})-(\ref{eq42}) give the ``tightest bounds'', beyond which the theoretical assumption of a non-sliding support does not hold. The further domino dynamics after the onset of sliding is hard to establish within the analytical model.
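The bounds above are evaluated by comparing the normalised peak force $F_m/mg$ of (\ref{eq37}) with the friction threshold of (\ref{eq39}); a sketch with assumed parameters:

```python
import math

def peak_force_ratio(beta, m, l, E, R, g=9.81):
    # Normalised peak contact force F_m / (m g), from eqs. 9, 32 and 37
    c2 = math.cos(beta) ** 2
    f_plus = 2.0 / (c2 + 1.0 / c2)
    omega = math.sqrt((2 * g / l) * (1 - math.cos(beta))
                      * f_plus**2 / (1 - f_plus**2))
    T_c = math.sqrt(2 * math.pi * m / (R * E * (1 + math.cos(beta) ** 4)))
    return math.pi * m * omega * l / (2 * T_c * c2) / (m * g)

# assumed parameters; sliding becomes possible once the ratio
# exceeds mu / cos(beta), eq. 39
beta, mu = math.asin(0.3), 0.3
ratio = peak_force_ratio(beta, m=0.01, l=0.044, E=2e9, R=1e-3)
print(ratio, mu / math.cos(beta))
```

Since $T_c \propto 1/\sqrt{E}$, stiffer contacts give shorter collisions and proportionally larger peak forces, which is why the sliding regions in Fig. 5(B) widen with stiffness.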
The numerical modeling and experimental results provided below give some idea of the further dynamic evolution of the domino chain in these cases. \section{Numerical modeling and experiments} \subsection{Experimental determination of domino wave velocity} An experiment was carried out to determine the domino wave velocity. It had a rather simple setup, shown in Fig. 6(A). The set of dominoes was placed manually on a one-meter-long foundation. A ruler was used to position the dominoes with millimeter precision at the given spacing. The number of dominoes used in the experiments varied depending on the spacing. We used plastic parts of $44 \times 20 \times 7$ mm in size ($s/l = 0.16$). The density, stiffness and friction coefficients of the plastic parts were not known precisely. Video recording at a rate of 50 frames per second was used to study the fast process of domino toppling. It was established that the observed velocity does not depend on the initial push if the first ten domino parts are excluded; therefore, the domino wave velocities were determined based on the toppling propagation in dominoes $10..N$. A timer with a scale division of $10^{-3}$ s was used to determine the initial and final events of the domino toppling used in the velocity measurements. 11 spacing intervals ($d/l = 0.03, 0.05$ and $0.1, 0.2, \ldots, 0.9$) were studied. $10$ independent experiments were carried out for every spacing between dominoes, which allowed us to establish error bars defined as one standard deviation of the set of $10$ measurements from their mean. \begin{figure} \begin{center} \includegraphics[width=12.5cm]{fig6} \protect\caption{(A) Experimental setup for the determination of the domino wave velocity. (B) Experimentally determined domino wave velocity as a function of the angle $\beta$.} \end{center} \end{figure} As can be seen from the results of the experiment (Fig. 6(B)), the domino wave does not feature singular (or nearly singular) behavior for small separations.
Our video recordings indicate that the major reason for this is forward rotation of the dominoes (mechanism I in Fig. 5(A)), observed in the regions of small separations. This observation qualitatively agrees with the theoretical prediction (Fig. 5(A, B)). We also observe forward translations of dominoes for large separations, which qualitatively agrees with the diagram in Fig. 5(B). Backward rotations are not observed; the reason for this is the stabilizing role of gravity, transforming the initial backward rotation (mechanism II in Fig. 5(A)) into forward translation (mechanism III in Fig. 5(A)). It is worth noting that in the experiment the forward translation is usually followed by forward toppling of the domino. In this case, the effective angle $\beta_{n+1}$ appears to be smaller, which leads to the fall of the domino $n+1$ without sliding. This, in turn, leads to the sliding/toppling mechanism at the step $n+2$. Such ``odd/even'' patterns were observed in the experiment for $\beta \in (0.8,1)$. Overall, the mechanical behavior of the domino chain observed in the experiment qualitatively agrees with the theoretical predictions; however, the precision of the measurements does not allow us to reliably fit the numerical values defining the collision time and other relevant quantities of our model. \subsection{DEM modeling} We used the discrete element method \cite{Cundall_1979} to study the domino effect, employing the open source DEM package YADE \cite{Yade_2015,Yade_2021} in the calculations. The dynamics of equal-sized rigid spherical beads with mass $m_p$, radius $R_p$, volume $V_p = \frac{4}{3} \pi R_p^3$ and moment of inertia $\frac{2}{5}m_pR_p^2$ was computed using the velocity Verlet time integration scheme. The domino parts were modeled as rigid assemblies (clumps) of the beads (Fig. 7(A)). The normal interaction of the beads of neighboring clumps was given by the linear contact model described above.
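The contact treatment in the simulations can be illustrated by a minimal 1D sketch: velocity-Verlet integration of a single linear-stiffness collision between two free beads (assumed masses and stiffness; this is not the YADE clump model itself). The measured contact duration reproduces the analytic half-period $\pi\sqrt{m_r/k_r}$ of (\ref{eq21}):

```python
import math

m1, m2, k = 0.01, 0.01, 1e4            # kg, kg, N/m (assumed values)
x1, x2, v1, v2 = 0.0, 0.0, 0.5, 0.0    # contact has just formed
dt, t = 1e-7, 0.0

def contact_force(x1, x2):
    # linear repulsive contact: F = k * overlap while overlapping, else 0
    overlap = x1 - x2
    return k * overlap if overlap > 0 else 0.0

f = contact_force(x1, x2)
while True:
    v1 -= 0.5 * dt * f / m1            # half-kick
    v2 += 0.5 * dt * f / m2
    x1 += dt * v1                      # drift
    x2 += dt * v2
    f = contact_force(x1, x2)          # recompute force
    v1 -= 0.5 * dt * f / m1            # half-kick
    v2 += 0.5 * dt * f / m2
    t += dt
    if x1 - x2 <= 0:                   # contact broken
        break

m_r = m1 * m2 / (m1 + m2)
print(t, math.pi * math.sqrt(m_r / k))  # measured vs analytic collision time
```

The same half-kick/drift/half-kick structure, applied to all contacts of all clumps, underlies the full simulations.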
In the presence of non-zero friction between the beads, the classical Cundall and Strack no-slip contact model was employed; a detailed description of this model is available in \cite{Cundall_1979}. Two separate friction coefficients were specified: $\phi_d = 0$ is associated with all the clumped spheres, and $\phi_f = 0.3$ is used for the bottom layer of spheres and the frictional foundation. For every pair contact between entities $1$ and $2$, $\phi = \min{(\phi_1, \phi_2)}$ is used. Both the 3D model, allowing lateral rotation of dominoes, and the simpler 2D model, with flat dominoes constrained to move in the $xz$ plane, were studied. In the case of small lateral asymmetries in the model, the dominoes exhibited complex 3D motion with lateral rotations (bottom inset in Fig. 7(A)) - this was previously observed in experiment \cite{Destin2018}. When no lateral asymmetry is introduced, both models give nearly identical results (Fig. 7(B)); therefore, we used the less computationally expensive 2D model for our studies. \begin{figure} \begin{center} \includegraphics[width=12.5cm]{fig7} \protect\caption{(A) DEM model of the row of dominoes, used in the numerical determination of the wave velocity. (B-D) Dependence of the domino wave velocity on the separation for different model types (B), domino stiffnesses (C) and local damping values (D).} \end{center} \end{figure} It was established that the predictions of the finite-stiffness EJ theory are in qualitative agreement with the DEM simulations (Fig. 7(C)) - the stiffness trend is well predicted by the theory. The discrepancies in the dependence on the angle $\beta$ can be associated with the domino surface roughness in the DEM simulations, as well as with other geometric effects that are not accounted for in the simple analytical model. Similarly to the real experiments and theoretical predictions, the DEM model exhibited sliding of the domino supports, featuring forward rotations for small separations (top inset in Fig.
7(A)) and backward rotations/translations for large separations, given sufficiently slender dominoes and a small coefficient of friction with the foundation. However, the boundaries presented in Fig. 5(B) were not sharply reproduced by the DEM model. In DEM, the regions of toppling appeared to be significantly wider, which, however, does not contradict the theory, since the latter only predicts the onset of sliding, and not the further domino dynamics. Numerical modeling revealed another interesting feature, observed in the damped simulations. It appeared that, in the presence of a small amount of acceleration-dependent local damping (see \cite{Yade_2015} for the exact definition of the local damping utilized in our simulations) and in a certain range of separation angles, the domino $n$ may collide with its neighbor $n-1$ more than once. This effectively increases the toppling propagation velocity (Fig. 7(D)), since every domino gains additional acceleration, not predicted by EJ theory. Such behavior can be considered as an intermediate regime between the EJ model (elastic collisions, single pair interactions) and the van Leeuwen model (inelastic collisions, resulting in ``trains'' of dominoes).

\section{Discussion, conclusions and future work}

Our analysis suggests a rather simple generalization of EJ theory \cite{Johnson_2007} to account for the finite time of collisions between dominoes. This adjustment eliminates the non-physical properties of the EJ model: infinite stiffness of the collision, infinitely small collision time, and infinite propagation velocity in the limit of small separations. Moreover, the adjusted theory allows us to determine the dynamic quantities characterizing collisions, which, in turn, allows establishing the bounds within which the theory is applicable.
The adjustment explains the discrepancy with the experimental data noticed previously \cite{Larham_2008}: the regions where the theory diverges from the experimental data are simply beyond the borders of applicability. The theory can still be refined in a few different ways.
\begin{enumerate}
\item Numerical modeling indicates that, even in the case of slightly inelastic collisions, multiple interactions between sequential dominoes are possible. It would be useful to study collision times for domino-like systems with restitution coefficients close to $1$, and to identify the system parameters leading to multiple interactions within one cycle. These important questions remain beyond the scope of our paper.
\item We used quite a simple model of interaction between dominoes, namely the linear contact model, which originates from the earliest works on interactions between discrete elements \cite{Cundall_1979}. This model assumes a constant interaction stiffness, whose absolute value is motivated by rather simplistic considerations. A more detailed analysis may give a refined picture of the contact interactions of dominoes.
\item Our analysis does not account for the role of gravity during the collision. The gravitational force does non-zero work and applies a non-zero torque that, in principle, should be accounted for in the conservation laws.
\item The analysis uses the simplest possible mass distribution, allowing a quick analytical treatment of the rotational motion of dominoes. The analysis for more realistic mass distributions will lead to somewhat more complicated expressions for the quantities in the conservation laws and, consequently, for the collision time.
\item The considerations above are based on the assumption of vanishingly small friction between dominoes and, therefore, of absent tangential forces at the contact points. As has been discussed in \cite{Stronge_1987}, the presence of friction noticeably affects the scaling of the wave propagation velocity.
\end{enumerate}
Generalization 1) would make our theory applicable to slightly non-conservative systems. Generalizations 2)--5) are expected to result in rather minor adjustments of the quantitative characteristics of the model. The finite-collision-time domino theory is useful for interpreting experimental results for fast domino-like systems at small scales. Moreover, it provides a foundation for modeling these systems with DEM, and helps in interpreting the results of such simulations. The source code of the YADE scripts used in our simulations is available at \url{https://bitbucket.org/iostanin/domino/}

\section*{Acknowledgments}

The work showcases the results of several student BS/ARS projects (D.D., C.L., J.W., L.H., L.K., P.B.) accomplished at the TFE/ET department of the University of Twente. The assistance from the University of Twente ME BS/ARS program is deeply appreciated. I.O. expresses his gratitude to S. Luding and A. Thornton for fruitful discussions on the topic.

\bibliographystyle{unsrtnat}
\section{Introduction}
\label{intro_sec}

Ultra-luminous infrared galaxies (ULIRGs, defined as having $L_{IR}>10^{12}L_{\odot}$; \citealt{1987ARA&A..25..187S}) are among the most luminous objects in the Universe, radiating most of their energy in the infrared (IR) band. In the local Universe, ULIRGs are rare objects and, although very luminous, account for only ${\sim}5\%$ of the total integrated IR luminosity density \citep{1991AJ....101..354S}. Moving to higher redshift, the contribution of ULIRGs to the energy budget increases, as clearly illustrated by infrared luminosity function studies (\citealt{2005ApJ...632..169L}; \citealt{2007ApJ...660...97C}). \citet{2011A&A...528A..35M}, deriving the infrared luminosity function up to $z{\sim}2.3$ from deep 24 and 70~${\mu}$m {\it Spitzer\/}\ data, obtained a contribution of ${\sim}17 \%$ from ULIRGs to the IR luminosity density at $z{\sim}2$. This result has recently been confirmed also by the {\it Herschel\/}\ Science Demonstration Phase (SDP) preliminary results: \cite{2010A&A...518L..27G}, by deriving the total IR luminosity function up to $z\sim{3}$, estimated that ULIRGs account for ${\sim}30 \%$ of the IR luminosity density at $z{\sim}2$. At high redshift ($z{\sim}2$), a key question regards the nature of the sources that power these ultra-luminous objects (i.e., star formation vs. accretion). Locally, thanks to {\it Infrared Space Observatory (ISO)} mid-IR spectroscopy, ULIRGs were proven to be powered predominantly by star formation in the mid-IR, with the fraction of AGN-powered objects increasing with luminosity (from ${\sim}$15\% at $L_{IR}<2{\times}10^{12}~L_{\odot}$ to about 50\% at higher luminosity; \citealt{1998ApJ...505L.103L}). A limited percentage (15--20\%) of mid-IR light dominated by accretion processes has also been found using $L$-band (3--4~$\mu$m) observations of local ULIRGs with $L_{IR}\sim10^{12}L_{\odot}$ by \cite{2010MNRAS.401..197R}.
Interestingly, these authors show that, although their sources are powered mostly by starburst processes, at least 60\% of them contain an active nucleus. Using accurate optical spectral line diagnostics applied to a sample of 70~$\mu$m selected luminous sources at $z\sim$1, \cite{2010MNRAS.403.1474S} found that only 20--30\% of their objects may host an active nucleus. However, such AGN are never bolometrically dominant. In the same paper, a discussion of how the low AGN incidence can be partially due to selection effects (i.e., the 70~$\mu$m band sampling the starburst 50~$\mu$m rest-frame far-IR bump) is also presented. At $z\sim2$, our understanding of the AGN content in ULIRGs is much more uncertain. On the one hand, sources selected with a 24~${\mu}$m flux density $\lower.5ex\hbox{\gtsima}$0.9--1~mJy mainly sample the bright end of the ULIRG population. The analysis of the IRS spectra of a color-selected sample of sources with S$_{24{\mu}m}{\lower.5ex\hbox{\gtsima}}0.9$~mJy and at $z>1.5$ ($L_{IR}{\sim}7{\times}10^{12}L_{\odot}$) shows that the majority of these sources are AGN-dominated (${\sim}75\%$; \citealt{2008ApJ...683..659S}). On the other hand, sub-millimeter selected galaxies (SMGs) at $z{\sim}2$, falling in the ULIRG regime, appear to be largely starburst-dominated objects (e.g., \citealt{2007ApJ...660.1060V}; \citealt{2008ApJ...675.1171P}). In this paper, we aim to provide a better understanding of luminous sources at $z\sim$2 with typical IR luminosities of $10^{12}L_{\odot}$, i.e. sampling the knee of the IR luminosity function instead of the bright end. We will re-analyse the sample from \citet[hereafter F10]{2010ApJ...719..425F}, where an estimate of the AGN contribution in ULIRGs has already been presented, based on {\it Spitzer\/}\ IRS data, {\it Chandra\/}\ 2~Ms observations, optical/near-IR multi-band properties, and ACS morphological properties.
In this work, we will extend the analysis to far-IR data, obtained recently by the {\it Herschel\/}\ satellite as part of the guaranteed-time survey ``PACS Evolutionary Probe'' (PEP; \citealt{2011A&A...532A..90L}), and to the recently published {\it Chandra\/}\ 4~Ms data (\citealt{2011ApJS..195...10X}). We note that an analysis of the F10 sample (including the new far-IR data) has already been presented by \cite{2012ApJ...745..182N} [hereafter N12], where the general properties of the mid-to-far IR spectral energy distributions (SEDs) of 0.7$<z<$2.5 galaxies have been investigated. The authors confirm the early {\it Herschel\/}\ results (e.g., \citealt{2010A&A...518L..29E}), i.e. that the star-formation rates at $z\sim2$ are over-estimated if derived from 24~${\mu}$m flux densities, and discuss how this effect can be due to enhanced PAH emission with respect to local templates (see also \citealt{2010A&A...518L..29E}; \citealt{2011A&A...533A.119E}). In N12, the SED library from \cite{2001ApJ...556..562C} was used longward of 6~$\mu$m rest-frame, and the IRS spectral data were stacked to improve the faint signal of sources at $z\sim{2}$. In this paper we adopt a different method to study the properties of $z\sim2$ IR galaxies, as described in $\S$\ref{modeling_sec}; a comparison of our results with those obtained by N12 is presented in $\S$\ref{agnfraction_sec_fromir}. Hereafter, we adopt the concordance cosmology ($H_{0}=70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{m}$=0.3, and $\Omega_{\Lambda}$=0.7, \citealt{2003ApJS..148..175S}). Magnitudes are expressed in the Vega system.

\section{The data}
\label{sample_sec}

The sample is composed of 24 luminous sources at $z{\sim}$2, selected by F10 in the {\it Chandra\/}\ Deep Field South (CDF-S), with 24~${\mu}$m flux densities $S(24~{\mu}m){\sim}0.14-0.55$ mJy and at $z=1.75-2.40$. The sample can be considered luminosity-selected, since all of the sources satisfying these selection criteria are included.
Given the redshift of the sources, their 24~$\mu$m flux densities translate roughly into infrared luminosities around $10^{12}L_{\odot}$. We excluded two objects, originally defined as luminous IR galaxies (LIRGs; $L_{IR}>10^{11}L_{\odot}$) by F10 (L5511 and L6211) and subsequently included in the ULIRG class thanks to the new IRS redshift measurements, since they would be the only two sources outside the {\it HST\/}\ ACS area (L5511 is also outside the {\it Herschel\/}\ area). The sample benefits from the large amount of information available for the CDF-S, from multi-band photometry to the recent {\it Spitzer\/}\ IRS spectroscopy. The IRS observations were performed at low resolution in the observed 14-35~\hbox{$\mu$m}\ wavelength regime, i.e. sampling at $z\sim2$ the important rest-frame PAH features at 6.2~\hbox{$\mu$m}\ and 7.7~\hbox{$\mu$m}, and the 9.7~\hbox{$\mu$m}\ silicate feature (see F10). As reported above, in this work we also use recent far-IR data obtained with the {\it Herschel\/}\ satellite, consisting of PACS data at 70, 100 and 160~${\mu}$m from the PEP survey \citep{2011A&A...532A..90L}. We use the PACS blind catalogue v1.3 down to 3$\sigma$ limits of 1.2, 1.2 and 2.0~mJy at 70, 100 and 160 ${\mu}$m, respectively. {\it Herschel\/}\ data have been matched to 24~\hbox{$\mu$m}\ {\it Spitzer\/}\ MIPS sources \citep{2009A&A...496...57M} using the likelihood ratio technique (\citealt{1992MNRAS.259..413S}; \citealt{2001Ap&SS.276..957C}), as described in \cite{2011A&A...532A..49B}. Shorter wavelength information has been included by matching the 24~\hbox{$\mu$m}\ sources to the multi-band (from UV to {\it Spitzer\/}\ IRAC bands) GOODS-MUSIC photometric catalogue \citep{2009yCat..35040751S}. In particular, we cross-correlated the 24~\hbox{$\mu$m}\ selected ULIRGs with the PEP catalogue, considering the positions of the 24~\hbox{$\mu$m}\ sources already associated with the PEP ones. Among the 24 ULIRGs, 21 sources have counterparts in PACS.
For the 3 sources undetected by {\it Herschel\/}\ (U5050, U5152, and U5153), we consider conservative 5$\sigma$ upper limits (2.2, 2.0 and 3.0 mJy at 70, 100 and 160~\hbox{$\mu$m}, respectively). For the PACS detections, the typical separation between the 24~\hbox{$\mu$m}\ and PEP sources is less than 1\arcsec, with the exception of U5805 and U16526 (${\sim}$4\arcsec). Visual inspection of the far-IR images shows that these sources appear as blends of more than one 24~${\mu}$m source. In these cases, the PACS flux densities are considered as upper limits. Photometry at 16~\hbox{$\mu$m}, obtained by F10 from the IRS data, has also been included. Finally, we consider the recently published {\it Chandra\/}\ 4~Ms point-like source catalogue in the CDF-S (\citealt{2011ApJS..195...10X}). Eight of the 24 sources have an \hbox{X-ray}\ counterpart within 1\arcsec. Table~\ref{sample_prop} lists the source names, redshifts from optical and IRS spectroscopy (see F10), source IDs in the GOODS-MUSIC catalogue, and flux densities with associated errors from the mid-IR (16~\hbox{$\mu$m}) to the far-IR (160~${\mu}$m).
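As a rough check of the luminosity scale quoted above (24~$\mu$m flux densities of a few hundred $\mu$Jy at $z\approx2$ corresponding to $L_{IR}\sim10^{12}L_{\odot}$), the luminosity distance in the adopted concordance cosmology can be integrated directly. The 0.3~mJy flux density and the bolometric correction "of a few" used below are illustrative numbers, not values taken from the catalogue.

```python
import math

# Luminosity distance in the adopted concordance cosmology
# (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7, flat Universe).
C_KM_S = 299792.458
H0, OMEGA_M, OMEGA_L = 70.0, 0.3, 0.7

def e_of_z(z):
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def lum_dist_mpc(z, n=20000):
    """D_L = (1+z) (c/H0) int_0^z dz'/E(z'), trapezoidal rule."""
    h = z / n
    s = 0.5 * (1.0 + 1.0 / e_of_z(z)) + sum(1.0 / e_of_z(i * h) for i in range(1, n))
    return (1.0 + z) * (C_KM_S / H0) * h * s

d_l = lum_dist_mpc(2.0)                     # ~1.55e4 Mpc at z = 2

# Monochromatic nu*L_nu at observed 24 micron for an illustrative 0.3 mJy:
MPC_CM, L_SUN = 3.0857e24, 3.828e33         # cm per Mpc, erg/s
s_nu = 0.3e-26                              # 0.3 mJy in erg/s/cm^2/Hz
nu_obs = 2.998e10 / 24.0e-4                 # Hz at 24 micron
nu_l_nu = nu_obs * s_nu * 4.0 * math.pi * (d_l * MPC_CM) ** 2 / L_SUN
# nu_l_nu ~ 3e11 L_sun; a template-dependent bolometric correction of a
# few then brings L_IR to ~1e12 L_sun, consistent with the text.
```

The distance alone thus fixes the order of magnitude; the precise $L_{IR}$ values of course come from the full SED fits described below in the text.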
\begin{table*}
\begin{center}
\caption{The sample: multi-band information}
\label{sample_prop}
\begin{tabular}{lcccccccc}\hline\hline
Name & $z_{opt}$ & $z_{IRS}$ & ID$_{MUSIC}$ & S$_{16{\mu}m}$ & S$_{24{\mu}m}$ & S$_{70{\mu}m}$ & S$_{100{\mu}m}$ & S$_{160{\mu}m}$ \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\
\hline
U428 & $-$1.664 & 1.783 & 8053 & 65 $\pm$ 3 & 289 $\pm$ 5 & 1270 $\pm$ 380 & 2870 $\pm$ 450 & $<3300$ \\
U4367 & $-$1.762 & 1.624 & 3723 & 101 $\pm$ 8 & 152 $\pm$ 4 & $<1800$ & $<1900$ & 2270 $\pm$ 530 \\
U4451 & $-$1.684 & 1.875 & 3689 & & 199 $\pm$ 4 & $<1800$ & 2000 $\pm$ 380 & 5110 $\pm$ 550 \\
U4499 & $-$1.909 & 1.956 & 70361 & $<35$ & 167 $\pm$ 3 & $<1800$ & $<1900$ & 6990 $\pm$ 580 \\
U4631 & 1.896 & 1.841 & 7087 & $<21$ & 275 $\pm$ 5 & $<1800$ & $<1900$ & 4010 $\pm$ 560 \\
U4639 & 2.130 & 2.112 & 7553 & $<14$ & 213 $\pm$ 4 & $<1800$ & $<1900$ & 3570 $\pm$ 550 \\
U4642 & $-$1.748 & 1.898 & 8217 & 65 $\pm$ 9 & 242 $\pm$ 5 & $<1800$ & 1150 $\pm$ 360 & $<3300$ \\
U4812 & 1.910 & 1.930 & 13175 & $<53$ & 295 $\pm$ 6 & 2590 $\pm$ 410 & 10000 $\pm$ 380 & 24830 $\pm$ 730 \\
U4950 & 2.291 & 2.312 & 15260 & 453 $\pm$ 8 & 557 $\pm$ 6 & $<1800$ & 2060 $\pm$ 420 & $<3300$ \\
U4958 & 2.145 & 2.118 & 15483 & 109 $\pm$ 10 & 232 $\pm$ 4 & 1360 $\pm$ 410 & 2690 $\pm$ 380 & 3720 $\pm$ 550 \\
U5050 & $-$1.720 & 1.938 & 13758 & 51 $\pm$ 7 & 197 $\pm$ 4 & $<1800$ & $<1900$ & $<3300$ \\
U5059 & $-$1.543 & 1.769 & 13887 & $<37$ & 272 $\pm$ 5 & 1450 $\pm$ 380 & 1750 $\pm$ 360 & 4400 $\pm$ 610 \\
U5150 & $-$1.738 & 1.898 & 15083 & 65 $\pm$ 11 & 277 $\pm$ 4 & $<1800$ & 4350 $\pm$ 440 & 5690 $\pm$ 700 \\
U5152 & $-$1.888 & 1.794 & 70066 & 66 $\pm$ 9 & 267 $\pm$ 8 & $<1800$ & $<1900$ & $<3300$ \\
U5153 & $-$2.030 & 2.442 & 70054 & $<34$ & 166 $\pm$ 3 & $<1800$ & $<1900$ & $<3300$ \\
U5632 & 1.998 & 2.016 & 9261 & 83 $\pm$ 2 & 427 $\pm$ 5 & 2290 $\pm$ 530 & 4250 $\pm$ 360 & 11470 $\pm$ 560 \\
U5652 & 1.616 & 1.618 & 6758 & 154 $\pm$ 9 & 343 $\pm$ 4 & 1600 $\pm$ 380 & 6920 $\pm$ 380 & 17220 $\pm$ 740 \\
U5775 & $-$1.779 & 1.897 & 9361 & $<31$ & 164 $\pm$ 4 & $<1800$ & 1440 $\pm$ 360 & 3930 $\pm$ 670 \\
U5795 & $-$1.524 & 1.703 & 11201 & $<31$ & 250 $\pm$ 4 & $<1800$ & $<1900$ & 5030 $\pm$ 670 \\
U5801 & $-$1.642 & 1.841 & 12003 & $<35$ & 186 $\pm$ 4 & $<1800$ & $<1900$ & 2310 $\pm$ 550 \\
U5805 & $-$2.093 & 2.073 & 12229 & 68 $\pm$ 6 & 172 $\pm$ 4 & $<1800$ & 1910$^{a}$ $\pm$ 360 & 5100$^{a}$ $\pm$ 670 \\
U5829 & $-$1.597 & 1.742 & 9338 & 57 $\pm$ 5 & 185 $\pm$ 4 & $<1800$ & 1920 $\pm$ 380 & 4320 $\pm$ 550 \\
U5877 & $-$1.708 & 1.886 & 13250 & 179 $\pm$ 8 & 364 $\pm$ 5 & 4600 $\pm$ 420 & 7460 $\pm$ 440 & 11570 $\pm$ 870 \\
U16526 & $-$1.718 & 1.749 & 3420 & 69 $\pm$ 4 & 306 $\pm$ 9 & 4130$^{a}$ $\pm$ 920 & 8600$^{a}$ $\pm$ 390 & 15970$^{a}$ $\pm$ 760 \\
\hline
\end{tabular}
\begin{minipage}[h]{13.8cm}
\footnotesize Notes --- (1) Source name (F10); (2) optical redshift (negative and positive values refer to photometric and spectroscopic redshifts, respectively; see F10 for details); (3) redshift derived from IRS spectroscopy (F10); (4) ID from the v2 GOODS-MUSIC catalogue (\citealt{2009yCat..35040751S}); (5) 16~\hbox{$\mu$m}\ flux density from F10, in $\mu$Jy; (6) 24~\hbox{$\mu$m}\ flux density in $\mu$Jy (\citealt{2009A&A...496...57M}); (7), (8), (9) {\it Herschel\/}\ PEP flux densities in $\mu$Jy (\citealt{2011A&A...532A..90L}). $^{a}$ indicates a possible contribution from a nearby source.
\end{minipage}
\end{center}
\end{table*}

\section{SED decomposition}
\label{modeling_sec}

The IR energy budget of a galaxy can be mainly ascribed to stellar photospheric emission, star formation and accretion processes; to estimate the relative importance of these three processes, a proper SED decomposition should be carried out. Disentangling the different contributions to the total SED is becoming more and more effective with the advent of the {\it Spitzer\/}\ and {\it Herschel\/}\ satellites.
In this regard, many studies have been performed to compare the full range of observed photometric data with the expectations from a host-galaxy component and models for the circum-nuclear dust emission (e.g., \citealt{2008ApJ...675..960P}; \citealt{2008MNRAS.386.1252H}; \citealt{2009MNRAS.395.2189V}; \citealt{2010A&A...517A..11P}). Here, together with the full-band photometric SED, we benefit also from the IRS spectra, which we combine with the photometric datapoints (Sec.~\ref{fitting_sec}; see also \citealt{2011MNRAS.414.1082M}, \citealt{2011ApJ...736...82A} for examples of combined photometric/spectroscopic data analysis). The IRS spectra provide an important diagnostic, sampling the rest-frame mid-IR spectral range (${\sim}5-12~\mu$m for our sample), where the difference between starbursts and AGN is strongest. In this wavelength range, starburst galaxies are generally characterized by prominent polycyclic aromatic hydrocarbon (PAH) features and a weak 10~\hbox{$\mu$m}\ continuum, whereas AGN display weak or no PAH features plus a strong continuum (e.g. \citealt{2000A&A...359..887L}).

\subsection{AGN and stellar components}
\label{torus_comp_sec}

We have decomposed the observed SEDs using three distinct components: stars, having the bulk of their emission in the optical/near-IR; hot dust, mainly heated by the UV/optical emission due to gas accreting onto the super-massive black hole (SMBH), whose emission peaks between a few and a few tens of microns; and cold dust, principally heated by star formation (we refer to \citealt{2010A&A...517A..11P}, but see also \citealt{2008MNRAS.386.1252H}, for a detailed description of the properties of the AGN and host-galaxy (stars$+$cold dust) components). Here we report only the most important issues concerning this analysis, with the AGN component being the main focus.
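The logic of the three-component decomposition can be illustrated with a toy example: fixed spectral shapes for the stellar, AGN (hot dust) and starburst (cold dust) components, with only their normalizations fitted to the observed SED by weighted least squares. The "templates" below are caricatures, not the libraries actually used in this work.

```python
import numpy as np

# Toy three-component decomposition: log-Gaussian stand-ins peaking near
# 1, 10 and 100 micron mimic the stellar, AGN and starburst components.
lam = np.logspace(-0.5, 2.5, 60)                           # wavelength, micron
stars = np.exp(-0.5 * (np.log10(lam / 1.0) / 0.3) ** 2)    # peaks near 1 um
agn = np.exp(-0.5 * (np.log10(lam / 10.0) / 0.3) ** 2)     # peaks near 10 um
sb = np.exp(-0.5 * (np.log10(lam / 100.0) / 0.3) ** 2)     # peaks near 100 um
templates = np.vstack([stars, agn, sb])

def decompose(flux, sigma):
    """Weighted least-squares normalizations of the three components."""
    A = templates.T / sigma[:, None]
    coeff, *_ = np.linalg.lstsq(A, flux / sigma, rcond=None)
    return coeff

# Synthetic "observed" SED with known normalizations:
rng = np.random.default_rng(0)
true = np.array([1.0, 0.3, 2.0])                           # stars, AGN, SB
sigma = np.full(lam.size, 0.02)
obs = true @ templates + rng.normal(0.0, 0.02, lam.size)
fit = decompose(obs, sigma)                                # recovers ~[1.0, 0.3, 2.0]
```

In the actual analysis the AGN template itself depends on several shape parameters, so the minimization runs over the whole grid of torus models rather than reducing to a single linear solve.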
The stellar component has been modelled as the sum of simple stellar population (SSP) models of solar metallicity and ages ranging from $\approx$1~Myr to 2.3~Gyr, which corresponds to the time elapsed between $z=6$ (the redshift assumed for the first stars to form) and $z\sim{2}$ in the adopted cosmology. A \cite{1955VA......1..283S} initial mass function (IMF), with masses in the range 0.15--120 M$_{\odot}$, is assumed. The SSP spectra have been weighted by a Schmidt-like law of star formation (see \citealt{2004A&A...418..913B}):
\begin{equation}
SFR(t)=\frac{T_{G}-t}{T_{G}}{\times}\exp\left({-\frac{T_{G}-t}{T_{G}{\tau}_{sf}}}\right)
\end{equation}
\noindent where $T_{G}$ is the age of the galaxy (i.e. of the oldest SSP) and $\tau_{sf}$ is the duration of the burst in units of the age of the oldest SSP. A common value of extinction is applied to stars of all ages, and a \cite{2000ApJ...533..682C} attenuation law has been adopted ($R_{V}=4.05$). To account for the emission above 24~\hbox{$\mu$m}, a component coming from colder, diffuse dust, likely heated by star-formation processes, has been included in the fitting procedure. It is represented by templates of well-studied starburst galaxies (i.e. Arp 220, M82, M83, NGC 1482, NGC 4102, NGC 5253 and NGC 7714) and five additional host-galaxy average templates obtained recently by \cite{2011MNRAS.414.1082M} from the starburst templates of \cite{2006ApJ...653.1129B}. This set of five templates has been included since they properly reproduce the relative PAH strengths in the average IRS spectra of our ULIRG sample (see F10). Regarding the AGN component, we have used the radiative transfer code of \citet[hereafter F06]{2006MNRAS.366..767F}. This model follows the formalism developed by different authors (e.g., \citealt{1992ApJ...401...99P}; \citealt{1994MNRAS.268..235G}; \citealt{1995MNRAS.273..649E}), where the IR emission in AGN originates in the dusty gas around the SMBH, with a ``flared disk'' geometry and a ``smooth'' dust distribution.
Recently, this model has been widely used and found to successfully reproduce the photometric data, including the 9.7~\hbox{$\mu$m}\ silicate feature in emission observed for type-I AGN (e.g., \citealt{2005A&A...436L...5S}). Recent high-resolution, interferometric mid-IR observations of nearby AGN (e.g., \citealt{2004Natur.429...47J}) have confirmed the presence of a geometrically thick, torus-like dust distribution on pc scales; this torus is likely irregular or ``clumpy''. Over the last decade, many codes have been developed to deal with clumpy dust distributions (e.g., \citealt{2002ApJ...570L...9N}; \citealt{2008ApJ...685..160N}; \citealt{2010A&A...523A..27H}). According to \cite{2005A&A...436...47D}, the two models (smooth and clumpy) do not differ significantly in reproducing sparse photometric datapoints (see also \citealt{2008NewAR..52..274E}). The main difference is in the strength of the silicate feature observed in absorption in objects seen edge-on, which is, on average, weaker for clumpy models with the same global torus parameters. The comparison between the two models has been applied only to a few samples including IRS data, and the results are not conclusive. Recently, \cite{2011MNRAS.416.2068V} made a comparison of the smooth vs. clumpy models for the matter responsible for reprocessing the nuclear emission of a hyper-luminous absorbed \hbox{X-ray}\ quasar at $z{\sim}0.442$ (IRAS~09104$+$4109), for which both photometric and spectroscopic (IRS) data were available. While smooth solutions (the F06 model) are able to reproduce the complete dataset, clumpy models (\citealt{2008ApJ...685..160N}) have problems in reproducing the source photometry and spectroscopy at the same time. In \cite{2011MNRAS.414.1082M}, smooth vs. clumpy models are tested against a sample of ${\sim}$10 local {\it Swift\/}/BAT AGN with prominent emission at {\it IRAS\/}\ wavelengths.
In this case, the authors claim that clumpy solutions reproduce the data better; the smooth model parameters produce a much wider range of SED solutions, i.e. this model seems to yield overly degenerate solutions. A further, more complete and extensive comparison of smooth vs. clumpy solutions is presented in Feltre et al. (submitted), where the theoretical SED shapes and the detailed spectral features of the two classes of models (i.e. F06 for the ``smooth distribution'' and \citealt{2008ApJ...685..160N} for the ``clumpy distribution'') are compared using a large compilation of AGN with {\it IRS\/}\ spectroscopic data. Overall, results similar to those obtained by \cite{2005A&A...436...47D} are derived, i.e. SED fitting applied to both photometric and spectroscopic data is not a sufficiently reliable tool to discriminate between the smooth and the clumpy distributions. We remind the reader that in the present paper we are focusing on the torus ``global'' energy output (i.e., the relevance of accretion-related emission with respect to the total source SED), not on the details of the torus structure and geometry, so the choice of the adopted model does not critically influence the results; as shown by Feltre et al. (submitted), the two models provide consistent results in terms of energetics.

\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig1a_pozzi.ps}
\caption{{\bf a)} Rest-frame broad-band datapoints (red dots) compared with the best-fit model obtained as the sum (solid black line) of a stellar (red dotted line), an AGN (blue dashed line) and a starburst component (green dot-dashed line). IRS spectra are shown as magenta lines.
The area filled with diagonal lines represents the AGN solutions at the 3$\sigma$ confidence level.}
\label{figure_sed}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig1b_pozzi.ps}
\caption{{\bf b)} As in Fig.~\ref{figure_sed}a.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=90,width=1.1\textwidth]{fig2_pozzi.ps}
\caption{Rest-frame SEDs and datapoints, as in Fig.~\ref{figure_sed}, zoomed into the 2--20~\hbox{$\mu$m}\ wavelength range for the nine sources where an AGN component is required at the 3$\sigma$ confidence level by the SED-fitting analysis.}
\label{figure_sed_2_20_um}
\end{figure*}

\subsection{SED-fitting procedure}
\label{fitting_sec}

In most previous works using the F06 code, only photometric datapoints were used, and the quality of the fitting solutions was estimated using a standard $\chi^{2}$ minimization technique, where the observed values are the photometric flux densities (from optical to mid-IR/far-IR) and the model values are the ``synthetic'' flux densities obtained by convolving the sum of the stellar, AGN, and starburst components with the filter response curves (see \citealt{2008MNRAS.386.1252H}). In \cite{2011MNRAS.416.2068V}, the spectroscopic information was taken into account {\it a posteriori} in order to discriminate among different photometric best-fitting solutions. Here we propose a first attempt to simultaneously fit the photometric and spectroscopic datapoints using smooth torus models, by transforming the mid-IR 14--35~\hbox{$\mu$m}\ observed-frame spectra into ``narrow-band'' photometric points of 1~\hbox{$\mu$m}\ band-width in the observed frame (i.e., subdividing the spectral transmission curve into sub-units), and estimating the corresponding fluxes and uncertainties using the ordinary procedures of filter convolution and error propagation.
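A simplified version of this rebinning step can be sketched as follows, with top-hat 1~$\mu$m bands in place of the actual transmission curves and inverse-variance error propagation; the flat spectrum below is synthetic.

```python
import numpy as np

# Sketch: rebin an observed-frame mid-IR spectrum into "narrow-band"
# photometric points of 1-micron width, propagating the per-pixel errors.
# Top-hat bands are assumed here instead of real transmission curves.

def spectrum_to_narrowband(wave, flux, err, width=1.0):
    edges = np.arange(wave.min(), wave.max() + width, width)
    centers, f_band, e_band = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (wave >= lo) & (wave < hi)
        if not sel.any():
            continue
        w = 1.0 / err[sel] ** 2                  # inverse-variance weights
        f_band.append(np.sum(w * flux[sel]) / np.sum(w))
        e_band.append(np.sqrt(1.0 / np.sum(w)))  # propagated band uncertainty
        centers.append(0.5 * (lo + hi))
    return np.array(centers), np.array(f_band), np.array(e_band)

# A flat 1 mJy spectrum over the observed 14-35 micron range,
# with 0.2 mJy per-pixel noise:
rng = np.random.default_rng(1)
wave = np.linspace(14.0, 35.0, 420)
flux = np.ones_like(wave) + rng.normal(0.0, 0.2, wave.size)
err = np.full_like(wave, 0.2)
lam_b, f_b, e_b = spectrum_to_narrowband(wave, flux, err)
```

Each band averages about 20 pixels here, so the per-band uncertainty drops by a factor ${\sim}\sqrt{20}$ with respect to the per-pixel one, which is the trade-off between signal-to-noise ratio and spectral resolution discussed in the text.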
The ``new'' filters have been chosen to achieve, in each wavelength bin, a sufficient signal-to-noise ratio without losing too much in terms of spectral resolution, which is needed to reproduce the 9.7~${\mu}$m feature, when present. To take into account slit-loss effects, the IRS data have been normalized to the 24~\hbox{$\mu$m}\ flux density, deriving normalization factors between $\sim$1.2 and 1.8. We would like to remark here that the F06 code does not necessarily impose the presence of an AGN component, i.e., solutions with only stellar emission are possible, as described in $\S$\ref{agnfraction_sec_fromir}. Furthermore, the relative normalization of the optical/near-IR component and the far-IR emission of the host galaxy is free, given the extremely complex physical relation between the two (e.g. \citealt{2010A&A...518L..39B}). Overall, the SED-fitting procedure ends up with 11 free parameters: six are related to the AGN, two to the stellar component, and one to the starburst. The further two free parameters are the normalizations of the stellar and of the starburst components; the torus normalization is estimated by difference, i.e., it represents the scaling factor of the torus model capable of accounting for the data-to-model residuals once the stellar components have already been included. Here we briefly recall the parameters involved in our SED-fitting analysis, and refer to F06 and Feltre et al. (submitted) for a detailed description of the AGN model parameters. The six parameters related to the torus are: the ratio $R_{max}/R_{min}$ between the outer and the inner radius of the torus (the latter being defined by the sublimation temperature of the dust grains); the torus opening angle $\Theta$; the optical depth $\tau$ at 9.7~\hbox{$\mu$m}\ ($\tau_{9.7{\mu}m}$); the inclination $\theta$ of the line of sight with respect to the equatorial plane; and two further parameters, $\gamma$ and $\alpha$, describing the law for the spatial distribution of the dust.
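Since the torus shape parameters enter the model non-linearly, the fit is in practice a grid search: for each tabulated torus model the normalization can be obtained analytically from the residuals, and the global $\chi^{2}$ minimum is kept. A schematic with a hypothetical toy grid (not the actual torus library):

```python
import numpy as np

# Schematic grid-search over tabulated torus shapes: for each template,
# the normalization minimizing chi^2 follows analytically, and the best
# (shape, normalization) pair is retained.  Grid and host are toy inputs.

def grid_fit(torus_grid, host, flux, err):
    """Return (chi2_min, best template index, best normalization)."""
    resid = flux - host
    best = (np.inf, -1, 0.0)
    for i, t in enumerate(torus_grid):
        # d(chi^2)/da = 0 gives the optimal (clipped non-negative) scale:
        a = max(0.0, np.sum(resid * t / err**2) / np.sum(t**2 / err**2))
        chi2 = np.sum(((flux - host - a * t) / err) ** 2)
        if chi2 < best[0]:
            best = (chi2, i, a)
    return best

lam = np.linspace(1.0, 30.0, 50)
torus_grid = [np.exp(-0.5 * ((lam - c) / 3.0) ** 2) for c in (5, 8, 11, 14, 17)]
host = np.full(lam.size, 0.1)
err = np.full(lam.size, 0.02)
flux = host + 0.5 * torus_grid[2]           # noise-free synthetic SED
chi2_min, idx, norm = grid_fit(torus_grid, host, flux, err)
```

On this noise-free input the search recovers the injected template and its normalization exactly; with real data the $\chi^{2}$ surface over the grid also yields the confidence regions discussed below in the text.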
In the currently adopted version (see Feltre et al., submitted), the ``smooth'' torus database contains 2368 AGN models. In \cite{2010A&A...517A..11P} (see also \citealt{2008MNRAS.386.1252H}), the degeneracies related to the six torus parameters are extensively described: in fact, various combinations of parameter values are equally able to reproduce a given set of observed data points. In particular, the optical depth $\tau_{9.7{\mu}m}$ has the largest effect on the fit. The infrared luminosity, provided by the SED-fitting code, is robustly determined (within $\sim$0.1 dex) and appears ``solid'' against the parameter degeneracies. Concerning the parameters associated with the other components, two are related to the stellar emission: $\tau_{sf}$, i.e. the parameter of the Schmidt-like law for the star formation, and the reddening $E(B-V)$. Regarding the starburst component, the free parameter is related to the choice of the best-fitting template from the starburst library. Given this number of free parameters, the acceptable solutions, within the 1(3)$\sigma$ confidence levels, are derived, for each source, by considering the parameter regions encompassing $\chi^{2}_{min}$+(12.65, 28.5), respectively, for 11 free parameters (see \citealt{1976ApJ...208..177L}).

\begin{figure}
\includegraphics[width=8cm]{fig3_pozzi.ps}
\caption{Fractional contribution of the AGN component in the 2--6~\hbox{$\mu$m}\ ({\it top panel}), 5--30~\hbox{$\mu$m}\ ({\it middle panel}), and 8--1000~\hbox{$\mu$m}\ ({\it bottom panel}) ranges. The error bars account for the AGN model dispersion at the 1$\sigma$ confidence level.
Red points indicate the sources where an AGN component is detected at the 3$\sigma$ confidence level from the SED fitting, and triangles those with \hbox{X-ray}\ emission pointing clearly towards an AGN classification.}
\label{figure_agn_contr}
\end{figure}

\section{AGN fraction}
\label{agnfraction_sec}

\subsection{SED-fitting results}
\label{agnfraction_sec_fromir}

In Fig.~\ref{figure_sed}a,b the observed UV--160~\hbox{$\mu$m}\ datapoints (filled red points) are reported along with the best-fitting solutions (black lines) and the range of AGN models within the 3$\sigma$ confidence level (filled region). All the sources need a host-galaxy (red dotted line) and a starburst component (green dot-dashed line). The host galaxy dominates the UV--8~\hbox{$\mu$m}\ photometry (at $z\sim2$, the IRAC 8~\hbox{$\mu$m}\ band samples the 2.7~\hbox{$\mu$m}\ rest-frame emission), while the starburst component dominates at longer wavelengths. For the three sources with no PACS detection (U5050, U5152, and U5153), the starburst component is required by the SED-fitting procedure to reproduce the mid-IR spectral data, although its shape is not well constrained. Regarding the AGN component, our goal is to check whether its presence is required by the data and, if so, to estimate its contribution to the IR luminosity. The SED-fitting procedure found that for all but one source (U428) the presence of an AGN is consistent with the photometry, although for only nine sources (${\sim}35\%$ of the sample: U4639, U4950, U4958, U5150, U5152, U5153, U5652, U5805, and U5877) is its presence significant at the 3$\sigma$ confidence level (i.e., solutions with no torus emission have ${\chi}^{2}{\ge}{\chi}^{2}_{min}$+28.5).
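The $\Delta\chi^{2}$ thresholds quoted in the text (12.65 and 28.5 for the 1$\sigma$ and 3$\sigma$ levels with 11 free parameters) can be reproduced from the $\chi^{2}$ distribution with a self-contained sketch, inverting the cumulative distribution numerically:

```python
import math

# Reproduce the Delta-chi^2 thresholds used in the text: the 68.27% and
# 99.73% quantiles of a chi^2 distribution with 11 degrees of freedom,
# one per free parameter (Lampton et al. 1976).

def chi2_cdf(x, k):
    """Regularized lower incomplete gamma P(k/2, x/2) via its power series."""
    s, t = k / 2.0, x / 2.0
    term = math.exp(s * math.log(t) - t - math.lgamma(s + 1.0))
    total, n = term, 1
    while term > 1e-16 * total:
        term *= t / (s + n)
        total += term
        n += 1
    return total

def delta_chi2(conf, k):
    """Invert the chi^2 CDF by bisection."""
    lo, hi = 0.0, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if chi2_cdf(mid, k) < conf else (lo, mid)
    return 0.5 * (lo + hi)

one_sigma = delta_chi2(0.6827, 11)    # ~12.6
three_sigma = delta_chi2(0.9973, 11)  # ~28.5
```

With SciPy available, the same two numbers follow directly from `scipy.stats.chi2.ppf`; the hand-rolled series above just keeps the check dependency-free.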
However, in these sources as well, the AGN component is far from dominating the whole spectral range, and emerges only in the narrow 2--10~\hbox{$\mu$m}\ range, where the stellar emission from the host galaxy has a minimum while the warm dust heated by the AGN manifests itself (e.g., \citealt{2000A&A...359..887L}; \citealt{2010A&A...518L..27G}). In Fig.~\ref{figure_sed_2_20_um} we report a zoom of the SED over the 2--20~$\mu$m range in order to visualize the AGN emission for all of the nine sources where such a component is required at the 3$\sigma$ confidence level. On visual inspection, the presence of a nuclear component can be inferred for sources with a power-law SED (i.e., U4950 and U5877), for sources where the stellar component alone is not able to reproduce the datapoints around 3~$\mu$m (i.e., U4958, U5153) or for sources where the starburst component, normalized to the far-IR datapoints, has already declined around 5--6~$\mu$m (i.e. U5152, U5153, and U5805). Finally, there are a few cases (i.e., U4639) where visual inspection of the SED decomposition does not strongly suggest an AGN component, which is nevertheless required by the SED-fitting analysis. The likely AGN in these sources is either obscured or of low luminosity, although a combination of both effects is plausible as well. While our analysis is able to place constraints on the presence of an AGN, we cannot draw any firm conclusion on either the obscured or the low-luminosity hypothesis for the AGN emission from mid-IR data.
Despite the uncertainties affecting the estimate of the gas column density $N_{H}$ derived from the dust optical depths (e.g., \citealt{2001A&A...365...28M}), what we observe is that for all of the sources with an AGN component, a certain level of obscuration\footnote{The column density has been derived from the optical depth at 9.7~\hbox{$\mu$m}, using the Galactic extinction law (\citealt{1989ApJ...345..245C}) and dust-to-gas ratio (\citealt{1978ApJ...224..132B}).} ($10^{22}$\lower.5ex\hbox{\ltsima}$N_{H}$\lower.5ex\hbox{\ltsima}$10^{24}$~cm$^{-2}$) is required. In particular, the three sources where the presence of an AGN is particularly evident in the mid-IR regime from our SED decomposition (U4950, U4958, and U5877) -- as already pointed out by F10 on the basis of the presence of a powerlaw mid-IR SED (U4950 and U5877), optical emission lines (U4958, with an apparently broad C\ {\sc iii}] feature) and an optically unresolved nucleus (U4950 and U5877) -- also show a relatively bright \hbox{X-ray}\ counterpart (see Sec.~\ref{xray_section}). The optical depth at 9.7~\hbox{$\mu$m}\ of the best-fitting solutions corresponds to $N_{H}\sim{(2-7)}{\times}10^{22}$~cm$^{-2}$ (i.e., in the Compton-thin regime; $<N_{H}>=6{\times}10^{22}$~cm$^{-2}$ is obtained once the solutions at the 3$\sigma$ level are considered). Of the remaining six sources, one (U4639) has an association with a relatively weak \hbox{X-ray}\ source, while for the others only an upper limit to the \hbox{X-ray}\ emission can be placed (see Table~\ref{xray_properties}). For these six sources, the SED-fitting procedure indicates an obscuration still in the Compton-thin regime, but higher (up to $4{\times}10^{23}$~cm$^{-2}$, with $<N_{H}>=2{\times}10^{23}$~cm$^{-2}$ when the solutions at the 3$\sigma$ level are taken into account) than for the previous three sources.
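The $\tau_{9.7\mu m}\rightarrow N_{H}$ conversion described in the footnote can be sketched as follows. The $A_{V}/\tau_{9.7}$ ratio used here ($\approx18$) is our illustrative assumption, while the gas-to-dust ratio and $R_{V}=3.1$ correspond to the cited Bohlin et al. (1978) and Galactic extinction-law values; the paper's exact calibration may differ:

```python
# Hedged sketch: convert the 9.7-micron silicate optical depth into an
# equivalent hydrogen column density via the Galactic extinction curve
# and dust-to-gas ratio. AV_PER_TAU97 is an assumed, illustrative value.
AV_PER_TAU97 = 18.0     # assumed V-band extinction per unit tau_9.7
NH_PER_EBV = 5.8e21     # cm^-2 mag^-1, Bohlin et al. (1978)
RV = 3.1                # Galactic A_V / E(B-V)

def nh_from_tau97(tau97):
    """Column density (cm^-2) implied by the torus tau_9.7."""
    a_v = AV_PER_TAU97 * tau97
    ebv = a_v / RV
    return NH_PER_EBV * ebv

for tau in (0.6, 2.0):
    print(f"tau_9.7 = {tau}: N_H ~ {nh_from_tau97(tau):.1e} cm^-2")
```

With these illustrative constants, $\tau_{9.7}\approx0.6$--2 maps onto the $(2-7)\times10^{22}$~cm$^{-2}$ interval quoted above, so the numbers are mutually consistent.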
Further insights on the properties of these sources, hence on the nature of their broad-band emission, will be discussed using \hbox{X-ray}\ diagnostics (see Sec.~\ref{xray_section}). Turning now to source energetics, for the nine objects with an AGN component from the mid-IR, we computed the total and nuclear luminosity in three different spectral ranges: the whole (8--1000~$\mu$m) IR range, the mid-IR (5--30~$\mu$m) range (partially sampled by IRS), and the narrow 2--6~$\mu$m wavelength interval. We find that the 8--1000~$\mu$m luminosity is always completely dominated by star-formation emission, the AGN nuclear contribution being $\lower.5ex\hbox{\ltsima}$5$\%$ (see Fig.~\ref{figure_agn_contr}, {\it bottom panel}), and that in only one source (U4950) out of the nine is the nuclear component contribution significant (${\sim}20\%$). Our finding that starburst processes dominate the 8--1000 $\mu$m emission is consistent with the F10 and N12 conclusions. In the 5--30~\hbox{$\mu$m}\ range, where the re-processed emission from the dust surrounding the nuclear source peaks (e.g., \citealt{2004MNRAS.355..973S}), we find a larger AGN contribution (i.e. ${\sim}$25\%; see Fig.~\ref{figure_agn_contr}, {\it middle panel}). As shown in Fig.1a,b, at ${\lambda}{\sim}10$~${\mu}$m the nuclear component typically starts being overwhelmed by the starburst emission. We note that the mid-IR AGN/starburst relative contribution is also discussed by F10, adopting a completely different method than ours (i.e., scaling the SED of Mrk~231 to the 5.8~$\mu$m continuum and fitting the residual SED with the average starburst from \citealt{2006ApJ...653.1129B}). Excluding from their analysis the three sources with the strongest AGN evidence from either \hbox{X-ray}\ or optical data (U4950, U4958, and U5877), F10 found a nuclear fraction of $\sim$20\% (see their Fig.~22, {\it top panel}), which is consistent with our results. 
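The band fractions discussed above (2--6, 5--30, and 8--1000~$\mu$m) amount to integrating the best-fitting AGN and total SEDs over each wavelength interval. A minimal sketch, with placeholder SED arrays rather than the paper's model grids, is:

```python
def band_fraction(wavelengths, l_agn, l_total, lo, hi):
    """Trapezoidal integral of the AGN SED over [lo, hi] (microns),
    divided by the same integral of the total SED."""
    def integrate(y):
        s = 0.0
        for i in range(len(wavelengths) - 1):
            w0, w1 = wavelengths[i], wavelengths[i + 1]
            a, b = max(w0, lo), min(w1, hi)
            if b <= a:
                continue  # segment entirely outside the band
            # linearly interpolate y at the clipped endpoints
            ya = y[i] + (y[i + 1] - y[i]) * (a - w0) / (w1 - w0)
            yb = y[i] + (y[i + 1] - y[i]) * (b - w0) / (w1 - w0)
            s += 0.5 * (ya + yb) * (b - a)
        return s
    return integrate(l_agn) / integrate(l_total)

# Toy example: an AGN contributing a flat 20% of L_lambda at all
# wavelengths yields a 20% fraction in any band.
wl = [2.0, 4.0, 6.0, 10.0, 20.0, 30.0]
agn = [0.2] * len(wl)
total = [1.0] * len(wl)
print(band_fraction(wl, agn, total, 2.0, 6.0))
```

Real SEDs are of course strongly wavelength-dependent, which is why the fraction changes so much between the three bands.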
The only wavelength range where we find that the AGN overcomes the galaxy emission, contributing ${\sim}$60\% of the emission, is the narrow 2--6~${\mu}$m interval (see Fig.~\ref{figure_agn_contr}, {\it top panel}). The power of the near-IR spectral range to detect obscured AGN emission confirms previous results (e.g., \citealt[2010]{2006MNRAS.365..303R} using $L$-band spectroscopy, but see also \cite{2003ApJ...588..199L}, where a weak near-IR excess continuum emission, detected in disk galaxies thanks to ISOPHOT spectral observations, was ascribed to interstellar dust emission at temperatures of ${\sim}10^{3}$~K). The dominance of the AGN emission in the narrow 2--6~${\mu}$m interval does not contradict the N12 conclusion that the mid-IR emission of these sources is dominated by PAH features, once the different spectral ranges are taken into account. In Table~\ref{irx_lum}, we report the total IR luminosities (8--1000~\hbox{$\mu$m}), the AGN fractions over the three wavelength ranges discussed above (for the nine sources with AGN emission detected at least at the 3$\sigma$ confidence level), and both observed and predicted 2--10~keV luminosities for our sample. The latter luminosities have been derived from the 5.8~\hbox{$\mu$m}\ luminosity, using the \cite{2009ApJ...693..447F} correlation (see Sec.~\ref{xray_section} for further details), and only for the nine sources with an AGN component detected at least at 3$\sigma$ confidence (last column of Table~\ref{irx_lum}). The integration of the SED over the 8--1000~\hbox{$\mu$m}\ range confirms the ULIRG classification ($L_{IR}>10^{12}L_{\odot}$), inferred by F10 on the basis of the 24~\hbox{$\mu$m}\ flux densities and redshifts, for 14 out of the 24 sources, the remaining 10 sources showing slightly lower IR luminosities, between 0.5${\times}10^{12}$ and $10^{12}~L_{\odot}$.
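The \cite{2009ApJ...693..447F} conversion used for the predicted 2--10~keV luminosities is $\log L_{2-10keV}=43.57+0.72\,(\log L_{5.8\mu m}-44.2)$, with luminosities in erg~s$^{-1}$; as a one-line sketch:

```python
def log_lx_predicted(log_l58):
    """Predicted rest-frame 2-10 keV log-luminosity (erg/s) from the
    5.8-micron AGN log-luminosity, Fiore et al. (2009) relation."""
    return 43.57 + 0.72 * (log_l58 - 44.2)

print(log_lx_predicted(44.2))   # at the pivot the relation returns 43.57
```

Since the slope is shallower than unity, the predicted X-ray luminosities of the sample cluster in a narrow range ($\log L_{X}^{pred}\simeq43.7$--44.4) even for a spread of mid-IR luminosities.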
The fact that 24~${\mu}$m--based measurements tend to over-estimate the $L_{IR}$ for $z{\sim}2$ sources is consistent with other works based on {\it Herschel}-PEP data (e.g., \citealt{2010A&A...518L..29E}; N12) and stacking methods (e.g., \citealt{2007ApJ...668...45P}). The final IR luminosity range of our sample is 0.5--2.8${\times}10^{12}~L_{\odot}$ ($<L_{IR}>=1.4{\times}10^{12}~L_{\odot}$, with a dispersion $\sigma=7{\times}10^{11}~L_{\odot}$). \subsection{X-ray results} \label{xray_section} \begin{table*} \begin{center} \caption{X-ray properties of the sample of $z\sim2$ IR-luminous galaxies} \label{xray_properties} \footnotesize \begin{tabular}{lcccccccc} \hline\hline Name & XID & F$_{0.5-8~keV}$ & log(L$_{2-10~keV}$) &log(L$_{2-10~keV}$,fit) & $N_{H}$ & Class. X (Xu et al. 11)&Class. X (revised) & Class. SED \\ (1) & (2) & (3) & (4) & (5) & (6) &(7)&(8)&(9)\\ \hline \multicolumn{5}{c}{\sf Sources with an \hbox{X-ray}\ counterpart in the 4~Ms CDF-S catalog} \\ \\ U428 & 579 & 0.07 & 41.9 & - & - & Gal & Gal &Gal.\\ U4639 & 555 & 0.07 & 42.0 & - & - & Gal & Gal &AGN\\ U4642 & 437 & 0.29 & 42.5 & 42.8 &4.5$^{+6.1}_{-4.3}\times{10^{22}}$ & AGN&AGN&Gal.\\ U4950 & 351 & 6.64 & 43.9 & 44.2 & 1.2$^{+0.3}_{-0.2}\times{10^{23}}$&AGN&AGN&AGN$^{\star}$\\ U4958 & 320 & 0.31 & 42.3 & 43.7 & 1.9$^{+1.4}_{-0.7}\times{10^{24}}$& AGN&AGN&AGN$^{\star}$\\ U5632 & 552 & 0.09 & 42.1 & - & - & AGN&Gal.&Gal.\\ U5775 & 360 & 0.05 & 41.8 & - & - & AGN&Gal.&Gal.\\ U5877 & 278 & 3.34 & 43.0 & 44.0 &6.0$^{+2.0}_{-1.5}\times{10^{23}}$ &AGN&AGN&AGN$^{\star}$\\ \hline \multicolumn{5}{c}{\sf Sources without \hbox{X-ray}\ detection in the 4~Ms catalog} \\ \\ U4367 & - & - & $<42.3$ & - & - & - & - &Gal.\\ U4451 & - & - & $<42.2$ & - & - & - & - &Gal.\\ U4499 & - & - & $<42.1$ & - & - & - & - &Gal.\\ U4631 & - & - & $<42.0$ & - & - & - & - &Gal.\\ U4812 & - & - & $<42.4$ & - & - & - & - &Gal.\\ U5050 & - & - & $<42.2$ & - & - & - & - &Gal.\\ U5059 & - & - & $<42.0$ & - & - & - & - &Gal.\\ 
U5150 & - & - & $<42.4$ & - & - & - & - &AGN\\ U5152 & - & - & $<42.5$ & - & - & - & -&AGN\\ U5153 & - & - & $<42.6$ & - & -& - & - &AGN\\ U5652 & - & - & $<41.8$ & - & -& - & - &AGN\\ U5795 & - & - & $<41.8$ & - & -& - & - &Gal.\\ U5801 & - & - & $<41.9$ & - & - & - & - &Gal.\\ U5805 & - & - & $<42.1$ & - & - & - & - &AGN\\ U5829 & - & - & $<41.9$ & - & -& - & - &Gal.\\ U16526 & - & - & $<42.2$ & - & - & - & - &Gal.\\ \hline \hline \end{tabular} \begin{minipage}[l]{0.94\textwidth} \footnotesize Notes --- (1) Source name (F10); (2) XID from \cite{2011ApJS..195...10X}; (3) observed-frame 0.5--8~keV flux in units of 10$^{-15}$~erg~cm$^{-2}$~s$^{-1}$ (\citealt{2011ApJS..195...10X}); errors on the \hbox{X-ray}\ fluxes (hence on the luminosities) vary from less than 10\% for the few sources with most counts up to $\sim$~40\% for the \hbox{X-ray}\ faintest sources; (4) logarithm of the rest-frame 2--10~keV luminosity derived from either the 0.5--8~keV flux or the 0.5--8~keV sensitivity map (upper limits in the lower panel). The flux (or upper limit) is converted into a luminosity by assuming a powerlaw model with $\Gamma$=1.4; (5) logarithm of the rest-frame, absorption-corrected 2--10~keV luminosity, in units of erg~s$^{-1}$; (6) column density. Both (5) and (6) have been derived directly from \hbox{X-ray}\ spectral fitting, which has been limited to the four sources with the highest counting statistics (see Sect.~\ref{xray_section} for details). We note that \hbox{X-ray}\ luminosities -- columns (4) and (5) -- are different because of the different spectral modeling and assumptions (see \citealt{2011ApJS..195...10X} and Sect.~\ref{xray_section}); (7), (8), (9) source classifications from \hbox{X-ray}\ data (\citealt{2011ApJS..195...10X} and current work) and from SED-fitting, respectively. The term ``AGN'' in the source classification based on SED fitting (last column) indicates that the nuclear component is detected at the 3$\sigma$ confidence level. 
$^\star$ indicates the presence of an AGN from optical data (U4958 -- optical emission lines; U4950 and U5877 -- optical morphology, see F10). \end{minipage} \end{center} \end{table*} The 4~Ms \hbox{X-ray}\ source catalog in the CDF-S (\citealt{2011ApJS..195...10X}) provides additional information for the sub-sample of eight matched sources (see Table~\ref{xray_properties}). The depth of the {\it Chandra\/}\ mosaic in the field, though variable across its area, provides constraints down to very faint flux limits ($\sim10^{-17}$~\cgs\ in the 0.5--2~keV band). In the following, we use the \hbox{X-ray}\ information to provide an independent assessment of the AGN presence in our sample of IR-luminous galaxies and, for the sources with the most counts, to characterize their \hbox{X-ray}\ emission. We also provide upper limits to the \hbox{X-ray}\ luminosity of the sources not detected in the 4~Ms CDF-S image. A basic source classification is reported by \cite{2011ApJS..195...10X} (AGN/galaxy/star), where five different classification criteria are adopted to separate AGN from galaxies\footnote{The criteria to classify a source as an AGN are: high \hbox{X-ray}\ luminosity (above $3\times10^{42}$~erg~s$^{-1}$); flat effective photon index ($\Gamma\leq1.0$); \hbox{X-ray}-to-optical flux ratio log(f$_{\rm X}$/f$_{\rm R}$)$>-1$; an \hbox{X-ray}\ luminosity at least three times higher than that possibly due to star formation (see \citealt{2005ApJ...632..736A}); presence of either broad or high-ionisation emission lines in the optical spectra. See $\S$4.4 of \citealt{2011ApJS..195...10X} for details.}; a source is classified as an AGN if at least one of these criteria is satisfied. According to this classification, only two sources of the present sample are classified as galaxies, U428 and U4639, the remaining six being AGN.
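The five criteria listed in the footnote can be paraphrased as a simple boolean check; the thresholds are those quoted there, while the function and argument names are ours for illustration:

```python
def is_xray_agn(lx=None, gamma=None, log_fx_fr=None,
                lx_over_lx_sf=None, agn_optical_lines=False):
    """Sketch of the Xu et al. (2011) AGN/galaxy separation: a source
    is flagged as an AGN if at least one criterion is met. Arguments
    left as None count as 'not measured' for that criterion."""
    criteria = [
        lx is not None and lx > 3e42,                  # high L_X (erg/s)
        gamma is not None and gamma <= 1.0,            # flat photon index
        log_fx_fr is not None and log_fx_fr > -1.0,    # log(f_X/f_R) > -1
        lx_over_lx_sf is not None and lx_over_lx_sf > 3.0,  # L_X >> L_X(SF)
        bool(agn_optical_lines),                       # broad/high-ion. lines
    ]
    return any(criteria)

print(is_xray_agn(lx=1e43))              # luminosity criterion alone
print(is_xray_agn(lx=1e42, gamma=2.0))   # no criterion met
```

The "or" logic explains why a single borderline measurement (e.g., a marginally flat photon index in a low-count spectrum) can drive the classification, which motivates the revision discussed next.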
However, in the following we will provide indications for a ``revision'' of this classification using all of the available \hbox{X-ray}\ information (e.g., spectral properties, luminosities, count distribution). In Table~\ref{xray_properties} the source classifications from \hbox{X-ray}\ data (\citealt{2011ApJS..195...10X} and current work, respectively), along with the results from the SED-fitting analysis (see Sec.~\ref{agnfraction_sec_fromir}), are presented. X-ray luminosities, coupled with \hbox{X-ray}\ spectral analysis (see below), indicate the presence of an AGN in the three sources (U4950, U4958, and U5877) for which the SED fitting analysis and F10 already suggested an AGN. U4642 also has an \hbox{X-ray}\ luminosity typical of AGN emission. In comparison with the 2~Ms {\it Chandra\/}\ data (\citealt{2008ApJS..179...19L}) used by F10, U4958 represents a new AGN detection, although the presence of an active nucleus was already inferred from the featureless mid-IR spectrum and the optical spectrum, showing a broad \mbox{C\ {\sc iii}]} line and strong \mbox{N\ {\sc v}} and \mbox{C\ {\sc iv}} emission lines. All of the \hbox{X-ray}\ matched sources have a full-band (0.5--8~keV) detection. One source (U4958) has an upper limit in the soft band (0.5--2~keV); this result is suggestive of heavy absorption, since {\it Chandra\/}\ has its highest effective area, and hence best sensitivity, in this energy range. Basic \hbox{X-ray}\ spectral analysis (using an absorbed powerlaw with photon index fixed to 1.8, as typically observed in AGN; \citealt{2005A&A...432...15P}) actually confirms the presence of strong, possibly Compton-thick obscuration\footnote{A source is called Compton-thick if its column density $N_{\rm H}>1/\sigma_{\rm T}\sim1.5\times10^{24}$~cm$^{-2}$, where $\sigma_{\rm T}$ is the Thomson cross section.} towards this source (N$_{\rm H}=1.9^{+1.4}_{-0.7}\times10^{24}$~cm$^{-2}$).
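The Compton-thick threshold quoted in the footnote follows directly from the inverse of the Thomson cross-section, i.e. one scattering per atom along the line of sight:

```python
# Compton-thick threshold from the Thomson cross-section.
SIGMA_T = 6.6524e-25            # Thomson cross-section, cm^2
nh_compton_thick = 1.0 / SIGMA_T   # ~1.5e24 cm^-2
print(f"N_H(Compton-thick) ~ {nh_compton_thick:.2e} cm^-2")
```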
Four sources have hard-band (2--8~keV) upper limits: U428, U4639, U5632, and U5775, all of which are characterized by limited counting statistics (12--26 counts in the full band). Their \hbox{X-ray}\ luminosity (see Table~\ref{xray_properties})\footnote{For the four sources with 2--8~keV upper limits, the 2--10~keV luminosities are reported in Table~\ref{xray_properties} as if these sources were detected in the hard band; in fact, the values reported in the table are derived from the 0.5--8~keV fluxes, where all of these sources are detected.} is consistent with star-formation activity, although the presence of a low-luminosity AGN cannot be excluded. U4950 and U5877 are characterized by $\sim$1420 and $\sim$360 counts in the 0.5--8~keV band, which allow a moderate-quality \hbox{X-ray}\ spectral analysis. Both sources can be fitted using a powerlaw model, but the flat photon index ($-1$ and $+$1, respectively) is indicative of absorption, which has been estimated to be 1.2$^{+0.3}_{-0.2}\times10^{23}$~cm$^{-2}$ and 6.0$^{+2.0}_{-1.5}\times10^{23}$~cm$^{-2}$, respectively, after fixing the photon index to 1.8. The same model applied to U4642 data provides good results, although the derived column density is poorly constrained ($N_{\rm H}=4.5^{+6.1}_{-4.3}\times10^{22}$~cm$^{-2}$). For the remaining \hbox{X-ray}\ matched sources (i.e., all but U4950, U4958, U5877, and U4642), the limited counting statistics prevent us from placing constraints on any possible obscuration. For the sources which were not detected in the 4~Ms CDF-S observations, we derived rest-frame 2--10~keV luminosity upper limits by converting the full-band sensitivity map (weighted over a region surrounding the source of interest of 4\arcsec\ radius to minimize possible spurious fluctuations in the map) using a powerlaw with $\Gamma$=1.4 (see Table~\ref{xray_properties}). The assumption of $\Gamma$=1.8 would imply luminosities higher by $\sim0.1-0.2$ dex.
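The conversion from an observed 0.5--8~keV flux (or sensitivity-map upper limit) to a rest-frame 2--10~keV luminosity for a $\Gamma=1.4$ powerlaw can be sketched as follows. The flat $\Lambda$CDM parameters ($H_{0}=70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{m}=0.3$) are our assumption, since the adopted cosmology is not restated here:

```python
import math

H0_KM_S_MPC, OMEGA_M = 70.0, 0.3           # assumed flat LCDM
C_KM_S, CM_PER_MPC = 2.99792458e5, 3.0857e24

def lum_distance_cm(z, n=2000):
    """Luminosity distance for flat LCDM via a midpoint-rule
    comoving-distance integral."""
    dz = z / n
    integral = sum(dz / math.sqrt(OMEGA_M * (1 + (i + 0.5) * dz) ** 3
                                  + 1 - OMEGA_M) for i in range(n))
    return (1 + z) * (C_KM_S / H0_KM_S_MPC) * integral * CM_PER_MPC

def log_l_2_10(flux_05_8, z, gamma=1.4):
    """Rest-frame 2-10 keV log-luminosity from an observed 0.5-8 keV
    flux (erg/cm^2/s) for an unabsorbed powerlaw (gamma != 2)."""
    p = 2.0 - gamma
    band = (10 ** p - 2 ** p) / (8 ** p - 0.5 ** p)   # 2-10 vs 0.5-8 band
    kcorr = (1 + z) ** (gamma - 2.0)                  # powerlaw k-correction
    lum = 4 * math.pi * lum_distance_cm(z) ** 2 * flux_05_8 * band * kcorr
    return math.log10(lum)

# e.g. U4950: f(0.5-8) = 6.64e-15 cgs at z = 2.31
print(f"{log_l_2_10(6.64e-15, 2.31):.2f}")
```

With these assumptions the example returns $\log L\simeq44.1$ for U4950, within $\sim$0.2 dex of the tabulated 43.9; the residual difference reflects details of the adopted conversion and cosmology.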
The derived \hbox{X-ray}\ upper limits place strong constraints on the presence of additional luminous AGN in our sample, unless they are heavily obscured. Following F10, for the nine sources with an AGN from the SED fitting, we estimated the rest-frame \hbox{X-ray}\ emission using the mid-IR emission of the AGN component as a proxy of the nuclear power (Table~\ref{irx_lum}). In particular, we use the $L_{2-10keV}$--5.8~\hbox{$\mu$m}\ relation (\citealt{2009ApJ...693..447F}; $\log L_{2-10keV}=43.57 + 0.72\times (\log L_{5.8\hbox{$\mu$m}}-44.2)$, with luminosities expressed in erg~s$^{-1}$; see also \citealt{2004A&A...418..465L}), which was calibrated using unobscured AGN in the CDF-S and COSMOS surveys. In Fig.~\ref{lxlx} the predicted and measured \hbox{X-ray}\ luminosities are compared. For the three sources with clear AGN signatures (also from \hbox{X-ray}\ data), the luminosities have been corrected for absorption (see Table~\ref{xray_properties} and above), while for the remaining sources, no correction has been applied, due to the lack of constraints on the column density from the low-count \hbox{X-ray}\ spectra. Overall, it appears that the three sources showing clear AGN signatures in the mid-IR band and in X-rays (U4950, U4958 and U5877) are broadly consistent with the 1:1 relation, considering the dispersion in the mid-IR/\hbox{X-ray}\ relation, the observational errors, and the uncertainties in properly deriving the intrinsic 2--10~keV luminosity. The absorption correction appears reliable also for the most heavily obscured of our \hbox{X-ray}\ sources, U4958. For the other six sources, the expected 2--10~keV luminosity is a factor of ${\sim}$10--100 higher than the measured \hbox{X-ray}\ luminosity. Although caution is obviously needed in all the cases where mid-IR vs.
\hbox{X-ray}\ luminosity correlations are used (see, e.g., the discussion in \citealt{2010MNRAS.404...48V}), the obscuration derived from the SED fitting for the remaining six sources ($\sim2{\times}$10$^{23}$~cm$^{-2}$) and the ratio between the measured and predicted luminosities suggest the presence of significant obscuration in these sources. Our conclusion is robust against the possible presence of absorption affecting also the $L_{5.8\hbox{$\mu$m}}$ values, since in this case the predicted \hbox{X-ray}\ luminosities should be considered as lower limits. Sensitive \hbox{X-ray}\ data over a larger bandpass would be needed to test the hypothesis of heavy obscuration in these sources. \begin{figure} \includegraphics[width=8cm]{fig4_pozzi.ps} \caption{Predicted vs. measured 2--10~keV luminosities for the nine sources where an AGN component is inferred from the SED fitting. The triangles indicate the three sources with clear AGN \hbox{X-ray}\ emission; for these three sources only, the 2--10~keV luminosities have been corrected for absorption derived from \hbox{X-ray}\ spectral analysis data (see Table~\ref{xray_properties}). 
The dashed diagonal lines indicate ratios of 1:1, 10:1 and 100:1 between the predicted and the measured \hbox{X-ray}\ luminosity (from right to left).} \label{lxlx} \end{figure} \begin{table*} \begin{center} \caption{IR and \hbox{X-ray}\ luminosities, and AGN fractions for the sample of $z\sim2$ IR-luminous galaxies} \label{irx_lum} \footnotesize \begin{tabular}{lcccccccc}\hline\hline Name & $z_{IRS}$ & $L_{IR}$ & f$^{AGN}_{8-1000{\mu}m}$ & f$^{AGN}_{5-30{\mu}m}$&f$^{AGN}_{2-6{\mu}m}$ &log(L$_{2-10keV}$) & logL($_{2-10keV}^{pred}$) & Class.SED \\ (1) & (2) & (3) & (4) & (5) & (6) &(7) &(8)&(9)\\ \hline U428 & 1.78 & 12.06 & - & -& - & 41.9 & - &\\ U4367 & 1.62 & 11.70 & - & - & - & - & - &\\ U4451 & 1.88 & 12.00 & - & - & - & - & - &\\ U4499 & 1.96 & 12.25 & - & - & - & - & - & \\ U4631 & 1.84 & 11.89 & - & - & - & - & - &\\ U4639 & 2.11 & 12.08 & 0.03 & 0.16 & 0.46 & 42.0 & 43.7& AGN \\ U4642 & 1.90 & 11.83 & - & - & - & 42.8 & - &\\ U4812 & 1.93 & 12.45 & - & - & -& - & - &\\ U4950 & 2.31 & 12.17 & 0.22 & 0.63 & 0.91 & 44.2 & 44.4 &AGN \\ U4958 & 2.12 & 12.10 & 0.03 & 0.11 & 0.54 & 43.7 & 43.7 &AGN \\ U5050 & 1.73 & 11.90 & - & - & - & - & - &\\ U5059 & 1.77 & 11.95 & - & - & - & - & - & \\ U5150 & 1.90 & 12.24 & 0.04 & 0.26 & 0.61 & - & 43.8 &AGN \\ U5152 & 1.89 & 11.94 & 0.07 & 0.26 & 0.60 & - & 43.7 & AGN \\ U5153 & 2.09 & 12.26 & 0.10 & 0.51 & 0.78 & - & 43.8 & AGN \\ U5632 & 2.02 & 12.39 & - & - & - & 42.1 & - & \\ U5652 & 1.62 & 12.37 & 0.03 & 0.20 & 0.29 & - & 43.7 &AGN \\ U5775 & 1.90 & 11.98 & - & - & - & 41.8 & - & \\ U5795 & 1.70 & 11.93 & - & - & - & - & - & \\ U5801 & 1.84 & 11.70 & - & - & - & - & - & \\ U5805 & 2.03 & 12.12 & 0.02 & 0.13 & 0.71 & - & 43.7 & AGN \\ U5829 & 1.74 & 11.90 & - & - & - & - & - & \\ U5877 & 1.89 & 12.37 & 0.03 & 0.13 & 0.68 & 44.0 & 43.9 &AGN \\ U16526 & 1.72 & 12.38 & - & - & - & - & - & \\ \hline\hline \end{tabular} \begin{minipage}[h]{12.2cm} \footnotesize Notes --- (1) Source name (F10); (2) {\it IRS\/}\ redshift 
(F10); (3) logarithm of the total (AGN$+$stellar) IR (8--1000~\hbox{$\mu$m}) luminosity (in $L_{\odot}$); (4), (5), (6) AGN contribution to the 8--1000, 5--30 and 2--6~${\mu}$m luminosity, respectively, for the nine sources with an AGN component at the 3~$\sigma$ level; (7) logarithm of the 2--10~keV luminosity (erg~s$^{-1}$; see Sec.~\ref{xray_section} and Table~\ref{xray_properties}); (8) logarithm of the 2--10~keV luminosity predicted from the $L_{2-10keV}$--$L_{5.8{\mu}m}$ relation for sources with an AGN component detected at the 3$\sigma$ confidence level (erg~s$^{-1}$; see \citealt{2009ApJ...693..447F} and Sec.~\ref{xray_section}); (9) Source classification from SED fitting. ``AGN'' indicates that the nuclear component is detected at the 3$\sigma$ confidence level. All of the remaining sources are consistent with no significant AGN emission from the SED-fitting analysis. \end{minipage} \end{center} \end{table*} \section{Summary} In this paper we have studied the multi-band properties of a sample of IR luminous sources at $z{\sim}2$ in order to estimate the AGN contribution to their mid-IR and far-IR emission. The sample was selected by F10 at faint 24~$\mu$m flux densities ($S(24{\mu}m){\sim}0.14-0.55$~mJy) and $z=1.75-2.40$ to specifically target luminosities around $10^{12}~L_{\odot}$, i.e. sampling the knee of the IR luminosity function. We have extended the previous analysis by taking advantage of new far-IR data recently obtained by the {\it Herschel\/}\ satellite as part of the guaranteed survey ``PACS Evolutionary Probe'' (PEP, \citealt{2011A&A...532A..90L}), and of the recently published 4~Ms {\it Chandra\/}\ data (\citealt{2011ApJS..195...10X}). The available photometry, coupled with {\it IRS\/}\ mid-IR data, have been used to reconstruct the broad-band SEDs of our IR-luminous galaxies. These SEDs have been modeled using a SED-fitting technique with three components, namely a stellar, an AGN, and a starburst component. 
The most up-to-date smooth torus model by F06 has been adopted for the AGN emission. The major results of the work can be summarized as follows. \begin{itemize} \item{SED fitting indicates that emission from the host galaxy in the optical/near-IR and star formation in the mid-IR/far-IR is required for all of the sources. The presence of an AGN component is consistent with the data for all but one source (U428), although only for 9 out of the 24 galaxies (${\sim}35\%$ of the sample) is this emission significant at least at the 3$\sigma$ level.} \item{Of the sub-sample of nine sources that likely harbour an AGN according to the SED fitting, we find that their total (8--1000~${\mu}$m) and mid-IR (5--30~\hbox{$\mu$m}) emissions are dominated by starburst processes, with the AGN-powered emission accounting for only $\sim5\%$ and $\sim23\%$ of the energy budget in these wavelength ranges, respectively. We find that the AGN radiation overcomes the stellar + starburst components only in the narrow 2--6~$\mu$m range, where it accounts for $\sim$60\% of the energy budget. In this wavelength range, stellar emission has significantly declined and emission from PAH features and the starburst component is not yet prominent.} \item{For this same sub-sample, the gas column densities (derived by converting the dust optical depth at 9.7~$\mu$m obtained from the SED fitting) are indicative of a significant level of obscuration. In particular, three sources, also detected as relatively bright \hbox{X-ray}\ sources (U4950, U4958, and U5877, see below), have $<N_{H}>=6{\times}10^{22}$~cm$^{-2}$ (considering all the solutions at the 3$\sigma$ confidence level), while the remaining six sources have $<N_{H}>={2}{\times}10^{23}$~cm$^{-2}$.} \item{X-ray analysis confirms that three sources (U4950, U4958, and U5877) are actually powered by an AGN at short wavelengths, and that this AGN varies from being moderately (U4950 and U5877) to heavily obscured, possibly Compton thick (U4958).
The \hbox{X-ray}\ luminosity of U4642 is also suggestive of moderately obscured AGN emission. The remaining four sources detected by {\it Chandra\/}\ have \hbox{X-ray}\ emission consistent with star-formation processes.} \item{For the six sources where the AGN is required only by SED fitting (i.e., no strong AGN emission is observed in X-rays), we estimate an intrinsic \hbox{X-ray}\ nuclear luminosity from the AGN continuum at 5.8~${\mu}$m. The ratio (from 10 up to 100) between the predicted and the measured luminosities suggests the presence of significant obscuration in these sources.} \end{itemize} \section*{acknowledgements} The authors thank the referee for his/her useful comments. FP and CV thank the Sterrenkundig Observatorium (Universiteit Gent), in particular Prof. M. Baes and Dr. J. Fritz, for their kind hospitality. The authors are grateful to F.E. Bauer and F. Vito for their help with {\it Chandra\/}\ spectra. CV acknowledges partial support from the Italian Space Agency (contract ASI/INAF/I/009/10/0). \\ PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). \bibliographystyle{mn2e}
\section{Introduction} One of the important problems of gravity physics is to develop a theory of quantum gravity. To date we do not have a satisfactory and consistent theory of quantum gravity, although classical general relativity (GR) is the most successful theory of gravity. GR attracts much attention because of the existence of black holes. According to GR, a sufficiently high concentration of mass in space produces a gravitational field so strong that even light cannot escape from that region, named a black hole. There is an analogy between black hole physics and ordinary thermodynamics which could help in understanding the theory of quantum gravity. According to Bardeen, Carter and Hawking \cite{bardeen} (see also \cite{Wald}), black holes (BHs) can be considered as thermodynamic objects obeying four laws of black hole mechanics. It was understood that black holes can be treated as thermodynamic objects when quantum mechanical effects are combined with GR, describing BH physics semiclassically. This allows us to study different thermodynamic properties of black holes: phase transitions, thermal stability, black hole evaporation and others. The study of thermodynamic phase transitions of black holes can give a quantum interpretation of GR and may help in understanding the nature of quantum gravity. BH phase transitions were first investigated by Davies and Hut \cite{Davies}, \cite{Hut}. Phase transitions play a very important role in different branches of physics. Thermodynamic phase transitions of black holes can be studied by means of the heat capacity in the canonical ensemble. When the equilibrium is unstable, the heat capacity becomes negative and the black hole absorbs mass from outside. Then the BH temperature decreases and the rate of absorption is greater than the rate of emission; as a result, the black hole will grow. Phase transitions take place at the discontinuities of the heat capacity.
Models of nonlinear electrodynamics (NLED) coupled to GR are of interest because they can explain the inflation of the universe \cite{Garcia}-\cite{Kruglov3}. The initial singularities in the early universe can also be avoided in some NLED models \cite{Novello1}. In particular NLED models, an upper limit on the electric field at the centre of point-like particles occurs \cite{Born}-\cite{Kruglov1}. In addition, in some models of NLED \cite{Born}-\cite{Kruglov2} the self-energy of charges is finite. The self-energy of NLED is finite if a definite condition for the Lagrangian is satisfied \cite{Shabad2}, \cite{Shabad3}. Due to loop corrections, QED also becomes nonlinear \cite{Heisenberg}-\cite{Adler}. Here we study a black hole within a new model of NLED. Our model is a modification of the model given in \cite{Bronnikov}. We investigate the model in the framework of GR and study the thermodynamics of black holes. In \cite{Bardeen}-\cite{Frolov} black holes were investigated in different NLED models. In the weak-field limit our NLED reduces to Maxwell's electrodynamics, so the correspondence principle takes place. The structure of the paper is as follows. We propose the new model of NLED with the parameter $\beta$ in section 2. In section 3 NLED in GR is studied. The asymptotics of the metric and mass functions at $r\rightarrow 0$ and $r\rightarrow\infty$ are found. We also obtain corrections to the Reissner-Nordstr\"{o}m (RN) solution. We calculate the Hawking temperature and heat capacity of black holes. It is shown that black holes undergo a second-order phase transition, and the range where black holes are stable is obtained. In section 4 we draw conclusions. The units with $c=\hbar=1$, $\varepsilon_0=\mu_0=1$ are used and the metric signature is given by $\eta=\mbox{diag}(-1,1,1,1)$. \section{A model of NLED} Let us study NLED with the Lagrangian density \begin{equation} {\cal L} = -\frac{{\cal F}}{\cosh\sqrt[4]{|\beta{\cal F}|}}.
\label{1} \end{equation} The parameter $\beta$ possesses the dimensions of (length)$^4$, ${\cal F}=(1/4)F_{\mu\nu}F^{\mu\nu}=(\textbf{B}^2-\textbf{E}^2)/2$, and $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the field tensor. The correspondence principle takes place because in the weak-field limit, $\beta {\cal F}\ll 1$, the Lagrangian density (1) is converted into Maxwell's one, ${\cal L}\rightarrow-{\cal F}$. From Eq. (1) one obtains the field equations \begin{equation} \partial_\mu\left({\cal L}_{\cal F}F^{\mu\nu} \right)=0, \label{2} \end{equation} where \begin{equation}\label{3} {\cal L}_{\cal F}=\partial {\cal L}/\partial{\cal F}=\frac{|\beta{\cal F}|^{1/4}\sinh\sqrt[4]{|\beta{\cal F}|}}{4\cosh^2\sqrt[4]{|\beta{\cal F}|}}-\frac{1}{\cosh\sqrt[4]{|\beta{\cal F}|}}. \end{equation} Making use of Eq. (1) we find the electric displacement field $\textbf{D}=\partial{\cal L}/\partial \textbf{E}$ \begin{equation} \textbf{D}=\varepsilon\textbf{E},~~~~\varepsilon=-{\cal L}_{\cal F}, \label{4} \end{equation} and the magnetic field $\textbf{H}=-\partial{\cal L}/\partial \textbf{B}$ \begin{equation} \textbf{H}=\mu^{-1}\textbf{B},~~~~\mu^{-1}=-{\cal L}_{\cal F}=\varepsilon. \label{5} \end{equation} With the help of Eqs. (4) and (5) the field equations (2) become the Maxwell equations \begin{equation} \nabla\cdot \textbf{D}= 0,~~~~ \frac{\partial\textbf{D}}{\partial t}-\nabla\times\textbf{H}=0. \label{6} \end{equation} The second pair of nonlinear Maxwell's equations, due to the Bianchi identity $\partial_\mu \tilde{F}^{\mu\nu}=0$, with $\tilde{F}^{\mu\nu}$ being the dual tensor, is \begin{equation} \nabla\cdot \textbf{B}= 0,~~~~ \frac{\partial\textbf{B}}{\partial t}+\nabla\times\textbf{E}=0. \label{7} \end{equation} Eqs. (4) and (5) give the relationship \begin{equation} \textbf{D}\cdot\textbf{H}=\varepsilon^2\,\textbf{E}\cdot\textbf{B}.
\label{8} \end{equation} Thus, $\textbf{D}\cdot\textbf{H}\neq\textbf{E}\cdot\textbf{B}$ and, according to \cite{Gibbons}, the dual symmetry is broken. The dual symmetry holds in classical electrodynamics and in Born-Infeld electrodynamics. It should be mentioned that in QED the dual symmetry is violated because of quantum corrections. In NLED the symmetrical energy-momentum tensor is given by \begin{equation} T_{\mu\nu}= {\cal L}_{\cal F}F_\mu^{~\alpha}F_{\nu\alpha} -g_{\mu\nu}{\cal L}. \label{9} \end{equation} By virtue of Eqs. (1), (3) and (9) we obtain the energy-momentum tensor trace \begin{equation}\label{10} {\cal T}\equiv T^{\mu}_\mu=\frac{{\cal F}\sqrt[4]{|\beta{\cal F}|}\sinh\sqrt[4]{|\beta{\cal F}|}}{\cosh^2\sqrt[4]{|\beta{\cal F}|}}, \end{equation} which is not zero and, as a result, the scale invariance is broken. At $\beta =0$ one comes to classical electrodynamics and the trace of the energy-momentum tensor, due to Eq. (10), vanishes. \section{Magnetic black holes} The action of NLED in general relativity is given by \begin{equation} I=\int d^4x\sqrt{-g}\left(\frac{1}{2\kappa^2}R+ {\cal L}\right), \label{11} \end{equation} where $R$ is the Ricci scalar, $\kappa^2=8\pi G\equiv M_{Pl}^{-2}$, $G$ is Newton's constant, and $M_{Pl}$ is the reduced Planck mass. We study a magnetically charged black hole ($\textbf{E}=0$, $\textbf{B}\neq 0$). Varying action (11) with respect to the metric and electric potential, one finds Einstein's and the electromagnetic field equations \begin{equation} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=-\kappa^2T_{\mu\nu}, \label{12} \end{equation} \begin{equation} \partial_\mu\left(\sqrt{-g}{\cal L}_{\cal F}F^{\mu\nu}\right)=0. \label{13} \end{equation} The line element possessing spherical symmetry is given by \begin{equation} ds^2=-f(r)dt^2+\frac{1}{f(r)}dr^2+r^2(d\vartheta^2+\sin^2\vartheta d\phi^2), \label{14} \end{equation} where the metric function is defined as follows \cite{Bronnikov}: \begin{equation} f(r)=1-\frac{2GM(r)}{r}.
\label{15} \end{equation} The mass function is given by \begin{equation} M(r)=\int_0^r\rho_M(r)r^2dr, \label{16} \end{equation} where $\rho_M$ is the magnetic energy density. The magnetic mass of the black hole is $m_M=\int_0^\infty\rho_M(r)r^2dr$. From Eq. (9) we obtain the magnetic energy density (\textbf{E}=0) \begin{equation}\label{17} \rho_M=T_0^{~0}=-{\cal L} = \frac{{\cal F}}{\cosh\sqrt[4]{|\beta{\cal F}|}}, \end{equation} where ${\cal F}=B^2/2=q^2/(2r^4)$, and $q$ is a magnetic charge. Introducing the dimensionless parameter $x=2^{1/4}r/(\beta^{1/4}\sqrt{q})$ and making use of Eqs. (16) and (17) we obtain the mass function \begin{equation}\label{18} M(x)=m_M-\frac{2^{1/4}q^{3/2}}{\beta^{1/4}}\arctan\left(\tanh\left(\frac{1}{2x}\right)\right), \end{equation} where the magnetic mass of the black hole reads \begin{equation}\label{19} m_M = M(\infty)=\frac{\pi q^{3/2}}{2^{7/4}\beta^{1/4}}. \end{equation} With the aid of Eqs. (15) and (18) one obtains the metric function \begin{equation}\label{20} f(x)=1-\frac{\pi-4\arctan\left(\tanh\left(\frac{1}{2x}\right)\right)}{ax}, \end{equation} where $a=\sqrt{2\beta}/(Gq)$. From Eq. (20) we find the asymptotic of the metric function at $r\rightarrow\infty$ \begin{equation}\label{21} f(r)=1-\frac{2Gm_M}{r}+\frac{Gq^2}{r^2}-\frac{G\sqrt{\beta}q^3}{6\sqrt{2} r^4}+ {\cal O}(r^{-6}). \end{equation} Equation (21) defines the corrections to the RN solution, which are of order ${\cal O}(r^{-4})$. When $r\rightarrow \infty$, $f(\infty)=1$, and the spacetime becomes Minkowski's spacetime. We find that \begin{equation}\label{22} \lim_{x\rightarrow 0^+}f(x)=1. \end{equation} In accordance with Eq. (22) the black hole is regular and has no conical singularity, as $f(0)=1$. At $\beta=0$ the model is converted into Maxwell's electrodynamics and Eq. (21) gives the RN solution. The plot of the function $f(x)$ for different parameters $a=\sqrt{2\beta}/(Gq)$ is given in Fig. 1.
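As a quick numerical sanity check (ours, not part of the derivation), the exact metric function (20) can be compared with the expansion (21) rewritten in the dimensionless variable $x=2^{1/4}r/(\beta^{1/4}\sqrt{q})$: with $a=\sqrt{2\beta}/(Gq)$ it reads $f(x)\approx 1-\pi/(ax)+2/(ax^2)-1/(3ax^4)$, so the residual should fall off as $x^{-6}$.

```python
import math

def f_exact(x, a):
    # metric function of Eq. (20): f = 1 - [pi - 4*arctan(tanh(1/(2x)))]/(a x)
    return 1.0 - (math.pi - 4.0 * math.atan(math.tanh(0.5 / x))) / (a * x)

def f_series(x, a):
    # Eq. (21) rewritten in the dimensionless variable x
    return 1.0 - math.pi / (a * x) + 2.0 / (a * x ** 2) - 1.0 / (3.0 * a * x ** 4)

a = 1.0
d25 = abs(f_exact(25.0, a) - f_series(25.0, a))
d50 = abs(f_exact(50.0, a) - f_series(50.0, a))
ratio = d25 / d50   # close to 2**6 = 64 if the first omitted term is O(x^{-6})
```

Doubling $x$ shrinks the residual by a factor close to $2^6=64$, consistent with the ${\cal O}(r^{-6})$ remainder in Eq. (21).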
\begin{figure}[h] \includegraphics[height=3.0in,width=3.0in]{yoy7.eps} \caption{\label{fig.1}The plot of the function $f(x)$. Dashed-dotted line corresponds to $a=0.5$, solid line corresponds to $a=1.42$ and dashed line corresponds to $a=5$.} \end{figure} Fig. 1 shows that at $a>1.42$ there are no horizons. At $a\simeq 1.42$ the horizons merge and the extremal black hole occurs. At $a<1.42$ there are two horizons. The condition $f(x_h)=0$, which defines the horizons, leads from Eq. (20) to the equation \begin{equation}\label{23} a=\frac{\pi-4\arctan\left(\tanh\left(\frac{1}{2x_h}\right)\right)}{x_h}. \end{equation} The plot of the function $a(x_h)$ is given in Fig. 2. \begin{figure}[h] \includegraphics[height=3.0in,width=3.0in]{toy8.eps} \caption{\label{fig.2}The plot of the function $a(x_h)$.} \end{figure} The inner $x_-$ and outer $x_+$ horizons of the black hole are represented in Table 1. \begin{table}[ht] \caption{The inner and outer horizons of the black hole} \centering \begin{tabular}{c c c c c c c c c c }\\[1ex] \hline \hline $a$ & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 & 1.3 & 1.4 \\[0.5ex] \hline $x_-$ & 0.280 & 0.307 & 0.334 & 0.363 & 0.394 & 0.428 & 0.466 & 0.648 & 0.816 \\[0.5ex] \hline $x_+$ & 7.158 & 5.569 & 4.503 & 3.731 & 3.145 & 2.680 & 2.297 & 1.402 & 1.065 \\[0.5ex] \hline \end{tabular} \end{table} At $r\rightarrow \infty$ the energy-momentum trace, due to Eq. (10) (${\cal F}=q^2/(2r^4)$), approaches zero. Therefore, according to Eq. (12), the Ricci scalar $R=\kappa^2{\cal T}$ also goes to zero, and spacetime becomes Minkowski's spacetime. \subsection{Thermodynamics} To study the thermal stability of magnetically charged black holes we will calculate the Hawking temperature. The expression for the Hawking temperature is given by \begin{equation} T_H=\frac{\kappa_S}{2\pi}=\frac{f'(r_h)}{4\pi}. \label{24} \end{equation} Here $\kappa_S$ is the surface gravity and $r_h$ is the horizon. From Eqs.
(15) and (16) one obtains the relations \begin{equation} f'(r)=\frac{2 GM(r)}{r^2}-\frac{2GM'(r)}{r},~~~M'(r)=r^2\rho_M,~~~M(r_h)=\frac{r_h}{2G}. \label{25} \end{equation} Making use of Eqs. (17), (24) and (25), we obtain the Hawking temperature \begin{equation} T_H=\frac{1}{2^{7/4}\pi\beta^{1/4}\sqrt{q}} \left(\frac{1}{x_h}-\frac{2}{x_h^2\left[\pi-4\arctan(\tanh(1/(2x_h)))\right]\cosh(1/x_h)}\right). \label{26} \end{equation} The plot of the function $T_H(x_h)$ is represented in Fig. 3. \begin{figure}[h] \includegraphics[height=3.0in,width=3.0in]{toy9.eps} \caption{\label{fig.3}The plot of the function $T_H\sqrt{q}\beta^{1/4}$ vs. horizons ($x_h$).} \end{figure} The Hawking temperature is zero at $x_h\simeq 0.93$. For $x_h>0.93$ the Hawking temperature is positive and for $x_h<0.93$ it is negative. The Hawking temperature possesses a maximum at $x_+\simeq 1.82$, where the heat capacity is singular and a second-order phase transition occurs at this point. One can obtain the heat capacity from the relation \begin{equation} C_q=T_H\left(\frac{\partial S}{\partial T_H}\right)_q=\frac{T_H\partial S/\partial r_h}{\partial T_H/\partial r_h}=\frac{2\pi r_h T_H}{G\partial T_H/\partial r_h}, \label{27} \end{equation} where the entropy satisfies the Hawking area law $S=A/(4G)=\pi r_h^2/G$. The plot of the function $GC_q/(\sqrt{\beta}q)$ vs. the horizon $x_h$ is given in Figs. 4 and 5 for different ranges of $x_h$. \begin{figure}[h] \includegraphics[height=3.0in,width=3.0in]{toy10.eps} \caption{\label{fig.4}The plot of the function $C_qG/(\sqrt{\beta}q)$ vs. $x_h$.} \end{figure} \begin{figure}[h] \includegraphics[height=3.0in,width=3.0in]{toy11.eps} \caption{\label{fig.5}The plot of the function $C_qG/(\sqrt{\beta}q)$ vs. $x_+$.} \end{figure} Figs. 4 and 5 show that indeed at $x_+\simeq 1.82$ ($r_+\simeq 1.53 \beta^{1/4}\sqrt{q}$), where the heat capacity has a discontinuity, the second-order phase transition of the black hole takes place.
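The horizon and temperature figures quoted above can be reproduced with elementary root finding. The sketch below (ours; the bisection brackets and grid step are arbitrary choices) works with the horizon condition (23) and the dimensionless temperature $\tau(x_h)=2^{7/4}\pi\beta^{1/4}\sqrt{q}\,T_H$, i.e. the bracketed factor in Eq. (26); it recovers the $a=1$ row of Table 1, the zero of $T_H$ near $x_h\simeq 0.93$, and its maximum near $x_+\simeq 1.82$.

```python
import math

def a_of_x(x):
    # horizon condition, Eq. (23): a = [pi - 4*arctan(tanh(1/(2x)))]/x
    return (math.pi - 4.0 * math.atan(math.tanh(0.5 / x))) / x

def tau(x):
    # dimensionless Hawking temperature: the bracketed factor in Eq. (26)
    b = math.pi - 4.0 * math.atan(math.tanh(0.5 / x))
    return 1.0 / x - 2.0 / (x * x * b * math.cosh(1.0 / x))

def bisect(g, lo, hi, iters=200):
    # plain bisection; assumes a sign change of g on [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# a = 1 row of Table 1: a(x) rises through 1 at x_- and falls through 1 at x_+
x_minus = bisect(lambda x: a_of_x(x) - 1.0, 0.1, 1.0)
x_plus = bisect(lambda x: a_of_x(x) - 1.0, 1.5, 7.0)

# zero of T_H (x_h ~ 0.93) and its maximum (x_+ ~ 1.82), Fig. 3
x_zero = bisect(tau, 0.5, 1.5)
x_max = max((1.0 + 0.001 * k for k in range(2001)), key=tau)  # grid on [1, 3]
```

The computed horizons agree with the $a=1$ column of Table 1 to the quoted three decimals.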
At $x_+<1.82$ the black hole is stable, and at $x_+>1.82$ the heat capacity is negative and the black hole becomes unstable. \section{Conclusion} We have proposed an NLED model with the dimensional parameter $\beta$ such that in the weak-field limit the model becomes Maxwell's electrodynamics; thus, the correspondence principle holds. NLED coupled to the gravitational field was investigated and a regular black hole solution was obtained. We have studied magnetically charged black holes and found the asymptotics of the metric and mass functions at $r\rightarrow 0$ and $r\rightarrow\infty$. Corrections to the RN solution were obtained, which are of order ${\cal O}(r^{-4})$. We have calculated the Hawking temperature and heat capacity of black holes and demonstrated that black holes undergo second-order phase transitions at $x_+\simeq 1.82$ ($r_+\simeq 1.53 \beta^{1/4}\sqrt{q}$). We have investigated the thermodynamic stability of black holes and shown that in the range $x_+<1.82$ the black holes are stable. It would also be interesting to study the perturbative stability of black holes in the framework of the model under consideration.
\def\partial{\partial} \def\mathcal{\mathcal} \def\text{Var\/}{\text{Var\/}} \newcommand{\boldsymbol}{\boldsymbol} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\e}[1]{\exp\left( #1\right)} \newcommand{\mathcal{CAT}}{\mathcal{CAT}} \newcommand{\mathcal{INV}}{\mathcal{INV}} \newcommand{U(\bs j)}{U(\boldsymbol j)} \newtheorem{Theorem}{Theorem}[section] \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Remark}[Theorem]{Remark} \theoremstyle{remark} \newtheorem*{Note}{Note} \newtheorem*{Example}{\bf Example} \newtheorem*{Claim}{\it Claim} \numberwithin{equation}{section} \linespread{1.1} \date{\today} \begin{document} \title[Stable partitions]{On random stable partitions} \author{Boris Pittel} \address{Department of Mathematics, The Ohio State University, Columbus, Ohio 43210, USA} \email{bgp@math.ohio-state.edu} \keywords {stable matchings, partitions, random preferences, asymptotics } \subjclass[2010] {05C30, 05C80, 05C05, 34E05, 60C05} \begin{abstract} The stable roommates problem does not necessarily have a solution, i.e. a stable matching. We had found that, for the uniformly random instance, the expected number of solutions converges to $e^{1/2}$ as $n$, the number of members, grows, and with Rob Irving we proved that the limiting probability of solvability is $e^{1/2}/2$, at most. Stephan Mertens's extensive numerics compelled him to conjecture that this probability is of order $n^{-1/4}$. Jimmy Tan introduced the notion of a stable cyclic partition and proved the existence of such a partition for every system of members' preferences, discovering that the presence of odd cycles in a stable partition is equivalent to the absence of a stable matching. In this paper we show that the expected number of stable partitions with odd cycles grows as $n^{1/4}$.
However, the standard deviation of that number is of order $n^{3/8}\gg n^{1/4}$, too large to conclude that the odd cycles exist with high probability (whp). Still, as a byproduct, we show that whp the fraction of members with more than one stable ``predecessor'' is of order $n^{-1/4}$. Furthermore, whp the average rank of a predecessor in every stable partition is of order $n^{1/2}$. The likely size of the largest stable matching is $n/2-O(n^{1/4+o(1)})$, and the likely number of pairs of unmatched members blocking the optimal complete matching is $O(n^{3/4+o(1)})$. \end{abstract} \maketitle \section{Introduction and main results} A roommates problem instance is specified by an even integer $n$, the number of members, and for each $i$ ($1\le i\le n)$ a permutation $\sigma_i$ of the set $[n]=\{1,2,\dots, n\}$ in which $i$ itself occupies position $n$, ($\sigma_i(n)=i$). The permutation $\sigma_i$ forms the preference list of person $i$: $\sigma_i(k)=j$ if person $j$ occupies position $k$ in the preference list of person $i$, and each person $i$ is at the end of their own preference list. Equivalently, the instance can be specified by the ranking list $R_i$ of each person $i$, defined as the inverse permutation of $\sigma_i$: $R_i(j)=k$ if person $j$ is the $k$-th best for person $i$. For a given roommates instance with $n$ members, a stable permutation (cyclic partition) is a permutation $\bold\Pi$ of $[n]$ such that: \begin{equation}\label{Tan1,2} \begin{aligned} &(1)\, \forall\,i\in [n]: \,\,R_i\bigl(\bold\Pi(i)\bigr)\le R_i\bigl(\bold\Pi^{-1}(i)\bigr);\\ &(2)\, \forall\, 1\le i\neq j\le n:\,\, R_i(j)<R_i\bigl(\bold\Pi^{-1}(i)\bigr)\Longrightarrow R_j(i)>R_j\bigl(\bold\Pi^{-1}(j)\bigr). \end{aligned} \end{equation} Viewing $\bold\Pi$ in terms of its cyclic decomposition, we will refer to $\bold\Pi(i)$ and $\bold\Pi^{-1}(i)$ as the successor of $i$ and the predecessor of $i$ in the permutation $\bold\Pi$.
Then condition (1) states that no person prefers his predecessor to his successor, and condition (2) states that no two mutually-unaligned members prefer each other to their predecessors. Note that equality in condition (1) is possible iff $\bold\Pi^2(i)=i$, i.e. either $i$ is a fixed point of $\bold\Pi$, or $(i,\bold\Pi(i))$ is a transposition in $\bold \Pi$---in this case we say that $(i,\bold\Pi(i))$ forms a pair in the partition $\bold\Pi$. Thus inequality (1) is not vacuous iff $i$ is in a cycle of length $3$ or more, in which case it is strict. Also if $i$ is a fixed point, then $R_i(\bold\Pi^{-1}(i))=R_i(i)=n$; so condition (2) implies that there are no other fixed points, and every $j\neq i$ prefers his own predecessor to $i$. Intuitively, each member $i$ proposes to $\bold\Pi(i)$ and holds a proposal from $\bold\Pi^{-1}(i)$. Clearly, if a stable partition $\bold\Pi$ is such that it has cycles of length $2$ only, then $\bold\Pi$ is a stable matching. However, while for every even $n\ge 4$ there are instances without a stable matching, Tan \cite{Tan}, who introduced the notion of a cyclic partition $\bold\Pi$, proved that, for every instance of preferences, (1) there is at least one stable permutation; (2) all stable permutations have the same odd cycles (``parties''); (3) replacing each even cycle $(i_1,i_2,\dots,i_{2m})$ of a stable permutation by the transpositions $(i_1,i_2),\dots,(i_{2m-1},i_{2m})$, or by the transpositions $(i_2,i_3),\dots, (i_{2m}, i_1)$ we get another stable, {\it reduced\/}, permutation; (4) thus a stable matching exists iff there are no odd cycles. Suppose that the random problem instance, call it $I_n$, is chosen uniformly at random among all $[(n-1)!]^n$ instances. We showed \cite{Pit2} that the expected number of stable matchings is $e^{1/2}$ in the limit, implying that the number of stable matchings, if any exist, is bounded in probability.
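The two conditions in \eqref{Tan1,2} are easy to test mechanically. The sketch below (ours, purely illustrative; all names are arbitrary) checks stability of a fixed-point-free permutation given the ranking lists, with condition (2) taken in the equivalent ``no blocking pair'' form used in Section \ref{Intforms}: a pair blocks when both members strictly prefer each other to their current predecessors. For $n=3$ with cyclic preferences the $3$-cycle $1\to 2\to 3\to 1$ is stable, while its reversal is not.

```python
def is_stable(succ, rank):
    # succ[i]: successor of i under Pi; rank[i][j]: position of j in i's list
    # (smaller = better). Assumes Pi is fixed-point free.
    pred = {succ[i]: i for i in succ}
    # condition (1): nobody strictly prefers his predecessor to his successor
    for i in succ:
        if rank[i][succ[i]] > rank[i][pred[i]]:
            return False
    # condition (2) in blocking-pair form: (i, j) blocks when BOTH strictly
    # prefer each other to their predecessors; a pair related by Pi never
    # blocks, since then rank[j][i] equals rank[j][pred[j]]
    people = sorted(succ)
    for i in people:
        for j in people:
            if i < j and rank[i][j] < rank[i][pred[i]] \
                     and rank[j][i] < rank[j][pred[j]]:
                return False
    return True

# n = 3, cyclic preferences: 1 prefers 2, 2 prefers 3, 3 prefers 1
rank = {1: {2: 1, 3: 2}, 2: {3: 1, 1: 2}, 3: {1: 1, 2: 2}}
forward = {1: 2, 2: 3, 3: 1}    # the 3-cycle (1 2 3): stable
backward = {1: 3, 2: 1, 3: 2}   # its reversal: violates condition (1)
```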
With Robert Irving \cite{IrvPit} we proved that the probability that a stable matching exists is at most $e^{1/2}/2<1$ in the limit. In pleasing contrast, the stable partitions do not have a fixed point (odd party of size $1$) with surprisingly high probability $\ge 1- O\bigl(e^{-\sqrt{n}}\bigr)$. So while a stable matching may not exist, stable partitions (which always exist) with high probability have no ``pariahs'': every member holds a proposal from another member, while his own proposal is accepted by possibly a different member. Our task is to analyze the asymptotic behavior of a series of leading parameters of the family of stable (reduced) partitions for $I_n$, and we focus on those that have no fixed point. Among those parameters are $\mathcal S_n$ and $\mathcal O_n$, the total number of stable (reduced) partitions and the total number of ``parties'', i.e. odd cycles (common to all those partitions). We will prove, for instance, that \begin{align} \textup{E\/}\bigl[\mathcal S_n\bigr]&=(1+o(1)) \frac{\Gamma(1/4)}{\sqrt{\pi e}\,2^{1/4}}\,n^{1/4},\label{1}\\ \textup{E\/}\bigl[\mathcal O_n\bigr]&\le(1+o(1)) \frac{\Gamma(1/4)}{4\sqrt{\pi e}\,2^{1/4}}\,n^{1/4}\log n. \label{2} \end{align} The fact that $\textup{E\/}\bigl[\mathcal S_n\bigr]\to\infty$, but at a moderate rate, can be charitably viewed as supporting the claim that, in probability, $\mathcal S_n\to\infty$; if this is the case then with high probability $I_n$ has no stable matching. Numerical experiments conducted by Stephan Mertens \cite{Mer} made him conjecture that the solvability probability goes to zero, as fast as $n^{-1/4}$. For a rigorous transition from $\textup{E\/}\bigl[\mathcal S_n\bigr]\to\infty$ to $\mathcal S_n\to\infty$, one would normally want to show that $\text{Var}(\mathcal S_n)\ll \textup{E\/}^2\bigl[\mathcal S_n\bigr]$.
It turns out, however, that $\text{Var}(\mathcal S_n)$ is of order $n^{3/4}$, thus exceeding $\textup{E\/}^2\bigl[\mathcal S_n\bigr]$ by the factor $n^{1/4}$, which invalidates this naive two-moment approach. Can the approach be gainfully modified by narrowing the pool of stable partitions? A key tool for estimating $\text{Var}(\mathcal S_n)$ is an asymptotic formula for the probability that each of two generic (reduced) partitions $\bold\Pi_1$ and $\bold\Pi_2$ (with the same odd parties, of course) is stable. The symmetric difference of the set of matched pairs in $\bold\Pi_1$ and the set of matched pairs in $\bold\Pi_2$ is the edge set of the disjoint even cycles of length $\ge 4$, whose edges are the matched pairs in $\bold\Pi_1$ interlacing the matched pairs in $\bold\Pi_2$. Each such cycle can be viewed as an even {\it rotation\/} in both partitions, so that the pair $(\bold\Pi_1,\bold\Pi_2)$ gives rise to $2^{\mu}$ stable partitions, with $\mu$ being the total number of those even cycles. Define a random graph $G_n=(\mathcal V_n, \mathcal E_n)$, where $\mathcal V_n$ is the set of all stable partitions $\bold\Pi$, and $\mathcal E_n$ is the set of pairs $(\bold\Pi_1,\bold\Pi_2)$, each giving rise to a {\it single\/} even cycle. By \eqref{1}, $\textup{E\/}[\mathcal V_n]=\textup{E\/}[\mathcal S_n]$ is of order $n^{1/4}$. It turns out that $\textup{E\/}[\mathcal E_n]$ is of order $n^{1/4}$ as well. What, if anything, does this fact tell us about the likely range of $\mathcal S_n$? There are two positive results that stem from \eqref{1}--\eqref{2}. Tan \cite{Tan}, \cite{Tan1} defined a maximum stable matching for an instance $I$ as a maximum-size matching $M=M(I)$ which is {\it internally\/} stable, i.e. not blocked by any two members from the agent (vertex) set of $M$. He proved that $|M(I)|=(n-\mathcal O(I))/2$.
It follows from \eqref{2} that \[ \textup{ P\/}\Biggl(|M(I_n)|\ge \frac{n-\omega(n) n^{1/4}\log n }{2} \Biggr)\ge 1-O(\omega(n)^{-1}) \to 1, \] for $\omega(n)\to \infty$, however slowly. In short, the number of members not in the maximum stable matching is $O_p(n^{1/4}\log n)$. Abraham, Bir\'o and Manlove \cite{AbrBirMan} introduced the alternative notion of a ``maximally stable'' matching, i.e. a matching $M$ on $[n]$ that is blocked by the smallest number of pairs, call it $B(I)$, of agents unmatched in $M$. They obtained a two-sided bound for $B(I)$ in terms of preference-list lengths and the odd cycles. A cruder version of the ABM upper bound states that $B(I)\le d(I) \mathcal O(I)$, where $d(I)$ is the length of the longest preference list. Extending our approach, we will show that for the random instance $I_n$, with probability $\ge 1-\exp(-c(\log n)^{2(1+\delta)})$, every member's predecessor is among their best $n^{1/2}(\log n)^{1+\delta}$ choices. So we can apply the last bound with $d(I_n)=n^{1/2}(\log n)^{1+\delta}$. Therefore the bound \eqref{2} together with the ABM bound implies that with high probability there exists a complete matching which is blocked by $n^{3/4+o(1)}$ pairs, a strikingly small number relative to the total number ($\Theta(n^2)$) of potential blocking pairs. We will also show that with high probability the sum of the ranks of predecessors in every stable partition is asymptotic to $n^{3/2}$; consequently the worst predecessor's rank in every stable partition is $n^{1/2}(1-o(1))$ at least, nearly matching $n^{1/2}(\log n)^{1+\delta}$, the likely upper bound. Here is an application. Suppose we shrink every member's preference list to their own best $d$ choices. If the constrained instance has no fixed point then neither does the full-lists instance. Consider an instance $I_{n,d}$ of the stable partition problem chosen uniformly at random among all instances with some $d$ acceptable choices for every member.
Randomly, and independently, ordering the remaining $n-1-d$ members for every member, we will get the uniformly random (full-lists) instance $I_n$. It follows then that if $d\le (1-\varepsilon)n^{1/2}$ ($d\ge n^{1/2}(\log n)^{1+\delta}$ resp.) then with high probability stable partitions for $I_{n,d}$ have (do not have resp.) a fixed point. Finally, we use the analysis of $\text{Var}(\mathcal S_n)$ to show that the expected fraction of members with multiple stable predecessors is of order $n^{-1/4}$. \section{Integral formulas for stability probabilities}\label{Intforms} At the core of our proofs are two integral formulas, one for the probability that a generic cyclic partition is stable, another for the probability that two generic cyclic partitions are stable. \begin{Lemma}\label{P(partstab)=} Let $\bold\Pi$ be a permutation of $[n]$ with even cycles of length $2$ only, and possibly a single fixed point $h^*$, i.e. $\bold\Pi(h^*)= h^*$. Let $\text{Odd}\,(\bold\Pi)$ be the set of all elements from the odd cycles of $\bold\Pi$ with the exception of the fixed point if it is present. Let $D(\bold\Pi)$ be the set of unordered pairs $(i\neq j)$ such that $i=\bold\Pi(j)$ or, not exclusively, $j=\bold\Pi(i)$. Then \begin{equation}\label{p(Pstable)=} \begin{aligned} &\textup{ P\/}(\bold\Pi):=\textup{ P\/}\bigl(\bold\Pi\text{ is stable}\bigr)=\idotsint\limits_{\bold x\in [0,1]^{n-1}}F(\bold x)\,d\bold x,\\ F(\bold x)&:=\prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\!\!\!x_h\,\,\cdot\prod_{(i,j)\notin D(\bold\Pi)}\!\!(1-x_ix_j) \cdot\prod_{k\neq h^*}(1-x_k); \end{aligned} \end{equation} if there is no fixed point $h^*$, then the third product is replaced by $1$, and $[0,1]^{n-1}$ by $[0,1]^n$. \end{Lemma} If $\bold\Pi$ is a matching, we get (\cite{Pit2}) \begin{equation}\label{P(Pstable)=} \textup{ P\/}\bigl(\bold\Pi\text{ is stable}\bigr)=\idotsint\limits_{\bold x\in [0,1]^{n}}\prod_{(i,j)\notin D(\bold\Pi)}\!\!(1-x_ix_j)\,d\bold x.
\end{equation} \begin{proof} To generate the random instance $I_n$, introduce an array of the independent random variables $X_{i,j}$ ($1\le i\neq j\le n$), each distributed uniformly on $[0,1]$. Assume that each member $i\in [n]$ ranks the members $j\neq i$ in increasing order of the variables $X_{i,j}$. Such an ordering is uniform for every $i$, and the orderings by different members are independent. Then \begin{multline}\label{P(P)|x)} \textup{ P\/}\bigl(\bold\Pi\text{ is stable}\,\bold |\,X_{i,\bold\Pi^{-1}(i)}=x_i,\,i\in [n]\bigr)\\ = \!\!\!\prod_{h\in \text{Odd}(\bold\Pi)}\!\!\!\!\!x_h\,\cdot\prod_{(i,j)\notin D(\bold\Pi)}\!\!(1-x_ix_j)\prod_{k\neq h^*}(1-x_k). \end{multline} Indeed, by \eqref{Tan1,2}, $\bold\Pi$ is stable iff \begin{align*} &(1) \text{ for every }h\in \text{Odd}(\bold\Pi):\,\,X_{h, \bold\Pi(h)}<X_{h,\bold\Pi^{-1}(h)},\\ &(2)\text{ for every }(i,j)\notin D(\bold\Pi),\, i,j\neq h^*:\,\,X_{i,j}<X_{i,\bold\Pi^{-1}(i)}\Rightarrow X_{j,i}>X_{j,\bold\Pi^{-1}(j)},\\ &(3)\text{ for every }i\neq h^*: X_{i,\bold\Pi^{-1}(i)}<X_{i,h^*}. \end{align*} And, conditioned on the event $\bigl\{X_{i,\bold\Pi^{-1}(i)}=x_i,\,i\in [n]\}$, the events above are independent, with (conditional) probabilities $x_h$, $1-x_ix_j$ and $1-x_k$ respectively. Using Fubini's theorem, we have \eqref{p(Pstable)=}. \end{proof} Like analogous formulas in \cite{Pit1}, \cite{Pit2} and \cite{IrvPit}, this is a non-bipartite counterpart of Knuth's formula for stable bipartite matchings, \cite{Knu}. His derivation was based on the inclusion-exclusion method, coupled with an ingenious observation that the resulting sum equals the multidimensional integral of a product-type integrand resembling our $F(\bold x)$. Of course, we could get a sum-type formula for $\textup{ P\/}\bigl(\bold\Pi\text{ is stable}\bigr)$ by expanding the product in \eqref{P(P)|x)} and integrating the resulting sum term-wise. Moving in the opposite direction, i.e.
starting with an inclusion-exclusion formula for $\textup{ P\/}\bigl(\bold\Pi\text{ is stable}\bigr)$, finding an integral-type representation of the generic summand, and discerning that the sum of the attendant integrands happens to be an expansion of the ``out-of-the-blue'' product in \eqref{P(P)|x)}, would have been very problematic. The identity \eqref{p(Pstable)=} is indispensable for asymptotic estimates, thanks to a simple, but powerful, bound \begin{equation}\label{simple1} \prod_{\{i,j\}\notin D(\bold\Pi)}\!\!(1-x_ix_j) \le \exp\Bigl(-\frac{s^2}{2}+4.5\Bigr),\quad s:=\sum_{i\in [n]}x_i. \end{equation} For instance, this bound and $\prod_k (1-x_k)\le e^{-s}$ will almost immediately yield that the stable partitions have no fixed point with probability $\ge 1- e^{-\Theta(n^{1/2})}$. We will prove a surprisingly simple, yet qualitatively sharp estimate: uniformly for fixed-point-free partitions $\bold\Pi$, \begin{equation}\label{P(Pstab)<simple} \textup{ P\/}(\bold\Pi\text{ is stable})=O\left(\frac{1}{(n+m-1)!!}\right),\quad m:=|\text{Odd}(\bold\Pi)|. \end{equation} We note that Alcalde \cite{Alc} defined an {\it exchange stable\/} matching as a matching $M$ that, to quote from \cite{Man}, ``admits no {\it exchange-blocking pair\/}, which is a pair of members each of whom prefers the other's partner in $M$ to their own''. Cechl\'arov\'a and Manlove \cite{CecMan} proved that, in stark contrast with the classic stable roommates model, the problem of determining whether a given instance admits an exchange-stable matching is NP-complete. The interested reader may wish to check that the formula \eqref{P(Pstable)=} continues to hold for $\textup{ P\/}(\bold\Pi\text{ is exchange-stable})$. Consequently the expected number of exchange-stable matchings and the expected number of the classic stable matchings are exactly the same, implying that the former is also asymptotic to $e^{1/2}$.
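For a small self-contained illustration (ours, not part of the argument), the identity \eqref{P(Pstable)=} can be verified exhaustively at $n=4$. Labelling the members $0,\dots,3$ and fixing the matching $\{(0,1),(2,3)\}$, the integrand is $(1-x_0x_2)(1-x_0x_3)(1-x_1x_2)(1-x_1x_3)$; a term-by-term integration gives $233/648$, so the expected number of stable matchings equals $3\cdot 233/648=233/216\approx 1.079$, which direct enumeration of all $6^4$ preference profiles must reproduce exactly.

```python
from itertools import permutations, product

OTHERS = {i: [j for j in range(4) if j != i] for i in range(4)}
PREFS = {i: list(permutations(OTHERS[i])) for i in range(4)}  # 6 lists each
MATCHINGS = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]

def stable_count(profile):
    # profile[i] is i's preference list, best first; rank = position in it
    rank = [{j: profile[i].index(j) for j in OTHERS[i]} for i in range(4)]
    count = 0
    for m in MATCHINGS:
        partner = {}
        for i, j in m:
            partner[i], partner[j] = j, i
        # (i, j) blocks when both strictly prefer each other to their partners
        blocked = any(
            rank[i][j] < rank[i][partner[i]] and rank[j][i] < rank[j][partner[j]]
            for i in range(4) for j in range(i + 1, 4) if partner[i] != j
        )
        count += not blocked
    return count

total = sum(stable_count(p) for p in product(*(PREFS[i] for i in range(4))))
average = total / 6 ** 4   # expected number of stable matchings at n = 4
```

The enumeration is instantaneous (only $6^4=1296$ profiles), and the agreement with the exactly evaluated integral is a finite-$n$ check, not merely an asymptotic one.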
Let us call a (fixed-point free) partition $\bold\Pi$ exchange stable if no two members prefer each other's predecessors to their own predecessors under $\bold\Pi$. What about the partitions that are ``doubly-stable'', i.e. stable {\it and\/} exchange stable? It turns out that \begin{align*} &\textup{ P\/}(\bold\Pi):=\textup{ P\/}\bigl(\bold\Pi\text{ is doubly stable}\bigr)=\idotsint\limits_{\bold x\in [0,1]^{n}}F_2(\bold x)\,d\bold x,\\ &\qquad F_2(\bold x):=\prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\!\!\!x_h\,\,\cdot\prod_{(i,j)\notin D(\bold\Pi)}\!\!(1-x_ix_j)^2. \end{align*} The counterpart of \eqref{P(Pstab)<simple} is $\textup{ P\/}(\bold\Pi)=O\bigl(2^{-\frac{n+m}{2}}/(n+m-1)!!\bigr)$, implying that the expected number of the doubly stable partitions is of order $2^{-n/2}$, way down from $n^{1/4}$ for the stable partitions. Continuing, introduce $\mathcal R(\bold\Pi)$, the sum of the ranks of all predecessors in the preference lists of their successors in a partition $\bold\Pi$. Let $\textup{ P\/}_k(\bold\Pi):=\textup{ P\/}(\bold\Pi\text{ is stable and }\mathcal R(\bold\Pi)=k)$. \begin{Lemma}\label{Lem:P_k(P)=} Suppose $\bold\Pi$ is fixed-point free. Then, letting $m:=|\text{Odd}\,(\bold\Pi)|$, and $\bar x=1-x$, \begin{equation}\label{P_k(P)=} \begin{aligned} &\textup{ P\/}_k(\bold\Pi)=\idotsint\limits_{\bold x\in [0,1]^{n}}[z^{k-n-m}] F(\bold x,z)\,d\bold x,\\ F(\bold x,z)&:=\prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\!\!\!x_h\,\,\cdot\prod_{(i,j)\notin D(\bold\Pi)} \!\!\bigl(\bar x_i\bar x_j+zx_i\bar x_j+z\bar x_ix_j\bigr). \end{aligned} \end{equation} \end{Lemma} \begin{proof} First of all, using $\chi(A)$ to denote the indicator of an event $A$, we have \[ \textup{ P\/}_k(\bold\Pi)=[z^k]\textup{E\/}\Bigl[z^{\mathcal R(\bold\Pi)}\chi(\bold\Pi\text{ is stable})\Bigr].
\] Here \begin{align*} \chi(\bold\Pi\text{ is stable})&=\prod_{(i,j)\notin D(\bold\Pi)}\chi\bigl(X_{i,j}>X_{i,\Pi^{-1}(i)}\text{ or } X_{j,i}>X_{j,\bold\Pi^{-1}(j)}\bigr)\\ &\times \prod_{h\in \text{Odd}\,(\bold\Pi)}\chi\bigl(X_{h,\bold\Pi(h)}< X_{h,\bold\Pi^{-1}(h)}\bigr). \end{align*} Furthermore \begin{align*} &\mathcal R(\bold\Pi)=\sum_{(i,j)\notin D(\bold\Pi)}\bigl[\chi(X_{i,j}< X_{i,\bold\Pi^{-1}(i)}) +\chi(X_{j,i}<X_{j,\bold\Pi^{-1}(j)})\bigr]\\ &\qquad+\sum_{i\in [n]} 1 +\sum_{h\in \text{Odd}\,(\bold\Pi)}\!\!\chi(X_{h,\bold\Pi(h)}<X_{h,\bold\Pi^{-1}(h)}), \end{align*} where the second sum accounts for the pairs $(i,\bold\Pi^{-1}(i))$, $i\in [n]$. So \begin{align*} &\textup{E\/}\Bigl[\left.z^{\mathcal R(\bold\Pi)}\chi(\bold\Pi\text{ is stable})\,\right|\,X_{i,\bold\Pi^{-1}(i)}=x_i,\,i\in [n] \Bigr]\\ &=z^{n+|\text{Odd}\,(\bold\Pi)|}\!\!\prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\!x_h\\ &\times\!\!\!\prod_{(i,j)\notin D(\bold\Pi)}\!\! \textup{E\/}\Bigl[z^{\chi(X_{i,j}< x_i) +\chi(X_{j,i}<x_j)} \chi(X_{i,j}>x_i\text{ or }X_{j,i}>x_j)\Bigr]\\ &=z^{n+m}\!\!\prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\!x_h\cdot \!\!\!\prod_{(i,j)\notin D(\bold\Pi)}\!\! \bigl(\bar x_i\bar x_j +z x_i\bar x_j+z\bar x_ix_j\bigr). \end{align*} So \[ \textup{E\/}\Bigl[z^{\mathcal R(\bold\Pi)}\chi(\bold\Pi\text{ is stable})\Bigr]=z^{n+m} \idotsint\limits_{\bold x\in [0,1]^{n}} F(\bold x,z)\,d\bold x, \] which proves \eqref{P_k(P)=}. \end{proof} Finally, suppose we have a pair of distinct cyclic partitions, $\bold\Pi_1$ and $\bold\Pi_2$. Let $\textup{ P\/}(\bold\Pi_1,\bold\Pi_2)$ denote the probability that both $\bold\Pi_1$ and $\bold\Pi_2$ are stable. We assume the two partitions have the same odd cycles, since otherwise the probability is zero. Suppose also there is no fixed point. Let $\text{Odd}_{1,2}$ stand for the vertex set of the family of odd cycles, common to both partitions; so $\bold\Pi_1(h)=\bold\Pi_2(h)$ for all $h\in \text{Odd}_{1,2}$. 
The cardinality $|\text{Odd}_{1,2}|$ is even, and $\bold\Pi_1$ and $\bold\Pi_2$ induce a pair of perfect matchings $(M_1,M_2)$ on $\text{Even}_{1,2}:=[n]\setminus\text{Odd}_{1,2}$. Together, $M_1$ and $M_2$ determine a graph $G(M_1,M_2) =\Bigl(\text{Even}_{1,2}, E\Bigr)$, with the edge set $E$ formed by the pairs $(i,j)\in M_1\cup M_2$. Each component of $G(M_1,M_2)$ is either an edge $e\in M_1\cap M_2$, or a circuit of even length at least $4$, in which the edges from $M_1$ and $M_2$ alternate. The edge set for all these (alternating) circuits is the symmetric difference $M_1\Delta M_2$. \begin{Lemma}\label{p(P1,P2stab)=} Let $\textup{ P\/}(\bold\Pi_1,\bold\Pi_2)$ denote the probability that both $\bold\Pi_1$ and $\bold\Pi_2$ are stable. For $r=1,2$, let $D_r$ be the set of unordered pairs $(i\neq j)$ such that $i=\bold\Pi_r(j)$ or, not exclusively, $j= \bold\Pi_r(i)$. Then \begin{align*} &\textup{ P\/}(\bold\Pi_1,\bold\Pi_2)=\idotsint\limits_{\bold x,\bold y\in [0,1]^n} F(\bold x,\bold y)\,d\bold xd\bold y,\\ &F(\bold x,\bold y)=\prod_{h\in \text{Odd}_{1,2}}\!\!\!\!\!x_h\,\,\cdot \prod_{(i,j)\in D_1^c\cup D_2^c}\!\!\! [1-x_ix_j-y_iy_j+(x_i\wedge y_i)(x_j\wedge y_j)]; \end{align*} here \[ d\bold x=\prod_{i\in [n]} dx_i,\quad d\bold y=\prod_{i: \bold\Pi_1(i)\neq \bold\Pi_2(i)} dy_i, \] and for every circuit $\{i_1,\dots,i_{\ell}\}$ of $G(M_1,M_2)$: \begin{align*} &\text{either } \,\,x_{i_1}>y_{i_1},\,x_{i_2}<y_{i_2},\dots, x_{i_{\ell}}<y_{i_{\ell}},\\ &\text{or}\qquad\,\, x_{i_1}<y_{i_1},\,x_{i_2}>y_{i_2},\dots, x_{i_{\ell}}>y_{i_{\ell}}. \end{align*} \end{Lemma} \noindent We omit the proof since it combines the elements of the proof for $\textup{ P\/}(\bold\Pi\text{ is}$ $\text{stable})$ and of the formula for $\textup{ P\/}(\bold\Pi_1,\bold\Pi_2)$ in the case when $\bold\Pi_1$ and $\bold\Pi_2$ are matchings, given in \cite{Pit2}. A counterpart of the bound \eqref{simple1} is \begin{equation}\label{simple2} \begin{aligned} \prod_{(i,j)\in D_1^c\cup D_2^c}\!\!\!
&[1-x_ix_j-y_iy_j+(x_i\wedge y_i)(x_j\wedge y_j)]\\ &\le e^{2^8}\exp\Bigl(-\frac{s_1^2}{2}-\frac{s_2^2}{2}+\frac{s_{1,2}^2}{2}\Bigr); \end{aligned} \end{equation} here $s_1=\sum_i x_i$, $s_2=\sum_iy_i$, $s_{1,2}=\sum_i (x_i\wedge y_i)$ and $i$ runs over $[n]$. Never mind the enormity of $e^{2^8}$; like \eqref{simple1}, the bound \eqref{simple2} is both simple and instrumental in identifying a relatively small, eminently tractable, part of the integration domain which is ``in charge'' of the asymptotic behavior of $\textup{ P\/}(\bold\Pi_1,\bold\Pi_2)$. {\bf Note.\/} The reader interested in our prior work on the stable roommates problem (\cite{Pit1}, \cite{Pit2} and \cite{IrvPit}) will not find the inequalities \eqref{simple1} and \eqref{simple2} there. Working on this project, we detected a technical estimation glitch (see the next Section for details) in \cite{Pit1}, equally consequential for the analysis in \cite{Pit2} and \cite{IrvPit}. Luckily the new bounds \eqref{simple1}-\eqref{simple2} allow us to repair this error and, as an unexpected bonus, to simplify the arguments as well. The analysis in this paper can be viewed, in part, as a close template for the correction of that embarrassing oversight. We emphasize that, fortunately, this correction leaves the ultimate asymptotic results in those references intact. \begin{proof} As in the proof of Lemma \ref{P(partstab)=}, we use the array $\{X_{i,j}:1\le i\neq j\le n\}$. By the definition of stability, we have \[ \{\bold\Pi_1,\,\bold \Pi_2\text{ are both stable}\}=\bigcap_{h\in \text{Odd}(\bold\Pi_{1,2})}\!\!\!\!\!A_h\bigcap\limits_{(i,j)\in D^c_1\cup D^c_2} \bigl(B_{(i,j)}\bigr)^c. \] Here $A_h=\bigl\{X_{h,\bold\Pi_{1,2}(h)}<X_{h,\bold\Pi_{1,2}^{-1}(h)}\bigr\}$.
Furthermore: {\bf (1)\/} if $(i,j)\in D^c_1\cap D^c_2$, then \begin{align*} B_{(i,j)}&=\Bigl\{X_{i,j}<X_{i,\bold\Pi_1^{-1}(i)}; \,X_{j,i}<X_{j,\bold\Pi_1^{-1}(j)}\}\\ &\qquad\cup\{X_{i,j}<X_{i,\bold\Pi_2^{-1}(i)}; \,X_{j,i}<X_{j,\bold\Pi_2^{-1}(j)}\Bigr\}; \end{align*} {\bf (2)\/} if $(i,j)\in D^c_1\cap D_2$, then necessarily $(i,j)\in M_1^c\cap M_2$, and, by stability of $\bold\Pi_1$, \[ B_{(i,j)}=\Bigl\{X_{i,\bold\Pi_2^{-1}(i)}<X_{i,\bold\Pi_1^{-1}(i)}; \,X_{j,\bold\Pi_2^{-1}(j)}<X_{j,\bold\Pi_1^{-1}(j)}\Bigr\}; \] {\bf (3)\/} if $(i,j)\in D_1\cap D^c_2$, then necessarily $(i,j)\in M_1\cap M_2^c$ and, by stability of $\bold\Pi_2$, \[ B_{(i,j)}=\Bigl\{X_{i,\bold\Pi_1^{-1}(i)}<X_{i,\bold\Pi_2^{-1}(i)}; \,X_{j,\bold\Pi_1^{-1}(j)}<X_{j,\bold\Pi_2^{-1}(j)}\Bigr\}. \] Conditioned on the values \[ X_{i,\bold\Pi_1^{-1}(i)}=x_i,\,\,(i\in [n]),\quad X_{i,\bold\Pi_2^{-1}(i)}=y_i, \,\,(i\in [n]:\bold\Pi_1(i)\neq \bold\Pi_2(i)), \] the events $A_h$, $B_{(i,j)}$ are all independent. And, denoting the characteristic function of a set $U\subset [0,1]^{2n}$ by $\chi(U)$, we have $\textup{ P\/}(A_h \boldsymbol|\boldsymbol\cdot) = x_h$, \[ \textup{ P\/}\bigl((B_{(i,j)})^c \boldsymbol|\boldsymbol\cdot\bigr)=\left\{\begin{aligned} &1-x_ix_j-y_iy_j+(x_i\wedge y_i)(x_j\wedge y_j),&&\text{ Case }{\bf (1)\/},\\ &\chi(y_i\ge x_i\text{ or } y_j\ge x_j),&&\text{ Case }{\bf (2)\/},\\ &\chi(x_i\ge y_i\text{ or }x_j\ge y_j),&&\text{ Case }{\bf (3)\/}.\end{aligned}\right. \] Therefore \begin{align*} &\textup{ P\/}\bigl(\bold\Pi_1,\,\bold \Pi_2\text{ are both stable}\boldsymbol|\boldsymbol\cdot\bigr)\\ &=\prod_{h\in\text{Odd}_{1,2}}\!\!\!\!\!\!x_h\,\,\,\cdot\prod_{(i,j)\in D^c_1\cup D^c_2} \bigl[1-x_ix_j-y_iy_j+(x_i\wedge y_i)(x_j\wedge y_j)\bigr], \end{align*} provided that $\forall\,(i,j)\in M_1^c\cap M_2$, we have $y_i\ge x_i$ or $y_j\ge x_j$, and $\forall\,(i,j)\in M_1\cap M_2^c$, we have $x_i\ge y_i$ or $x_j\ge y_j$. (The conditional probability is zero otherwise.)
Since the edges from $M_1\Delta M_2$ form the disjoint alternating circuits of length $\ge 4$, the condition means that for every such circuit $\{i_1, i_2,\dots, i_{\ell}\}$ [with $(i_1,i_2)\in M_1, (i_2,i_3)\in M_2,\dots, (i_{\ell},i_1)\in M_2$, say], we have \begin{align*} &y_{i_1}\le x_{i_1}\qquad\text{or}\qquad y_{i_2}\le x_{i_2},\\ &y_{i_2}\ge x_{i_2}\qquad\text{or}\qquad y_{i_3}\ge x_{i_3},\\ &\qquad\qquad\qquad\boldsymbol:\\ &y_{i_{\ell-1}}\le x_{i_{\ell-1}}\,\text{ or}\qquad y_{i_\ell}\le x_{i_\ell},\\ &y_{i_{\ell}}\ge x_{i_{\ell}}\qquad\!\!\!\,\,\,\text{ or}\qquad y_{i_1}\ge x_{i_1}. \end{align*} We may, of course, assume that all these inequalities are strict. Thus there are only two options on the circuit: either $x_{i_1}>y_{i_1},\,x_{i_2}<y_{i_2},\,\dots, x_{i_{\ell}}<y_{i_{\ell}},$ or $x_{i_1}<y_{i_1},\,x_{i_2}>y_{i_2},\,\dots, x_{i_{\ell}}>y_{i_{\ell}}$. (In both options, the inequalities alternate.) Application of Fubini's theorem completes the proof. \end{proof} \section{Estimation tools}\label{Esttools} To estimate the integrals in Lemma \ref{p(Pstable)=} and Lemma \ref{p(P1,P2stab)=} for $n\to\infty$, we will need the following claim, see \cite{Pit0}, \cite{Pit2}: \begin{Lemma}\label{intervals1} Let $X_1,\dots, X_{\nu}$ be independent $[0,1]$-Uniforms. Let $S=\sum_{i\in [\nu]}X_i$ and $\bold V=\{V_i=X_i/S; i\in [\nu]\}$, so that $\sum_{i\in [\nu]}V_i=1$. Let $\bold L=\{L_i ;\, i\in [\nu]\}$ be the set of lengths of the $\nu$ consecutive subintervals of $[0,1]$ obtained by selecting, independently and uniformly at random, $\nu-1$ points in $[0,1]$.
Then (with $\chi(A)$ standing for the indicator of an event $A$) the joint density $f_{S,\bold V}(s,\bold v)$, ($\bold v=(v_1,\dots,v_{\nu-1})$), of $(S,\bold V)$ is given by \begin{equation}\label{joint<} \begin{aligned} f_{S,\bold V}(s,\bold v)&=s^{\nu-1}\chi\bigl(\max_{i\in [\nu]} v_i\le s^{-1}\bigr) \chi(v_1+\cdots+v_{\nu-1}\le 1)\\ &\le \frac{s^{\nu-1}}{(\nu-1)!} f_{\bold L}(\bold v),\quad v_{\nu}:=1-\sum_{i=1}^{\nu-1}v_i; \end{aligned} \end{equation} here $f_{\bold L}(\bold v)=(\nu-1)!\,\chi(v_1+\cdots+v_{\nu-1}\le 1)$ is the density of $(L_1,\dots,L_{\nu-1})$. \end{Lemma} We will also use the classic identities, Andrews, Askey and Roy \cite{AndAskRoy}, Section 1.8: \begin{equation}\label{int,prod} \begin{aligned} &\overbrace {\idotsint}^{\nu}_{\bold x\ge \bold 0 \atop x_1+\cdots+x_{\nu}\le 1}\prod_{i\in [\nu]} x_i^{\alpha_i}\,\,d\bold x=\frac{\prod_{i\in [\nu]}\alpha_i!}{(\nu+\alpha)!},\quad \alpha:=\sum_{i\in [\nu]}\alpha_i,\\ &\overbrace {\idotsint}^{\nu-1}_{\bold x\ge\bold 0\atop x_1+\cdots+x_{\nu}=1}\prod_{i\in [\nu]} x_i^{\alpha_i}\,\,dx_1\cdots dx_{\nu-1} =\frac{\prod_{i\in [\nu]}\alpha_i!}{(\nu-1+\alpha)!}. \end{aligned} \end{equation} The identity/bound \eqref{joint<} is useful since the random vector $\bold L$ has been well studied. It is known, for instance, that \begin{equation}\label{L^{(nu)}=frac} \bold L\overset{\mathcal D}\equiv\left\{\frac{w_i}{\sum_{j\in [\nu]}w_j}\right\}_{i\in [\nu]}, \end{equation} where $w_j$ are independent, exponentially distributed, with the same parameter, $1$ say. Here is this property at work. \begin{Lemma}\label{sumsofLs} (1) Let $s\ge 2$, and suppose that $\varepsilon_{\nu}\to 0$, $\varepsilon_{\nu}\gg \nu^{-\tfrac{1}{s+1}}$.
Then \begin{equation}\label{sumLj^s} \textup{ P\/}\Biggl(\Bigl|\frac{\nu^{s-1}}{s!}\sum_{j\in [\nu]}L_j^s-1\Bigr|\ge\varepsilon_{\nu}\Biggr)= O\Bigl(\exp\bigl(-c\,\varepsilon_{\nu}\nu^{\tfrac{1}{s+1}}\bigr)\Bigr). \end{equation} (2) For $\nu$ even, \begin{equation}\label{sumsofprods} \textup{ P\/}\Biggl(\Bigl|2\nu\sum_{j\in [\nu/2]}L_j L_{j+\nu/2}-1\Bigr|\ge\varepsilon_{\nu}\Biggr)= O\Bigl(\exp\bigl(-c\,\varepsilon_{\nu}\nu^{\tfrac{1}{3}}\bigr)\Bigr). \end{equation} \end{Lemma} \begin{proof} Observe that, for $W$ exponentially distributed with parameter $1$, $\textup{E\/}[W]=1$ and $\textup{E\/}[W^s]=s!$. Choose \[ a=\left(1+\frac{\varepsilon_{\nu}}{3}\right)\!s!,\quad b=1-\frac{\varepsilon_{\nu}}{3s}, \] so that $a/b^s<(1+\varepsilon_{\nu})s!$, for $\nu$ sufficiently large. Then, denoting $\mathcal W^{(\ell)}=\sum_j W_j^{\ell}$, \begin{align*} &\textup{ P\/}\Bigl(\nu^{s-1}\sum_{j\in [\nu]}L_j^s\ge (1+\varepsilon_{\nu})s!\Bigr)= \! \textup{ P\/}\!\left(\frac{\mathcal W^{(s)}}{\bigl(\mathcal W^{(1)}\bigr)^s}\ge \frac{(1+\varepsilon_{\nu})s!}{\nu}\right)\\ &\le \textup{ P\/}\Bigl(\mathcal W^{(s)}\ge a\nu\text{ or }\mathcal W^{(1)}<b\nu\Bigr) \le \!\textup{ P\/}\Bigl(\!\mathcal W^{(s)}\ge a\nu\Bigr) +\textup{ P\/}\Bigl(\mathcal W^{(1)}<b\nu\Bigr). \end{align*} Since $\textup{E\/}\bigl[e^{-zW}\bigr]<\infty$ for every $z\ge 0$, the standard application of Chernoff's method yields \begin{equation}\label{denom} \textup{ P\/}\bigl(\mathcal W^{(1)}<b\nu\bigr) \le \exp(-\nu c(b)),\quad c(b)=b-1-\log b=\Theta\bigl(\varepsilon_{\nu}^2\bigr). \end{equation} Bounding $\textup{ P\/}\Bigl(\!\mathcal W^{(s)}\ge a\nu\Bigr)$ is more problematic since $\textup{E\/}\bigl[e^{zW^s}\bigr]=\infty$ for $z>0$. Truncation to the rescue! Introduce $V=\min\{W,\nu^{\alpha}\}$, $(\alpha<1)$; then \begin{equation}\label{P(WneqV)<} \textup{ P\/}(W_j\not\equiv V_j,\, j\in [\nu])\le \nu\textup{ P\/}(W\ge \nu^{\alpha})=\nu e^{-\nu^{\alpha}}= e^{-\Theta(\nu^{\alpha})}.
\end{equation} Further \begin{align*} \textup{E\/}\bigl[e^{\nu^{-s\alpha}V^s}\bigr]&=\int_0^{\nu^{\alpha}}e^{(\nu^{-\alpha} w)^s}\,e^{-w}\,dw +e^{1-\nu^{\alpha}}\\ &\le 1+ \nu^{-s\alpha}\int_0^{\infty}w^s e^{-w}\,dw+O\bigl(\nu^{-2s\alpha}\bigr)\\ &=1+\nu^{-s\alpha}s!+O\bigl(\nu^{-2s\alpha}\bigr). \end{align*} So \begin{align*} \textup{ P\/}\Biggl(\sum_{j\in [\nu]} V_j^s\ge a\nu\Biggr)&\le \frac{\left(1+\nu^{-s\alpha}s!+O\bigl(\nu^{-2s\alpha}\bigr)\right)^{\nu}} {\exp\bigl(a\nu^{1-s\alpha}\bigr)}\\ &=\exp\Bigl(-\nu^{1-s\alpha}(a-s!) +O\bigl(\nu^{1-2s\alpha}\bigr)\Bigr). \end{align*} Combining this bound with \eqref{P(WneqV)<}, we select the best $\alpha=1/(s+1)$ and obtain \begin{equation}\label{subexp} \textup{ P\/}\bigl(\mathcal W^{(s)}\ge a\nu\bigr)\le \exp\bigl(-\hat c\, \varepsilon_{\nu}\nu^{\tfrac{1}{s+1}}\bigr),\quad (\hat c>0). \end{equation} This bound combined with \eqref{denom} proves that \[ \textup{ P\/}\Bigl(\nu^{s-1}\sum_{j\in [\nu]} L_j^s\ge (1+\varepsilon_{\nu})s!\Bigr)=O\Bigl(\exp\bigl(-c\,\varepsilon_{\nu}\nu^{\tfrac{1}{s+1}}\bigr)\Bigr). \] Since $\textup{E\/}\bigl[e^{-zW^s}\bigr]<\infty$ for all $z>0$, there is no need for truncation, and we get \[ \textup{ P\/}\Bigl(\nu^{s-1}\sum_{j\in [\nu]} L_j^s\le (1-\varepsilon_{\nu})s!\Bigr) \le e^{-\Theta\bigl(\nu\varepsilon_{\nu}^2\bigr)}, \] and \eqref{sumLj^s} follows. The proof of \eqref{sumsofprods} is similar, and we omit it. \end{proof} {\bf Note.\/} In \cite{Pit1} we claimed that the probabilities in Lemma \ref{sumsofLs}, for $\varepsilon$ fixed, could be shown to be exponentially small, and used this claim also in \cite{Pit2} and \cite{IrvPit}, hoping to apply it again in this study. We have realized, though, that for the right tail of the sums' distributions we could get only a {\it sub\/}-exponential bound, see \eqref{subexp}. Fortunately, with the new inequalities \eqref{simple1}-\eqref{simple2} put to use, the sub-exponential bounds \eqref{sumLj^s} and \eqref{sumsofprods} are all we need.
The interested reader may see for themselves that the resulting proof provides a clear recipe for local changes in \cite{Pit1}, \cite{Pit2} and \cite{IrvPit}, which make the thorny issue of exponential bounds go away completely. In addition to the bounds \eqref{sumLj^s}, we will need \begin{equation}\label{P(L^+>)<} \textup{ P\/}\left(\max_{j\in [\nu]} L_j^{(\nu)}\ge \frac{1.01\log ^2\nu}{\nu}\right)\le e^{-\log^2\nu}, \end{equation} which directly follows from \[ \textup{ P\/}\left(\max_{j\in [\nu]} L_j^{(\nu)}\ge x\right)\le \nu(1-x)^{\nu-1}. \] \section{Estimates of $\textup{E\/}[\mathcal S_n]$ and $\textup{E\/}[\mathcal O_n]$, ramifications} We need to identify a part of the cube $[0,1]^n$ that provides the dominant contribution to the integral in \eqref{p(Pstable)=}. This will allow us to estimate, sharply, the expected total length of the odd cycles in the random instance $I_n$. Many of the intermediate estimates can be traced back to \cite{Pit1}, \cite{Pit2} and \cite{IrvPit}. We begin with a pair of new, instrumental bounds for the products in the integrands expressing $\textup{ P\/}(\bold\Pi):=\textup{ P\/}(\bold \Pi\text{ is stable})$ and $\textup{ P\/}(\bold\Pi_1,\bold\Pi_2):=\textup{ P\/}(\bold\Pi_1\text{ and }\bold\Pi_2\text{ are both stable})$. In the statement below and elsewhere we will write $A_n\le_b B_n$ as a shorthand for ``$A_n=O(B_n)$, uniformly over parameters that determine $A_n$, $B_n$'', when the expression for $B_n$ is uncomfortably bulky for an argument of the big O notation. We will also write $A_n\lesssim B_n$ if $\limsup A_n/B_n\le 1$. \begin{Lemma}\label{simple1,2} \begin{align*} &\prod_{\{i,j\}\notin D(\bold\Pi)}\!\!(1-x_ix_j) \le_b \exp\Bigl(-\frac{s^2}{2}\Bigr),\quad s:=\sum_{i\in [n]}x_i,\\ &\prod_{(i,j)\in D_1^c\cup D_2^c}\!\!\!
[1-x_ix_j-y_iy_j+(x_i\wedge y_i)(x_j\wedge y_j)]\\ &\qquad\qquad\le_b\exp\Bigl(-\frac{s_1^2}{2}-\frac{s_2^2}{2}+\frac{s_{1,2}^2}{2}\Bigr); \end{align*} here $s_1=\sum_i x_i$, $s_2=\sum_iy_i$, $s_{1,2}=\sum_i (x_i\wedge y_i)$ and $i$ runs through $[n]$. \end{Lemma} \begin{proof} {\bf (1)\/} Using $1-z\le e^{-z-z^2/2}$, we have \begin{equation}\label{start} \prod_{(i,j)\notin D(\bold\Pi)}(1-x_ix_j)\le\exp\Biggl(-\sum_{(i,j)\notin D(\bold\Pi)}\Bigl(x_ix_j+\frac{x_i^2x_j^2}{2}\Bigr)\Biggr). \end{equation} Here, using $2ab\le a^2+b^2$, \begin{align} \sum_{(i,j)\notin D(\bold\Pi)}\!\!\!x_ix_j&=\frac{s^2}{2}-\frac{1}{2}\sum_{i\in [n]}x_i^2-\sum_{i\in [n_1/2]}\!\!x_i x_{i+n_1/2}-\sum_{h\in \text{Odd}(\bold\Pi)}\!\!\!x_hx_{\bold\Pi(h)}\label{exactexp}\\ &\ge \frac{s^2}{2}-\frac{3}{2}\sum_{i\in [n]}x_i^2.\notag \end{align} Analogously, and using $\max_i x_i\le 1$, \begin{align} \sum_{(i,j)\notin D(\bold\Pi)}x_i^2x_j^2&\ge \frac{1}{2}\Biggl(\sum_{i\in [n]}x_i^2\Biggr)^2-\frac{3}{2}\sum_{i\in [n]}x_i^4\label{exactexp2}\\ &\ge \frac{1}{2}\Biggl(\sum_{i\in [n]}x_i^2\Biggr)^2-\frac{3}{2}\sum_{i\in [n]}x_i^2. \end{align} Therefore \begin{align} \sum_{(i,j)\notin D(\bold\Pi)}\left(x_ix_j+\frac{x_i^2x_j^2}{2}\right)&\ge \frac{s^2}{2}+ \frac{1}{4}\left(\sum_{i\in [n]}x_i^2\right)^2-\frac{9}{4}\sum_{i\in [n]}x_i^2\\ &\ge \frac{s^2}{2}-5.1, \end{align} so that \begin{equation*} \prod_{(i,j)\notin D(\bold\Pi)}(1-x_ix_j)\le\exp\left(-\frac{s^2}{2}+5.1\right). \end{equation*} {\bf (2)\/} Let $M_i$ be the perfect matching on $\text{Even}_{1,2}=[n]\setminus \text{Odd}_{1,2}$, induced by $\bold\Pi_i$. Then $M_1\cap M_2$ is the set of matched pairs common to $\bold\Pi_1$ and $\bold\Pi_2$, and $M_1\Delta M_2$ is the edge set of the even circuits, of length $4$ at least, formed (in alternating fashion) by the matched pairs in $M_1$ and $M_2$. So $D_1\cup D_2$ is the disjoint union of $M_1\cap M_2$, $M_1\Delta M_2$, and the set of pairs $(i,\bold\Pi_{1,2}(i))$, $i\in \text{Odd}_{1,2}$.
So, given $u_i$, $i\in [n]$, \begin{equation}\label{sumuiuj} \begin{aligned} &\sum_{(i\neq j)\in D^c_1\cap D^c_2}\!\!\!\!\!\!\!u_iu_j=\sum_{(i\neq j)}u_iu_j-\sum_{(i\neq j)\in D_1\cup D_2} \!\!\!\!\!\!\!u_iu_j\\ &=\sum_{(i\neq j)}u_iu_j\,\,-\sum_{(i,j)\in M_1\cap M_2}\!\!\!\!\!\!\!u_iu_j\,\,-\sum_{(i,j)\in M_1\Delta M_2} \!\!\!\!\!\!\!u_iu_j\,\, -\sum_{i\in \text{Odd}_{1,2}}\!\!\!\!\!u_iu_{\bold\Pi_{1,2}(i)}\\ &= \frac{1}{2}\Bigl(\sum_{i\in [n]} u_i\Bigr)^2-\frac{1}{2}\sum_{i\in [n]}u_i^2\,\,-\sum_{(i,j)\in M_1\cap M_2}\!\!\!\!\!\!\!u_iu_j\,\,-\!\sum_{(i,j)\in E_{1,2}}u_iu_j; \end{aligned} \end{equation} here $E_{1,2}$ is the edge set of the odd cycles and the even circuits, formed by $\bold\Pi_1$ and $\bold\Pi_2$. This exact formula certainly implies that \begin{equation*} \frac{1}{2}\Bigl(\sum_{i\in [n]} u_i\Bigr)^2-3\sum_{i\in [n]}u_i^2\,\,\,\,\le\sum_{(i\neq j)\in D^c_1\cap D^c_2}\!\!\!\!\!\!\!u_iu_j\,\,\,\le\,\,\,\, \frac{1}{2}\Bigl(\sum_{i\in [n]} u_i\Bigr)^2. \end{equation*} Therefore we bound \begin{align*} &\sum_{(i\neq j)\in D^c_1\cap D^c_2}\bigl[x_ix_j+y_iy_j-(x_i\wedge y_i)(x_j\wedge y_j)\bigr]\\ &\ge\sum_{(i\neq j)}\bigl[x_ix_j+y_iy_j-(x_i\wedge y_i)(x_j\wedge y_j)\bigr]-3\sum_{i\in [n]}(x_i^2+y_i^2) \\ &\ge\frac{s_1^2}{2}+\frac{s_2^2}{2}-\frac{s_{1,2}^2}{2}-3\sum_{i\in [n]}(x_i^2+y_i^2). \end{align*} Furthermore \begin{align*} &\bigl[x_ix_j+y_iy_j-(x_i\wedge y_i)(x_j\wedge y_j)\bigr]^2\ge \bigl[x_ix_j+y_iy_j-(x_ix_j\wedge y_iy_j)\bigr]^2\\ &\qquad\qquad\ge\left(\frac{x_ix_j+y_iy_j}{2}\right)^2\ge \frac{1}{8}(x_i^2x_j^2+y_i^2y_j^2). \end{align*} So \begin{align*} &\sum_{(i\neq j)\in D^c_1\cap D^c_2}\bigl[x_ix_j+y_iy_j-(x_i\wedge y_i)(x_j\wedge y_j)\bigr]^2\\ & \ge\frac{1}{8}\sum_{(i\neq j)\in D^c_1\cap D^c_2}(x_i^2x_j^2+y_i^2y_j^2)\\ &\ge \frac{1}{16}\Biggl(\sum_{i\in[n]}x_i^2\Biggr)^2+ \frac{1}{16}\Biggl(\sum_{i\in[n]}y_i^2\Biggr)^2-\sum_{i\in [n]} \bigl(x_i^4+y_i^4\bigr).
\end{align*} As \[ \sum_{i\in [n]}(x_i^4+y_i^4)\le \sum_{i\in [n]}(x_i^2+y_i^2), \] we obtain \begin{align*} &\prod_{(i\neq j)\in D^c_1\cap D^c_2}\!\!\!\!\!\!\! \bigl[1-x_ix_j-y_iy_j+(x_i\wedge y_i)(x_j\wedge y_j)\bigr]\\ &\le\exp\left(-\frac{s_1^2}{2}-\frac{s_2^2}{2}+\frac{s_{1,2}^2}{2}\right)\\ &\times\exp\Biggl[-\frac{1}{32}\Biggl(\sum_{i\in[n]}x_i^2\Biggr)^2-\frac{1}{32}\Biggl(\sum_{i\in[n]}y_i^2\Biggr)^2 +4\sum_{i\in [n]}(x_i^2+y_i^2)\Biggr]. \end{align*} It remains to observe that $-\frac{z^2}{32}+4z\le 128$. \end{proof} \subsection{Bounds for $\textup{ P\/}(\bold\Pi)$, probability of a fixed point, and the likely $|\text{Odd}\,(\bold\Pi)|$} Here are our first applications of Lemma \ref{simple1,2}. \begin{Lemma}\label{Odd(P)=m} Denoting $m=|\text{Odd}\,(\bold\Pi)|$, \[ \textup{ P\/}(\bold\Pi)\le_b\left\{\begin{aligned} &\frac{e^{-n^{1/2}}}{(n+m-2)!!},&&\bold\Pi\text{ has a fixed point},\\ &\frac{1}{(n+m-1)!!},&&\bold\Pi\text{ has no fixed point}.\end{aligned}\right. \] \end{Lemma} \begin{proof} For the second case, by \eqref{p(Pstable)=} and Lemma \ref{simple1,2}, \[ \textup{ P\/}(\bold\Pi)\le_b \text{Int}(m):=\overbrace {\idotsint}^{n}_{\bold x\in [0,1]^{n}}e^{-\tfrac{s^2}{2}}\!\!\!\!\prod_{h=n-m+1}^n\!\!\!\!\!x_h\, d\bold x. \] Disregarding the constraint $\max_i x_i\le 1$, and using \eqref{int,prod} we obtain \begin{equation}\label{Dis} \begin{aligned} \text{Int}(m)&\le \frac{1}{(n+m-1)!}\int_0^{\infty}e^{-\tfrac{s^2}{2}}s^{n+m-1}\,ds\\ &=\frac{(n+m-2)!!}{(n+m-1)!}=\frac{1}{(n+m-1)!!}. \end{aligned} \end{equation} If $\bold\Pi$ has a fixed point $h^*$, then using \[ \prod_{k\neq h^*}(1-x_k)\le e^{-s},\quad s:=\sum_{k\neq h^*}x_k, \] we obtain that \[ \textup{ P\/}(\bold\Pi)\le_b\frac{1}{(n+m-2)!}\int_0^{\infty}e^{-s-\frac{s^2}{2}}s^{n+m-2}\,ds.
\] A quick glance at the integrand shows that the dominant contribution to the integral comes from $s$ within, say, $\log n$ distance from the integrand's maximum point \[ s^*=(n+m-2)^{1/2}-\frac{1}{2}+O\bigl(n^{-1/2}\bigr), \] $(n+m-2)^{1/2}$ being the maximum point of $e^{-s^2/2} s^{n+m-2}$. So the above integral is of order \[ e^{-n^{1/2}}\int_0^{\infty}e^{-\frac{s^2}{2}}s^{n+m-2}\,ds=e^{-n^{1/2}}(n+m-3)!!, \] whence \[ \textup{ P\/}(\bold\Pi\text{ is stable})\le_b\frac{e^{-n^{1/2}}}{(n+m-2)!!}. \] \end{proof} Now the total number of permutations $\bold\Pi$ of $[n]$ with a fixed point and $|\text{Odd}(\bold\Pi)|=m$ is at most \[ n\binom{n-1}{m}m! (n-m-2)!!=\frac{n!}{(n-m-1)!!}. \] \begin{Corollary}\label{nofix} \[ \textup{ P\/}(\text{stable }\bold\Pi\text{'s have a fixed point})=O\bigl(n^2e^{-\sqrt{n}}\bigr)\to 0. \] \end{Corollary} \begin{proof} By Lemma \ref{Odd(P)=m} and the union bound, the probability in question is of order \begin{align*} &e^{-n^{1/2}}\sum_{m\ge 3}\frac{n!}{(n-m-1)!!\,(n+m-2)!!}\\ &\quad\le e^{-n^{1/2}}n \left.\frac{n!}{(n-m-1)!!\,(n+m-2)!!}\right|_{m=3}= O\bigl(n^2e^{-n^{1/2}}\bigr). \end{align*} \end{proof} \noindent Our original proof in \cite{Pit2} was considerably more involved, and reliant on the problematic existence of the exponential bounds, an issue we touched upon in the previous sections and will not belabor in the sequel. From now on we focus on stable partitions without a fixed point. Here is another low-hanging fruit. \begin{Corollary}\label{fruit} Denoting by $\text{Odd}\,(\bold\Pi)$ the set of members in the odd cycles of stable partitions, \[ \textup{ P\/}\bigl(|\text{Odd}\,(\bold\Pi)|\ge n^{1/2}\log n\bigr)\le_b \exp(-\log^2 n/3), \] i.e. with super-polynomially high probability (quite surely in the terminology of \cite{KnuMotPit}) the total length of all odd cycles is below $n^{1/2}\log n$. \end{Corollary} \begin{proof} Denote $m_n=\lceil n^{1/2}\log n\rceil$.
The total number of potential stable partitions with an even $|\text{Odd}\,(\bold\Pi)|=m\ge 4$ is at most \[ \binom{n}{m}m! (n-m-1)!!=\frac{n!}{(n-m)!!}. \] So, by Lemma \ref{Odd(P)=m}, Stirling's formula, and the inequality \[ (1+x)\log(1+x)+(1-x)\log(1-x)\ge x^2, \] the probability in question is of order \begin{align*} &\sum_{m=m_n}^n \frac{n!}{(n-m)!!\,(n+m-1)!!}\le_b\sum_{m=m_n}^n\frac{n^n}{(n-m)^{\frac{n-m}{2}} (n+m)^{\frac{n+m}{2}}}\\ &\le \sum_{m=m_n}^n\exp\Bigl(-\frac{m^2}{2n}\Bigr) \le_b n^{1/2}\!\!\!\!\!\!\!\!\!\int\limits_{x\ge m_n/n^{1/2}}\!\!\!\!\!\!\!e^{-\frac{x^2}{2}}\,dx\ll e^{-\log^2 n/3}. \end{align*} \end{proof} Focusing on the likely stable partitions, we may and will consider only the permutations $\bold\Pi$ without a fixed point and with $|\text{Odd}\,(\bold\Pi)|\le m_n$. \subsection{Sharp estimate of $\textup{ P\/}(\bold\Pi)$} In steps, we will chop off the peripheral parts of the integration cube $[0,1]^n$ until we get to a part narrow enough to allow us to approximate the integrand in the formula \eqref{p(Pstable)=} within a $1+o(1)$ factor, so that the cumulative error cost is of order $e^{-\Theta(\log^2 n)}$. {\bf Step 1.\/} For the first reduction, we set $s_n=n^{1/2}+3\log n$, and define \begin{equation}\label{P_1} \textup{ P\/}_1(\bold\Pi):= \overbrace {\idotsint}^{n}_{\bold x\in C_1} F(\bold x)\,d\bold x,\quad C_1:=\{\bold x\in [0,1]^n: s\le s_n\}. \end{equation} \begin{Lemma}\label{C1} \begin{equation}\label{P(Pi)-P_1(Pi)} \textup{ P\/}(\bold\Pi)-\textup{ P\/}_1(\bold\Pi)\le_b \frac{e^{-3\log^2n}}{(n+m-1)!!}. \end{equation} \end{Lemma} \begin{proof} By Lemma \ref{simple1,2}, \begin{equation}\label{P-P_1,1} \textup{ P\/}(\bold\Pi)-\textup{ P\/}_1(\bold\Pi)\le_b \frac{1}{(n+m-1)!}\int\limits_{s\ge s_n}\!\!\exp\left(-\frac{s^2}{2}\right) s^{n+m-1}\,ds.
\end{equation} The integrand, write it as $e^{h(s)}$, attains its maximum at $s_{n,m}=(n+m-1)^{1/2}$, and \begin{align*} e^{h(s_{n,m})}&=\exp\left(-\frac{n+m-1}{2}\right)(n+m-1)^{\frac{n+m-1}{2}}\\ &\le_b n^{1/2}(n+m-2)!! . \end{align*} Further \begin{align*} h(s_n)&=h(s_{n,m})+(1+o(1))\frac{h^{\prime\prime}(s_{n,m})}{2}(s_n-s_{n,m})^2\\ &\le h(s_{n,m})-4\log^2n,\\ h^\prime(s_n)&=-s_n+\frac{n+m-1}{s_n}\le -5\log n. \end{align*} Now, since $h(s)$ is concave, we have \[ \int_{s\ge s_n}e^{h(s)}\,ds \le e^{h(s_n)}\int_{s\ge s_n}\exp\bigl(h^\prime(s_n)(s-s_n)\bigr)\,ds= \frac{e^{h(s_n)}}{-h^\prime(s_n)}. \] Therefore \begin{align*} \textup{ P\/}(\bold\Pi)-\textup{ P\/}_1(\bold\Pi)&\le_b\frac{n^{1/2}e^{-4\log^2n}}{\log n}\frac{(n+m-2)!!}{(n+m-1)!} \le \frac{e^{-3\log^2n}}{(n+m-1)!!}. \end{align*} \end{proof} Next, motivated by the inequalities \eqref{exactexp} and \eqref{exactexp2}, we will derive sharp asymptotics, on progressively smaller $C_j\subset C_1$, for the leading sums\linebreak $\sum_{i\in [n_1]} x_i^2$, $\sum_{i\in [n_1/2]}\!x_i x_{i+n_1/2}$, $(n_1:=n-m)$, and obtain sufficiently strong upper bounds for the secondary sums $\sum_{h\in \text{Odd}\,(\bold\Pi)}x_h^2$, $\sum_{i\in [n]}x_i^4$ and $\sum_{i\in [n]}x_i^6$. We will end up with a rather sharp asymptotic formula for $\prod_{(i,j)}(1-x_ix_j)$ on the terminal dominant subset of $C_1$.\\ {\bf Step 2.\/} With $s:=\sum_{i\in [n]} x_i$, define $\bold u=\{u_i=x_i/s: i\in [n]\}$. Introduce $t_1(\bold u)=\max_{i\in [n]}u_i$. Define $ C_2=\Bigl\{\bold x\in C_1: t_1(\bold u)\le 1.01\frac{\log^2n}{n}\Bigr\}, $ and let $P_j(\bold\Pi)$ be the integral of $F(\bold x)$ over $C_j$. Introduce $L_1,\dots,L_{n}$, the lengths of the $n$ consecutive subintervals of $[0,1]$ obtained by choosing, at random, $n-1$ points in $[0,1]$.
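{\bf Note.\/} The spacings $L_1,\dots,L_n$ and the distributional identity \eqref{L^{(nu)}=frac} are easy to probe numerically. The following Python sketch is ours and purely illustrative (it is not part of the argument, and the helper names are invented): it samples the spacings both directly and via normalized exponentials, and checks that $\nu\sum_j L_j^2$ concentrates near $2$, the $s=2$ case of \eqref{sumLj^s}.

```python
import random

def spacings(nu, rng):
    # nu spacings of [0,1] induced by nu - 1 independent uniform points
    pts = sorted(rng.random() for _ in range(nu - 1))
    cuts = [0.0] + pts + [1.0]
    return [b - a for a, b in zip(cuts, cuts[1:])]

def spacings_via_exponentials(nu, rng):
    # the identity L = {w_i / sum_j w_j} with i.i.d. Exp(1) weights w_j
    w = [rng.expovariate(1.0) for _ in range(nu)]
    s = sum(w)
    return [x / s for x in w]

rng = random.Random(1)
nu, trials = 500, 400
m1 = sum(nu * sum(L * L for L in spacings(nu, rng)) for _ in range(trials)) / trials
m2 = sum(nu * sum(L * L for L in spacings_via_exponentials(nu, rng)) for _ in range(trials)) / trials
# Lemma (sumLj^s) with s = 2 predicts that both averages are close to s! = 2
print(m1, m2)
```

Both sampled averages come out close to $2$, and the two constructions are statistically indistinguishable, as \eqref{L^{(nu)}=frac} predicts.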
Applying Lemma \ref{intervals1}, the identity \eqref{int,prod} and Lemma \ref{sumsofLs} (1) with $\nu=n$, we have \begin{align*} &P_1(\bold\Pi)-P_2(\bold\Pi)\le\!\!\overbrace {\idotsint}^{n}_{\bold x\ge \bold 0}\! s^m e^{-\frac{s^2}{2}}\chi\!\left\{t_1(\bold u)\ge 1.01\frac{\log^2n}{n}\right\}\! \prod_{h\in\text{Odd}(\bold\Pi)}\!\!\!\!\! u_h \,\,d\bold x\notag\\ &\le \frac{\textup{E\/}\left[\chi\left\{\max_{i\in [n]} L_i\ge 1.01\frac{\log^2n}{n}\right\}\prod_{h=1}^m L_h\right]} {(n-1)!}\int_0^{\infty}\!\! e^{-\frac{s^2}{2}}s^{m+n-1}\,ds. \end{align*} By the union bound, the expected value is below \begin{multline}\label{union} n\textup{E\/}\left[\chi\left\{ L_n\ge 1.01\frac{\log^2n}{n}\right\}\prod_{h=1}^{m-1} L_h\right]\le n!\overbrace {\idotsint}^{n-1}_{z_1+\cdots+z_{n-1}\atop\le 1-1.01\frac{\log^2n}{n}}\prod_{h=1}^{m-1} z_h\,d\bold z\\ = n!\left(1-1.01\frac{\log^2n}{n}\right)^{n+m-2}\overbrace {\idotsint}^{n-1}_{z_1+\cdots+z_{n-1}\le 1}\prod_{h=1}^{m-1} z_h\,d\bold z\le \frac{n! e^{-1.01\log^2 n}}{(n+m-2)!}. \end{multline} \begin{Lemma}\label{P1-P2} \begin{equation}\label{P_1(Pi)-P_2(Pi)} P_1(\bold\Pi)-P_2(\bold\Pi)\le \frac{e^{-\log^2 n}}{(n+m-1)!!}. \end{equation} \end{Lemma} In addition, since $m\le m_n =\lceil n^{1/2}\log n\rceil$ and $s\le s_n=n^{1/2}+3\log n$, it follows from the constraint $t_1(\bold u)\le 1.01\log^2n/n$ defining $C_2$ that on $C_2$ \begin{equation}\label{sumx_h^2} \begin{aligned} &\sum_{h\in \text{Odd}\,(\bold\Pi)}x_h\le_b s\, n^{-1/2}\log^3 n,\quad \sum_{h\in \text{Odd}\,(\bold\Pi)}x_h^2 \le_b n^{-1/2}\log^5 n,\\ &\qquad\quad\sum_{i=1}^n x_i^{4}\le_b n^{-1}\log^8n, \quad \sum_{i=1}^n x_i^{6}\le_bn^{-2}\log^{12}n. \end{aligned} \end{equation} {\bf Step 3.\/} With $\xi:=\sum_{i\in [n_1]}x_i$, define $\bold v=\{v_i=x_i/\xi: i\in [n_1]\}$. Introduce $t_2(\bold v)=\sum_{i\in [n_1]}\!v_i^2$. Define \[ C_3=\left\{\bold x\in C_2: \Bigl|\frac{n_1}{2}t_2(\bold v)-1\Bigr|\le n^{-\sigma}\right\},\quad \sigma<1/3.
\] Introduce $\mathcal L_1,\dots, \mathcal L_{n_1}$, the lengths of the $n_1$ consecutive subintervals of $[0,1]$ in the partition of $[0,1]$ by the random $n_1-1$ points. Analogously to {\bf Step 2\/}, we have \begin{equation*} \begin{aligned} &P_2(\bold\Pi)-P_3(\bold\Pi)\le\!\!\overbrace {\idotsint}^{n}_{\bold x\ge \bold 0}\!e^{-\frac{s^2}{2}}\chi\!\left\{\!\Bigl|\frac{n_1}{2}t_2(\bold v)-1\Bigr|\ge n^{-\sigma}\!\right\}\! \prod_{h\in\text{Odd}(\bold\Pi)}\!\!\!\!\!\!\! u_h \,\,d\bold x\\ &\le \frac{\textup{ P\/}\left(\Bigl|\frac{n_1}{2}t_2(\bold{\mathcal L})-1\Bigr|\ge n^{-\sigma}\right)} {(n_1-1)!(2m-1)!}\iint\limits_{\eta,\,\xi\ge 0}e^{-\frac{(\xi+\eta)^2}{2}}\xi^{n_1-1}\eta^{2m-1}\,d\xi\, d\eta\\ &\le \frac{e^{-\Theta(n^{1/3-\sigma})}}{(n+m-1)!}\int_0^{\infty}e^{-\frac{s^2}{2}} s^{n+m-1}\,ds =\frac{e^{-\Theta(n^{1/3-\sigma})}}{(n+m-1)!!}. \end{aligned} \end{equation*} \begin{Lemma}\label{P2-P3} \begin{equation}\label{P_2(Pi)-P_3(Pi)} P_2(\bold\Pi)-P_3(\bold\Pi)\le \frac{e^{-\Theta(n^{1/3-\sigma})}}{(n+m-1)!!}. \end{equation} \end{Lemma} \noindent Similarly, with $t_3(\bold v):=\sum_{i\in [n_1/2]}\,v_iv_{i+n_1/2}$ and \[ C_4:=\left\{\bold x\in C_3:\left|2n_1t_3(\bold v)-1\right|\le n^{-\sigma}\right\}, \] \begin{Lemma}\label{P3-P4} \begin{equation}\label{P_3(Pi)-P_4(Pi)} P_3(\bold\Pi)-P_4(\bold\Pi)\le\frac{e^{-\Theta(n^{1/3-\sigma})}}{(n+m-1)!!}. \end{equation} \end{Lemma} Combining the estimates \eqref{P(Pi)-P_1(Pi)}, \eqref{P_1(Pi)-P_2(Pi)}, \eqref{P_2(Pi)-P_3(Pi)} and \eqref{P_3(Pi)-P_4(Pi)}, we have \begin{Lemma}\label{summary} Let $\bold\Pi$ be such that $m=|\text{Odd}\,(\bold\Pi)|\le m_n$.
Then \begin{equation*} P(\bold\Pi)-P_4(\bold\Pi)\le \frac{e^{-\Theta(\log^2 n)}}{(n+m-1)!!}, \end{equation*} where $P_4(\bold\Pi)$ is the integral of $F(\bold x)$ over $C_4\subset [0,1]^n$ defined by the additional constraints: with $s:=\sum_{i\in [n]}x_i$, $s_n=n^{1/2}+3\log n$, $\xi=\sum_{i\in [n_1]}x_i$, \begin{align} &\qquad\qquad s\le s_n,\quad \max_{i\in [n]} x_i\le 1.02\, \frac{s\log^2 n}{n};\label{s<,max x_i<}\\ &\left|\frac{n_1\sum_{i\in [n_1]}x_i^2}{2\xi^2}-1\right|\le n^{-\sigma}, \, \,\left|\frac{2n_1\sum_{i\in [n_1/2]}x_i x_{i+n_1/2}}{\xi^2}-1\right|\le n^{-\sigma}.\label{sumu_iu_{i+n1/2}/(sumu_j)^2} \end{align} \end{Lemma} \noindent The constraint \eqref{sumu_iu_{i+n1/2}/(sumu_j)^2} involves only $\{x_i\}_{i\in [n_1]}$. Furthermore, given $s$, the constraint \eqref{s<,max x_i<} imposes the uniform upper bound for the {\it individual\/} components $x_i$, $i\in [n]$: no mixing of the components $x_i$, $i\in [n_1]$, and $x_h$, $h\in\text{Odd}\,(\bold\Pi)$, either. Also, this constraint {\it implies\/} that \begin{equation}\label{max<} \max_{i\in [n]}x_i\le 4 n^{-1/2}\log^2 n =o(1), \end{equation} meaning that the constraint $\max_i x_i\le 1$ is superfluous. Moreover, the inequality \eqref{max<} yields the {\it equality\/} \[ \prod_{(i,j)\notin D(\bold\Pi)}\!(1-x_i x_j)=\exp\Biggl(\!-\sum_{(i,j)\notin D(\bold\Pi)}\!\Bigl(x_ix_j+\frac{x_i^2x_j^2}{2} \Bigr)+O\Bigl(\sum_{i\in [n]} x_i^6\Bigr)\!\Biggr), \] that holds uniformly for $\bold x\in C_4$, with the remainder term $\ll n^{-1}$, see \eqref{sumx_h^2}. It is a matter of simple algebra to obtain from the constraints on $C_4$: \begin{Lemma}\label{asform} Uniformly for $m\le m_n$ and $\bold x\in C_4$, \begin{equation}\label{F=sharp} F(\bold x)=\exp\!\left(\!-\frac{s^2}{2}\left(\!1-\frac{3}{n}\!\right)-\frac{s^4}{n^2}+O(n^{-\sigma})\!\right)\! \prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\! x_h.
\end{equation} \end{Lemma} Thus, introducing $\eta=\sum_{h\in \text{Odd}\,(\bold\Pi)}x_h$, so that $s=\xi+\eta$, we see that, within a factor $1+O(n^{-\sigma})$, the integrand depends only on $(\xi,\eta)$ and $\prod_h x_h$. Observe also that, on $C_4$, \[ \max_{i\in [n_1]}\frac{x_i}{\xi}\sim\max_{i\in [n_1]}\frac{x_i}{s}\le 1.02\frac{\log^2 n}{n}\ll \frac{1}{\xi}. \] So denoting $\psi_n(s)=\tfrac{s^2}{2}\bigl(1-\tfrac{3}{n}\bigr)+\tfrac{s^4}{n^2}$, and applying Lemma \ref{intervals1}, \eqref{joint<}, we have: $P_4(\bold\Pi)$, the integral of $F(\bold x)$ over $C_4$, is given by \begin{equation}\label{P_3(P)approx} \begin{aligned} &P_4(\bold\Pi)=\bigl(1+O(n^{-\sigma})\bigr)\!\!\overbrace {\idotsint}^{n}_{\bold x\in C_4} e^{-\psi_n(\xi+\eta)}\prod_{h\in \text{Odd}\,(\bold\Pi)} x_h\,d\bold x\\ &=\bigl(1+O(n^{-\sigma})\bigr)\!\!\iint\limits_{\xi+\eta\le s_n}\!\!e^{-\psi_n(\eta+\xi)}\frac{\xi^{n_1-1}}{(n_1-1)!}\cdot \frac{\eta^{2m-1}}{(m-1)!}\,d\eta\,d\xi\\ &\cdot\textup{ P\/}\Biggl(\Bigl|\frac{n_1}{2}\!\sum_{i\in [n_1]}\mathcal L_i^2-1\Bigr|\le n^{-\sigma},\,\, \Bigl|2n_1\!\!\sum_{i\in [n_1/2]}\mathcal L_i\mathcal L_{i+n_1/2}-1\Bigr|\le n^{-\sigma}\Biggr). \end{aligned} \end{equation} From {\bf Step 3\/} we know that the probability factor is at least $1-e^{-\log ^2n}$. The double integral, denote it $ I_{n,m}$, is given by \[ I_{n,m}=\frac{(2m-1)!}{(m-1)!\,(n+m-1)!}\int_{s\le s_n} e^{-\psi_n(s)} s^{n+m-1}\,ds. \] The integrand attains its maximum at $\hat s =(n+m-1)^{1/2}-\Theta(n^{-1/2})$, so that $s_n-\hat s\ge 2\log n$, and it is easy to show that \[ \int\limits_{|s-\hat s|\ge \log n}\!\!\!\!\!\!e^{-\psi_n(s)}s^{n+m-1}\,ds\le e^{-\Theta(\log ^2n)} \!\!\!\!\!\!\!\int\limits_{|s-\hat s|\le\log n}\!\!\!\!\!\!\!e^{-\psi_n(s)}s^{n+m-1}\,ds. \] Besides, $s^4/n^2=1+O(m/n)$ for $|s-\hat s|\le \log n$.
Therefore \begin{align*} I_{n,m}&=e^{-1+O(m/n)}\frac{(2m-1)!}{(m-1)!\,(n+m-1)!}\int_{s\ge 0}e^{-\frac{s^2(1-3/n)}{2}}s^{n+m-1}\,ds\\ &=e^{-1+O(m/n)}\frac{(2m-1)!}{(m-1)!\,(n+m-1)!}\cdot\frac{(n+m-2)!!}{(1-3/n)^{(n+m)/2}}\\ &=e^{1/2+O(m/n)}\frac{(2m-1)!}{(m-1)!\,(n+m-1)!!}. \end{align*} Since $m/n=O(n^{-1/2}\log n)$, and $\sigma<1/3$ in \eqref{P_3(P)approx}, we have proved \begin{Lemma}\label{P_4(P)=exact} Uniformly for even $m\le m_n$ and $\bold\Pi$ with $|\text{Odd}\,(\bold\Pi)|=m$, \[ P_4(\bold\Pi)=\bigl(1+O(n^{-\sigma})\bigr)\frac{e^{1/2}}{(n+m-1)!!}. \] Consequently, by Lemma \ref{summary}, \begin{equation}\label{P(P)sim!} P(\bold\Pi)=\bigl(1+O(n^{-\sigma})\bigr)\frac{e^{1/2}}{(n+m-1)!!}. \end{equation} \end{Lemma} \noindent {\bf Note.\/} The formula \eqref{P(P)sim!} works for $m=0$ as well, meaning that \[ \textup{ P\/}(\text{matching }\bold\Pi\text{ is stable})=\bigl(1+O(n^{-\sigma})\bigr)\frac{e^{1/2}}{(n-1)!!}. \] So the expected number of stable {\it matchings\/} tends to $e^{1/2}$ as $n\to\infty$, \cite{Pit1}. \subsection{The expectations of the numbers of stable partitions and odd parties} \begin{Theorem}\label{E[On]} Let $\mathcal S_n$ and $\mathcal O_n$ denote the total number of stable partitions $\bold\Pi$ and the total number of odd cycles, respectively. Then \begin{align} \textup{E\/}\bigl[\mathcal S_n\bigr] &=\bigl(1+O(n^{-1/4})\bigr)\frac{\Gamma(1/4)}{\sqrt{\pi e}\, 2^{1/4}}\,n^{1/4},\label{ESn=}\\ \textup{E\/}\bigl[\mathcal O_n\bigr]&\lesssim \frac{\Gamma(1/4)}{4\sqrt{\pi e}\, 2^{1/4}}\,n^{1/4}\log n.\label{EOn<} \end{align} \end{Theorem} \begin{proof} For even $m$, let $f(m)$ denote the total number of permutations of $[m]$ having only odd cycles, each of length $3$ at least. For even $k$, let $f(m,k)$ denote the total number of permutations of $[m]$ having exactly $k$ odd cycles, each of length $3$ at least; so $f(m)=\sum_k f(m,k)$.
Then the total number of permutations of $[n]$ with $k$ odd cycles, each of length $3$ at least, with $m$ elements overall, and even cycles of length $2$ only, is $\binom{n}{m}f(m,k)(n-m-1)!!$. So, by Lemma \ref{P_4(P)=exact}, we have \begin{equation}\label{E[Sn]sim} \textup{E\/}\bigl[\mathcal S_n\bigr]=\bigl(e^{1/2}+O(n^{-\sigma})\bigr)\sum_{m\le m_n}\!\!\binom{n}{m} \frac{f(m)(n-m-1)!!}{(n+m-1)!!}. \end{equation} A standard argument from permutation enumeration shows that \begin{equation}\label{genf(m)} \sum_{m\ge 0}\frac{f(m)}{m!}\,x^m =\exp\!\!\left(\sum_{\text{odd }j\ge 3}\frac{x^j}{j}\right) =e^{-x}\sqrt{\frac{1+x}{1-x}}, \quad (|x|<1). \end{equation} So, using the saddle-point method (Flajolet and Sedgewick \cite{FlaSed}), \begin{equation}\label{f(m)sim} f(m)=\bigl(e^{-1}\sqrt{2}+O(m^{-1})\bigr)\frac{(2m-1)!!}{2^m}. \end{equation} With a bit of work, based on Stirling's formula, it follows that \begin{align*} &\bigl(e^{1/2}+O(n^{-\sigma})\bigr)\binom{n}{m}\frac{f(m)(n-m-1)!!}{(n+m-1)!!}\\ &=\bigl(1+O(n^{-\sigma}+m^{-1})\bigr)\sqrt{\frac{2}{\pi e}}\cdot m^{-1/2}\exp\left(-\frac{m^2}{2n}\right). \end{align*} Combining this formula with \eqref{E[Sn]sim}, and choosing $\sigma=1/4$, we complete the proof of \eqref{ESn=}. A bivariate extension of \eqref{genf(m)} is \begin{equation*} \sum_{m\ge 0}\frac{x^m}{m!} \sum_{k\ge 0}y^kf(m,k)=\exp\!\!\left(y\!\sum_{\text{odd }j\ge 3}\frac{x^j}{j}\right). \end{equation*} Differentiating this identity with respect to $y$ at $y=1$, we obtain \begin{multline}\label{gensumkf(m,k)} \sum_{m\ge 0}\frac{x^m}{m!} \sum_{k\ge 1}kf(m,k)= \sum_{\text{odd }j\ge 3}\frac{x^j}{j}\exp\left(\sum_{\text{odd }j\ge 3}\frac{x^j}{j}\right)\\ =\left(\frac{1}{2}\log\frac{1}{1-x}+\frac{1}{2}\log(1+x)-x\right) e^{-x}\sqrt{\frac{1+x}{1-x}}. \end{multline} So, analogously to \eqref{f(m)sim}, we obtain \begin{equation*} \sum_{k\ge 2}kf(m,k)=\bigl(1+O(m^{-1})\bigr)\frac{e^{-1}\sqrt{2}\log m}{2}\cdot\frac{(2m-1)!!}{2^m}.
\end{equation*} Combining this formula with the counterpart of \eqref{E[Sn]sim}, i.e. with \begin{equation*} \textup{E\/}\bigl[\mathcal O_n\bigr]\le \bigl(e^{1/2}+O(n^{-\sigma})\bigr)\sum_{m\le m_n}\!\!\binom{n}{m} \frac{(n-m)!!}{(n+m-1)!!}\sum_{k\ge 2}kf(m,k), \end{equation*} we have \eqref{EOn<}. \end{proof} Tan \cite{Tan}, \cite{Tan1} defined a maximum stable matching for an instance $I$ as a matching $M$ of maximum size (number of matched pairs) such that no pair of members, both having a partner in $M$, prefer each other to their partners. In short, no two members assigned in $M$, but not to each other, block $M$. He proved that a maximum stable matching has size $(n-\mathcal O)/2$, (see also Manlove \cite{Man}). \begin{Corollary}\label{maxst} Let $\mathcal M_n$ denote the size of the maximum stable matching for the random instance $I_n$. Then \begin{align*} &\quad\textup{E\/}\bigl[\mathcal M_n\bigr]\ge \frac{n - c n^{1/4}\log n}{2},\quad c=\frac{\Gamma(1/4)}{3\sqrt{\pi e}\, 2^{1/4}},\\ &\textup{ P\/}\Bigl(\mathcal M_n\ge \frac{n - n^{1/4}\log^2 n}{2}\Bigr)\ge 1- O\bigl(\log ^{-1} n)\bigr), \end{align*} so that the number of members unassigned in the maximum stable matching is likely to be of order $O\bigl(n^{1/4}\log ^2n)$. \end{Corollary} \subsection{A ``maximally stable'' matching in the random instance $I_n$} For a given set of preferences, Abraham, Bir\'o and Manlove \cite{AbrBirMan} (see also \cite{Man}) defined a ``maximally stable'' matching as a perfect matching $M$ on $[n]$ that is blocked by the smallest number of pairs, $B(I_n)$, of members not matched with each other in $M$. (Two members block $M$ if they prefer each other to their partners in $M$.) A weaker corollary of the bound in \cite{AbrBirMan} states that $B(I_n)\le d(I_n)\mathcal O(I_n)$, where $O(I_n)$ is the number of odd parties (common to all stable partitions for $I_n$) and $d(I_n)$ is the length of the longest preference list. 
Once we estimate $R_{\text{max}}$, defined as the largest rank of a predecessor in the uniformly random instance $I_n$, we will be able to apply the ABM inequality via replacing $d(I_n)$ with $R_{\text{max}}$. For a stable $\bold\Pi$ (without a fixed point), introduce $X(\bold\Pi):=\max_i X_{i,\bold\Pi^{-1}(i)}$. Intuitively, $\max_{\bold\Pi} X(\bold\Pi)$ controls the worst predecessor's rank. From Lemma \ref{summary}, and the proof of Theorem \ref{E[On]}, it follows that \begin{equation*} \textup{ P\/}\Biggl(\max_{\bold\Pi}X(\bold\Pi)\ge \frac{\log^2n}{n^{1/2}}\Biggr)\le e^{-\Theta(\log^2 n)}. \end{equation*} A bit more generally, for every $\delta>0$, \begin{equation}\label{P_delta} \textup{ P\/}_{\delta}:=\textup{ P\/}\Biggl(\max_{\bold\Pi}X(\bold\Pi)\ge \frac{\log^{1+\delta}n}{n^{1/2}}\Biggr)\le e^{-\Theta(\log^{1+\delta} n)}. \end{equation} Denoting $x_n=\frac{\log^{1+\delta}n}{n^{1/2}}$, let $R_i:=|\{j\neq i\,:\, X_{i,j}\le x_n\}|$. Since $X_{i,j}$ are independent $[0,1]$-Uniforms, we have $R_i\overset{\mathcal D}\equiv\text{Bin}(n-1, p=x_n)$. Let $c>1$; by the classic (Chernoff) bound for the tails of the binomial distribution, \[ \textup{ P\/}(R_i\ge c(n-1)x_n)\le \exp(-f(c)(n-1)x_n),\quad f(c):=1+c(\log c -1). \] So, $\textup{ P\/}(R_i\ge 2nx_n)\le e^{-n^{1/2}/3}$ if $n$ is large enough. Invoking \eqref{P_delta}, we have then \begin{align*} \textup{ P\/}\Bigl(R_{\text{max}}\ge 2n^{1/2}\log^{1+\delta}n\Bigr)&\le P_{\delta}+\sum_{i\in [n]}\textup{ P\/}\Bigl(R_i\ge 2n^{1/2}\log^{1+\delta}n\Bigr)\\ &\le e^{-\Theta(\log^{1+\delta} n)}+n e^{-n^{1/2}/3}. \end{align*} Thus \begin{Lemma}\label{R_max<} For $\delta>0$ arbitrarily small, quite surely $R_{\text{max}}$ is of order $n^{1/2}\log^{1+\delta}n$. \end{Lemma} Combining Lemma \ref{R_max<} with \eqref{EOn<} in Theorem \ref{E[On]}, we have proved \begin{Corollary}\label{ABM=>} With high probability, there exists a perfect matching which is blocked by at most $n^{3/4} (\log n)^{2+\delta}$ unmatched pairs. 
\end{Corollary} \subsection{Likely range of $\mathcal R(\bold\Pi)$ in a stable, fixed-point free, partition $\bold\Pi$} In Lemma \ref{P_k(P)=} we proved that $\textup{ P\/}_k(\bold\Pi)$ the probability that $\bold\Pi$ is stable and the total rank of all $n$ predecessors $\mathcal R(\bold\Pi)$ equals $k$, necessarily exceeding $n+|\text{Odd}\,(\bold\Pi)$, is given by \begin{equation}\label{P_k(P)=,again} \begin{aligned} &\textup{ P\/}_k(\bold\Pi)=\idotsint\limits_{\bold x\in [0,1]^{n}}\left[z^{\bar k}\right] F(\bold x,z)\,d\bold x,\\ F(\bold x,z)&:=\prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\!\!\!x_h\,\,\cdot\prod_{(i,j)\notin D(\bold\Pi)} \!\!\bigl(\bar x_i\bar x_j+zx_i\bar x_j+z\bar x_ix_j\bigr), \end{aligned} \end{equation} where $m:=|\text{Odd}\,(\bold\Pi)|$ and $\bar k:=k-(n+m)$. \begin{Theorem}\label{P(R(P)sim)} For $\varepsilon\in (0,1)$, \[ \textup{ P\/}\left(\max_{\bold\Pi}\left|\frac{\mathcal R(\bold\Pi)}{n^{3/2}}-1\right|\ge \varepsilon\right) \le e^{- \Theta(\log^2 n)}. \] \end{Theorem} \begin{proof} Predictably, we will prove the claim via the union bound, i.e. summing the bounds of the respective probabilities for the individual partitions. It suffices then to consider the partitions $\bold\Pi$ with $m\le m_n=\lceil n^{1/2}\log n\rceil$. First of all, since $F(\bold x,z)$ in \eqref{P_k(P)=,again} is a polynomial of $z$ with non-negative coefficients, we have a Chernoff-type bound: for $k:=\lceil (1+\varepsilon)n^{3/2}\rceil$, \[ \textup{ P\/}(\mathcal R(\bold\Pi)\ge k)\le I(\bold\Pi,k):=\idotsint\limits_{\bold x\in [0,1]^{n}}\inf_{z\ge 1}\Bigl[z^{-\bar k} F(\bold x,z)\Bigr]\,d\bold x. \] The integrand is, at most, \[ F(\bold x,1)=F(\bold x)\le_b e^{-\frac{s^2}{2}}\!\!\!\!\! \prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\!\!\!x_h, \] ($s=\sum_{i\in [n]} x_i$), see Lemma \ref{simple1,2}. 
Therefore the proof of Lemma \ref{summary} delivers, with only notational modification, that \begin{equation}\label{I-I4} I(\bold\Pi,k)-I_4(\bold\Pi,k)\le \frac{e^{-\Theta(\log^2 n)}}{(n+m-1)!!}. \end{equation} Here $I_4(\bold\Pi,k)$ is the integral of $\inf_{z\ge 1}\Bigl[z^{-\bar k} F(\bold x,z)\Bigr]$ over $C_4\subset [0,1]^n$, defined by the additional constraints: with $\xi=\sum_{i\in [n_1]}x_i$, ($n_1:=n-m)$, \begin{align} &\qquad\qquad s\le s_n:=n^{1/2}+3\log n,\quad \max_{i\in [n]} x_i\le 1.02\, \frac{s\log^2 n}{n};\label{I}\\ &\left|\frac{n_1\sum_{i\in [n_1]}x_i^2}{2\xi^2}-1\right|\le n^{-\sigma}, \, \,\left|\frac{2n_1\sum_{i\in [n_1/2]}x_i x_{i+n_1/2}}{\xi^2}-1\right|\le n^{-\sigma}.\label{II} \end{align} Instead of looking for the best $z=z(\bold x)\ge 1$ where $z^{-\bar k} F(\bold x,z)$ attains, or is close to, its infimum, we confine ourselves to a sub-optimal $z=z(s)\ge 1$ (i.e. dependent on $s$ only), which makes $z^{-\bar k} F(\bold x,z)$ suitably small for all $\bold x\in C_4$. Consider $z\le \frac{2\bar k}{sn}$; as we shall see shortly, the minimum point of an auxiliary bound for the integrand does satisfy this constraint. Using $1+x\le e^x$, the constraints \eqref{I}, \eqref{II} and $z\le \frac{2\bar k}{sn}$, we have \begin{align*} &\prod_{(i,j)\notin D(\bold\Pi)}\!\!\!\!\bigl(\bar x_i\bar x_j+zx_i\bar x_j+z\bar x_ix_j\bigr)=\!\!\!\! \prod_{(i,j)\notin D(\bold\Pi)}\!\!\!\!\bigl(1+(1-2z)x_ix_j+(z-1)(x_i+x_j)\bigr)\\ &\qquad\qquad\le\exp\Bigl(\sum_{(i,j)\notin D(\bold\Pi)}\bigl[(1-2z)x_ix_j +(z-1)(x_i+x_j)\bigr]\Bigr)\\ &\qquad\qquad\qquad\qquad\le_b\exp\Bigl((1-2z)\frac{s^2}{2}+n(z-1) s\Bigr); \end{align*} therefore \[ z^{-\bar k} F(\bold x,z)\le_b \exp\Bigl((1-2z)\frac{s^2}{2}+n(z-1) s-\bar k\log z\Bigr)\! \prod_{h\in \text{Odd}\,(\bold\Pi)}\!\!\!\!\!\!x_h. 
\] So, applying the identity \eqref{int,prod}, \begin{align} &\qquad\quad I_4(\bold\Pi,k)\le_b \frac{1}{(n+m-1)!}\int_0^{s_n}\exp(H(z,s))\,ds,\label{I4<}\\ &H(z,s):=(1-2z)\frac{s^2}{2}+n(z-1) s-\bar k\log z+(n+m-1)\log s.\notag \end{align} Let us use \eqref{I4<} to prove that \begin{equation}\label{I4<expl} I_4(\bold\Pi,k) \le\frac{e^{-\Theta(\varepsilon^2 n)}}{(n+m-1)!!}. \end{equation} As a function of $z$, $H(z,s)$ is convex, and has its absolute minimum at \[ \bar z=\bar z(s):=\frac{\bar k}{(n-s)s}\sim \frac{\bar k}{ns}< \frac{2\bar k}{ns}. \] This {\it decreasing\/} function of $s$ is ``Chernoff-admissible'' when $s$ is such that $z(s)\ge 1$. Let $s_1$ be the smaller root of the (quadratic) equation $\bar z(s)=1$: \[ s_1>\frac{\bar k}{n}, \quad s_1=\frac{\bar k}{n}+O(1)=(1+\varepsilon)n^{1/2}+O(1). \] Thus our best hope is a function \[ z(s)=\left\{\begin{aligned} &\bar z(s),&&\text{if }s\le s_1,\\ &1,&&\text{if }s>s_1.\end{aligned}\right. \] {\bf (i)\/} $s>s_1$. Here $s_1\sim\frac{\bar k}{n} > (n+m-1)^{1/2}$, the maximum point of \[ h(s):=H(1,s)=-\frac{s^2}{2} +(n+m-1)\log s. \] So, arguing as in the proof of Lemma \ref{C1}, \begin{equation}\label{arg} \begin{aligned} &\frac{1}{(n+m-1)!}\int_{s_1}^{s_n}\exp(H(1,s))\,ds \le_b \frac{e^{h(s_1)}}{(-h'(s_1))(n+m-1)!}\\ &\le\frac{e^{-\Theta(\varepsilon^2 n)}(n+m-2)!!}{(n+m-1)!}=\frac{e^{-\Theta(\varepsilon^2 n)}}{(n+m-1)!!}. \end{aligned} \end{equation} \noindent {\bf (ii)\/} $s<s_1$. Let $\bar h(s):=H(z(s),s)$. Since $H_z(z(s),s)=0$, we have \begin{align*} \bar h'(s)&=\left.H_s(z,s)\right|_{z=z(s)}=\Bigl(s-\frac{\bar k}{n-s}\Bigr)+\Bigl(\frac{\bar k}{s}-n\Bigr)+\frac{n+m-1}{s},\\ \bar h''(s)&=1-\frac{\bar k}{(n-s)^2}-\frac{k-1}{s^2}. \end{align*} By the second formula, we have $\bar h''(s)<0$ for $s\le s_n$, i.e. $\bar h(s)$ is concave. 
By the first formula, we have \begin{align*} \bar h'\Bigl(\tfrac{\bar k}{n}\Bigr)&\ge -(1+o(1))\frac{\bar k^2}{n^3}+\frac{n^2}{\bar k}\sim \frac{n^{1/2}}{1+\varepsilon}\to\infty,\\ \bar h'(s_1)&=\frac{(n+m-1)-s_1^2}{s_1}\sim -\frac{(2\varepsilon+\varepsilon^2)n^{1/2}}{1+\varepsilon}\to-\infty. \end{align*} Thus $\max\,\{\bar h(s): s\le s_n\}$ is attained at a unique point $s_2\in [\bar k/n, s_1]$; in particular, $s_1-s_2=O(1)$. Since $|\bar h''(s)|=O(n^{1/2})$, it follows\,--\,via Taylor's approximation of $\bar h(s_1) (=h(s_1))$ at $s_2$\,--\,that $\bar h(s_2)=h(s_1)+O(n^{1/2})$. Therefore, similarly to \eqref{arg}, we obtain \[ \frac{1}{(n+m-1)!}\int_{0}^{s_1}\exp(H(z(s),s))\,ds \le \frac{e^{-\Theta(\varepsilon^2 n)}}{(n+m-1)!!}. \] This bound together with \eqref{arg} imply \eqref{I4<expl}, which in combination with \eqref{I-I4} deliver \[ \textup{ P\/}(\mathcal R(\bold\Pi)\ge k)\le \frac{e^{-\Theta(\log^2 n})}{(n+m-1)!!}. \] As in the proof of Corollary \ref{fruit}, it follows that \[ \textup{ P\/}(\exists\,\bold\Pi: \mathcal R(\bold\Pi)\ge (1+\varepsilon)n^{3/2})\le e^{-\Theta(\log^2 n)}. \] Similarly \[ \textup{ P\/}(\exists\,\bold\Pi: \mathcal R(\bold\Pi)\le (1-\varepsilon)n^{3/2})\le e^{-\Theta(\log^2 n)}. \] \end{proof} \section{$\textup{E\/}\bigl[\mathcal S_n^2\bigr]$ and the expected number of members with multiple stable predecessors} First of all \[ \textup{E\/}\bigl[(\mathcal S_n)_2\bigr]=\sum_{\bold\Pi_1\neq \bold\Pi_2} \!\!\!\textup{ P\/}(\bold\Pi_1,\bold\Pi_2), \] where $\textup{ P\/}(\bold\Pi_1,\bold\Pi_2)$ is the probability that $\bold\Pi_1$ and $\bold\Pi_2$ are both stable. By Lemma \ref{p(P1,P2stab)=}, \begin{equation}\label{P_1,2=intF} \begin{aligned} &\quad \textup{ P\/}(\bold\Pi_1,\bold\Pi_2)=\idotsint\limits_{\bold x,\bold y\in [0,1]^n}F(\bold x,\bold y)\,d\bold x d\bold y,\\ F(\bold x,\bold y)&:=\prod_{h} x_h\,\cdot\prod_{(i\neq j)}\! 
[1-x_ix_j-y_iy_j+(x_i\wedge y_i)(x_j\wedge y_j)],\\ &\quad d\bold x=\prod_{i\in [n]} dx_i,\quad d\bold y=\prod_{i\in [n]:\bold\Pi_1(i)\neq \bold\Pi_2(i)} \!\!\!\!\!\!\!\!\!\!\!dy_i, \end{aligned} \end{equation} where $h\in \text{Odd}\,(\bold\Pi_{1,2})$, $(i\neq j)\in D_1^c\cap D^c_2$, $(D_t=D(\bold\Pi_t))$, $y_i=x_i$ if $\bold\Pi_1(i)=\bold\Pi_2(i)$, and for every circuit $\{i_1,\dots,i_{\ell}\}$, ($\ell\ge 4$), formed by alternating pairs matched either in $\bold\Pi_1$ or $\bold\Pi_2$, we have : \begin{equation}\label{alter} \begin{aligned} &\text{either } \,\,x_{i_1}>y_{i_1},\,x_{i_2}<y_{i_2},\dots, x_{i_{\ell}}<y_{i_{\ell}},\\ &\text{or}\qquad\,\, x_{i_1}<y_{i_1},\,x_{i_2}>y_{i_2},\dots, x_{i_{\ell}}>y_{i_{\ell}}. \end{aligned} \end{equation} Let $\mu\!=\!\mu(\bold\Pi_1,\bold\Pi_2)$ be the total number of these circuits, and $2\nu\!=\!2\nu(\bold\Pi_1,$$\bold\Pi_2)$, be their total length. Obviously, there are $2^{\mu}$ ways to select one of two ``alternation'' sequences described in \eqref{alter} for each of the $\mu$ circuits. Whatever the choice, there are exactly $\nu$ vertices $i$, on those circuits, where $y_i>x_i$ and $\nu$ vertices where $y_i<x_i$. Let $A$ and $B$ denote the correspondent subsets, $|A|=|B|=\nu$. So \begin{equation}\label{cone} \begin{aligned} &y_i>x_i,\,\, \text{ if }i\in A;\quad y_i<x_i,\,\,\text{ if }i\in B,\\ &y_i=x_i,\,\, \text{ if } i\in [n]\setminus (A\cup B)=\text{Odd}_{1,2}\cup\bigl(\text{Even}_{1,2}\setminus(A\cup B)\bigr), \end{aligned} \end{equation} $\text{Even}_{1,2}:=[n]\setminus \text{Odd}_{1,2}$. The individual contributions of these $2^{\mu}$ choices of the inequalities along the circuits to the integral in \eqref{P_1,2=intF} are all the same. This means that $\textup{ P\/}(\bold\Pi_1,\bold\Pi_2)$ equals the RHS integral in \eqref{P_1,2=intF}, with inequalities \eqref{cone} instead of \eqref{alter}, {\it times\/} $2^{\mu}$. 
As in the previous section, we need first to identify the subrange of $(\bold x,\bold y)$ that provides an asymptotically dominant contribution to the integral, and second to find a sharp approximation for that contribution. Like Theorem \ref{E[On]}, the key instrument is the bound for the double-indexed product in the definition of $F(\bold x,\bold y)$ proved in Lemma \ref{simple1,2}: \begin{equation}\label{F(x,y)<} \begin{aligned} &\quad F(\bold x,\bold y)\le_b \exp\left(-\frac{s_1^2}{2}-\frac{s_2^2}{2}+\frac{s_{1,2}^2}{2}\right)\,\prod_{h} x_h ,\\ &s_1:=\sum_{i\in [n]}x_i,\,\,s_2:=\sum_{i\in [n]} y_i,\,\,s_{1,2}:=\sum_{i\in [n]}(x_i\wedge y_i). \end{aligned} \end{equation} Here $(\bold x,\bold y)$ are subject to the constraints \eqref{cone}. To make use of this bound, we change the variables of integration: \begin{equation}\label{change} x'_i=\left\{\begin{aligned} &x_i-y_i,&&i\in B,\\ &x_i,&&i\not\in B,\end{aligned}\right. \quad y_i'=\left\{\begin{aligned} &y_i-x_i,&&i\in A,\\ &y_i,&&i\notin A.\end{aligned}\right. \end{equation} Here $\bold x',\bold y'\in [0,1]^n$, such that $x_i'=y_i'$ if $\bold\Pi_1(i)=\bold\Pi_2(i)$, and the Jacobian $\partial (\bold x,\bold y)/\partial(\bold x',\bold y')$ equals $1$. Furthermore, switching to $(\bold x',\bold y')$ and introducing \begin{equation}\label{xij=} \xi_1=\sum_{i\in [n]\setminus B}x_i'+\sum_{i\in B}y_i',\quad \xi_2=\sum_{i\in B}x_i',\quad \xi_3=\sum_{i\in A}y_i', \end{equation} we obtain \begin{equation}\label{sumsqrs} \begin{aligned} &\,\,\,\,\,\,-\frac{s_1^2}{2}-\frac{s_2^2}{2}+\frac{s_{1,2}^2}{2}=-\frac{1}{2}\Biggl(\sum_{i\in [n]\setminus B}x_i'+\sum_{i\in B}y_i'+\sum_{i\in B}x_i'\Biggr)^2\\ &\,\,\,\,\,\,-\frac{1}{2}\Biggl(\sum_{i\in [n]\setminus B}x_i'+\sum_{i\in B}y_i'+\sum_{i\in A}y_i'\Biggr)^2 +\frac{1}{2}\Biggl(\sum_{i\in [n]\setminus B}x_i'+\sum_{i\in B}y_i'\Biggr)^2\\ &=-\frac{1}{2}(\xi_1+\xi_2)^2-\frac{1}{2}(\xi_1+\xi_3)^2+\frac{1}{2}\xi_1^2 =-\frac{1}{2}(\xi_1+\xi_2+\xi_3)^2+\xi_2\xi_3. 
\end{aligned} \end{equation} Notice that \begin{equation}\label{sumxi=sumlor} \begin{aligned} &\xi_1+\xi_2+\xi_3=\sum_{i\in [n]}x_i'+\sum_{i\in A\cup B}y_i'=\sum_{i\in [n]}(x_i\lor y_i),\\ &\qquad\xi_2+\xi_3=\sum_{i\in B}x_i'+\sum_{i\in A}y_i'=\sum_{i\in [n]}|x_i-y_i|. \end{aligned} \end{equation} In full analogy with the case of $\textup{E\/}\bigl[\mathcal O_n\bigr]$, the bound \eqref{F(x,y)<} and the identity \eqref{sumsqrs} will allow us to shrink, in several steps, the range of $(\bold x,y)$ to a core range, on which the integrand $F(\bold x,\bold y)$ can be sharply approximated.\\ {\bf (1)\/} Recall that we consider the partitions $\bold\Pi$ with the total length of all odd cycles $m=m(\bold\Pi)\le m_n=[n^{1/2}\log n]$. Our first step is to dispense with the pairs $(\bold\Pi_1,\bold\Pi_2)$ of the partitions such that $2\nu=2\nu(\bold\Pi_1,\bold\Pi_2)\ge 2m_n$. \begin{Lemma}\label{nu>nu_n} \begin{align*} &\textup{E\/}\bigl[(\mathcal S_n)_2\bigr]-\textup{E\/}_1\bigl[(\mathcal S_n)_2\bigr]\le e^{-\Theta(\log^2 n)} ,\\ &\textup{E\/}_1\bigl[(\mathcal S_n)_2\bigr]:=\sum_{\nu(\bold\Pi_1,\bold\Pi_2)\le m_n} \!\!\!\!\!\!\!\!\!\textup{ P\/}(\bold\Pi_1,\bold\Pi_2). \end{align*} \end{Lemma} \begin{proof} By the equations \eqref{F(x,y)<} and \eqref{sumsqrs}, {\it and\/} the identity \eqref{int,prod}, we have \begin{equation}\label{sumxi_j} \begin{aligned} &\textup{ P\/}(\bold\Pi_1,\bold\Pi_2)\le_b\,2^{\mu}\idotsint\limits_{\bold x',\bold y'\ge \bold 0} \exp\Biggl(\!-\frac{1}{2}\Bigl(\sum_j\xi_j\Bigr)^2+\xi_2\xi_3\!\Biggr)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \Biggl(\,\prod_{h\in \text{Odd}_{1,2}}\!\!\!\! x_h'\Biggr)\,\,\,\prod_{i\in [n]} dx'_i\prod_{j\in A\cup B} dy'_j\\ &=2^{\mu}\iiint\limits_{\xi_j\ge \bold 0}\! \exp\Biggl(\!-\frac{1}{2}\Bigl(\sum_j\xi_j\Bigr)^2+\xi_2\xi_3\!\Biggr) \frac{\xi_1^{n+m-1}}{(n+m-1)!}\cdot\frac{(\xi_2\xi_3)^{\nu-1}} {[(\nu-1)!]^2}\,d\boldsymbol{\xi}. 
\end{aligned} \end{equation} Expanding $\exp(\xi_2\xi_3)=\sum_{k\ge 0}\xi_2^k\xi_3^k/k!$ and using again, term-wise, \eqref{int,prod}, we obtain \begin{equation}\label{explsum} \begin{aligned} \textup{ P\/}(\bold\Pi_1,\bold\Pi_2)&\le_b2^{\mu}\sum_{k\ge 0}s(n+m,\nu,k),\\ s(n+m,\nu,k)&:=\frac{[(\nu-1+k)!]^2}{[(\nu-1)!]^2k!(n+m+2(\nu+k)-1)!!}. \end{aligned} \end{equation} For $m=0$ this sum was estimated in \cite{Pit2}. For our case the estimate from \cite{Pit2} becomes \begin{equation}\label{sum_k} \sum_{k\ge 0}s(n+m,\nu,k)\le_b n\left(\frac{e}{n+m}\right)^{\frac{n+m}{2}}\!\! (n+m)^{-\nu}. \end{equation} Furthermore, the number of ordered pairs $(\bold\Pi_1,\bold\Pi_2)$ with parameters $m$, $\nu$ and $\mu$ is \begin{equation}\label{countpairs} \binom{n}{m} f(m) \binom{n-m}{2\nu}(n-m-2\nu-1)!!\cdot 2^{\mu}f(2\nu,\mu); \end{equation} here, as we recall, $f(m)$ is the total number of permutations of $[m]$ with only odd cycles of length $3$ or more, and $f(2\nu,\mu)$ is the total number of circuit partitions of $[2\nu]$ with $\mu$ circuits, each of even length $4$ at least. The factor $2^{\mu}$ counts the total number of ways to assign, in the alternating fashion, the edges of the circuits to the matching sets of $\bold\Pi_1$ and $\bold\Pi_2$. Clearly then, $2^{\mu}f(2\nu,\mu)$ is the total number of permutations of $[2\nu]$ with only even cycles, of length $4$ at least. We add that $(n-m-2\nu-1)!!$ is the total number of ways to form the $(n-m-2\nu)/2$ matched pairs out of $n-m-2\nu$ elements outside the circuits, i.e. the pairs {\it common\/} to $\bold\Pi_1$ and $\bold\Pi_2$. So, by \eqref{explsum}, \eqref{sum_k} and \eqref{countpairs}, we obtain \begin{equation}\label{clumsy} \begin{aligned} &\sum_{\nu(\bold\Pi_1,\bold\Pi_2)\ge m_n}\!\!\!\!\!\!\!\!P(\bold\Pi_1,\bold\Pi_2)\le_b n\!\!\!\!\!\sum_{m\le m_n}\!\binom{n}{m} f(m) \left(\!\frac{e}{n+m}\!\right)^{\frac{n+m}{2}}\\ &\qquad\times\sum_{\nu\ge m_n}\!\!\binom{n-m}{2\nu}(n-m-2\nu-1)!! 
(n+m)^{-\nu}\sum_{\mu}2^{2\mu}f(2\nu,\mu). \end{aligned} \end{equation} Now $f(m)\le m!$, and from \cite{Pit2} (Appendix) it follows that \begin{equation}\label{sum_{mu}} \sum_{\mu}2^{2\mu}f(2\nu,\mu)=e^{-1+O(\nu^{-1})} (2\nu)!=O((2\nu)!). \end{equation} Also \[ \frac{(n-m-2\nu-1)!!}{(n-m-2\nu)!}=\left[2^{\frac{n-m-2\nu}{2}}\left(\frac{n-m-2\nu}{2} \right)!\right]^{-1}. \] So the bound \eqref{clumsy} yields \begin{align*} &\sum_{\nu(\bold\Pi_1,\bold\Pi_2)\ge m_n}\!\!\!\!\!\!\!\!P(\bold\Pi_1,\bold\Pi_2)\le_b n!\,n\,\cdot\!\!\!\sum_{m\le m_n}\!\!\left(\!\frac{e}{n+m}\!\right)^{\frac{n+m}{2}}\\ &\qquad\qquad\qquad\times\sum_{\nu\ge m_n}\! \left[(n+m)^{\nu}\, 2^{\frac{n-m-2\nu}{2}}\left(\frac{n-m-2\nu}{2} \right) ! \right]^{-1}\\ &\le n!\,n^2\,\cdot\!\!\!\!\!\sum_{m\le m_n}\!\!\left(\!\frac{e}{n+m}\!\right)^{\frac{n+m}{2}} \!\!\left[(n+m)^{m_n}\, 2^{\frac{n-m-2m_n}{2}}\left(\frac{n-m-2m_n}{2} \right) !\,\right]^{-1}\!\!\!, \end{align*} since in the $\nu$-sum the terms decrease with $\nu$. Applying Stirling formula for the two factorials and using $m\le m_n\ll n^{2/3}$ in the expansions of $\log (1+z)$, $(z=m/n,\,z=-\tfrac{m+2m_n}{n})$, we transform this bound into \begin{equation}\label{sum_{nu>nu_2}} \sum_{\nu(\bold\Pi_1,\bold\Pi_2)\ge m_n}\!\!\!\!\!\!\!\!P(\bold\Pi_1,\bold\Pi_2) \le n^2\cdot\!\!\! \sum_{m\le m_n}\!\!\!\exp\left(\!-\frac{(m+2m_n)^2}{4n}\right)\le e^{-0.99\log^2 n}. \end{equation} \end{proof} From now on we will consider only {\it admissible\/} pairs $(\bold\Pi_1,\bold\Pi_2)$, i.e. those satisfying $m(\bold\Pi_1,\bold\Pi_2)\le m_n$ and $\nu(\bold\Pi_1,\bold\Pi_2)\le m_n$.\\ {\bf (2)\/} For the admissible pairs $(\bold\Pi_1,\bold\Pi_2)$, we can discard large parts of the $(\bold x,\bold y)$'s range, like we did for individual partitions $\bold\Pi$ in the case of $\textup{E\/}\bigl[\mathcal S_n\bigr]$. 
For a generic set $\mathcal C$ of $(\bold x,y)$ with $\bold x,\bold y\in [0,1]^n$, we define \begin{align*} &\textup{ P\/}_{\mathcal C}(\bold\Pi_1,\bold\Pi_2)=\idotsint\limits_{(\bold x,\bold y)\in \mathcal C}F(\bold x,\bold y)\,d\bold x d\bold y, \quad E_{\mathcal C}\bigl[(\mathcal S_n)_2\bigr]=\sum_{\bold\Pi_1,\bold\Pi_2}\!\!\!\textup{ P\/}_{\mathcal C}(\bold\Pi_1,\bold\Pi_2). \end{align*} \begin{Lemma}\label{sum(lor)} Introducing $s_n=n^{1/2}+6\log n$, and $ \mathcal C_1=\Bigl\{\bold x,\bold y: \sum_{i\in [n]}(x_i\lor y_i)\le s_n\Bigr\}, $ we have \[ E_1\bigl[(\mathcal S_n)_2\bigr]- E_{\mathcal C_1}\bigl[(\mathcal S_n)_2\bigr]\le e^{-\Theta(\log^2n)}. \] \end{Lemma} \begin{proof} We already observed, \eqref{sumxi=sumlor}, that $\sum_i (x_i\lor y_i)=\sum_j\xi_j$. So, similarly to \eqref{sumxi_j}-\eqref{explsum} we have: \begin{multline}\label{P-P_1} \textup{ P\/}(\bold\Pi_1,\bold\Pi_2)-\textup{ P\/}_{\mathcal C_1}(\bold\Pi_1,\bold\Pi_2)\\ \le_b 2^{\mu}\!\!\!\!\iiint\limits_{\xi_1+\xi_2+\xi_3\ge s_n}\! \!\!\!\!\exp\Biggl(\!-\frac{1}{2}\Bigl(\sum_j\xi_j\Bigr)^2+\xi_2\xi_3\!\Biggr) \frac{\xi_1^{n+m-1}}{(n+m-1)!}\cdot\frac{(\xi_2\xi_3)^{\nu-1}} {[(\nu-1)!]^2}\,d\boldsymbol{\xi}\\ = 2^{\mu}\sum_{k\ge 0}\frac{[(\nu-1+k)!]^2}{[(\nu-1)!]^2k!(n+m+2(\nu+k)-1)!}\\ \times\int_{s\ge s_n}\!\!\exp\Bigl(-\frac{s^2}{2}\Bigr) s^{n+m+2(\nu+k)-1}\,ds. \end{multline} (Relaxing the constraint on $s$ to $s\ge 0$ we get back to \eqref{explsum}.) The last integrand attains its maximum at \[ s_{\text{max}}=\bigl(n+m+2(\nu+k)-1\bigr)^{1/2}, \] which is below $s_n-3\log n$ if $k\le m_n$. Let $S_{\le m_n}$ and $S_{>m_n}$ denote the sub-sums of the sum above, for $k\le m_n$ and $k>m_n$ respectively. 
Then, expanding integration to $[0,\infty)$, we obtain \begin{align*} S_{>m_n}&\le \sum_{k>m_n}\frac{[(\nu-1+k)!]^2}{[(\nu-1)!]^2\,k!\,(n+m+2(\nu+k)-1)!!}\\ &\le_b \frac{[(\nu+m_n)!]^2}{[(\nu-1)!]^2\,m_n!\,(n+m+2(\nu+m_n)+1)!!}; \end{align*} since $\nu\le m_n$, the ratio of the consecutive terms in the sum is below $2/3$. Droping $[(\nu-1)!]^2$ in the denominator and using the Stirling formula for the other factorials, we simplify the bound to \begin{equation}\label{S_><} S_{>m_n}\le_b \left(\frac{e}{n+m}\right)^{\frac{n+m}{2}}\!\! (n+m)^{-(\nu+m_n)}. \end{equation} The bound is smaller than the bound \eqref{sum_k} for the full sum of $s(n+m,\nu,k)$ by the factor $(n+m)^{m_n}$. Turn to $S_{\le m_n}$. This time the bottom integral over $s\ge s_n$ in \eqref{P-P_1} is small, compared to the integral over all $s\ge 0$, because for $k\le m_n$ the maximum point of the integrand is at distance $3\log n$, at least, from the interval $[s_n,\infty)$. More precisely, using the argument in the proof of Lemma \ref{C1}, we have \[ \int_{s\ge s_n}\!\!\exp\Bigl(-\frac{s^2}{2}\Bigr) s^{n+m+2(\nu+k)-1}\,ds\le_b e^{-8\log^2 n}\bigl(n+m +2(\nu+k)-2\bigr)!!. \] Therefore \begin{equation}\label{S<<} \begin{aligned} S_{\le m_n}&\le_b e^{-8\log^2 n}\sum_{k\le m_n}\frac{[(\nu-1+k)!]^2}{[(\nu-1)!]^2k!(n+m+2(\nu+k)-1)!!}\\ &\le e^{-8\log^2 n}\sum_{k\ge 0}\frac{[(\nu-1+k)!]^2}{[(\nu-1)!]^2k!(n+m+2(\nu+k)-1)!!}\\ &=e^{-8\log^2 n}\sum_{k\ge 0}s(n+m,\nu,k). \end{aligned} \end{equation} Combining \eqref{S_><}, \eqref{S<<} and \eqref{sum_k} we transform the inequality \eqref{P-P_1} into \begin{equation}\label{P-P1expl} P(\bold\Pi_1,\bold\Pi_2)-P_{\mathcal C_1}(\bold\Pi_1,\bold\Pi_2)\le e^{-\Theta(\log^2 n)}\, 2^\mu \!\left(\frac{e}{n+m}\!\right)^{\frac{n+m}{2}}\!\! (n+m)^{-\nu}. 
\end{equation} So, like the part {\bf (1)\/} in the proof of Lemma \ref{nu>nu_n}, \begin{equation}\label{sum(P(P1,P_2)-P_1(P_1,P_2))} \begin{aligned} &\quad\sum_{\bold\Pi_1,\bold\Pi_2}\bigl[P(\bold\Pi_1,\bold\Pi_2)-P_{\mathcal C_1}(\bold\Pi_1,\bold\Pi_2)] \le e^{-\Theta(\log^2n)}n!\,m_n\\ &\times\!\!\! \sum_{m\le m_n]}\!\!\!\left(\frac{e}{n+m}\right)^{\frac{n+m}{2}} \left[(n+m)^{0}\, 2^{\frac{n-m-2\cdot 0}{2}}\left(\frac{n-m-2\cdot 0}{2} \right) ! \right]^{-1}\\ &\qquad\qquad\qquad=e^{-\Theta(\log^2n)}. \end{aligned} \end{equation} \end{proof} We need some additional reduction of the last range $\mathcal C_2$. The bound \eqref{F(x,y)<} will continue to be the key tool, until the resulting range is narrow enough to permit a sufficiently sharp bound of the double product \[ G(\bold x,\bold y)=\prod_{(i\neq j)\in D^c_1\cap D^c_2}\!\!\!\!\!\!\!\bigl[1-x_ix_j-y_iy_j+(x_i\wedge y_i)(x_j\wedge y_j)\bigr] \] in \eqref{P_1,2=intF}. Define $\mathcal N=\mathcal N(\bold\Pi_1,\bold\Pi_2)$ and $\mathcal M=\mathcal M(\bold\Pi_1,\bold\Pi_2)$ as the vertex set of all odd cycles and even cycles, of length $4$ or more, and the vertex set of the edges common to both partitions, respectively. So $|\mathcal N|=m+2\nu$, and $|\mathcal M|=n-(m+2\nu)$. Arguing as in the proof of Lemma \ref{simple1,2}, but retaining more terms, we have \begin{equation}\label{G(x,y)<expl} \begin{aligned} G(\bold x,\bold y)&\le \exp\Biggl(\!-\frac{s_1^2}{2}-\frac{s_2^2}{2}+\frac{s_{1,2}^2}{2}+ \frac{1}{2}\sum_{i\in \mathcal M}x_i^2 +\sum_{(i\neq j)\in M_1\cap M_2}\!\!\!\!\!\!x_ix_j\\ &\,\,\,\,-\frac{1}{4}\Bigl(\sum_{i\in [n]} (x_i\wedge y_i)^2\Bigr)^2 +O\Bigl(\sum_{i\in \mathcal N}(x_i^2+y_i^2)\!\Bigr)+O\Bigl(\sum_{i\in [n]}x_i^4\Bigr)\!\Biggr). \end{aligned} \end{equation} Thus we have to find sharp approximations of the three explicit sums and to establish the $o(1)$ bounds of the remainders for almost all $(\bold x,\bold y)\in \mathcal C_1$. 
With those approximations at hand we will obtain an explicit upper bound for $\textup{E\/}\bigl[(\mathcal S_n)_2\bigr]$. For brevity will not present a proof of a matching lower bound. \\ {\bf (3)\/} By \eqref{change} and \eqref{xij=}, $s:=\xi_1+\xi_2+\xi_3=\sum_{i\in [n]}x_i'+\sum_{i\in A\cup B}y_i'.$ \begin{Lemma}\label{C1-C2} Define $\bold u'=\{u_i'\}_{i\in [n]}$, where $u_i'=x_i'/s$, for $i\in [n]$, and $u_i'=y_i'/s$ for $i\in A\cup B$. Define $T_1(\bold u')=\max_i u_i'$. For \[ \mathcal C_2:=\Bigl\{(\bold x,\bold y)\in \mathcal C_1: T_1(\bold u')\le 1.01\frac{\log^2 n}{n}\Bigr\}, \] we have \begin{equation*} \textup{ P\/}_{\mathcal C_1}(\bold\Pi_1,\bold\Pi_2)-\textup{ P\/}_{\mathcal C_2}(\bold\Pi_1,\bold\Pi_2) \le 2^{\mu} e^{-\Theta(\log^2 n)}\left(\frac{e}{n+m}\right)^{\frac{n+m}{2}}\!\!n^{-\nu}. \end{equation*} \end{Lemma} \begin{proof} Introduce $L_1',\dots,L_{n+2\nu}'$, the intervals lengths in the random partition of $[0,1]$ by the $n+2\nu-1$ random points. Analogously to \eqref{sumxi_j}, but using the sharper inequality in Lemma \ref{intervals1}, \eqref{joint<}, we have: with $s:=\xi_1+\xi_2+\xi_3$, \begin{equation* \begin{aligned} &\textup{ P\/}_{\mathcal C_1}(\bold\Pi_1,\bold\Pi_2)-\textup{ P\/}_{\mathcal C_2}(\bold\Pi_1,\bold\Pi_2)\\ &\le_b\,2^{\mu}\!\!\!\!\!\!\idotsint\limits_{\bold x', \bold y'\ge \bold 0\atop T_1(\bold u')>1.01\frac{\log^2 n}{n}}\!\!\!\!\! e^{-\frac{s^2}{2}+\xi_2\xi_3} \prod_{h\in\text{Odd}_{1,2}}\!\!\!\!x_h\,\prod_{i\in [n]} dx'_i\prod_{j\in A\cup B}\!\! dy'_j\\ &\le \frac{2^{\mu}}{(n+2\nu-1)!}\,\textup{E\/}\Biggl[\chi\Bigl(T_1(\bold L')\ge 1.01\frac{\log^2n}{n}\Bigr)\prod_{h\in \text{Odd}_{1,2}}\!\!\!\!\!L_h'\Biggr]\\ &\quad\times \idotsint\limits_{\bold x', \bold y'\ge \bold 0} e^{-\frac{s^2}{2}+\xi_2\xi_3}\,s^m\prod_{i\in [n]} dx'_i\prod_{j\in A\cup B}\!\! dy'_j. \end{aligned} \end{equation*} Arguing as in \eqref{union}, the expectation factor is less than \[ e^{-1.01\log^2 n}\frac{(n+2\nu)!}{(n+m+2\nu-2)!}. 
\] The integral is less than \begin{align*} I_n(m,\nu)&:=\iiint\limits_{\xi_j\ge 0} e^{-\frac{s^2}{2}+\xi_2\xi_3} s^m\frac{\xi_1^{n-1}}{(n-1)!} \cdot\frac{(\xi_2\xi_3)^{\nu-1}}{\bigl[(\nu-1)!\bigr]^2}\,d\boldsymbol\xi\\ &=\sum_{k\ge 0}\frac{\bigl[(\nu-1+k)!\bigr]^2}{(n-1)!\,\bigl[(\nu-1)!\bigr]^2\,k!\,\bigl(2(\nu+k)-1\bigr)!}\\ &\times\iint\limits_{\xi_1,\xi_4\ge 0}e^{-\frac{(\xi_1+\xi_4)^2}{2}} (\xi_1+\xi_4)^m\, \xi_1^{n-1}\xi_4^{2(\nu+k)-1}\,d\xi_1 d\xi_4. \end{align*} Here the double integral equals \[ \frac{(n-1)!\,\bigl(2(\nu+k)-1\bigr)!\,\bigl(n+m+2(\nu+k)-2\bigr)!!}{\bigl(n+2(\nu+k)-1\bigr)!}. \] So \[ I_n(m,\nu)=\sum_{k\ge 0}\frac{\bigl[(\nu-1+k)!\bigr]^2\,\bigl(n+m+2(\nu+k)-2\bigr)!!} {\bigl[(\nu-1)!\bigr]^2\,k!\,\bigl(n+2(\nu+k)-1\bigr)!} \] Therefore \begin{align*} &\quad\textup{ P\/}_{\mathcal C_1}(\bold\Pi_1,\bold\Pi_2)-\textup{ P\/}_{\mathcal C_2}(\bold\Pi_1,\bold\Pi_2) \le_b 2^{\mu} e^{-\log^2 n}\sum_{k\ge 0} s'(n,m,\nu,k),\\ &s'(n,m,\nu,k):=\frac{\bigl[(\nu-1+k)!\bigr]^2\bigl(n+m+2(\nu+k)-2\bigr)!!\,(n+2\nu)!}{(n+m+2\nu-2)!\,\bigl[(\nu-1)!\bigr]^2\,k!\,\bigl(n+2(\nu+k)-1\bigr)!}. \end{align*} The summand $s'(n,m,\nu,k)$ is similar to the summand $s(n+m,\nu,k)$ defined in \eqref{explsum}. Closely following the derivation of the bound for $\sum_{k\ge 0}s(n,\nu,k)$ in \cite{Pit2}, we obtain \begin{equation*} \sum_{k\ge 0} s'(n,m,\nu,k)\le_b n^2\left(\frac{e}{n+m}\right)^{\frac{n+m}{2}}\!\!n^{-\nu}, \end{equation*} compare to \eqref{sum_k}. The last two bounds complete the proof. \end{proof} On $\mathcal C_1\supset \mathcal C_2$ we have \[ s=\sum_{ i\in [n]}x_i'+\sum_{i\in A\cup B}y_i'=\sum_{i\in [n]}(x_i\lor y_i)\le s_n=n^{1/2}+6\log n. \] and on $\mathcal C_2$ \[ \max\Bigl\{\max_i\frac{x_i'}{s},\,\max_{j\in A\cup B}\frac{y_j'}{s}\le 1.01\frac{\log^2 n}{n}\Bigr\}. \] Since $m,\,\nu\le n^{1/2}\log n$, we have then the counterparts of the bounds in \eqref{sumx_h^2}. 
Namely, on $\mathcal C_2$, \begin{equation}\label{onC_2} \begin{aligned} \sum_{i\in \mathcal N}(x_i'+&y_i')\le_b sn^{-1}\log^3n,\quad \sum_{i\in \mathcal N}(x_i'+y_i')^2\le_b n^{-1/2}\log^5n,\\ &\sum_{i\in \mathcal N}(x_i'+y_i')^4\le_b n^{-1}\log^8 n, \end{aligned} \end{equation} \\ {\bf (4)\/} With $\xi:=\sum_{i\in \mathcal M}x_i'(=\sum_{i\in \mathcal M}x_i)$, define $v_i'=x_i'/\xi$ for $i\in \mathcal M$, $|\mathcal M|=n-m-2\nu$. Introduce $T_2(\bold v')=\sum_{i\in \mathcal M}(v_i')^2$, and $V'$ the set of all $\bold v'$ such that \[ \left|\frac{n-m-2\nu}{2}\,T_2(\bold v')-1\right|\le n^{-\sigma}. \] \begin{Lemma}\label{C2-C3} For $\sigma<1/3$, let $ \mathcal C_3=\bigl\{(\bold x,\bold y)\in \mathcal C_2: \bold v'\in V\bigr\}. $ Then \begin{equation*} \textup{ P\/}_{\mathcal C_2}(\bold\Pi_1,\bold\Pi_2)-\textup{ P\/}_{\mathcal C_3}(\bold\Pi_1,\bold\Pi_2) \le_b 2^{\mu}\, e^{-\Theta(n^{1/3-\sigma})}\left(\frac{e}{n+m}\right)^{\frac{n+m}{2}}\!\!n^{-\nu}. \end{equation*} \end{Lemma} \begin{proof} Introduce \[ \xi_4:=\xi_1-\xi=\sum_{i\in B^c\cap \mathcal M^c}x_i'+\sum_{i\in B}y_i'=\sum_{i\in \text{Odd}_{1,2}}x_i'+ \sum_{i\in A} x_i'+\sum_{i\in B}y_i'. \] Then with $s:=\xi+\xi_4+\xi_2+\xi_3$, \begin{align*} &\textup{ P\/}_{\mathcal C_2}(\bold\Pi_1,\bold\Pi_2)-\textup{ P\/}_{\mathcal C_3}(\bold\Pi_1,\bold\Pi_2)\\ &\qquad\quad \le_b\,2^{\mu}\idotsint\limits_{\bold x',\,\bold y':\, \bold v'\in V} e^{-\frac{s^2}{2}+\xi_2\xi_3} \prod_{i\in M} dx'_i\, \prod_{j\in\text{Odd}_{1,2}}\!\!\!\!x_j dx_j\\ &\qquad\qquad\qquad\qquad\quad\times\prod_{k\in A}dx_k'\,\prod_{\ell\in B}dy_{\ell}'\,\prod_{b\in B} dx_b'\,\prod_{a\in A}dy_a'. \end{align*} Now the integrand depends on $\{x_i'\}_{i\in \mathcal M}$ only through $\xi=\sum_{i\in \mathcal M}x_i'$. 
So, introducing the random intervals $\mathcal L_1',\dots, \mathcal L_{n-m-2\nu}'$ forming the partition of $[0,1]$, we obtain \begin{align*} &\textup{ P\/}_{\mathcal C_2}(\bold\Pi_1,\bold\Pi_2)-\textup{ P\/}_{\mathcal C_3}(\bold\Pi_1,\bold\Pi_2)\\ &\le_b 2^{\mu}\!\textup{ P\/}\Bigl(\Bigl|\frac{n-m-2\nu}{2}T_2(\boldsymbol{\mathcal L}')-1\Bigr|\ge n^{-\sigma}\!\Bigr) \iiiint\limits_{\xi, \,\xi_j\ge 0}e^{-\frac{s^2}{2}+\xi_2\xi_3}\frac{\xi^{|\mathcal M|-1}\,d\xi}{(|\mathcal M|-1)!}\\ &\qquad\qquad\qquad\times \frac{\xi_4^{2m+2\nu-1}\,d\xi_4}{(2m+2\nu-1)!}\cdot \frac{\xi_2^{\nu-1}\,d\xi_2}{(\nu-1)!}\cdot\frac{\xi_3^{\nu-1}\,d\xi_3}{(\nu-1)!}. \end{align*} The probability is of order $e^{-\Theta(n^{1/3-\sigma})}$, and the integral equals the bottom $3$-dimensional integral in \eqref{sumxi_j}. Jointly with \eqref{explsum} and \eqref{sum_k} this proves the claim. \end{proof} Finally, introduce $T_3(\bold v')=\sum\limits_{(i,j)\in M_1\cap M_2}\!\!v_i' v_j'$; (here, of course, $i,j\in\mathcal M$). \begin{Lemma}\label{C3-C4} For $\sigma<1/3$, let \[ \mathcal C_4=\left\{(\bold x,\bold y)\in \mathcal C_3:\left|2(n-m-2\nu)T_3(\bold v')-1\right|\le n^{-\sigma}\right\}; \] Then \begin{equation*} \textup{ P\/}_{\mathcal C_3}(\bold\Pi_1,\bold\Pi_2)-\textup{ P\/}_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2) \le 2^{\mu} e^{-\Theta(n^{1/3-\sigma})}\left(\frac{e}{n+m}\right)^{\frac{n+m}{2}}\!\!n^{-\nu}. \end{equation*} \end{Lemma} \noindent The proof is a copy of the previous argument. The Lemmas \ref{sum(lor)}, \ref{C1-C2}, \ref{C2-C3} and \ref{C3-C4} imply \begin{Lemma}\label{comb} For every admissible pair $\bold\Pi_1,\,\bold\Pi_2$, \[ P(\bold\Pi_1,\bold\Pi_2)-P_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2)\le_b e^{-\Theta(\log^2n)}\,2^{\mu} \left(\frac{e}{n+m}\right)^{\frac{n+m}{2}}\!\!n^{-\nu}. 
\] Here $P_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2)$ is the integral of $F(\bold x,\bold y)$ over \[ \mathcal C_4\subset \{\bold x,\,\bold y\in [0,1]^n:\, x_i=y_i\ \text{ if }\,\bold\Pi_1(i)=\bold\Pi_2(i)\}, \] defined by the additional constraints: denoting $\xi:=\sum_{i\in \mathcal M} x_i'\,\left(=\sum_{i\in \mathcal M}x_i\right)$, \begin{align} s&:=\sum_{i\in [n]} x_i'+\sum_{j\in A\cup B} y_j' (=\xi_1+\xi_2+\xi_3)\le s_n (=n^{1/2}+6\log n), \label{1}\\ &\max\left\{\max_{i\in [n]} x_i',\,\max_{j\in A\cup B} y_j'\right\}\le 1.01\frac{s\log^2 n}{n}, \label{2}\\ &\left|\frac{|\mathcal M|}{2\xi^2}\sum\limits_{i\in\mathcal M} x_i^2-1\right|\le n^{-\sigma},\quad \left|\frac{2|\mathcal M|}{\xi^2}\sum\limits_{(i,j)\in M_1\cap M_2}\!\!\!\!\!\!\!\!x_ix_j-1\right|\le n^{-\sigma}. \label{3} \end{align} \end{Lemma} The constraint \eqref{3} involves only $\{x_i\}_{i\in \mathcal M}$, and the constraint \eqref{2} imposes the bound for the individual components $x_i'$ and $y_j'$. Since $s_n\le 2n^{1/2}$, the latter implies that \begin{equation}\label{max[n],A,B<} \max\left\{\max_{i\in [n]} x_i',\,\max_{j\in A\cup B} y_j'\right\}\le 3 n^{-1/2}\log^2 n, \end{equation} obviating the constraint $x_i'\le 1,\,y_j'\le 1$. On $\mathcal C_4$ the inequality \eqref{G(x,y)<expl} can be drastically simplified. First of all, the bottom part of the bound \eqref{G(x,y)<expl} is \[ -\frac{1}{4}\Bigl(\sum_{i\in\mathcal M} x_i^2\Bigr)^2 +O\bigl(n^{-1/2}\log^5n\bigr). \] Second, \begin{align*} \sum_{i\in\mathcal M}x_i^2&=\bigl(1+O(n^{-\sigma})\bigr)\frac{2\xi^2}{|\mathcal M|}=\frac{2\xi^2}{|\mathcal M|}+O(n^{-\sigma}),\\ \sum_{(i,j)\in M_1\cap M_2}\!\!\!\!\!\!\!\!x_ix_j&=\bigl(1+O(n^{-\sigma})\bigr)\frac{\xi^2}{2|\mathcal M|} =\frac{\xi^2}{2|\mathcal M|} +O(n^{-\sigma}), \end{align*} {\it and\/} $\xi=\xi_1\bigl(1+O(n^{-1}\log^2 n)\bigr)$. In addition, $|\mathcal M|=n\bigl(1+O(n^{-1/2}\log n)\bigr)$. 
Therefore \eqref{G(x,y)<expl} becomes \begin{align*} G(\bold x,\bold y)&\le \bigl(1+O(n^{-\sigma})\bigr)\exp\bigl[H(\boldsymbol\xi)\bigr],\,\, H(\boldsymbol\xi)=-\frac{s^2}{2}+\xi_2\xi_3+\frac{3\xi_1^2}{2n} -\frac{\xi_1^4}{n^2}. \end{align*} \begin{Lemma}\label{P*sim} \begin{align*} &\textup{ P\/}_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2)\le \frac{2^{\mu}\bigl(1+O(n^{-\sigma})\bigr)} {(n+m-1)!\,\bigl[(\nu-1)!\bigr]^2}\cdot \mathcal I(n+m,\nu),\\ &\mathcal I(n+m,\nu):=\iiint\limits_{(\xi_1,\xi_2,\xi_3)\in \bold R}\! \exp\bigl[H(\boldsymbol{\xi})\bigr]\,\xi_1^{n+m-1}\cdot(\xi_2\xi_3)^{\nu-1}\,d\boldsymbol{\xi},\\ &\,\,\bold R:=\bigl\{\boldsymbol\xi\ge\bold 0:\, \xi_1\le n^{1/2}+6\log n;\,\,\xi_2, \xi_3\le 2\log^3 n\bigr\}. \end{align*} \end{Lemma} \noindent The proof, of course, is based on the description of $\mathcal C_4$, and it runs along the familiar lines of our preceding proofs; in particular, see the proof of Lemma \ref{C2-C3}. We omit the details. Furthermore, by the asymptotic formula for $\mathcal I(n,\nu)$ from \cite{Pit2} (3.60), we have \begin{align*} \mathcal I(n+m,\nu)=(1+o(1))\left(\frac{\pi e}{n+m}\right)^{1/2}\!\left(\frac{n+m}{e}\right)^{\frac{n+m}{2}} \!\!\!(n+m)^{-\nu}\bigl[(\nu-1)!\bigr]^2. \end{align*} So, by Lemma \ref{P*sim}, \begin{equation* \begin{aligned} \textup{ P\/}_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2)&\le \frac{2^{\mu}\,\bigl(1+O(n^{-\sigma})\bigr)}{(n+m-1)!} \left(\frac{\pi e}{n+m}\right)^{1/2}\!\left(\frac{n+m}{e}\right)^{\frac{n+m}{2}}\!\!(n+m)^{-\nu}. 
\end{aligned} \end{equation*} Therefore, within the factor $1+O(n^{-\sigma})$, \begin{equation* \begin{aligned} &\sum_{\bold\Pi_1,\bold\Pi_2}P_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2)\\ &\le \sum_{m\le m_n} \binom{n}{m}\, \frac{f(m)}{(n+m-1)!} \left(\frac{\pi e}{n+m}\right)^{1/2}\left(\frac{n+m}{e}\right)^{\frac{n+m}{2}}\\ &\,\,\,\,\,\times\sum_{\nu\le m_n}\binom{n-m}{2\nu}(n-m-2\nu-1)!!\,(n+m)^{-\nu} \cdot\sum_{\mu}2^{2\mu}f(2\nu,\mu) \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &=n!\sum_{m\le m_n}\frac{f(m)}{m!\,(n+m-1)!}\left(\frac{\pi e}{n+m}\right)^{1/2}\left(\frac{n+m}{e}\right)^{\frac{n+m}{2}}\\ &\,\,\,\,\,\times\sum_{\nu\le m_n}\frac{(n-m-2\nu-1)!!}{(n-m-2\nu)!\,(2\nu)!} (n+m)^{-\nu} \cdot\sum_{\mu}2^{2\mu}f(2\nu,\mu). \end{aligned} \end{equation*} cf. \eqref{clumsy}. The sum over $\mu$ is $e^{-1}(2\nu)! (1+O(1/\nu))$, see \eqref{sum_{mu}}. So the sum over $\nu$ is asymptotic to \begin{align*} &\frac{e^{-1}}{(n-m)!!}\sum_{\nu\le m_n}(n+m)^{-\nu}\prod_{j=0}^{\nu-1}(n-m-2j) \sim \frac{e^{-1}}{(n-m)!!}\sum_{\nu\le m_n}e^{-\frac{\nu^2}{n}-\frac{2\nu m}{n}}. \end{align*} Thus, since $f(m)/m!=e^{-1}\sqrt{\frac{2}{\pi m}}(1+O(m^{-1}))$, the $m$-term in the resulting sum is (within a factor $1+O(m^{-1})$) \begin{align*} &e^{-2}\sqrt{\frac{2}{\pi m}}\cdot\frac{n!}{(n-m)!!\,(n+m-1)!} \left(\frac{ e}{n+m}\right)^{1/2}\left(\frac{n+m}{e}\right)^{\frac{n+m}{2}}\\ &\times \sum_{\nu\le m_n}e^{-\frac{\nu^2}{n}-\frac{2\nu m}{n}}\\ & \sim e^{-3/2}\sqrt{\frac{2}{\pi^2 m}}\cdot e^{-\frac{m^2}{2n}}\sum_{\nu\le m_n}e^{-\frac{\nu^2}{n}-\frac{2\nu m}{n}}. 
\end{align*} So \begin{equation*} \begin{aligned} &\sum_{\bold\Pi_1,\bold\Pi_2}\!\!P_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2) \lesssim e^{-3/2}\sqrt{\frac{2}{\pi^2}}\sum_{m,\,\nu\le m_n}\!\!\!\!\frac{1+O(m^{-1})}{m^{1/2}}e^{-\frac{m^2}{2n}-\frac{2\nu m}{n}-\frac{\nu^2}{n}} \sim c n^{3/4},\\ &\qquad\qquad\quad c:=e^{-3/2}\sqrt{\frac{2}{\pi^2}}\iint\limits_{x,\,y\ge 0} x^{-1/2}e^{-\frac{x^2}{2}-2xy -y^2}\,dxdy\approx 0.617. \end{aligned} \end{equation*} Thus, since $\textup{E\/}[\mathcal S_n]$ is of order $n^{1/4}$, we have \begin{Theorem}\label{E[Sn^2]sim} $\textup{E\/}\bigl[\mathcal S_n^2\bigr]\lesssim cn^{3/4}$. \end{Theorem} \noindent With extra work, we could have proved that $\textup{E\/}\bigl[\mathcal S_n^2\bigr]\gtrsim cn^{3/4}$, as well. Since $\textup{E\/}\bigl[\mathcal S_n^2\bigr]\gg \textup{E\/}^2[\mathcal S_n]$, we cannot deduce that $\mathcal S_n\to\infty$ in probability, even though $\textup{E\/}[\mathcal S_n]\to\infty$. We firmly believe that the argument itself may help to define a subset of stable partitions for which the two-moments approach will work just fine. For now we are content to use the techniques above to prove a result that would have been out of reach if not for the analysis of $\textup{E\/}\bigl[\mathcal S_n^2\bigr]$. \begin{Theorem}\label{mult} Let $q_n$ denote the fraction of members that have more than one stable partner. Then $\textup{E\/}[q_n]\lesssim 2ec\, n^{-1/4}$, so that with high probability almost all members have a unique stable partner. \end{Theorem} \begin{proof} It suffices to consider the members outside the odd cycles. If any such member has two stable partners, it belongs to a cycle of even length $\ge 4$ formed by the alternating pairs matched in the corresponding stable partitions $\bold \Pi_1$ and $\bold\Pi_2$. Notice that selecting every other edge of those cycles, we get a stable partition.
Therefore, without loss of generality we can assume that $\bold\Pi_1$ and $\bold\Pi_2$ form a unique cycle of even length $2\nu\ge 4$. It follows that $Q_n$, the total number of members with at least two stable partners, is bounded above by the total length of the single cycles formed by these special pairs of stable partitions $\bold\Pi_1$ and $\bold\Pi_2$. The bound does look crude, but it works. To bound the total expected length of those cycles, we need to estimate $\sum 2\nu(\bold\Pi_1,\bold\Pi_2) P_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2)$. For those pairs we have $\mu:=\mu(\bold\Pi_1,\bold\Pi_2)=1$, and $\sum_{\mu}2^{2\mu}f(2\nu,\mu)=2(2\nu-1)!$. Therefore \[ \textup{E\/}[Q_n]\lesssim \sum_{\bold\Pi_1,\bold\Pi_2}2\nu(\bold\Pi_1,\bold\Pi_2) P_{\mathcal C_4}(\bold\Pi_1,\bold\Pi_2)\lesssim 2ec\, n^{3/4}. \] \end{proof} {\bf Acknowledgment.\/} Almost thirty years ago Don Knuth introduced me to his ground-breaking work on random stable marriages. Don's ideas and techniques have been a source of inspiration for me ever since. The masterful book by Dan Gusfield and Rob Irving encouraged me to continue working on stable matchings. I am very grateful to Rob for the chance to work with him on the stable roommates problem back in $1994$, and for his encouragement these last months. Itai Ashlagi, Peter Bir\'o, Jennifer Chayes, Gil Kalai, Yash Kanoria, Jacob Leshno and the recent monograph by David Manlove made me aware of a significant progress in theory and applications of two/one--sided stable matchings. I thank the organizers for a valuable opportunity to participate in MATCH-UP 2017 Conference.
\section{Introduction} Charmonium production off nuclei is one of the most promising probes for studying the properties of matter created in ultrarelativistic heavy-ion collisions. It was realized a long time ago that only collisions where a large density of particles is produced in the central region give rise to an anomalous $J/\psi$ suppression, i.e. suppression beyond that observed in $pA$ collisions. The anomalous suppression was advocated as a signal of charmonium melting in a thermalized quark-gluon plasma, but can also be described as a final state interaction of the $J/\psi$ with co-moving matter, of both partonic and hadronic nature, see e.g. \cite{Brodsky:1988,Koch:1990}. At RHIC, charmonium has been measured for several collision systems, both at forward and mid-rapidities, and these measurements provide unique insights into the production and interaction of charmonium in nuclear environments at very high energies. We present an approach based on Glauber-Gribov theory, which encompasses several nuclear effects for $J/\psi$ suppression in $dA$ collisions, supplemented with additional interaction with comovers in the final state in $AA$, which also allows for secondary $J/\psi$ production from recombination. \section{Baseline: charmonium in $dA$ collisions} \begin{figure}[b] \begin{center} \vspace{-0.6cm} \includegraphics[scale=0.35]{Figure_1.eps}% \vspace{-0.4cm} \caption{\label{fig:dAuJpsi} Rapidity dependence of $J/\psi$ suppression for minimum bias d+Au collisions at RHIC and predictions for p+Pb collisions at LHC. Data are taken from \cite{Adler:2005ph,Adare:2007gn}.} \vspace{-0.6cm} \end{center} \end{figure} At lower energies, nuclear suppression of charmonium in $pA$ collisions was attributed to the successive interaction of the produced $J/\psi$ (or, rather, the pre-resonant $c\bar{c}$ pair) with the surrounding nuclear matter.
In the Glauber model, this mechanism could be quantified by an absorptive cross section, $\sigma_{abs}^\psi$, which was found to be about 5 mb at $\sqrt{s} = 19$ GeV \cite{Abreu:2002rn}. Within this semiclassical picture, most models predicted a growth of absorption with energy, see e.g. \cite{Bedjidian:2003}. On the contrary, measurements of $J/\psi$ production in d+Au collisions at RHIC \cite{Adler:2005ph,Adare:2007gn} revealed a significant reduction of nuclear absorption compared to measurements at lower energies. The RHIC results signal a breakdown of the semiclassical probabilistic picture of ordered multiple scattering, and are in line with predictions from the relativistic Glauber-Gribov theory of nuclear interactions \cite{Boreskov:1993,Braun:1998}. Above a certain critical energy $E_M$ the incoming hadron can fluctuate into a state containing both the heavy $c\bar{c}$ state and light quarks and gluons long before the collision with the nucleus takes place. In the coherent limit, both the soft partons of the fluctuation and the heavy $c\bar{c}$ system itself can interact almost simultaneously with several nucleons in the target. Schematically, the former process leads to shadowing of nuclear parton densities and dominates at mid-rapidity, while the latter imposes limits from energy-momentum conservation in the forward region. In other words, the diagrams involving multiple scattering ordered in the longitudinal direction, the so-called Glauber-type diagrams, are suppressed at $E > E_M$ and hence nuclear absorption drops out. Note that we do not assume any diminution of the absorptive cross section. We refer to \cite{Capella:2007,Arsene:2008} for further details on the Glauber-Gribov theory and analysis of RHIC data.
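For orientation, the semiclassical absorption picture discussed above amounts to a one-line exponential attenuation estimate. The sketch below is ours, not part of the model used in this work; the path length and the normal nuclear density $\rho_0 = 0.17$~fm$^{-3}$ are round illustrative numbers, with $\sigma_{abs} = 5$~mb taken as the order of magnitude quoted above:

```python
import math

def glauber_survival(sigma_abs_mb, path_length_fm, rho0=0.17):
    """Semiclassical Glauber estimate of the J/psi survival probability:
    exponential attenuation over a path of nuclear matter at density rho0
    (in fm^-3).  Conversion: 1 mb = 0.1 fm^2."""
    sigma_fm2 = 0.1 * sigma_abs_mb
    return math.exp(-rho0 * sigma_fm2 * path_length_fm)

# sigma_abs ~ 5 mb over ~5 fm of nuclear matter -> roughly 35% absorption
survival = glauber_survival(5.0, 5.0)
```

Within this picture a growth of $\sigma_{abs}$ with energy would steadily strengthen the suppression; the RHIC data discussed above show the opposite trend.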
In figure~\ref{fig:dAuJpsi} we compare calculations with gluon shadowing \cite{Tywoniuk:2007} alone (solid curve) and with energy-momentum conservation added (dash-dotted curve) to data on $J/\psi$ suppression in d+Au collisions at $\sqrt{s} = 200$ GeV \cite{Adler:2005ph,Adare:2007gn}. This constitutes the baseline in the search for the origin of anomalous suppression in $AA$ collisions. \section{Charmonium recombination and dissociation in $AA$} \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth,height=6cm]{Figure_2_left.eps}% \includegraphics[width=0.45\textwidth,height=6cm]{Figure_2_right.eps} \vspace{-0.4cm} \caption{\label{fig:AuAuJpsi} Centrality dependence of $J/\psi$ suppression in Au+Au collisions at RHIC at mid- (left figure) and forward (right figure) rapidities. Data are taken from \cite{Adare:2007pr}.} \vspace{-0.6cm} \end{center} \end{figure} One finds a large density of particles in central $AA$ collisions at RHIC. In addition to the nuclear effects present in the smaller $pA$ collisions described above, one should also take into account the possible interaction of $J/\psi$'s with co-moving matter produced in the collision. This interaction can be quantified by a simple rate equation which, assuming boost-invariant longitudinal expansion, takes the form \begin{eqnarray} \label{eq:comov} \tau \frac{d N_{J/\psi}}{d \tau} = - \sigma_{co} \left[N^{co} N_{J/\psi} \,-\, N_c N_{\bar{c}} \right] \;, \end{eqnarray} where $N^{co}$, $N_{J/\psi}$ and $N_{c (\bar{c})}$ denote the densities of comovers, hidden charm and open charm at the impact parameter of the initially produced $J/\psi$, respectively. The first term in (\ref{eq:comov}) is responsible for charmonium dissociation while the second gives rise to secondary $J/\psi$ production due to recombination.
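Equation (\ref{eq:comov}) can be integrated numerically once a dilution law for the densities is assumed. The sketch below is illustrative only: it uses forward Euler with $N^{co}\propto\tau_0/\tau$ (boost-invariant expansion), and the function name, toy units, initial densities and proper-time window are ours, not the values entering the actual calculation:

```python
def jpsi_evolution(n_psi0, n_co0, n_cc0, sigma_co,
                   tau0=1.0, tau_f=10.0, steps=10000):
    """Forward-Euler integration of the comover rate equation
    tau dN_psi/dtau = -sigma_co (N_co N_psi - N_c N_cbar),
    with the comover density diluting as tau0/tau (an assumption)."""
    dtau = (tau_f - tau0) / steps
    n_psi, tau = n_psi0, tau0
    for _ in range(steps):
        n_co = n_co0 * tau0 / tau
        n_cc = n_cc0 * (tau0 / tau) ** 2  # product N_c * N_cbar, each diluting
        n_psi += -sigma_co / tau * (n_co * n_psi - n_cc) * dtau
        tau += dtau
    return n_psi
```

With $n_{cc0}=0$ the dissociation-only solution is $N_{J/\psi}(\tau_f)=N_{J/\psi}(\tau_0)\exp[-\sigma_{co}N^{co}_0\tau_0(1/\tau_0-1/\tau_f)]$, while a nonzero open-charm density feeds the yield back up: this is the recombination effect that grows in importance from RHIC to LHC.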
The interaction cross section $\sigma_{co}$ was found at SPS to be 0.65 mb in a model where charmonium recombination was neglected due to the small density of open charm \cite{Armesto:1999}. Calculations including both terms in (\ref{eq:comov}) and the same $\sigma_{co}$ were presented in \cite{Capella:2007aa}. The density of charmonium and open charm at RHIC was inferred from measurements in $pp$ collisions at the same energy \cite{Adare:2007,Adare:2006} (the density of open charm at forward rapidity was taken from PYTHIA). Results for $J/\psi$ suppression in Au+Au collisions at both mid- and forward rapidities ($\eta = 2.2$) at RHIC are compared to experimental data \cite{Adare:2007pr} in figure~\ref{fig:AuAuJpsi}. Note that no parameters were fitted to obtain such good agreement with the experimental data. In particular, the stronger suppression at forward rapidities is a result of strong initial state suppression and smaller recombination. Calculations for Cu+Cu collisions at the same energy also match the experimental data. We refer to \cite{Capella:2007aa} for further details of the calculation. \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth,height=6.5cm]{Figure_3.eps} \vspace{-0.4cm} \caption{\label{fig:PbPbJpsi} Predictions for the centrality dependence of $J/\psi$ suppression at mid-rapidity in Pb+Pb collisions at LHC.} \vspace{-0.6cm} \end{center} \end{figure} Recombination effects will be of crucial importance in Pb+Pb collisions at $\sqrt{s} = 5.5$ TeV. Predictions for $J/\psi$ suppression at LHC have been made assuming $d \sigma_{c\bar{c}} \big/ d y \approx 1$~mb at mid-rapidity and the non-diffractive $pp$ cross section $\sigma_{pp} = 59$ mb \cite{Capella:2007aa}, with $\sigma_{co}$ kept fixed. This corresponds to $C = 2.5$ in figure~\ref{fig:PbPbJpsi}. In our model, this amounts to recombination effects four times stronger at LHC than at RHIC.
The strong and almost impact-parameter-independent $J/\psi$ suppression is mainly due to strong initial state gluon shadowing, depicted with a dotted curve in figure~\ref{fig:PbPbJpsi}, and to the large density of comovers, which leads to strong dissociation. This is in stark contrast to predictions of enhanced production of $J/\psi$ obtained in a model assuming thermalization of the heavy quarks with the entire partonic system formed at a given impact parameter \cite{Andronic:2006}. \section*{References}
\section{Introduction} The interplay between superconductivity and magnetism is still one of the main topics in the field of high-temperature superconductivity \cite{Mazin10,Lynn2009,pagl10,scal12a,tran14}. While commensurate antiferromagnetic (AF) order appears to compete with superconductivity, magnetic excitations are widely believed to be important in mediating electron pairing in many high-$T_c$ superconductors \cite{Tranquada2004,Gxu2009,Rossat1991,Bourges1996,Dai2000,Fong1999,Hayden2004,Vignolle2007,Hinkov2007np,pagl10,scal12a,tran14,lums10r}. One of the most important signatures of the coupling between magnetic excitations and superconductivity is the ``spin resonance'', where magnetic intensity detected by neutron scattering at the resonance energy exhibits a sharp enhancement when the system is cooled into the superconducting (SC) state \cite{Christianson2008,Christianson2013,Lumsden2009prl,Chis2009prl,Inosov2010nf,Qiu2009,Wen2010H,yu09}. In many Fe-based superconductors (FBS), such as the `122' \cite{Christianson2008,Lumsden2009prl,Chis2009prl,Lis2009prb,Inosov2010nf},`1111' \cite{Wakimoto2010} and `111' families \cite{Zhang2013,Zhang2015}, the magnetic order in the parent compound~\cite{Dai2015} corresponds to the stripe antiferromagnet (SAF), characterized by the in-plane wave vector $Q_{\rm SAF}=(0.5,0.5)$, and the spin-resonance in the SC compositions appears at the same location in momentum space. This is not the case in FeTe$_{1-x}$Se$_{x}$, which is known as the `11' system~\cite{Bao2009,Lynn2009,Lumsden2010nf,zxu2010fetese1,Xu2016}. Here the parent compound Fe$_{1+y}$Te exhibits long-range AF order made up of double stripes of parallel spins within each Fe layer. 
Based on a crystallographic unit cell containing two Fe atoms, the in-plane component of this double-stripe antiferromagnetic (DSAF) order is characterized by the wave vector $Q_{\rm DSAF}=(0.5,0)$, with spin-wave type magnetic excitations emerging from $Q_{\rm DSAF}$~\cite{Lipscombe2011,Lumsden2010nf,Zaliznyak2011}. When sufficient Se is substituted to yield bulk superconductivity, a spin resonance is observed, but it occurs at $Q_{\rm SAF}$ as in the other FBS families \cite{Qiu2009,Lumsden2010nf,Lee2010,Xu2016}. The magnetic excitations tend to disperse out from $Q_{\rm SAF}$ in the transverse directions, with the bottom of the dispersion being around 5~meV, and the spin resonance occurs around $\hbar\omega=6.5$~meV. A unique feature of FeTe$_{1-x}$Se$_x$ is that the character of the low-energy magnetic excitations changes dramatically with temperature \cite{Xu2016,Xu2017}. Well above the superconducting critical temperature, $T_c$, the low-energy magnetic excitations shift away from $Q_{\rm SAF}$ and instead develop the signature of short-range correlations associated with a local DSAF modulation. As shown in Fig.~\ref{fig:6}, the long-range DSAF order in Fe$_{1+y}$Te$_{1-x}$Se$_x$ disappears at $x\approx0.1$; it is associated with an orthorhombic lattice distortion that disappears at the same Se concentration \cite{mart10b}. In as-grown crystals, bulk superconductivity appears for $x\gtrsim0.3$ \cite{Katayama2010,liu10}, while glassy, short-range DSAF order coexists with weak, inhomogeneous superconductivity for $0.1<x<0.3$. Studies deliberately varying the concentration $y$ of excess Fe have shown that the excess is correlated with the suppression of superconductivity, especially in this intermediate range of $x$ \cite{Bendele2010,rodr11}. By reducing the excess Fe in such samples, one can drive the system towards SC \cite{Bendele2010,rodr11,Dong2011}. 
There are several different annealing methods available for this purpose, including annealing in air, oxygen, Se, Te and S vapor \cite{Koshika2013,Dong2011,Sun2013}. In this work, we use Te vapor~\cite{Koshika2013}, which avoids the introduction of extra elements such as oxygen while maintaining a high Te concentration. Here, we report a systematic study of the magnetic correlations in single crystals of Fe$_{1+y}$Te$_{1-x}$Se$_{x}$ with $x=0.1$ and 0.2 that have been annealed in Te vapor for sufficient time to yield bulk superconductivity. Our neutron scattering measurements reveal low-energy magnetic excitations with a {\bf Q} dependence characteristic of short-range DSAF correlations, as seen previously in FeTe$_{0.87}$S$_{0.13}$ \cite{zali15}. The new feature here is that we also observed a spin gap and resonance for $T<T_c$. The increase in signal associated with the resonance has a {\bf Q} dependence that appears to mix the characteristics of SAF and DSAF correlations, which, in turn, is different from the pure SAF spin correlations observed at low temperature in other SC FeTe$_{1-x}$Se$_x$ samples \cite{Qiu2009,Lumsden2010nf,Lee2010,Xu2016}. This provides an interesting test case for theoretical models that connect the magnetism and superconductivity. \section{Experimental Details} \begin{figure}[t] \includegraphics[width=0.9\linewidth]{fig0.ps} \caption{(Color online) Phase diagram of FeTe$_{1-x}$Se$_{x}$ as a function of Se content ($x$) and temperature ($T$). The red circles represent the N\'eel temperature ($T_{N}$); blue circles represent the as-grown samples' superconducting onset temperature $T_{c}$; purple circles represent the superconducting onset temperature in the treated samples. Data from Refs.~\onlinecite{Katayama2010,Dong2011} are included here.
} \label{fig:6} \end{figure} \begin{figure}[t] \includegraphics[width=0.9\linewidth]{fig1.ps} \caption{(Color online) (a) Zero-field-cooled magnetization measurements by SQUID with a 10~Oe field perpendicular to the $a$-$b$ plane for all samples: $x = 0.1$ (green solid line) and $x = 0.2$ (purple solid line). (b) Diagram of reciprocal space indicating the characteristic wave vectors ${\bf Q}_{\rm SAF}$ and ${\bf Q}_{\rm DSAF}$. (c) Elastic neutron-scattering measurements performed on $x = 0.1$ sample around magnetic order peak at $(0.5,0,0.5)$ measured on BT-7. Intensity profiles along [100] direction ($H$ scans) at temperatures below ($T=3$~K, red) and above $T_{N}$ (50~K, blue). The horizontal (black) bar represents the $H$ resolution. (d) The integrated magnetic peak intensity (from fitted Gaussian peak intensity) vs temperature. The error bars represent the square root of the number of counts.} \label{fig:1} \end{figure} \begin{table} \caption{List of the Fe$_{1+y}$Se$_x$Te$_{1-x}$ samples used in our measurements, with their Fe composition before and after annealing in Te vapor measured by EDX spectroscopy, and the superconducting transition temperature, $T_{c}$, obtained from the magnetic susceptibility measurements in Fig.~\ref{fig:1}(a). } \begin{ruledtabular} \begin{tabular}{cccccc} Sample & As-grown & Annealed & $T_{c}$ \\ & & & (K) \\ \hline x = 0.1 & y=0.025 & y=-0.027 & 12 \\ x = 0.2 & y=0.096 & y=0.045 & 13 \\ \end{tabular} \end{ruledtabular} \label{tab:1} \end{table} The single-crystal samples used in this experiment were grown by a unidirectional solidification method~\cite{JWen2011} at Brookhaven National Laboratory. The as-grown single crystals, which contained excess Fe and were not superconducting ~\cite{Katayama2010}, were annealed at 400 $^{\circ}$C for 10 days in Tellurium (Te) vapor~\cite{Koshika2013}. 
The Fe excess, $y$, before and after annealing was measured by energy-dispersive X-ray (EDX) spectroscopy; the results listed in Table~I indicate that the Te-vapor annealing caused a substantial reduction in $y$. The bulk susceptibilities, measured with a superconducting quantum interference device (SQUID) magnetometer, are shown in Fig.~\ref{fig:1}(a). They demonstrate a bulk superconducting response for each sample, though with shielding fractions below 100\%. Neutron scattering experiments were carried out on the triple-axis spectrometers BT-7~\cite{Lynn2012} at NIST Center for Neutron Research (NCNR) and HB-1 located at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). At BT-7 we used beam collimations of open-$80'$-S-$80'$-$120'$ (S = sample), a fixed final energy of 14.7~meV, and two pyrolytic graphite (PG) filters after the sample to reduce higher-order neutrons; at HB-1 we used collimations of $48'$-$80'$-S-$80'$-$120'$ with the same fixed final energy and one PG filter after the sample. Except for the elastic scattering measurements in Fig.~\ref{fig:1}, which were performed in the $(H0L)$ scattering plane, all inelastic scattering measurements were performed in the $(HK0)$ scattering plane. The lattice constants for these samples are $a = b \approx 3.8$~\AA, and $c \approx 6.1$~\AA, using a unit cell containing two Fe atoms. The wave vectors are specified in reciprocal lattice units (r.l.u.) of $(a^*, b^*, c^*) = (2\pi/a, 2\pi/b, 2\pi/c)$. \section{Results} We have performed a series of neutron scattering measurements on the Te-vapor annealed superconducting samples of FeTe$_{1-x}$Se$_x$. We started with elastic measurements to test for static magnetic order in the $x=0.1$ sample. In Fig.~\ref{fig:1} (c), we plot $H$ scans through the $ {\bold Q}_{\rm AF} \approx (0.5, 0, 0.5)$ wave vector at $T =3$~K and 50~K.
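As a practical aside (our own illustration; the helper name and the rounded lattice constants are ours), wave vectors quoted in r.l.u. convert to absolute momentum transfers as follows:

```python
import math

def q_magnitude(h, k, l, a=3.8, c=6.1):
    """|Q| in inverse angstroms for (H, K, L) in r.l.u. of the tetragonal
    two-Fe unit cell with a = b ~ 3.8 A and c ~ 6.1 A."""
    return 2.0 * math.pi * math.sqrt((h / a) ** 2 + (k / a) ** 2 + (l / c) ** 2)

q_saf = q_magnitude(0.5, 0.5, 0)   # ~1.17 A^-1 at Q_SAF = (0.5, 0.5, 0)
q_dsaf = q_magnitude(0.5, 0, 0)    # ~0.83 A^-1 at Q_DSAF = (0.5, 0, 0)
```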
The magnetic peak observed at low temperature is broader than the experimental resolution, and the peak center is slightly incommensurate, consistent with previous results \cite{Katayama2010,Wen2009}. The integrated intensity of this peak, shown in Fig.~\ref{fig:1}(d), gradually decreases upon heating and disappears around 40~K, consistent with susceptibility measurements on air-annealed superconducting crystals with similar $x$ \cite{Dong2011}. As we will see next, the low-energy inelastic magnetic scattering bears no simple connection to these elastic peaks, and hence we believe that the static order occurs in a minority of the sample volume that is likely segregated from the superconducting regions. We note that a recent scanning tunneling microscopy study on an $x=0.1$ sample found evidence for local coexistence of AF order and pairing gaps \cite{alur17}; however, that sample did not exhibit the degree of bulk superconducting order found in our crystal. \begin{figure}[t] \hskip20pt\includegraphics[width=0.9\linewidth]{fig2.ps} \caption{(Color online) Contour intensity maps of magnetic neutron scattering intensity measured on HB-1 in $(HK0)$ plane at energy transfer $\hbar\omega=6.5$~meV. The maps are plotted for the $x = 0.1$ (left column) and $x = 0.2$ (right column) samples at sample temperatures: (a), (b) 5~K, (c), (d) 20~K and (e), (f) 100~K. The data have been folded from the first quadrant ($H > 0$, $K> 0$). Intensity scale is the same in all panels and the data have been smoothed.} \label{fig:2} \end{figure} Next, we consider measurements of the low-energy magnetic excitations. Figure~\ref{fig:2} shows color contour plots of spin excitations measured in the $(HK0)$ plane at an energy of 6.5~meV, which corresponds to the spin-resonance energy at optimal doping in this compound~\cite{Qiu2009}. Panels in the left column show data from the $x=0.1$ sample at temperatures of 5, 20, and 100~K.
The data in the right column for $x=0.2$ correspond to lower counting statistics, but are qualitatively similar to those for $x=0.1$. At $T = 5$~K, well below $T_c$, the data are quite different from the simple commensurate ellipse shape at $ {\bf Q}=(0.5, 0.5)$ seen previously for optimal doping \cite{Qiu2009,Lee2010,Xu2016}. Instead, they closely resemble the model of short-range double-stripe correlations proposed in a study of FeTe$_{0.87}$S$_{0.13}$ by Zaliznyak {\it et al.} \cite{zali15}. Note that the intensity pattern associated with the short-range correlations is not characterized by a well-defined wave vector; rather, it involves a distribution of spectral weight that is broad in {\bf Q} and that, in the vicinity of ${\bf Q}_{\rm SAF}$, appears incommensurate. \begin{figure}[t] \hskip20pt\includegraphics[width=0.9\linewidth]{fig4.ps} \caption{(Color online) Contour intensity maps of temperature difference of magnetic neutron scattering intensity measured on HB-1 in $(HK0)$ plane at energy transfer $\hbar\omega=6.5$~meV. The maps are plotted for the $x = 0.1$ (left column) and $x = 0.2$ (right column) samples at temperature differences of: (a), (b) $5-20$~K, and (c), (d) $100-20$~K. The data have been folded from first quadrant ($H>0$, $K>0$). (e) Intensity calculated based on the same UDUD spin-plaquette model described in Ref.~\onlinecite{Xu2016,Zaliznyak2015}, with the volume ratio of interplaquette correlation being 25\% SAF and 75\% DSAF. Intensity scale is the same in all panels and the data have been smoothed.} \label{fig:3} \end{figure} The change in the scattering pattern on warming across $T_c$ is subtle, but the changes are larger when the temperature is increased to 100~K. To get a better view of the changes, temperature differences are plotted in Fig.~\ref{fig:3}. 
The difference between 5 and 20 K for the $x=0.1$ sample shown in Fig.~\ref{fig:3} (in contrast to the absolute signal at 5 K) is similar to measurements of the resonance in optimally-superconducting FeTe$_{1-x}$Se$_x$ \cite{Qiu2009,Lee2010,Xu2016}. However, the intensity maxima are not located at the commensurate $(0.5,0.5)$ positions but slightly further out in the transverse directions. One can see that the difference, which is essentially the {\bf Q} distribution of the spin resonance, appears to be highly consistent with a model calculation [Fig.~\ref{fig:3} (e)] based on the same UDUD spin plaquette model described in Refs.~\onlinecite{Xu2016,Zaliznyak2015}, with the volume ratio of interplaquette correlation being 25\% SAF and 75\% DSAF. On the other hand, the difference between 100 and 20 K bears the signature of ferromagnetic plaquettes with short-range antiferromagnetic correlations, as previously discussed for FeTe$_{0.87}$S$_{0.13}$ \cite{zali15}, where such a component was also found to be enhanced with increasing temperature. The data from the $x=0.2$ sample are less informative but are qualitatively in agreement with the $x=0.1$ data. \begin{figure}[t] \hskip20pt\includegraphics[width=0.9\linewidth]{fig3.ps} \caption{(Color online) Contour intensity maps of magnetic neutron scattering intensity in energy-momentum space along the transverse direction. The maps are plotted for the $x = 0.1$ (left column, measured on BT-7) and $x = 0.2$ (right column, measured on HB-1) samples at sample temperatures: (a), (b) 5~K and (c), (d) 20~K. The data have been smoothed.} \label{fig:4} \end{figure} To characterize the energy dispersion in the vicinity of the resonance, we plot in Fig.~\ref{fig:4} the energy dependence of the magnetic scattering along the transverse direction, ${\bf Q}=(H,1-H,0)$, around $H=0.5$.
As one can see, the low-energy dispersion in the $x=0.1$ sample takes the form of two vertical columns; in the case of $x=0.2$, the commensurate region between the columns has begun to fill in. In both cases, a comparison of the data at 5 and 20 K clearly reveals the opening of a spin gap below 5 meV and the intensity enhancement of the resonance above that. \begin{figure}[t] \hskip20pt\includegraphics[width=0.8\linewidth]{fig5.ps} \caption{(Color online) Constant energy scans of magnetic scattering intensity along the transverse direction at excitation energy 6.5~meV for the (a) $x = 0.1$ and (b) $x = 0.2$ samples at sample temperatures: 3~K (red circles) and 20 K (blue circles). The wave vector dependence of the spin resonance from the temperature difference $3-20$~K is plotted in (c) $x = 0.1$ and (d) $x = 0.2$. The purple lines are model calculation based on the same UDUD spin plaquette model described in Ref.~\onlinecite{Xu2016,Zaliznyak2015}, with the volume ratio of interplaquette correlation being 25\% SAF and 75\% DSAF in (c) and 20\% SAF and 80\% DSAF in (d). The green dashed lines are a similar model calculation based on 100\% SAF correlations. (e) and (f) show the temperature dependence of the spin resonance from peak intensities at $(0.6, 0.4, 0)$ at 6.5~meV for respective samples. The error bars represent the square root of the number of counts. } \label{fig:5} \end{figure} For a more detailed look at the resonance, Fig.~\ref{fig:5} shows constant-energy scans along the transverse direction at 6.5~meV obtained at 3 and 20~K. By subtracting the 20~K data from the 3~K data, the {\bf Q} dependence of the intensity enhancements is displayed in Fig.~\ref{fig:5}(c) and (d). The response is strongly peaked at incommensurate positions with incommensurability $\sim 0.08$. 
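An incommensurability of this kind is typically extracted by fitting a symmetric pair of peaks about $H=0.5$ to the transverse scans. The following self-contained sketch is ours (synthetic data, illustrative peak parameters, and a brute-force least-squares search rather than the fitting procedure actually used):

```python
import math

def two_peak_scan(h, amp, delta, width, bg):
    """Pair of Gaussians at H = 0.5 +/- delta on a flat background,
    mimicking a transverse constant-energy scan."""
    g = lambda c: amp * math.exp(-((h - c) ** 2) / (2.0 * width ** 2))
    return g(0.5 - delta) + g(0.5 + delta) + bg

# synthetic scan generated with delta = 0.08 r.l.u.
hs = [0.30 + 0.005 * i for i in range(81)]
data = [two_peak_scan(h, 100.0, 0.08, 0.02, 5.0) for h in hs]

def fit_delta(hs, data):
    """Recover delta by scanning candidate values and minimizing the sum
    of squared residuals (the other parameters are held fixed for brevity)."""
    best_d, best_sse = 0.0, float("inf")
    for k in range(1, 300):
        d = 0.0005 * k
        sse = sum((two_peak_scan(h, 100.0, d, 0.02, 5.0) - y) ** 2
                  for h, y in zip(hs, data))
        if sse < best_sse:
            best_d, best_sse = d, sse
    return best_d
```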
One can clearly see the discrepancy between model calculations based on a phase with 100\% SAF correlations [green dashed lines in panels (c) and (d)] and the measured $q$-distribution of the resonance. Instead, only when we consider a phase with mixed SAF and DSAF correlations, can the incommensurate response be reproduced. As shown in the insets of Fig.~\ref{fig:5}(c) and (d), the spin resonance intensity starts to rise on cooling below 12~K in the $x = 0.1$ sample and below 13~K in the $x = 0.2$ sample, consistent with the $T_{c}$ values obtained from the susceptibility measurements in Fig.~\ref{fig:1}(a). \section{Discussion} In our Te-rich crystals of FeTe$_{1-x}$Se$_x$, we have observed low-energy magnetic excitations consistent with short-range double-stripe spin correlations coexisting with bulk superconductivity. In evaluating this coexistence, we must certainly take account of inhomogeneity. For example, we also see elastic magnetic scattering consistent with intermediate-range DSAF order, which we expect is in a limited volume of each sample, spatially segregated from the superconductivity. It is possible that the Te-vapor annealing was not done for a sufficiently long time to homogeneously modify all regions of our large crystals. Of course, there is always the intrinsic inhomogeneity associated with the difference in local Fe-Te and Fe-Se bond lengths \cite{louc10} and the tendency to spatial segregation \cite{hu11}. The key observation, however, is that the magnetic scattering changes across $T_c$, developing both a spin gap and resonance peak. The resonance intensity, which is not sensitive to any possible nonsuperconducting portion of the sample, appears at incommensurate positions, slightly away from (0.5,0.5). Measuring the resonance provides a direct probe of the SC portion of the sample(s) even with a nonsuperconducting portion present. 
Our results imply that the spin correlations from the SC portion of our Te-vapor treated samples exhibit a mixed DSAF and SAF character, distinct from the typical behavior in SC FeTe$_{1-x}$Se$_x$ systems at low temperature. This provides a clear indication of superconductivity developing locally within regions where the spin correlations have substantial DSAF character. The low-temperature two-column dispersion along $(H,1-H,0)$ has been observed previously, but in association with the suppression of superconductivity in Cu-doped FeTe$_{0.5}$Se$_{0.5}$ \cite{xu12a}. The same dispersion is also seen at high temperatures in samples with optimal superconductivity \cite{xu12a,tsyr12,Xu2016}. It was previously pointed out that the thermal evolution of the spin correlations is connected to the change in the tetrahedral bond angles \cite{Xu2016,Xu2017} which results in changes in hybridization between Fe $3d$ orbitals and ligand $p$ orbitals \cite{yin11b}. Of course, the average bond angles also change with Se concentration. It appears that we can roughly correlate the pattern of low-energy magnetic scattering in reciprocal space with the ratio of lattice parameters, $a/c$. The interesting point is that, while the {\bf Q} dependence of the low-energy magnetic scattering may vary significantly with composition, the resonance always appears in the vicinity of $(0.5,0.5,0)$. The general pattern of the magnetic scattering in our samples is not compatible with simple Fermi-surface nesting arguments \cite{Yin2010}; nevertheless, the wave vectors at which the resonance occurs connect Fermi surface pockets about the $\Gamma$ and $M$ points of the Brillouin zone where the superconducting gap appears \cite{sark17,maie09,scal12a}. The magnetic excitations certainly appear to interact with the superconducting electrons; however, the general relationship between the magnetism and superconductivity in these samples is less clear. 
Analyzing this relationship, taking account of the present results, could lead to new insights into the pairing mechanism in iron chalcogenides. \section{Summary} We have used Te-vapor annealing to induce bulk superconductivity in crystals of Fe$_{1+y}$Te$_{1-x}$Se$_x$ with $x=0.1$ and 0.2. Neutron scattering measurements reveal low-energy magnetic excitations with a wave vector dependence characteristic of short-range DSAF spin correlations. While the presence of such correlations at low temperature has previously been associated with suppressed superconductivity, we find that the excitations in the vicinity of, but not exactly at, $(0.5,0.5,0)$ develop a spin gap and resonance peak. Thus, it appears that superconductivity can coexist with magnetic correlations different from the common stripe form. These results provide an interesting test case for understanding the relationship between magnetism and superconductivity in the iron chalcogenides. \section{Acknowledgments} The work at Brookhaven National Laboratory and Lawrence Berkeley National Laboratory was supported by the Office of Basic Energy Sciences (BES), Division of Materials Science and Engineering, U.S. Department of Energy (DOE), under Contract Nos.\ DE-SC0012704 and DE-AC02-05CH1123, respectively. A portion of this research used resources at the High Flux Isotope Reactor, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory.
\section{Introduction} \bigskip\noindent The moment problem is an interesting topic in Functional Analysis, especially in measure theory, and it has important applications in probability theory.\\ \noindent Although there is a significant number of research works on this problem in probability theory (see \cite{gutt}, \cite{billinsgleymp}, \cite{loeve} and references therein, etc.), the most important source on that question, when treated in its generality, is \cite{shohat}. To our knowledge, there is no other full exposition of that theory beyond that main reference.\\ \noindent We already pointed out that this problem is used in Probability Theory, mostly under the following special form: given a probability law $\mathbb{P}_X$ on $\mathbb{R}$ having moments $(m_n)_{n\geq 0}$ of all orders, does the sequence $(m_n)_{n\geq 1}$ uniquely determine the probability law $\mathbb{P}_X$? This is a consequence of the moment problem, which goes back to Stieltjes (see \cite{shohat} for references on all particular forms of that problem) and is formulated as follows:\\ \Bin (\textbf{Stieltjes's problem}) [Around 1890, see \cite{shohat} and references therein]. Given $(m_n)_{n\geq 1} \subset \mathbb{R}_{+}$, does there exist a finite measure $\rho$ supported by $\mathcal{V}=\mathbb{R}_{+}$ such that \begin{equation} \forall n\geq 0, \ m_n=\int_{\mathcal{V}} x^n \ d\rho(x). \label{mp_01} \end{equation} \noindent Of course, $m_0=\rho(\mathbb{R})=\rho(\mathcal{V})$, so the term $m_0\neq 0$ is the total mass of $\rho$. Later, the same problem was set for a general sequence of real numbers and for $\mathcal{V}=\mathbb{R}$ or $\mathcal{V}=[0,1]$, and it is named after Hamburger and Hausdorff respectively.\\ \Bin The general solution of the problem, when the support $\mathcal{V}$ is contained in a closed set $S_0$, is given in \cite{shohat}.
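As a sanity check of \eqref{mp_01}, consider the measure $d\rho(x)=e^{-x}\,dx$ on $\mathcal{V}=\mathbb{R}_{+}$, whose Stieltjes moment sequence is $m_n=n!$ (by integration by parts). The short numerical sketch below, which we add only as an illustration, recovers these moments with a crude trapezoidal rule:

```python
import math

def moment(n, upper=60.0, step=1e-3):
    """Trapezoidal approximation of m_n = integral over [0, +inf) of x^n e^{-x} dx.

    The integrand is negligible beyond `upper` for the small n used here.
    """
    N = int(upper / step)
    total = 0.5 * (0.0**n + upper**n * math.exp(-upper))  # boundary terms
    for k in range(1, N):
        x = k * step
        total += x**n * math.exp(-x)
    return total * step

# The moment sequence of d(rho) = e^{-x} dx on R_+ is m_n = n!.
for n in range(6):
    approx = moment(n)
    assert abs(approx - math.factorial(n)) < 1e-3 * math.factorial(n) + 1e-6
    print(n, round(approx, 6))
```

The converse question, whether the sequence $(n!)_{n\geq 0}$ determines $\rho$ uniquely, is precisely the uniqueness side of the problem studied in this paper.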
From there, we face two major concerns about the exposition of the general theory.\\ \bigskip\noindent First, the paper \cite{shohat}, in our view, is written with Stieltjes integrals and is based on the rudimentary tools of measure theory and weak convergence available at that time, in 1943. During the preparation of a master's degree dissertation by the second author, we found out that a number of arguments used in \cite{shohat} may be replaced by arguments that are now common and more appropriate. Essentially, the authors used the Stieltjes integration, the notions of \textit{substantial} continuity and \textit{substantial} convergence, extension theorems, etc., all tools that seem obsolete now.\\ \noindent By using instead the modern Lebesgue-Stieltjes integration, the modern theory of weak convergence, the extension theorems of measures on semi-algebras or on algebras, the Caratheodory theorem, for example, in one word, measure theory arguments, we think that this masterpiece paper on the topic can be rendered into a far more readable text for mathematicians of our modern days.\\ \noindent Secondly, the proofs of \cite{shohat} are directly given on $\mathbb{R}^d$, $d\geq 1$.
By comparison, classical graduate textbooks in probability treat the moment problem in one dimension only, and common readers are not used to a multivariate approach to the problem of moments.\\ \noindent Based on the importance of the question and its connections to the characterization of weak convergence through the convergence of moments (when they all exist), we wish to produce a general introduction to the question and entirely expose it in the light of modern tools, under the following organization:\\ \noindent (1) Treating the one-dimensional case entirely, with full details, and addressing weak convergence through the convergence of moments (as in \cite{billinsgleymp} and \cite{loeve}).\\ \noindent (2) By exposing the ideas of \cite{shohat}, our contribution is three-fold:\\ \noindent (2a) We provide relevant complements and a variety of modern arguments that make the text readable just after a course in Measure Theory and Probability Theory. We intend to formulate the main theorem of \cite{shohat} in the frame of measure theory with the help of some well-known criteria, and we include the needed mathematical background. At the end, we hope that a graduate student will be able to read it more comfortably.\\ \noindent (2b) In the proofs themselves, we bring more clarity on the linear spaces on which the linear mapping is constructed (see Step 1 in page \pageref{step1}). In the original paper, the role of $r$ is ambiguous.
Actually, the right space should be the class of functions bounded by finite linear combinations, with non-negative coefficients, of functions $u\mapsto A(u_1^{2r_1}+\cdots+u_d^{2r_d})+B$, $A\geq 0$ and $B\geq 0$ (in dimension $d\geq 1$).\\ \noindent (2c) All along the proof, the right modern tool is used, in particular the Fatou-Lebesgue theorem and the construction of the Lebesgue integral for measurable functions of constant sign.\\ \bigskip\noindent Let us organize the paper as follows.\\ \noindent In the next Section \ref{mp_maths}, we state the tools we are going to use from the modern theory of distribution functions, Lebesgue-Stieltjes integration, limit theorems, etc.\\ \noindent In Section \ref{mp_r_proba}, we deal with the moment problem within Probability Theory on $\mathbb{R}$ and link it to weak convergence, following mainly \cite{billinsgleymp}.\\ \noindent In Section \ref{mp_r_shohat}, we expose the full proof of \cite{shohat} on $\mathbb{R}$.\\ \newpage \section{Mathematical background} \label{mp_maths} \noindent \textbf{A - Distribution functions on $\mathbb{R}^d$, $d\geq 1$}.\\ \noindent The properties we summarize in this Part can be found in major sources such as \cite{loeve}, \cite{billingsley}, etc., in \cite{ips-mestuto-ang} (Chapter 11, page 664) for the links between distribution functions and Lebesgue-Stieltjes measures, and in \cite{ips-wcia-ang} for $F$-continuous intervals.\\ \noindent \textbf{A1 - Recalls of definitions}. Let us introduce the following internal operation on $\mathbb{R}^d$: for $x=(x_1,...,x_d)$ and $y=(y_1,...,y_d)$ in $\mathbb{R}^d$, \begin{equation} x \ast y=(x_1y_1, x_2y_2, ..., x_dy_d). \label{prodRd} \end{equation} \bigskip\noindent Let us consider a real-valued function $F$, defined as follows: \begin{equation*} \begin{array}{ccc} \mathbb{R}^{d} & \rightarrow & \mathbb{R} \\ t & \mapsto & F(t).
\end{array} \end{equation*} \noindent For any interval of $\mathbb{R}^d$ of the form $$ ]a,b]=\prod_{i=1}^{d} ]a_i,b_i] $$ \noindent for $a=(a_{1},...,a_{d})\leq b=(b_{1},...,b_{d})$, in the sense that $a_i\leq b_i$ for all $i\in \{1,\cdots,d\}$, we define its $F$-volume by \begin{equation*} \Delta F(a,b)=\sum_{\varepsilon \in \{0,1\}^d}(-1)^{s(\varepsilon)}F(b+\varepsilon \ast (a-b)), \end{equation*} \Bin where, for $\varepsilon=(\varepsilon_1,\cdots, \varepsilon_d)\in \{0,1\}^d$, $s(\varepsilon)=\varepsilon_1+\cdots+\varepsilon_d$. \bigskip \noindent An expanded version of that formula is: \begin{equation*} \Delta F(a,b)=\sum_{\varepsilon =(\varepsilon _{1},...,\varepsilon _{d})\in \{0,1\}^{d}}(-1)^{s(\varepsilon )}F(b_{1}+\varepsilon _{1}(a_{1}-b_{1}),...,b_{d}+\varepsilon _{d}(a_{d}-b_{d})). \end{equation*} \noindent Let us try to understand the formula in a progressive way.\\ \bigskip \noindent \textbf{General rule of forming $\Delta F(a,b)$}. Let $a=(a_{1},...,a_{d})\leq b=(b_{1},...,b_{d})$ be two points of $\mathbb{R}^{d}$ and let $F$ be an arbitrary function from $\mathbb{R}^{d}$ to $\mathbb{R}$. We form $\Delta F(a,b)$ in this way. First, consider $F(b_{1},b_{2},...,b_{d})$, the value of $F$ at the right endpoint $b=(b_{1},b_{2},...,b_{d})$ of the interval $]a,b]$. Next, proceed to the replacement of the $b_{i}$'s by the corresponding $a_{i}$'s, replacing exactly one of them, next two of them, etc., and add each value of $F$ at these points with a plus sign $(+)$ if the number of replacements is even and with a minus sign $(-)$ if the number of replacements is odd.\\ \noindent We now recall the definition of a distribution function on $\mathbb{R}^d$.
\begin{definition} A function $F : \mathbb{R}^{d} \rightarrow \mathbb{R}$ is a distribution function \textit{(df)} on $\mathbb{R}^{d}$ if and only if the two following conditions hold.\\ \noindent (a) $F$ assigns non-negative volumes to cuboids, that is, $\Delta F(a,b)\geq 0$ for $a\leq b$.\\ \noindent (b) $F$ is right-continuous.\\ \noindent It is a probability distribution function \textit{(pr.df)} on $\mathbb{R}^{d}$ if and only if the following three conditions are satisfied, where (c) is composed of two sub-conditions.\\ \noindent (a) $F$ assigns non-negative volumes to cuboids.\\ \noindent (b) $F$ is right-continuous.\\ \noindent (c) $F$ satisfies\\ $(i)$ \begin{equation*} \lim_{\exists i,1\leq i\leq d,t_{i}\rightarrow -\infty}F(t_{1},...,t_{d})=0 \end{equation*} (ii) \begin{equation*} \lim_{\forall i,1\leq i\leq d,t_{i}\rightarrow +\infty }F(t_{1},...,t_{d})=1. \end{equation*} \end{definition} \bigskip\noindent The link between \textit{df}'s and Lebesgue-Stieltjes measures (LS-measures) is given by the following. We can associate with the \textit{df} $F$ a measure $\lambda_F$, called the Lebesgue-Stieltjes measure associated with $F$, which is characterized by its values on the semi-algebra $$ \mathcal{S}=\{]a,b], \ a\leq b, \ (a,b) \in \overline{\mathbb{R}}^d\}, $$ \noindent which are $$ \lambda_F(]a,b])=\Delta F(a,b). $$ \Bin If $F$ is a \textit{pr.df}, $\lambda_F$ is a probability measure. Conversely, if $m$ is a measure on $\mathbb{R}^d$ such that \begin{equation} \forall x \in \mathbb{R}^d, \ F_m(x)= m(]-\infty, x])<\infty, \label{C} \end{equation} \Bin then $F_m$ is a \textit{df} (a \textit{pr.df} if $m$ is a probability measure) such that $m=\lambda_{F_m}$.\\ \Bin \textbf{A2 - Spectrum and support}.\\ \Ni In this paper we need the notion of spectrum. First, let us define $\mathcal{O}$ as the class of all open sets in $\mathbb{R}^d$ and denote by $\mathcal{N}(x)$ the collection of neighborhoods of $x \in \mathbb{R}^d$.
The spectrum of the \textit{df} $F$ is the following set $$ s(F)=\{x \in \mathbb{R}^d, \ \forall O \in \mathcal{N}(x), \ \lambda_F(O)>0\}. $$ \Bin The point spectrum of $F$ is the set of atoms of $\lambda_F$, that is $$ ps(F)=\{x \in \mathbb{R}^d, \ \lambda_F(\{x\})>0\}, $$ \Bin \noindent and the support of $F$ is the closure $\overline{ps(F)}$ of the point spectrum $ps(F)$.\\ \noindent \textbf{A3 - Moments}. Let us define the class $\Gamma$ of all multi-indices in $\mathbb{N}^d$, that is, all the row-vectors $\alpha = (\alpha_1,\alpha_2,\cdots,\alpha_d)$ with $\alpha_i \in \mathbb{N}$ for $1\leq i\leq d$, and the class of multi-indices of level $\ell \in \mathbb{N}$, $$ \Gamma(\ell)=\{\alpha=(\alpha_1,\alpha_2,\cdots,\alpha_d) \in \Gamma, \ |\alpha|\equiv \alpha_1+ \cdots + \alpha_d=\ell\}. $$ \Bin For $u=(u_1,\cdots,u_d) \in \mathbb{R}^d$, we denote $$ u^{\alpha}= \prod_{j=1}^{d} u_j^{\alpha_j}, $$ \noindent and the function $u \mapsto u^\alpha$ is a polynomial of degree $|\alpha|$. Now we may define the moments of a \textit{df}.\\ \begin{definition} The moment of order $\alpha$ of a \textit{df} $F$ on $\mathbb{R}^d$ is the real number (whenever the integral exists) given by \begin{equation*} \mu_\alpha = \int_{\mathbb{R}^d} \prod_{j=1}^{d} u_j^{\alpha_j} d\lambda_F(u_1,\cdots, u_d)\equiv \int_{\mathbb{R}^d} u^\alpha d\lambda_F(u). \end{equation*} \end{definition} \Bin \noindent The moment problem we face in this paper amounts to the characterization of $F$ by the full field of moments $(\mu_\alpha)_{\alpha \in \Gamma}$, given that they all exist.\\ \noindent \textbf{A4 - $F$-continuous intervals}. First of all, a point $x=(x_1,\cdots,x_d)^t$ of $\mathbb{R}^d$ is a discontinuity point of $F$, that is, an element of the point spectrum $ps(F)$ of $F$, if and only if the boundary of $A_x=]-\infty, x]$ is not a $\lambda_F$-null set, i.e., \begin{equation} \lambda_F\left(\partial A_x \right)>0.
\end{equation} \Bin We recall that $$ \partial A_x=\{y=(y_1,\cdots,y_d)^t \in \mathbb{R}^d, \ \forall j\in \{1,\cdots,d\}, \ y_j\leq x_j, \ \exists j\in \{1,\cdots,d\} \ s.t. \ y_j=x_j\}. $$ \Bin Further, for any interval \begin{equation*} (a,b)=\prod\limits_{i=1}^{d}(a_{i},b_{i}) \end{equation*} \noindent of $\mathbb{R}^{d}$, we denote \begin{equation*} E(a,b)=\{c=(c_{1},...,c_{d})\in \mathbb{R}^{d},\text{ }\forall 1\leq i\leq d,(c_{i}=a_{i}\text{ or }c_{i}=b_{i})\}. \end{equation*} \Bin By using the internal product \eqref{prodRd} defined earlier, we have a compact form of $E(a,b)$ as \begin{equation} \label{sec_append_not} E(a,b)=\{b+\varepsilon*(a-b), \ \varepsilon=(\varepsilon_1,...,\varepsilon_d) \in \{0,1\}^d\}. \end{equation} \bigskip \noindent By definition, the interval $(a,b)$ is $F$-continuous if and only if $(a,b)$ is bounded and each element of $E(a,b)$ is a continuity point of $F$, that is \begin{equation*} \forall c\in E(a,b),\ \lambda_F(\partial ]-\infty ,c])=0. \end{equation*} \bigskip \noindent Let $\mathcal{U}(F)$ be the class of all $F$-continuous intervals. A key result which is very useful in weak convergence is the following proposition.\\ \begin{proposition} \label{cv.GFcontinuous} Let $F$ be any probability distribution function on $\mathbb{R}^{d}$, $d\geq 1$. Then any open set $G$ in $\mathbb{R}^{d}$ is a countable union of $F$-continuous intervals of the form $]a,b]$ or $]a,b[$, where, by definition, an interval $(a,b)$ is $F$-continuous if and only if, for any $$ \varepsilon=(\varepsilon_1, \varepsilon_2, ..., \varepsilon_d) \in \{0,1\}^d, $$ \noindent the point $$ b+\varepsilon*(a-b)=(b_1+\varepsilon_1 (a_1-b_1), b_2+\varepsilon_2 (a_2-b_2), ..., b_d+\varepsilon_d (a_d-b_d)) $$ \noindent is a continuity point of $F$. \end{proposition} \Bin (See \cite{ips-wcia-ang}, Proposition 18, page 82 for a proof.)
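The $F$-volume $\Delta F(a,b)$ and the vertex set $E(a,b)$ of \eqref{sec_append_not} can be computed mechanically by summing over $\varepsilon \in \{0,1\}^d$. Here is a small sketch (added for illustration only), checked on the function $F(u)=u_1 u_2 u_3$, for which $\Delta F(a,b)=\prod_i (b_i-a_i)$, the Lebesgue volume of $]a,b]$:

```python
from itertools import product

def delta_F(F, a, b):
    """F-volume of the interval ]a, b] via the inclusion-exclusion sum
    over the vertices b + eps * (a - b), eps in {0,1}^d."""
    d = len(a)
    total = 0.0
    for eps in product((0, 1), repeat=d):
        vertex = tuple(b[i] + eps[i] * (a[i] - b[i]) for i in range(d))
        total += (-1) ** sum(eps) * F(vertex)
    return total

# Check on F(u) = u_1 u_2 u_3, for which Delta F(a,b) = prod(b_i - a_i).
F = lambda u: u[0] * u[1] * u[2]
a, b = (0.1, 0.2, 0.3), (0.5, 0.7, 0.9)
vol = delta_F(F, a, b)
print(vol)  # 0.4 * 0.5 * 0.6 = 0.12 up to rounding
```

The same routine applied to a general \textit{df} $F$ returns the value $\lambda_F(]a,b])$ assigned by the associated LS-measure.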
A final consequence of that proposition is that any point $x=(x_1,\cdots,x_d)^t$ of $\mathbb{R}^d$ is the limit of sequences of continuity points of $F$ from above and the limit of sequences of continuity points of $F$ from below.\\ \noindent \textbf{B - An ordered version of the Hahn-Banach theorem}.\\ \noindent Let us consider a linear space $E$ of real-valued functions $x$ defined on some non-empty set $\Omega$, whose elements are denoted as $$ x:\Omega\rightarrow \mathbb{R}. $$ \bigskip\noindent Let $f$ be an element of the dual space $E^{\prime}$ of $E$, that is, $f: E \rightarrow \mathbb{R}$ is a linear functional (not necessarily continuous). When we endow $E$ with the addition of functions, the external multiplication of functions by scalars and the following partial order $$ \forall(x, y)\in E^2, \ (x\leq y) \Leftrightarrow \ (\forall t \in\Omega, \ x(t)\leq y(t)), $$ \Bin we can see that $(E, +, ., \leq)$ is an $\mathbb{R}$-ordered linear space, that is, $(E, +, .)$ is an $\mathbb{R}$-linear space and the order relation is compatible with the linear structure, i.e. $$ \forall(x, y, z)\in E^3, \ \ x\leq y \Leftrightarrow x+z \leq y+z $$ \noindent and $$ \forall (x, y)\in E^2, \forall \ \lambda \in\mathbb{R}_+\setminus \{0\}, \ (x \leq y) \Leftrightarrow (\lambda x \leq \lambda y). $$ \bigskip \noindent Let $\Omega_0$ be a non-empty subset of $\Omega$, which may be equal to $\Omega$. We have the following definition. \begin{definition} \label{partialNN} We say that $f \in E^{\prime}$ is $\Omega_0$-non-negative if and only if $$ \forall x \in E, \ (\forall t\in \Omega_0, \ x(t) \geq 0) \Rightarrow (f(x) \geq 0), $$ \Bin meaning that any function $x \in E$ which is non-negative on $\Omega_0$ has a non-negative image by $f$.
\end{definition} \bigskip\noindent The following theorem is very similar to the Hahn-Banach theorem: given a linear sub-space $E_0$ of $E$ and given $f_0 \in E_0^{\prime}$ which is $\Omega_0$-non-negative, is it possible to extend $f_0$ to $f \in E^{\prime}$ while preserving the $\Omega_0$-non-negativity? An affirmative answer is given below. \begin{theorem} \label{HBLIKE} Let $E$ be an ordered linear space of real-valued functions defined on some non-empty set $\Omega$. Let $\Omega_0$ be a non-empty subset of $\Omega$. Let $E_0$ be a sub-linear space of $E$. Let $f_0 \in E^{\prime}_0$ be $\Omega_0$-non-negative. Suppose that $E_0$ has the following property: \begin{equation} \forall x \in E, \exists (x^{\prime}, x^{\prime\prime}) \in {E_0}^2, \ \ x^{\prime} \leq x\leq x^{\prime\prime} \ on \ \Omega_0, \label{ineq_00} \end{equation} \bigskip\noindent that is $$ \forall x\in E, \exists (x^{\prime}, x^{\prime\prime})\in E_0^2, \ (\forall t\in \Omega_0, \ x^{\prime}(t) \leq x(t) \leq x^{\prime\prime}(t)). $$ \bigskip\noindent Then $f_0$ is extensible to a linear functional on $E$ which is still $\Omega_0$-non-negative. \end{theorem} \Ni \textbf{Proof}. We closely follow the proof of the Hahn-Banach theorem, which, by the way, is the approach used in \cite{shohat}. We notice that there is nothing to do if $E=E_0$, and for $E_0=\{0\}$, $f_0=0$ is extended to $f=0$. So we proceed with $f_0 \neq 0$ and $E\neq E_0 \neq \{0\}$, so that there exists $x_0 \in E$ with $x_0 \notin E_0$. We consider the linear space spanned by $E_0$ and $x_0$, which is $$ E_1= E_0 + \mathbb{R} x_0 = \{ y= x+ \lambda x_0, \ x\in E_0, \ \lambda \in \mathbb{R} \}. $$ \Bin We define on $E_1$ the functional $$ \forall y= x+\lambda x_0 \in E_1, \ f_1(y)=f_0(x)+\lambda a, $$ \bigskip\noindent where $a$ is an arbitrary real number, to be interpreted as $f_1(x_0)$. For each fixed $a$, $f_1$ is linear on $E_1$.
$f_1$ is an extension of $f_0$ from $E_0$ to $E_1$, since any $y \in E_1$ is uniquely written as $y=x+\lambda x_0$, and for $\lambda=0$ we have $$ f_1(y)=f_0(x) +0\, a =f_0(x). $$ \bigskip\noindent Now, the problem is how to choose $a=f_1(x_0)$ such that $f_1$ is $\Omega_0$-non-negative. To do so, we begin by noticing that, by the assumption \eqref{ineq_00}, $$ A_1= \{ x^{\prime}\in E_0, \ x^{\prime}\leq x_0 \ on \ \Omega_0 \} \neq \emptyset \ and \ A_2 = \{ x^{\prime\prime} \in E_0, \ x^{\prime\prime}\geq x_0 \ on \ \Omega_0\} \neq \emptyset. $$ \Bin This implies that for any $(x^{\prime}, x^{\prime\prime}) \in A_1 \times A_2$, $x^{\prime}\leq x_0 \leq x^{\prime\prime}$ on $\Omega_0$, and thus $(x^{\prime\prime}-x^{\prime})\geq 0$ on $\Omega_0$. Since $f_0$ is $\Omega_0$-non-negative, we have $f_0(x^{\prime\prime}-x^{\prime})\geq 0$, that is $$ \forall(x^{\prime}, x^{\prime\prime})\in A_1 \times A_2, \ f_0(x^{\prime})\leq f_0(x^{\prime\prime}). $$ \bigskip\noindent Hence $$ \forall x^{\prime}\in A_1, \ f_0(x^{\prime}) \leq \inf_{ x^{\prime\prime}\in A_ 2} f_0(x^{\prime\prime}). $$ \bigskip\noindent Next, by taking the supremum over $x^{\prime}$, we have $$ C_1:=\sup_{x^{\prime} \in A_1} f_0(x^{\prime})\leq \inf_{x^{\prime\prime} \in A_2} f_0(x^{\prime\prime})=:C_2. $$ \bigskip\noindent Let us choose $a \in [C_1, C_2]$ and show that, for such a choice, $f_1$ is $\Omega_0$-non-negative. Indeed, let $$ y=x+ \lambda x_0 \in E_1 $$ \bigskip\noindent be such that, for all $t\in \Omega_0$, $y(t)=x(t)+ \lambda x_0(t) \geq 0$. We have to prove that $f_1(y) \geq 0$. Let us discuss the sign of $\lambda$.\\ \bigskip\noindent (a) Let $\lambda=0$. Here, for all $t \in \Omega_0$, $ y(t)= x(t) \geq 0$. So $f_1(y)=f_0(x) \geq 0$.\\ \noindent (b) Let $ \lambda>0$. Thus $(-x/\lambda) \leq x_{0}$ on $\Omega_0$, so that $(-x/\lambda) \in A_1$.
Hence $$ f_1(x_0) \geq C_1= \sup_{x^{\prime} \in A_1}f_0(x^{\prime})\geq f_0\left(-\frac{x}{\lambda}\right), $$ \bigskip\noindent which leads to \begin{eqnarray*} f_1(x_0)- f_0\left(-\frac{x}{\lambda}\right)&=&{\frac{1}{\lambda}(f_0(x)+\lambda f_1(x_0))}\\ &=&{\frac{1}{\lambda}f_1(y)\geq 0}, \end{eqnarray*} \bigskip\noindent that is $$ f_1(y) \geq 0. $$ \Bin (c) Let $\lambda< 0$. Thus $(-x/\lambda) \geq x_0$ on $\Omega_0$, so that $(-x/\lambda) \in A_2$. We use a similar argument to get $$ f_1(x_0)\leq C_2 = \inf_{x^{\prime\prime} \in A_2} f_0(x^{\prime\prime}) \leq f_0\left(-\frac{x}{\lambda}\right), $$ \bigskip\noindent and this leads to $$ \frac{1}{\lambda}\left(f_0(x)+\lambda f_1(x_0)\right)={\frac{1}{\lambda}f_1(y)\leq 0}, $$ \bigskip\noindent that is, since $\lambda<0$, $$ f_1(y)\geq 0. $$ \bigskip\noindent We conclude that, for $E_0 \subsetneq E$, we may extend $f_0$ to a linear and $\Omega_0$-non-negative functional on a linear sub-space larger by at least one dimension, namely $E_1$.\\ \noindent For the second part, let us consider the class $\mathcal{A}$ of extensions of $f_0$ preserving the $\Omega_0$-non-negativity. Let us denote them by $(f,A)$, meaning that $f: A\rightarrow \mathbb{R}$ is linear, $A$ is a subspace of $E$ with $E_0\subseteq A$, $f_{|E_0}=f_0$ and $f$ is $\Omega_0$-non-negative. We say that $(f,A) \leq (f^{\prime},A^{\prime})$ if and only if $$ (A\subseteq A^{\prime} \ \ and \ \ f^{\prime}_{|A}=f). $$ \bigskip\noindent Clearly, this is an order relation. Let us exploit the first part. If $E_0\neq E$, there exist $x_0\notin E_0$ and $f_1 : E_1=E_0+\mathbb{R}x_0 \rightarrow \mathbb{R}$ with $(f_1,E_1)\in \mathcal{A}$. If $E_1\neq E$, there exists $x_1 \in E\setminus E_1$ and we construct in the same way $f_2 : E_2=E_1+\mathbb{R}x_1 \rightarrow \mathbb{R}$ with $(f_2,E_2)\in \mathcal{A}$.\\ \noindent \textbf{Either} we stop at some $n$ with $E_n=E$, and the proof is finished, \textbf{or} we continue infinitely.
But, by construction, we have $$ (f_0,E_0)\leq (f_1,E_1)\leq (f_2,E_2)\leq \cdots \leq (f_j,E_j) \leq \cdots $$ \bigskip\noindent So the class $\lbrace (f_j,E_j), j\geq 0 \rbrace$ is a chain, and Zorn's lemma says that it has a maximal element. It is not difficult to see that this maximal element is $(f_{\infty},E_{\infty})$ with \begin{equation*} \left\{ \begin{array}{lll} E_{\infty}=\bigcup_{j\geq 0}E_j\\ \\ \forall x \in E_{\infty},\ f_{\infty}(x)=f_j(x), \ \text{for} \ x \in E_j\\ \end{array} \right. \end{equation*} \bigskip\noindent Since $(E_j)_{j\geq 0}$ is an increasing sequence (with respect to inclusion), $$ E_{\infty}=\bigcup_{j\geq 0}E_j $$ \Bin is a linear sub-space of $E$. Let us see that the definition is coherent. Indeed, let us suppose that $x \in E_{\infty}$ belongs to two distinct spaces $E_{j_1}$ and $E_{j_2}$, $j_1\geq 0$ and $j_2\geq 0$. Without loss of generality, we can suppose that $j_1<j_2$. Hence, we have $$ (f_{j_1},E_{j_1})\leq (f_{j_2},E_{j_2}) $$ \bigskip\noindent and thus $$ f_{j_1}(x)=f_{{j_2}| E_{j_1}}(x)=f_{j_2}(x). $$ \bigskip\noindent We may take $f_{\infty}(x)$ as $f_{j}(x)$ for any $j\geq 0$ such that $x\in E_j$; all these values are equal by the previous formula, so the definition of $f_{\infty}$ is coherent.\\ \noindent The mapping $f_{\infty}$ is linear since, for $x\in E_{\infty}$ and $y\in E_{\infty}$, there exist $j_1$ and $j_2$ (say $j_1\leq j_2$) such that $x\in E_{j_1}$ and $y \in E_{j_2}$, so that both $x$ and $y$ belong to $E_{j_2}$. For $(\alpha,\beta)\in \mathbb{R}^2$, $\alpha x +\beta y \in E_{j_2}$ and \begin{eqnarray*} f_{\infty}(\alpha x +\beta y)&=&f_{j_2}(\alpha x +\beta y)=\alpha f_{j_2}( x )+\beta f_{j_2}(y)\\ &=&\alpha f_{\infty}(x)+ \beta f_{\infty}(y). \end{eqnarray*} \bigskip\noindent We obviously have $E_0 \subseteq E_{\infty}$ and, for all $j\geq 0$ and all $x\in E_j$, $$ f_{\infty}(x)=f_{j}(x). $$ \bigskip\noindent So $f_{\infty|E_j}=f_j$ and hence $f_{\infty|E_0}=f_0$.
We also have that $f_{\infty}$ is $\Omega_0$-non-negative: indeed, for $x \in E_{\infty}$ with $x \geq 0$ on $\Omega_0$, we have, for $x \in E_j$, $f_{\infty}(x)=f_j(x)\geq 0$.\\ \noindent So $f_{\infty}$ belongs to $\mathcal{A}$ and dominates all elements of the chain. Hence $$ (f_{\infty}, E_{\infty})=\max \lbrace (f_{j}, E_{j}), j\geq 0 \rbrace. $$ \bigskip\noindent We necessarily have $E_{\infty}=E$. Indeed, if $E_{\infty}\subsetneq E$, we might use the first part with some $0\neq x_{\infty} \in E \setminus E_{\infty}$ and obtain a strictly greater extension $f_{\infty}^\ast$ preserving the $\Omega_0$-non-negativity, defined on $E_{\infty}^\ast=E_{\infty}+\mathbb{R}x_{\infty}$, which is impossible by maximality. $\blacksquare$\\ \newpage \section{The moment problem in Probability Theory on $\mathbb{R}$} \label{mp_r_proba} \noindent Suppose that we have a probability measure $\rho$ on $\mathbb{R}$ having moments of all orders $(m_n)_{n\geq 1}$, with $m_0=1$, as in Formula \eqref{mp_01}. The question is whether the sequence characterizes the measure $\rho$, in the following form: if $(m_n)_{n\geq 0}$, with $m_0=1$, are the moments of two measures $\rho_1$ and $\rho_2$ on $\mathbb{R}$, do we have $\rho_1=\rho_2$? We have the following partial answer.\\ \textbf{(I) - A sufficient condition for the moments to determine the probability measure}.\\ \begin{theorem} \label{mp_proba_r} Let $\rho$ be a probability measure on $\mathbb{R}$ having moments of all orders $(m_n)_{n\geq 1}$, with $m_0=1$. Suppose that the Cauchy radius exists and is not zero, i.e., $$ R=\lim_{n \rightarrow +\infty} |n!/m_n|^{1/n}>0, $$ \Bin that is, the series $\sum_{n=0}^{+\infty} m_n x^n/n!$ has a positive radius of convergence.\\ \Bin Then the moments determine $\rho$. \end{theorem} \Bin The simple tool of Cauchy's rule for the convergence of power series is used here; let us recall it. Let us consider a sequence of real numbers $(a_n)_{n\geq 0}$ and suppose that $|1/a_n|^{1/n}\rightarrow r>0$.
Then, for $|x|<r$, set $\varepsilon=1-|x/r|>0$. We have \begin{eqnarray*} |a_n x^n|&=&\biggr(|x| |a_n|^{1/n}\biggr)^n\\ &=&\biggr(\left|\frac{x}{r} \right| \biggr[r |a_n|^{1/n}\biggr]\biggr)^n. \end{eqnarray*} \Bin Since the term between the brackets converges to one, it is less than $(1-\varepsilon/2)^{-1}>1$ for $n$ large enough, say $n\geq n_0$, and we get, for $n\geq n_0$, $$ |a_n x^n|\leq \left(\frac{1-\varepsilon}{1-\varepsilon/2}\right)^n. $$ \Bin Since $0<(1-\varepsilon)/(1-\varepsilon/2)<1$, the series $\sum_{n} a_n x^n$ converges for all $|x|<r$. Similarly, we can prove that the series $\sum_{n} a_n x^n$ diverges for $|x|>r$. We are going to use that rule below, based on arguments in \cite{billinsgleymp}, page 388.\\ \noindent \textbf{Proof of Theorem \ref{mp_proba_r}}. Let us denote by $\psi$ the characteristic function of $\rho$. The Taylor-Lagrange formula (see \cite{valiron}, p. ??) for the complex exponential function gives: for $(x,t,h)\in \mathbb{R}^3$, $n\geq 1$, $$ e^{ihx}=\sum_{j=0}^{n}\frac{(ihx)^j}{j!} +\frac{(i xh)^{n+1} e^{i\theta xh}}{(n+1)!}, \ |\theta|<1. $$ \Bin This leads to (since $e^{itx}$ has norm one) $$ \left|e^{itx} \left(e^{ihx}-\sum_{j=0}^{n}\frac{(ihx)^j}{j!}\right)\right|\leq \frac{|xh|^{n+1}}{(n+1)!}, $$ \Bin which yields $$ \left|e^{i(t+h)x} -\sum_{j=0}^{n}\frac{h^j}{j!} (ix)^j e^{itx} \right|\leq \frac{|h|^{n+1}}{(n+1)!} |x|^{n+1}. $$ \Bin By integrating with respect to $\rho$ and by identifying $\int (ix)^j e^{itx} \, d\rho(x)$ as the $j$-th derivative $\psi^{(j)}(t)$ of $\psi$ at $t$, we get \begin{equation} \left|\psi(t+h)- \sum_{j=0}^{n}\frac{h^j}{j!} \psi^{(j)}(t) \right| \leq \frac{|h|^{n+1}}{(n+1)!} \mu_{n+1}, \label{mp_tool_01} \end{equation} \Bin where $\mu_{j}$ is the absolute moment of order $j$ given by $$ \forall j\geq 1, \ \mu_{j}=\int \ |u|^j \ d\rho(u). $$ \Bin Now, under the hypotheses, we can find $r$ and $s$ such that $0<r<s$ and $\sum_{j=0}^{+\infty} m_j s^{j}/j!$ converges.
Hence by the properties of convergent series, $m_j s^{j}/j!\rightarrow 0$ and $m_j r^{j}/j!\rightarrow 0$ as $j\rightarrow +\infty$. Further, $2j(r/s)^{2j-1}=2 e^{\log j + (2j-1) \log (r/s)} \rightarrow 0$ (since the exponent tends to $-\infty$ for $0<r/s<1$) and then $2j(r/s)^{2j-1} <s$ for $j$ large enough, say $j\geq j_0$, which is $$ 2j r^{2j-1} <s^{2j}, \ for \ j\geq j_0, $$ \Bin which, combined with the inequality $|a|^{r_1} \leq 1 +|a|^{r_2}$ valid for $0<r_1\leq r_2$, leads to, for $j\geq j_0$, \begin{eqnarray*} \frac{|x|^{2j-1}r^{2j-1}}{(2j-1)!}&\leq& \frac{r^{2j-1}}{(2j-1)!}+\frac{|x|^{2j}r^{2j-1}}{(2j-1)!}\\ &\leq &\frac{r^{2j-1}}{(2j-1)!}+\frac{|x|^{2j}s^{2j}}{(2j-1)!(2j)}\\ &= &\frac{r^{2j-1}}{(2j-1)!}+\frac{|x|^{2j}s^{2j}}{(2j)!}. \end{eqnarray*} \Bin By integration with respect to $\rho$, we get $$ \frac{\mu_{2j-1}r^{2j-1}}{(2j-1)!}\leq \frac{r^{2j-1}}{(2j-1)!}+\frac{m_{2j}s^{2j}}{(2j)!}. $$ \Bin So $\mu_{2j-1}r^{2j-1}/(2j-1)! \rightarrow 0$. We already have $\mu_{2j}r^{2j}/(2j)!=m_{2j}r^{2j}/(2j)! \rightarrow 0$. So, the convergence covers odd and even terms. We arrive at $$ \mu_{n+1}r^{n+1}/(n+1)! \rightarrow 0 \ as \ n\rightarrow +\infty. $$ \Bin We apply this to the bound in Formula \eqref{mp_tool_01} to get \begin{equation} \forall t \in \mathbb{R}, \ \forall |h|\leq r, \ \psi(t+h)= \sum_{j=0}^{+\infty}\frac{h^j}{j!} \psi^{(j)}(t). \label{mp_tool_02} \end{equation} \Bin We conclude as follows. Let us suppose that another probability measure has the same moments $(m_n)_{n\geq 1}$, with characteristic function $\psi_1$. Since the derivatives at zero are determined by the moments, namely $\psi^{(j)}(0)=\psi_1^{(j)}(0)=i^j m_j$ for all $j\geq 0$, taking $t=0$ in Formula \eqref{mp_tool_02} shows that $\psi$ and $\psi_1$ coincide on $[-r,r]$. Let us show we may extend that equality to every interval $[kr, \ (k+1)r]$, $k\geq 1$. We begin with $k=1$. Since $\psi$ and $\psi_1$ coincide on $[-r,r]$, they have the same derivative functions on $]-r,r[$, and in particular $\psi^{(j)}(r/2)=\psi^{(j)}_1(r/2)$ for all $j\geq 1$.
By taking $t=r/2$, Formula \eqref{mp_tool_02} shows that $\psi$ and $\psi_1$ are equal on $[r/2, 3r/2]$ and hence $\psi^{(j)}(r)=\psi^{(j)}_1(r)$ for all $j\geq 1$. Now using Formula \eqref{mp_tool_02} with $t=r$ extends the equality to $[r, 2r]$. By proceeding in this way, and by handling the intervals on the negative half-line in the same manner, we get the desired equality on $\mathbb{R}$ by induction. $\blacksquare$\\ \newpage \noindent \textbf{(II) - Application to weak convergence}.\\ \noindent We get the following criterion of convergence.\\ \begin{theorem} \label{wc_moments} Let $X_n : (\Omega, \mathcal{A}_n, \mathbb{P}_n) \rightarrow \mathbb{R}$, $n\geq 1$, be a sequence of random variables and $X_{\infty} : (\Omega_{\infty}, \mathcal{A}_{\infty}, \mathbb{P}_{\infty}) \rightarrow \mathbb{R}$ be another random variable. Let us suppose that the $X_n$'s and $X_{\infty}$ have moments of all orders, that the probability law of $X_{\infty}$ is determined by its moments, and that $$ \forall j\geq 1, \ \mathbb{E}_{\mathbb{P}_n} X_n^j \rightarrow \mathbb{E}_{\mathbb{P}_{\infty}} X_{\infty}^j \ as \ n\rightarrow +\infty. $$ \Bin Then $X_n$ weakly converges to $X_{\infty}$ as $n\rightarrow +\infty$, i.e., $X_{n}\rightsquigarrow X_{\infty}$. \end{theorem} \Bin \textbf{Proof}. Since the sequence $\mathbb{E}_{\mathbb{P}_n} X_n^2$ converges, it is bounded, say by $C$. For any $\varepsilon>0$, choose $k>0$ such that $C/k^2<\varepsilon$; the Markov inequality then gives $$ \mathbb{P}_n(|X_n|\geq k)=\mathbb{P}_n(X_n^2\geq k^2)\leq C/k^2<\varepsilon, $$ \Bin that is, there exists a compactum $K=[-k,k]$ of $\mathbb{R}$ such that $$ \liminf_{n\rightarrow +\infty} \mathbb{P}_n(X_n \in K)>1-\varepsilon. $$ \Bin So the sequence $(X_n)_{n\geq 1}$ is asymptotically tight and by Prohorov's theorem, every sub-sequence of $(X_n)_{n\geq 1}$ contains a weakly convergent sub-sequence (see the Prohorov-Helly-Bray Theorem in \cite{ips-wcia-ang}, Section 3, Sub-section 3).
Now let $f : \mathbb{R} \rightarrow \mathbb{R}$ be continuous and bounded. The sequence $s_n(f)=\mathbb{E}_{\mathbb{P}_n}f(X_n)$ is bounded (by the bound of $f$). So, it contains a sub-sequence $s_{n_k}(f)$ converging to some limit $s(f)$. But, by Prohorov's theorem, $X_{n_k}$ contains a sub-sequence $X_{n_k(\ell)}$ weakly converging, say to $Z$ with probability measure $\mu$. So $$ s(f)=\int f \ \ d\mu. $$ \Bin Let us use the Skorohod theorem (see \cite{wichura}) to get $X^{\ast}_{n_k(\ell)}=_d X_{n_k(\ell)}$ and $Z^{\ast}=_d Z$ on the same probability space such that $X^{\ast}_{n_k(\ell)}$ converges \textit{a.s.} to $Z^{\ast}$. For any $r\geq 1$ fixed, $\mathbb{E} (X^{\ast}_{n_k(\ell)})^{4r}$ is bounded and hence, for any $r\geq 1$, $(X^{\ast}_{n_k(\ell)})^{r}$ is uniformly integrable and converges \textit{a.s.} to $(Z^{\ast})^r$. By Theorem 16.4 in \cite{billinsgleymp}, page 218, $(Z^{\ast})^r$ is integrable and $\mathbb{E} (X^{\ast}_{n_k(\ell)})^{r}$ converges to $\mathbb{E}(Z^{\ast})^r$. By getting back to our original random variables, we get $$ \mathbb{E}X_{n_k(\ell)}^r \rightarrow \mathbb{E} Z^r. $$ \Bin Since $\mathbb{E}X_{n_k(\ell)}^r$ converges to $\mathbb{E} X^r_{\infty}$, the random variables $X_{\infty}$ and $Z$ have the same moments; since these moments determine the probability law of $X_{\infty}$, we conclude that $\mu=\mathbb{P}_Z=\mathbb{P}_{X_\infty}$. Hence $$ s(f)=\int f \ \ d\mathbb{P}_{X_{\infty}}. $$ \Bin We conclude that any sub-sequence of $(s_n(f))_{n\geq 1}$ contains a further sub-sequence converging to $\int f \ d\mathbb{P}_{X_{\infty}}$; since that limit does not depend on the sub-sequence, $s_n(f)\rightarrow \int f \ d\mathbb{P}_{X_{\infty}}$ for any bounded and continuous function $f$. Thus, $X_n \rightsquigarrow X_{\infty}$. $\blacksquare$\\ \newpage \section{Solution of the moment problem on $\mathbb{R}$ and application to weak convergence} \label{mp_r_shohat} \noindent Here, we use simpler notations. Let a sequence $(m_n)_{n\geq 0}$ with $m_0=1$ be given. Let $\mathcal{P}$ be the linear space of all polynomials.
A non-zero polynomial $P$ is associated with coefficients $(x_n)_{n\geq 0}$, where all the $x_n$'s vanish beyond some integer $d$ for which $x_d\neq 0$, the number $d$ being its degree. For the sake of simplicity, we use the representation $P\equiv (x_n)_{n\geq 0}$ and use infinite sums, keeping in mind that only a finite number of terms are non-zero : $$ \forall u \in \mathbb{R}, \ P(u)=\sum_{n\geq 0} x_n u^n. $$ \noindent We define the linear functional $\mu$ as follows : $$ \forall P\equiv (x_n)_{n\geq 0} \in \mathcal{P}, \ \mu(P)=\sum_{n\geq 0} x_n m_n. $$ \Bin That functional is well-defined and linear. Here is the solution of the moment problem on $\mathbb{R}$. \begin{theorem} \label{shohat_R} Given a non-empty closed subset $S_0$ of $\mathbb{R}$, there exists a probability measure $\rho$ associated to a \textit{df} $F$ such that : (a) $supp(F)\subset S_0$ and (b) for all $n\geq 0$, $$ m_n= \int u^n \ d\rho(u) $$ \Bin if and only if : (c) $\mu$ is $S_0$-non-negative, i.e., if $\mathcal{P} \ni P$ satisfies : $P(u)\geq 0$ for all $u\in S_0$, then $\mu(P)\geq 0$. \end{theorem} \Bin \textbf{Proof}. We are going to provide a detailed proof.\\ \noindent \textbf{Let us begin by proving that (a) and (b) imply (c)}. For any polynomial $P=(x_n)_{n\geq 0}$, we have $$ \mu(P)=\sum_{n\geq 0} x_n \left(\int u^n \ dF(u)\right)=\int \left(\sum_{n\geq 0} x_n \ u^n \right) dF(u)=\int P(u) dF(u), $$ \noindent where we were able to interchange summation and integration symbols since only a finite number of terms of the summation are non-zero. But, we have $$ \mu(P)= \int P(u) dF(u) = \int_{S_0^c} P(u) dF(u) + \int_{S_0} P(u) dF(u). $$ \Bin But, on $\mathbb{R}$, the support $supp(F)$ and the spectrum $s(F)$ coincide and since $S_0^c \subset supp(F)^c$, we have $$ \int_{S_0^c} P(u) dF(u)=0 $$ \Bin and we get $$ \mu(P)= \int P(u) dF(u) = \int_{S_0} P(u) dF(u).
$$ \Bin which is non-negative whenever $P$ is $S_0$-non-negative.\\ \noindent \textbf{Let us prove that (c) implies (a) and (b)}. Let us proceed with three steps.\\ \noindent \textit{Step 1. Construction of $\rho$}. \label{step1} Let us consider the class $E$ of functions $f : \mathbb{R} \rightarrow \mathbb{R}$ bounded by linear combinations of functions of the form $A u^{2r} + B$, where $A\geq 0$, $B\geq 0$, $r \in \mathbb{N}$. In other words, $f \in E$ if and only if it is bounded by a function of the form \begin{equation} g=\sum_{i=1}^{p} \left(A_i u^{2r_i} + B_i\right), \ p\geq 1, \ (A_i,B_i,r_i) \in \mathbb{R}_+ \times \mathbb{R}_+ \times \mathbb{N}. \label{eq_01} \end{equation} \noindent We set $E_0=E \cap \mathcal{P}$ as the subclass of $E$ restricted to polynomials. It is clear that for a function $g$ as in Formula \eqref{eq_01}, $-g$ and $g$ belong to $E_0$ and hence : \begin{equation} \forall f \in E, \ \exists (f_1, f_2) \in E_0^2, \ \ f_1 \leq f \leq f_2 \ on \ \mathbb{R}. \label{ineq_01} \end{equation} \Bin We may apply Theorem \ref{HBLIKE} since $E_0$ is a linear subspace of $E$, $\mu$ is an $S_0$-non-negative linear functional defined on $E_0$ and Condition \eqref{ineq_00} of that theorem holds through Formula \eqref{ineq_01}. So $\mu$ extends to an $S_0$-non-negative linear functional on $E$, still denoted by $\mu$. For any subset $C$ of $\mathbb{R}$, $f=1_C$ is bounded by $g=1= 0 \times u^2 + 1$ so that $1_C \in E$. So we define the mapping $m$ on the class $\mathcal{B}(\mathbb{R})$ of Borel sets of $\mathbb{R}$ by $$ \forall C \in \mathcal{B}(\mathbb{R}), \ m(C)=\mu(1_C). $$ \Bin The mapping is clearly additive. For any $C \in \mathcal{B}(\mathbb{R})$, since $1_C\geq 0$ on $\mathbb{R}$ and thus on $S_0$, we have, by $S_0$-non-negativity of $\mu$, that $m(C)=\mu(1_C)\geq 0$.
As well, for $(C_1,C_2) \in \mathcal{B}(\mathbb{R})^2$, $C_1 \subset C_2$ implies $1_{C_1} \leq 1_{C_2}$ on $\mathbb{R}$ and hence on $S_0$, and by $S_0$-non-negativity of $\mu$, $m(C_1)\leq m(C_2)$. Finally $$ \forall C \in \mathcal{B}(\mathbb{R}), \ m(C) \leq \mu(1_{\mathbb{R}})=\mu(1)=m_0=1. $$ \noindent We conclude that $m$ is a finite and non-negative additive mapping on $\mathcal{B}(\mathbb{R})$. The mapping would be a measure if we could prove that it is $\sigma$-sub-additive or continuous at $\emptyset$, that is, $m(A_n)\downarrow 0$ if $A_n\downarrow \emptyset$ as $n\rightarrow +\infty$. But it seems very difficult to prove that. So we are going to use the same method as in \cite{shohat}, but in the modern frame of Measure Theory.\\ \noindent We define the function $F_0(x)=m(]-\infty, \ x])$, $x \in \mathbb{R}$. We are not sure that it is right-continuous. So we work with $$ F(x)=\lim_{h \searrow 0} F_{0}(x+h), \ x \in \mathbb{R}. $$ \noindent The limits exist by the monotonicity of $F_0$ and the function $F$ is right-continuous and assigns to intervals $]a,b]$ non-negative lengths, that is, $\Delta F(a,b)=F(b)-F(a)\geq 0$. Hence $F$ is a distribution function. Let us denote by $\rho=\lambda_F$ the Lebesgue-Stieltjes measure associated with $F$.\\ \noindent It is useful to remark that $F_0$, as a monotone function, has at most a countable number of discontinuities, so that $$ \int 1_{]a,b]} \ d\rho =F(b)-F(a)=F_0(b)-F_0(a), $$ \Bin except, possibly, for at most a countable number of pairs $(a,b)$. Now let us check that any non-negative, increasing or decreasing, function $f \in E$ is $\rho$-integrable with \begin{equation} 0\leq \int f d\rho \leq \mu(f). \label{boundedness} \end{equation} \Bin Let us finish this step by proving the above claim. We suppose that $f$ is increasing.
By definition of the integral with respect to $\rho$, the integral of $f$ is the monotone limit of integrals of a sequence $(g_n)_{n\geq 1}$ of elementary functions, each of them having the following form $$ g= \sum_{j=1}^{p} \alpha_j 1_{(a_j\leq f <b_j)}, \ p\geq 1, \ p \ finite, \ (\alpha_j)_{1\leq j \leq p} \subset \mathbb{R}_{+} $$ \Bin with $0\leq g \leq f$. But we have \begin{eqnarray*} \int g \ d\rho &=& \sum_{j=1}^{p} \alpha_j \rho(a_j\leq f <b_j)\\ &=& \sum_{j=1}^{p} \alpha_j \rho([f^{-1}(a_j), f^{-1}(b_j)[)\\ &=& \sum_{j=1}^{p} \alpha_j \{F(f^{-1}(b_j)+0)-F(f^{-1}(a_j)-0)\}, \end{eqnarray*} \noindent where $F(x+0)$ and $F(x-0)$ are the right and the left limits of $F$ at $x$ respectively. The boundaries $a_j$ and $b_j$ can be chosen so that $f^{-1}(a_j)$ and $f^{-1}(b_j)$ are continuity points of $F_0$ (which still are continuity points of $F$), the only requirement being that the moduli $b_j-a_j$ be small enough. Hence \begin{eqnarray} \int g \ d\rho &=&\sum_{j=1}^{p} \alpha_j \{F_0(f^{-1}(b_j))-F_0(f^{-1}(a_j))\} \notag\\ &=&\sum_{j=1}^{p} \alpha_j m(a_j\leq f <b_j)= \mu \left(\sum_{j=1}^{p} \alpha_j 1_{(a_j\leq f <b_j)}\right) \notag\\ &=& \mu(g)\leq \mu(f). \notag \end{eqnarray} \Bin So, for all $n\geq 1$, $$ 0\leq \int g_n \ d\rho = \mu(g_n) \leq \mu(f). $$ \noindent At the limit, we have $\int f \ d\rho \leq \mu(f)$. Hence $f$ is $\rho$-integrable and its integral is bounded by $\mu(f)$. The proof is easily adapted for $f$ decreasing. Let us give an example. For each function $\ell_n(u)=u^n$, $\ell_n^{+}$ and $\ell_n^{-}$ are still in $E$ and the bound given above applies to them, so that we finally have $$ \left|\int \ell_n(u) \ d\rho(u)\right| \leq \int \ell_n(u)^{+} \ d\rho(u)+ \int \ell_n(u)^{-} \ d\rho(u)\leq \mu(\ell_n^{+})+\mu(\ell_n^{-})=\mu(|\ell_n|). $$ \Bin Hence we have the following \begin{fact} \label{factIntegrability} For any $n\geq 0$, the function $\ell_n(u)=u^n$ of $u\in \mathbb{R}$ is $\rho$-integrable and \begin{equation} \left| \int \ell_n d\rho \right| \leq \mu(|\ell_n|).
\label{boundednessMn} \end{equation} \end{fact} \Bin \textit{Step 2. $s(F) \subset S_0$}. Let us prove that $S_0^c \subset s(F)^c$. Let $x \in S_0^c$, which is an open set. So there exists an interval $]a,b[$ such that $x \in ]a,b[$ and $]a,b]\subset S_0^c$. The numbers $a$ and $b$ can be taken as continuity points of $F_0$. Since $1_{]a,b]}=0$ on $S_0$, i.e., $1_{]a,b]}$ is non-positive on $S_0$, we have $\mu(1_{]a,b]})\leq 0$ and since also $\mu(1_{]a,b]})\geq 0$, we have $$ 0=\mu(1_{]a,b]})=F_0(b)-F_0(a)=F(b)-F(a)=\rho(]a,b])\geq \rho(]a,b[). $$ \Bin Since $G=]a,b[$ is an open neighborhood of $x$ such that $\rho(G)=0$, we conclude that $x \notin s(F)$. Let us move to the last step.\\ \noindent \textit{Step 3. $\rho$ has the desired moments}.\\ \noindent Let $n\geq 1$. Let us show that $$ m_n=\int u^n \ d\rho(u). $$ \Bin Let $\varepsilon \in ]0, 1[$ be fixed. Let $K$ be a positive integer such that $1/K \leq \varepsilon$ (and thus $K\geq 1$). Hence for a positive integer $r$ such that $2r-n-1\geq 1$, we have for $u \notin ]-K,K[$, \begin{eqnarray*} |u|^n&=& u^{2r} \frac{1}{|u|^{2r-n}}\\ &\leq& u^{2r} \frac{1}{K^{2r-n}}=\frac{u^{2r}}{K} \frac{1}{K^{2r-n-1}}\\ &\leq& \varepsilon u^{2r}. \end{eqnarray*} \Bin We conclude that for $K$ such that $1/K \leq \varepsilon$ and for $u \notin ]-K,K[$, \begin{equation} |u|^n \leq \varepsilon u^{2r} \leq u^{2r}.\label{boundE} \end{equation} \Bin Now, the function $\ell_n(u)=u^n$ of $u\in \mathbb{R}$ is uniformly continuous on $I_K=]-K, \ K]$. Let us fix $\eta>0$ and let us divide $]-K,\ K]$ into a finite number $p$ of intervals $]a_h,b_h]$ such that the variation of $\ell_n$ over each $]a_h,b_h]$ is less than $\eta$. It is possible to choose the $a_h$'s and the $b_h$'s as continuity points of $F_0$ (and hence of $F$). [To do that, we may divide each interval into two at its middle point and move each $a_h$ and $b_h$ very slightly so that it becomes a continuity point.
The variation of $\ell_n$ over the new intervals remains less than $\eta$.]\\ \Ni Let us define an elementary function $\ell_{p,n}$ by choosing $u_{(h)}$ from each interval $]a_h,b_h]$ as follows $$ \ell_{p,n}(u)=\sum_{h=1}^{p} \ell_n(u_{(h)}) 1_{]a_h,b_h]}, \ u \in \mathbb{R}. $$ \Bin We have \begin{eqnarray} \mu(\ell_{p,n})&=&\sum_{h=1}^{p} \ell_n(u_{(h)}) \mu\left(1_{]a_h,b_h]}\right) \label{ellRho}\\ &=&\sum_{h=1}^{p} \ell_n(u_{(h)}) (F_0(b_h)-F_0(a_h)) \notag\\ &=&\sum_{h=1}^{p} \ell_n(u_{(h)}) (F(b_h)-F(a_h)) \notag\\ &=&\int \ell_{p,n} \ d\rho. \end{eqnarray} \Bin We notice that $\ell_{p,n}$ is null on $]-K, \ K]^c$. By using Formula \eqref{boundE} and the continuity modulus of $\ell_n$ over $]-K, \ K]$, we have \begin{eqnarray*} |\ell_n(u)-\ell_{p,n}(u)| &\leq& |\ell_n(u)-\ell_{p,n}(u)| 1_{]-K, \ K]}+ |\ell_n(u)-\ell_{p,n}(u)| 1_{]-K, \ K]^c}\\ &\leq& \eta 1_{\mathbb{R}}+ \varepsilon u^{2r}, \end{eqnarray*} \Bin i.e., for all $u\in \mathbb{R}$, \begin{equation} \ell_{p,n}(u) - \eta - \varepsilon u^{2r} \leq \ell_n(u) \leq \ell_{p,n}(u) + \eta + \varepsilon u^{2r}. \end{equation} \Bin By applying $\mu$ to that ordering on $\mathbb{R}$ (and hence on $S_0$) and by using Line \eqref{ellRho}, we get \begin{equation} \int \ell_{p,n} \ d\rho - \eta \mu(1_{\mathbb{R}}) - \varepsilon m_{2r} \leq m_n \leq \int \ell_{p,n} \ d\rho +\eta \mu(1_{\mathbb{R}}) +\varepsilon m_{2r}. \end{equation} \Bin We notice that $$ \int \ell_{p,n} \ d\rho=\int 1_{]-K, \ K]} \ell_{p,n} \ d\rho. $$ \Bin On $]-K, \ K]$, $\ell_{p,n} \rightarrow \ell_n$ as $\eta \downarrow 0$, and $\ell_{p,n}$ is bounded by $|\ell_n|+\eta$, with $|\ell_n|$ integrable by Fact \ref{factIntegrability}. By letting $\eta \downarrow 0$ (which forces $p\rightarrow +\infty$), the dominated convergence theorem gives $$ \int \ell_{p,n} \ d\rho \rightarrow \int 1_{]-K, \ K]} \ell_{n} \ d\rho, $$ \Bin and hence \begin{equation} \int 1_{]-K, \ K]} \ell_{n} \ d\rho - \varepsilon m_{2r} \leq m_n \leq \int 1_{]-K, \ K]} \ell_{n} \ d\rho + \varepsilon m_{2r}.
\end{equation} \Bin For $\varepsilon>0$ fixed, we can let $K\uparrow +\infty$, $1_{]-K, \ K]} \ell_{n} \rightarrow \ell_n$ while being dominated by the integrable function $|\ell_n|$ and hence \begin{equation} \int \ell_{n} \ d\rho - \varepsilon m_{2r} \leq m_n \leq \int \ell_{n} \ d\rho + \varepsilon m_{2r}. \end{equation} \Bin Now, we may let $\varepsilon \rightarrow 0$ to get $$ m_n=\int \ell_{n} \ d\rho. \blacksquare $$
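As a quick numerical illustration of the sufficient condition in Theorem \ref{mp_proba_r} (a side computation outside the formal development; the helper name \texttt{normal\_moment} is ours), one may check the Cauchy-radius condition on the moments of the standard normal law, which is therefore determined by its moments.

```python
from math import factorial

# Moments of the standard normal law: m_{2k} = (2k)!/(2^k k!), odd moments vanish.
def normal_moment(n):
    if n % 2 == 1:
        return 0.0
    k = n // 2
    return factorial(2 * k) / (2 ** k * factorial(k))

# Cauchy-type ratios |n!/m_n|^{1/n} along the even orders n = 2, 4, ..., 40.
ratios = [(factorial(n) / normal_moment(n)) ** (1.0 / n) for n in range(2, 41, 2)]

# The ratios increase without bound, so R = lim |n!/m_n|^{1/n} > 0 holds:
# the series sum m_n x^n / n! has a positive (in fact infinite) radius here.
assert all(b > a for a, b in zip(ratios, ratios[1:]))
print(round(ratios[0], 3), round(ratios[-1], 3))
```

The same computation fails for heavy-tailed moment sequences such as that of the lognormal law, which is the classical example of a law not determined by its moments.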
\section{Introduction} Consider a weighted hypergraph $H=(V,E,w)$ with $w \in \mathbb R_+^E$ and the corresponding energy: For $x \in \mathbb R^V$, \[ Q_H(x) \mathrel{\mathop:}= \sum_{e \in E} w_e \max_{\{u,v\} \in {e \choose 2}} (x_u-x_v)^{2}\,. \] The problem of minimizing the energy $Q_H$ over various convex bodies occurs in many applied contexts, especially in machine learning; we refer to the discussion in \cite{KKTY21b}. In the graph case---when all the hyperedges have cardinality $2$---this corresponds to the quadratic form associated to the weighted Laplacian and carries a physical interpretation as the potential energy of a family of springs indexed by $\{u,v\} \in E$ whose respective endpoints are pinned at $x_u$ and $x_v$. Let us mention the appealing analog for hypergraphs: If we stretch a rubber band around vertices pinned at locations $\{ x_u : u \in e \}$, then $\max_{\{u,v\} \in {e \choose 2}} (x_u-x_v)^2$ is proportional to its potential energy. Here the weight $w_e$ represents the elasticity of the band. \smallskip For hypergraphs, the edge set $E$ could have cardinality as large as $2^{|V|}$, and one can ask if there is a substantially smaller hypergraph that approximates the energy for every configuration of vertices. Soma and Yoshida \cite{SY19} formalized the following notion of spectral sparsification for hypergraphs, generalizing the well-studied notion for graphs \cite{ST11}. Say that a weighted hypergraph $\tilde{H}=(V,\tilde{E},\tilde{w})$ is a {\em spectral $\epsilon$-sparsifier} for $H$ if $\tilde{E} \subseteq E$, and \begin{equation}\label{eq:eps-sparsifier} |Q_H(x) - Q_{\tilde{H}}(x)| \leq \epsilon Q_H(x),\qquad \forall x \in \mathbb R^V\,. \end{equation} We will use $n \mathrel{\mathop:}= |V|$ throughout. The authors of \cite{SY19} showed that one can always find a spectral $\epsilon$-sparsifier $\tilde{H}$ with $|\tilde{E}| \leq O(n^3/\epsilon^2)$.
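For concreteness, the energy $Q_H$ is straightforward to evaluate directly from its definition, since the inner maximum over pairs of $e$ equals the squared spread of the values $\{x_v : v \in e\}$. The following minimal sketch (toy data; the function name is ours, not from the cited works) illustrates this.

```python
# Sketch: evaluate Q_H(x) = sum_e w_e * max_{u,v in e} (x_u - x_v)^2.
# The inner maximum equals (max_{v in e} x_v - min_{v in e} x_v)^2.
def hypergraph_energy(edges, weights, x):
    total = 0.0
    for e, w in zip(edges, weights):
        vals = [x[v] for v in e]
        spread = max(vals) - min(vals)
        total += w * spread ** 2
    return total

# A toy 4-vertex hypergraph: one 3-vertex hyperedge and one pair.
edges = [(0, 1, 2), (2, 3)]
weights = [1.0, 2.0]
x = [0.0, 1.0, 3.0, 4.0]
# First edge contributes 1.0 * (3 - 0)^2 = 9, second 2.0 * (4 - 3)^2 = 2.
print(hypergraph_energy(edges, weights, x))  # 11.0
```

When every hyperedge has cardinality $2$, this reduces to the weighted Laplacian quadratic form of the graph case.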
In \cite{BST19}, the authors established a bound of $O(\epsilon^{-2} D^3 n \log n)$, where $D \mathrel{\mathop:}= \max \{ |e| : e \in E \}$ is often called the {\em rank of $H$,} and subsequently the authors of \cite{KKTY21a} achieved an upper bound of $n D (\epsilon^{-1} \log n)^{O(1)}$. \smallskip Finally, in a recent and remarkable breakthrough, the authors of \cite{KKTY21b} show that one can obtain a spectral sparsifier with at most $O(n (\log n)^3/\epsilon^4)$ hyperedges, bypassing the polynomial dependence on the rank, and coming within $\poly(\epsilon^{-1} \log n)$ factors of the optimal bound. By refining their approach via Talagrand's powerful generic chaining theory, we obtain the following improvement. \begin{theorem}\label{thm:main} For any $n$-vertex weighted hypergraph $H=(V,E,w)$ and $\epsilon > 0$, there is a spectral $\epsilon$-sparsifier $\tilde{H}=(V,\tilde{E},\tilde{w})$ for $H$ with \[ |\tilde{E}| \leq O\left(\frac{\log D}{\epsilon^2} n \log n\right)\,, \] where $D \mathrel{\mathop:}= \max_{e \in E} |e|$. \end{theorem} As in many prior works, \pref{thm:main} is proved by defining a distribution on $E$ and then sampling edges independently from this distribution. For approaches based on independent sampling, the bound of \pref{thm:main} is tight up to a constant factor for every fixed $D$. In particular, this generalizes the analysis of independent random sampling for graph sparsifiers \cite{SS11} where $D=2$. It should be noted that for {\em cut sparsifiers}, the $\log D$ factor can be removed \cite{CKN20}. This corresponds to the weaker notion where we only require that \eqref{eq:eps-sparsifier} holds for $x \in \{-1,1\}^V$. Whether the $\log D$ factor can be removed in general remains an intriguing open question. Our proof of \pref{thm:main} entails an algorithm for constructing the sparsifier $\tilde{H}$ whose running time is polynomial in the size of the input. 
But our sampling analysis can also be applied directly to the faster algorithm presented in \cite{KKTY21b} whose running time is $|E|D \poly(\log |E|)+ \poly(n)$. \medskip \pref{thm:main} was proved independently and concurrently by Jambulapati, Liu, and Sidford \cite{JLS22}, via a closely related approach. While their main chaining result is somewhat less general than the one proved here (see \eqref{eq:lem11prev} below), they also present a near-linear time algorithm for generating suitable sampling probabilities $\{\mu_e : e \in E \}$. This improves the running time to $|E| D \poly(\log |E|)$. \subsection{The random selector method and chaining for subgaussian processes} Suppose we have a probability distribution $\mu \in \mathbb R_+^E$ on hyperedges in $H$. We sample hyperedges $\tilde{E} = \{e_1,e_2,\ldots,e_M\}$ independently according to $\mu$, and define the random weighted hypergraph $\tilde{H}=(V,\tilde{E},\tilde{w})$ so that \[ Q_{\tilde{H}}(x) = \frac{1}{M} \sum_{k=1}^M \frac{w_{e_k}}{\mu_{e_k}} Q_{e_k}(x)\,, \] where we define \[ Q_e(x) \mathrel{\mathop:}= \max_{\{i,j\} \in {e \choose 2}} (x_i-x_j)^2\,, \] and the edge weights \begin{equation}\label{eq:edge-weights} \tilde{w}_e \mathrel{\mathop:}= \frac{\# \left\{ k \in [M] : e_k = e \right\}}{M} \cdot \frac{w_e}{\mu_e}\,. \end{equation} In particular, this gives $\E[Q_{\tilde{H}}(x)] = Q_{H}(x)$ for all $x \in \mathbb R^V$. \smallskip Now in order to find a spectral $\epsilon$-sparsifier, we want to choose $M$ sufficiently large so that \[ \E \max_{x : Q_H(x) \leq 1} \left|Q_H(x)-Q_{\tilde{H}}(x)\right| \leq \epsilon\,. \] To control concentration of $Q_{\tilde{H}}(x)$ around its mean, it suffices to bound the average maximal fluctuations. 
Thus by a standard sort of reduction (see \pref{sec:sampling} and also \cite[Lem 9.1.11]{TalagrandBook2014} for a general formulation), it suffices to prove that for any {\em fixed} hyperedges $e_1,\ldots,e_M \in E$, \begin{equation}\label{eq:suffices} \E \max_{x : Q_H(x) \leq 1} \sum_{k=1}^M \epsilon_k \frac{w_{e_k}}{\mu_{e_k}} Q_{e_k}(x) \leq O(\epsilon M)\,, \end{equation} where $\epsilon_1,\ldots,\epsilon_M \in \{-1,1\}$ are i.i.d. random signs. Thus our task is now to control the left-hand side of \eqref{eq:suffices}. If we define the random variable \[ V_x \mathrel{\mathop:}= \sum_{k=1}^M \epsilon_k \frac{w_{e_k}}{\mu_{e_k}} Q_{e_k}(x)\,, \] then $\{V_x : x \in \mathbb R^n \}$ is a subgaussian process (defined in \eqref{eq:subgaussian-tail}) with respect to the (semi)metric \[ d(x,\hat{x}) \mathrel{\mathop:}= \left(\sum_{k=1}^M \left(\frac{w_{e_k}}{\mu_{e_k}}\right)^2 \left|Q_{e_k}(x)-Q_{e_k}(\hat{x})\right|^2\right)^{1/2}. \] There are well-developed tools for studying quantities like $\E \max \{ V_x : Q_H(x) \leq 1 \}$, but they rely on an understanding of the geometry of the space $(\mathbb R^n,d)$, and a correct choice of distribution $\mu$ is essential for making this geometry well-behaved. \paragraph{Importance sampling} For spectral graph sparsification, one chooses the sampling probability $\mu_e$ to be proportional to the effective resistance across $e$ \cite{SS11}. In order to extend this to hypergraphs, the authors of \cite{BST19} define sampling probabilities $\{\mu_e : e \in E\}$ derived from the graph $G=(V,F)$, where $F \mathrel{\mathop:}= \bigcup_{e \in E} {e \choose 2}$ is a union of cliques on every hyperedge. They take \[ \mu_e \propto \sum_{\{u,v\} \in {e \choose 2}} \mathsf R_{uv}\,, \] where $\mathsf R_{uv}$ denotes the effective resistance between a pair of vertices $u,v$ in $G$. 
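As an illustrative sketch of this importance-sampling step (our own toy code, not an implementation from \cite{BST19}; it assumes \texttt{numpy} and our own helper name), the effective resistances can be read off the pseudoinverse of the weighted graph Laplacian via $\mathsf R_{uv}=(\mathbb{1}_u-\mathbb{1}_v)^\top L_G^{+}(\mathbb{1}_u-\mathbb{1}_v)$:

```python
import numpy as np

# Sketch: effective resistances from the Laplacian pseudoinverse L^+.
# Sampling weights for a hyperedge would then sum the pairwise resistances
# over its clique, as in the scheme described above.
def effective_resistances(n, conductances):
    L = np.zeros((n, n))
    for (u, v), c in conductances.items():
        L[u, u] += c; L[v, v] += c
        L[u, v] -= c; L[v, u] -= c
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse
    R = {}
    for (u, v) in conductances:
        chi = np.zeros(n); chi[u], chi[v] = 1.0, -1.0
        R[(u, v)] = float(chi @ Lp @ chi)
    return R

# Unit-conductance path graph 0-1-2: each edge is a series circuit on its own,
# so both adjacent effective resistances equal 1.
R = effective_resistances(3, {(0, 1): 1.0, (1, 2): 1.0})
print(round(R[(0, 1)], 6), round(R[(1, 2)], 6))  # 1.0 1.0
```

Adding the edge $\{0,2\}$ to form a unit triangle drops each adjacent resistance to $2/3$, reflecting the parallel path.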
To remove the polynomial dependence on $D$, the authors of \cite{KKTY21b} choose a {\em weighted graph} $G=(V,F,c)$ and define \[ \mu_e \propto w_e \max \left\{ \mathsf R_{uv} : \{u,v\} \in \textstyle{{e \choose 2}} \right\}. \] Now $\mathsf R_{uv}$ is the effective resistance in $G$, where edges $\{u,v\} \in F$ have conductance $c_{uv}$. \smallskip Let $L_G$ denote the corresponding (weighted) graph Laplacian, and use $L_G^{+}$ to denote its pseudoinverse. Define $T \mathrel{\mathop:}= \{ v \in \mathbb R^n : Q_H(L_G^{+/2} v) \leq 1\}$. This construction of the sampling probabilities allows us to write \begin{equation}\label{eq:s2} \E \max_{Q_H(x) \leq 1} V_x = \E \max_{v \in T} \sum_{k=1}^M \epsilon_k \max_{\{i,j\} \in e_k} \langle v, y_{ij}^{e_k}\rangle^2\,, \end{equation} for a family of vectors $\{ y_{ij}^{e_k} \}$ that depends on our choice of edge conductances $c \in \mathbb R_+^F$ in $G$. A central component of this approach is the existence of conductances that ensure two key properties: \begin{enumerate} \item $T \subseteq B_2^n \mathrel{\mathop:}= \{ x \in \mathbb R^n : \|x\| \leq 1 \}$, \item $\|y_{ij}^{e_k}\| \leq O(\sqrt{n})$ for all $k=1,\ldots,M$ and $\{i,j\} \in {e_k \choose 2}$. \end{enumerate} We return to a discussion of these properties in a moment. \paragraph{Chaining bounds} Note that the right-hand side of \eqref{eq:s2} can be written as \[ \E \max_{v \in T} \sum_{k=1}^M \epsilon_k N_k(v)^2, \] where $N_k$ is an $\ell_{\infty}$ norm on a subset of the coordinates of $Av$, and $A$ is a matrix whose rows are the vectors $\{y_{ij}^{e_k}\}$. Thus in \pref{sec:bernoulli}, we apply aspects of the generic chaining theory (see the extensive reference \cite{TalagrandBook2014}) to the analysis of such expected maxima. \smallskip For readers familiar with the theory, let us note that a bound of $|\tilde{E}| \leq O(\epsilon^{-2} n (\log n)^3)$ in \pref{thm:main} follows from applying Dudley's entropy bound (cf. 
\eqref{eq:dudley}) in a straightforward way. A bound of $|\tilde{E}| \leq O(\epsilon^{-2} n (\log n)^2)$ follows from a deeper inequality of Talagrand (see \pref{thm:talagrand-uc} and \pref{sec:warmup}) that exploits property (1) above, that $T$ is a subset of the Euclidean unit ball. \smallskip Finally, in order to achieve $|\tilde{E}| \leq O(\epsilon^{-2} \log(D) \cdot n \log n)$, we need to exploit further structure of the norms $\{N_k\}$ in a novel way. Our approach is modeled after Rudelson's geometric argument \cite{Rudelson99} which, roughly speaking, handles the case where each $N_k$ is a $1$-dimensional norm, as well as Talagrand's method of chaining via growth functionals (see \pref{sec:growth-functionals} and \pref{sec:advanced}). \smallskip To state this bound, let us consider arbitrary norms $N_1,\ldots,N_M$ on $\mathbb R^n$. Define: \begin{align*} \kappa &\mathrel{\mathop:}= \E \max_{k \in [M]} N_k(g)\,, \\ % \lambda &\mathrel{\mathop:}= \max_{k \in [M]} \left(\E[N_k(g)^2]\right)^{1/2}\,, \end{align*} where $g$ is a standard $n$-dimensional Gaussian. In \pref{sec:advanced}, we prove that for any $T \subseteq B_2^n$, \begin{equation}\label{eq:lem11prev} \E \sup_{x \in T} \sum_{k=1}^M \epsilon_k N_k(x)^2 \leq O\!\left(\lambda \sqrt{\log n} + \kappa\right) \cdot \sup_{x \in T} \left(\sum_{k=1}^M N_k(x)^2 \right)^{1/2} \end{equation} When $M=m$, each $N_k$ is a $1$-dimensional norm $N_k(x)\mathrel{\mathop:}= |\langle x, a_k\rangle|$ for some $a_k \in \mathbb R^n$, and $T=B_2^n$, this lemma recovers Rudelson's concentration bound for Bernoulli sums of rank-$1$ matrices \cite{Rudelson99b} (as mentioned there, the inequality we state next is a consequence of the noncommutative Khintchine inequalities \cite{LPP91}). 
Observe that $N_k(x)^2 = \langle x, a_k\rangle^2 = \langle x, a_k a_k^* x\rangle$, and using $\|\cdot\|_{op}$ to denote the operator norm, the preceding bound asserts that \[ \E \left\|\sum_{k=1}^m \epsilon_k a_k a_k^*\right\|_{op} = \E \max_{x \in B_2^n} \left\langle x, \left(\sum_{k=1}^m \epsilon_k a_k a_k^*\right) x \right\rangle \leq O(\slog{(m+n)}) \max_{k \in [m]} \|a_k\| \cdot \left\|\sum_{k=1}^m a_k a_k^*\right\|_{op}^{1/2}, \] where we use $\lambda \leq O(1) \max_{k \in [m]} \|a_k\|$ and $\kappa \leq O(\slog{m}) \max_{k \in [m]} \|a_k\|$. When applying \eqref{eq:lem11prev} to hypergraph sparsification, one picks up an additional $\sqrt{\log D}$ factor because each $N_k$ is an $\ell_{\infty}$ norm on a subset of at most $D$ coordinates. \begin{remark} As far as we know, it is an open problem to replicate consequences of the noncommutative Khintchine bound for higher-rank matrices using chaining, i.e., in the setting where $N_k(x) = \|A_k x\|$ for matrices $A_1,\ldots,A_M$. \end{remark} \paragraph{Choosing good conductances} In order to satisfy properties (1) and (2) above, one chooses nonnegative numbers \[ \left\{ c_{ij}^e \geq 0 : \{i,j\} \in \textstyle{{e \choose 2}}, e \in E \right\} \] for which \begin{equation}\label{eq:cap-split} \sum_{\{i,j\} \in {e \choose 2}} c_{ij}^e = w_e,\qquad \forall e \in E\,. \end{equation} Define the edge conductances $c_{ij} \mathrel{\mathop:}= \sum_{e \in E : \{i,j\} \in {e \choose 2}} c_{ij}^e$. As argued in \pref{sec:choose-con}, any such choice satisfies property (1). Let $\mathsf R_{ij}$ denote the effective resistance between $\{i,j\} \in F$ in the weighted graph $G=(V,F,c)$. To satisfy property (2), it suffices that for all hyperedges $e \in E$, the effective resistances $\mathsf R_{ij}$ are the same for all pairs $\{i,j\} \in {e \choose 2}$ with $c_{ij}^e > 0$. (This continues to hold even if the resistances are only comparable up to universal constant factors.) 
Let $J$ denote the all-ones matrix and consider maximizing the quantity \[ \log \det(L_G+J) \] over all choices of $(c_{ij}^e)$ satisfying \eqref{eq:cap-split}. This quantity is a concave function of the conductances $(c_{ij}^e)$ and the KKT conditions for the maximizer establish the desired property for the effective resistances. See \pref{sec:balanced}. This is essentially a reformulation and simplification of the method used in \cite{KKTY21b} for establishing the existence of nice conductances $c : F \to \mathbb R_+$. It is also reminiscent of Barthe's method for analyzing the Gaussian maximizers of the Brascamp-Lieb (and reverse Brascamp-Lieb) inequalities \cite{Barthe98} (see also the treatment in \cite{HM13}). \subsection{Notation} For two expressions $A$ and $B$, we will use the equivalent notations $A \lesssim B$ and $A \leq O(B)$ to denote that there is a constant $C > 0$ such that $A \leq C B$. If $A$ and $B$ depend on some parameters $\alpha_1,\alpha_2,\ldots$, we use the notation $A \lesssim_{\alpha_1,\alpha_2,\ldots} B$ to denote that there is a number $C = C(\alpha_1,\alpha_2,\ldots)$ such that $A \leq C B$. We use $A \asymp B$ to denote the conjunction of $A \lesssim B$ and $B \lesssim A$. A number of vector and matrix norms will appear in what follows. When $x \in \mathbb R^n$ is a vector, $\|x\|$ will always refer to the standard Euclidean norm of $x$. For a positive integer $M \geq 1$, we will sometimes use the notation $[M] \mathrel{\mathop:}= \{1,2,\ldots,M\}$. \section{Extrema of random processes} \label{sec:bernoulli} \subsection{Background on generic chaining} A space $(T,d)$ is called a {\em $K$-quasimetric} if it satisfies \begin{enumerate} \item $d(x,y)=d(y,x)$ for all $x,y \in T$\,. \item $d(x,x) = 0$ for all $x \in T$\,. \item There is a constant $K > 0$ such that \begin{equation*}\label{eq:quasi-metric} d(x,y) \leq K \left(d(x,z)+d(z,y)\right),\qquad \forall x,y,z \in T\,.
\end{equation*} \end{enumerate} Say that $(T,d)$ is a {\em quasimetric space} if $(T,d)$ is a $K$-quasimetric for some $K > 0$. Consider a distance $d$ on $T$. A random process $\{V_x : x \in T\}$ is said to be {\em subgaussian with respect to $d$} if there is a number $\alpha > 0$ such that \begin{equation}\label{eq:subgaussian-tail} \Pr\left(|V_x-V_y| > t\right) \leq \exp\left(-\alpha \frac{t^2}{d(x,y)^2}\right),\qquad t > 0\,. \end{equation} \paragraph{The generic chaining functional} For a quasimetric space $(T,d)$, let us recall Talagrand's generic chaining functional \cite[Def. 2.2.19]{TalagrandBook2014}. Define $N_h \mathrel{\mathop:}= 2^{2^h}$. Then \begin{equation}\label{eq:gamma2} \gamma_2(T,d) \mathrel{\mathop:}= \inf_{\{\mathcal A_h\}} \sup_{x \in T} \sum_{h=0}^{\infty} 2^{h/2} \mathrm{diam}_d(\mathcal A_h(x))\,, \end{equation} where the infimum runs over all sequences $\{\mathcal A_h : h \geq 0\}$ of partitions of $T$ satisfying $|\mathcal A_h| \leq N_h$ for each $h \geq 0$. Note that we use the notation $\mathcal A_h(x)$ for the unique set of $\mathcal A_h$ that contains $x$, and $\mathrm{diam}_d(S) \mathrel{\mathop:}= \sup_{x,y \in S} d(x,y)$ for $S \subseteq T$. The next theorem constitutes the generic chaining upper bound; see \cite[Thm 2.2.18]{TalagrandBook2014}. \begin{theorem} If $\{V_x : x \in T\}$ is a centered subgaussian process satisfying \eqref{eq:subgaussian-tail} with respect to a $K$-quasimetric $(T,d)$, then \begin{equation}\label{eq:chaining} \E \sup_{x \in T} V_x \lesssim_{K,\alpha} \gamma_2(T,d)\,. \end{equation} \end{theorem} Define the entropy numbers $e_h(T,d) \mathrel{\mathop:}= \inf \{ \sup_{t \in T} d(t,T_h) : T_h \subseteq T, |T_h| \leq 2^{2^h} \}$. This is the infimum of numbers $r > 0$ such that $T$ can be covered by at most $2^{2^h}$ balls of radius $r$. 
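Returning for a moment to the quasimetric definition above, a concrete example (ours, for illustration): the squared Euclidean distance $d(x,y) = \|x-y\|^2$ is a $2$-quasimetric but not a metric, since squaring the ordinary triangle inequality and applying $(a+b)^2 \leq 2(a^2+b^2)$ gives the relaxed inequality with $K = 2$. A quick random check:

```python
import numpy as np

rng = np.random.default_rng(0)

def d(x, y):
    # Squared Euclidean distance: a 2-quasimetric, though not a metric.
    return float(np.sum((x - y) ** 2))

K = 2.0
ok = True
for _ in range(1000):
    x, y, z = rng.standard_normal((3, 5))
    ok = ok and d(x, y) <= K * (d(x, z) + d(z, y)) + 1e-12
print(ok)  # True
```

Squared distances of this kind are exactly what arise below, where the chaining distance is built from differences of squared norms.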
A classical way of controlling $\gamma_2(T,d)$ is given by Dudley's entropy bound (see, e.g., \cite[Prop 2.2.10]{TalagrandBook2014}): \begin{equation}\label{eq:dudley} \gamma_2(T,d) \lesssim \sum_{h \geq 0} 2^{h/2} e_h(T,d)\,. \end{equation} But often additional structure of the space $(T,d)$ allows one to improve on \eqref{eq:dudley}. The next lemma is a consequence of \cite[Thm 4.1.11 \& (4.23)]{TalagrandBook2014}. It actually holds whenever $T$ is the unit ball of a uniformly $2$-convex Banach space and $d$ is induced by some (possibly different) norm. \begin{theorem}\label{thm:talagrand-uc} Suppose that $T=B_2^n$ is the unit Euclidean ball in $\mathbb R^n$ and $\|\cdot\|_X$ is a norm on $\mathbb R^n$. Then, \[ \gamma_2(T,\|\cdot\|_X) \lesssim \left(\sum_{h \geq 0} \left(2^{h/2} e_h(T, \|\cdot\|_X)\right)^2\right)^{1/2}. \] \end{theorem} In order to bound the entropy numbers $e_h(B_2^n, \|\cdot\|_X)$, we will use the following classical fact; see, e.g., \cite[(3.15)]{LedouxTalagrand2011}. \begin{lemma}[Dual Sudakov inequality] \label{lem:dual-sudakov} Let $B_2^n$ denote the unit Euclidean ball, and suppose that $\|\cdot\|_X$ is a norm on $\mathbb R^n$. Then \[ e_h(B_2^n,\|\cdot\|_X) \lesssim 2^{-h/2} \E \|g\|_X, \] where $g$ is a standard $n$-dimensional Gaussian. \end{lemma} \begin{corollary}\label{cor:talagrand} Suppose $\|\cdot\|_X$ is a norm on $\mathbb R^n$, and furthermore that $\|\cdot\|_X \leq L \|\cdot\|$ for some $L \geq 1$. Then, \[ \gamma_2(B_2^n,\|\cdot\|_X) \lesssim L + \sqrt{\log n} \E \|g\|_X\,, \] where $g$ is a standard $n$-dimensional Gaussian. \end{corollary} \begin{proof} A straightforward volume argument shows that any set of $\delta$-separated points in $(B_2^n, \|\cdot\|)$ must have cardinality at most $(4/\delta)^n$, and therefore \[ e_h(T,\|\cdot\|) \leq 4 \cdot N_h^{-1/n} = 4\cdot 2^{-2^h/n}\,. 
\] By assumption, we have $e_h(B_2^n,\|\cdot\|_X) \leq L \cdot e_h(B_2^n, \|\cdot\|)$, and therefore \[ e_h(B_2^n,\|\cdot\|_X) \leq 4L \cdot (2^{-2^h/n}). \] Denote $S \mathrel{\mathop:}= \sup_{h \geq 0} 2^{h/2} e_h(T,\|\cdot\|_X)$. Applying \pref{thm:talagrand-uc} yields, for any $h_0 \geq 0$, \begin{align*} \gamma_2(T,d) \lesssim S \sqrt{h_0} + 4L \left(\sum_{h \geq h_0} (2^{h/2} 2^{-2^h/n})^2\right)^{1/2}\,. \end{align*} Choosing $h_0 \geq 2 \log n$ bounds the latter sum by $O(1)$, yielding \[ \gamma_2(T,d) \lesssim S \sqrt{\log n} + L\,. \] To conclude, use \pref{lem:dual-sudakov} to bound $S$. \end{proof} \subsection{Warm up} \label{sec:warmup} The next lemma will allow us to establish the existence of hypergraph spectral sparsifiers with at most $O(\epsilon^{-2} n (\log n)^2)$ hyperedges. It also provides a nice warm up for the more delicate arguments in \pref{sec:advanced}. Let $A : \mathbb R^n \to \mathbb R^m$ denote a linear operator. We use the notation \[ \|A\|_{2\to \infty} \mathrel{\mathop:}= \max_{\|x\| \leq 1} \|Ax\|_{\infty}\,. \] This is equal to the maximum $\ell_2$ norm of a row of $A$. Define the norm \[ \|x\|_{A} \mathrel{\mathop:}= \|Ax\|_{\infty}\,, \] and let us observe the following. \begin{lemma}\label{lem:cov1} If $g$ is a standard $n$-dimensional Gaussian, it holds that \[ \E \|g\|_A \lesssim \|A\|_{2\to \infty} \sqrt{\log m}\,. \] In particular, \pref{lem:dual-sudakov} gives \[ e_h(B_2^n, \|\cdot\|_A) \lesssim 2^{-h/2} \sqrt{\log m} \|A\|_{2\to \infty}\,. \] \end{lemma} \begin{proof} If $a_1,\ldots,a_m$ are the rows of $A$ and $g$ is an $n$-dimensional Gaussian, then \[ \E \|Ag\|_{\infty} = \E \max_{i \in [m]} |\langle g,a_i\rangle| \lesssim \max_{i \in [m]} \|a_i\| \sqrt{\log m} = \|A\|_{2 \to \infty} \sqrt{\log m}\,.\qedhere \] \end{proof} Additionally, let $\f_1,\f_2,\ldots,\f_M : \mathbb R^m \to \mathbb R$ be arbitrary functions. 
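Before continuing, a numerical aside (ours): the estimate of \pref{lem:cov1} is easy to probe by Monte Carlo, comparing an empirical average of $\|Ag\|_{\infty}$ against the standard Gaussian-maxima bound $\|A\|_{2\to\infty} \sqrt{2\log(2m)}$, which is one admissible constant for the $\lesssim$ in the lemma. The matrix below is random and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, trials = 200, 30, 400

A = rng.standard_normal((m, n))
two_to_inf = np.linalg.norm(A, axis=1).max()  # ||A||_{2->inf}: max row norm

# Monte Carlo estimate of E ||Ag||_inf over standard Gaussians g.
G = rng.standard_normal((trials, n))
est = np.abs(G @ A.T).max(axis=1).mean()

# Standard bound on the expected maximum of m centered Gaussians with
# std <= sigma:  E max_i |<a_i, g>| <= sigma * sqrt(2 log(2m)).
bound = two_to_inf * np.sqrt(2 * np.log(2 * m))
print(est <= bound)  # True
```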
\begin{lemma}\label{lem:Esup-warm} For any subset $T \subseteq B_2^n$, it holds that \[ \E \sup_{x \in T} \sum_{j=1}^M \epsilon_j \f_j(A x)^2 \lesssim \sqrt{\log m \log n} \left\|A\right\|_{2 \to \infty} \cdot \sup_{\substack{j \in [M], \\ \|z-z'\|_{\infty} \leq 1}} |\f_j(z)-\f_j(z')| \cdot \sup_{x \in T} \left(\sum_{j=1}^M \f_j(Ax)^2\right)^{1/2}\,, \] where $\epsilon_1,\ldots,\epsilon_M$ are i.i.d. Bernoulli $\pm 1$ random variables. \end{lemma} \begin{proof} Define \begin{align} \alpha &\mathrel{\mathop:}= \max_{j \in [M]} \sup_{\|z-z'\|_{\infty} \leq 1} |\f_j(z)-\f_j(z')| \,,\label{eq:w1} \\ \beta &\mathrel{\mathop:}= \sup_{x \in T} \left(\sum_{j=1}^M \f_j(Ax)^2\right)^{1/2},\label{eq:w2} \\ V_x &\mathrel{\mathop:}= \sum_{j=1}^M \epsilon_j \f_j(Ax)^2\,,\nonumber \end{align} and note that $\{V_x : x \in \mathbb R^n \}$ is a subgaussian process with respect to the distance \[ d(x,\hat{x}) \mathrel{\mathop:}= \left(\sum_{j=1}^M \left|\f_j(Ax)^2 - \f_j(A\hat{x})^2\right|^2\right)^{1/2}\,. \] Thus in light of \eqref{eq:chaining}, it suffices to prove that \begin{equation}\label{eq:goal-w} \gamma_2(T,d) \lesssim \sqrt{\log m \log n} \|A\|_{2\to\infty} \cdot \alpha\beta\,. \end{equation} Note that for $x,\hat{x} \in T$, \begin{align} d(x,\hat{x})^2 &= \sum_{j=1}^M \left(\f_j(Ax) - \f_j(A\hat{x})\right)^2 \left(\f_j(Ax) + \f_j(A\hat{x})\right)^2 \nonumber \\ &\leq \,2 \sum_{j=1}^M \left(\f_j(Ax) - \f_j(A\hat{x})\right)^2 \left(\f_j(Ax)^2 + \f_j(A\hat{x})^2\right) \nonumber \\ &\stackrel{\mathclap{\eqref{eq:w1}}}{\leq} \,2 \alpha^2\,\|A(x-\hat{x})\|_{\infty}^2\sum_{j=1}^M \left(\f_j(Ax)^2 + \f_j(A\hat{x})^2\right) \nonumber \\ &\stackrel{\mathclap{\eqref{eq:w2}}}{\leq} \,4 \alpha^2\beta^2 \, \|x-\hat{x}\|^2_A\,. 
\label{eq:last-warm-line} \end{align} In particular, we have \begin{equation}\label{eq:aineq-1} \gamma_2(T, d) \leq 2\alpha\beta \cdot \gamma_2(T, \|\cdot\|_A) \leq 2 \alpha\beta \cdot\gamma_2(B_2^n, \|\cdot\|_A), \end{equation} where the last inequality uses $T \subseteq B_2^n$. We can thus apply \pref{lem:cov1} and \pref{cor:talagrand} with $\|\cdot\|_X = \|\cdot\|_A$ and $L \mathrel{\mathop:}= \|A\|_{2\to \infty}$ to conclude that \[ \gamma_2(B_2^n, \|\cdot\|_A) \lesssim \|A\|_{2\to \infty} \sqrt{\log m\log n}\,. \] Combining this with \eqref{eq:aineq-1} completes our verification of \eqref{eq:goal-w}. \end{proof} In \pref{sec:advanced}, we will obtain an improved bound by using convexity in a stronger way. In particular, we will assume that each of the functions $\f_j$ in \pref{lem:Esup-warm} is a norm on $\mathbb R^m$. \subsection{Growth functionals} \label{sec:growth-functionals} Talagrand introduced a powerful way to control $\gamma_2(T,d)$ via the existence of certain growth functionals. For $x \in T$ and $\rho > 0$, define the ball \begin{equation}\label{eq:balldef} B_d(x,\rho) \mathrel{\mathop:}= \{ y \in T : d(x,y) \leq \rho \}\,. \end{equation} \begin{definition}[Separated sets]\label{def:ar-separated} Let $(T,d)$ denote a metric space and consider numbers $a > 0, r \geq 4$. Say that subsets {\em $H_1,\ldots,H_m \subseteq T$ are $(a,r)$-separated} if \[ H_{\ell} \subseteq B_d(x_{\ell}, a/r), \quad \ell=1,\ldots,m\,, \] where $x_1,\ldots,x_m \in T$ are points satisfying \begin{equation}\label{eq:sep-centers} a \leq d(x_{\ell},x_{\ell'}) \leq ar,\quad \forall \ell \neq \ell'. 
\end{equation} \end{definition} \begin{definition}[The growth condition]\label{def:growth} Consider nonnegative functionals $\{F_h : h \geq 0\}$ defined on subsets of a metric space $(T,d)$ and which satisfy the following two conditions for every $h \geq 0$: \begin{align*} F_h(S) &\leq F_h(S'),\qquad \forall S \subseteq S' \subseteq T\,, \\ F_{h+1}(S) &\leq F_{h}(S),\qquad \forall \ S \subseteq T\,. \end{align*} Say that such functionals satisfy the {\em growth condition with parameters $r \geq 4$ and $c^* > 0$} if for any integer $h \geq 0$ and $a > 0$, the following holds true with $m=N_{h+1}$: For each collection of subsets $H_1,\ldots,H_m \subseteq T$ that are $(a,r)$-separated, we have \begin{equation}\label{eq:no-packing} F_{h}\left(\bigcup_{\ell \leq m} H_{\ell}\right) \geq c^* a 2^{h/2} + \min_{\ell \leq m} F_{h+1}(H_{\ell})\,. \end{equation} \end{definition} \begin{theorem}[{\cite[Thm 2.3.16]{TalagrandBook2014}}] \label{thm:f-chaining} Let $(T,d)$ be a $K$-quasimetric space and consider a sequence of functionals $\{F_h\}$ satisfying the growth condition (cf. \pref{def:growth}) with parameters $r \geq 4$ and $c^* > 0$. Then, \[ \gamma_2(T,d) \lesssim_K \frac{r}{c^*} F_0(T) + r\cdot\mathrm{diam}_d(T)\,. \] \end{theorem} \begin{remark}[Packing/covering duality] For the reader encountering \pref{def:growth} and \pref{thm:f-chaining} for the first time, the role of the functionals $\{F_h\}$ might appear mysterious. Some intuition can be gained by considering the duality between covering and packing: A set $S$ in some metric space can be covered by $m$ balls of radius $r > 0$ if it is impossible to find $m$ points in $S$ that are pairwise separated by distance $r$. The quantity $\gamma_2(T,d)$ (cf. \eqref{eq:gamma2}) is a sort of multiscale covering functional. The growth functionals $\{F_h\}$ measure the ``size'' of packings of various cardinalities, and \eqref{eq:no-packing} asserts a form of packing impossibility. 
This makes \pref{thm:f-chaining} a multiscale analog of the simple packing/covering argument recalled above. Those familiar with convex optimization and duality may find the approach of \cite{BDOM21} instructive in this regard. There it is shown that the corresponding {\em fractional} multiscale covering and packing values are equal by convex duality, and then a rounding argument establishes that the integral versions are equivalent up to constant factors. \end{remark} We will use the following corollary of \pref{thm:f-chaining} that simplifies the construction of functionals if we have a bound on the growth rate of nets in $(T,d)$. \begin{corollary}\label{cor:f-chaining} Let $(T,d)$ be a $K$-quasimetric space and assume there are numbers $k, L \geq 1$ and $r \geq 4$ such that for every $a > 0$, \begin{equation}\label{eq:growth-cond} H_1,\ldots,H_m \subseteq T \textrm{ are $(a,r)$-separated} \implies m \leq \left(\frac{L}{a}\right)^k. \end{equation} Let $h_0$ be the largest integer $h \geq 0$ such that \begin{equation}\label{eq:hlk} 2^{2^{h}} \leq 2^{k(h-1)/2}\,. \end{equation} Consider a sequence of functionals $\{F_0,F_1,\ldots,F_{h_0}\}$ satisfying the growth condition \eqref{eq:no-packing} with parameters $r$ and $c^* > 0$. Then, \begin{equation}\label{eq:f-chaining} \gamma_2(T,d) \lesssim_K \frac{r}{c^*} F_0(T) + r \left(\mathrm{diam}_d(T) + L\right)\,. \end{equation} \end{corollary} \begin{proof} Define the numbers \begin{align*} c_j &\mathrel{\mathop:}= c^* L \cdot 2^{-2^j/k} 2^{(j-1)/2} \\ C_0 &\mathrel{\mathop:}= \sum_{j=h_0+1}^{\infty} c_j\,, \end{align*} and note that $C_0 \lesssim c^* L$, since \eqref{eq:hlk} is violated for every $h \geq h_0 + 1$.
Define a new family of functionals $\{\tilde{F}_h : h \geq 0\}$ so that for every $S \subseteq T$, \begin{align*} \tilde{F}_h(S) &\mathrel{\mathop:}= F_h(S) + C_0\,, \quad && h=0,1,\ldots,h_0\,, \\ \tilde{F}_h(S) &\mathrel{\mathop:}= F_{h_0}(S) + C_0 - \sum_{j=h_0+1}^{h} c_j\,, \quad && h > h_0\,. \end{align*} By construction, these satisfy the growth condition \pref{def:growth}: For $h < h_0$ this is immediate from the growth condition for $\{F_h\}$, and for $h \geq h_0$, if $H_1,\ldots,H_m \subseteq T$ are $(a,r)$-separated sets with $m=2^{2^{h+1}}$, then \[ \tilde{F}_{h}\left(\bigcup_{\ell \leq m} H_{\ell}\right) = c_{h+1} + \tilde{F}_{h+1}\left(\bigcup_{\ell \leq m} H_{\ell}\right) \geq c_{h+1} + \min_{\ell \leq m} \tilde{F}_{h+1}\left(H_{\ell}\right) \geq c^* a 2^{h/2} + \min_{\ell \leq m} \tilde{F}_{h+1}\left(H_{\ell}\right), \] where the last inequality uses the fact that $a \leq L 2^{-2^{h+1}/k}$ from \eqref{eq:growth-cond}, so that $c_{h+1} = c^* L\, 2^{-2^{h+1}/k}\, 2^{h/2} \geq c^* a 2^{h/2}$. Moreover, we have \[ \tilde{F}_0(T) = F_0(T) + C_0 \leq F_0(T) + O(c^* L)\,, \] and therefore we can apply \pref{thm:f-chaining} to $\{\tilde{F}_h\}$ to complete the proof. \end{proof} \subsection{Further exploiting convexity} \label{sec:advanced} We will now use the growth functional approach (cf. \pref{sec:growth-functionals}) to prove a more elaborate upper bound under the additional assumption that our summands are derived from norms. This will allow us in \pref{sec:h-sparse} to find spectral $\epsilon$-sparsifiers with $O\!\left(\frac{\log D}{\epsilon^2} n \log n\right)$ hyperedges. \smallskip Let $N_1,N_2,\ldots,N_M$ be norms on $\mathbb R^n$ and define \begin{align*} \kappa &\mathrel{\mathop:}= \E \max_{j \in [M]} N_j(g)\,, \\ \lambda &\mathrel{\mathop:}= \max_{j \in [M]} \left(\E[N_j(g)^2]\right)^{1/2}\,, \end{align*} where $g$ is a standard $n$-dimensional Gaussian.
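For intuition about the scales of these two quantities, a numerical aside (ours, not part of the argument): taking the coordinate norms $N_j(x) = |x_j|$ on $\mathbb R^n$ with $M = n$, one has $\lambda = 1$ exactly, while $\kappa = \E \|g\|_{\infty}$ is of order $\sqrt{\log n}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 500, 2000

# Coordinate norms N_j(x) = |x_j| on R^n:
#   lambda = max_j (E N_j(g)^2)^{1/2} = 1 exactly,
#   kappa  = E max_j N_j(g) = E ||g||_inf, of order sqrt(log n).
G = rng.standard_normal((trials, n))
kappa_est = np.abs(G).max(axis=1).mean()

lo, hi = np.sqrt(np.log(n)), np.sqrt(2 * np.log(2 * n))
print(lo < kappa_est < hi)  # True
```

This gap between $\lambda$ and $\kappa$ is what the two terms in \pref{lem:Esup} below are designed to capture.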
\begin{lemma}\label{lem:Esup} For any subset $T \subseteq B_2^n$, it holds that \[ \E \sup_{x \in T} \sum_{j=1}^M \epsilon_j N_j(x)^2 \lesssim \left(\lambda \sqrt{\log n} + \kappa\right) \cdot \sup_{x \in T} \left(\sum_{j=1}^M N_j(x)^2 \right)^{1/2}\,, \] where $\epsilon_1,\ldots,\epsilon_M$ are i.i.d. Bernoulli $\pm 1$ random variables. \end{lemma} Before proving the lemma, let us illustrate a corollary that we will use to construct hypergraph sparsifiers. Consider a linear operator $A : \mathbb R^n \to \mathbb R^m$, and suppose that each $N_i$ is a (weighted) $\ell_{\infty}$ norm on some subset $S_i \subseteq [m]$ of the coordinates: \begin{equation}\label{eq:norm-def} N_i(z) = \max_{j \in S_i} w_j |(Az)_j|\,,\qquad w \in [0,1]^{S_i}\,. \end{equation} Let $a_1,\ldots,a_m$ denote the rows of $A$, and observe that $(Ag)_j = \langle a_j,g\rangle$ is a normal random variable with variance $\|a_j\|^2$, and therefore \[ \E[N_i(g)^2] = \E \max_{j \in S_i} w_j^2 |\langle a_j,g\rangle|^2 \lesssim \max_{j \in S_i} \|a_j\|^2 \cdot \log |S_i|\,, \] so that $\lambda \lesssim \|A\|_{2\to\infty} \sqrt{\log D}$ whenever $\max_{i \in [M]} |S_i| \leq D$. Similarly, we have \[ \kappa = \E \max_{i \in [M]} \max_{j \in S_i} w_j |\langle a_j,g\rangle| \leq \E \max_{j \in [m]} |\langle a_j,g\rangle| \lesssim \|A\|_{2\to\infty} \sqrt{\log m}\,. \] \begin{corollary}\label{cor:lp-version} If the norms $N_1,\ldots,N_M$ are of the form \eqref{eq:norm-def} for some $A : \mathbb R^n \to \mathbb R^m$ and subsets $S_1,\ldots,S_M \subseteq [m]$ with $\max_{i \in [M]} |S_i| \leq D$, then for any $T \subseteq B_2^n$, it holds that \[ \E \sup_{x \in T} \sum_{j=1}^M \epsilon_j N_j(x)^2 \lesssim \|A\|_{2\to\infty} \sqrt{\log (m+n) \log D} \cdot \sup_{x \in T} \left(\sum_{j=1}^M N_j(x)^2 \right)^{1/2}\,, \] where $\epsilon_1,\ldots,\epsilon_M$ are i.i.d. Bernoulli $\pm 1$ random variables.
\end{corollary} The proof of \pref{lem:Esup} is modeled after arguments of Rudelson \cite{Rudelson99} and Talagrand; see \cite[\S 16.7]{TalagrandBook2014} and the historical notes in \cite[\S 16.10]{TalagrandBook2014}. A version of the latter argument first appeared in \cite{Rudelson99}, as a simplification of Rudelson's original construction of an explicit majorizing measure. In the proof of \cite[Prop 16.7.4]{TalagrandBook2014}, one encounters growth functionals of the form $F(S) = 1 - \inf \{ \|u\| : u \in \conv(S) \}$, where $\|\cdot\|$ is a uniformly $2$-convex norm. We recall this definition. \begin{definition}[Uniform $p$-convexity] A Banach space $Z$ is called {\em uniformly $p$-convex} if there is a number $\eta > 0$ such that for all $x,y \in Z$ with $\|x\|_Z,\|y\|_Z \leq 1$, \[ \left\|\frac{x+y}{2}\right\|_Z \leq 1 - \eta \|x-y\|_Z^p\,. \] \end{definition} \smallskip We remark that the statement of \pref{lem:Esup} actually holds when $T$ is a subset of the unit ball of any uniformly $2$-convex norm on $\mathbb R^n$ (with an implicit constant that depends on $\eta$). We will instead employ functionals of the form \[ F(S) = 2 - \inf \left\{ \|u\|^2 + \sum_{j=1}^M N_j(u)^2 : u \in \conv(S) \right\}. \] Problematically, the norm $u \mapsto \left(\|u\|^2 + \sum_{j=1}^M N_j(u)^2\right)^{1/2}$ is potentially very far from uniformly $2$-convex, thus we have to be careful in using only $2$-convexity of the Euclidean norm, along with $2$-convexity of the ``outer'' $\ell_2$ norm of the $N_j$'s. This requires application of the inequality $|N_j(x)-N_j(\hat{x})| \leq N_j(x-\hat{x})$ only at judiciously chosen points in the argument. We offer some further explanation in \pref{rem:discussion} after the proof. \begin{proof}[Proof of \pref{lem:Esup}] For a set $S \subseteq \mathbb R^n$, let $\conv(S)$ denote the closed convex hull of $S$. 
Note that by convexity, \[ \sup_{x \in T} \left(\sum_{j=1}^M N_j(x)^2\right)^{1/2} = \sup_{x \in \conv(T)} \left(\sum_{j=1}^M N_j(x)^2\right)^{1/2}. \] Therefore we may replace $T$ by $\conv(T)$ and henceforth assume that $T$ is compact and convex. \smallskip By scaling $\{N_j\}$, we may assume that \begin{align} \sup_{x \in T} \sum_{j=1}^M N_j(x)^2 &= 1\,. \label{eq:opnorm1} \end{align} Define $V_x \mathrel{\mathop:}= \sum_{j=1}^M \epsilon_j N_j(x)^2$. Then $\{V_x : x \in \mathbb R^n\}$ is a subgaussian process with respect to the metric \[ \tilde{d}(x,\hat{x}) \mathrel{\mathop:}= \left(\sum_{j=1}^M |N_j(x)^2 - N_j(\hat{x})^2|^2\right)^{1/2}, \] therefore from \eqref{eq:chaining}, we have \begin{equation}\label{eq:chaining0} \E \sup_{x \in T} V_x \lesssim \gamma_2(T,\tilde{d})\,. \end{equation} \paragraph{Passing to a nicer distance} Define the related distance \begin{align*} d(x,\hat{x}) &\mathrel{\mathop:}= \left(\sum_{j=1}^M N_j(x-\hat{x})^2 \left(N_j(x)^2 + N_j(\hat{x})^2\right)\right)^{1/2}, \end{align*} and note that for all $x,\hat{x} \in \mathbb R^n$, \begin{align*} \tilde{d}(x,\hat{x})^2 &= \sum_{j=1}^M \left(N_j(x)-N_j(\hat{x})\right)^2 \left(N_j(x) + N_j(\hat{x})\right)^2 \nonumber \\ &\leq 2 \sum_{j=1}^M N_j(x-\hat{x})^2 \left(N_j(x)^2 + N_j(\hat{x})^2\right) = 2\, d(x,\hat{x})^2\,. \label{eq:basic-compare} \end{align*} We will observe momentarily that \begin{equation}\label{eq:real-qm} d(x,\hat{x}) \leq 2\sqrt{2} \left(d(x,y)+d(y,\hat{x})\right),\qquad \forall x,\hat{x},y \in \mathbb R^n\,. \end{equation} Since $\tilde{d} \leq \sqrt{2} d$ and $d$ is a quasimetric, \eqref{eq:chaining} gives \begin{equation*}\label{eq:chaining1} \E \sup_{x \in T} V_x \lesssim \gamma_2(T,d)\,, \end{equation*} and thus our goal is to establish that \begin{equation}\label{eq:goal} \gamma_2(T,d) \lesssim \lambda \sqrt{\log n} + \kappa\,. 
\end{equation} \begin{lemma}\label{lem:simple-qm} For any metric space $(X,D)$ and $x_0 \in X$, it holds that the distance \[ \tilde{D}(x,\hat{x}) \mathrel{\mathop:}= D(x,\hat{x}) \left(D(x,x_0)+D(\hat{x},x_0)\right) \] is a $2$-quasimetric. \end{lemma} \begin{proof} Define $\psi(x) \mathrel{\mathop:}= D(x,x_0)$ and consider $x,\hat{x},y \in X$. Then, \begin{align*} \tilde{D}(x,\hat{x}) &\leq (D(x,y) + D(\hat{x},y)) \left(\psi(x)+\psi(\hat{x})\right) \\ &\leq D(x,y) \left(\psi(x) + \psi(y) + D(\hat{x},y)\right) + D(\hat{x},y) \left(\psi(\hat{x}) + \psi(y) + D(x,y)\right) \\ &\leq \tilde{D}(x,y) + \tilde{D}(\hat{x},y) + 2 D(x,y) D(\hat{x},y)\,. \end{align*} Now use $2 D(x,y) D(\hat{x},y) \leq D(x,y)^2 + D(\hat{x},y)^2 \leq \tilde{D}(x,y) + \tilde{D}(\hat{x},y)$, completing the proof. \end{proof} Applying the preceding lemma with $D(x,\hat{x}) = N_j(x-\hat{x})$ and $x_0=0$ shows that the distance $(x,\hat{x}) \mapsto N_j(x-\hat{x}) \left(N_j(x)^2+N_j(\hat{x})^2\right)^{1/2}$ is a $2\sqrt{2}$-quasimetric for each $j=1,\ldots,M$, since $(N_j(x)^2+N_j(\hat{x})^2)^{1/2}$ and $N_j(x)+N_j(\hat{x})$ agree up to a factor of $\sqrt{2}$. Therefore $d$ is a $2\sqrt{2}$-quasimetric on $\mathbb R^n$, verifying \eqref{eq:real-qm}. \paragraph{Balls in $(\mathbb R^n,d)$ are approximately convex} Recall the definition of the balls $B_d(x,\rho)$ from \eqref{eq:balldef}. \begin{lemma}\label{lem:approx-convex} For any $x \in \mathbb R^n$ and $\rho > 0$, it holds that \[ \conv(B_d(x,\rho)) \subseteq B_d(x,4\rho)\,. \] \end{lemma} \begin{proof} For $y \in B_d(x,\rho)$, we have \begin{equation}\label{eq:bnd1} \left(\sum_{j=1}^M N_j(x-y)^2 N_j(x)^2\right)^{1/2} \leq \rho\,, \end{equation} as well as \begin{equation} \sqrt{\rho} \geq d(x,y)^{1/2} = \left(\sum_{j=1}^M N_j(x-y)^2 \left(N_j(x)^2+N_j(y)^2\right)\right)^{1/4} \geq \left(\frac12 \sum_{j=1}^M N_j(x-y)^4\right)^{1/4},\label{eq:bnd2} \end{equation} where the final inequality uses $N_j(x-y) \leq N_j(x)+N_j(y)$.
Since the left-hand side of \eqref{eq:bnd1} and the right-hand side of \eqref{eq:bnd2} are both convex functions of $y$, these inequalities remain true for all $y \in \conv(B_d(x,\rho))$. In particular, for any $y \in \conv(B_d(x,\rho))$, we can use $a^2+b^2 \leq 4a^2 + 2(a-b)^2$ to write \begin{align*} d(x,y) &\leq \left(\sum_{j=1}^M N_j(x-y)^2 \left(4 N_j(x)^2 + 2 (N_j(x)-N_j(y))^2\right)\right)^{1/2} \\ &\leq 2 \left(\sum_{j=1}^M N_j(x-y)^2 N_j(x)^2\right)^{1/2} + \sqrt{2} \left(\sum_{j=1}^M N_j(x-y)^4\right)^{1/2} \leq 4 \rho\,.\qedhere \end{align*} \end{proof} \paragraph{Covering estimates} Define now the following norms on $\mathbb R^n$: \begin{align*} \|x\|_{\mathcal N} &\mathrel{\mathop:}= \max_{j \in [M]} N_j(x)\,, \\ \|x\|_{\mathcal E(u)} &\mathrel{\mathop:}= \left(\sum_{j=1}^M N_j(x)^2 N_j(u)^2\right)^{1/2}, \quad u \in \mathbb R^n\,. \end{align*} \begin{lemma}\label{lem:dist-norms} For all $x,\hat{x},u \in \mathbb R^n$, \begin{align*} d(x,\hat{x})^2 &\leq 2\, \|x-\hat{x}\|_{\mathcal N}^2 \left(\sum_{j=1}^M \left(N_j(x)-N_j(u)\right)^2+\sum_{j=1}^M \left(N_j(\hat{x})-N_j(u)\right)^2\right) + 4 \|x-\hat{x}\|_{\mathcal E(u)}^2\,. \end{align*} \end{lemma} \begin{proof} Use the inequalities \begin{align} % N_j(x)^2 \leq 2 (N_j(x)-N_j(u))^2 + 2N_j(u)^2\,, \qquad x,u \in \mathbb R^n\nonumber \end{align} to write \begin{align*} \sum_{j=1}^M N_j(x-\hat{x})^2 N_j(x)^2 % % & \leq 2 \|x-\hat{x}\|_{\mathcal N}^2 \sum_{j=1}^M \left(N_j(x)-N_j(u)\right)^2 + 2 \sum_{j=1}^M N_j(x-\hat{x})^2 N_j(u)^2 \\ &= 2 \|x-\hat{x}\|_{\mathcal N}^2 \sum_{j=1}^M \left(N_j(x)-N_j(u)\right)^2 + 2 \|x-\hat{x}\|^2_{\mathcal E(u)}\,.\qedhere \end{align*} \end{proof} \begin{lemma}\label{lem:cov2} It holds that \begin{align*} e_h(B_2^n, \|\cdot\|_{\mathcal N}) &\lesssim 2^{-h/2} \kappa\,,\\ e_h(B_2^n, \|\cdot\|_{\mathcal E(u)}) &\lesssim 2^{-h/2} \lambda\,,\quad \forall u \in T\,. 
\end{align*} \end{lemma} \begin{proof} Both inequalities follow readily from \pref{lem:dual-sudakov}: If $g$ is a standard $n$-dimensional Gaussian, then \[ e_h(B_2^n, \|\cdot\|_{\mathcal N}) \lesssim 2^{-h/2} \E \|g\|_{\mathcal N} = 2^{-h/2} \kappa, \] by the definition of $\kappa$. For the second inequality, \[ e_h(B_2^n, \|\cdot\|_{\mathcal E(u)}) \lesssim 2^{-h/2} \E \|g\|_{\mathcal E(u)}\,. \] Now use convexity of the square to bound \[ \left(\E \|g\|_{\mathcal E(u)}\right)^2 \leq \E \|g\|_{\mathcal E(u)}^2 = \sum_{j=1}^M N_j(u)^2 \E [N_j(g)^2] \leq \lambda^2\,, \] where the final line uses the definition of $\lambda$ and $\sum_{j=1}^M N_j(u)^2 \leq 1$ by \eqref{eq:opnorm1}, because $u \in T$. \end{proof} We also need a basic estimate that we will use to apply \pref{cor:f-chaining}. Observe that for $x,\hat{x} \in T$, \begin{equation}\label{eq:diam-bnd} d(x,\hat{x}) \stackrel{\substack{\eqref{eq:opnorm1}}}{\leq} \sqrt{2} \|x-\hat{x}\|_{\mathcal N} \leq \sqrt{2} \left(\|x\|_{\mathcal N} + \|\hat{x}\|_{\mathcal N}\right) \leq 2 \sqrt{2}\,, \end{equation} where the last inequality uses $\|x\|_{\mathcal N} \leq (\sum_{j=1}^M N_j(x)^2)^{1/2} \leq 1$ for $x \in T$, by \eqref{eq:opnorm1}. \begin{lemma} \label{lem:simple-count} For any $a > 0$, if $x_1,\ldots,x_K \in T$ satisfy $d(x_i,x_j) \geq a$ for $i \neq j$, then, $K \leq \left(\frac{6}{a}\right)^n$. \end{lemma} \begin{proof} As noted above, we have $\|x\|_{\mathcal N} \leq 1$ for $x \in T$, and \eqref{eq:diam-bnd} gives $\|x_i-x_j\|_{\mathcal N} \geq a/\sqrt{2}$ for $i \neq j$. Therefore by a simple volume argument (valid for any norm on $\mathbb R^n$): \[ K \leq \left(1+\frac{2\sqrt{2}}{a}\right)^n \leq \left(\frac{6}{a}\right)^n, \] where the last inequality follows because if $K \geq 2$, then \eqref{eq:diam-bnd} implies $a \leq 2\sqrt{2}$. 
\end{proof} \paragraph{The growth functionals} Define a norm on $\mathbb R^n$ by \begin{equation}\label{eq:iii} \vertiii{u} \mathrel{\mathop:}= \left(\|u\|^2 + \sum_{j=1}^M N_j(u)^2\right)^{1/2}\,. \end{equation} Denote $r \mathrel{\mathop:}= 64$. Let $h_0$ be the largest integer so that $2^{2^{h_0}} \leq 2^{n(h_0-1)/2}$, and note that $h_0 \leq O(\log n)$. Define \begin{align} F_h(S) &\mathrel{\mathop:}= 2 - \inf \left\{ \vertiii{u}^2 : u \in \conv(S) \right\} + \frac{\max(h_0+1-h,0)}{\log n}, \qquad && h=0,1,\ldots,h_0\,.\label{eq:Fh} \end{align} Recall that $T \subseteq B_2^n$ and, along with \eqref{eq:opnorm1}, this gives $\max_{u \in T} \vertiii{u}^2 \leq 2$. Since $h_0 \leq O(\log n)$, we have $F_0(T) \leq O(1)$. \smallskip From \eqref{eq:diam-bnd}, we have $\mathrm{diam}_d(T) \leq O(1)$. Note also that from \pref{lem:simple-count}, it holds that the packing assumption \eqref{eq:growth-cond} is satisfied with $L \leq O(1)$ and $k = n$. Therefore if we can verify that our functionals satisfy the growth conditions \eqref{eq:no-packing} for $h=0,1,\ldots,h_0$, then we will conclude from \eqref{eq:f-chaining} that \begin{equation}\label{eq:gamma-pre} \gamma_2(T,d) \lesssim \frac{1}{c^*} + 1\,. \end{equation} \medskip \paragraph{Consideration of $(a,r)$-separated sets} Define $K \mathrel{\mathop:}= N_{h+1}$ and consider points $\{x_{1},\ldots,x_K \} \subseteq T$ such that $d(x_{\ell},x_{\ell'}) \geq a$ whenever $\ell \neq \ell'$, along with sets $H_{\ell} \subseteq T \cap B_d(x_{\ell}, a/r)$ for $\ell=1,\ldots,K$. \medskip Let $z_0$ be a minimizer of $\vertiii{u}^2$ over $u \in\conv(\bigcup_{\ell \leq K} H_{\ell})$, and note that $z_0 \in T$ since $T$ is closed and convex. Define $\theta_0 \mathrel{\mathop:}= \vertiii{z_0}^2$ and \[ \theta \mathrel{\mathop:}= \max_{\ell \leq K} \min \left\{ \vertiii{u}^2 : u \in \conv(H_{\ell}) \right\}, \] and for each $\ell \in [K]$, let $z_{\ell} \in \conv(H_{\ell})$ be such that $\vertiii{z_{\ell}}^2 \leq \theta$.
Note that $\conv(H_{\ell}) \subseteq \conv(B_d(x_{\ell},a/r)) \subseteq B_d(x_{\ell},4a/r)$, where the latter inclusion follows from \pref{lem:approx-convex}. Since $z_{\ell} \in \conv(H_{\ell})$, we have $d(x_{\ell},z_{\ell}) \leq 4a/r$ for all $\ell \in \{1,\ldots,K\}$. In particular, for $\ell,\ell' \in \{1,\ldots,K\}$ with $\ell \neq \ell'$, we can use the quasimetric inequalities \eqref{eq:real-qm} to write \begin{align*} a \leq d(x_{\ell},x_{\ell'}) &\leq 2 \sqrt{2} \left(d(x_{\ell},z_{\ell}) + d(z_{\ell},x_{\ell'})\right) \\ &\leq 2 \sqrt{2}\,\frac{4a}{r} + 8 \left(d(z_{\ell},z_{\ell'}) + d(z_{\ell'},x_{\ell'})\right) \leq (8+2\sqrt{2}) \frac{4a}{r} + 8\, d(z_{\ell},z_{\ell'}). \end{align*} Using our choice $r = 64$, we conclude that for $\ell \neq \ell'$, \begin{equation}\label{eq:still-sep} d(z_{\ell},z_{\ell'}) \geq \frac{a}{32}\,. \end{equation} Observe that \[ F_h\left(\bigcup_{\ell \leq K} H_{\ell}\right) - \min_{\ell \leq K} F_{h+1}(H_{\ell}) = (2-\theta_0) - (2-\theta) + \frac{1}{\log n} = \theta - \theta_0 + \frac{1}{\log n}\,, \] thus to verify that the growth condition \pref{def:growth} holds for $\{F_h\}$, our goal is to show that \begin{equation}\label{eq:goal-growth} \theta - \theta_0 + \frac{1}{\log n} \gtrsim \frac{2^{h/2} a}{\kappa + \lambda \sqrt{\log n}}\,,\qquad h=0,1,\ldots,h_0\,. \end{equation} This will confirm the growth condition with $c^* \asymp \left(\lambda \sqrt{\log n} + \kappa\right)^{-1}$, and therefore \eqref{eq:gamma-pre} yields our desired goal \eqref{eq:goal}. \smallskip The next lemma exploits $2$-uniform convexity of the $\ell_2$ distance. Note that the claimed inequality would fail (in general) if the left-hand side were replaced by the larger quantity $\vertiii{z_0-z_{\ell}}^2$, as $\vertiii{\cdot}$ is not necessarily $2$-convex. \begin{lemma}\label{lem:uc2} For every $\ell=1,\ldots,K$, it holds that \[ \left\|z_0-z_{\ell}\right\|^2 + \sum_{j=1}^M \left(N_j(z_0)-N_j(z_{\ell})\right)^2 \leq 2 (\theta-\theta_0)\,.
\] \end{lemma} \begin{proof} Let us use \[ \left(\frac{a-b}{2}\right)^2 = \frac12 a^2 + \frac12 b^2 - \left(\frac{a+b}{2}\right)^2. \] to write \begin{align*} \left\|\frac{z_0-z_{\ell}}{2}\right\|^2 + \sum_{j=1}^M \left(\frac{N_j(z_0)-N_j(z_{\ell})}{2}\right)^2 &= \frac12 \left(\|z_{\ell}\|^2 + \sum_{j=1}^M N_j(z_{\ell})^2\right) + \frac12 \left(\|z_{0}\|^2 + \sum_{j=1}^M N_j(z_0)^2\right) \\ & \qquad\qquad - \left\|\frac{z_0+z_{\ell}}{2}\right\|^2 - \sum_{j=1}^M \left(\frac{N_j(z_0)+N_j(z_{\ell})}{2}\right)^2. \end{align*} By convexity of the norm $N_j$, we have $\frac12 (N_j(z_0)+N_j(z_{\ell})) \geq N_j(\tfrac{z_0+z_{\ell}}{2})$, so the preceding identity gives \begin{align*} \left\|\frac{z_0-z_{\ell}}{2}\right\|^2 + \sum_{j=1}^M \left(\frac{N_j(z_0)-N_j(z_{\ell})}{2}\right)^2 &\leq \frac12 \vertiii{z_{\ell}}^2 + \frac12 \vertiii{z_0}^2 - \vertiii{\tfrac{z_0+z_{\ell}}{2}}^2 \\ &\leq \vertiii{z_{\ell}}^2 - \vertiii{\tfrac{z_0+z_{\ell}}{2}}^2 \\ &\leq \theta - \theta_0\,, \end{align*} where the inequality $\vertiii{\frac{z_0+z_{\ell}}{2}}^2 \geq \theta_0$ follows from $\frac{z_0+z_{\ell}}{2} \in \conv(\bigcup_{\ell \leq K} H_{\ell})$, since $z_0 \in \conv(\bigcup_{\ell \leq K} H_{\ell})$ and $z_{\ell} \in \conv(H_{\ell})$. \end{proof} Define $\rho \mathrel{\mathop:}= \theta - \theta_0$. One consequence of \pref{lem:uc2} is that \[ z_1,\ldots,z_K \in z_0 + \sqrt{2 \rho} B_2^n\,. \] We can cover $z_0 + \sqrt{2\rho} B_2^n$ by $N_h$ sets that have $\|\cdot\|_{\mathcal N}$-diameter bounded by $2 e_h(\sqrt{2\rho} B_2^n, \|\cdot\|_{\mathcal N})$. Since we have $K = N_{h+1} = N_h^2$ points $z_1,\ldots,z_K$, at least $N_{h}$ of them $z_{i_1},\ldots,z_{i_{N_h}}$ must lie in the same set of the cover. And by definition, these points cannot all have pairwise $\|\cdot\|_{\mathcal E(z_0)}$ distance greater than $e_h(\sqrt{2\rho} B_2^n, \|\cdot\|_{\mathcal E(z_0)})$. 
Therefore we must have at least two points $z_{\ell}$ and $z_{\ell'}$ with $\ell \neq \ell'$ and $\ell,\ell' \geq 1$, and such that \begin{align*} \|z_{\ell}-z_{\ell'}\|_{\mathcal N} &\leq 2 e_h(\sqrt{2\rho} B_2^n, \|\cdot\|_{\mathcal N})\lesssim 2^{-h/2} \kappa \sqrt{\rho} \,, \\ \|z_{\ell}-z_{\ell'}\|_{\mathcal E(z_0)} &\leq e_h(\sqrt{2\rho} B_2^n, \|\cdot\|_{\mathcal E(z_0)}) \lesssim 2^{-h/2} \lambda \sqrt{\rho}\,, \end{align*} where both estimates follow from \pref{lem:cov2}, together with the scaling $e_h(\sqrt{2\rho}\, B_2^n, \|\cdot\|_X) = \sqrt{2\rho}\, e_h(B_2^n, \|\cdot\|_X)$. Let us also note a second consequence of \pref{lem:uc2}, that \[ \sum_{j=1}^M \left(N_j(z_0)-N_j(z_{\ell})\right)^2 + \sum_{j=1}^M \left(N_j(z_0)-N_j(z_{\ell'})\right)^2 \leq 4 \rho\,. \] Using the three preceding inequalities in \pref{lem:dist-norms} yields \[ a^2 \stackrel{\eqref{eq:still-sep}}{\lesssim} d(z_{\ell},z_{\ell'})^2 \lesssim 2^{-h} \rho^2 \kappa^2 + 2^{-h} \rho \lambda^2 \lesssim \max\left(2^{-h} \kappa^2 \rho^2, 2^{-h} \lambda^2 \rho\right). \] This implies \[ \rho \gtrsim \min\left(\frac{2^{h/2} a}{\kappa},\frac{2^h a^2}{\lambda^2}\right). \] Since it holds that \[ \frac{2^h a^2}{\lambda^2} + \frac{1}{\log n} \geq \frac{2^{h/2} a}{\lambda \sqrt{\log n}}, \] we conclude that \[ \rho + \frac{1}{\log n} \gtrsim \min\left(\frac{2^{h/2} a}{\kappa},\frac{2^{h/2} a}{\lambda \sqrt{\log n}}\right) \gtrsim \frac{2^{h/2} a}{\lambda \sqrt{\log n} + \kappa}\,. \] Recalling that $\rho = \theta - \theta_0$, we have established \eqref{eq:goal-growth}, completing the proof. \end{proof} \begin{remark}[Discussion of the implicit partitioning] \label{rem:discussion} It is often more intuitive to think about bounding $\gamma_2(T,d)$ by explicitly constructing the sequence of partitions $\{\mathcal A_h\}$ (recall \eqref{eq:gamma2}). This is a technical process that is aided significantly by \pref{thm:f-chaining}, whose proof involves the construction of partitions from growth functionals.
Recall the norm $\vertiii{\cdot}$ from \eqref{eq:iii} and for a subset $S \subseteq B_2^n$, define the quantity \[ \f(S) \mathrel{\mathop:}= 2 - \min \left\{ \vertiii{x}^2 : x \in \conv(S) \right\}. \] Then $\f(S)$ can be considered as an approximate measure of the ``size'' of $S$, where sets of larger $\f(S)$ value tend to have a larger $\E \sup_{x \in S} \sum_{j=1}^M \epsilon_j N_j(x)^2$ value. \smallskip Recall that $r \mathrel{\mathop:}= 64$. Consider a ball $B_d(x_0,\eta)$, and let $z_0 \in B_d(x_0,4\eta)$ be such that $\f(B_d(x_0,\eta)) = 2-\vertiii{z_0}^2$. Let us think of $z_0$ as the ``analytic center'' of the ball $B_d(x_0,\eta)$. (We have to take $z_0 \in B_d(x_0,4\eta)$ because the ball $B_d(x_0,\eta)$ is only approximately convex.) Define the distance \[ \Delta(x,y) \mathrel{\mathop:}= \left(\|x-y\|^2 + \sum_{j=1}^M (N_j(x)-N_j(y))^2\right)^{1/2} \,,\qquad x,y \in \mathbb R^n\,. \] For $x \in B_d(x_0,\eta)$, let $\hat{x} \in B_d(x,4\eta/r^2)$ denote a point satisfying $\f(B_d(x,\eta/r^2)) = 2 - \vertiii{\hat{x}}^2$. Then \pref{lem:uc2} gives \begin{equation}\label{eq:size-fact} \Delta(z_0,\hat{x})^2 \lesssim \f\left(B_d(x_0, \eta)\right) - \f\left(B_d(x, \eta/r^2)\right)\,. \end{equation} In other words, either the $\f$-value of $B_d(x, \eta/r^2)$ is significantly smaller than that of $B_d(x_0,\eta)$, or $\hat{x}$ is close (in the distance $\Delta$) to the analytic center $z_0$. The second part of the argument involves bounding the number of centers that can be within a certain distance of $z_0$. Consider now any points $x_1,\ldots,x_M \in B_d(x_0,\eta)$ with $d(x_i,x_j) > \eta/r$ for $i \neq j$. 
\pref{lem:dist-norms} and the covering estimates on $e_h(B_2^n, \|\cdot\|_{\mathcal E(z_0)})$ and $e_h(B_2^n, \|\cdot\|_{\mathcal N})$ together give that for some constant $C > 0$, \begin{equation}\label{eq:card-fact} \# \left\{ i \geq 1 : \Delta(z_0, \hat{x}_i)^2 \leq \rho \right\} \leq \exp\left(\frac{C}{\eta^2} \left(\kappa^2 \rho^2 + \lambda^2 \rho\right)\right). \end{equation} Now \eqref{eq:size-fact} and \eqref{eq:card-fact} imply that for any $\delta > 0$, \begin{equation}\label{eq:key-tradeoff} \# \left\{ i \geq 1 : \f\left(B_d(x_i,\eta/r^2)\right) \geq \f\left(B_d(x_0,\eta)\right) - \delta \right\} \leq\exp\left(\frac{C}{\eta^2} \left(\kappa^2 \delta^2 + \lambda^2 \delta\right)\right). \end{equation} This is the key tradeoff occurring in the argument: A bound on the number of pairwise separated ``children'' $B_d(x_i,\eta/r^2)$ of $B_d(x_0,\eta)$ that do not experience a significant reduction in their $\f$-value. Employing this bound repeatedly, in a sufficiently careful manner, allows one to construct a sequence of partitions $\{\mathcal A_h\}$ that yields the desired upper bound on $\gamma_2(T,d)$. The role of \pref{thm:f-chaining} is to automate this process. \end{remark} \section{Hypergraph sparsification} \label{sec:h-sparse} Suppose $H=(V,E,w)$ is a weighted hypergraph and denote $n \mathrel{\mathop:}= |V|$. For a single hyperedge $e \in E$, let us recall the definitions \[ Q_e(x) \mathrel{\mathop:}= \max_{\{u,v\} \in {e \choose 2}} (x_u-x_v)^2\,, \] as well as the energy \[ Q_H(x) \mathrel{\mathop:}= \sum_{e \in E} w_e Q_e(x)\,. \] \subsection{Sampling} \label{sec:sampling} Suppose we have a probability distribution $\mu \in \mathbb R_+^E$ on hyperedges in $H$. Let us sample hyperedges $\tilde{E} = \{ e_1,e_2,\ldots,e_M \}$ independently according to $\mu$.
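This importance-sampling step can be checked numerically. The following is a minimal Python sketch on a toy two-hyperedge example (all data here is hypothetical): reweighting each sampled term by $w_e/\mu_e$ makes the sample average an unbiased estimator of $Q_H(x)$, for any strictly positive $\mu$.

```python
import random

# Toy weighted hypergraph on 4 vertices (hypothetical example data):
# each entry is (hyperedge, weight w_e).
edges = [((0, 1, 2), 1.0), ((1, 2, 3), 2.0)]

def Q_e(e, x):
    # Q_e(x) = max over pairs {u, v} in e of (x_u - x_v)^2.
    return max((x[u] - x[v]) ** 2 for i, u in enumerate(e) for v in e[i + 1:])

def Q_H(x):
    return sum(w * Q_e(e, x) for e, w in edges)

# An arbitrary strictly positive sampling distribution mu over hyperedges.
mu = [0.25, 0.75]

def Q_sampled(x, M, rng):
    # Draw M hyperedges i.i.d. from mu; reweighting each term by w_e / mu_e
    # makes the sample average an unbiased estimator of Q_H(x).
    total = 0.0
    for _ in range(M):
        k = rng.choices(range(len(edges)), weights=mu)[0]
        e, w = edges[k]
        total += (w / mu[k]) * Q_e(e, x)
    return total / M

rng = random.Random(0)
x = [0.0, 1.0, 3.0, 0.5]
est = Q_sampled(x, 200000, rng)
print(Q_H(x), est)  # the sample average concentrates around Q_H(x)
```

Any strictly positive $\mu$ gives unbiasedness; the specific choice of $\mu$ only affects the variance, and hence the number of samples $M$ needed for concentration.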
The weighted hypergraph $\tilde{H}=(V,\tilde{E},\tilde{w})$ is defined so that \[ Q_{\tilde{H}}(x) = \frac{1}{M} \sum_{k=1}^M \frac{w_{e_k}}{\mu_{e_k}} Q_{e_k}(x)\,. \] In particular, $\E[Q_{\tilde{H}}(x)] = Q_{H}(x)$ for all $x \in \mathbb R^V$. Recall that the hyperedge weights in $\tilde{H}$ are given by \eqref{eq:edge-weights}. To help us choose the distribution $\mu$, we now introduce a Laplacian on an auxiliary graph. \paragraph{An auxiliary Laplacian} Define the edge set $F \mathrel{\mathop:}= \bigcup_{e \in E} {e \choose 2}$, and let $G=(V,F,c)$ be a weighted graph, where we will choose the edge conductances $c \in \mathbb R_+^F$ later. Denote by $L_G : \mathbb R^V \to \mathbb R^V$ the weighted Laplacian \begin{equation}\label{eq:weighted-lap} L_G \mathrel{\mathop:}= \sum_{\{i,j\} \in F} c_{ij} (\chi_i-\chi_j) (\chi_i-\chi_j)^*, \end{equation} where $\chi_1,\ldots,\chi_n$ is the standard basis of $\mathbb R^n$. Let $L_G^{+}$ denote its Moore-Penrose pseudoinverse and define \begin{align} \mathsf R_{ij} &\mathrel{\mathop:}= \|L_G^{+/2} (\chi_i-\chi_j)\|^2, \qquad\quad\, \{i,j\} \in F\,, \nonumber \\ \mathsf R_{\max}(e) &\mathrel{\mathop:}= \max \left\{ \mathsf R_{ij} : \{i,j\} \in \textstyle{{e \choose 2}} \right\}, \,\qquad e \in E\,, \nonumber \\ Z &\mathrel{\mathop:}= \sum_{e \in E} w_e \mathsf R_{\max}(e)\,, \nonumber \\ \mu_e &\mathrel{\mathop:}= \frac{w_e \mathsf R_{\max}(e)}{Z}\,, \label{eq:p-def} \qquad\qquad\qquad\quad e \in E\,. \end{align} \begin{lemma}\label{lem:hsparse} Suppose it holds that \begin{equation}\label{eq:norm-compare-cond} \|x\|^2 \leq Q_H(L_G^{+/2} x)\,,\qquad \forall x \in \mathbb R^n\,. \end{equation} Then for any $\epsilon \in (0,1)$, there is a number \[ M_0 \lesssim \frac{\log D}{\epsilon^2} Z \log n \] such that for $M \geq M_0$, with probability at least $1/2$, the hypergraph $\tilde{H}$ is a spectral $\epsilon$-sparsifier for $H$.
\end{lemma} \begin{proof} By convexity, \begin{equation}\label{eq:start} \E_{\tilde{H}} \max_{v : Q_H(v) \leq 1} \left|Q_H(v)-Q_{\tilde{H}}(v)\right| \leq \E_{\tilde{H},\hat{H}} \max_{v : Q_H(v) \leq 1} |Q_{\tilde{H}}(v)-Q_{\hat{H}}(v)|\,, \end{equation} where $\hat{H}$ is an independent copy of $\tilde{H}$. The latter quantity can be written as \begin{align} \nonumber \E_{\tilde{e},\hat{e}} &\max_{v : Q_H(v) \leq 1} \left|\frac{1}{M} \sum_{i=1}^M \frac{w_{\tilde{e}_i}}{\mu_{\tilde{e}_i}} Q_{\tilde{e}_i}(v) - \frac{1}{M} \sum_{i=1}^M \frac{w_{\hat{e}_i}}{\mu_{\hat{e}_i}} Q_{\hat{e}_i}(v)\right| \\ &= \E_{\epsilon} \E_{\tilde{e},\hat{e}} \max_{v : Q_H(v) \leq 1} \left|\frac{1}{M} \sum_{i=1}^M \epsilon_i \left(\frac{w_{\tilde{e}_i}}{\mu_{\tilde{e}_i}}Q_{\tilde{e}_i}(v) - \frac{w_{\hat{e}_i}}{\mu_{\hat{e}_i}} Q_{\hat{e}_i}(v)\right)\right| \label{eq:sign-trick} \\ &\leq 2 \E_{\tilde{H}} \E_{\epsilon} \max_{v : Q_H(v) \leq 1} \left|\frac{1}{M} \sum_{i=1}^M \epsilon_i \frac{w_{e_i}}{\mu_{e_i}} Q_{e_i}(v)\right|,\label{eq:latterq} \end{align} where $\epsilon_1,\ldots,\epsilon_M$ are i.i.d. Bernoulli $\pm 1$ random variables. Note that we can introduce signs in \eqref{eq:sign-trick} because the distribution of $\frac{w_{\tilde{e}_i}}{\mu_{\tilde{e}_i}}Q_{\tilde{e}_i}(v) - \frac{w_{\hat{e}_i}}{\mu_{\hat{e}_i}} Q_{\hat{e}_i}(v)$ is symmetric. For $e \in E$ and $\{i,j\} \in {e \choose 2}$, define the vectors \begin{align*} y_{ij} &\mathrel{\mathop:}= L_G^{+/2} (\chi_i-\chi_j) \\ y^e_{ij} &\mathrel{\mathop:}= \sqrt{\frac{w_e}{\mu_e}}\ y_{ij} = \sqrt{\frac{Z}{\mathsf R_{\max}(e)}}\ y_{ij}\,. \end{align*} Then we have \begin{equation}\label{eq:edge-exp} \frac{w_e}{\mu_e} Q_e(L_G^{+/2} x) = \frac{w_e}{\mu_e} \max_{\{i,j\} \in {e \choose 2}}|\langle L_G^{+/2} x, \chi_i-\chi_j\rangle|^2 = \max_{\{i,j\} \in {e \choose 2}}\langle x,y_{ij}^e\rangle^2\,. 
\end{equation} Define the values \[ S_{ij} \mathrel{\mathop:}= \max_{e \in E : \{i,j\} \in {e \choose 2}} \|y_{ij}^e\|\,,\quad \{i,j\} \in F, \] and the linear map $A : \mathbb R^n \to \mathbb R^F$ by $(Ax)_{\{i,j\}} \mathrel{\mathop:}= S_{ij} \langle x,y_{ij}/\|y_{ij}\|\rangle$. \smallskip For $k=1,\ldots,M$, define the weighted $\ell_{\infty}$ norms \[ N_k(z) \mathrel{\mathop:}= \max \left\{ \left|(Az)_{\{i,j\}}\right| \frac{\|y_{ij}^{e_k}\|}{S_{ij}} : \{i,j\} \in {e_k \choose 2}, S_{ij} > 0 \right\}. \] It holds that \[ N_k(x) = \max_{\{i,j\} \in {e_k \choose 2}} |\langle x,y_{ij}^{e_k}\rangle|\,, \] so from \eqref{eq:edge-exp}, we have \begin{align} Q_{\tilde{H}}(L_G^{+/2} x) &= \frac{1}{M} \sum_{i=1}^M N_i(x)^2\,, \label{eq:rewrite1} \\ \frac{1}{M} \sum_{i=1}^M \epsilon_i \frac{w_{e_i}}{\mu_{e_i}} Q_{e_i}(L_G^{+/2} x) &= \frac{1}{M} \sum_{i=1}^M \epsilon_i N_i(x)^2\,.\label{eq:rewrite0} \end{align} Thus we can write the quantity \eqref{eq:latterq} as \[ 2\E_{\tilde{H}} \E_{\epsilon} \max_{x : Q_H(L_G^{+/2} x) \leq 1} \left|\frac{1}{M} \sum_{i=1}^M \epsilon_i N_i(x)^2\right| \leq 4 \E_{\tilde{H}} \E_{\epsilon} \max_{x : Q_H(L_G^{+/2} x) \leq 1} \frac{1}{M} \sum_{i=1}^M \epsilon_i N_i(x)^2\,. \] Define $T \mathrel{\mathop:}= \{ x \in \mathbb R^n : Q_H(L_G^{+/2} x) \leq 1 \}$ and note that from \eqref{eq:norm-compare-cond}, we have $T \subseteq B_2^n$. Now apply \pref{cor:lp-version} to bound \begin{equation}\label{eq:ucbnd} \E_{\epsilon} \max_{x \in T} \frac{1}{M} \sum_{i=1}^M \epsilon_i N_i(x)^2 \lesssim \frac{\|A\|_{2\to\infty} \sqrt{\log n \log D}}{M^{1/2}} \max_{x \in T} \left(\frac{1}{M} \sum_{i=1}^M N_i(x)^2\right)^{1/2}. \end{equation} Note also that \[ \max_{x \in T} \frac{1}{M} \sum_{i=1}^M N_i(x)^2 = \max_{v : Q_H(v) \leq 1} \frac{1}{M} \sum_{i=1}^M N_i\left(L_G^{1/2} v\right)^2= \max_{v : Q_H(v) \leq 1} Q_{\tilde{H}}(v)\,.
\] where the first equality follows from the fact that $Q_H(x)=Q_H(\hat{x})$ when $x-\hat{x} \in \ker(L_G)$, and the second equality uses this and an application of \eqref{eq:rewrite1} with $x = L_G^{1/2} v$. Recalling our starting point \eqref{eq:start}, it follows that for some universal constant $C > 0$, \begin{align*} \tau \mathrel{\mathop:}= \E_{\tilde{H}} \max_{v : Q_H(v) \leq 1} \left|Q_H(v)-Q_{\tilde{H}}(v)\right| &\leq C \frac{\|A\|_{2\to\infty} \sqrt{\log n \log D}}{M^{1/2}} \E_{\tilde{H}} \left(\max_{v : Q_H(v) \leq 1} Q_{\tilde{H}}(v)\right)^{1/2} \\ &\leq C \frac{\|A\|_{2\to\infty} \sqrt{\log n \log D}}{M^{1/2}} \left(\E_{\tilde{H}} \max_{v : Q_H(v) \leq 1} Q_{\tilde{H}}(v)\right)^{1/2}, \end{align*} where the last inequality is by concavity of the square root. Observe that \[ \max_{v : Q_H(v) \leq 1} Q_{\tilde{H}}(v) \leq \max_{v : Q_H(v) \leq 1} \left(\left|Q_H(v)-Q_{\tilde{H}}(v)\right|+Q_H(v)\right) \leq 1 + \max_{v : Q_H(v) \leq 1} |Q_H(v) - Q_{\tilde{H}}(v)|\,, \] and therefore we have \[ \tau \leq C \frac{\|A\|_{2\to\infty} \sqrt{\log n \log D}}{M^{1/2}} \left(1+\tau\right)^{1/2}\,. \] It follows that if $M \geq (2C\|A\|_{2\to\infty} \sqrt{\log n \log D})^2$, then $\tau \leq 4C \frac{\|A\|_{2\to\infty} \sqrt{\log n \log D}}{M^{1/2}}$. For $0 < \epsilon < 1$, choosing \[ M \mathrel{\mathop:}= \frac{4C^2 \log D}{\epsilon^2} \|A\|_{2\to\infty}^2 \log n \] gives \[ \E_{\tilde{H}} \max_{v : Q_H(v) \leq 1} \left|Q_H(v)-Q_{\tilde{H}}(v)\right| = \tau \leq \epsilon\,. \] The proof is complete once we observe that \[ \|A\|^2_{2\to\infty} = \max_{\{i,j\} \in F} S_{ij}^2 = \max_{e \in E, \{i,j\} \in {e \choose 2}} \|y_{ij}^e\|^2 = Z \max_{\{i,j\} \in {e \choose 2}} \frac{\mathsf R_{ij}}{\mathsf R_{\max}(e)} \leq Z\,. \qedhere \] \end{proof} \subsection{Choosing conductances} \label{sec:choose-con} We are therefore left to find edge conductances in the graph $G=(V,F,c)$ so that \eqref{eq:norm-compare-cond} holds and $Z$ is small.
To this end, let us choose nonnegative numbers \[ \left\{ c_{ij}^e \geq 0 : \{i,j\} \in {e \choose 2}, e \in E \right\} \] such that \begin{equation}\label{eq:cap1} \sum_{\{i,j\} \in {e \choose 2}} c_{ij}^e = w_e, \quad \forall e \in E\,. \end{equation} For $\{i,j\} \in F$, we then define our edge conductance \begin{equation}\label{eq:he-split} c_{ij} \mathrel{\mathop:}= \sum_{e \in E : \{i,j\} \in {e \choose 2}} c_{ij}^e\,. \end{equation} In this case, \begin{align*} \|L_G^{1/2} v\|^2 = \langle v, L_G v\rangle &= \sum_{\{i,j\} \in F} c_{ij} (v_i-v_j)^2 \\ &= \sum_{e \in E} \sum_{\{i,j\} \in {e \choose 2}} c_{ij}^e (v_i-v_j)^2 \\ &\leq \sum_{e \in E} \sum_{\{i,j\} \in {e \choose 2}} c_{ij}^e \max_{\{i,j\} \in {e \choose 2}} (v_i-v_j)^2 \\ &\stackrel{\mathclap{\eqref{eq:cap1}}}{\leq}\ \ \sum_{e \in E} w_e \max_{\{i,j\} \in {e \choose 2}} (v_i-v_j)^2 = Q_H(v)\,. \end{align*} Taking $v = L_G^{+/2} x$ gives \[ \|x\|^2 \leq Q_H(L_G^{+/2} x), \] verifying \eqref{eq:norm-compare-cond}. \begin{lemma}[Foster's Network Theorem] \label{lem:foster} It holds that $\sum_{\{i,j\} \in F} c_{ij} \mathsf R_{ij} \leq n-1$. \end{lemma} \begin{proof} Recall that $\mathsf R_{ij} = \langle \chi_i-\chi_j, L_G^+ (\chi_i-\chi_j)\rangle$ and $L_G = \sum_{\{i,j\} \in F} c_{ij} (\chi_i-\chi_j) (\chi_i-\chi_j)^*$. It follows that \[ \sum_{\{i,j\} \in F} c_{ij} \mathsf R_{ij} = \sum_{\{i,j\} \in F} \tr(c_{ij}(\chi_i-\chi_j)(\chi_i-\chi_j)^* L_G^+) = \tr(L_G L_G^+) \leq n-1\,, \] since $\rank(L_G) \leq n-1$. 
\end{proof} Define \begin{equation}\label{eq:hatk} K \mathrel{\mathop:}= \max_{e \in E} \max_{\{i,j\} \in {e \choose 2}} \frac{\mathsf R_{\max}(e)}{\mathsf R_{ij}} \vvmathbb{1}_{\{c_{ij}^e > 0\}} \end{equation} so that \[ Z = \sum_{e \in E} w_e \mathsf R_{\max}(e) = \sum_{e \in E} \sum_{\{i,j\} \in {e \choose 2}} c_{ij}^e \mathsf R_{\max}(e) \leq K\sum_{e \in E} \sum_{\{i,j\} \in {e \choose 2}} c_{ij}^e \mathsf R_{ij} \leq K (n-1)\,, \] where the last inequality uses \eqref{eq:he-split} and \pref{lem:foster}. In conjunction with \pref{lem:hsparse}, we have proved the following. \begin{lemma}\label{lem:hsparse2} Suppose there is a choice of conductances so that \eqref{eq:cap1} holds. Then for any $\epsilon > 0$, there is a spectral $\epsilon$-sparsifier for $H$ with at most $O(K \frac{\log D}{\epsilon^2} n \log n)$ hyperedges, where $K$ is defined in \eqref{eq:hatk}. \end{lemma} \subsection{Balanced effective resistances} \label{sec:balanced} We will exhibit conductances satisfying \eqref{eq:cap1} and \eqref{eq:hatk} with $K \leq 1$. To this end, we may assume that the weighted hypergraph $H=(V,E,w)$ has strictly positive edge weights and that the (unweighted) graph $G_0=(V,F)$ is connected. Define $\hat{F} \mathrel{\mathop:}= \{ (e,\{i,j\}) : e \in E, \{i,j\} \in {e \choose 2} \}$, and consider vectors $\left(c_{ij}^e : e \in E, \{i,j\} \in {e \choose 2}\right) \in \mathbb R_+^{\hat F}$. Define the convex set \[ \mathsf K \mathrel{\mathop:}= \mathbb R_+^{\hat{F}} \cap \left\{ \sum_{\{i,j\} \in {e \choose 2}} c_{ij}^e = w_e : e \in E \right\}. \] We use $\mathcal S_+^n$ and $\mathcal S_{++}^n$ for the cones of positive semidefinite (resp., positive definite) $n \times n$ matrices. Define $c_{ij} \mathrel{\mathop:}= \sum_{e : \{i,j\} \in {e \choose 2}} c_{ij}^e$ and denote the linear function $L_G : \mathbb R_+^{F} \to \mathcal S_+^n$ by \[ L_G\left((c_{ij})\right) \mathrel{\mathop:}= \sum_{\{i,j\} \in F} c_{ij} (\chi_i-\chi_j)(\chi_i-\chi_j)^*\,. 
\] Let $J$ be the all-ones matrix and consider the objective \[ \Phi\left((c_{ij})\right) \mathrel{\mathop:}= - \log \det\left(L_G\left((c_{ij})\right) + J\right)\,. \] Note that $X \mapsto - \log \det(X)$ is a convex function on the cone $\mathcal S_{+}^n$ of $n \times n$ positive semidefinite matrices (see, e.g., \cite[\S 3.1]{bv04}) and takes the value $+\infty$ on $\mathcal S_{+}^n \setminus \mathcal S_{++}^n$. Consider finally the convex optimization problem: \begin{equation}\label{eq:optimization} \min \left\{ \Phi\left((c_{ij})\right) : (c_{ij}^e) \in \mathsf{K} \right\}. \end{equation} Since $G_0$ is connected, it holds that if $\left(c_{ij}\right) \in \mathbb R_{++}^F$, then $\ker(L_G)$ is the span of $(1,1,\ldots,1)$, and therefore $L_G\left((c_{ij})\right) + J \in \mathcal S_{++}^n$. Therefore $\Phi$ is finite on the strictly positive orthant $\mathbb R_{++}^F$. \begin{lemma}\label{lem:ok-prog} The value of \eqref{eq:optimization} is finite and there is a feasible point in the relative interior of $\mathsf K$. \end{lemma} \begin{proof} It is straightforward to check that the maximum eigenvalue of $L_G$ is bounded by $2 \sum_{\{i,j\} \in F} c_{ij} = 2\sum_{e \in E} w_e$, hence the value of \eqref{eq:optimization} is finite. Moreover, the vector defined by $c_{ij}^e \mathrel{\mathop:}= \frac{1}{|{e \choose 2}|} w_e$ is feasible and lies in $\mathbb R_{++}^{\hat{F}}$ since the weights $w_e$ are strictly positive.
\end{proof} \smallskip We can write the corresponding Lagrangian as \begin{align*} g\left((c_{ij}^e); \alpha,\beta\right) = - \log &\det\left(L_G\left((c_{ij})\right) + J\right) + \sum_{e \in E} \alpha_e \left(\sum_{\{i,j\} \in {e \choose 2}} c_{ij}^e - w_e\right) - \sum_{e \in E} \sum_{\{i,j\} \in {e \choose 2}} \beta_{ij}^e c_{ij}^e\,. \end{align*} \pref{lem:ok-prog} allows one to conclude that there are vectors $(\hat{c}_{ij}^e), \hat{\alpha},\hat{\beta}$ with $\hat{\beta} \geq 0$ and such that the KKT conditions hold; see \cite[Thm 28.2]{Rockafellar70}. In particular, for all $e \in E$ and $\{i,j\} \in {e\choose 2}$, we have \begin{align} \partial_{c_{ij}^e}\, g\!\left((\hat{c}_{ij}^e); \hat{\alpha},\hat{\beta}\right) &= 0\,,\label{eq:kkt1} \\ \hat{\beta}_{ij}^e > 0 \implies \hat{c}_{ij}^e &= 0\,. \label{eq:kkt2} \end{align} By the rank-one update formula for the determinant, we have \[ \partial_{c_{ij}^e} \log \det(L_G+ J) = \langle \chi_i-\chi_j, (L_G + J)^{-1} (\chi_i-\chi_j)\rangle\,. \] Define $\hat{L}_G \mathrel{\mathop:}= L_G\left((\hat{c}_{ij})\right)$. Define $\hat{\mathsf R}_{ij} \mathrel{\mathop:}= \langle \chi_i-\chi_j, \hat{L}_G^{+} (\chi_i-\chi_j)\rangle$. Taking the derivative of $g$ with respect to each $c_{ij}^e$ and using \eqref{eq:kkt1} gives \[ \hat{\mathsf R}_{ij} = \langle \chi_i-\chi_j, (\hat{L}_G+ J)^{-1} (\chi_i-\chi_j)\rangle = \hat{\alpha}_e - \hat{\beta}_{ij}^e,\qquad \forall e \in E, \{i,j\} \in {e \choose 2}\,, \] where the first equality uses the fact that the eigenvectors of $\hat{L}_G$ and $J$ are orthogonal and $\chi_i - \chi_j \in \ker(J)$. Note that since $\hat{\beta} \geq 0$ coordinate-wise, this implies that \[ \hat{\mathsf R}_{\max}(e) \mathrel{\mathop:}= \max_{\{i,j\} \in {e \choose 2}} \hat{\mathsf R}_{ij} \leq \hat{\alpha}_e\,. \] Moreover, if $\hat{c}_{ij}^e > 0$, then $\hat{\beta}^e_{ij} = 0$ (cf. \eqref{eq:kkt2}), and in that case $\hat{\mathsf R}_{ij} = \hat{\alpha}_e = \hat{\mathsf R}_{\max}(e)$.
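The rank-one derivative formula, and its coincidence with the effective resistance for vectors orthogonal to the all-ones vector, can be checked numerically. The following is a minimal Python sketch on a toy connected 4-vertex graph (all data here is a hypothetical example):

```python
import numpy as np

# Finite-difference check of the rank-one derivative formula:
#   d/dc_ij log det(L_G + J) = <chi_i - chi_j, (L_G + J)^{-1} (chi_i - chi_j)>.
n = 4
F = [(0, 1), (1, 2), (2, 3), (0, 2)]
c = np.array([1.0, 2.0, 0.5, 1.5])

def laplacian(conduct):
    L = np.zeros((n, n))
    for (i, j), cij in zip(F, conduct):
        e = np.zeros(n)
        e[i], e[j] = 1.0, -1.0
        L += cij * np.outer(e, e)
    return L

J = np.ones((n, n))

def logdet(conduct):
    sign, ld = np.linalg.slogdet(laplacian(conduct) + J)
    return ld

# Analytic derivative with respect to the conductance of edge F[k] = (1, 2).
k = 1
i, j = F[k]
e = np.zeros(n)
e[i], e[j] = 1.0, -1.0
analytic = e @ np.linalg.inv(laplacian(c) + J) @ e

# Since chi_i - chi_j is orthogonal to the all-ones vector, the same quantity
# equals the effective resistance computed from the pseudoinverse L_G^+.
resistance = e @ np.linalg.pinv(laplacian(c)) @ e

# Central finite difference of log det(L_G + J).
h = 1e-6
cp, cm = c.copy(), c.copy()
cp[k] += h
cm[k] -= h
numeric = (logdet(cp) - logdet(cm)) / (2 * h)

print(analytic, resistance, numeric)  # all three values agree
```

The check only validates the determinant derivative and its identification with the effective resistance; at a KKT point this derivative additionally equals $\hat{\alpha}_e - \hat{\beta}_{ij}^e$, as in the display above.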
\smallskip We conclude that the edge conductances $\hat{c}_{ij}^e$ yield $K \leq 1$ in \eqref{eq:hatk}, and therefore \pref{lem:hsparse2} gives a sparsifier with $O(\frac{\log D}{\epsilon^2} n \log n)$ edges, completing the proof of \pref{thm:main}. \subsection*{Acknowledgements} I am grateful to Thomas Rothvoss for many suggestions and comments on preliminary drafts. \bibliographystyle{alpha}
\section{Introduction} Since the spin gap was discovered in the typical two-dimensional material CaV$_4$O$_9$,\cite{STaniguchi-JPSJ64-2758} the mechanism of the spin gap formation in this system has been intensively studied by many theoretical methods. \cite{NKatoh-JPSJ64-4105,KUeda-PRL76-1932,MPGelfand-PRL77-2794,KSano-JPSJ65-1514,KTakano-preprint,MAlbrecht-PRB53-2945,OAStarykh-preprint,MTroyer-PRL76-3822,SRWhite-PRL77-3633,TMiyazaki-JPSJ65-2370} This material consists of VO$_5$ pyramid layers. The crystal structure of the VO$_5$ pyramid layer is shown in Fig.\ref{fig-lattice-structure}(a). The oxygen atoms constitute a complete square lattice, while 1/5 of the sites of the vanadium square lattice are depleted. Each vanadium atom is located near the center of the pyramid constructed from four oxygens in the layer and one apical oxygen. The VO$_5$ pyramids are connected by edge sharing. The unit cell of the layer includes the two edge-shared plaquettes of V atoms shown in Fig.\ref{fig-lattice-structure}(a), because the apical oxygens of one edge-shared plaquette are located above the VO$_2$ plane while those of the other are below the plane. \begin{figure} \caption{ (a) Crystal structure of the VO$_5$ pyramid layer. Full circles represent vanadium atoms, while open circles are oxygen atoms. Apical oxygens are omitted in this figure. The shaded square area represents the unit cell of this system. Vectors $\mbox{\boldmath $a$}_1$ and $\mbox{\boldmath $b$}_1$ are the unit lattice vectors; $\mbox{\boldmath $a$}_1=3\mbox{\boldmath $x$}-\mbox{\boldmath $y$}$ and $\mbox{\boldmath $b$}_1=\mbox{\boldmath $x$}+3\mbox{\boldmath $y$}$, respectively. Here $\mbox{\boldmath $x$}$ and $\mbox{\boldmath $y$}$ represent the unit lattice vectors of the square lattice composed of oxygens, defined as $\mbox{\boldmath $x$}=a\mbox{\boldmath $e$}_x$ and $\mbox{\boldmath $y$}=a\mbox{\boldmath $e$}_y$.
The parameter $a$ is the lattice constant, while $\mbox{\boldmath $e$}_x$ and $\mbox{\boldmath $e$}_y$ are unit vectors. (b) Four kinds of superexchange couplings. Bold-gray, bold-black, thin-gray and thin-black lines represent the spin exchange couplings $J_{\mbox{\scriptsize ep}}$, $J_{\mbox{\scriptsize ed}}$, $J_{\mbox{\scriptsize cp}}$ and $J_{\mbox{\scriptsize cd}}$, respectively. } \label{fig-lattice-structure} \end{figure} Since the valence of the V atom is 4+, the $d$ electron on each V atom can be treated as a nearly localized spin with $S=1/2$. In the literature, it has been assumed that a non-degenerate orbital on each V atom is occupied. \cite{NKatoh-JPSJ64-4105,KUeda-PRL76-1932,MPGelfand-PRL77-2794,KSano-JPSJ65-1514,KTakano-preprint,MAlbrecht-PRB53-2945,OAStarykh-preprint,MTroyer-PRL76-3822,SRWhite-PRL77-3633,TMiyazaki-JPSJ65-2370} Based on this assumption, the $S=1/2$ antiferromagnetic Heisenberg (AFH) model with the nearest-neighbor and the next-nearest-neighbor exchange couplings has been introduced. When only the nearest-neighbor exchange couplings are taken into account, the system is described by the AFH model on the square lattice of plaquettes, where each plaquette consists of four V atoms nearly centered in the edge-shared pyramids. (We call this lattice the plaquette lattice.) On the other hand, when only the next-nearest-neighbor exchange couplings are considered, the system is described by two disconnected square lattices of plaquettes, where each plaquette is twice as large as, and tilted by 45$^{\circ}$ from, the plaquette element of the previous case. In this case each plaquette consists of four corner-shared pyramids. Here, four different kinds of spin exchange couplings are introduced, as shown in Fig.\ref{fig-lattice-structure}(b). The spin exchange couplings between V atoms in the edge-shared and the corner-shared plaquettes are represented by $J_{\mbox{\small ep}}$ and $J_{\mbox{\small cp}}$, respectively.
The spin exchange couplings connecting the edge-shared and the corner-shared plaquettes are $J_{\mbox{\small ed}}$ and $J_{\mbox{\small cd}}$, respectively. In the literature, mainly the case where $J_{\mbox{\small ed}} \simeq J_{\mbox{\small ep}}$ is taken as $J$ and $J_{\mbox{\small cd}} \simeq J_{\mbox{\small cp}}$ as $J'$ has been studied theoretically, since only two kinds of spin exchange couplings are derived from the single-orbital assumption mentioned above. The effects of the Jahn-Teller distortion, tilting of the pyramids and the spin-orbit couplings may cause differences between $J_{\mbox{\small ed}}$ and $J_{\mbox{\small ep}}$ or between $J_{\mbox{\small cd}}$ and $J_{\mbox{\small cp}}$. Theoretical methods such as perturbation expansions, \cite{NKatoh-JPSJ64-4105,KUeda-PRL76-1932,MPGelfand-PRL77-2794} exact diagonalization, \cite{KSano-JPSJ65-1514} mean field approximations, \cite{MAlbrecht-PRB53-2945,OAStarykh-preprint} the quantum Monte Carlo method, \cite{NKatoh-JPSJ64-4105,MTroyer-PRL76-3822} high-temperature expansions \cite{MPGelfand-PRL77-2794,KSano-JPSJ65-1514,KTakano-preprint} and the density matrix renormalization group method \cite{SRWhite-PRL77-3633} have led to the result that this system has a spin gap in the region around $J_{\mbox{\small ep}} > J_{\mbox{\small ed}}$ without frustration $(J'=0)$ or in the region $J'\simeq 0.5J$. Here the spin gap is defined as the energy difference between the singlet ground state and the lowest triplet states. In the above mentioned regions, the origin of the spin gap is ascribed to the edge-shared plaquette singlet. The energy dispersion of the triplet states has been calculated by the perturbation expansion from the edge-shared plaquette singlet. \cite{NKatoh-JPSJ64-4105,KUeda-PRL76-1932,MPGelfand-PRL77-2794} The wavenumber of the lowest energy excitation is ($\pi$,$\pi$) in the absence of the frustration, which is also obtained by the variational Monte Carlo method.
\cite{TMiyazaki-JPSJ65-2370} However, it shifts to incommensurate wavenumbers as the frustration due to $J'$ becomes relatively large. Recently, a neutron inelastic scattering study on single crystals of CaV$_4$O$_9$ has been performed. \cite{KKodama-JPSJ65-1941,KKodama-preprint} The experiments indicate that the predictions of the theoretical studies for the conventional model are inconsistent with the observations. First of all, the wavenumber of the lowest energy excitation given by the experiments is (0,0) in the magnetic first Brillouin zone, which is different from the theoretical results. Here, the magnetic first Brillouin zone in the wavenumber space is expanded by the vectors $\tilde{\mbox{\boldmath $a$}_2}=\tilde{\mbox{\boldmath $a$}_1} +\tilde{\mbox{\boldmath $b$}_1}$ and $\tilde{\mbox{\boldmath $b$}_2}=-\tilde{\mbox{\boldmath $a$}_1} +\tilde{\mbox{\boldmath $b$}_1}$, as shown in Fig.\ref{fig-zone}. It is $\sqrt{2}\times\sqrt{2}$ times larger than the first Brillouin zone of the unit cell expanded by the reciprocal lattice vectors $\tilde{\mbox{\boldmath $a$}_1}=\mbox{\boldmath $a$}_1/10a^2$ and $\tilde{\mbox{\boldmath $b$}_1}=\mbox{\boldmath $b$}_1/10a^2$, where $a$ is the distance between the nearest-neighbor V atoms. Secondly, if the spin exchange couplings between V atoms are determined so as to reproduce the dispersion of the lowest triplet excitations in the experimental results, a large difference results between the couplings, i.e. $J_{\mbox{\small cd}} =0.088 J_{\mbox{\small cp}}$ and $J_{\mbox{\small ep}} = J_{\mbox{\small ed}} =0.395 J_{\mbox{\small cp}}$, \cite{KKodama-preprint} which is hard to understand within the conventional framework. \begin{figure} \caption{ Brillouin zone of the two-dimensional VO$_2$ plane in the CaV$_4$O$_9$ lattice. The square surrounded by broken lines is the first Brillouin zone of the lattice, while the square with bold-solid lines is the magnetic first Brillouin zone obtained by the experiment.
\cite{KKodama-preprint} The wavenumber $\mbox{\boldmath{$q$}}$ is represented by the vectors $\tilde{\mbox{\boldmath $a$}}_1$ and $\tilde{\mbox{\boldmath $b$}}_1$, while $\mbox{\boldmath{$k$}}$ is described by $\tilde{\mbox{\boldmath $a$}_2}=\tilde{\mbox{\boldmath $a$}_1} +\tilde{\mbox{\boldmath $b$}_1}$ and $\tilde{\mbox{\boldmath $b$}_2}=-\tilde{\mbox{\boldmath $a$}_1} +\tilde{\mbox{\boldmath $b$}_1}$, respectively. Here, the lattice constant between the nearest-neighbor V atoms is $a$. } \label{fig-zone} \end{figure} In order to understand the contradiction between the experimental results and the theoretical ones, we should reconsider the mechanism of the spin gap formation in CaV$_4$O$_9$. In this paper, we study effects of orbital degeneracy and orbital order, which have been neglected in the literature. The crystal field from the oxygen ions on the corners of the pyramid lifts the degeneracy of the $t_{2g}$ orbitals in the atomic level. However, two orbitals whose wavefunctions are expanded by the $d_{xz}$ and $d_{yz}$ orbitals may still be degenerate even in this crystal field. Then the effective Hamiltonian sensitively depends on the configuration of the occupied $d$ orbitals through the mechanism of the superexchange interaction if the occupied orbitals have substantial contributions from the $d_{xz}$ and $d_{yz}$ orbitals. In Section 2, we introduce several possible effective spin Hamiltonians with orbital order. It is found that the strength of the spin exchange couplings strongly depends on the pattern of the orbital occupancy. This may explain the appearance of the large difference in the spin exchange couplings needed to reproduce the experimental results. In Section 3, we estimate the strength of the spin exchange couplings from analyses of the temperature dependence of the uniform magnetic susceptibility. We use the exact diagonalization (ED) method.
Although the system size tractable by the ED is limited, the cases we studied show rather small system size dependence due to the opening of the spin gap, which makes it possible to infer the thermodynamic limit. Then we calculate the energy dispersion of the triplet states and the wavenumber dependence of the equal-time spin-spin correlation, which is related to the integrated intensity of the neutron inelastic scattering, within the perturbation expansion (PE). The important elements to explain the experimental results are pointed out. The origin of the spin gap in each case of the orbital order is also discussed. In Section 4, we compare these theoretical results with the experimental ones and discuss the importance of the orbital order. Section 5 is devoted to a summary. \section{Effective Hamiltonians} For the purpose of understanding the problems mentioned in the previous section, we consider several models for the spin gap formation. The Hamiltonian is given by the AFH model written as \begin{equation} H = \sum_{<i,j>} J_{ij}\, \mbox{\boldmath $S$}_{i} \cdot \mbox{\boldmath $S$}_{j}, \label{Hami} \end{equation} where $\mbox{\boldmath $S$}_{i}$ represents the spin operator with $S=1/2$ at site $i$ and $J_{ij}$ is the spin exchange coupling between the spins at sites $i$ and $j$. In this study, we assume that the orbitals giving the lowest energy in the atomic level due to the crystal field are doubly degenerate. In other words, the ground state wavefunction has substantial weight of the $d_{xz}$ and $d_{yz}$ orbitals, and the $d$ electron occupies these orbitals at least partially. Below, we refer to the occupied orbitals simply as $d_{xz}$ or $d_{yz}$ orbitals. However, this does not necessarily mean that they consist of pure $d_{xz}$ and $d_{yz}$ orbitals. What we need to make the proposals in this paper relevant is the double-degeneracy of the occupied orbitals with $d_{xz}$ and $d_{yz}$ components contained.
Even a small contribution of the $d_{xz}$ and $d_{yz}$ orbitals may make the exchange couplings sensitively dependent on the occupied orbitals. Although static Jahn-Teller distortion or tilting of the pyramids could lift the degeneracy of the $d_{xz}$ and $d_{yz}$ orbitals, we do not consider this effect in this study, because such static distortions have not been observed so far. Therefore, we investigate effects of the orbital order for the $d_{xz}$ and $d_{yz}$ orbitals. As is mentioned later, the strength of the spin exchange couplings between V atoms depends on the configuration of the occupied $d$ orbitals on the V atoms. We consider the spin exchange couplings between the nearest as well as between the next-nearest neighbor V atoms through the superexchange mechanism, as shown in Fig.\ref{fig-orbital-exchange}. \begin{figure} \caption{ Lattice structure of V-O-V and configuration of the occupied orbitals. Figures (a) and (b) show the cases where the V-O-V bond makes a right angle. Figure (a) shows the case where the occupied $d$ orbitals on the V atoms are different, while (b) is the case with the same occupied orbital. Figures (c) and (d) show the cases where the V-O-V bond forms a straight line. The configurations of the occupied $d$ orbitals in Figs.(c) and (d) are the same as those in Figs.(a) and (b), respectively. } \label{fig-orbital-exchange} \end{figure} In the first case, the V-O-V bonds make approximately a right angle. Then the superexchange coupling works effectively through the $p_{z}$ orbital of the oxygen when the $d$ electron occupies the $d_{xz}$ orbital on one V atom and the other $d$ electron is localized in the $d_{yz}$ orbital on the other V atom, as shown in Fig.\ref{fig-orbital-exchange}(a). However, the superexchange coupling may be relatively small when both $d$ electrons occupy the same orbitals on each V atom, as shown in Fig.\ref{fig-orbital-exchange}(b).
The reason is that one of the transfer integrals between the $p_{z}$ orbital on the oxygen and one of the $d$ orbitals is relatively small due to the symmetry of the wavefunctions of these orbitals. In the second case, the V-O-V bonds make approximately a straight line. Then the superexchange coupling works effectively when the $d$ electrons are localized in the same orbitals, which extend toward the oxygen, as shown in Fig.\ref{fig-orbital-exchange}(d). In contrast, the superexchange coupling should be relatively small in the case shown in Fig.\ref{fig-orbital-exchange}(c). When the $d$ electrons occupy orbitals neither of which extends toward the oxygen located between the V atoms, the strength of the superexchange couplings should also be relatively small. Consequently, an anisotropy of the spin exchange couplings arises through this mechanism. Here, we introduce five models with the possible patterns of the orbital order shown in Fig.\ref{fig-pattern}. The orbital patterns in Figs.\ref{fig-pattern}(a) and (b) are the cases where the electrons occupy the $d$ orbitals alternatingly within the edge-shared plaquettes but in a uniform way from plaquette to plaquette. The relevant spin exchange couplings in Fig.\ref{fig-pattern}(a) are the edge-shared plaquette bonds $J_{\mbox{\small ep}}$, while the ones in Fig.\ref{fig-pattern}(b) are $J_{\mbox{\small ep}}$ and $J_{\mbox{\small cd}}$. Figure \ref{fig-pattern}(c) shows the case where the plaquette units in Figs.\ref{fig-pattern}(a) and (b) are placed alternatingly. Figure \ref{fig-pattern}(d) shows the case where all the occupied orbitals have the same symmetry. Figure \ref{fig-pattern}(e) shows the case where the occupied $d$ orbitals alternate in the $x$-direction while being uniform in the $y$-direction. From here on, we take the strengths of the relatively small spin exchange couplings to be equal.
In reality, these values are not necessarily the same, because these small couplings appear due to small distortions of the lattice away from the perfect square lattice. To study the differences between them, these small effects would have to be considered; this point is discussed in detail in Section 4. In other possible patterns of the orbital order, the magnetic unit cell becomes more complicated. In this paper, we do not consider these more complicated possibilities. \begin{figure} \caption{ Five possible patterns of orbital order viewed in projection onto the $xy$ plane. Bold lines connecting V atoms represent the dominant superexchange couplings. The symbols on the V atoms schematically represent the $d_{xz}$ and $d_{yz}$ orbitals. (a) A case with the alternating pattern of the orbitals in the edge-shared plaquette; the filled orbitals are ``tiled'' in a uniform way among different plaquettes. (b) The same as (a), but with the $d_{xz}$ and $d_{yz}$ orbitals interchanged. (c) The case where the orbitals on all nearest-neighbor V atoms are always occupied in the alternating configuration. (d) The case where all the orbitals are occupied uniformly. (e) The case where the occupied orbitals alternate in the $x$-direction while they are uniform in the $y$-direction. } \label{fig-pattern} \end{figure} \section{Results} In this section, we study the magnetic properties of the models in Figs.\ref{fig-pattern}(a), (b) and (c) in more detail than those of (d) and (e), since the configuration of the occupied orbitals in the edge-shared plaquette for these models has the four-fold rotational symmetry which appears to be consistent with experimental observations.
\cite{KKodama-preprint} \subsection{Uniform Magnetic Susceptibility} In this subsection, we determine the amplitudes of the relevant spin exchange couplings and of the other, relatively small ones so as to reproduce the temperature dependence of the uniform magnetic susceptibility $\chi$ given by the experiments. Here, we assume that eq.(\ref{Hami}) is an effective spin Hamiltonian for the magnetic properties below 700K. We calculate $\chi$ for the 16-site system by the ED with periodic boundary conditions. The uniform magnetic susceptibility is defined as \begin{equation} \chi=\frac{\beta}{N}\frac{\sum_n \langle n|\sum_{i,j} S^{z}_iS^{z}_j|n\rangle \exp({-\beta \epsilon_n})}{\sum_n \exp({-\beta \epsilon_n})}, \label{eq-chi} \end{equation} where $\beta$ represents the inverse temperature, $\epsilon_n$ and $|n\rangle$ are the eigenenergies and eigenstates of the Hamiltonian, and $N$ is the number of sites. To compare with the experimental data in units of emu/g, we multiply our data, obtained with the energy scale $J$, by the factor $4 N_{\mbox{\scriptsize A}} g^2 \mu_{\mbox{\scriptsize B}}^2 /J k_{\mbox{\scriptsize B}} M$, with $N_{\mbox{\scriptsize A}}$ the Avogadro number, $\mu_{\mbox{\scriptsize B}}$ the Bohr magneton, $k_{\mbox{\scriptsize B}}$ the Boltzmann constant and $M$ the molar mass in grams per mole. The factor 4 comes from the number of V atoms in a unit cell. The parameter $J$ depends on each model. Here, the $g$-factor is taken as a fitting parameter. The experimental data show a peak at a temperature $T_{\mbox{\small p}} \simeq 110$K. The temperature $T^*$ where the amplitude of $\chi$ becomes half of that at $T_{\mbox{\small p}}$ is about 595K. We choose the amplitudes of the spin exchange couplings so as to give the best fit between the ED and the experimental results in the region above $T_{\mbox{\small p}}$. Figure~\ref{fig-chi}(a) shows the temperature dependence of $\chi$ for the model shown in Fig.\ref{fig-pattern}(a).
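The thermal average in eq.(\ref{eq-chi}) can be sketched numerically. The following minimal example is illustrative only (it is not the code used in this work): it diagonalizes a single 4-site Heisenberg plaquette with $J=1$, an assumption chosen for brevity, and evaluates the dimensionless $\chi$ per site in units of $J$.

```python
import numpy as np

# Minimal ED sketch (illustrative, not the authors' code): a 4-site
# Heisenberg plaquette H = J * sum_<ij> S_i.S_j with J = 1, and the
# dimensionless susceptibility of eq. (eq-chi), chi per site in units of J.
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # S^+
sm = sp.T                                # S^-
I2 = np.eye(2)

def site_op(op, site, n):
    """Embed a single-site operator at position `site` in an n-site system."""
    mats = [I2] * n
    mats[site] = op
    full = mats[0]
    for m in mats[1:]:
        full = np.kron(full, m)
    return full

n = 4
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the four plaquette edges
H = np.zeros((2 ** n, 2 ** n))
for i, j in bonds:
    H += (site_op(sz, i, n) @ site_op(sz, j, n)
          + 0.5 * (site_op(sp, i, n) @ site_op(sm, j, n)
                   + site_op(sm, i, n) @ site_op(sp, j, n)))

eps, U = np.linalg.eigh(H)               # all eigenpairs of the plaquette
Sz_tot = sum(site_op(sz, i, n) for i in range(n))
# Diagonal matrix elements <n| (S^z_tot)^2 |n> for every eigenstate
m2 = np.einsum('an,ab,bn->n', U, Sz_tot @ Sz_tot, U)

def chi(beta):
    """chi = (beta/N) * sum_n <n|(S^z_tot)^2|n> e^{-beta eps_n} / Z."""
    w = np.exp(-beta * (eps - eps.min()))  # shifted for numerical stability
    return beta / n * np.dot(m2, w) / w.sum()
```

At high temperature this reproduces the Curie law $\chi \to \beta S(S+1)/3 = \beta/4$ per site, while at low temperature $\chi$ vanishes because the plaquette ground state is a singlet, the same qualitative behavior as the spin-gapped models in the text.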
The fit gives the spin exchange coupling $J_{\mbox{\small ep}} = 183$K, while the relatively small ones are $J_{\mbox{\small ed}} = J_{\mbox{\small cp}} = J_{\mbox{\small cd}} = 97$K. The ratio $J_{\mbox{\small ed}} / J_{\mbox{\small ep}}$ is 0.53. The $g$-factor is 1.71, which is rather small compared to $g\simeq 2.0$. In principle, a mechanism which reduces the value of the $g$-factor estimated from $\chi$ in experiments could exist; one possibility is an effect of small spin-orbit couplings. However, we do not discuss this point in this paper. As a reference, the data not only for the 16-site system but also for the 8-site system are shown in Fig.\ref{fig-chi}(a). A size dependence appears below $T_{\mbox{\small p}}$. However, the finite-size effect is small above $T_{\mbox{\small p}}$, which justifies taking the present estimates of the spin exchange couplings as the values in the thermodynamic limit. The spin gap was calculated for the 8-site, 16-site and 24-site systems. The size dependence of the spin gap is shown in the inset of Fig.\ref{fig-chi}(a). If we assume that the fitting function for a two-dimensional spin gap system has the form $\Delta_{\mbox{\small s}} (N)= \Delta_{\mbox{\small s}} (\infty)+\frac{A}{N}$, the spin gap extrapolated to the thermodynamic limit is estimated to be about 101K, namely $\Delta_{\mbox{\small s}} / J_{\mbox{\small ep}} =0.550$. The reason for taking this fitting function is as follows. The energy dispersion of the triplet excitation near the wavenumber of the lowest-energy excitation is approximated as $\epsilon_{\mbox{\scriptsize T}} (\mbox{\boldmath{$k$}})= \Delta_{\mbox{\small s}} +c_x k_x^2 + c_y k_y^2$ for sufficiently small $k_x$ and $k_y$, where $\Delta_{\mbox{\small s}}$ represents the bulk spin gap. With periodic boundary conditions, the wavenumber $k_x(k_y)$ has an $L_x^{-1}(L_y^{-1})$ correction, where $L_x(L_y)$ is the number of sites in the $x(y)$-direction.
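The extrapolation just described amounts to a linear least-squares fit in the variable $N^{-1}$ (or $N^{-1/2}$ for the alternative form used later). A small sketch; the gap values below are synthetic placeholders, not the ED data:

```python
import numpy as np

# Sketch of the finite-size extrapolation Delta_s(N) = Delta_s(inf) + A/N**p.
# p = 1 corresponds to the quadratic triplet dispersion discussed in the text;
# p = 0.5 is the 1/sqrt(N) form used for systems near the critical point.
def extrapolate_gap(sizes, gaps, p=1.0):
    """Linear least-squares fit in x = N**(-p); returns (Delta_inf, A)."""
    x = np.asarray(sizes, dtype=float) ** (-p)
    A, delta_inf = np.polyfit(x, np.asarray(gaps, dtype=float), 1)
    return delta_inf, A

# Synthetic example data lying exactly on the fit form (NOT the paper's values)
sizes = [8, 16, 24]
gaps = [0.55 + 1.2 / N for N in sizes]
delta_inf, A = extrapolate_gap(sizes, gaps)   # recovers 0.55 and 1.2
```

With only three system sizes the fit is of course strongly constrained by the assumed exponent, which is why the text is careful to justify the choice $p=1$ versus $p=1/2$ from the form of the triplet dispersion.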
Because $L_x$ and $L_y$ scale as $N^{1/2}$ in two dimensions, the above fitting function is obtained. If the energy dispersion can be approximated by the quadratic form in a wide range of the wavenumber space, the fitting function derived from the quadratic dispersion may be applicable even for small system sizes. This model may belong to this case. \begin{figure} \caption{ Temperature dependence of the uniform magnetic susceptibility for the five models. Figures (a), (b), (c), (d) and (e) correspond to $\chi$ for the models in Figs.\ref{fig-pattern} (a), (b), (c), (d) and (e), respectively. Bold gray lines represent the experimental result. Insets show the extrapolation of the spin gap as a function of $N$ in the ED results. } \label{fig-chi} \end{figure} For the case shown in Fig.\ref{fig-chi}(b), we take the relevant spin exchange couplings such that $J_{\mbox{\small cd}} >J_{\mbox{\small ep}}$, for example, $J_{\mbox{\small ep}} =0.9 J_{\mbox{\small cd}}$; the reason is discussed in the next subsection. Figure~\ref{fig-chi}(b) shows the temperature dependence of $\chi$. In a similar way, the relevant spin exchange couplings are estimated to be $J_{\mbox{\small cd}} \sim 185$K and $J_{\mbox{\small ep}} \sim 166$K, while the relatively small ones are $J_{\mbox{\small cp}} = J_{\mbox{\small ed}} =85$K. The ratio $J_{\mbox{\small cp}} / J_{\mbox{\small cd}}$ is 0.46. The $g$-factor is 1.72. In this case, one should be careful in estimating the spin gap from data for small systems only, since the system seems to be near the critical point where the spin gap vanishes. Beyond the critical point, where the spin gap disappears, the energy dispersion of the triplet states near the lowest excitation is expected to have the form $\epsilon_{\mbox{\scriptsize T}}(\mbox{\boldmath{$k$}})= \sqrt{c_x k_x^2 + c_y k_y^2}$, namely, a linear dispersion.
As a gapped system approaches the critical point, the region of wavenumber space where the energy dispersion is approximately quadratic becomes narrower and narrower, while the linearly dispersive region becomes wider. The leading term of the finite-size correction may then cross over from $N^{-1/2}$ for small system sizes to $N^{-1}$ for large system sizes. We have tried extrapolations using both types of fitting function and found that, due to the limitation of the system size, the leading term of the finite-size correction is $N^{-1/2}$ rather than $N^{-1}$ in this case. We therefore take the form $\Delta_{\mbox{\small s}} (N)= \Delta_{\mbox{\small s}} (\infty)+\frac{A}{\sqrt{N}}$ as the fitting function. Using this fitting function, the spin gap extrapolated to the thermodynamic limit is estimated to be about 34K, or $\Delta_{\mbox{\small s}} / J_{\mbox{\small cd}} =0.186$. For the above reason, this estimate of the spin gap is expected to be a slight underestimate. In this paper, these two fitting functions are employed to estimate the bulk spin gap according to the situation. In Fig.\ref{fig-pattern} (c), the relevant spin exchange couplings are divided into two groups: the nearest-neighbor spin exchange coupling $J$ and the next-nearest-neighbor one $J'$. We consider the case where $J'$ is larger than $J$, and the other relatively small exchange couplings are neglected; the reason is discussed in the next subsection. The parameters $J$ and $J'$ are then estimated to be 90K and 237K, respectively. The ratio $J/J'$ is 0.38. The $g$-factor is $1.68$. The relatively large spin gap survives in the thermodynamic limit and is estimated to be $\Delta_{\mbox{\small s}} /J'\simeq 0.564$ or 133K when we use the fitting function $\Delta_{\mbox{\small s}} (N)= \Delta_{\mbox{\small s}} (\infty)+\frac{A}{N}$. The values of the spin gap for the first and the third model are close to the experimental one.
However, in the second case, the spin gap is much smaller than the experimental one. \subsection{Origin of Spin Gap and Triplet Energy Dispersion} We now consider the mechanism of the spin gap formation. To investigate the dispersion of the triplet excitations, the perturbation method is useful, since the analysis in the previous subsection shows that the relevant spin exchange couplings are relatively large compared to the others. In the case shown in Fig.\ref{fig-pattern} (a), the term in eq.(\ref{Hami}) containing the relevant spin exchange coupling $J_{\mbox{\small ep}}$ is treated as the unperturbed Hamiltonian, while the other terms are the perturbation. The strength of the other spin exchange couplings is taken as $J$. The singlet ground state of an isolated edge-shared plaquette is the resonating valence bond (RVB) singlet. The ground state of the unperturbed Hamiltonian is then the product state of the RVB singlets on the edge-shared plaquettes, so the origin of the spin gap is understood in terms of the edge-shared plaquette singlet. The lowest excited state of a plaquette is the extended state of the triplet pair in the RVB state. The first excited states of the unperturbed Hamiltonian are therefore constructed from the triplet state on one of the plaquettes and the singlets on the others. The degeneracy is lifted in first-order perturbation theory through the transfer of the triplet, owing to the translational invariance, which yields the dispersion of the triplet states. We calculate the energy difference between the ground state and the triplet states within the second-order PE. The detailed procedure is explained in Appendix A. Figure~\ref{fig-dispersion} (a) shows the energy dispersion of the triplet excitation $\Delta_{\mbox{\small s}} (\mbox{\boldmath{$k$}})$.
The wavenumber $(k_x,k_y)$ represents the vector $k_x\tilde{\mbox{\boldmath $a$}}_2 + k_y\tilde{\mbox{\boldmath $b$}}_2$ with $\tilde{\mbox{\boldmath $a$}}_2 = \tilde{\mbox{\boldmath $a$}_1}+\tilde{\mbox{\boldmath $b$}_1}$ and $\tilde{\mbox{\boldmath $b$}}_2= -\tilde{\mbox{\boldmath $a$}_1}+\tilde{\mbox{\boldmath $b$}_1}$. The first magnetic Brillouin zone is spanned by $\tilde{\mbox{\boldmath $a$}}_2$ and $\tilde{\mbox{\boldmath $b$}}_2$, as shown in Fig.\ref{fig-zone}. In this case, the wavenumber of the lowest excitation is (0,0). Since the antiferromagnetic correlation due to $J_{\mbox{\small cp}}$ is stronger than that due to $J_{\mbox{\small ed}}$ in this model, the triplet states which have a strong in-phase correlation between nearest-neighbor edge-shared plaquettes become the lowest excited states. The amplitude of the spin gap at $(0,0)$ calculated by the second-order PE is $0.575 J_{\mbox{\small ep}}$, which is close to the one estimated by the ED and is also consistent with the experimental result. These behaviors qualitatively reproduce the experimental results. \begin{figure} \caption{ Wavenumber dependence of the triplet and singlet excited-state energies for three models. Figures (a), (b) and (c) correspond to the models shown in Figs.\ref{fig-pattern} (a), (b) and (c), respectively. Filled circles with error bars represent the experimental results. \cite{KKodama-preprint} } \label{fig-dispersion} \end{figure} In the case shown in Fig.\ref{fig-pattern} (b), the terms with the relevant spin exchange couplings $J_{\mbox{\small ep}}$ and $J_{\mbox{\small cd}}$ are treated as the unperturbed Hamiltonian, while the other terms are the perturbation. In the $J_{\mbox{\small cd}} > J_{\mbox{\small ep}}$ case, the product state of the dimer pairs on the $J_{\mbox{\small cd}}$ bonds is the unperturbed ground state. The origin of the spin gap is then ascribed to the corner-shared dimer singlets.
The lowest excited states of the unperturbed Hamiltonian are the product states of the triplet pair on one $J_{\mbox{\small cd}}$ bond and the singlet pairs on the other $J_{\mbox{\small cd}}$ bonds. Because two kinds of triplet states exist, the lowest excited states are six-fold degenerate, which also differs from the previous case. We calculate the energy gap between the singlet state and these triplet states in a similar way. Figure~\ref{fig-dispersion} (b) shows the energy dispersion of the triplet states. The degeneracy is lifted in first-order perturbation theory, and bonding and anti-bonding triplet states appear. The wavenumbers of the lowest excitations are $(0,\pi)$ and $(\pi,0)$. The spin gap is $0.197 J_{\mbox{\small cd}}$, which is consistent with the value obtained by the ED. In this case, the square area surrounded by the lines connecting the nearest-neighbor points of the lowest excitations becomes half of the original first magnetic Brillouin zone in the wavenumber space, which contradicts the experimental results. In the model shown in Fig.\ref{fig-pattern} (c), the ground state of the Hamiltonian given by the relevant spin exchange couplings is not straightforwardly obtained, since the system contains no isolated units. To understand the mechanism of the spin gap formation, we neglect the small spin exchange couplings other than $J$ and $J'$ in this model. We consider three possible ground states and origins of the spin gap for the Hamiltonian with only the relevant spin exchange couplings. To clarify the three possibilities with the help of the perturbation method, the relevant terms are divided into two parts, an unperturbed and a perturbed term. In the first case, the unperturbed Hamiltonian is taken as the term with $J_{\mbox{\small ep}}$, while the others are the perturbation. The ground state of the unperturbed Hamiltonian is then represented by the edge-shared plaquette singlet.
In the second case, we take the unperturbed Hamiltonian as the $J_{\mbox{\small ed}}$ term. The ground state is then described by the edge-shared dimer singlet. In the third case, the unperturbed Hamiltonian is the term with the next-nearest-neighbor spin exchange couplings. The ground state is then represented by the four-site stripe singlet. Since the first and the second cases have already been investigated, \cite{NKatoh-JPSJ64-4105,KUeda-PRL76-1932} we concentrate on the third case. The amplitudes of the nearest and the next-nearest neighbor couplings are chosen as $J$ and $J'$, respectively. Figure~\ref{fig-gse} shows the ground-state energy per site as a function of the strength of the spin exchange coupling $J/(J+J')$ calculated by the PE. The detailed calculation by the PE is given in Appendix B. \begin{figure} \caption{ Ground state energy per site $\epsilon_{\mbox{\small g}}/(J+J')$ as a function of $J/(J+J')$. Bold, broken and dash-dotted lines represent the ground state energy obtained by the second-order perturbation from the plaquette singlet, the dimer singlet and the stripe singlet, respectively. } \label{fig-gse} \end{figure} The ground state described by the stripe singlet is favored when $J/(J+J')$ is smaller than 0.483, i.e., when the ratio $J/J'$ is smaller than 0.932. The system with $J=90$K and $J'=237$K estimated by the ED belongs to the stripe singlet phase, as shown in Fig.\ref{fig-gse}. Therefore, we may conclude that the stripe singlet is a possible mechanism of the spin gap formation in this model. We calculate the triplet dispersion as shown in Fig.\ref{fig-dispersion}(c). The ground state of the unperturbed Hamiltonian is the product state of the stripe singlets. The magnetic unit cell in real space contains two stripes crossing each other, so the lowest triplet states in the unit cell are six-fold degenerate. Similarly to the analysis for the model in Fig.\ref{fig-pattern}(b), bonding and anti-bonding triplet states appear.
The spin gap is $0.539J'$, which is close to the value obtained by the ED and is also close to the experimental result. However, the wavenumbers of the lowest excitations are $\pi\tilde{\mbox{\boldmath $a$}_1}$ and $\pi\tilde{\mbox{\boldmath $b$}_1}$, which is inconsistent with the experimental results. The magnetic periodicity is also different from the experimental one, since the first magnetic Brillouin zone is the same as that of the unit lattice. In this model, the small corner-sharing exchange couplings have been neglected. They may change quantitative features such as the bandwidth of the triplet dispersion. However, their effect is very small, since neither of the occupied $d$ orbitals on the V atoms extends toward the oxygen. Qualitative aspects such as the origin of the spin gap and the magnetic periodicity are not affected by these small exchange couplings. \subsection{Scattering Intensity} We investigate the scattering intensity of the neutron inelastic scattering measured in the experiments. The scattering intensity is proportional to the Fourier component of the spin-spin correlation, written as \begin{equation} I(\mbox{\boldmath{$q$}},\omega) \propto \int dt \mbox{e}^{\mbox{i}\omega t} \langle \mbox{\boldmath $S$}( \mbox{\boldmath{$q$}},t) \mbox{\boldmath $\cdot S$} (-\mbox{\boldmath{$q$}},0) \rangle, \label{intensity} \end{equation} where \begin{equation} \mbox{\boldmath{$S$}}( \mbox{\boldmath{$q$}},t)=N^{-\frac{1}{2}}\sum_i \mbox{\boldmath{$S$}}_i(t)\mbox{e}^{-\mbox{i} \mbox{\boldmath{$q\cdot r$}}_i}, \end{equation} and $\langle \cdots \rangle$ represents the thermal average. The wavenumber $\mbox{\boldmath{$q$}}$ is expanded in the vectors $\tilde{\mbox{\boldmath $a$}_1}$ and $\tilde{\mbox{\boldmath $b$}_1}$; namely, we take the $q_x$ axis in the $\tilde{\mbox{\boldmath $a$}_1}$ direction and $q_y$ in the $\tilde{\mbox{\boldmath $b$}_1}$ direction.
When the temperature is sufficiently lower than the spin gap, eq.(\ref{intensity}) is approximated by the transition probability from the ground state $|\mbox{g.s.}\rangle$ to the triplet states $|n\rangle$, described as \begin{equation} I(\mbox{\boldmath{$q$}},\omega) \propto \sum_{n} |\langle n |\mbox{\boldmath $S$}(\mbox{\boldmath{$q$}}) |\mbox{g.s.}\rangle|^2 \delta (\omega-E_n+E_g), \label{intensity-2} \end{equation} where the summation is restricted to the triplet states and $E_n$ and $E_g$ are the energies of the triplet states and the ground state, respectively. In evaluating eq.(\ref{intensity-2}), we use the wavefunctions of $|n\rangle$ and $|\mbox{g.s.}\rangle$ obtained by the PE, as described in detail in Appendix A. The integrated intensity of the neutron inelastic scattering is obtained theoretically by integrating eq.(\ref{intensity-2}) over $\omega$: \begin{equation} I(\mbox{\boldmath{$q$}}) \equiv \int d\omega I(\mbox{\boldmath{$q$}},\omega) \propto \sum_{n} |\langle n |\mbox{\boldmath $S$}(\mbox{\boldmath{$q$}}) |\mbox{g.s.}\rangle|^2. \label{intensity-3} \end{equation} Figures~\ref{fig-intensity}(a), (b) and (c) show the wavenumber dependence of the integrated intensity calculated by the PE, together with the experimental results. In particular, Figs.\ref{fig-intensity}(a) and (c) indicate important points. In the case shown in Fig.\ref{fig-pattern}(a), since the nearest-neighbor antiferromagnetic correlation due to $J_{\mbox{\small ep}}$ is dominant, the scattering intensity has a minimum at $(\pi/a) \mbox{\boldmath{$e$}}_x$ and becomes larger as the wavenumber approaches $(\pi/a)\mbox{\boldmath{$e$}}_x+(\pi/a)\mbox{\boldmath{$e$}}_y$.
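As a consistency check on eq.(\ref{intensity-3}), the smallest gapped unit, an isolated Heisenberg dimer, can be worked out directly: summing the matrix elements to the three triplet states gives the textbook modulation $I(q)\propto \frac{3}{4}(1-\cos qd)$, minimal where the two spins of a singlet bond add in phase. The following numerical sketch is illustrative only (the sites at $0$ and $d$ with $J=1$ are assumptions for the example, not the perturbative calculation of Appendix A):

```python
import numpy as np

# Two-site Heisenberg dimer (sites at r = 0 and r = d, J = 1): evaluate the
# integrated intensity of eq. (intensity-3), I(q) = sum_n |<n|S(q)|g.s.>|^2.
# The closed-form answer for the dimer singlet is (3/4) * (1 - cos(q*d)).
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # S^+
sm = sp.T                                # S^-
sx = 0.5 * (sp + sm)
sy = -0.5j * (sp - sm)
I2 = np.eye(2)

# Heisenberg dimer H = S_1 . S_2
H = np.kron(sz, sz) + 0.5 * (np.kron(sp, sm) + np.kron(sm, sp))
eps, U = np.linalg.eigh(H)
gs = U[:, 0]            # unique singlet ground state, E = -3/4

def intensity(q, d=1.0):
    """I(q) summed over the triplet excited states and spin components."""
    total = 0.0
    for s in (sx, sy, sz):
        # S^a(q) = (S^a_1 + e^{-i q d} S^a_2) / sqrt(N),  N = 2 sites
        Sq = (np.kron(s, I2)
              + np.exp(-1j * q * d) * np.kron(I2, s)) / np.sqrt(2)
        amps = U[:, 1:].conj().T @ (Sq @ gs)  # exclude the ground state
        total += float(np.sum(np.abs(amps) ** 2))
    return total
```

At $qd=\pi$ all the spectral weight, $I=3/2$, sits in the triplet, while $I(0)=0$ because the total spin operator annihilates the singlet; this is the same mechanism by which the dominant bonds dictate where the integrated intensity is suppressed in the models of Fig.\ref{fig-pattern}.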
In the cases shown in Figs.\ref{fig-pattern}(b) and (c), by contrast, since the next-nearest-neighbor antiferromagnetic correlation is dominant, the scattering intensity has a maximum at $(\pi/a)\mbox{\boldmath{$e$}}_x$ and becomes smaller as the wavenumber approaches $(\pi/a)\mbox{\boldmath{$e$}}_x+(\pi/a)\mbox{\boldmath{$e$}}_y$. In this light, the experimental results indicate that the next-nearest-neighbor antiferromagnetic correlation is strong, \cite{KKodama-preprint} which supports the models in Figs.\ref{fig-pattern}(b) and (c). However, the wavenumber dependence for the model in Fig.\ref{fig-pattern}(c) is qualitatively different from the experimental results, as shown in Figs.\ref{fig-intensity}(a) and (b). Therefore, the experimental results are more consistent with the integrated intensity for the model shown in Fig.\ref{fig-pattern}(b) than with those for the models shown in Figs.\ref{fig-pattern}(a) or (c). \begin{figure} \caption{ Wavenumber dependence of the integrated intensity for the three models shown in Figs.\ref{fig-pattern}(a), (b) and (c). The wavenumber $\mbox{\boldmath{$q$}}$ in Figures (a), (b) and (c) is given by $\mbox{\boldmath{$q$}} =2\pi(\tilde{\mbox{\boldmath $a$}_1}+k \tilde{\mbox{\boldmath $b$}_1})$ with $0.5 < k < 1.5$, $\mbox{\boldmath{$q$}} =2\pi(h\tilde{\mbox{\boldmath $a$}_1}+ \tilde{\mbox{\boldmath $b$}_1})$ with $1 < h < 2$, and $\mbox{\boldmath{$q$}} =2\pi[(1+\delta)\tilde{\mbox{\boldmath $a$}_1}+ (1-\delta)\tilde{\mbox{\boldmath $b$}_1}]$ with $0 < \delta < 1$, respectively. Filled circles represent the experimental results given by Kodama {\it et al.} \cite{KKodama-preprint} The right ordinates represent the scale of the integrated intensity of the neutron inelastic scattering experiments. Here, the value 200 counts/5500kmon in the experiments corresponds to 0.04 obtained by the PE; this gives the best fit of the data for the model shown in Fig.\ref{fig-pattern}(b) to the experimental results.
} \label{fig-intensity} \end{figure} \section{Discussion} To understand the mechanism of the spin gap in CaV$_4$O$_9$, we have studied the orbital-order dependence of physical quantities such as the uniform magnetic susceptibility, the energy dispersion of the triplet states and the integrated scattering intensity. Before discussing these results, we show the temperature dependence of $\chi$ for the system with $J_{\mbox{\small cp}} =171$K, $J_{\mbox{\small cd}} =15$K and $J_{\mbox{\small ep}} =J_{\mbox{\small ed}} =68$K. These parameter values are the ones suggested by an analysis of the neutron inelastic scattering experiments. \cite{KKodama-preprint} \begin{figure} \caption{ Temperature dependence of the uniform magnetic susceptibility for the model with $J_{\mbox{\small cp}} =171$K, $J_{\mbox{\small cd}} =15$K and $J_{\mbox{\small ep}} = J_{\mbox{\small ed}} =68$K. Broken and solid curves represent $\chi$ for the 8-site and 16-site systems calculated by the ED, respectively, while the bold gray line is the experimental result. } \label{fig-chi-Kodama} \end{figure} The uniform magnetic susceptibilities for the 8-site and 16-site systems are calculated by the ED. The theoretical and experimental results are shown in Fig.\ref{fig-chi-Kodama}. The size dependence is almost negligible, and the numerical result for $\chi$ of the 16-site system may be regarded as the one in the thermodynamic limit. Since it is difficult to estimate the amplitudes of the spin exchange couplings and the $g$-factor from the high-temperature expansion, the $g$-factor is taken as 1.7 here, which is nearly the same as the values obtained by the analysis in the previous section. Figure~\ref{fig-chi-Kodama} indicates that a qualitative difference between the experimental and theoretical results appears around the peak temperature.
We may conclude that the parameters suggested by the analysis of the neutron inelastic scattering \cite{KKodama-preprint} are not consistent with the experimental result for the uniform magnetic susceptibility. In Section 3, we have mainly studied the cases in Figs.\ref{fig-pattern}(a)-(c). Here, we briefly discuss the cases in Figs.\ref{fig-pattern}(d) and (e) for completeness. Figures~\ref{fig-chi}(d) and (e) show the temperature dependences of $\chi$. For the model in Fig.\ref{fig-pattern}(d), the strengths of the relevant spin exchange coupling $J_{\mbox{\scriptsize R}}$ and of the other small ones $J$ are estimated to be 159K and 111K, respectively. The ratio $J/J_{\mbox{\scriptsize R}}$ is 0.70. The $g$-factor is 1.71. The amplitude of the spin gap is extrapolated to be about 29K, or $\Delta_{\mbox{\small s}}/J_{\mbox{\scriptsize R}} \simeq 0.179$. For the model in Fig.\ref{fig-pattern}(e), $J_{\mbox{\scriptsize R}}$ and $J$ are estimated to be 207K and 89K, respectively. The ratio $J/J_{\mbox{\scriptsize R}}$ is 0.43. The $g$-factor is 1.70. The amplitude of the spin gap is about 3.6K, or $\Delta_{\mbox{\small s}}/J_{\mbox{\scriptsize R}} \simeq 0.017$. In both cases, the model with the given parameters $J_{\mbox{\scriptsize R}}$ and $J$ seems to be near the critical point, because the leading term of the finite-size correction in the fitting function is $N^{-1/2}$ rather than $N^{-1}$ and the extrapolated values of the spin gap are very small. From the perturbational results, the origin of the spin gap for these models may also be described by the stripe singlet, as in the case of Fig.\ref{fig-pattern}(c). Here, we discuss more quantitative aspects of the magnetic properties. We first discuss the temperature dependence of $\chi$. The experimental results for $\chi$ have not been well reproduced by the model with only the nearest-neighbor interactions, i.e. $J_{\mbox{\small cp}} = J_{\mbox{\small cd}} =0$.
In the experiments, the spin gap $\Delta_{\mbox{\small s}}$, the peak temperature $T_{\mbox{\small p}}$ and the temperature $T^*$ which gives half the amplitude of $\chi$ at $T_{\mbox{\small p}}$ are 107K, 110K and 595K, respectively. The ratios $\Delta_{\mbox{\small s}} / T_{\mbox{\small p}}$ and $T^*/ T_{\mbox{\small p}}$ are 0.97 and 5.41, respectively. The peak temperature is relatively small compared to the spin gap, and $\chi$ decays rapidly below $T_{\mbox{\small p}}$. The numerical results for the model with only the nearest-neighbor interactions differ in several respects from the experimental ones. \cite{NKatoh-JPSJ64-4105,MTroyer-PRL76-3822} For example, the ratios $\Delta_{\mbox{\small s}} / T_{\mbox{\small p}}$ and $T^*/ T_{\mbox{\small p}}$ for the isolated plaquette model $(J_{\mbox{\small ed}} =0)$ are 1.28 and 3.83, respectively. In the plaquette singlet region, these ratios become smaller as $J_{\mbox{\small ed}}$ becomes larger. The ratios $\Delta_{\mbox{\small s}} / T_{\mbox{\small p}}$ and $T^*/ T_{\mbox{\small p}}$ for the isolated dimer model $(J_{\mbox{\small ep}}=0)$ are 1.60 and 3.48, respectively, and in the dimer singlet region these ratios also become smaller as $J_{\mbox{\small ep}}$ becomes larger. The ratios estimated from the experiments are therefore not reproduced by this model. We consider several possible reasons for this discrepancy with the experimental results below $T_{\mbox{\small p}}$. One is that the frustration due to the next-nearest-neighbor couplings reduces the bandwidth of the triplet dispersion while keeping the strength of the spin gap, as has already been pointed out. \cite{KUeda-PRL76-1932} The other is that the low-lying excitations from the singlet ground state are not only triplet states but also singlet states. For example, in the case shown in Fig.\ref{fig-pattern}(b), a singlet dispersion exists in the low-energy region, as shown in Fig.\ref{fig-dispersion}(b).
These singlet excited states are constructed, for the unperturbed Hamiltonian, from the product states of the lowest excited singlet state on one of the edge-shared plaquettes and the corner-shared dimer singlets on the others. Within the PE, the energy gap between the lowest singlet excitation and the ground state is comparable to the spin gap. In addition, since the singlet dispersion is nearly flat, the energies of the singlet excitations are lower than those of the triplet excitations in a wide region of the Brillouin zone. Then, at temperatures around the singlet-singlet gap, $\chi$ decays rapidly, since the denominator in eq.(\ref{eq-chi}) grows relative to the numerator as the weight of the excited singlet states increases. In order to discuss the possibility of the existence of low-lying singlet excitations, a detailed analysis of the experimental results for the specific heat is needed, because neither the temperature dependence of $\chi$ below the spin gap temperature nor the neutron inelastic scattering contains direct information about the singlet excitations. Next, we discuss the treatment of the spin exchange couplings. The spin exchange couplings have been divided into two groups: one strong, the others relatively small. The same value has been assigned to the relatively small couplings. These weaker bonds make minor contributions to the physical properties and do not change the essential features obtained in Sec.3. However, within these weaker bonds, it is possible that the amplitude of the nearest-neighbor couplings becomes twice as large as that of the next-nearest-neighbor couplings, because the number of paths for the superexchange mechanism is two in the nearest-neighbor case while it is one in the next-nearest-neighbor case.
If this estimate is adopted, for example in the case shown in Fig.\ref{fig-pattern}(a), the bandwidth of the triplet dispersion becomes narrow and the wavenumber giving the lowest excitation shifts from (0,0) to $(\pi,\pi)$ within the second-order PE. Since, in the terminology of the perturbation method, the transfer energy of the triplet between nearest-neighbor edge-shared plaquettes is determined by $ J_{\mbox{\small ed}} -2 J_{\mbox{\small cp}} $ within the first-order PE, it vanishes at $ J_{\mbox{\small ed}} =2 J_{\mbox{\small cp}} $. Higher-order perturbations may change the wavenumber of the lowest excitation depending on the sign of the transfer energy of the triplet or on the amplitudes of the nearest and next-nearest neighbor couplings. Throughout this study, we have concentrated on the superexchange mechanism for the $d_{xz}$ and $d_{yz}$ orbitals. In the real material, however, there are several possible subtleties which further invalidate the assumption that the amplitude of the relatively small nearest-neighbor couplings is twice as large as that of the relatively small next-nearest-neighbor ones. One is the contribution from the superexchange mechanism in the $d_{xy}$ orbital, since the wavefunction of the occupied orbital may also have a $d_{xy}$ component in the real material. Due to this contribution, the next-nearest-neighbor couplings become antiferromagnetic and relevant. Another is the contribution from the direct exchange mechanism for the $d_{xy}$ orbital as well as for the $d_{xz}$ and $d_{yz}$ orbitals. Since the bonding of the $t_{2g}$ orbitals with the $p_{z}$ orbital on the oxygen is not a $\sigma$ bond but a $\pi$ bond, the effect of the direct exchange mechanism may be relatively strong compared to that of the superexchange mechanism. The detailed character of the direct exchange couplings may depend on the symmetry of the wavefunctions and the distance between the V atoms.
Contributions from the tilting of the pyramids or from a Jahn-Teller distortion may also influence the exchange couplings. A detailed quantitative analysis of the spin exchange couplings remains for further studies. Finally, we discuss the importance of the $d_{xz}$ and $d_{yz}$ orbitals in the magnetic properties of CaV$_4$O$_9$. In the real material, the wavefunction lying near the Fermi level may be a linear combination of the $d$ orbitals. In this context, Marini and Khomskii \cite{SMarini-pre} also pointed out the importance of the orbital order. They discussed the effect of the crystal field and estimated the coefficients of the linear combination of the $d_{xz}$ and $d_{yz}$ orbitals in the wavefunction. The resulting stable orbital is tilted and has a chirality. They concluded that the relevant spin exchange coupling is $J_{\mbox{\small ed}}$ and that the origin of the spin gap is an edge-shared dimer singlet. They proposed that the next strongest spin exchange couplings are the next-nearest-neighbor ones and that $J_{\mbox{\small ep}}$ is weakly ferromagnetic. In this case, however, the nearest-neighbor spin correlation is enhanced, and the out-of-phase correlation between the nearest-neighbor edge-shared dimer singlets should be strong due to the next-nearest-neighbor antiferromagnetic spin exchange couplings and the weakly ferromagnetic $J_{\mbox{\small ep}}$. These features contradict the reported experimental results. \cite{KKodama-preprint} Neutron measurements suggest substantial antiferromagnetic correlations for the next-nearest-neighbor pairs of V, which is naturally explained by the occupation of the $d_{xy}$ orbital. However, if only $d_{xy}$ orbitals are occupied, it is hard to justify the model by Kodama {\it et al.} \cite{KKodama-preprint}, in which a much larger $J_{\mbox{\small cp}}$ than $J_{\mbox{\small cd}}$ is assumed. Details aside, it appears indeed necessary to take different $J_{\mbox{\small cp}}$ and $J_{\mbox{\small cd}}$ to reproduce the neutron data. 
Our model of orbital order can explain why the two next-nearest-neighbor spin exchange couplings differ. However, a simple model with only $d_{xz}$ and $d_{yz}$ occupations appears to be insufficient to explain both the susceptibility and the neutron results simultaneously, as we discussed in this paper. All the above results imply, within this framework, that the wavefunction is in fact represented by a linear combination of the $d_{xy}$ orbital and the other $t_{2g}$ orbitals with the orbital order. We propose that the ground state is described by a wavefunction with a large weight of the $d_{xy}$ orbital superposed with the ordered $d_{xz}$ and $d_{yz}$ orbitals given in Fig.\ref{fig-pattern}(a). In this case, the basic origin of the spin gap may be a corner-shared plaquette singlet. The next-nearest-neighbor antiferromagnetic spin correlation then becomes relatively large, since the large weight of the $d_{xy}$ orbital enhances $J_{\mbox{\small cp}}$, which may explain the experimentally observed wavenumber dependence of the integrated intensity of the neutron scattering. In addition, a strong in-phase correlation between the nearest-neighbor corner-shared plaquette singlets develops because of $J_{\mbox{\small ed}}$, which exists due to the presence of the orbital order. The wavenumber of the lowest triplets may then become $(0,0)$, which is consistent with the recent experimental results. \cite{KKodama-preprint} If this situation is realized, the small plaquette constructed from four oxygens without a V atom at its center may rotate, or some other specific type of lattice distortion may occur, in order to favor the given orbital order. To obtain information on the weights of these orbitals, a detailed analysis of the lattice structure is needed, especially at very low temperatures. Quantitative and detailed theoretical analyses of this proposal remain for further studies. 
The orbital order effect may also be observed in the temperature dependence of $\chi$ for CaV$_2$O$_5$. \cite{HIwase-JPSJ65-2397} This material also shows spin gap behavior; the spin gap and the peak temperature are about 464K and 400K, respectively. One possible mechanism of the spin gap formation is the ladder structure. If it is assumed that the $d$ electron is localized in the $d_{xy}$ orbital, the spin exchange couplings $J$ in the leg and $J_{\perp}$ in the rung have almost the same values. Numerical analysis of $\chi$ for the AFH model on the ladder lattice has shown that the ratio $\Delta_{\mbox{\small s}} / T_{\mbox{\small p}}$ is almost 0.5 at $J=J_{\perp}$. \cite{MTroyer-PRB50-13515} However, the ratio obtained in the experiments is 1.16, which differs from this theoretical prediction. If the localized orbital on the V atom is represented by a linear combination of the $d_{xy}$ orbital and one of the other $t_{2g}$ orbitals which extends toward the oxygen on the rung due to the orbital order, $J_{\perp}$ becomes larger than $J$. Since the ratio $\Delta_{\mbox{\small s}} / T_{\mbox{\small p}}$ is 1.60 in the dimer limit $(J=0)$, it may decrease to 1.16 as $J$ increases from zero. Quantum or thermal fluctuations of the orbital degree of freedom also change the quantitative behavior of $\chi$ at finite temperature. We propose that the order of the $d_{xz}$ and $d_{yz}$ orbitals with partial filling of the $t_{2g}$ orbitals is also a promising explanation for the rather puzzling experimental results in CaV$_2$O$_5$. A quantitative study of this problem remains for the future. \section{Summary} We have investigated orbital order effects in the magnetic properties of CaV$_4$O$_9$. Several possible models with orbital order have been considered. The amplitudes of the spin exchange couplings for each model are determined so as to reproduce the temperature dependence of $\chi$ in experiments. 
The origin of the spin gap is not necessarily the originally proposed plaquette singlet but is ascribed to a generalized four-site singlet. Using the estimated values of the spin exchange couplings, the dispersion of the triplet excitations and the spin-spin correlation corresponding to the integrated scattering intensity have been calculated within the PE. A strong in-phase correlation between nearest-neighbor pairs of four-site singlets and a large antiferromagnetic spin correlation between next-nearest-neighbor spins are the minimal requirements for explaining the experimental neutron scattering results. Although the wavenumber dependences of the triplet dispersion and the spin-spin correlation for our models do not show completely satisfactory agreement with the presently available experimental data, they are much improved over the single-orbital cases. We further propose the mechanism required to explain each feature of the experimental results: the order of the $d_{xz}$ and $d_{yz}$ orbitals hybridized with a uniform and partial occupation of the $d_{xy}$ orbital may explain the experimental results. We have also discussed that effects of the orbital order may be observed in other vanadium oxide compounds such as CaV$_2$O$_5$. These results strongly suggest that the orbital order plays an important role in understanding the magnetic properties of the vanadium oxide compounds with a spin gap. \section*{Acknowledgements} We would like to thank M. Sato, S. Taniguchi and K. Kodama for useful discussions and comments. We have used a part of the codes provided by H. Nishimori in TITPACK Ver.2. A part of the computation has been performed on VPP500 at the Supercomputer Center of the Institute for Solid State Physics, Univ. of Tokyo. This work is financially supported by a Grant-in-Aid for Scientific Research in Priority Areas ``Anomalous Metallic States near the Mott Transition''.
\section{Introduction} \label{sec:introduction} \IEEEPARstart{C}{onvolutional} neural networks (CNNs) are widely used in visual tasks due to their superior performance in extracting features \cite{Krizhevsky2012,Zeiler2014,Sermanet2014,Ren2017,Shelhamer2017}. AlexNet \cite{Krizhevsky2012} triggered the vigorous development of CNNs, and since then many different CNN architectures such as GoogLeNet \cite{Szegedy}, VGG \cite{Simonyan2014} and ResNet \cite{He2016} have been proposed. GoogLeNet introduces the Inception module to increase the width of networks and to integrate information from multiple receptive fields. VGG uses more parameters in a simple network architecture. ResNet introduces skip connections to increase the depth of CNNs, and DenseNet uses dense connections to concatenate feature maps between different layers. In addition, neural architecture search (NAS) \cite{Zoph2017,Baker2017} has been proposed to find network architectures with better performance. In general, for the input feature maps of each convolutional layer, a CNN learns the optimal kernel parameters to obtain higher-level hierarchical feature maps. Essentially, the convolution operation fuses the spatial and channel-wise information of the input feature maps. By stacking a series of convolutional layers, activation layers, batch normalization layers \cite{Ioffe2015} and pooling layers, networks hierarchically capture semantic features from previous feature maps, which represent the original input image. However, not all feature maps contribute equally as they propagate through the network. Recent works have exploited relationships between channels through attention mechanisms to increase the accuracy of CNNs. STN \cite{Jaderberg2015} uses spatial information of feature maps to make networks invariant to translation, rotation and other spatial transformations. SENet \cite{Hu2018} shows that emphasizing informative features and suppressing less useful ones can improve the representational power of a network. 
SCA-CNN \cite{Chen2017}, BAM \cite{Park2019} and CBAM \cite{Woo2018} combine spatial and channel attention to refine convolutional features. The non-local \cite{Wang2018} block uses a global self-attention mechanism to capture long-range interactions, which improves the performance of CNNs in video classification and object detection tasks. DANet \cite{Fu2019} and CCNet \cite{Huang2019} capture global feature dependencies in the spatial and channel dimensions to achieve better performance in image segmentation. AutoPruner \cite{Luo2018} shows that even directly removing unimportant channels does not have much impact on the performance of CNNs. When dealing with the global information of feature maps, the methods mentioned above either use a global pooling operation, which directly compresses a 2D feature map into a scalar without considering the spatial information, or deploy operations with high computational cost to process the spatial information of feature maps. However, the spatial information of different feature maps carries visual and semantic differences, which is crucial to the effectiveness of the extracted channel attentions. Moreover, a high computational cost makes networks difficult to train and limits their practical value. Therefore, how to make full use of spatial information to optimize networks while keeping the computational cost low is an important issue. In this paper, we propose a novel network optimization module called \textit{Channel Reassessment Attention} (CRA) module, which utilizes a pooling operation to compress feature maps and then uses global depthwise convolution (GDConv) to assess the channel attention of each compressed feature map. The proposed CRA module has two advantages: (1) it captures global-range dependencies between spatial positions; (2) it is computationally lightweight and can be easily embedded into different CNN architectures. 
In the following section, we evaluate CRA module with various CNN architectures on several common datasets; the experimental results validate that our method is effective. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{1} \caption{The architecture of the proposed CRA module.} \label{fig:arth} \end{figure} \section{Method} \label{sec:method} In this section, we formulate the proposed CRA module, which uses channel attentions with spatial information of feature maps to optimize CNNs. In a CNN, the operation of the $i$-th convolution layer can be defined as the following function: \begin{equation} \begin{aligned} Y_i=W_i \odot X_i \label{eq:conv1} \end{aligned} \end{equation} where $X_i=[ x_i^{1},x_i^{2},\dots,x_i^{C_{i}^{'}} ]$ are input feature maps and $Y_i=[ y_i^{1},y_i^{2},\dots,y_i^{C_{i}} ]$ are output feature maps; $x_i^{j}$ represents the $j$-th 2D input feature map with the size $\langle H_i^{'},W_i^{'} \rangle$ and $y_i^{j}$ represents the $j$-th 2D output feature map with the size $\langle H_{i},W_{i}\rangle$; $C_{i}^{'}$ and $C_{i}$ denote the numbers of channels of $X_i$ and $Y_i$ respectively; $\odot$ refers to the convolution symbol; and $W_i=[W_i^{1}, W_i^{2},\dots,W_i^{C_{i}} ]$ denotes the convolution kernels, where $W_i^{j}=[w_{i}^{j,1}, w_{i}^{j,2},\dots,w_{i}^{j,C_{i}^{'}}]$ represents the $j$-th spatial filter kernel. For simplicity, we omit the bias term. The equivalent form of Eq.\ref{eq:conv1} can be written as follows: \begin{equation} \begin{aligned} y_i^{j}=W_i^{j} \odot X_i = \sum_{s=1}^{C_i^{'}} w_i^{j,s} \odot x_i^{s}. \label{eq:conv2} \end{aligned} \end{equation} Eq.\ref{eq:conv2} shows that feature maps of different channels are generated by different filter kernels. Since different kernels capture different types of features, feature maps of different channels contribute differently to the performance of networks. 
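As a concrete illustration of Eq.\ref{eq:conv2}, the following numpy sketch (with hypothetical shapes and function names of our own choosing; deep learning frameworks implement cross-correlation, which is what is sketched here) computes each output channel as the sum over input channels of 2D convolutions:

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2D cross-correlation of one feature map x with one kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k)
    return out

def conv_layer(X, W):
    """Eq. (2): output channel j sums the 2D convolutions of every
    input channel s with the corresponding kernel slice w^{j,s}."""
    C_out = W.shape[0]
    return np.stack([
        sum(conv2d_valid(X[s], W[j, s]) for s in range(X.shape[0]))
        for j in range(C_out)
    ])

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 8, 8))      # C'_i = 3 input feature maps
Wk = rng.standard_normal((4, 3, 3, 3))  # C_i = 4 kernels, each 3 x 3 x 3
Y = conv_layer(X, Wk)
print(Y.shape)  # (4, 6, 6)
```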
Thus, in this paper, we aim to use channel attentions to strengthen the responses of channels with rich feature information and weaken useless ones, so that the networks concentrate more on important features. A channel attention process can be written as: \begin{equation} \begin{aligned} \widetilde{y_i^{j}}&=y_i^{j} \otimes M(y_i^{j}) \label{eq:att} \end{aligned} \end{equation} where $M$ denotes an operation that extracts channel attentions and $\widetilde{y_i^{j}}$ refers to the final refined output. The richer the information a feature map carries, the greater its extracted channel attention should be. To make full use of informative features to improve the performance of networks, we propose a CRA module that consists of two parts: compressing spatial information and extracting channel attentions. Fig.\ref{fig:arth} illustrates the architecture of CRA module, which is described in detail in the following subsections. \subsection{Compression of spatial information} \label{subsec:compression} In order to obtain channel attentions, we need to evaluate every feature map $y_i^{j}$. However, it is difficult to perform this operation directly due to the oversized spatial dimension and complex feature information. This issue is particularly severe in the earlier layers, because the spatial size of their feature maps is generally larger than that in the later layers, which directly leads to excessive computational cost. To solve this problem, we use average pooling to compress the spatial size of $y_i^{j}$ into a smaller size while keeping sufficient spatial information. As shown in Fig.\ref{fig:arth}, let $F_{pool}$ represent the average pooling and $U_i$ denote the outputs of $Y_i$ through the function as: \begin{equation} \begin{aligned} U_i=[u_i^{1},&u_i^{2},\dots,u_i^{C_{i}}]=F_{pool}(Y_i) \\&u_i^{j}=F_{pool}(y_i^j) \end{aligned} \end{equation} where $u_i^{j}$ has the size of $\langle h_{i},w_{i}\rangle$ with $h_i \le H_i , w_i \le W_i$. 
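A minimal numpy sketch of $F_{pool}$ (the naming is ours; it assumes for simplicity that $H_i$ and $W_i$ are divisible by $h_i$ and $w_i$, as in the $56\times56 \to 7\times7$ case of ResNet-50):

```python
import numpy as np

def f_pool(Y, h, w):
    """Average-pool each of the C feature maps (C x H x W) down to C x h x w.
    Assumes H % h == 0 and W % w == 0 so the maps split into equal blocks."""
    C, H, W = Y.shape
    return Y.reshape(C, h, H // h, w, W // w).mean(axis=(2, 4))

Y = np.arange(2 * 56 * 56, dtype=float).reshape(2, 56, 56)
U = f_pool(Y, 7, 7)
print(U.shape)  # (2, 7, 7)
```

With equal-sized blocks, each pooled value is the mean of one $8\times8$ block, so the global mean of every map is preserved while the spatial grid shrinks.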
By employing a reasonable value of $\langle h_{i},w_{i}\rangle$, we can retain most of the information of the original $y_i^{j}$ and minimize the subsequent computational cost. In the experimental section we discuss the influence of different $\langle h_{i},w_{i}\rangle$ on the performance of networks. \subsection{Extraction of channel attentions} \label{subsec:extract} After obtaining $U_i$ from the above operation, we extract a channel attention from each $u_i^{j}$. However, there are $h_i \times w_i$ values in $u_i^{j}$, and these spatial values have different impacts on the attention of $u_i^{j}$. It is difficult to extract channel attentions by hand from so many complex feature maps, and the extraction operation is also constrained by network complexity. To tackle these two issues, we introduce a global depthwise convolution (GDConv). There are three advantages of using GDConv: (1) channel attentions can be directly extracted from all spatial information in feature maps through global convolution; (2) the computational cost is reduced significantly by depthwise convolution; and (3) channel attentions corresponding to different channels are extracted independently without affecting each other. Therefore, using GDConv, networks can learn the optimal kernel parameters by iteration to respond to all spatial information in feature maps while keeping the computational cost as low as possible. Fig.\ref{fig:gdc} shows the diagram of GDConv. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{2} \caption{The diagram of GDConv.} \label{fig:gdc} \end{figure} In Fig.\ref{fig:gdc}, $F_{gdc}$ represents GDConv, and $L_i=[l_i^{1}, l_i^{2}, \dots, l_{i}^{C_{i}}]$ denotes the kernels corresponding to $F_{gdc}$, where the size of $ l_i^{j}$ is $\langle h_{i},w_{i}\rangle$, which equals the spatial size of $u_i^{j}$. 
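Since each kernel $l_i^{j}$ has exactly the spatial size of $u_i^{j}$, the "global" convolution produces a single scalar per channel. A minimal numpy sketch (function and variable names are ours; the kernels here are random stand-ins for learned parameters):

```python
import numpy as np

def f_gdc(U, L):
    """Global depthwise convolution: each h x w kernel L[j] overlaps its
    whole input map U[j] exactly once, so the convolution collapses to a
    per-channel dot product, yielding one scalar per channel."""
    return np.einsum('chw,chw->c', L, U)

rng = np.random.default_rng(0)
U = rng.standard_normal((512, 7, 7))   # pooled maps u_i^j
L = rng.standard_normal((512, 7, 7))   # kernels l_i^j, same size as u_i^j
z = f_gdc(U, L)
print(z.shape)  # (512,)
```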
Let $V_i$ denote the channel attentions of $U_i$; then $V_i$ can be defined as: \begin{equation} \begin{aligned} V_i=[v_i^{1},v_i^{2}&,\dots,v_i^{C_{i}}]=F_{gdc}(U_i) \\v_i^{j}&=\sigma (l_i^{j} \odot u_i^{j}) \end{aligned} \end{equation} where $\sigma$ represents the sigmoid function. Note that $l_i^{j}$ only acts on $u_i^{j}$ and they have the same size, so the result of the GDConv operation is a scalar; after the sigmoid function, the result $v_i^{j}$ is taken as the channel attention of $u_i^{j}$. Next, we use channel-wise multiplication between feature maps and channel attentions to refine the output as: \begin{equation} \begin{aligned} \widetilde{Y_i}=[\widetilde{y_i^{1}},\widetilde{y_i^{2}},&\dots,\widetilde{y_i^{C_i}}]=F_{mul}(Y_i, V_i) \\ &\widetilde{y_i^{j}}=y_i^{j} \otimes v_i^{j} \end{aligned} \end{equation} where $F_{mul}$ is the channel-wise multiplication function, $\otimes$ refers to the symbol of channel multiplication and $\widetilde{y_i^{j}}$ denotes the final refined output. Through the proposed CRA module, we obtain channel attentions that refine the output feature maps; next, we introduce the embedding of CRA module into specific CNNs. \begin{table*}[!t] \caption{The detailed configuration of ResNet-50 (Left), SE-ResNet-50 (Middle) and CRA-ResNet-50 (Right). The operations and parameters in each residual block are listed in square brackets, and the number outside the square brackets denotes the number of stacked residual blocks. The square brackets following \textit{fc} indicate the output dimensions of the two fully connected layers in SE module, and the angle brackets following CRA denote the configuration of $\langle h_{i},w_{i}\rangle$ in CRA module. 
Params and FLOPs denotes parameters and floating point operations of the corresponding network, respectively.} \label{table:arth_tb} \centering \begin{tabular}{c|c|c|c} \hline output size & ResNet-50 & SE-ResNet-50 & CRA-ResNet-50 \\ \hline $\mathrm{112 \times 112}$ & \multicolumn{3}{c}{conv, $7\times7$, 64, stride 2}\\ \hline \multirow{2}*{$56 \times 56 $} &\multicolumn{3}{c}{max pool, $3\times3$, stride 2}\\ \cline{2-4} & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 64 \\ \mathrm{conv},\, 3 \times 3,\, 64 \\ \mathrm{conv},\, 1 \times 1,\, 256 \\ \end{array} \end{bmatrix}\times 3$ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 64 \\ \mathrm{conv},\, 3 \times 3,\, 64 \\ \mathrm{conv},\, 1 \times 1,\, 256 \\ \mathrm{\textit{fc},\, [16,\, 256]} \\ \end{array} \end{bmatrix}\times 3$ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 64 \\ \mathrm{conv},\, 3 \times 3,\, 64 \\ \mathrm{conv},\, 1 \times 1,\, 256 \\ \mathrm{CRA},\, \langle7 \times 7\rangle,\, 256\\ \end{array} \end{bmatrix}\times 3$ \\ \hline ${\rm 28 \times 28} $ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 128 \\ \mathrm{conv},\, 3 \times 3,\, 128 \\ \mathrm{conv},\, 1 \times 1,\, 512 \\ \end{array} \end{bmatrix}\times 4$ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 128 \\ \mathrm{conv},\, 3 \times 3,\, 128 \\ \mathrm{conv},\, 1 \times 1,\, 512 \\ \mathrm{\textit{fc},\, [32,\, 512]} \\ \end{array} \end{bmatrix}\times 4$ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 128 \\ \mathrm{conv},\, 3 \times 3,\, 128 \\ \mathrm{conv},\, 1 \times 1,\, 512 \\ \mathrm{CRA},\, \langle7 \times 7\rangle,\, 512\\ \end{array} \end{bmatrix}\times 4$ \\ \hline $\mathrm{14 \times 14} $ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 256 \\ \mathrm{conv},\, 3 \times 3,\, 256 \\ \mathrm{conv},\, 1 \times 1,\, 1024 \\ \end{array} \end{bmatrix}\times 6$ & $\begin{bmatrix} \begin{array}{l} 
\mathrm{conv},\, 1 \times 1,\, 256 \\ \mathrm{conv},\, 3 \times 3,\, 256 \\ \mathrm{conv},\, 1 \times 1,\, 1024 \\ \mathrm{\textit{fc},\, [64,\, 1024]} \\ \end{array} \end{bmatrix}\times 6$ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 256 \\ \mathrm{conv},\, 3 \times 3,\, 256 \\ \mathrm{conv},\, 1 \times 1,\, 1024 \\ \mathrm{CRA},\, \langle7 \times 7\rangle,\, 1024\\ \end{array} \end{bmatrix}\times 6$ \\ \hline ${\rm 7 \times 7} $ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 512 \\ \mathrm{conv},\, 3 \times 3,\, 512 \\ \mathrm{conv},\, 1 \times 1,\, 2048 \\ \end{array} \end{bmatrix}\times 3$ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 512 \\ \mathrm{conv},\, 3 \times 3,\, 512 \\ \mathrm{conv},\, 1 \times 1,\, 2048 \\ \mathrm{\textit{fc},\, [128,\, 2048]} \\ \end{array} \end{bmatrix}\times 3$ & $\begin{bmatrix} \begin{array}{l} \mathrm{conv},\, 1 \times 1,\, 512 \\ \mathrm{conv},\, 3 \times 3,\, 512 \\ \mathrm{conv},\, 1 \times 1,\, 2048 \\ \mathrm{CRA},\, \langle7 \times 7\rangle,\, 2048\\ \end{array} \end{bmatrix}\times 3$ \\ \hline $1 \times 1$ & \multicolumn{3}{c}{global average pool, 1000-d \textit{fc}, softmax}\\ \hline $\mathrm{Params} $ & $\mathrm{25.56M}$& $\mathrm{28.09M}$& $\mathrm{26.31M}$\\ \hline $\mathrm{FLOPs}$ & $\mathrm{4.11G}$& $\mathrm{4.12G}$& $\mathrm{4.11G}$\\ \hline \end{tabular} \end{table*} \subsection{Network Architectures and Computational Cost} \label{subsec:architectures} For CNNs without skip connections, such as VGG \cite{Simonyan2014}, we embed CRA module into convolutional layer directly. For CNNs with skip connections, such as ResNet \cite{He2016}, we embed CRA module into the last layer of residual block. Moreover, the variants of ResNet like ResNeXt \cite{Xie2017} and DenseNet \cite{Huang2017a} can also construct a new architecture in a similar way. 
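Putting the two stages together, one CRA forward pass (pooling, GDConv, sigmoid, channel-wise multiplication) can be sketched in numpy as follows. This is a simplified illustration under our own assumptions: the kernels $L$ are random stand-ins for learned parameters, and $H$, $W$ are assumed divisible by $h$, $w$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cra_forward(Y, L, h, w):
    """CRA module sketch: average-pool each map to h x w, apply a global
    depthwise convolution (one h x w kernel per channel -> one scalar),
    squash with a sigmoid, and reweight the original maps channel-wise."""
    C, H, W = Y.shape
    U = Y.reshape(C, h, H // h, w, W // w).mean(axis=(2, 4))  # F_pool
    V = sigmoid(np.einsum('chw,chw->c', L, U))                # F_gdc + sigma
    return Y * V[:, None, None]                               # F_mul

rng = np.random.default_rng(0)
Y = rng.standard_normal((256, 56, 56))   # output maps of one stage
L = rng.standard_normal((256, 7, 7))     # GDConv kernels l_i^j (stand-ins)
Y_refined = cra_forward(Y, L, 7, 7)
print(Y_refined.shape)  # (256, 56, 56)
```

Because each attention $v_i^{j}$ lies in $(0,1)$, the refined maps are always element-wise rescalings of the originals, never amplifications.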
For a detailed comparison of network configurations, we list the architecture configurations of ResNet-50, SE-ResNet-50 and CRA-ResNet-50 in Table.\ref{table:arth_tb}, in which ResNet-50 is taken as the baseline. It can be seen from the table that CRA module introduces only a few additional parameters, and its extra computational cost is negligible. More precisely, in the CRA module, the number of additional floating point operations (FLOPs) is $2C_i(3H_iW_i + h_iw_i)$ and the number of additional parameters is $C_i(h_iw_i+1)$ in the $i$-th layer. We prefix the original network name with "CRA-" to indicate the network with CRA module and validate its performance in the following experiments. \section{Experiments} \label{exp} In this section, we first evaluate CRA module embedded into various baseline networks on the ImageNet and CIFAR datasets to verify the performance of CRA module in image classification tasks, and use Grad-CAM visualization results to show the ability of CRA module to focus on target object regions. Next, we validate the proposed method on the MS COCO dataset to verify the performance of CRA module in object detection. Finally, we analyze the channel attentions extracted by the proposed CRA module to gain insight into its details. \iffalse \begin{figure}[!htb] \centering \includegraphics[width=1.0\linewidth]{imagenetvis} \caption{Sample images from ImageNet 2012 database.} \label{fig:imagenetvis} \end{figure} \fi \subsection{Image Classification on ImageNet} \label{exp_imagenet} The ImageNet 2012 classification dataset \cite{JiaDeng2009} contains more than 1.28 million training images and 50k validation images from 1000 classes. Some sample images of ImageNet 2012 dataset can be seen in the first row in Fig~\ref{fig:cam}. For all networks, we report both top-1 and top-5 error rates with a single center crop on the validation set \cite{He2016, Hu2018}. 
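As a sanity check of the parameter counts in Table \ref{table:arth_tb}: summing the additional parameters $C_i(h_iw_i+1)$ over all CRA modules of CRA-ResNet-50, with $\langle h_i,w_i\rangle = \langle 7,7\rangle$ and the block channel counts from the table, reproduces the gap between 25.56M and 26.31M. (The interpretation of the $+1$ as a per-channel bias is our assumption.)

```python
# Extra CRA parameters in one layer: C_i * (h_i * w_i + 1)
# (one h_i x w_i 'kernel per channel; the +1 is presumably a per-channel bias).
def cra_extra_params(C, h=7, w=7):
    return C * (h * w + 1)

# ResNet-50 stages: (output channels of each residual block, number of blocks)
stages = [(256, 3), (512, 4), (1024, 6), (2048, 3)]
extra = sum(n * cra_extra_params(C) for C, n in stages)
print(extra)  # 755200, i.e. about 0.75M: 25.56M -> ~26.31M
```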
We follow the practice in \cite{Krizhevsky2012, Simonyan2014, He2016}, initialize the weights as in \cite{He2015} and train for 100 epochs in total. The initial learning rate is 0.1 and is divided by 10 every 30 epochs, with a batch size of 256. The weight decay is 1e-4. All networks are trained using the SGD optimizer with 0.9 momentum \cite{He2016, Hu2018}. To ensure a fair comparison, we reimplement all the networks with the same settings, including hyperparameter settings, data augmentation settings, etc. \begin{table}[!htb] \centering \caption{Image classification results of CNNs with different architectures on ImageNet 2012; single center crop validation errors are reported. } \label{tab:exp_imagenet} \begin{tabular}{rccccc} \toprule \multicolumn{2}{c}{\textbf{Architecture}}&\textbf{Top-1 (\%)}&\textbf{Top-5 (\%)}&\textbf{FLOPs}&\textbf{Params}\\ \hline \multicolumn{2}{r}{ResNet-50 \cite{He2016}}&24.20&7.15&4.11G&25.56M\\ \multicolumn{2}{r}{SE-ResNet-50 \cite{Hu2018}}&23.02&6.65&4.12G&28.09M\\ \multicolumn{2}{r}{CRA-ResNet-50}&\textbf{22.77}&\textbf{6.47}&4.11G&26.31M\\ \hline \multicolumn{2}{r}{ResNet-101 \cite{He2016}}&23.12&6.67&7.84G&44.55M\\ \multicolumn{2}{r}{SE-ResNet-101 \cite{Hu2018}}&22.28&6.02&7.85G&49.33M\\ \multicolumn{2}{r}{CRA-ResNet-101}&\textbf{21.60}&\textbf{5.93}&7.84G&46.17M\\ \hline \multicolumn{2}{r}{ResNeXt-50 \cite{Xie2017}}&22.10&6.14&4.26G&25.03M\\ \multicolumn{2}{r}{SE-ResNeXt-50 \cite{Hu2018}}&\textbf{21.95}&6.02&4.27G&27.56M\\ \multicolumn{2}{r}{CRA-ResNeXt-50}&21.99&\textbf{5.87}&4.26G&25.78M\\ \hline \multicolumn{2}{r}{ResNeXt-101 \cite{Xie2017}}&21.27&5.79&8.01G&44.18M\\ \multicolumn{2}{r}{SE-ResNeXt-101 \cite{Hu2018}}&20.93&5.66&8.03G&48.96M\\ \multicolumn{2}{r}{CRA-ResNeXt-101}&\textbf{20.71}&\textbf{5.47}&8.02G&45.80M\\ \toprule \end{tabular} \end{table} \textbf{Image classification results of different baselines:} We carry out the experiments on four baseline networks (i.e., ResNet-50, ResNet-101 \cite{He2016}, 
ResNeXt-50 and ResNeXt-101 \cite{Xie2017}). For comparison with CRA module, SE module \cite{Hu2018} is also embedded into the above four networks. The experimental results are shown in Table.\ref{tab:exp_imagenet}. It can be seen that the CRA-embedded networks perform much better than both the original networks and the SE-embedded networks. The top-1 errors of CRA-ResNet-50 (22.77\%) and CRA-ResNet-101 (21.60\%) drop by 1.43\% and 1.52\% respectively compared to those of the original networks. Even better, our CRA-ResNet-50 has fewer parameters and a lower computational cost than ResNet-101 while outperforming it. Similar to the experiments on ResNet, CRA module also brings obvious improvements to ResNeXt. In terms of both accuracy and parameter count, CRA module is also superior to SE module: the top-1 error of CRA-ResNet-50 is 22.77\% with 26.31M parameters, while the top-1 error of SE-ResNet-50 is 23.02\% with 28.09M. CRA module reduces the parameters by 1.78M but improves the accuracy by 0.25\%, which indicates that the proposed CRA module is efficient. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{curve2} \caption{Training and test error curves of ResNet-50, SE-ResNet-50 and CRA-ResNet-50 on ImageNet 2012.} \label{fig:curve1} \end{figure} Fig.~$\ref{fig:curve1}$ depicts the training and test error curves of ResNet-50, SE-ResNet-50 and CRA-ResNet-50 during the ImageNet training. We can observe that the errors of the CRA-embedded network are much lower than those of the baselines, and the CRA-embedded network is more stable than the SE-embedded network during training, which indicates that CRA module has stronger fitting ability and representational power. \begin{table}[!htb] \centering \caption{Image classification results of ResNet-50 with different attention mechanisms on ImageNet 2012. 
} \label{tab:exp_imagenet2} \begin{tabular}{rcccc} \toprule \multicolumn{1}{c}{{\textbf{Architecture}}}&\textbf{Top-1 (\%)}&\textbf{Top-5 (\%)}&\textbf{FLOPs}&\textbf{Params}\\ \hline ResNet-50 \cite{He2016}&24.20&7.15&4.11G&25.56M\\ \hline BAM \cite{Park2019}&23.21&6.84&4.11G&25.93M\\ SE \cite{Hu2018}&23.02&6.65&4.12G&28.09M\\ CBAM \cite{Woo2018}&23.01&6.59&4.13G&28.07M\\ GALA \cite{Linsley}&22.94&6.52&4.13G&29.42M\\ CRA&\textbf{22.77}&\textbf{6.47}&4.11G&26.31M\\ \hline \toprule \end{tabular} \end{table} \textbf{Results of different attention modules:} To compare with other existing attention mechanisms, we benchmark CRA module against BAM \cite{Park2019}, CBAM \cite{Woo2018}, SE \cite{Hu2018}, and GALA \cite{Linsley} on the ResNet-50 architecture. We use the same training strategy as above, and the results are shown in Table.\ref{tab:exp_imagenet2}. Among all attention modules, CRA module brings the most significant improvement to ResNet-50, and the additional parameters and computational cost introduced by CRA module are moderate. These results suggest that CRA module provides a competitive trade-off between accuracy and computational cost compared to the previously proposed attention modules. \begin{figure*}[htb] \centering \includegraphics[width=1.0\linewidth]{cam} \caption{Grad-CAM \cite{Selvaraju2020} visualization results of ResNet-50, SE-ResNet-50 and our CRA-ResNet-50.} \label{fig:cam} \end{figure*} \textbf{Visualization results of Grad-CAM:} To further show the superiority of CRA module, we apply Grad-CAM \cite{Selvaraju2020} to ResNet-50, SE-ResNet-50 and CRA-ResNet-50 for comparison. Grad-CAM uses the gradients of a target concept in convolutional layers to highlight the areas of the input image that are important for predicting the concept. We randomly select some images from the ImageNet 2012 validation set, and Fig.~$\ref{fig:cam}$ shows the visualization results. 
As shown in the figure, the Grad-CAM masks of CRA-ResNet-50 tend to concentrate on more relevant areas of the target objects, while the other two networks clearly do not perform as well as the CRA-embedded network. This reveals that CRA module improves the ability of networks to recognize objects. \begin{table}[tp] \centering \caption{Image classification results of CRA-ResNet-50 with different $\langle h_i, w_i\rangle$ on ImageNet 2012. } \label{tab:exp_diff_hw} \begin{tabular}{cccc} \toprule $\bm{\langle h_i,w_i \rangle}$&\textbf{Top-1 Error (\%)}&\textbf{Top-5 Error (\%)}&\textbf{Params}\\ \hline $\langle 7,7 \rangle$&\textbf{22.77}&\textbf{6.47}&26.31M\\ $\langle 5,5 \rangle$&22.98&6.58&25.95M\\ $\langle 3,3\rangle$&23.27&6.53&25.71M\\ $\langle 1,1\rangle$&23.56&6.88&25.59M\\ w/o&24.20&7.15&25.56M\\ \toprule \end{tabular} \end{table} \textbf{Ablation Study:} According to the previous analysis, $\langle h_i, w_i\rangle$ controls the parameters and computational cost of CRA module. To investigate the effect of different $\langle h_i, w_i\rangle$ on CRA module, we compare the performance of CRA-ResNet-50 with different $\langle h_i, w_i\rangle$ on ImageNet 2012. CRA-ResNet-50 has $4$ stages as shown in Table \ref{table:arth_tb}; when the input image is $224\times224$ pixels, the spatial size of the feature maps in the last stage is $7\times7$. Therefore, in our experiments, the maximum value of $\langle h_i, w_i\rangle$ we assign is $\langle 7, 7\rangle$. Table.\ref{tab:exp_diff_hw} shows the comparison results under different $\langle h_i, w_i\rangle$. Note that when $\langle h_i, w_i\rangle$ is set to $\langle 1, 1\rangle$, it is equivalent to using global average pooling on the feature maps, that is, the spatial information in the feature maps is not used. 
Table.\ref{tab:exp_diff_hw} shows that spatial information of feature maps has an important influence on the generation of channel attentions, and indicates that the larger the value of $\langle h_i, w_i\rangle$, the better the performance of CRA-embedded network, because a larger $\langle h_i, w_i\rangle$ can retain more spatial information. \begin{figure}[!htbp] \centering \includegraphics[width=1.0\linewidth]{cifar4} \caption{Sample images from CIFAR-10 database.} \label{fig:cifar} \end{figure} \subsection{Image Classification on CIFAR-10 and CIFAR-100} CIFAR-10 and CIFAR-100 datasets \cite{Krizhevsky2009} consist 50k images in the training set and 10k images in the test set, and the size of the image is $32 \times 32$ pixels. The images in CIFAR-10 are divided into 10 classes while the ones in CIFAR-100 are divided into 100 classes. Some sample images of CIFAR-10 dataset can be seen in Fig~\ref{fig:cifar}. We report the final top-1 error rates on the test set. On the two CIFAR datasets, we still conduct experiments with the original networks, SE-embedded networks and CRA-embedded networks. We use batch size 64 for 300 epochs and follow \cite{He2016} as data augmentation. The initial learning rate is set to 0.1 and divided by 10 at the epoch of 150 and 225. According to the different architectures of networks, we set $\langle h_{i},w_{i}\rangle$ to $\langle 8,8\rangle$ for ResNet and ResNeXt, and set it to $\langle 4,4\rangle$ for ShuffleNet \cite{Zhang2018}, DenseNet and VGG in this experiment. The results are summarized in Table \ref{tab:exp_cifar}. It shows that all CRA-embedded networks perform much better than the other two comparison networks. These results suggest that CRA module brings significant improvements and achieves perfect performance. 
\begin{figure}[H] \centering \includegraphics[width=1.0\linewidth]{4} \caption{Traning errors and test errors of ResNet-110, SE-ResNet-110 and CRA-ResNet-110 on CIFAR-100.} \label{fig:cirve2} \end{figure} \begin{table*}[htbp] \centering \caption{Image classification results of CNNs with different arthitectures on CIFAR-10 and CIFAR-100. Top-1 validation errors are reported.} \label{tab:exp_cifar} \begin{tabular}{cccrr|crr|crr} \toprule \multicolumn{2}{c}{\multirow{2}{*}{\diagbox{\textbf{Architecture}}{\textbf{CIFAR-10} }}}& \multicolumn{3}{c}{\textbf{Original network}}& \multicolumn{3}{c}{\textbf{SE-embedded network }}& \multicolumn{3}{c}{\textbf{CRA-embedded network}}\\ \cline{3-11}&&\textit{Error (\%)} & \textit{Params} & \textit{FLOPs} & \textit{Error (\%)} & \textit{Params}& \textit{FLOPs} & \textit{Error (\%)} & \textit{Params}& \textit{FLOPs} \\ \hline \multicolumn{2}{r}{ResNet-56 \cite{He2016}}&6.37&853.02K&126.56M&6.06&860.14K&126.56M&\textbf{5.62}&918.54K&126.62M\\ \multicolumn{2}{r}{ResNet-110 \cite{He2016}}&5.61&1.73M&254.99M&5.42&1.74M&255.01M&\textbf{5.24}&1.86M&255.12M\\ \multicolumn{2}{r}{ResNeXt-29 \cite{Xie2017}}&3.74&18.17M&3.03G&3.89&18.69M&3.03G&\textbf{3.73}&18.52M&3.03G\\ \toprule \toprule \multicolumn{2}{c}{\multirow{2}{*}{\diagbox{\textbf{Architecture}}{\textbf{CIFAR-100} }}}& \multicolumn{3}{c}{\textbf{Original network}}& \multicolumn{3}{c}{\textbf{SE-embedded network }}& \multicolumn{3}{c}{\textbf{CRA-embedded network}}\\ \cline{3-11}&&\textit{Error (\%)} & \textit{Params} & \textit{FLOPs} & \textit{Error (\%)} & \textit{Params}& \textit{FLOPs} & \textit{Error (\%)} & \textit{Params}& \textit{FLOPs} \\ \hline \multicolumn{2}{r}{ResNet-56 \cite{He2016}}&27.25&858.87K&126.57M&26.74&865.99K&126.58M&\textbf{26.36}&924.39K&126.63M\\ \multicolumn{2}{r}{ResNet-110 \cite{He2016}}&26.54&1.73M&255.00M&25.91&1.75M&255.02M&\textbf{24.63}&1.86M&255.13M\\ \multicolumn{2}{r}{ResNeXt-29 
\cite{Xie2017}}&18.61&18.26M&3.03G&17.23&18.78M&3.03G&\textbf{17.04}&18.61M&3.03G\\ \multicolumn{2}{r}{ShuffleNet-1x \cite{Zhang2018}}&28.75&1.01M&44.55M&28.52&1.63M&45.17M&\textbf{27.59}&1.38M&44.92M\\ \multicolumn{2}{r}{VGG-16 \cite{Simonyan2014}}&26.00&15.30M&315.18M&25.70&15.53M&315.41M&\textbf{25.60}&15.37M&315.25M\\ \multicolumn{2}{r}{DenseNet-BC\cite{Huang2017a}}&22.66&800.03K&296.59M&22.30&804.21K&296.59M&\textbf{21.64}&950.18K&296.74M\\ \toprule \end{tabular} \end{table*} \begin{figure*}[!htb] \centering \includegraphics[width=.98\linewidth]{coco_vis2} \caption{The comparison visualization results of RetinaNet with the original ResNet-50 backbone and our CRA-ResNet-50 backbone on MS COCO dataset. Input images are picked from MS COCO 2017 validation set randomly. The images in the first, second and third row are the original input images, the results of RetinaNet with ResNet-50 backbone, and the results of RetinaNet with CRA-ResNet-50 backbone, respectively.} \label{fig:cocovis} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=1.0\linewidth]{6} \caption{channel attentions extracted by CRA modules in different layers of CRA-ResNet-50 on the dataset of ImageNet 2012. Different style curves represent different input images. CRA modules in different layers are named as "CRA.\textit{stageID.blockID}".} \label{fig:CRA_vis} \end{figure*} Fig.\ref{fig:cirve2} depicts the training and test curves of ResNet-101, SE-ResNet-101 and CRA-ResNet-101 on CIFAR-100. It is worth noting that after 150 epochs, training errors of three networks tend to be silimar while for the error rates on the test set, CRA-ResNet-101 is obviously superior to ResNet-101 and SE-ResNet-101. These curves illustrate that CRA module enhances the information processing ability of feature maps and improves the representation ability of networks. 
\subsection{Object Detection on MS COCO} In order to verify the effectiveness of CRA module on other task, we conduct object detection experiment on MS COCO 2017 dataset \cite{Lin2014}. MS COCO 2017 is divided into 80 classes, which has more than 118k images in the training set and 5k images in the validation set. Some sample images of MS COCO 2017 dataset can be seen in the first row in Fig~\ref{fig:cocovis}. We report the results in mean Average Precision (mAP) over different IoU thresholds from 0.5 to 0.95 on the validation set, following \cite{Lin2017a, Lin2017}. We use RetinaNet \cite{Lin2017} as our detection framework, ResNet-50, ResNet-101 and their CRA-embedded networks as its backbone respectively. These backbones are pre-trained on ImageNet 2012. We use SGD optimizer and set batch size to 16. The initial learning rate is 0.1 and we totally train 90k iterations. The learning rate is divided by 10 at 60k and 80k iterations with 500 iterations warmup to the initial learning rate. Table.~\ref{table:coco} shows the experimental results. As shown in the table, the performance of RetinaNet with CRA-embedded network backbone significantly exceeds that of the original backbone. This proves that CRA module brings more powerful representation ability and maintains a strong generalization performance. We analyze the improvement brought by the backbone of CRA-ResNet-50 to RetinaNet from the visualization results, as shown in Fig.~\ref{fig:cocovis}. It can be found that for the image in the left column, a bus ignored by RetinaNet with ResNet-50 backbone has been detected by RetinaNet with CRA-ResNet-50 based backbone. And for the image in the middle column, a small traffic light is successfully detected by CRA-ResNet-50 backbone based RetinaNet. These indicate that RetinaNet with CRA-ResNet-50 backbone is more sensitive to small and medium objects. 
In addition, for partially occluded objects, such as the remote control in the image in the right column, RetinaNet composed by CRA-embedded backbone can also detect well. In summary, the visualization results show the good generalization performance of CRA module on object detection tasks. \begin{table}[!htbp] \caption{Object detection results on MS COCO 2017.} \label{table:coco} \centering \begin{tabular}{ccccc} \hline Backbone & Detector & mAP$_{[.5:.95]}$ & mAP$_{.5}$ & mAP$_{.75}$\\ \hline \multicolumn{1}{r}{ResNet-50 \cite{He2016}}&RetinaNet&33.7\%&52.4\%&36.1\%\\ \multicolumn{1}{r}{CRA-ResNet-50} &RetinaNet&\textbf{34.2\%}&\textbf{53.6\%}&\textbf{36.4\%} \\ \hline \multicolumn{1}{r}{ResNet-101 \cite{He2016}}&RetinaNet&35.6\%&54.4\%&38.5\%\\ \multicolumn{1}{r}{CRA-ResNet-101} &RetinaNet&\textbf{36.2\%}&\textbf{55.7\%}&\textbf{38.9\%} \\ \hline \end{tabular} \end{table} \subsection{Analysis of Channel Attentions} To explore the responses of CRA module to different input images, we randomly select 3 images as the inputs of CRA-ResNet-50 (i.e., \textit{peacock}, \textit{parachute} and \textit{bee}, the last three images shown in Fig.\ref{fig:cam}). Fig.~\ref{fig:CRA_vis} shows channel attentions extracted by CRA modules from different layers in CRA-ResNet-50. We name CRA modules in different layers as "CRA.\textit{stageID.blockID}". By comparing the responses of CRA modules in different stages (e.g., CRA.1.2 and CRA.3.2), we can observe that channel attentions in shallow layers (CRA.1.2) are much lower than that in deep layers (CRA.3.2). This is because, as mentioned in \cite{Yosinski2014} \cite{Luo2017}, the shallow layers in networks learn general features, which contain a lot of redundant information, while the deep layers learn the specific features and have more important information. For different blocks in the same stage (e.g., CRA.3.2 and CRA.3.4), channel attentions that extracted by CRA module in different channels are different. 
And for different images, channel attention obtained from the same channel tends to be similar, but there are also slight differences. Through embedding CRA module, the ability of CNNs to capture important information in the forward propagation process is strengthened, and the interference of useless information is reduced, thus the performance of CNNs is improved. \section{Conclusion} \label{sec:conclusion} This paper presents a novel \textit{Channel Reassessment Attention} (CRA) module to improve the performance of networks. Our module extracts channel attentions through compression and extraction operations based on spatial information of feature maps, and then refines features adaptively. For image classification tasks, the experimental results on the datasets of ImageNet 2012, CIFAR-10 and CIFAR-100 indicate that CRA module can effectively improve the performance of many different architectures based on CNNs at the minimal additional computational cost. For object detection tasks, the experimental results on MS COCO 2017 datasets verify the generalization performance of CRA module. \section*{Acknowledgment} This work was supported in part by the National Nature Science Foundation of China (61773166), 2030 National Key AI Program of China (2018AAA0100500) and the Science and Technology Commission of Shanghai Municipality under Grant 14DZ2260800. \newpage \bibliographystyle{IEEEbib}
2,877,628,089,964
arxiv
\section{Introduction} The conversion of gas into stars in galaxies and the role of feedback mechanisms has been one of the key aspects of extragalactic astrophysics over the past decades. Several baryonic processes regulate the star formation efficiency within dark matter haloes. Simulations of galaxy formation and semi-analytical models have shown that it is only within haloes in a mass range around M$_{\rm halo}$~$\sim$~M$_{\rm shock}$~$\sim$~$10^{12}$~${\rm M}_{\odot}$ where baryons can form stars efficiently \citep{Cattaneo.etal:2011, Moster.etal:2010, Bouche.etal:2010, Guo.etal:2011}. Above this limit, gravitational shock heating and AGN feedback suppress the gas accretion \citep{Dekel.Birnboim:2006, Keres.etal:2009a, Keres.etal:2005, Birnboim.Dekel:2003, Cattaneo.etal:2009, Cattaneo.etal:2011}. For galaxies within haloes with masses below $\sim$~$10^{12}$~${\rm M}_{\odot}$, other processes are usually invoked to explain the star formation suppression, for instance, reionization of the IGM\citep{Mamon.etal:2010, Cattaneo.etal:2011}. The energy liberated by supernova explosions can eject the gas from haloes with circular velocity $\lesssim 100$~km~s$^{-1}$, quenching star formation \citep{Dekel.Silk:1986}. The fraction of mass acquired via mergers is also a function of stellar mass. For example, the semi-analytical models of \citet{deLucia.etal:2006} show that the number of effective progenitors of galaxies with stellar masses $\lesssim 10^{11} {\rm M}_{\odot}$ is less than two, while this number can be as large as five for galaxies with ${\rm M}_{\star} \sim 10^{12} {\rm M}_{\odot}$. According to \citet{Cattaneo.etal:2011}, the dependence of feedback and merger processes on stellar mass defines three galaxy formation regimes. Stellar mass $\sim 10^{11} {\rm M}_{\odot}$ marks the transition between two dominant mechanisms: gas accretion (M$_{\star} \lesssim 10^{11} {\rm M}_{\odot}$) and gas-poor mergers (M$_{\star} \gtrsim 10^{11} {\rm M}_{\odot}$). 
A third regime, set immediately below $\sim 10^{11} {\rm M}_{\odot}$, is characterized by the increasing contribution of a population built by gas-rich mergers. The contribution of these mass-dependent processes explain the well known dichotomy among early-type galaxies \citep[e.g.][]{Kang.etal:2007}, supported by many observations, like e.g. a characteristic mass scale \citep{Kauffmann.etal:2003}, and a well-defined mass-metallicity relation \citep[see e.g.][]{Tremonti.etal:2004}. If the dichotomy originates from the mass-dependent role played by feedback, gas accretion, gas-rich and gas-poor mergers, we would expect to find signatures of these processes on the formation history of galaxies with respect to stellar mass. In this letter, the star formation history of a sample of galaxies over a wide range of stellar mass (from $10^9$ to $10^{11.5} {\rm M}_{\odot}$) has been examined for the presence of those signatures. This letter is organized as follows: in Sect. \ref{Sec_sample}, we describe the sample; in Sect. \ref{Sec_starlight}, we present a detailed study of the stellar populations using a spectral fitting code, which also is able to return the star formation history. Finally, we summarize and discuss our results in Sect. \ref{Sec_results}. Throughout the paper, we adopt a cosmology with $H_0 = 70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\rm m} = 0.3$ and $\Omega_{\Lambda} = 0.7$. \begin{figure*} \centering \begin{tabular}{cc} \resizebox{0.35\hsize}{!}{\includegraphics{mcor_mupetro.ps}} & \resizebox{0.35\hsize}{!}{\includegraphics{mcor_rpetro.ps}} \\ \resizebox{0.35\hsize}{!}{\includegraphics{mcor_age_M.ps}} & \resizebox{0.35\hsize}{!}{\includegraphics{mcor_met_M.ps}} \\ \end{tabular} \caption{ {\bf Upper panels:} Scaling relations between stellar mass and surface brightness (left) or half-light radius (right). Both quantities are measured with respect to the Petrosian radius measured in the $r$ band. 
{\bf Bottom panels:} Mass-weighted average age (left) and metallicity (right), as obtained by the {\tt STARLIGHT} spectral fitting code (see text for details). The grayscale corresponds to the density of points in each graph, with contour lines given at the 90, 70, 50, 25 and 5\% levels with respect to the maximum value. The vertical dashed lines indicate M$_{\star} \sim 3 \times 10^{10}$~M$_{\odot}$, which corresponds to the stellar mass of objects with $\mathcal{M}_{{\rm petro}, r} = -20.2$, i.e. at the knee in the scaling relations. Median values obtained within mass bins (see Table \ref{Tab_bins}) are shown in each panel as open dots.} \label{Fig_scale_relations} \end{figure*} \section{Sample description} \label{Sec_sample} Our sample of early-type galaxies was retrieved from SDSS-DR7 \citep{Abazajian.etal:2009}, selecting galaxies in the redshift range between $0.01$ and $0.025$, brighter than $m_{\rm petro, r} < 17.77$, where $m_{\rm petro, r}$ is the Petrosian magnitude in the r-band. This limit roughly corresponds to the magnitude at which the SDSS spectroscopy is complete \citep{Strauss.etal:2002}. The redshift limits chosen provide a 95\% complete sample between $\mathcal{M}_{\rm petro, r} \sim -20$ and $\sim -17.46$, where $\mathcal{M}_{\rm petro, r}$ is the k-corrected SDSS Petrosian absolute magnitude in r-band, obtained with the {\tt kcorrect} code (version 4\_2) of \citet{Blanton.etal:2003}, choosing as reference the median redshift of the sample (z$_0$=0.021). See \citet{LaBarbera.etal:2008} for details on the estimation of the completeness limits. To select objects from the SDSS database, we define a flag mask, excluding those objects which are: $i$) blended, i. 
e., with more than one photometric peak\footnote{\scriptsize Since we are using objects from the primary catalog, this selection is equivalent to exclude objects with two or more peaks, which were not deblended}; $ii$) too bright (detections of $> 200~\sigma$); $iii$) too large ($r > 4'$ or a deblend with $r > 1/2$~frame); $iv$) saturated; $v$) located close to the edge of a frame or $vi$) in a region where the sky measurement failed, thus the photometry is compromised; objects which $vii$) are part of the extended wing of a bright star or $viii$) which may be an electronic ghost of a bright star were also excluded. In addition, only objects with no spectroscopic warning on (i.e. {\tt zWarning} attribute set to zero) were selected. This selection returns 10187 objects. Early-type galaxies obey well-studied scaling relations. Several morphological classification indicators have been proposed from the parameters of the SDSS pipeline \citep[e.g. ][]{Strateva.etal:2001}. However, it is still unclear whether these indicators can be applied to low mass systems. Previous work has shown that dwarf elliptical galaxies do not have the same surface brightness profile as their giant counterparts \citep{Graham.Guzman:2003}, or similar star formation histories \citep{Koleva.etal:2009}, even featuring weak spiral structures \citep{Lisker:2009}. Hence, we cannot use the standard SDSS attributes such as {\tt eClass} or {\tt fracDev}. Instead, visual inspection is the most reliable indicator of an early-type morphology. We use the classification from the Galaxy Zoo project \citep{Lintott.etal:2011}, selecting only galaxies classified as elliptical. We found 10138 ($99.7$\%) galaxies from our sample in the Galaxy Zoo database. Among them, 1359 are classified as ellipticals (spirals and unreliable classifications are discarded). The completeness constraint -- in $\mathcal{M}_{\rm petro, r}$ -- results in a final sample of 1328 objects. 
A further visual inspection was carried out by the authors to confirm the morphological classification. \begin{figure*}[!ht] \centering \begin{tabular}{c} \resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{stellar_mass_redshift.ps}} \\ \end{tabular} \caption{\label{Fig_t50_t80} Fraction of assembled stellar mass as a function of redshift. The panels show the fraction of mass normalized by the mass of galaxies at z~$\sim 0.1$. Black triangles and solid (dashed) lines show median (mean) values for the present sample of early-type galaxies, with $1\sigma$ confidence intervals being marked as dotted lines. Results by \citet{PerezGonzalez.etal:2008} are plotted as (blue) solid circles with error bars, and dashed blue curves.} \end{figure*} \section{Stellar content} \label{Sec_starlight} Age, metallicity, stellar mass and velocity dispersion were derived using the spectral fitting code {\tt STARLIGHT} \citep{CidFernandes.etal:2005}. Before running the code, the observed spectra are corrected for foreground extinction and de-redshifted, and the models are degraded to match the wavelength-dependent resolution of the spectrum of each galaxy, as described in \citet{LaBarbera.etal:2010a}. We used SSP models based on the Medium resolution INT Library of Empirical Spectra \citep[MILES, ][]{SanchezBlazquez.etal:2006}, using the code presented in \citet{Vazdekis.etal:2010}, using version 9.1 \citep{FalconBarroso.etal:2011}. They have a spectral resolution of $\sim 2.5$ \AA, almost constant with wavelength. A basis grid of 162 SSPs was selected, covering ages in the range of $0.07 - 12.6$~Gyr, and with [M/H]~=$\{-1.71, -1.31, -0.71, -0.38, 0.00, +0.20\}$. The models use a \citet{Kroupa:2001} Universal IMF with slope $= 1.30$, and isochrones by \citet{Girardi.etal:2000}. The stellar masses -- computed within the fiber aperture -- are extended to the full extent of the galaxy by computing the difference between fiber and model magnitudes in the $z$ band. 
The stellar mass is then $\log($M$_{\star}) = \log({\rm M}_{\star})^{\prime} + 0.4~(m_{{\rm fiber},z} - m_{{\rm model},z})$. We compare the results from {\tt STARLIGHT} with different set-up parameters and different grids. We find no systematic trends, with differences typically within $\pm 20$\%. A detailed study of how results are affected by changes on the set-up parameters of {\tt STARLIGHT} and different SSP model assumptions will be given in a forthcoming paper (Trevisan et al., in prep.). Variations in the SSP optical colors due to different IMF shapes are very small \citep[see e.g. ][]{Vazdekis.etal:2010}. Since {\tt STARLIGHT} uses all the spectral information available, different IMFs are not expected to affect our results. Hence, if a systematic change of the IMF is present \citep[as suggested by, e.g.][]{vanDokkum.Conroy:2011} from low- to high-mass early-type galaxies, the net result would be a change of the stellar mass that corresponds to the position of the knee, keeping the derived stellar population properties presented here unchanged. \section{Results and discussion} \label{Sec_results} In this letter, we study the star formation histories of a sample of $\sim 1300$ visually-selected elliptical galaxies by means of a spectral fitting method. For each galaxy we determine the stellar mass, metallicity, age and star formation history (SFH). Fig.~\ref{Fig_scale_relations} shows both the photometric scaling relations (upper panels), along with the scaling of the derived average ages and metallicities, weighted in mass (bottom panels). The plots of surface brightness $\mu_{\rm Petro}$ and Petrosian radius {\it vs.} stellar mass provide similar information, considering that $\mu_{\rm Petro}$ is derived from Petrosian radius and luminosity, with the latter following the stellar mass of a galaxy. We bin the sample into five subsamples in stellar mass, indicated by blue dots in Fig.~\ref{Fig_scale_relations}. 
Table \ref{Tab_bins} presents the median properties of the stellar populations for each bin. Regarding the age and metallicity scaling relation (bottom panels in Fig.~\ref{Fig_scale_relations}), a clear change in the slope of [M/H] {\it vs.} stellar mass is apparent at $\sim 3 \times 10^{10} {\rm M}_{\odot}$, equivalent to an absolute magnitude of $\mathcal{M}_{{\rm dev}, r} \sim -20.2$, below which the metallicity decreases linearly with mass. This corresponds approximately to the position of the knee seen in the photometric scale relations. The age distribution is more complex, with a homogeneous population of old galaxies for M$_\star\gtrsim10^{10}$M$_\odot$, and an increased scatter towards younger ages with decreasing mass. It is not clear whether the dichotomy in structural properties has the same origin as the stellar population properties \citep[see e.g. ][]{Graham.Guzman:2003, Janz.Lisker:2008, Cote.etal:2008}. The fact that the knee in the photometric scaling relations has a counterpart in the stellar population properties might indicate the processes regulating the star formation also affect the structural properties of galaxies. For example, feedback mechanisms might affect galaxy sizes, since the gas is pushed out of galaxies by outflows and might be converted into stars at large radii. \citep[e.g. ][]{ Fan.etal:2008, Fan.etal:2010, Damjanov.etal:2009}. \subsection{Star formation history} \label{Sec_SFH} Besides the averaged age and metallicity, spectral fitting provides a wealth of additional information on the star formation histories of individual galaxies. For a given spectrum, a {\tt STARLIGHT} run returns the contribution, as a percentage of mass, from each {\it basis} SSP. This distribution traces directly the star formation history. For each galaxy in the sample, we determine the ``cumulative'' mass fraction, i.e. the fraction of stars older than a given age, as a function of age. 
Then, we average the cumulative distributions over all galaxies within each mass bin. The age of the distribution at the 50th and 80th percentiles in stellar mass is presented in Table \ref{Tab_bins}. Galaxies with mass $\gtrsim 10^{10} {\rm M}_{\odot}$ form their stars early and over a very short period of time, with 80\% of their stars being older than $\sim 11$~Gyr. Galaxies in the low mass bins also have a very old stellar population. However, the time required to form 80\% of the stellar mass is $\sim 5-6$~Gyr longer than that required by more massive galaxies. Hence, in early-type galaxies, {\it downsizing} should be interpreted as a more extended period of formation in low mass galaxies (instead of an overall later process of formation). We compare our results with \citet{PerezGonzalez.etal:2008}. Their study is based on a sample of $\sim 28000$ objects of all morphological types at 0~$<$~z~$<$~4 to constrain the evolution of the stellar mass content in galaxies as a function of redshift. We rebin our sample into three subsamples with masses ranging from $\log({\rm M}_{\star}) = 9.0-10.0$, $10.0-11.0$, and $11.0-11.5$, which correspond to the first three bins of \citet{PerezGonzalez.etal:2008}. Figure \ref{Fig_t50_t80} shows the cumulative mass fractions as a function of redshift. In all three bins, early-type galaxies are formed in a much more efficient process, in contrast to the sample of \citet{PerezGonzalez.etal:2008}, although the difference is more pronounced in the most massive bin. The large difference is due to the fact that their sample includes galaxies of all morphological types. In contrast, the analysis of the stellar populations of visually classified early-type galaxies at z~$\lesssim$~1, yield a short-lived and early process of star formation \citep{Ferreras.etal:2009b}, consistent with our findings. 
Furthermore, the present sample is susceptible to aperture effects, since all galaxies are observed through a fiber with fixed angular diameter. Table~\ref{Tab_bins} reports the mean aperture for each mass bin. The aperture $A$ is defined as the ratio between the radius of the SDSS fiber and the half-light Petrosian radius measured in the $r$ band, $A = R_{\rm fiber}/R_{{\rm petro50}, r}$. The mean aperture varies from $A \sim 0.5$ in the first two bins to $\sim 0.2$ in the more massive bin. Assuming that the internal metallicity gradient of early-type galaxies varies from about $-0.4$~kpc$^{-1}$ at high mass \citep{LaBarbera.etal:2011} to negligible at lowest mass \citep[e.g.][]{Koleva.etal:2011}, the above variation of $A$ would imply a change of $\sim 0.16$ in [M/H], i.e. smaller than that of $\sim$~0.4 seen for the range of masses in Table~\ref{Tab_bins}. In addition, the typical internal scatter of stellar population properties is larger than systematics due to aperture effects. Since age gradients are generally small in (massive) ETGs, aperture effects are also likely not to drive the variation of galaxy age with stellar mass. This conclusion is further supported by our analysis of the waveband dependence of the Fundamental Plane relation of bright ETGs \citep{LaBarbera.etal:2010b}, as we found that total (i.e. within an infinite aperture) metallicity and age do actually increase with stellar mass. 
\begin{table*}[!ht] \centering \small \caption{Stellar population properties as a function of stellar mass.} \begin{tabular}{cccccc} \hline\hline $\log({\rm M}_{\star})$ interval & $9.2 - 9.7$ & $9.7 - 10.2$ & $10.2 - 10.7$ & $10.7 - 11.2$ & $11.2 - 11.7$\\ \hline Number of objects & 156 & 293 & 410 & 360 & 84 \\ L-weighted Age (Gyr) & 6.5~$\pm 3.6$ & 8.8~$\pm 3.3$ & 10.3~$\pm 2.3$ & 10.2~$\pm 1.6$ & 9.8~$\pm 1.2$ \\ M-weighted Age (Gyr) & 9.1~$\pm 2.8$ & 10.3~$\pm 2.4$ & 11.5~$\pm 1.8$ & 11.6~$\pm 1.4$ & 11.8~$\pm 1.2$ \\ 50\% of stars older than (Gyr) & 11.2 & 12.4 & 12.4 & 12.4 & 12.4 \\ 80\% of stars older than (Gyr) & 5.3 & 8.2 & 11.5 & 11.6 & 11.7 \\ L-weighted [M/H] & -0.3~$\pm 0.2$ & -0.2~$ \pm 0.2$ & 0.0~$ \pm 0.1$ & 0.1~$ \pm 0.1$ & 0.1~$ \pm 0.1$ \\ M-weighted [M/H] & -0.2~$\pm 0.2$ & -0.1~$ \pm 0.1$ & 0.1~$ \pm 0.1$ & 0.2~$ \pm 0.1$ & 0.2~$ \pm 0.1$ \\ $R_{\rm fiber} / R_{{\rm petro50}, r}$ & 0.51~$ \pm 0.19$ & 0.50~$ \pm 0.19$ & 0.42~$ \pm 0.15$ & 0.30~$ \pm 0.09$ & 0.21~$ \pm 0.05$ \\ \hline \vspace{0.01cm} \end{tabular} \label{Tab_bins} \end{table*} \subsection{Constraints of feedback processes} These results indicate that the processes regulating star formation in the low- and high-mass regimes leave different signatures on the SFH. Tab.~\ref{Tab_bins} shows that galaxies within the three more massive bins have similar ages and metallicities, with 80\% of their stellar mass formed at approximately the same redshift. The ages and metallicities are almost constant from the third to the fifth mass bins. On the other hand, the low mass bins show a gradual decrease of age and metallicity with decreasing stellar mass. Galaxies in all bins have roughly half of their stars formed before redshift z~$\sim 2-3$. For the most massive galaxies, the additional 30\% of the stars is formed within $\sim$1~Gyr. Hence, the star formation in these galaxies after a redshift z~$\sim$~1 can be considered ``residual''. 
On the other hand, galaxies with mass ranging from $\log({\rm M}_{\star}) = 9.2$ to $9.7$ have 80\% of their star formed only at z~$\sim 0.5$. Our results indicate that massive objects form faster and low mass sytems have a more extended star formation history than suggested by models \citep[e.g.][]{deLucia.etal:2006}. The feedback mechanism commonly invoked to explain the regulation of star formation in low mass systems is supernovae-driven winds \citep[e.g.][]{Larson:1974,Ferrara.Tolstoy:2000, Keres.etal:2009a}. The chemical enrichment of low-mas galaxies indicate that the process regulating the star formation in these galaxies should be strong enough to eject metals out of the galaxy. However, it is not able to quench the star formation completely. Alternatively, these systems may be continously being fuelled by infalling IGM gas at low metallicity. \begin{figure}[!ht] \centering \begin{tabular}{c} \resizebox{0.9\hsize}{!}{\includegraphics{met_age.ps}} \\ \end{tabular} \caption{\label{Fig_pop_mets} Mass-weighted metallicity as a function of stellar population age, in bins of stellar mass. The metal enrichment is shown until 80\% of the mass is assembled. For this reason, only two points at $11.2$ and $12.6$~Gyr are shown for the three most massive bins. This corresponds to the age of the two oldest SSPs used in the spectral fitting. The solid (dashed) lines indicate the median (mean) values. Error bars reflect mainly the discreteness of the SSP model grid; the metallicities available in the grid are spaced by $\sim$~0.4.} \end{figure} Figure \ref{Fig_pop_mets} shows the mass-weighted [M/H] as a function of stellar population age. 
The metal enrichment is shown down to the age when approximately 80\% of the stellar mass is formed\footnote{Notice that, for a given mass bin, the lowest Age in Fig.~3 does not correspond exactly to the value of $t_{ 80\%}$ in Tab.~1, because of the discreteness of SSP ages in the STARLIGHT input basis (see Sect.~\ref{Sec_starlight}).}. Galaxies in the three most massive bins form their stars very quickly, and 80\% of their stars are older than 11~Gyr. For this reason, only two points are shown for these three bins. The figure shows a consistent trend of chemical enrichment, whereby the younger populations are more metal rich, a result that supports the idea that the mass-metallicity trend cannot be explained by the infall of metal-poor gas, requiring instead a feedback mechanism that preferentially removes metals from low-mass galaxies. Supernova feedback can account for these ``metal-enhanced outflows''. Since metals are formed in SN events, SN-driven winds are metal-enhanced with respect to the star-forming gas. Most of the metals mixed with the hot gas are able to leave the galaxy, whereas only a small fraction of cool ISM gas is lost \citep[see e.g.][]{ Tremonti.etal:2004, MacLow.Ferrara:1999, Ferrara.Tolstoy:2000}. { This scenario is compatible with the [M/H] {\it vs.} stellar mass relation shown in Fig.~\ref{Fig_scale_relations}. SN-driven feedback is expected to be dominant in low-mass systems, whereas feedback processes in massive galaxies leave a sharp truncation in the SFH, giving rise to different slopes above and below the characteristic mass. The position of the knee in the photometric scaling relations suggests that the interplay of these feedback mechanisms leaves an imprint on galaxy sizes and surface brightness. Aperture effect is not expected to affect these results, as galaxies in a given mass range have similar $r_{\rm fiber}/r_{\rm petro}$ ratios. In addition, we verified that the trend is not driven by the age-metallicity degeneracy. 
We ran STARLIGHT on simulated spectra with no chemical enrichment and with S/N and star formation histories similar to those of the lowest-mass galaxies, and we do not find any significant correlation between [M/H]$_{\rm Mass}$ and age. The SFH of high-mass galaxies indicates that their stars were already formed before z~$\sim 2$. Several studies have shown that high-mass systems are assembled mainly via mergers \citep{deLucia.etal:2006, Cattaneo.etal:2011}. However, our results constrain the main epoch of growth via mergers to z~$\gtrsim$~2, unless the mergers proceed mainly via gas-poor progenitors. Alternatively, these systems may already be in place at high redshift, as suggested by the weaker number density evolution with redshift for the most massive galaxies between z~$\sim 1.5$ and $0$ \citep[see e.g.][]{Conselice.etal:2007,Ferreras.etal:2009,Banerji.etal:2010}. The cold mode of galaxy formation \citep{Keres.etal:2005, Guo.etal:2011, Dekel.etal:2009a, Dekel.etal:2009b} can account for the required high star formation efficiency at high redshift. However, it is still not clear how this process can result in the spheroidal morphologies we see in place at z~$\sim $~1. \acknowledgments \noindent We thank the referee for very constructive comments that led to improvements in our manuscript. MT acknowledges a FAPESP fellowship no. 2008/50198-3. IGR acknowledges a grant from the Spanish Secretaria General de Universidades, in the frame of its programme to promote mobility of Spanish researchers to foreign centers. A full acknowledgement regarding the use of the Sloan Digital Sky Survey can be found at this website: {\tt http://www.sdss.org/collaboration/credits.html}
\section{Introduction} \label{sec:The state of art} To a first approximation, hodograph transformations are transformations involving the interchange of dependent and independent variables \cite{clarkson,estevez05-1}. When the variables are switched, the space of independent variables is called the reciprocal space. In the particular case of two variables, we refer to it as the reciprocal plane. Physically, the independent variables play the role of positions in the reciprocal space, and their number is increased by turning certain fields, or dependent variables, into independent variables and vice versa \cite{conte}. For example, in the case of evolution equations in fluid dynamics, fields that represent the height of the wave or its velocity are usually turned into a new set of independent variables. Reciprocal transformations share this definition with hodograph transformations, but they impose further requirements. Reciprocal transformations require the use of conservative forms together with the fulfillment of certain properties, as we shall see in forthcoming paragraphs \cite{Est1,Est2,estevez05-1,rogersshadwick}. For example, some properties and requirements of reciprocal transformations that are not shared by hodograph transformations are: their construction relies on the existence of conserved quantities \cite{Est1,Est2,EstSar2,EstSar1,rogers4,rogers5,rogerscarrillo}; the invariance of certain integrable hierarchies under reciprocal transformations induces auto-B\"acklund transformations \cite{EstSar2,EstSar1,oevelrogers, rogerscarrillo, rogersnucci}; and these transformations map conservation laws to conservation laws and diagonalizable systems to diagonalizable systems, while acting nontrivially on metrics and on Hamiltonian structures. But finding a proper reciprocal transformation is usually a very complicated task.
Notwithstanding, in fluid mechanics a change of this type is usually reliable, specifically for systems of hydrodynamic type. Indeed, reciprocal transformations have a long history alongside the inverse scattering transform (IST) \cite{AbloClark,AbloSegur}; together, the two procedures gave rise to the discovery of other integrable nonlinear evolution equations similar to the KdV equation. For example, Zakharov and Shabat \cite{ZS} presented the now famous nonlinear Schr\"odinger (NLS) equation, which possesses an infinite number of integrals of motion and $n$--soliton solutions with purely elastic interaction. In 1928, the invariance of nonlinear gas dynamics, magnetogas dynamics and general hydrodynamic systems under reciprocal transformations was extensively studied \cite{Ferapontov1,RogersKingstonShadwick}. Stationary and moving boundary problems in soil mechanics and nonlinear heat conduction have likewise been subjects of much research \cite{FerapontovRogersSchief,Rogers11}. One of the biggest advantages of dealing with hodograph and reciprocal transformations is that many of the equations reported as integrable in the literature on differential equations, such as the mentioned hydrodynamic systems, which seem different from one another, happen to be related via reciprocal transformations. If this is the case, two apparently unrelated equations, even two complete hierarchies of partial differential equations (PDEs), that are linked via a reciprocal transformation are equivalent versions of a unique problem. In this way, the first advantage of hodograph and reciprocal transformations is that they give rise to a procedure for relating allegedly new equations to their equivalent, already known integrable sisters. The relation is achieved by finding simpler or linearized versions of a PDE so that it becomes more tractable.
For example, reciprocal transformations were proven to be a useful instrument to transform equations with peakon solutions into equations that are integrable in the Painlev\'e sense \cite{degasholm,h00}. Indeed, these transformations have also played an important role in soliton theory and in providing links between hierarchies of PDEs \cite{degasholm,h00}, as in relation to the aforementioned hydrodynamic-type systems. In this chapter we will depict straightforward reciprocal transformations that will help us identify different PDEs as different versions of a same problem, as well as slight modifications of reciprocal transformations, such as compositions of several transformations of this type and others. For example, the composition of a {Miura transformation} \cite{AbloKruskalSegur,Sakovich} and a reciprocal transformation gives rise to the so-called {Miura-reciprocal transformations}, which help us relate two different hierarchies of differential equations. A whole section of this chapter is devoted to illustrating Miura-reciprocal transformations. A second significant advantage of reciprocal transformations is their utility in the identification of integrable PDEs which a priori are not integrable according to algebraic tests (the Painlev\'e test, for example) \cite{Est2, estevez05-1} but are proven integrable in the Painlev\'e sense after a reciprocal transformation. Our conjecture is that if an equation is integrable, there must be a transformation that lets us turn the initial equation into a new one in which the Painlev\'e test is successful. We will comment on this in forthcoming paragraphs. A third advantage of reciprocal transformations is their role in the derivation of Lax pairs. Although it is not always possible to find a Lax pair for a given equation, a reciprocal transformation can turn it into a different one whose Lax pair is known.
Therefore, by undoing the reciprocal transformation in the Lax pair of the transformed equation, we can recover the Lax pair of the former. These three main points describing the importance of reciprocal transformations demonstrate the power of these transformations to classify differential equations and to sort out their integrability. \section{Fundamentals} We will deal with some well-known differential equations in the literature of shallow water wave equations. In particular, we will deal with generalizations of the Camassa--Holm equation and the Qiao equation \cite{CH,Est1,Est2,estevez05-1,qiao,qiao2007,QiaoLiu}. Such generalizations consist of a hierarchy, i.e., a set of differential equations that are related via a recursion operator. The recursive application of such an operator gives members of different orders of the hierarchy, i.e., a set of different differential equations. We will understand these differential equations as submanifolds of an appropriate higher-order jet bundle. Hence, let us introduce the necessary geometric tools for explaining PDEs as submanifolds of bundles. \subsection{PDEs and jet bundles} Let us consider a smooth $k$-dimensional manifold $N$ and the following projection $\pi:(x,u)\in \mathbb{R}^n\times N\equiv N_{\mathbb{R}^n}\mapsto x\in \mathbb{R}^n$ giving rise to a trivial bundle $(N_{\mathbb{R}^n}, \mathbb{R}^n,\pi)$. Here, we choose $\{x_1,\ldots,x_n\}$ as a global coordinate system on $\mathbb{R}^n$. We say that two sections $\sigma_1,\sigma_2:\mathbb{R}^n\rightarrow N_{\mathbb{R}^n}$ are {$p$--equivalent at a point $x\in \mathbb{R}^n$} or they have a {contact of order $p$ at $x$} if they have the same Taylor expansion of order $p$ at $x\in \mathbb{R}^n$.
Equivalently, \begin{equation} \sigma_1(x)=\sigma_2(x),\qquad \frac{\partial^{|J|} (\sigma_1)_i}{\partial x_1^{j_1}\ldots\partial x_n^{j_n}}(x)=\frac{\partial^{|J|} (\sigma_2)_i}{\partial x_1^{j_1}\ldots\partial x_n^{j_n}}(x), \end{equation} for every multi-index $J=(j_1,\ldots,j_n)$ such that $0<|J|\equiv j_1+\ldots+j_n\leq p$ and $i=1,\dots,k$. Being $p$-equivalent induces an equivalence relation in the space $\Gamma(\pi)$ of sections of the bundle $(N_{\mathbb{R}^n},\mathbb{R}^n,\pi)$. Observe that if two sections have a contact of order $p$ at a point $x$, then they have a contact at that point of the same type for any other coordinate systems on $\mathbb{R}^n$ and $N$, i.e., this equivalence relation is geometric. We write $j_{x}^p\sigma$ for the equivalence class of sections that have a {contact of order $p$} at $x\in \mathbb{R}^n$ with a section $\sigma$. Every such equivalence class is called a {$p$--jet}. We write ${\rm J}^{p}_x\pi$ for the space of all jets of order $p$ of sections at $x$. We will denote by ${\rm J}^p\pi$ the space of all jets of order $p$. Alternatively, we will write ${\rm J}^p(\mathbb{R}^n,\mathbb{R}^k)$ for the jet bundle of sections of the bundle $\pi:(x,u)\in\mathbb{R}^n\times\mathbb{R}^k\mapsto x\in\mathbb{R}^n$. Given a section $\sigma:\mathbb{R}^n\rightarrow N_{\mathbb{R}^n}$, we can define the functions \begin{equation} (u_j)_J(j^p_x\sigma)=\frac{\partial^{|J|} \sigma_j}{\partial x_1^{j_1}\ldots\partial x_n^{j_n}}(x),\quad \forall j, \quad |J|\leq p.\end{equation} For $|J|=0$, we define $u_J(x)\equiv u(x)$. Coordinate systems on $\mathbb{R}^n$ and $N$ along with the previous functions give rise to a local coordinate system on ${\rm J}^p\pi$.
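As a concrete illustration of these coordinates (a minimal sketch using sympy; the section and the evaluation point are chosen only for the example), the $2$-jet of the section $\sigma(x)=(x,\sin x)$ of the bundle $\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ at $x=0$ has jet coordinates $(x,u,u_x,u_{xx})=(0,0,1,0)$:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.sin(x)  # the section sigma: x -> (x, sin x)

# coordinates (x, u, u_x, u_xx) of the 2-jet of sigma at the point x = 0:
# the base coordinate followed by the Taylor data of order <= p
p = 2
jet = [sp.Integer(0)] + [sp.diff(u, x, r).subs(x, 0) for r in range(p + 1)]
# jet == [0, 0, 1, 0]
```

Two sections are $2$-equivalent at $x=0$ precisely when they produce the same list of jet coordinates.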
We will also hereafter denote the $n$-tuple and $k$-tuple, respectively, by $x=(x_1,\ldots,x_n),\,\, u=(u_1,\ldots,u_k)$, so that \begin{equation}\label{nose1} (u_j)_J=u_{x_{i_1}^{j_1}\dots x_{i_n}^{j_n}}=\frac{\partial^{|J|} u_j}{\partial x_{i_1}^{j_1}\ldots \partial x_{i_n}^{j_n}},\quad \forall j,\quad |J|\leq p. \end{equation} All such local coordinate systems give rise to a manifold structure on ${\rm J}^p\pi$. In this way, every point of ${\rm J}^p\pi$ can be written as \begin{equation}\label{loccord} \left(x_i,u_j,(u_j)_{x_i},(u_j)_{x_{i_1}^{j_1}x_{i_2}^{2-j_1}},(u_j)_{x_{i_1}^{j_1}x_{i_2}^{j_2}x_{i_3}^{3-j_1-j_2}},\dots,(u_j)_{x_{i_1}^{j_1}x_{i_2}^{j_2}\dots x_{i_n}^{p-\sum_{i=1}^{n-1}j_i}}\right), \end{equation} where the indices run over $i_1,\ldots,i_p=1,\dots,n$, $j=1,\dots,k,$ $j_1+\dots+j_n\leq p$. For small values of $p$, jet bundles have simple descriptions: ${\rm J}^{0}\pi=N_{\mathbb{R}^n}$ and ${\rm J}^1\pi\simeq \mathbb{R}^n\times {\rm T}N$. The projections $\pi_{p,l}:j^p_x\sigma\in {\rm J}^p\pi\mapsto j^l_x\sigma\in {\rm J}^l\pi$ with $l<p$ allow us to define the smooth bundles $({\rm J}^p\pi,{\rm J}^l\pi,\pi_{p,l})$. Conversely, for each section $\sigma: \mathbb{R}^n\rightarrow N_{\mathbb{R}^n}$, we have a natural embedding $j^p\sigma:\mathbb{R}^n\ni x\mapsto j^{p}_x\sigma \in {\rm J}^p\pi$. The differential equations appearing throughout the chapter will be closely connected with shallow water wave models. We will define these PDEs as submanifolds of a higher-order bundle $J^p(\mathbb{R}^{n+1}, \mathbb{R}^{2k})$. For the reciprocal transformations, we will have to make use of conservation laws.
By a {conservation law} we will understand an expression of the form \begin{equation} \frac{\partial \psi_1}{\partial x_{i_1}}+\frac{\partial \psi_2}{\partial x_{i_2}}=0, \end{equation} for two values of the indices $1 \leq i_1,i_2\leq n$ and two scalar functions $\psi_1,\psi_2 \in C^{\infty}({\rm J}^pN_{\mathbb{R}^n})$. The scalar fields representing water wave models will generally be denoted by $U$ or $u$, which depend on the independent variables $x_i$, and the functions $\psi_1,\psi_2$ will be functions of higher-order derivatives of $U$ or $u$. So, let us introduce the pairs $(u_j,x_i)$ or $(U_j,X_i)$ as local coordinates on the product manifold $N_{\mathbb{R}^n}$, and for the higher-order derivatives we consider the construction given in \eqref{loccord}. In cases of lower dimensionality, such as the two-dimensional case, we shall use upper/lower case $(X,T)/(x,t)$. In the three-dimensional case, the independent variables will be denoted by upper/lower case $(X,T,Y)/(x,t,y)$. \subsection{The Camassa--Holm hierarchy} Let us consider the well-known Camassa--Holm equation (CH equation) in $1+1$ dimensions as a submanifold of $J^3(\mathbb{R}^2,\mathbb{R})$ with local coordinates $(X,T,U)$ on $\mathbb{R}^2\times \mathbb{R}$. It reads: \begin{equation}\label{cheq} U_{T}+2\kappa U_{X}-U_{XXT}+3UU_{X}=2U_{X}U_{XX}+UU_{XXX}. \end{equation} We can interpret $U$ as the fluid velocity and $(X,T)$ as the spatial and temporal coordinates, respectively. Nonetheless, equation \eqref{cheq} in its present form is not integrable in the strictly defined Painlev\'e sense, but there exists a change of variables (action-angle variables) such that the evolution equation in the new variables is equivalent to a linear flow at constant speed. This change of variables is achieved by studying its associated spectral problem, and it is reminiscent of the fact that integrable classical Hamiltonian systems are equivalent to linear flows on tori \cite{const2, const3, const1}.
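For $\kappa=0$, equation \eqref{cheq} admits peaked traveling-wave solutions, discussed below. Away from the peak at $X=cT$, the profile $U=ce^{-|X-cT|}$ satisfies \eqref{cheq} exactly on each smooth branch, which can be checked symbolically (a minimal sketch using sympy; the helper name \texttt{ch\_residual} is ours):

```python
import sympy as sp

X, T, c = sp.symbols('X T c', positive=True)

def ch_residual(U):
    """Left minus right side of the CH equation with kappa = 0:
    U_T - U_XXT + 3 U U_X - (2 U_X U_XX + U U_XXX)."""
    return (sp.diff(U, T) - sp.diff(U, X, X, T) + 3*U*sp.diff(U, X)
            - 2*sp.diff(U, X)*sp.diff(U, X, 2) - U*sp.diff(U, X, 3))

# the two smooth branches of the peaked profile U = c*exp(-|X - c*T|)
branch_right = c*sp.exp(-(X - c*T))   # branch X > c*T
branch_left = c*sp.exp(X - c*T)       # branch X < c*T
assert sp.simplify(ch_residual(branch_right)) == 0
assert sp.simplify(ch_residual(branch_left)) == 0
```

At the peak itself the slope is discontinuous, so the profile solves the equation only in a weak sense.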
Indeed, \eqref{cheq} is a bi-Hamiltonian model for shallow water wave propagation introduced by Roberto Camassa and Darryl Holm \cite{CH}. For $\kappa$ positive, the solutions are smooth solitons, and for $\kappa=0$ it has peakon solutions (solitons with a sharp peak, hence with a discontinuity of the wave slope at the peak). A peaked solution is of the form: \begin{equation} U=ce^{-|X-cT|}+O(\kappa\log{\kappa}). \end{equation} In the following, we will consider the limiting case corresponding to $\kappa =0$. We can show the bi-Hamiltonian character of the equation by introducing the momentum $M=U-U_{XX}$, to write the two compatible Hamiltonian descriptions of the CH equation: \begin{equation} M_{T}=-{\mathcal {D}}_{1}{\frac {\delta {\mathcal {H}}_{1}}{\delta M}}=-{\mathcal {D}}_{2}{\frac {\delta {\mathcal {H}}_{2}}{\delta M}}, \end{equation} where \begin{align}& {\mathcal {D}}_{1}=M{\frac {\partial }{\partial X}}+{\frac {\partial }{\partial X}}M,&{\mathcal {H}}_{1}={\frac {1}{2}}\int U^{2}+\left(U_{X}\right)^{2}\;{\text{d}}X,\nonumber\\ & {\mathcal {D}}_{2}={\frac {\partial }{\partial X}}-{\frac {\partial ^{3}}{\partial X^{3}}}, & {\mathcal {H}}_{2}={\frac {1}{2}}\int U^{3}+U\left(U_{X}\right)^{2}\;{\text{d}}X. \end{align} The CH equation \eqref{cheq} is the first member of the well-known negative Camassa--Holm hierarchy for a field $U(X,T)$ \cite{HolmQiao}. From now on, we will refer to this hierarchy as CH(1+1). The CH(1+1) can be written in a compact form in terms of a recursion operator $R$, defined as follows: \begin{equation} U_T=R^{-n}U_X, \quad\quad\quad R=KJ^{-1}, \end{equation} where $K$ and $J$ are defined as \begin{eqnarray} &K=\partial_{XXX}-\partial_{X},\quad\quad\quad J=-\frac{1}{2}(\partial_XU+U\partial_X),\quad \partial_X=\frac{\partial}{\partial X}.\end{eqnarray} The factor $-\frac{1}{2}$ has been conveniently added for future calculations. We can include auxiliary fields $\Omega^{(i)}$ with $i=1,\dots,n$ when the inverse of an operator appears.
These auxiliary fields are defined as follows \begin{align} U_T&=J\Omega^{(1)},\nonumber\\ K\Omega^{(i)}&=J\Omega^{(i+1)},\quad i=1,\dots,n-1,\label{chh}\\ U_X&=K\Omega^{(n)}.\nonumber \end{align} It is also useful to introduce the change $U=P^2$, such that the final equations read: \begin{align} P_T&=-\frac{1}{2}\left(P\Omega^{(1)}\right)_X, \label{chcf}\\ \Omega^{(i)}_{XXX}-\Omega^{(i)}_{X}&=-P\left(P\Omega^{(i+1)}\right)_X,\quad i=1,\dots,n-1\label{cht1}\\ P^2&=\Omega^{(n)}_{XX}-\Omega^{(n)}.\label{cht2} \end{align} As we shall see in section 3, the conservative form of equation (\ref{chcf}) is the key for the study of reciprocal transformations. \subsection{The Qiao hierarchy} Qiao and Liu \cite{QiaoLiu} proposed an integrable equation defined as a submanifold of the bundle $J^3(\mathbb{R}^2,\mathbb{R})$. Notice that here the dependent variable is denoted by lower case $u$. In what follows, we shall use lower case for the dependent and independent variables related to the Qiao hierarchy, and upper case for those related to Camassa--Holm. The equation reads \begin{equation}\label{qiaoeq} u_t=\left(\frac{1}{2u^2}\right)_{xxx}-\left(\frac{1}{2u^2}\right)_{x}, \end{equation} \noindent which also possesses peaked solutions like the CH equation, and a bi-Hamiltonian structure given by the relation \begin{equation} u_t=j\frac{\delta h_1}{\delta u}=k\frac{\delta h_2}{\delta u}, \end{equation} where the operators $j$ and $k$ are \begin{equation} j=-\partial_xu\left(\partial_x\right)^{-1}u\partial_x,\qquad k=\partial_{xxx}-\partial_x,\quad \partial_x=\frac{\partial}{\partial x}, \end{equation} and the Hamiltonian functions $h_1$ and $h_2$ are \begin{equation} h_1=-\frac{1}{2}\int{\left[\frac{1}{4u^3}+\left(\frac{4}{5\,u^5}+\frac{4}{7\,u^7}\right)\,u_x^2\right]\,dx},\qquad h_2=-\int {\frac{1}{2u}\,dx}. \end{equation} We can define a recursion operator as \begin{equation} r=kj^{-1}.
\end{equation} This recursion operator was used by Qiao in \cite{qiao2007} to construct a $1+1$ integrable hierarchy, henceforth denoted as Qiao(1+1). This hierarchy reads \begin{equation} u_t=r^{-n}u_x. \end{equation} Equation \eqref{qiaoeq} is the second positive member of the Qiao hierarchy. The second negative member of the hierarchy was investigated by the same author in \cite{qiao}. If we introduce $n$ additional fields $v^{(i)}$ when we encounter the inverse of an operator, the expanded equations read: \begin{align} u_t&=jv^{(1)},\nonumber\\ kv^{(i)}&=jv^{(i+1)},\quad i=1,\dots,n-1,\label{qh}\\ u_x&=kv^{(n)}.\nonumber \end{align} If we now introduce the definition of the operators $k$ and $j$, we obtain the following equations: \begin{align} u_t&=-\left(u\omega^{(1)}\right)_x,\label{qcf}\\ v^{(i)}_{xxx}-v^{(i)}_x&=-\left(u\omega^{(i+1)}\right)_x,\quad i=1,\dots,n-1,\label{qt1}\\ u&=v^{(n)}_{xx}-v^{(n)},\label{qt2} \end{align} in which $n$ auxiliary fields $\omega^{(i)}$ have necessarily been included to operate with the inverse term present in $j$. These fields have been defined as: \begin{equation} \omega^{(i)}_x=uv^{(i)}_x, \quad i=1,\dots,n. \end{equation} The conservative form of (\ref{qcf}) will allow us to define the reciprocal transformation. \section{Reciprocal transformations as a way to identify and classify PDEs} The CH(1+1) presented in the previous section is here explicitly shown to be equivalent to $n$ copies of the Calogero-Bogoyavlenskii-Schiff (CBS) equation \cite{bog,cal,pick}. This CBS equation possesses the Painlev\'e property, and the singular manifold method can be applied to obtain its Lax pair and other relevant properties \cite{estevez05-1}. Alongside, in the previous section we have also presented another example, the Qiao(1+1) hierarchy, for which the Painlev\'e test is neither applicable nor constructive.
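The conservative character of the Qiao equation \eqref{qiaoeq} itself can be checked directly: its right-hand side is an exact $x$-derivative of a flux built from $u$ and its derivatives. A minimal symbolic sketch (using sympy; the names \texttt{rhs} and \texttt{flux} are ours):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

f = 1/(2*u**2)
rhs = sp.diff(f, x, 3) - sp.diff(f, x)   # right-hand side of the Qiao equation
flux = sp.diff(f, x, 2) - f              # candidate flux

# u_t = rhs is a conservation law: rhs = (flux)_x
assert sp.simplify(rhs - sp.diff(flux, x)) == 0

# the flux written explicitly in terms of u and its derivatives
expected = 3*sp.diff(u, x)**2/u**4 - sp.diff(u, x, 2)/u**3 - 1/(2*u**2)
assert sp.simplify(flux - expected) == 0
```

This is the structural feature exploited below when the reciprocal transformation for mCH(1+1) is constructed.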
Nonetheless, here we will prove that there exists a reciprocal transformation which allows us to transform this hierarchy into $n$ copies of the modified Calogero-Bogoyavlenskii-Schiff (mCBS) equation, which is known to have the Painlev\'e property \cite{EstSar1}. We shall denote the Qiao(1+1) likewise as mCH(1+1), because it can be considered as a modified version of the CH(1+1) hierarchy introduced in \cite{estevez05-1}. Then, this subsection shows how the pairs of hierarchies and equations, CH(1+1) and the CBS equation, and Qiao(1+1) (or mCH(1+1)) and the mCBS equation, are different versions of the same problem once a reciprocal transformation is performed upon them. Let us illustrate this in detail. \subsection{Hierarchies in $1+1$ dimensions} \subsubsection*{Reciprocal transformations for CH(1+1)} Given the conservative form of equation \eqref{chcf}, the following transformation arises naturally: \begin{equation}\label{rtch} dz_0=PdX-\frac{1}{2}P\Omega^{(1)}dT, \quad dz_1=dT. \end{equation} We shall now propose a reciprocal transformation \cite{EstSar1} by considering the former independent variable $X$ as a dependent field of the new pair of independent variables, $X=X(z_0,z_1)$, and therefore $dX=X_0\,dz_0+X_1\,dz_1$, where the subscripts zero and one refer to partial derivatives of the field $X$ with respect to $z_0$ and $z_1$, respectively. The inverse transformation takes the form: \begin{equation}\label{irtch} dX=\frac{dz_0}{P}+\frac{1}{2}\Omega^{(1)}dz_1,\quad dT=dz_1. \end{equation} By direct comparison with the total derivative of the field $X$, we obtain: \begin{equation} \partial_0X=\frac{\partial X}{\partial z_0}=\frac{1}{P},\quad \quad \partial_1X=\frac{\partial X}{\partial z_1}=\frac{\Omega^{(1)}}{2}.
\end{equation} The important point \cite{EstSar2,EstSar1} is that we can now extend the transformation (\ref{rtch}) by introducing $n-1$ additional independent variables $z_2,\dots,z_n$ which account for the transformation of the auxiliary fields $\Omega^{(i)}$ in such a way that \begin{equation} \quad\quad\quad \partial_iX=\frac{\partial X}{\partial z_i}=\frac{\Omega^{(i)}}{2},\quad \quad\quad i=2,\dots,n.\end{equation} Then, $X$ is a function $X=X(z_0,z_1,z_2,\dots,z_n)$ of $n+1$ variables. It requires some computation to transform the hierarchy (\ref{chcf})-(\ref{cht2}) into the equations that $X=X(z_0,z_1,z_2,\dots,z_n)$ must obey. For this matter, we use the symbolic calculus package Maple. Equation \eqref{chcf} is identically satisfied by the transformation, and \eqref{cht1}, \eqref{cht2} lead to the following set of PDEs: \begin{equation}\label{bcbs} \partial_0\left[-\frac{\partial_{i+1}X}{\partial_0X}\right]=\partial_i\left[\partial_0\left(\frac{\partial_{00}X}{\partial_0X}+\partial_0X\right)-\frac{1}{2}\left(\frac{\partial_{00}X}{\partial_0X}+\partial_0X\right)^2\right],\quad i=1,\dots,n-1, \end{equation} which constitutes $n-1$ copies of the same system, each of which is written in three variables $z_0,z_i,z_{i+1}$. Considering the conservative form of \eqref{bcbs}, we shall introduce the change: \begin{align} \partial_iM&=\frac{1}{4}\left[-\frac{\partial_{i+1}X}{\partial_0X}\right],\\ \partial_0M&=\frac{1}{4}\left[\partial_0\left(\frac{\partial_{00}X}{\partial_0X}+\partial_0X\right)-\frac{1}{2}\left(\frac{\partial_{00}X}{\partial_0X}+\partial_0X\right)^2\right],\label{cbs2} \end{align} with $M=M(z_0,z_i,z_{i+1})$ and $i=1,\dots,n-1$.
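The claim that \eqref{chcf} is identically satisfied can be verified directly: under \eqref{rtch} the derivatives transform as $\partial_X=P\,\partial_0$ and $\partial_T=\partial_1-\tfrac{1}{2}P\Omega^{(1)}\partial_0$, and the cross-derivative identity $\partial_1(1/P)=\partial_0(\Omega^{(1)}/2)$ coming from \eqref{irtch} makes the residual of \eqref{chcf} vanish. A minimal sketch using sympy (an assumed substitute for the Maple computation mentioned above):

```python
import sympy as sp

z0, z1 = sp.symbols('z0 z1')
P = sp.Function('P')(z0, z1)
Om = sp.Function('Omega1')(z0, z1)  # stands for Omega^(1)

# derivatives in reciprocal variables:
# d/dX = P d/dz0,  d/dT = d/dz1 - (1/2) P Om d/dz0
dX = lambda f: P*sp.diff(f, z0)
dT = lambda f: sp.diff(f, z1) - sp.Rational(1, 2)*P*Om*sp.diff(f, z0)

# residual of (chcf): P_T + (1/2) (P Om)_X
residual = dT(P) + sp.Rational(1, 2)*dX(P*Om)

# cross-derivative identity of the transformation: d/dz1 (1/P) = d/dz0 (Om/2)
P1 = sp.solve(sp.diff(1/P, z1) - sp.diff(Om/2, z0), sp.diff(P, z1))[0]
assert sp.simplify(residual.subs(sp.diff(P, z1), P1)) == 0
```

The same elimination pattern, applied to \eqref{cht1} and \eqref{cht2}, produces the system \eqref{bcbs}.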
The compatibility condition of $\partial_{000} X$ and $\partial_{i+1} X$ in this system gives rise to a set of equations written entirely in terms of $M$: \begin{equation}\label{cbs} \partial_{0,i+1}M+\partial_{000i}M+4\partial_{i}M\partial_{00}M+8\partial_{0}M\partial_{0i}M=0,\quad i=1,\dots,n-1, \end{equation} which are $n-1$ CBS equations \cite{bog, EstPrada, EstSar1}, each one in just three variables, for the field $M=M(z_0,\dots,z_i,z_{i+1},\dots,z_n)$. \subsubsection*{Reciprocal transformations for mCH(1+1)} Given the conservative form of \eqref{qcf}, the following reciprocal transformation \cite{EstSar1} naturally arises: \begin{equation}\label{rtq} dz_0=u\,dx-u\omega^{(1)}dt,\quad dz_1=dt. \end{equation} Here we consider the initial independent variable $x$ as a dependent field of the new independent variables such that $x=x(z_0,z_1)$, and therefore $dx=x_0\,dz_0+x_1\,dz_1$. The inverse transformation adopts the form: \begin{equation}\label{irtq} dx=\frac{dz_0}{u}+\omega^{(1)}dz_1,\quad dt=dz_1. \end{equation} By direct comparison of the inverse transform with the total derivative of $x$, we obtain: \begin{equation} \partial_0x=\frac{\partial x}{\partial z_0}=\frac{1}{u},\quad\quad \partial_1 x=\frac{\partial x}{\partial z_1}=\omega^{(1)}.
\end{equation} We shall prolong this transformation by introducing new variables $z_2,\dots,z_n$ such that $x=x(z_0, z_1,\dots,z_n)$ according to the following rule: \begin{equation} \partial_i x=\frac{\partial x}{\partial z_i}=\omega^{(i)}, \quad \quad \quad i=2,\dots,n.\end{equation} In this way, \eqref{qcf} is identically satisfied by the transformation, and \eqref{qt1}, \eqref{qt2} are transformed into $n-1$ copies of the following equation, written in terms of just three variables $z_0,z_i,z_{i+1}$: \begin{equation}\label{bmcbs} \partial_0\left[\frac{\partial_{i+1}x}{\partial_0x}+\frac{\partial_{00i}x}{\partial_0x}\right]=\partial_i\left[\frac{(\partial_0x)^2}{2}\right],\quad i=1,\dots,n-1. \end{equation} The conservative form of these equations allows us to write them as the system: \begin{align} \partial_0\,m&=\frac{(\partial_0x)^2}{2},\label{mcbs1}\\ \partial_i\,m&=\frac{\partial_{i+1}x}{\partial_0x}+\frac{\partial_{00i}x}{\partial_0x},\quad i=1,\dots,n-1,\label{mcbs2} \end{align} which can be considered as modified versions of the CBS equation with $m=m\left(z_0,\dots,z_i,z_{i+1},\dots,z_n\right)$. The modified CBS equation has been extensively studied from the point of view of Painlev\'e analysis in \cite{EstPrada}, where its Lax pair was derived; hence, a version of a Lax pair for Qiao(1+1) is available in \cite{Est2,EstSar1}. \subsection{Generalization to $2+1$ dimensions} \subsubsection*{Reciprocal transformations for CH(2+1)} From now on we will refer to the Camassa--Holm hierarchy in $2+1$ dimensions as CH(2+1), and we will write it in a compact form as: \begin{equation} U_T=R^{-n}U_Y, \label{3.16} \end{equation} where $R$ is the recursion operator defined as: \begin{equation} R=JK^{-1},\quad K=\partial_{XXX}-\partial_X,\quad J=-\frac{1}{2}\left(\partial_XU+U\partial_X\right),\quad \partial_X=\frac{\partial}{\partial X}.
\label{3.17} \end{equation} This hierarchy was introduced in \cite{estevez05-1} as a generalization of the Camassa--Holm hierarchy. The operators $K$ and $J$ are the same as for CH(1+1). From this point of view, the spectral problem is the same \cite{calogero} and the $Y$-variable is just another ``time" variable \cite{h00,ivanov}. The $n$-th member of this hierarchy can also be written as a set of PDEs by introducing $n$ dependent fields $\Omega^{[i]}, (i=1\dots n)$ in the following way \begin{eqnarray} &&U_Y=J\Omega^{[1]},\nonumber\\&&J\Omega^{[i+1]}=K\Omega^{[i]},\quad i=1\dots n-1,\nonumber\\ \quad &&U_T=K\Omega^{[n]}, \label{3.18} \end{eqnarray} and by introducing two new fields, $P$ and $\Delta$, related to $U$ as: \begin{equation} U=P^2,\quad\quad P_T=\Delta_X, \label{3.19} \end{equation} we can write the hierarchy in the form of the following set of equations \begin{eqnarray} &&P_Y=-\frac{1}{2}\left(P\Omega^{[1]}\right)_X,\nonumber\\ &&\Omega^{[i]}_{XXX}-\Omega^{[i]}_X=-P\left(P\Omega^{[i+1]}\right)_X,\quad i=1\dots n-1, \nonumber \\&& P_T=\frac{\Omega^{[n]}_{XXX}-\Omega^{[n]}_X}{2P}=\Delta_X.\label{3.20} \end{eqnarray} The conservative form of the first and third equations allows us to define the following exact derivative \begin{equation} dz_0= P\,dX-\frac{1}{2}P\Omega^{[1]}\,dY+\Delta\,dT. \label{3.21} \end{equation} A reciprocal transformation \cite{h00,rogers4,rogers5} can be introduced by considering the former independent variable $X$ as a field depending on $z_0$, $z_1=Y$ and $z_{n+1}=T$. From (\ref{3.21}) we have \begin{eqnarray} &&dX= \frac{1}{P}\,dz_0+\frac{\Omega^{[1]}}{2}\,dz_1-\frac{\Delta}{P}\,dz_{n+1},\nonumber\\&&Y=z_1,\quad\quad T=z_{n+1},\label{3.22} \end{eqnarray} and therefore \begin{eqnarray} &&\partial_0X=\frac{1}{P},\nonumber\\&&\partial_1X=\frac{\Omega^{[1]}}{2},\nonumber\label{8}\\&&\partial_{n+1}X=-\frac{\Delta}{P},\label{3.23} \end{eqnarray} where $\partial_i X=\frac{\partial X}{\partial z_i}$.
We can now extend the transformation by introducing a new independent variable $z_i$ for each field $\Omega^{[i]}$ by generalizing (\ref{3.23}) as \begin{equation} \partial_i X=\frac{\Omega^{[i]}}{2},\quad i=2\dots n.\label{3.24}\end{equation} Therefore, the new field $ X=X(z_0,z_1,\dots z_n,z_{n+1})$ depends on $n+2$ independent variables, where each of the former dependent fields $\Omega^{[i]},\,(i=1\dots n)$ allows us to define a new independent variable $z_i$ through definition (\ref{3.24}). It requires some calculation (see \cite{estevez05-1} for details), but it can be proved that the reciprocal transformation (\ref{3.22})-(\ref{3.24}) transforms (\ref{3.20}) into the following set of $n$ PDEs: \begin{equation}\partial_0\left[-\frac{\partial_{i+1}X}{\partial_0X}\right]=\partial_i\left[\partial_0\left(\frac{\partial_{00}X}{\partial_0X}+\partial_0X\right)-\frac{1}{2}\left(\frac{\partial_{00}X}{\partial_0X}+\partial_0X\right)^2\right],\quad i=1\dots n. \label{3.25}\end{equation} Note that each equation depends on only three variables $z_0, z_i, z_{i+1}$. This result generalizes the one found in \cite{h00} for the first component of the hierarchy.
The conservative form of (\ref{3.25}) allows us to define a field $M(z_0,z_1,\dots z_{n+1})$ such that \begin{eqnarray}\partial_iM&=&\frac{1}{4}\left[-\frac{\partial_{i+1}X}{\partial_0X}\right]=-\frac{P\Omega^{[i+1]}}{8},\quad\quad i=1\dots n-1,\nonumber\\ \partial_nM&=&\frac{1}{4}\left[-\frac{\partial_{n+1}X}{\partial_0X}\right]= \frac{\Delta}{4},\label{3.26}\\ \partial_0M&=&\frac{1}{4}\left[\partial_0\left(\frac{\partial_{00}X}{\partial_0X}+\partial_0X\right)-\frac{1}{2}\left(\frac{\partial_{00}X}{\partial_0X}+\partial_0X\right)^2\right]=\frac{1}{4P^2}\left(\frac{3P_X^2}{2P^2}-\frac{P_{XX}}{P}-\frac{1}{2}\right)\nonumber.\end{eqnarray} It is easy to prove that $M$ satisfies the following CBS equations \cite{calogero} on $J^4(\mathbb{R}^{n+2},\mathbb{R})$, \begin{equation}\partial_{0,i+1}M+\partial_{000i}M+4\partial_{i}M\partial_{00}M+8\partial_{0}M\partial_{0i}M=0,\quad i=1\dots n. \label{3.27}\end{equation} Hence, the CH(2+1) is equivalent to $n$ copies of a CBS equation \cite{bog, cal, EstPrada} written in three different independent variables $z_0,z_i,z_{i+1}$. \subsubsection*{Reciprocal transformations for mCH(2+1)} Another example that illustrates the role of reciprocal transformations in the identification of partial differential equations was introduced by one of us in \cite{Est2}, where the following $2+1$ hierarchy, Qiao(2+1) or mCH(2+1), appears: \begin{equation} u_t=r^{-n}u_y, \label{3.28} \end{equation} where $r$ is the recursion operator, defined as: \begin{equation} r=jk^{-1},\quad k=\partial_{xxx}-\partial_x,\quad j=-\partial_x\,u\,(\partial_x)^{-1}\,u\,\partial_x ,\quad \partial_x=\frac{\partial}{\partial x}. \label{3.29} \end{equation} This hierarchy generalizes the one introduced by Qiao in \cite{qiao2007}. We shall briefly summarize the results of \cite{Est2} when a procedure similar to the one described above for CH(2+1) is applied to mCH(2+1).
If we introduce $2n$ auxiliary fields $v^{[i]}$, $\omega^{[i]}$ defined through \begin{eqnarray} && u_y=jv^{[1]},\nonumber\\ &&jv^{[i+1]}=kv^{[i]},\quad \omega_x^{[i]}=uv_x^{[i]},\quad i=1\dots n-1,\nonumber\\ \quad && u_t=kv^{[n]}, \label{3.30} \end{eqnarray} the hierarchy can be expanded to $J^3(\mathbb{R}^3,\mathbb{R}^{2n+1})$ in the following form: \begin{eqnarray} &&u_y=-\left(u\omega^{[1]}\right)_x,\nonumber\\&& v^{[i]}_{xxx}-v^{[i]}_x=-\left(u\omega^{[i+1]}\right)_x,\quad i=1\dots n-1,\nonumber \label{18}\\&&u_t=\left(v^{[n]}_{xx}-v^{[n]}\right)_x,\label{3.31} \end{eqnarray} which allows us to define the exact derivative \begin{equation} dz_0= u\,dx-u\omega^{[1]}\,dy+\left(v^{[n]}_{xx}-v^{[n]}\right)\,dt \label{3.32} \end{equation} and $z_1=y, z_{n+1}=t$. We can define a reciprocal transformation such that the former independent variable $x$ is a new field $x=x(z_0,z_1,\dots,z_{n+1})$ depending on $n+2$ variables in the form \begin{eqnarray} &&dx=\frac{1}{u}dz_0+\omega^{[1]}dz_1-\frac{\left(v^{[n]}_{xx}-v^{[n]}\right)}{u}dz_{n+1},\nonumber\\&&y=z_1,\quad\quad t=z_{n+1},\label{3.33} \end{eqnarray} which implies \begin{eqnarray} &&\partial_0x=\frac{\partial x}{\partial z_0}=\frac{1}{u},\nonumber\\&&\partial_ix=\frac{\partial x}{\partial z_i}=\omega^{[i]},\quad i=1\dots n,\label{20}\nonumber\\&&\partial_{n+1}x=\frac{\partial x}{\partial z_{n+1}}=-\frac{\left(v^{[n]}_{xx}-v^{[n]}\right)}{u}.\label{3.34} \end{eqnarray} The transformation of the equations (\ref{3.31}) yields the system of equations \begin{equation}\partial_0\left[\frac{\partial_{i+1}x}{\partial_0x}+\frac{\partial_{00i}x}{\partial_0x}\right]=\partial_i\left[\frac{x_0^2}{2}\right],\quad i=1\dots n. \label{3.35}\end{equation} Note that each equation depends on only three variables: $z_0, z_i, z_{i+1}$.
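As in the $1+1$ case, the first equation of \eqref{3.31} is identically satisfied by the transformation: the derivatives transform as $\partial_x=u\,\partial_0$ and $\partial_y=\partial_1-u\omega^{[1]}\partial_0$, and the cross-derivative identity $\partial_1(1/u)=\partial_0(\omega^{[1]})$ following from \eqref{3.33} makes the residual vanish. A minimal symbolic sketch using sympy (an assumed substitute for the computations of \cite{Est2}):

```python
import sympy as sp

z0, z1 = sp.symbols('z0 z1')
u = sp.Function('u')(z0, z1)
w = sp.Function('omega1')(z0, z1)  # stands for omega^[1]

# derivatives in reciprocal variables:
# d/dx = u d/dz0,  d/dy = d/dz1 - u w d/dz0
dx = lambda f: u*sp.diff(f, z0)
dy = lambda f: sp.diff(f, z1) - u*w*sp.diff(f, z0)

# residual of the first equation of (3.31): u_y + (u w)_x
residual = dy(u) + dx(u*w)

# cross-derivative identity of the transformation: d/dz1 (1/u) = d/dz0 (w)
u1 = sp.solve(sp.diff(1/u, z1) - sp.diff(w, z0), sp.diff(u, z1))[0]
assert sp.simplify(residual.subs(sp.diff(u, z1), u1)) == 0
```

The remaining equations of \eqref{3.31} then yield the system \eqref{3.35}.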
The conservative form of (\ref{3.35}) allows us to define a field $m=m(z_0,z_1,\dots z_{n+1})$ such that \begin{eqnarray} &&\partial _0 m=\frac{x_0^2}{2}=\frac{1}{2u^2},\nonumber\\&& \partial_i m=\frac{\partial_{i+1}x}{\partial_0x}+\frac{\partial_{00i}x}{\partial_0x}=v^{[i]},\quad i=1\dots n,\label{3.36}\end{eqnarray} defined on $J^3(\mathbb{R}^{n+2},\mathbb{R}^2)$. Equation (\ref{3.35}) has been extensively studied from the point of view of Painlev\'e analysis \cite{EstPrada} and it can be considered as the modified version of the CBS equation (\ref{3.27}). Hence, we have shown again that a reciprocal transformation proves the equivalence between two hierarchies/equations (mCH(2+1) and mCBS) that, although seemingly unrelated at first, are merely two different descriptions of the same underlying problem. \subsection{Reciprocal transformation for a fourth-order nonlinear equation} In \cite{EstGand,EstLeble}, we introduced a fourth-order equation in $2+1$ dimensions which has the form \begin{equation} \left(H_{x_1x_1x_2} + 3H_{x_2}H_{x_1}-\frac{k+1}{4}\frac {(H_{x_1x_2})^2}{H_{x_2}}\right)_{x_1}=H_{x_2x_3} \label{3.37}. \end{equation} The two particular cases $k=-1$ \cite{EstLeble} and $k=2$ \cite{EstGand,EstPra2} are integrable and it was possible to derive their Lax pair using the singular manifold method \cite{weiss}. Based on the results in \cite{EstGand,EstLeble}, we proposed a spectral problem of the form: \begin{align} &\phi_{x_1x_1x_1}-\phi_{x_3} + 3H_{x_1}\phi_{x_1}-\frac{k-5}{2}\,H_{x_1x_1}\phi=0,\nonumber\\ &\phi_{x_1x_2}+H_{x_2}\phi +\frac{k-5}{6}\,\frac{H_{x_1x_2}}{H_{x_2}}\,\phi_{x_2}=0\label{3.38}.
\end{align} We can rewrite \eqref{3.37} as the system: \begin{align} &H_{x_1x_1x_2}+3H_{x_2}H_{x_1}-\frac{k+1}{4}\frac{H^2_{x_1x_2}}{H_{x_2}}=\Omega, \nonumber \\ &\Omega_{x_1}=H_{x_2x_3}.\label{3.39} \end{align} \subsubsection{Reciprocal transformation I} We can perform a reciprocal transformation of equations \eqref{3.39} by proposing: \begin{align} &dx_1=\alpha(x,t,T)[dx-\beta(x,t,T)dt-\epsilon(x,t,T)dT],\nonumber\\ &x_2=t,\quad x_3=T.\label{3.40} \end{align} Under this reciprocal transformation the derivatives transform as \begin{eqnarray} &&\frac{\partial}{\partial x_1}=\frac{1}{\alpha}\frac{\partial}{\partial x},\nonumber\\ &&\frac{\partial}{\partial x_2}=\frac{\partial}{\partial t}+\beta\frac{\partial}{\partial x},\nonumber\\ &&\frac{\partial}{\partial x_3}=\frac{\partial}{\partial T}+\epsilon\frac{\partial}{\partial x}.\label{3.41} \end{eqnarray} The cross derivatives of \eqref{3.40} give rise to the equations: \begin{equation}\label{3.42} \alpha_t + (\alpha\beta)_x=0,\quad \alpha_T+ (\alpha\epsilon)_x=0,\quad \beta_T-\epsilon_t+ \epsilon \beta_x-\beta\epsilon_x=0. \end{equation} If we select the function $\alpha$ in the transformation such that \begin{equation}\label{3.43} H_{x_2}=\alpha(x,t,T)^k, \end{equation} this reciprocal transformation, when applied to the system (\ref{3.39}), yields \begin{eqnarray}\label{3.44} &&H_{x_1}= \frac{1}{3}\left(\frac{\Omega}{\alpha^k}-k\frac{\alpha_{xx}}{\alpha^3}+(2k-1)\left(\frac{\alpha_x}{\alpha^2}\right)^2\right),\\ && \label{3.45} \Omega_x=-k\alpha^{(k+1)}\epsilon_x. \end{eqnarray} Furthermore, the compatibility condition $H_{x_2x_1}=H_{x_1x_2}$ between \eqref{3.43} and \eqref{3.44} yields \begin{equation}\label{3.46} \Omega_t=-\beta\,\Omega_x-k\,\Omega\beta_x + \alpha^{k-2}\left[-k\beta_{xxx}+(k-2)\beta_{xx}\,\frac{\alpha_x}{\alpha}+3k\alpha^k\alpha_x\right]. \end{equation} Then, the equations \eqref{3.42}, \eqref{3.45} and \eqref{3.46} constitute the transformed equations for the original system \eqref{3.39}.
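The compatibility conditions (\ref{3.42}) are nothing but the closedness of the one-form in (\ref{3.40}). A quick sympy verification (our own, not part of the original text) shows that the cross-derivative condition in the $(t,T)$ pair reduces, modulo the first two equations, exactly to the third equation of (\ref{3.42}):

```python
import sympy as sp

x, t, T = sp.symbols('x t T')
a = sp.Function('alpha')(x, t, T)
b = sp.Function('beta')(x, t, T)
e = sp.Function('epsilon')(x, t, T)

# closedness of dx1 = a dx - a*b dt - a*e dT, cf. (3.40)
c1 = sp.diff(a, t) + sp.diff(a*b, x)       # (dx, dt) cross derivative
c2 = sp.diff(a, T) + sp.diff(a*e, x)       # (dx, dT) cross derivative
c3 = sp.diff(-a*b, T) - sp.diff(-a*e, t)   # (dt, dT) cross derivative

# third equation of (3.42)
third = sp.diff(b, T) - sp.diff(e, t) + e*sp.diff(b, x) - b*sp.diff(e, x)

# identity: c3 = -a*third - b*c2 + e*c1, so c3 = 0 follows from
# c1 = c2 = 0 together with the third equation of (3.42)
residual = sp.expand(c3 + a*third + b*c2 - e*c1)
assert residual == 0
```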
Still, we can find a more suitable form for the transformed equations if we introduce the following definitions: \begin{equation}\label{3.47} A_1=\frac{k+1}{3},\quad A_2=\frac{2-k}{3},\quad M=\frac{1}{\alpha^3}. \end{equation} In terms of these parameters, the integrability condition $(k+1)(k-2)=0$ is translated into \begin{equation}\label{3.48} A_1\cdot A_2=0,\quad A_1+A_2=1. \end{equation} Using the definitions above, we can finally present the reciprocally transformed system as: \begin{eqnarray}\label{3.49} &&A_1\left[\Omega_t+\beta\,\Omega_x+2\Omega\beta_x+2\beta_{xxx}+2\frac{M_x}{M^2} \right]+\nonumber\\&&\quad\quad\quad\quad + A_2\left[\Omega_t+\beta\Omega_x-\Omega\beta_x-M\beta_{xxx}-M_x\beta_{xx}-M_x\right]=0,\nonumber\\ &&A_1\left(\Omega_x+2\,\frac{\epsilon_x}{M}\right)+A_2(\Omega_x-\epsilon_x)=0,\nonumber\\ &&M_t=3M\beta_x-\beta M_x,\nonumber\\&& M_T=3M\epsilon_x-\epsilon M_x, \nonumber\\ && \beta_T-\epsilon_t+ \epsilon \beta_x-\beta \epsilon_x=0. \end{eqnarray} Furthermore, the reciprocal transformation can also be applied to the spectral problem \eqref{3.38}. After some direct calculations, we obtain \begin{equation} \begin{aligned} &A_1\left[\psi_{xt}+\beta\psi_{xx}-\left(\beta_{xx}-\frac{1}{M}\right)\psi\right]\\ &\qquad\quad+A_2\left[\psi_{xt}+\beta\psi_{xx}+2\beta_x\psi_x+\left(\beta_{xx}+1\right)\psi\right]M^{\frac{2}{3}}=0,\\[.5cm] &A_1\left[\psi_{T}-M\psi_{xxx}-\left(M\Omega-\epsilon\right)\psi_x\right]\\ &\qquad\quad+A_2\left[\psi_{T}-M\psi_{xxx}-2M_x\psi_{xx}-\left(M_{xx}+\Omega-\epsilon\right)\psi_x\right]M^{\frac{2}{3}}=0,\label{3.50} \end{aligned} \end{equation} where we have set \begin{equation}\phi(x_1,x_2,x_3)=M^{\frac{1-2k}{9}}\psi(x,t,T) \end{equation} for convenience. \subsubsection*{Reduction independent of $T$} Let us show a reduction of the set \eqref{3.49} by setting all the fields independent of $T$.
This means that $$\epsilon= 0,\quad \Omega_x=0\Rightarrow \Omega=V(t), $$ and the system \eqref{3.49} reduces to \begin{align} &A_1\left[V_t+2\left(V\beta+\beta_{xx}-\frac{1}{M}\right)_x \right]+ A_2\left[V_t-\left(V\beta+M\beta_{xx}+M\right)_x\right]=0,\label{3.52}\\ &M_t=3M\beta_x-\beta M_x.\label{3.53} \end{align} \noindent $\bullet$ {\bf Degasperis--Procesi equation} \medskip For the case $A_1=1$ and $A_2=0$, we can integrate \eqref{3.52} as: \begin{equation} \beta_{xx}+V\beta+\frac{V_t}{2}x=\frac{1}{M}+q_0,\end{equation} which combined with \eqref{3.53} yields \begin{equation}\left(\beta_{xx} + V\beta\right)_t+\beta \beta_{xxx}+3\beta_x\beta_{xx}+4V\beta \beta_x-3q_0\beta_x+\frac{1}{2}V_t(\beta_x+3\beta x)+\frac{x}{2}V_{tt}=0. \end{equation} For $q_0=0$ and $V=-1$, this system is the well-known Degasperis--Procesi equation \cite{degasholm}. \medskip \noindent $\bullet$ {\bf Vakhnenko equation} \medskip For the case $A_1=0,A_2= 1$, we can integrate \eqref{3.52} as: \begin{align} &V_tx-V\beta-M\beta_{xx}-M-q_0=0, \end{align} which combined with \eqref{3.53} provides, when $V=0$, the derivative of the Vakhnenko equation \cite{vakh}, \begin{equation}\left[\left(\beta_t+\beta\beta_x\right)_x+3\beta\right]_x=0.\end{equation} \subsubsection{Reciprocal transformation II} A different reciprocal transformation can be constructed using the changes \begin{align}\label{3.58} &dx_2=\eta(y,z,T)\left(dz-u(y,z,T)dy-\omega(y,z,T)dT \right),\nonumber\\ &x_1=y,\quad x_3=T. \end{align} The compatibility conditions for this transformation are \begin{align} &\eta_y+(u\eta)_z=0,\nonumber\\ &\eta_T+(\eta\omega)_z=0,\nonumber\\ &u_T-\omega_y-u\omega_z+\omega u_z=0.\label{3.59} \end{align} We select the transformation by setting the field $H$ as the new independent variable $z$: \begin{equation}\label{3.60} z=H(x_1,x_2,x_3)\rightarrow dz=H_{x_1}dx_1+H_{x_2}dx_2+H_{x_3}dx_3.
\end{equation} By direct comparison of \eqref{3.58} and \eqref{3.60}, we obtain \begin{align} &H_{x_2}(x_1,x_2,x_3)=\frac{1}{\eta(y=x_1,z=H,T=x_3)},\nonumber\\ &H_{x_1}(x_1,x_2,x_3)=u(y=x_1,z=H,T=x_3),\nonumber\\ &H_{x_3}(x_1,x_2,x_3)=\omega(y=x_1,z=H,T=x_3), \end{align} and the transformations of the derivatives are \begin{eqnarray} && \frac{\partial}{\partial x_1}=\frac{\partial}{\partial y}+u\frac{\partial}{\partial z},\nonumber\\ && \frac{\partial}{\partial x_2}=\frac{1}{\eta }\frac{\partial}{\partial z},\nonumber\\ && \frac{\partial}{\partial x_3}=\frac{\partial}{\partial T}+\omega\frac{\partial}{\partial z}.\label{3.62} \end{eqnarray} With these definitions, the transformation of the system \eqref{3.39} reads: \begin{align}\label{3.63} &G=(u_y+uu_z)_z+3u-\frac{k+1}{4}u_z^2,\nonumber\\ &G_y=(\omega-u G)_z, \end{align} where $G(z,y,T)$ has been defined as $G=\eta\, \Omega$. \subsubsection*{Reduction independent of $T$} The reduction independent of $T$ can be obtained by setting $\omega=0$. In this case, the system \eqref{3.63} contains the case $G=0$, \begin{eqnarray} (u_y+uu_z)_z+3u-\frac{k+1}{4}u_z^2=0. \end{eqnarray} When $k=-1$, it is the Vakhnenko equation. For the other integrable case, $k=2$, it yields a modified Vakhnenko equation if $A_2=0$. \section{Reciprocal transformations to derive Lax pairs} Reciprocal transformations have served us as a way to derive Lax pairs of differential equations and hierarchies of such differential equations. As mentioned before, a differential equation in its initial form may not be Painlev\'e integrable, but we may be able to prove its integrability by transforming it, via a reciprocal transformation, into another differential equation that is. In the same fashion, an initial differential equation may not have an associated Lax pair and the singular manifold method may not be applicable.
Through a reciprocal transformation we can again transform such an equation into another one on which the singular manifold method can be applied. We depict examples in the following lines. \subsection{Lax pair for the CH(2+1) hierarchy} In section 2, we have proved that reciprocal transformations can be used to establish the equivalence between the CH(2+1) hierarchy (\ref{3.16}) and $n$ copies of the CBS equation (\ref{3.27}). This CBS equation has the Painlev\'e property \cite{pick} and the singular manifold method can be successfully used to derive the following Lax pair \cite{EstPrada}, \begin{align} &\partial_{00}\psi=\left(-2\partial_0 M-\frac{\lambda}{4}\right)\,\psi,\label{4.1}\\ &0=E_i=\partial_{i+1}\psi-\lambda\partial_i\psi+4\partial_iM\partial_0\psi-2\partial_{0i}M\,\psi\label{4.2}.\end{align} Furthermore, the compatibility condition between these two equations implies that the spectral problem is nonisospectral because $\lambda$ satisfies: \begin{equation} \partial_0\lambda=0, \quad\quad \partial_{i+1}\lambda-\lambda\partial_i\lambda=0. \label{4.3} \end{equation} Notice that the first equation in the Lax pair is independent of the index $i$. Nevertheless, the second equation can be considered as a recursion relation for the derivatives of $\psi$ with respect to each $z_i$.
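The nonisospectral condition (\ref{4.3}) is easy to illustrate with an explicit example. For $n=1$ it reads $\partial_2\lambda=\lambda\,\partial_1\lambda$, and any implicit solution $\lambda=f(z_1+\lambda z_2)$ works; the sympy lines below (our own illustration) verify the particular choice $\lambda=z_1/(C-z_2)$, which corresponds to $f(s)=s/C$.

```python
import sympy as sp

z1, z2, C = sp.symbols('z1 z2 C')

# explicit solution of the n = 1 nonisospectral condition (4.3):
# lambda = z1/(C - z2), i.e. f(s) = s/C with s = z1 + lambda*z2
lam = z1/(C - z2)

residual = sp.simplify(sp.diff(lam, z2) - lam*sp.diff(lam, z1))
assert residual == 0   # partial_2 lambda = lambda * partial_1 lambda
```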
Now, to come back to the original fields $U$ and $\Omega^{[i]}$ as well as to the original variables $X,Y,T$, all we need is to perform the change \begin{equation} \psi(z_0,z_1,\dots,z_n,z_{n+1})=\sqrt{P}\,\phi(X,Y,T) \label{4.4} \end{equation} where $P$ is defined in (\ref{3.19}).
Considering the reciprocal transformation \eqref{3.22}, we have the following induced transformations \begin{alignat}{3} &\partial_0\psi&&=\sqrt{P}\left(\frac{\phi_X}{P}+\frac{P_X}{2P^2}\phi\right),\nonumber\\ &\partial_{00}\psi&&=\sqrt{P}\left(\frac{\phi_{XX}}{P^2}+\left[\frac{P_{XX}}{2P^3}-\frac{3}{4}\frac{P_X^2}{P^4}\right]\phi\right),\nonumber\\ &\partial_1\psi&&=\sqrt{P}\left(\phi_Y+ \frac{\Omega^{[1]}\phi_X}{2}+\left[\frac{P_Y}{2P}+\frac{P_X\Omega^{[1]}}{4P}\right]\phi\right)\nonumber\\ &\quad\quad&&=\sqrt{P}\left(\phi_Y+ \frac{\Omega^{[1]}\phi_X}{2}-\frac{\Omega^{[1]}_X\phi}{4}\right),\nonumber\\ &\partial_{n+1}\psi&&=\sqrt{P}\left(\phi_T-\frac{\Delta\phi_X}{P}+\left[\frac{P_T}{2P}-\frac{P_X\Delta}{2P^2}\right]\phi\right)\nonumber\\ &\quad\quad&&=\sqrt{P}\left(\phi_T-\frac{\Delta\phi_X}{P}+\left[\frac{\Delta_X}{2P}-\frac{P_X\Delta}{2P^2}\right]\phi\right). \label{4.5} \end{alignat} With these changes, (\ref{4.1}) becomes: \begin{equation} \phi_{XX}+\left(\frac{\lambda P^2}{4}-\frac{1}{4}\right)\phi=0,\nonumber \end{equation} where equation (\ref{3.26}) has been used. Finally, the combination with (\ref {3.19}) yields \begin{equation} \phi_{XX}=\frac{1}{4}\left(1-\lambda U\right)\phi,\label{4.6}\end{equation} as the spatial part of the Lax pair for the CH(2+1) hierarchy. 
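The reduction of (\ref{4.1}) to (\ref{4.6}) can be reproduced symbolically. The sympy sketch below (our own check) assumes, as in (\ref{3.22}) and (\ref{3.19}), that $\partial_0=(1/P)\,\partial_X$ with $U=P^2$, takes $\partial_0M$ from (\ref{3.26}), and confirms that $\psi=\sqrt{P}\,\phi$ turns (\ref{4.1}) into $\phi_{XX}=\frac{1}{4}\left(1-\lambda U\right)\phi$.

```python
import sympy as sp

X = sp.Symbol('X')
lam = sp.Symbol('lambda')
P = sp.Function('P')(X)
phi = sp.Function('phi')(X)

d0 = lambda f: sp.diff(f, X)/P        # partial_0 = (1/P) partial_X, cf. (3.22)
psi = sp.sqrt(P)*phi                  # the change of function (4.4)

# partial_0 M as given in (3.26)
d0M = (sp.Rational(3, 2)*sp.diff(P, X)**2/P**2
       - sp.diff(P, X, 2)/P - sp.Rational(1, 2))/(4*P**2)

# spatial Lax equation (4.1): partial_00 psi + (2 partial_0 M + lambda/4) psi = 0
lax = d0(d0(psi)) + (2*d0M + lam/4)*psi

# expected result (4.6) with U = P**2: phi_XX - (1 - lambda*U)*phi/4 = 0
target = sp.diff(phi, X, 2) - (1 - lam*P**2)*phi/4

residual = sp.simplify(lax*P**2/sp.sqrt(P) - target)
assert residual == 0
```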
The temporal part can be obtained from (\ref{4.2}) through the following combination: \begin{equation} 0=\sum_{i=1}^n\lambda^{n-i}E_i=\sum_{i=1}^n\lambda^{n-i}\left(\partial _{i+1}\psi-\lambda\partial_i\psi\right)+\sum_{i=1}^n\lambda^{n-i}\left(4\partial_iM\partial_0\psi-2\partial_{0i}M\,\psi\right).\label{4.7}\end{equation} It is easy to prove that \begin{equation}\sum_{i=1}^n\lambda^{n-i}\left(\partial _{i+1}\psi-\lambda\partial_i\psi\right)=\partial_{n+1}\psi-\lambda^n\partial_1\psi.\label{4.8}\end{equation} The reciprocal transformation (\ref{3.22}), when applied to (\ref{4.8}), and combined with (\ref{4.4}) and (\ref{4.5}) yields \begin{eqnarray} &&\sum_{i=1}^n\lambda^{n-i}\left(\partial _{i+1}\psi-\lambda\partial_i\psi\right)=\sqrt{ P}\left[\phi_T-\frac{\Delta\phi_X}{P}+\frac{\Delta_X\phi}{2P}-\frac{P_X\Delta}{2P^2}\,\phi\right]\nonumber\\&&\quad\quad\quad\quad\quad-\,\lambda^n\,\sqrt{ P}\left[\phi_Y+ \frac{\Omega^{[1]}\phi_X}{2}-\frac{\Omega^{[1]}_X\phi}{4}\right].\label{4.9} \end{eqnarray} For the last sum of (\ref{4.7}), we can use \eqref{3.26} and (\ref{4.5}). 
The result is \begin{eqnarray} &&\sum_{i=1}^n\lambda^{n-i}\left(4\partial_iM\partial_0\psi-2\partial_{0i}M\,\psi\right)=\sqrt P\left[\frac{\Delta\phi_X}{P}+\frac{\Delta\,P_X}{2P^2}\phi-\frac{\Delta_X}{2P}\phi\right]\nonumber\\&&\quad\quad\quad\quad+\sqrt{P}\, \sum_{i=1}^{n-1}\lambda^{n-i}\left[\frac{\Omega^{[i+1]}_X}{4}\phi-\frac{\Omega^{[i+1]}}{2}\phi_X\right].\label{4.10} \end{eqnarray} Substitution of (\ref{4.9}) and (\ref{4.10}) in (\ref{4.7}) yields \begin{equation}\phi_T-\lambda^n\phi_Y+\lambda^n\left(\frac{\Omega^{[1]}_X}{4}\phi-\frac{\Omega^{[1]}}{2}\phi_X\right)+\sum_{i=1}^{n-1}\lambda^{n-i}\left[\frac{\Omega^{[i+1]}_X}{4}\phi-\frac{\Omega^{[i+1]}}{2}\phi_X\right]=0.\label{4.11}\end{equation} The expression (\ref{4.11}) can be written in a more compact form as \begin{equation} \phi_T-\lambda^n\phi_Y+\frac{A_X}{4}\phi-\frac{A}{2}\phi_X=0,\label{4.12}\end{equation} where, after relabelling the summation index as $j=i+1$, $A$ is defined as \begin{equation}A=\sum_{j=1}^n\lambda^{n-j+1}\Omega^{[j]}.\end{equation} The nonisospectral condition (\ref{4.3}) reads \begin{equation}\lambda_X=0,\quad\quad \sum_{i=1}^n \lambda^{n-i}\left(\partial_{i+1}-\lambda\partial_i\right)\lambda=\partial_{n+1}\lambda-\lambda^n\,\partial_1\lambda=\lambda_T-\lambda^n\lambda_Y=0.\end{equation} In sum, the Lax pair for CH(2+1) can be written as \begin{eqnarray} &&\phi_{XX}+\frac{1}{4}\left(\lambda\,U-1\right)\phi=0,\nonumber\\&& \phi_T-\lambda^{n}\phi_Y -\frac{A}{2}\phi_X+\frac{A_X}{4}\phi=0,\label{4.15}\end{eqnarray} where \begin{equation} A=\sum_{i=1}^{n} \left[\lambda^{n-i+1}\,\Omega^{[i]}\right],\qquad \lambda_T-\lambda^{n}\lambda_Y=0.\label{4.16} \end{equation} \subsection{Lax pair for mCH(2+1)} In \cite{EstPrada} it was proved that the CBS equation (\ref{3.27}) and the mCBS equation (\ref{3.35}) are linked through a Miura transformation. This is a transformation that relates the fields in the CBS and mCBS in the following form \begin{eqnarray} \partial_0M&=&-\frac{\left(\partial_0x\right)^2}{8}+ \frac{\partial_{00}x}{4},\nonumber\end{eqnarray} which combined with (\ref{3.35}) can be integrated as \begin{eqnarray} 4M= \partial_0x-m.\label{4.17}\end{eqnarray} The two-component Lax pair for the mCBS equation (\ref{3.35}) was derived in \cite{EstPrada}.
In our variables this spectral problem reads: \begin{equation}\partial_0 \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right) =\frac{1}{2}\left( \begin{array}{cc} -\, \partial_0 x& i\sqrt { \lambda}\\ i\sqrt { \lambda} & \partial_0 x \end{array} \right) \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right),\label{4.18}\end{equation} \begin{eqnarray}&&0=F_i=\partial_{i+1} \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right) -\lambda\,\partial_{i} \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right)\nonumber\\ &&\quad\quad\quad-\frac{1}{2}\left( \begin{array}{cc} -\, \partial_{i+1} x &i\sqrt { \lambda}\,\partial_i\left(m-\, \partial_0 x\right) \\ i\sqrt { \lambda}\,\partial_i\left(m+ \partial_0 x\right) & \partial_{i+1} x \end{array} \right) \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right).\label{4.19}\end{eqnarray} It is easy to see that the compatibility condition of (\ref{4.18})-(\ref{4.19}) yields the equation (\ref{3.35}) as well as the following nonisospectral condition: \begin{equation} \partial_0\lambda=0,\quad\quad \partial_{i+1}\lambda=\lambda\,\partial_i\lambda.\label{4.20}\end{equation} If, from the above Lax pair, we wish to obtain the spectral problem of the mCH(2+1), we need to invert the reciprocal transformation (\ref{3.33})-(\ref{3.34}), which means applying the following substitutions: \begin{eqnarray} &&\partial_0x=\frac{1}{u},\nonumber\\&&\partial_ix=\omega^{[i]}\quad\Rightarrow\quad \partial_{0i}x=\frac{\omega^{[i]}_x}{u}=v^{[i]}_x, \quad\quad i=1\dots n,\nonumber\\ &&\partial_{n+1}x=-\frac{v^{[n]}_{xx}-v^{[n]}}{u},\label{4.21}\\&&\partial_0m=\frac{1}{2u^2},\nonumber\\&&\partial_im=v^{[i]}\nonumber.
\end{eqnarray} and the transformations of the derivatives are \begin{eqnarray}&&\partial_0=\frac{1}{u}\,\partial_x\,,\nonumber\\ &&\partial_1=\partial_y+\omega^{[1]}\,\partial_x\,, \label{4.22}\\ &&\partial_{n+1}=\partial_t-\frac{\left(v^{[n]}_{xx}-v^{[n]}\right)}{u}\,\partial_x.\nonumber\end{eqnarray} We can now tackle the transformation of the Lax pair (\ref{4.18})-(\ref{4.19}). The spatial part (\ref{4.18}) transforms trivially to: \begin{equation} \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right)_x =\frac{1}{2}\left( \begin{array}{cc} -\,1& i\sqrt { \lambda}u\\ i\sqrt { \lambda}u &1 \end{array} \right) \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right).\label{4.23}\end{equation} The transformation of (\ref{4.19}) is slightly more complicated. Let us compute the following sum: \begin{eqnarray} 0=\sum_{i=1}^{n}\lambda^{n-i}F_i,\label{4.24} \end{eqnarray} where $F_i$ is defined in (\ref{4.19}). It is easy to see that \begin{equation}\sum_{i=1}^n\lambda^{n-i}(\partial_{i+1}-\lambda\partial_i) \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right) =(\partial_{n+1}-\lambda^n\partial_1) \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right),\label{4.25}\end{equation} and then, the inverse reciprocal transformation (\ref{4.21})-(\ref{4.22}) can be applied to (\ref{4.24}) in order to obtain \begin{equation} \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right)_t-\lambda^n\left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right)_y =C\left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right)_x+\frac{i\sqrt { \lambda}}{2}\left( \begin{array}{cc} 0&B_{xx}-B_x\\ B_{xx}+B_x &0 \end{array} \right) \left( \begin{array}{c} \psi \\ \hat \psi \end{array} \right),\label{4.26}\end{equation} where \begin{equation}C=\sum_{i=1}^{n}\lambda^{n-i+1}\omega^{[i]},\quad\quad B=\sum_{i=1}^{n}\lambda^{n-i}v^{[i]}.\label{4.27}\end{equation} The inverse reciprocal transformation, when applied to (\ref{4.20}), yields
\begin{equation}\lambda_x=0,\quad\quad \sum_{i=1}^n \lambda^{n-i}\left(\partial_{i+1}-\lambda\partial_i\right)\lambda=\partial_{n+1}\lambda-\lambda^n\,\partial_1\lambda=\lambda_t-\lambda^n\lambda_y=0.\end{equation} Hence, we have derived a Lax pair for mCH(2+1) using the existing Miura transformation between the CBS and mCBS and the Lax pair for mCBS. This is another example of how reciprocal transformations, or compositions of transformations, can provide us with Lax pairs and, with them, a proof of integrability. \section{A Miura-reciprocal transformation} Recalling the previous sections, we can summarize by saying that CH(2+1) and mCH(2+1) are related to the CBS and mCBS equations, respectively, by reciprocal transformations. Aside from this property, in this section we would like to show that there exists a Miura transformation \cite{EstPrada} relating the CBS and the mCBS equations. Hence, one wonders if mCH(2+1) is related to CH(2+1) in any way. It seems clear that the relationship between mCH(2+1) and CH(2+1) necessarily includes the composition of a Miura and a reciprocal transformation. \begin{figure}[H] \begin{center} $ \xymatrix{*+<1cm>[F-,]{\text{CH(2+1)}} \ar[rrr]^{\textit{reciprocal transf.}} \ar@2{<->}[d]^{\textit{Miura-reciprocal transf.}} & & &*+<1cm>[F-,]{\text{CBS equation}}\ar[d]^{\textit{Miura transf.}}\\ *+<1cm>[F-,]{\text{mCH(2+1)}} & & &*+<1cm>[F-,]{\text{mCBS equation}}\ar[lll]_(0.5){\textit{reciprocal transf.}}} $ \end{center} \caption{Miura-reciprocal transformation.} \label{Fig2} \end{figure} Evidently, the relationship between both hierarchies cannot be a simple Miura transformation because they are written in different variables $(X,Y,T)$ and $(x,y,t)$. The answer is provided by the relationship of both sets of variables with the same set $(z_0,z_1,z_{n+1})$.
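The Miura transformation (\ref{4.17}) at the heart of this diagram can be tested directly in its best-known avatar, the potential KdV/mKdV reduction that reappears below in Particular case 1. Writing $w=\partial_0x$ and $v=\partial_0M=\frac{w_0}{4}-\frac{w^2}{8}$, the sympy sketch below (our own check) verifies that $v$ satisfies the derivative of the potential KdV equation whenever $w$ satisfies the derivative of the potential mKdV equation.

```python
import sympy as sp

z0, z2 = sp.symbols('z0 z2')
w = sp.Function('w')(z0, z2)              # w = partial_0 x

# Miura map (4.17) at the level of densities: v = partial_0 M
v = sp.diff(w, z0)/4 - w**2/8

# derivative of the potential KdV equation for v
E = sp.diff(v, z2) + sp.diff(v, z0, 3) + 12*v*sp.diff(v, z0)

# impose the mKdV on w:  w_2 = -w_000 + (3/2) w**2 w_0
w2 = -sp.diff(w, z0, 3) + sp.Rational(3, 2)*w**2*sp.diff(w, z0)
E = E.subs(sp.Derivative(w, z0, z2), sp.diff(w2, z0))
E = E.subs(sp.Derivative(w, z2), w2)

residual = sp.simplify(sp.expand(E))
assert residual == 0   # the Miura map sends mKdV solutions to KdV solutions
```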
By combining (\ref{3.21}) and (\ref{3.32}), we have \begin{equation} \begin{aligned} &P\,dX-\frac{1}{2}P\Omega^{[1]}\,dY+\Delta\,dT=u\,dx-u\omega^{[1]}\,dy+\left(v^{[n]}_{xx}-v^{[n]}\right)\,dt,\\ &Y=y,\quad\quad\quad T=t, \label{5.1} \end{aligned}\end{equation} which yields the required relation between the independent variables of CH(2+1) and those of mCH(2+1). The Miura transformation (\ref{4.17}), combined with (\ref{3.26}) and (\ref{3.36}) also provides the following results \begin{eqnarray} &&4\partial_{0}M=\partial_{00}x-\partial_{0}m\Longrightarrow \frac{\partial_{00}X}{\partial_{0}X}+\partial_{0}X=\partial_{0}x, \label{5.2}\\ &&4\partial_{i}M=\partial_{0i}x-\partial_{i}m\Longrightarrow -\frac{\partial_{i+1}X}{\partial_{0}X}=\partial_{0i}x-\frac{\partial_{00i}x}{\partial_{0}x}-\frac{\partial_{i+1}x}{\partial_{0}x}, \label{5.3}\end{eqnarray} with $i=1,\dots, n$. With the aid of (\ref{3.23}), (\ref{3.24}) and (\ref{3.34}), the following results arise from (\ref{5.2})-(\ref{5.3}) \begin{eqnarray} &&\frac{1}{u}=\left(\frac{1}{P}\right)_X+\frac{1}{P},\nonumber\\&& P\Omega^{[i+1]}= 2\left(v^{[i]}-v^{[i]}_x\right)\quad\Longrightarrow\quad \omega^{[i+1]}=\frac{\Omega^{[i+1]}_X+\Omega^{[i+1]}}{2},\quad i=1\dots n-1,\nonumber\\&& \Delta =v^{[n]}_x-v^{[n]}.\label{5.4} \end{eqnarray} Furthermore, (\ref{5.1}) can be integrated as \begin{equation}x=X-\ln P.\label{5.5}\end{equation} By summarizing the above conclusions, we have proven that the mCH(2+1) hierarchy \begin{equation} u_t=r^{-n}u_y,\quad u=u(x,y,t), \end{equation} can be considered as the modified version of CH(2+1) \begin{equation} U_T=R^{-n}U_Y,\quad U=U(X,Y,T). 
\end{equation} The transformation that connects the two hierarchies involves the reciprocal transformation \begin{equation}x=X-\frac{1}{2}\ln U,\label{5.8}\end{equation} as well as the following transformation between the fields \begin{eqnarray} &&\frac{1}{u}=\frac{1}{\sqrt U}\left(1-\frac{U_X}{2U}\right), \nonumber\\&& \omega^{[i]}=\frac{\Omega^{[i]}_X+\Omega^{[i]}}{2},\quad i=1\dots n,\nonumber\\&& \frac{\delta}{u}=\left(\frac{\Delta}{\sqrt U}\right)_X+\frac{\Delta}{\sqrt U}\label{5.9}.\end{eqnarray} \subsection{Particular case 1: The Qiao equation} We now restrict ourselves to the first component ($n=1$) of the hierarchies, in the case in which the field $u$ is independent of $y$ and $U$ is independent of $Y$. \begin{itemize} \item From (\ref{3.19}) and (\ref{3.20}), for the restriction of CH(2+1) we have \begin{equation} \begin{aligned}&U=P^2, \\&U_T=\Omega^{[1]}_{XXX}-\Omega^{[1]}_{X},\\& \left(P\,\Omega^{[1]}\right)_X=0,\label{5.10}\end{aligned}\end{equation} which can be summarized as \begin{equation} \begin{aligned}\Omega^{[1]}&=\frac{k_1}{P}=\frac{k_1}{\sqrt U},\\ U_T&=k_1\left[\left(\frac{1}{\sqrt U}\right)_{XXX}-\left(\frac{1}{\sqrt U}\right)_{X}\right],\label{5.11}\end{aligned}\end{equation} which is the Dym equation \cite{kruskal}. \item The reduction of mCH(2+1) can be achieved from \eqref{18} in the form \begin{equation} \begin{aligned}&\omega^{[1]}_x=uv^{[1]}_x,\\&u_t=v^{[1]}_{xxx}-v^{[1]}_{x},\\& \left(u\omega^{[1]}\right)_x=0,\end{aligned}\label{5.12}\end{equation} which can be written as \begin{equation}\label{5.13} \begin{aligned}&\omega^{[1]}=\frac{k_2}{u} \quad\Longrightarrow \quad v^{[1]}=\frac{k_2}{2u^2},\\ &u_t=k_2\left[\left(\frac{1}{2u^2}\right)_{xx}-\left(\frac{1}{2u^2}\right)\right]_{x},\end{aligned}\end{equation} which is the Qiao equation. \item From \eqref{5.8} and \eqref{5.9} it is easy to see that $k_1=2k_2$.
By setting $k_2=1$, we can conclude that the Qiao equation \begin{equation} u_t=\left(\frac{1}{2u^2}\right)_{xxx}-\left(\frac{1}{2u^2}\right)_{x}\end{equation} is the modified version of the Dym equation \begin{equation} U_T=\left(\frac{2}{\sqrt U}\right)_{XXX}-\left(\frac{2}{\sqrt U}\right)_{X}.\end{equation} \item From \eqref{8} and \eqref{20}, it is easy to see that the independence from $y$ implies that $\partial_1X=\partial_0X$ and $\partial_1 x=\partial_0 x$, which means that the CBS and mCBS \eqref{3.27} and \eqref{3.35} reduce to the following potential versions of the KdV and modified KdV equations \begin{equation} \begin{gathered} \partial_0\left(\partial_2 M+\partial_{000}M+6\left(\partial_0M\right)^2\right)=0,\\ \partial_2 x+\partial_{000}x-\frac{1}{2}\left(\partial_0x\right)^3=0.\end{gathered}\end{equation} \end{itemize} \subsection{Particular case 2: The Camassa--Holm equation} If we restrict ourselves to the $n=1$ component when $T=X$ and $t=x$, the following results hold: \begin{itemize} \item From \eqref{3.19} and \eqref{3.20}, for the restriction of CH(2+1) we have \begin{equation} \begin{aligned} &\Delta=P=\sqrt U,\\& U=\Omega^{[1]}_{XX}-\Omega^{[1]},\\ &U_Y+U\Omega^{[1]}_X+\frac{1}{2}\Omega^{[1]} U_X=0,\end{aligned}\end{equation} which is the Camassa--Holm equation. \item The reduction of mCH(2+1) can be obtained from \eqref{18} in the form \begin{equation} \begin{aligned}&\delta=u=v^{[1]}_{xx}-v^{[1]},\\& u_y+\left(u\omega^{[1]}\right)_x=0,\\& \omega^{[1]}_x-uv^{[1]}_x=0,\end{aligned}\end{equation} which can be considered as a modified Camassa--Holm equation. \item From \eqref{3.23} and \eqref{3.34}, it is easy to see that $\partial_2 X=\partial_2 x=-1$.
Therefore, the reductions of \eqref{3.27} and \eqref{3.35} are \begin{equation} \partial_{0001}M+4\partial_1 M\,\partial_{00}M+8\partial_0 M\,\partial_{01}M=0,\end{equation} which is the AKNS equation, and \begin{equation} \partial_0\left(\frac{\partial_{001}x-1}{\partial_0 x}\right)=\partial_1\left(\frac{\left(\partial_0 x\right)^2}{2}\right),\end{equation} which is the modified AKNS equation. \end{itemize} \section{Conclusions} Concerning the role of reciprocal transformations in the classification and identification of PDEs, we have shown that the CH(2+1) and mCH(2+1) hierarchies can be connected with the CBS and mCBS equations via reciprocal transformations. A big advantage of a reciprocal transformation is that it turns a whole hierarchy into a set of equations that can be studied through Painlev\'e analysis, from which other properties can afterwards be derived. In this context, a reciprocal transformation has served as a way to turn a set of differential equations with multiple scalar fields and a few independent variables into a single differential equation with one scalar field depending on multiple independent variables. Furthermore, it serves to turn the initial equations into ones that possess the Painlev\'e property, thereby proving the integrability of the hierarchy prior to the reciprocal transformation. We have also treated higher-order examples by presenting a fourth-order nonlinear PDE in $2+1$ dimensions and investigating different reciprocal transformations for it. Reciprocal transformations have once more shown that the transformed equations (actually, their reductions) in $1+1$ dimensions are the Vakhnenko--Parkes and Degasperis--Procesi equations. Reciprocal transformations have been further proved to be useful for the derivation of Lax pairs.
As has been shown, since the CBS and mCBS equations are integrable in the Painlev\'e sense and their Lax pairs are known, by undoing the reciprocal transformations that turn CH and mCH into CBS and mCBS we were able to retrieve Lax pairs that had not previously been proposed for CH and mCH in $1+1$ and $2+1$ dimensions. This confirms the importance of reciprocal transformations as a way to derive Lax pairs. As a last instance, we have depicted Miura-reciprocal transformations, based on the composition of a Miura transformation between the CBS and mCBS and the reciprocal transformations linking CH and mCH to CBS and mCBS, respectively, in $1+1$ and $2+1$ dimensions. Miura-reciprocal transformations confirm the value of composing reciprocal transformations in order to classify hierarchies; indeed, we have successfully proven that CH and mCH in $1+1$ and $2+1$ dimensions are two different versions of the same underlying problem, connected by the transformation map proposed in the last section. All these properties show the efficiency and importance of the reciprocal transformations introduced at the start of this chapter, which we here close having supported our arguments with remarkable examples from the physics literature on hydrodynamic systems, shallow water waves, etc.
\section{Introduction} \label{intro:sec} Soft matter systems are characterised by the simultaneous existence of two intrinsic structural length scales: the omnipresent atomic or {\it microscopic} scale that is associated with the solvent molecules and the monomers of the dissolved polymers (if any) and the {\it mesoscopic} scale that characterises the dissolved macromolecular aggregates as a whole. Depending on the physical system under consideration, the latter typically covers the range between several nanometers and micrometers, spanning thereby three orders of magnitude. The former is rather located in the domain of a few {\AA}ngstr{\"o}m. In attempting to bridge the scales all the way from the microscopic to the macroscopic ones, it has proven very useful to eliminate the atomic degrees of freedom from view by performing a statistical-mechanical trace over them and constructing thereby an {\it effective Hamiltonian} that involves the mesoscopic degrees of freedom only \cite{likos:pr:01}. Although the effective Hamiltonian ${\mathcal H}_{\rm eff}$ greatly facilitates the transition to the macroscopic scales, both its construction and its interpretation have to be treated with care: indeed, the effective potential energy function of the mesoscopic degrees of freedom that appears in ${\mathcal H}_{\rm eff}$ is not a true interaction potential in the sense of Hamiltonian Mechanics but rather a constrained free energy, which arises from the thermodynamic trace over the microscopic ones. There are a number of subtleties associated with the effective potential energy function that have to be taken into account when a coarse-grained statistical mechanical treatment of a soft matter system is employed. Two of them are particularly relevant in the context of calculating thermodynamic quantities and tracing out phase diagrams.
First, the potential energy cannot, in general, be written as a sum of pair interactions:\footnote{An important exception, however, is the depletion attraction in colloid-polymer mixtures described by the idealised Asakura-Oosawa model. In this case, all $n$-th order polymer-mediated effective interactions between colloids vanish identically for $n \geq 3$ if the polymer-to-colloid size ratio does not exceed $2\sqrt{3}/3 - 1$. See Ref.\ \cite{brader:99} for details.} the process of eliminating the microscopic degrees of freedom inadvertently generates higher-order, many-body potentials \cite{brader:99, dijkstra:prl:99, dijkstra:pre:99}. Truncating the effective potential energy function at the pair level constitutes the {\it pair potential approximation}, whose validity is not {\it a priori} guaranteed and has to be explicitly checked. Second, the contributions to the potential energy are in general state-dependent, the most prominent example being the Debye-H{\"u}ckel effective pair potential that has been extensively employed to model charge-stabilised colloidal suspensions under certain physical conditions \cite{lowen:hansen:00}. Sometimes the state-dependence of an effective pair potential hides precisely the effect of many-body forces, and then particular care has to be taken in the ways in which the pair potential is employed, so as to avoid blatant thermodynamic inconsistencies \cite{ard:beware, stillinger1:03, stillinger2:03, dijkstra:jcp:00}. Many-body potentials are already encountered in the realm of atomic systems, the Axilrod-Teller interaction \cite{axilrod:teller} being a characteristic example that has been shown to be relevant for the description of high-precision measurements of the structure factor of rare gases \cite{tau:jpcm:99}.
A formal decomposition of the effective potential energy function between the particles of one kind in a binary mixture in which the particles of the other kind are traced out has been given in Refs.\ \cite{dijkstra:prl:99} and \cite{dijkstra:pre:99}. Unfortunately, the treatment there applies only to mixtures for which the number densities of the two components can be varied at will, e.g., colloid-polymer or hard-sphere mixtures. It is not applicable to two broad categories of soft matter systems, namely charged mixtures and solutions of polymers of arbitrary architecture. In the former case, the number densities of the two components are constrained by the electroneutrality condition. In the latter, where one specific monomer \cite{likos:prl:98, jusufi:jpcm:01} or the centre of mass of the molecule \cite{krueger:etal:98, ard:prl:00, ard:pre:00, ard:pre:01, ard:jcp:01} is chosen as the effective, mesoscopic coordinate, the total number of monomers and the number of effective particles are coupled to each other through the constraint of keeping the number of monomers per macromolecule fixed. In charge-stabilised colloidal suspensions, three-body forces are generated by nonlinear counterion screening. Their effects have been examined by density functional theory and simulations \cite{lowen:jpcm:98} as well as by numerical solution of the nonlinear Poisson-Boltzmann equation \cite{dobn:prl:04, dobn:pre:04}. It has been found that the three-body forces in this case are {\it attractive} \cite{lowen:jpcm:98, dobn:prl:04, dobn:pre:04}, a result confirmed by direct experimental measurements using optical tweezers \cite{dobn:prl:04, dobn:pre:04}. As far as polymeric systems are concerned, the triplet forces in star polymer solutions have been analysed by theory and simulations in Ref.\ \cite{ferber:epje:01}, where it was found that they play a minor role for concentrations vastly exceeding the overlap density.
For linear chains, on the other hand, the many-body forces appear to have a more pronounced effect, as witnessed by the considerable state-dependence of the effective pair potential that reproduces the correlation functions of concentrated polymer solutions \cite{ard:pre:01, ard:jcp:01}. The general functional form of the centre-of-mass effective interaction between polymer chains was found to preserve its Gaussian form, its strength and range being nevertheless modified within a range of $\sim 10\%$ of their original values, due to many-body effects \cite{ard:pre:01, ard:jcp:01}. Another polymeric system that serves as a prototype for a tunable colloidal system that displays a Gaussian, soft effective pair interaction is that of a solution of dendritic macromolecules, or dendrimers for simplicity \cite{likos:ac:04}. It has been recently shown that a Gaussian effective pair potential can describe extremely well the scattering intensities obtained experimentally from concentrated dendrimer solutions \cite{likos:macrom:01, likos:jcp:02}. The Gaussian pair interaction has also been explicitly measured in recent computer simulations that employed two different coarse-grained models for the microscopic, monomer-monomer interactions \cite{ingo:jcp:04}. Nevertheless, in the approach of Ref.\ \cite{ingo:jcp:04} only {\it two} dendritic molecules were simulated, hence no information about many-body forces was gained. In the present work, we address the issue of the magnitude and importance of many-body effective interaction potentials in concentrated dendrimer solutions. We do not attempt to derive an explicit decomposition of the potential energy function into $n$-body terms, $n = 2, 3, 4, \ldots$; this would require separate simulations of just $n$ dendrimers. Instead, we explicitly simulate a large number of interacting dendrimers at the microscopic level simultaneously. 
We measure thereby the pair correlation functions in the concentrated system directly and we compare the result with the one obtained by simulating the {\it same} number of dendrimers as effective entities interacting exclusively by means of pair potentials. The discrepancies in the results from the two approaches for the correlation functions then yield information regarding the importance of the many-body forces {\it of all orders}. We find that the many-body effects are of minor importance, especially for flexible dendrimers. The rest of the paper is organised as follows. In section \ref{model:sec} we present our model and the simulation details. In section \ref{pair:sec} we present our results for the correlation functions derived by the two approaches mentioned above and we discuss the magnitude and origin of their discrepancies. In section \ref{scatter:sec} we turn our attention to the issue of the interpretation of the total scattering intensities from concentrated dendrimer solutions, and in particular to the question of the validity of the so-called factorisation approximation of the latter as the product of the form- and the structure factor, discussing the limits of applicability of such an approach. Finally, in section \ref{summary:sec} we summarise and conclude. \section{The model and simulation details} \label{model:sec} In this work, we focus exclusively on dendrimers of the fourth generation (G4). We model the macromolecules at the monomer level using a simplified model that pictures every monomer as a hard sphere of diameter $\sigma$. The bonding between the connected monomers is modelled by flexible threads of maximum extension $\sigma(1 + \delta)$. In detail, the potential between {\it disconnected} monomers is given by \begin{eqnarray} V_{\rm HS}(r)=\left\{\begin{array}{l@{\qquad}l}\infty & \mbox{for}\quad r/\sigma < 1 \\0 & \mbox{for}\quad r/\sigma > 1 \end{array} \right.
\end{eqnarray} whereas {\it bonded} monomers interact via the potential \begin{eqnarray} V_{\rm bond}(r)=\left\{\begin{array}{l@{\qquad}l}\infty & \mbox{for}\quad r/\sigma<1 \\0 & \mbox{for}\quad 1 < r/\sigma < 1+\delta\\ \infty & \mbox{for}\quad r/\sigma > 1+\delta \end{array} \right. \end{eqnarray} The quantity $\delta > 0$ serves as a control parameter of the dendrimer conformations, with small $\delta$-values resulting in stiff dendrimers and large values in loose structures. This bead-thread model was originally introduced by Sheng {\it et al.} \cite{sheng:macrom:02}, who kept a fixed value $\delta = 0.4$ and examined the scaling of the radius of gyration of the dendrimers as a function of generation number and spacer length. The same model has been employed in a previous work by us, in order to systematically examine the evolution of the dendrimers' conformational properties with the generation number $G$ \cite{ingo:macrom:03}. By comparing the results for various values of the parameter $\delta$ and by performing a further comparison with results from a different model, we have shown that the conformational properties of single dendrimers are insensitive to the details of the microscopic model. Moreover, this very simple, coarse-grained model reproduces the experimental scattering data for isolated dendrimers very well \cite{ingo:macrom:03}. As we are only interested in static properties, we also allow `ghost chains', i.e., crossings of bonds, which become possible for $\delta \ge \sqrt{2} - 1 \approx 0.414$. Monte Carlo simulations of this model are very fast, as there is no need to calculate energies; one only needs to check for overlaps, and additionally whether the conditions of the maximal bond extension are fulfilled. If one of these conditions is violated, the trial move is rejected in any case, so the search for further overlaps can be aborted.
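As a minimal illustration of the acceptance test just described (our own sketch, not the code used in this work; the function name and data layout are assumptions), a trial monomer move is accepted only if it creates no hard-sphere overlap and leaves every attached bond within the maximal extension $\sigma(1+\delta)$:

```python
import numpy as np

def move_allowed(pos, bonds, idx, trial, sigma=1.0, delta=0.4):
    """Check a trial position for monomer `idx` in the bead-thread model:
    no hard-sphere overlap with any other monomer, and every bond attached
    to `idx` must stay within the maximal extension sigma*(1 + delta)."""
    # Hard-sphere condition: distance to every other monomer must exceed sigma.
    d = np.linalg.norm(pos - trial, axis=1)
    d[idx] = np.inf                      # ignore the self-distance
    if np.any(d < sigma):
        return False
    # Bond condition: bonded neighbours must stay within sigma*(1 + delta).
    for a, b in bonds:
        if idx in (a, b):
            other = b if a == idx else a
            if np.linalg.norm(trial - pos[other]) > sigma * (1.0 + delta):
                return False
    return True

# Three monomers in a short chain: 0-1 and 1-2 bonded.
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [2.4, 0.0, 0.0]])
bonds = [(0, 1), (1, 2)]
print(move_allowed(pos, bonds, 1, np.array([1.3, 0.1, 0.0])))  # allowed
print(move_allowed(pos, bonds, 1, np.array([0.5, 0.0, 0.0])))  # overlaps monomer 0
```

Because the interactions are only zero or infinity, this single boolean test replaces any energy evaluation, which is what makes the model so fast to simulate.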
Furthermore, due to the very short range of the hard sphere interaction, neighbour lists are very effective. The effective {\it pair} interaction potential between the centres of mass of two G4-dendrimers has been determined with the help of configuration-biased Monte Carlo simulations of this model in Ref.\ \cite{ingo:jcp:04}. The strength of the interaction between dendrimers can be tuned by varying the number of generations or the parameter $\delta$. Denoting by $r$ the centre of mass separation, the $\delta$-dependent effective pair potential $V_{\rm eff}^{(2)}(r;\delta)$ has been found to have a Gaussian form with small, additional corrections. In particular, it can be fitted by the function: \begin{equation} \fl \beta\,V_{\rm eff}^{(2)}(r;\delta) = \epsilon_0 \exp\left(-\frac{r^2}{\gamma_0}\right) +\epsilon_1 \exp\left[-\frac{(r-r_1)^2}{\gamma_1}\right] -\epsilon_2 \exp\left[-\frac{(r-r_2)^2}{\gamma_2}\right], \label{potential:fit} \end{equation} where $\beta = (k_{\rm B}T)^{-1}$ with Boltzmann's constant $k_{\rm B}$ and the absolute temperature $T$; the numerical values of the various fit parameters, depending on the choice of $\delta$, are given in Table \ref{TABparameters}. Note that the precise values of the fit parameters are slightly different from those given in Ref.\ \cite{ingo:jcp:04}, since there we employed a more constrained fit by setting $\gamma_0 = 4R_{g,\infty}^2/3$, with the radius of gyration $R_{g,\infty}$ of the dendrimers at infinite dilution, and $\epsilon_2 = 0$. The gyration radius is also shown in the last column of Table \ref{TABparameters}. Here, we considered G4-dendrimers with two different values, $\delta=0.1$ and $\delta = 2.0$, representing the two extreme cases studied in Ref.\ \cite{ingo:jcp:04}. \begin{table} \caption{The numerical values of the fit parameters of the effective pair potential between the centres of mass of two G4-dendrimers appearing in Eq.\ (\ref{potential:fit}) for two different values of $\delta$.
In the last column, the gyration radius $R_{g,\infty}$ at infinite dilution is also shown.} \begin{center} \begin{tabular}{lccccccccr}\hline\hline\hline $\delta$ & $\epsilon_0$ & $\gamma_0/\sigma^2$ & $\epsilon_1$ & $\gamma_1/\sigma^2$ & $r_1/\sigma$ & $\epsilon_2$ & $\gamma_2/\sigma^2$ & $r_2/\sigma$ & $R_{g,\infty}/\sigma$ \\ \hline 0.1 & 55.75 & 9.75 & 5.0 & 0.9 & 2.5 & 0.1 & 1.5 & 7.2 & 2.665 \\ 2.0 & 11.35 & 33.0 & 0.8 & 10.0 & 3.7 & 0.0 & --- & --- & 4.939 \\ \hline\hline\hline \end{tabular} \end{center} \label{TABparameters} \end{table} Let $\rho = N/\Omega$ be the number density of a sample containing $N$ dendrimers enclosed in the volume $\Omega$. The definition of the overlap density $\rho_{*}$ of a dendrimer solution requires some care, as it is not a sharply defined quantity. Previous simulation studies with this system \cite{ingo:macrom:03} have revealed that the monomer density profiles around the dendrimer's centre of mass decay to zero at a distance $r_{\rm c} \cong 1.5\,R_{g,\infty}$. Motivated by this fact, we envision every dendrimer as a `soft sphere' of radius $r_{\rm c}$ and define the overlap density through the relation:\footnote{In the literature, there are alternative definitions. For polymer chains, for instance, the definition $\frac{4\pi}{3}\rho_{*}R_g^3 = 1$ was used in Ref.\ \cite{ard:pre:01}.} \begin{equation} \frac{4\pi}{3}\rho_{*}r_{\rm c}^3 = 1. \label{rhoov:eq} \end{equation} Moreover, we introduce the diameter of gyration at infinite dilution, $\tau \equiv 2R_{g,\infty}$, as the characteristic mesoscopic length scale to be used to introduce a dimensionless expression for the number density, $\rho\tau^3$. In these terms, the overlap density of Eq.\ (\ref{rhoov:eq}) above is given by $\rho_{*}\tau^3 = 0.566$. The highest density in the simulation was $\rho_{\rm max}\tau^3 = 0.605$, slightly exceeding the overlap value, since $\rho_{\rm max} = 1.07\rho_{*}$.
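The fitted potential of Eq.\ (\ref{potential:fit}) with the parameters of Table \ref{TABparameters} is straightforward to evaluate; the short sketch below (an illustration of the fit, not the original analysis code) also reproduces the dimensionless overlap density $\rho_{*}\tau^3 = 0.566$ from $r_{\rm c} = 1.5\,R_{g,\infty}$ and $\tau = 2R_{g,\infty}$, independently of the actual value of $R_{g,\infty}$:

```python
import numpy as np

# Fit parameters from Table 1 (lengths in units of sigma). For delta = 2.0
# the values g2, r2 are dummies, since epsilon_2 = 0 makes that term vanish.
PARAMS = {
    0.1: dict(e0=55.75, g0=9.75, e1=5.0, g1=0.9, r1=2.5, e2=0.1, g2=1.5, r2=7.2),
    2.0: dict(e0=11.35, g0=33.0, e1=0.8, g1=10.0, r1=3.7, e2=0.0, g2=1.0, r2=0.0),
}

def beta_v_eff(r, delta):
    """beta*V_eff^(2)(r; delta): Gaussian fit with two small corrections."""
    p = PARAMS[delta]
    return (p['e0'] * np.exp(-r**2 / p['g0'])
            + p['e1'] * np.exp(-(r - p['r1'])**2 / p['g1'])
            - p['e2'] * np.exp(-(r - p['r2'])**2 / p['g2']))

# Overlap density: (4*pi/3)*rho_star*r_c^3 = 1 with r_c = 1.5*R_g and
# tau = 2*R_g, so rho_star*tau^3 = 3/(4*pi) * (2/1.5)**3, independent of R_g.
rho_star_tau3 = 3.0 / (4.0 * np.pi) * (2.0 / 1.5) ** 3
print(round(rho_star_tau3, 3))          # 0.566, as quoted in the text
print(round(beta_v_eff(0.0, 0.1), 2))   # 55.75: contact value, stiff dendrimers
```

At full overlap ($r = 0$) the correction terms are negligible, so the contact value is essentially $\epsilon_0$, i.e.\ tens of $k_{\rm B}T$, which is why the dendrimers behave as strongly repulsive but fully penetrable particles.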
For both values of $\delta$, ten different concentrations were simulated, in particular at the densities $\rho/\rho_{\rm max} = 0.1, 0.2, \ldots, 1.0$. Periodic boundary conditions were employed throughout. At all densities, systems of 500 dendrimers were simulated, whereby each dendrimer consists of $\nu = 62$ monomers, and the size of the simulation box was changed in order to modify the dendrimer number density. The minimum box length was $L_{\rm min} = 9.384\,\tau$, yielding a system with the density $\rho_{\rm max}$. The equilibration criterion for the system at hand requires some care, as there is no internal energy in the microscopic model, since all interactions are either zero or infinity. We therefore took advantage of the fact that the {\it effective}, pair interaction $V_{\rm eff}^{(2)}(r;\delta)$ between the centres of mass is known and given by Eq.\ (\ref{potential:fit}) with the parameters given in Table \ref{TABparameters}. Hence, we chose to monitor the total effective pair potential energy $U^{(2)}(N;\delta)$ given by \begin{equation} U^{(2)}(N;\delta) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j\ne i}^{N} V_{\rm eff}^{(2)}(|{\bf r}_i - {\bf r}_j|;\delta), \label{totalpairenergy:eq} \end{equation} where ${\bf r}_{i,j}$ denotes the position of the $i,j$-th centre of mass. Two different starting configurations were tried. In the first one, the centres of mass of dendrimers possessing identical microscopic conformations were placed at the vertices of an fcc-lattice, which was achieved without violation of the excluded volume conditions. This procedure is particularly useful at the highest density, $\rho_{\rm max}$, where a random distribution of the centres of mass will, with high probability, result in a forbidden state with monomer overlaps. The system was then equilibrated, monitoring $U^{(2)}(N;\delta)$ as described above. In the second one, the dendrimers' centres of mass were placed in a random arrangement.
Although this procedure requires a large number of failed attempts before an allowed configuration is found, especially at high densities, such configurations are possible. Once again, we monitored the total effective pair potential energy during the equilibration period, finding that it converges to the same value as the one obtained from the fcc-initial state. In this way, sufficient equilibration of the system was guaranteed. Finite-size effects were checked by selectively simulating some systems with 256 dendrimers, in a box having a correspondingly smaller volume, so that the same density is achieved, and finding agreement between the two attempts. For $\delta=0.1$, $N_{\rm equil} = 10^7$ MC steps were used to equilibrate the system, and about $N_{\rm run} = 2 \times 10^8$ steps to gather statistics. Statistical averages were calculated every $N_{\rm meas} = 10\,000$ MC steps. For $\delta=2.0$, where a much larger random displacement for the monomers can be used, the equilibration phase consisted of $N_{\rm equil} = 10^6$ steps and statistical averages were calculated every $N_{\rm meas} = 1000$ steps over a period of $N_{\rm run} = 2 \times 10^8$ steps. The quantities measured were monomer profiles around the centres of mass, the radial distribution functions of the latter, radii of gyration, form factors, structure factors from the centres of mass and total scattering intensities; all these quantities will be precisely defined in the sections that follow. \begin{figure} \begin{center} \includegraphics[width=8.5cm,clip]{figure01.ps} \end{center} \caption{A snapshot from the monomer-resolved simulation of dendrimers. The monomers are rendered as spheres of diameter $\sigma$. Here, dendrimers with threads characterised by $\delta=0.1$ at a density $\rho\tau^3 = 0.0605$ are shown.
Note that only a part of the simulation box is shown, which has the same size as the full box depicted in Fig.\ \ref{rho10:fig}.} \label{rho01:fig} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8.5cm,clip]{figure02.ps} \end{center} \caption{Same as Fig.\ \ref{rho01:fig} but at density $\rho\tau^3 = 0.605$. Here the complete simulation box is shown.} \label{rho10:fig} \end{figure} In Figs.\ \ref{rho01:fig} and \ref{rho10:fig}, we show simulation snapshots of the monomer-resolved simulations for the lowest and the highest density for the thread length $\delta=0.1$. (For clarity, in Fig.\ \ref{rho01:fig} we show only a section of the simulation box of the same size as in Fig.\ \ref{rho10:fig}.) Although in Fig.\ \ref{rho01:fig} individual dendrimer molecules can still be resolved, since the density is much smaller than $\rho_{*}$, in Fig.\ \ref{rho10:fig} this is no longer possible. Here, $\rho = 1.07\rho_{*}$ and the whole system appears as a dense solution of monomers, in which the individual character of each macromolecule is lost. We will return to the implications of this fact in section \ref{scatter:sec}. In addition, a different kind of Monte Carlo simulation was also carried out, in which the monomers were not explicitly resolved. Instead, the dendrimers were replaced entirely by their centres of mass, which were then treated as effective, soft particles interacting exclusively by means of the pair potential of Eq.\ (\ref{potential:fit}). Accordingly, we call this approach an {\it effective} simulation. As all monomers have dropped out of sight in the effective approach, it is only possible to measure quantities pertaining to the centres of mass, i.e., their radial distribution functions and structure factors. Comparison of the results regarding these quantities that are obtained through the two different types of simulations yields important information by way of testing whether the pair-potential approximation is meaningful.
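A skeletal Metropolis step for such an effective simulation, in which each dendrimer is reduced to its centre of mass interacting through a soft pair potential, might look as follows. This is a schematic sketch under our own assumptions (a toy Gaussian repulsion standing in for the fitted potential, arbitrary move size and box handling), not the production code:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_energy(x, i, L, beta_v):
    """Total pair energy of particle i with all others (minimum image)."""
    d = x - x[i]
    d -= L * np.round(d / L)                 # periodic boundary conditions
    r = np.linalg.norm(d, axis=1)
    r[i] = np.inf                            # exclude the self-interaction
    return np.sum(beta_v(r))

def metropolis_step(x, L, beta_v, dmax=0.3):
    """One attempted single-particle move; returns True if accepted."""
    i = rng.integers(len(x))
    old = pair_energy(x, i, L, beta_v)
    saved = x[i].copy()
    x[i] = (x[i] + rng.uniform(-dmax, dmax, size=3)) % L
    new = pair_energy(x, i, L, beta_v)
    if rng.random() < np.exp(min(0.0, old - new)):
        return True
    x[i] = saved                             # reject: restore old position
    return False

# Toy run: a purely Gaussian repulsion in place of the fitted V_eff^(2).
beta_v = lambda r: 10.0 * np.exp(-r**2)
x = rng.uniform(0.0, 5.0, size=(50, 3))
acc = sum(metropolis_step(x, 5.0, beta_v) for _ in range(1000))
print(f"acceptance ratio: {acc / 1000:.2f}")
```

Because the pair potential is bounded, such moves are accepted with a reasonable rate even above the overlap density, in contrast to the hard-sphere monomer-resolved model where only the overlap test of section \ref{model:sec} decides.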
\section{Comparison between the monomer-resolved and the effective simulations} \label{pair:sec} Each dendrimer of the fourth generation consists of $\nu = 62$ monomers. Let $\alpha$, $\beta$ be monomer indices within a given dendrimer whereas $i$, $j$ are integers describing the dendrimer molecules as whole entities. In particular, let ${\bf r}_i$ stand for the position of the centre of mass of the $i$-th dendrimer, ${\bf R}_{\alpha}^{i}$ denote the position vector of the $\alpha$-th monomer in the $i$-th dendrimer, and ${\bf u}_{\alpha}^{i}$ stand for the same quantity but now measured in a coordinate system centred at ${\bf r}_i$. Obviously, it holds \begin{equation} {\bf R}_{\alpha}^{i} = {\bf r}_i + {\bf u}_{\alpha}^{i}. \label{trafo:eq} \end{equation} In the monomer-resolved simulation, the following quantities were measured: The radial distribution function $g(r)$ between the centres of mass, defined as \begin{equation} g(r) = \frac{1}{N}\left\langle \sum_{i=1}^{N}\sum_{j \ne i}^{N} \delta\left({\bf r}-{\bf r}_{ij}\right)\right\rangle, \label{gofr:eq} \end{equation} where $\langle \cdots \rangle$ denotes a statistical average and ${\bf r}_{ij} = {\bf r}_i - {\bf r}_j$. Related to this quantity is the structure factor $S(q)$ that describes the correlations between the centres of mass in reciprocal space and it is given by \begin{equation} S(q) = \frac{1}{N}\left\langle \sum_{i=1}^{N}\sum_{j=1}^{N} \exp\left[-{\rm i}{\bf q}\cdot \left({\bf r}_i - {\bf r}_j\right)\right] \right\rangle. \label{sofq:eq} \end{equation} Note that $S(q)$ and $g(r)$ are related by a Fourier transformation \cite{hansen:mcdonald} \begin{equation} S(q) = 1 + \rho\int{\rm d}^3r \exp\left[-{\rm i}{\bf q}\cdot{\bf r}\right] \left[g(r) - 1\right]. \label{ft:eq} \end{equation} Moreover, we took advantage of the microscopic nature of the simulation to measure the dendrimers' form factor $F(q)$ at every simulated density $\rho$. 
This quantity is expressed by the relation: \begin{equation} F(q) = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{\nu}\left\langle \sum_{\alpha=1}^{\nu} \sum_{\beta=1}^{\nu} \exp\left[-{\rm i}{\bf q}\cdot \left({\bf u}_{\alpha}^{i} - {\bf u}_{\beta}^{i}\right) \right]\right\rangle. \label{fofq:eq} \end{equation} Another quantity of interest is the monomer distribution around the centre of mass, $\xi(u)$, which can again be measured at any desired overall density and is given by the expression: \begin{equation} \xi(u) = \frac{1}{N}\sum_{i=1}^{N} \left\langle\sum_{\alpha = 1}^{\nu}\delta \left({\bf u} - {\bf u}_{\alpha}^{i}\right)\right\rangle. \label{xiofr:eq} \end{equation} The overall size of the dendrimer is characterised by its radius of gyration $R_g$, which was measured in the simulation by calculating the quantity: \begin{equation} R_g = \frac{1}{N}\sum_{i=1}^{N} \sqrt{\frac{1}{\nu}\left\langle \sum_{\alpha=1}^{\nu}{\bf u}_{\alpha}^{i}\cdot{\bf u}_{\alpha}^{i} \right\rangle}. \label{rg:eq} \end{equation} In Eqs.\ (\ref{fofq:eq}) - (\ref{rg:eq}) above, the summand in the sum over $i$ is the corresponding quantity (form factor, density profile, and radius of gyration, respectively) of the $i$-th dendrimer. The additional summation over $i$ and the division by the total number of dendrimers correspond to an additional average over {\it all} dendrimers. Since all macromolecules are equivalent, the expectation values are identical for every summand. Finally, we also measured the scattering function $I(q)$ of the concentrated solution, which corresponds to the coherent contribution of the total scattering intensity in a SANS experiment, under the assumption that all monomers possess the same scattering length density \cite{mb:macrom:99, mb:mcp:00, mb:mcp:02, benoit}.
This is given by the equation: \begin{equation} I(q) = \frac{1}{N\nu}\left\langle \sum_{i=1}^{N}\sum_{j=1}^{N} \sum_{\alpha = 1}^{\nu}\sum_{\beta=1}^{\nu} \exp\left[-{\rm i}{\bf q}\cdot \left({\bf R}_{\alpha}^{i}-{\bf R}_{\beta}^{j}\right) \right]\right\rangle, \label{iofq:eq} \end{equation} i.e., it is the total coherent scattering intensity from all monomers of the system. In the effective picture, all information regarding the monomers' degrees of freedom is lost, hence in the effective simulation we can only measure the corresponding radial distribution function $g_{\rm eff}(r)$ and the structure factor $S_{\rm eff}(q)$ of the centres of mass. These are given by Eqs.\ (\ref{gofr:eq}) and (\ref{sofq:eq}) above but with the averages now performed with the effective Hamiltonian, i.e., \begin{equation} g_{\rm eff}(r) = \frac{1}{N}\left\langle \sum_{i=1}^{N}\sum_{j \ne i}^{N} \delta\left({\bf r}-{\bf r}_{ij}\right) \right\rangle_{{\mathcal H}_{\rm eff}}, \label{gofreff:eq} \end{equation} and \begin{equation} S_{\rm eff}(q) = \frac{1}{N}\left\langle \sum_{i=1}^{N}\sum_{j=1}^{N} \exp\left[-{\rm i}{\bf q}\cdot \left({\bf r}_i - {\bf r}_j\right)\right] \right\rangle_{{\mathcal H}_{\rm eff}}. \label{sofqeff:eq} \end{equation} The effective Hamiltonian ${\mathcal H}_{\rm eff}$ involves the momenta ${\bf p}_i$ and positions ${\bf r}_i$ of the centres of mass only and contains exclusively pair interactions, i.e., \begin{equation} {\mathcal H}_{\rm eff} = \sum_{i=1}^{N}\frac{{\bf p}_i^2}{2 m} +\frac{1}{2}\sum_{i=1}^{N}\sum_{j\ne i}^{N} V_{\rm eff}^{(2)}(|{\bf r}_i - {\bf r}_j|;\delta), \label{heff:eq} \end{equation} where $m$ is the dendrimers' mass, which is irrelevant as far as static quantities of the system are concerned. 
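The centre-of-mass structure factor of Eq.\ (\ref{sofq:eq}) can be estimated directly from stored configurations. The following is a minimal sketch of such an estimator (our own illustration, not the analysis code of this work), using only wavevectors commensurate with the periodic box, ${\bf q} = (2\pi/L)(n_1,n_2,n_3)$; for uncorrelated (ideal-gas) positions it should return $S(q) \approx 1$ on average:

```python
import numpy as np

def structure_factor(positions, L, qmax):
    """Return |q| and S(q) = |sum_j exp(-i q.r_j)|^2 / N for all box-
    commensurate wavevectors q = (2*pi/L)*(n1, n2, n3), 0 <= n_k <= nmax."""
    N = len(positions)
    nmax = int(qmax * L / (2 * np.pi))
    qs, sq = [], []
    for n in np.ndindex(nmax + 1, nmax + 1, nmax + 1):
        if n == (0, 0, 0):
            continue                           # skip the forward-scattering term
        q = 2 * np.pi * np.array(n) / L
        rho_q = np.sum(np.exp(-1j * (positions @ q)))   # collective density mode
        qs.append(np.linalg.norm(q))
        sq.append(np.abs(rho_q) ** 2 / N)
    return np.array(qs), np.array(sq)

# Ideal-gas sanity check: random uniform positions give S(q) ~ 1 on average.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=(200, 3))
q, s = structure_factor(x, L=10.0, qmax=4.0)
print(round(float(np.mean(s)), 2))   # close to 1 for uncorrelated positions
```

In practice one additionally bins the $S(q)$ values by $|{\bf q}|$ and averages over configurations; restricting to commensurate wavevectors avoids artefacts from the periodic boundary conditions.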
A particular property of the effective description of a complex system is that it leaves all correlation functions between the coarse-grained degrees of freedom invariant {\it provided} that the mapping into the effective system is {\it exact} \cite{likos:pr:01}. In other words, if the effective Hamiltonian contains the contributions to the effective potential at {\it all orders}, it makes no difference whether one calculates quantities such as $g(r)$ or $S(q)$ in the original, microscopic description or in the coarse-grained one. As our effective Hamiltonian ${\mathcal H}_{\rm eff}$ is truncated at the pair level, the deviations between $g(r)$ and $g_{\rm eff}(r)$ or, equivalently, between $S(q)$ and $S_{\rm eff}(q)$ will be a measure of the importance of the neglected many-body terms in Eq.\ (\ref{heff:eq}). \begin{figure} \begin{center} \begin{minipage}[h]{7.2cm} \includegraphics[width=8cm,clip]{figure03a.eps} \includegraphics[width=8cm,clip]{figure03b.eps} \end{minipage} \end{center} \caption{Comparison between the results from the monomer-resolved and the effective simulation of concentrated dendrimers with maximal thread length $\delta = 0.1$ of the bonds. The three different densities are $\rho = 0.1\rho_{\rm max}$, $0.5\rho_{\rm max}$ and $\rho_{\rm max}$, as indicated on the plots, with $\rho_{\rm max}\tau^3=0.605$. 
Results are shown for (a) the radial distribution function $g(r)$ and (b) the structure factor $S(q)$ of the centre of mass coordinates.} \label{G4_0.1_GrSq} \end{figure} \begin{figure} \begin{center} \begin{minipage}[h]{7.2cm} \includegraphics[width=8cm,clip]{figure04a.eps} \includegraphics[width=8cm,clip]{figure04b.eps} \end{minipage} \end{center} \caption{Same as Fig.\ \ref{G4_0.1_GrSq} but for thread length $\delta=2.0$.} \label{G4_2.0_GrSq} \end{figure} Representative results comparing the two approaches are shown in Fig.\ \ref{G4_0.1_GrSq}, pertaining to the dendrimers with $\delta = 0.1$, and in Fig.\ \ref{G4_2.0_GrSq}, which refers to dendrimers with $\delta = 2.0$. The length scale used in this plot is the zero-density gyration radius of the dendrimers, $R_{g,\infty}$. For clarity, only the results for three different densities obtained from the monomer-resolved simulations are compared to those from the effective ones. At sufficiently low densities, $\rho = 0.1\rho_{\rm max}$, the results from the two types of simulations are indistinguishable, hence the pair-potential approximation is an excellent one and many-body forces seem to play no role there; they can thus be safely ignored. Deviations between the two descriptions arise nevertheless as the overall concentration of the solution grows. Referring to Fig.\ \ref{G4_0.1_GrSq}(a), we see that for the $\delta = 0.1$-dendrimers, which have a rather high internal monomer density, the deviations are already visible (but small) at a density $\rho = 0.5\rho_{\rm max}$ and they become more pronounced at the highest simulated density, $\rho = \rho_{\rm max}$. The true radial distribution function $g(r)$ between the centres of mass shows a more pronounced coordination than the effective one, $g_{\rm eff}(r)$, and this effect is also reflected in the corresponding structure factors.
The peak height of $S(q)$ is higher than the one of $S_{\rm eff}(q)$, pointing to the fact that the zero-density pair potential somewhat underestimates the strength of the repulsions between the dendrimers' centres of mass. The relative deviation between the two descriptions in the peak height is about $6\%$ at the highest density. Much more drastic is the discrepancy of the $S(q \to 0)$ limit, for which $S(q \to 0) = 0.018$ whereas $S_{\rm eff}(q \to 0) = 0.033$. Given the fact that the $S(q = 0)$-value is proportional to the osmotic isothermal compressibility of the solution, employing the effective picture can lead here to serious errors in the calculation of the thermodynamics of the system. Two integrations of the inverse compressibility are needed in order to obtain the Helmholtz free energy of the solution, hence errors at all lower densities accumulate in performing such an integration and they can lead to a serious underestimation of the free energy if the effective picture is employed. The agreement between the microscopic and the coarse-grained approaches is a lot better for the case of the $\delta = 2.0$-dendrimers, which possess a much lower internal monomer density than their $\delta = 0.1$-counterparts. Indeed, as can be seen in Fig.\ \ref{G4_2.0_GrSq}(a), the radial distribution functions $g(r)$ and $g_{\rm eff}(r)$ barely show any difference, all the way up to the maximum density $\rho_{\rm max}$. Similar to the case $\delta = 0.1$, $g(r)$ shows a slightly more pronounced coordination than $g_{\rm eff}(r)$; the difference between the two is nevertheless extremely small. The same holds for the structure factors $S(q)$ and $S_{\rm eff}(q)$, shown in Fig.\ \ref{G4_2.0_GrSq}(b). Here, even the discrepancy in the compressibility is very small, with $S(q \to 0) = 0.132$ and $S_{\rm eff}(q \to 0) = 0.138$ at $\rho = \rho_{\rm max}$.
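The thermodynamic route alluded to above can be made concrete: the compressibility relation gives $\partial(\beta P)/\partial\rho = 1/S(0;\rho)$ and, via the Gibbs-Duhem relation, $\partial(\beta\mu_{\rm ex})/\partial\rho = [1/S(0;\rho) - 1]/\rho$, so that two integrations of $1/S(0)$ yield the excess free energy per particle, $\beta F_{\rm ex}/N = \beta\mu_{\rm ex} - \beta P_{\rm ex}/\rho$. The sketch below (our own illustration with a hypothetical, smooth $S(0;\rho)$ curve, not simulation data) implements this double integration numerically:

```python
import numpy as np

def free_energy_from_S0(rho, S0):
    """Excess free energy per particle, beta*F_ex/N, from S(0; rho) via
    two compressibility integrations:
      d(beta*P)/d(rho)      = 1/S(0),
      d(beta*mu_ex)/d(rho)  = (1/S(0) - 1)/rho,
      beta*F_ex/N           = beta*mu_ex - beta*P_ex/rho.
    rho[0] must be small enough that S(0) is close to the ideal-gas value 1."""
    h = np.diff(rho)
    # cumulative trapezoid for the excess chemical potential
    m = (1.0 / S0 - 1.0) / rho
    mu_ex = np.concatenate(([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * h)))
    # cumulative trapezoid for the excess pressure
    g = 1.0 / S0 - 1.0
    p_ex = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * h)))
    return mu_ex - p_ex / rho

# Hypothetical, monotonically decreasing S(0; rho) mimicking a soft repulsion;
# for this form the result is analytically beta*F_ex/N = 4*rho.
rho = np.linspace(1e-4, 0.6, 200)
S0 = 1.0 / (1.0 + 8.0 * rho)
f_ex = free_energy_from_S0(rho, S0)
print(round(float(f_ex[-1]), 2))   # ~2.4, matching the analytic value 4*0.6
```

Since the integrand is $1/S(0)$, overestimating $S(0)$ at every density (as the effective picture does here) systematically lowers the integrated free energy, which is precisely the accumulation of errors described in the text.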
For dendrimers with a higher degree of internal freedom, the pair potential approximation holds all the way up to the overlap concentration. In this respect, it is very satisfactory that it is precisely the model with the value $\delta = 2.0$ that has been found to accurately describe scattering data from real dendrimers \cite{ingo:jcp:04}. \begin{figure} \begin{center} \begin{minipage}[h]{7.2cm} \includegraphics[width=8cm,clip]{figure05a.eps} \includegraphics[width=8cm,clip]{figure05b.eps} \end{minipage} \end{center} \caption{The radial monomer density profiles $\xi(u)$ [Eq.\ (\ref{xiofr:eq})] of the dendrimers around their centres of mass at infinite dilution ($\rho = 0$) and at the highest density $\rho = \rho_{\rm max} = 1.07\rho_{*}$, as indicated on the plots. (a) For model dendrimers with thread length $\delta = 0.1$ and (b) for $\delta = 2.0$. Note the shrinkage and growth of the profiles.} \label{G4_rho} \end{figure} Let us now try to obtain some physical insight into the mechanisms that cause the true correlation functions to show higher ordering than the effective ones. Suppose that the reason lay in the increasing significance of three-body effective forces. Three-body potentials arise through three-dendrimer overlaps: the region of space in which three spherical objects simultaneously overlap is overcounted when one sums over the three pair interactions and it has to be subtracted anew. The fact that any overlap between repulsive monomers gives rise to a correspondingly repulsive interaction, together with the fact that the contribution from the triple-overlap region has to be {\it subtracted}, leads to the conclusion that triplet forces should be {\it attractive}, as for the case of star polymers \cite{ferber:epje:01}, as well as self-avoiding polymer chains \cite{ard:pre:01}, for which three-body forces have been measured explicitly \cite{foot:charge}.
Yet, an attractive contribution to the potential energy leads to a {\it reduced} effective pair repulsion. This is on the one hand intuitively clear and, on the other, it can be put in formal terms by making a density expansion of the density-dependent pair interaction up to linear order in density, see Eq.\ (10) of Ref.\ \cite{ard:pre:01}. Thus, we would then obtain a {\it weakening} of the correlations and an {\it increase} of the osmotic compressibility, whereas in Figs.\ \ref{G4_0.1_GrSq} and \ref{G4_2.0_GrSq} exactly the opposite is true. In order to obtain the true $g(r)$ at $\rho = \rho_{\rm max}$ for the $\delta = 0.1$-dendrimers, a renormalised effective pair potential $\tilde{V}_{\rm eff}^{(2)}(r;\delta,\rho)$ can be employed that is more strongly repulsive than the original one, $V_{\rm eff}^{(2)}(r;\delta)$; as a matter of fact, we were able to reproduce $g(r)$ at $\rho_{\rm max}$ by using $\tilde{V}_{\rm eff}^{(2)}(r;\delta = 0.1,\rho_{\rm max}) \cong 1.2\,V_{\rm eff}^{(2)}(r;\delta = 0.1)$. A similar effect has been observed for polymer chains \cite{ard:pre:01}, for which the density-dependent, renormalised pair potential necessary to reproduce $g(r)$ at high concentrations was found to be more repulsive than the one that holds at $\rho = 0$, whereas, at the same time, the correction arising from triplet forces alone goes in the opposite direction of weakening the pair repulsions. The above considerations point to the fact that the deviations between $g(r)$ and $g_{\rm eff}(r)$ are a genuinely many-body effect that arises from the high concentration of the solution per se and cannot be attributed to three-body forces alone. In particular, the presence of many dendrimers surrounding a given one in the concentrated solution gives rise to a deformation of the dendrimer itself. To corroborate this statement, we have measured the concentration-dependent monomer density profiles $\xi(u)$ around the dendrimers' centre of mass, given by Eq.\ (\ref{xiofr:eq}).
Results are shown in Fig.\ \ref{G4_rho}(a) for the case $\delta = 0.1$ and in Fig.\ \ref{G4_rho}(b) for the case $\delta = 2.0$. It can be seen that as a result of the crowding of the dendrimers at the highest concentration, the monomer profiles become slightly shorter in range and grow in height; in other words, the dendrimers {\it shrink} as a result of the increased overall concentration, as can also be witnessed by the reduction of their radius of gyration shown in Fig.\ \ref{Rg}. The molecules that effectively interact are {\it stiffer} at higher densities than at lower ones; their internal monomer concentration grows with $\rho$ and, as a result of this deformation, the interaction between two dendrimers becomes more repulsive than at zero density. \begin{figure} \begin{center} \includegraphics[width=8cm,clip]{figure06.eps} \end{center} \caption{The dependence of the dendrimers' radius of gyration on the solution density for the two types of model macromolecules, as indicated in the legend.} \label{Rg} \end{figure} The above claim is supported by the fact that the effect of the concentration on the pair interaction is much more pronounced for the dendrimers with the short thread length than for those with the longer one. Although the monomer profiles for {\it both} dendrimer kinds grow with $\rho$, the internal monomer concentration for the stiffer dendrimers is much higher than the one for the softer ones. A concentration-induced increase of $\xi(u)$ has a much stronger effect for the effective interaction of the stiff dendrimers than for the soft ones, since it occurs at a scale of $\sigma^3\xi(u) \sim 0.4$ for the former but at a scale of $\sigma^3\xi(u) \sim 0.1$ for the latter, see Fig.\ \ref{G4_rho}. The monomer beads are modeled here as hard spheres.
The change in the free energy of a hard-sphere fluid upon an increase of the local density is highly nonlinear and grows rapidly with increasing packing fraction, hence the effect is much more pronounced for the case $\delta = 0.1$ than for the case $\delta = 2.0$. Another way of expressing the vast discrepancy in the monomer crowding of the two systems is to look at the monomer packing fraction $\eta_{\rm m}$. As there are $\nu$ monomers per dendrimer, this quantity is given by the expression \begin{equation} \eta_{\rm m} = \frac{\pi}{6}\nu\rho\tau^3\left(\frac{\sigma}{\tau}\right)^3. \label{etam:eq} \end{equation} For both types of dendrimers, $\nu = 62$ and $\rho_{\rm max}\tau^3 = 0.605$. Yet, the ratio $\sigma/\tau$ has the value 0.188 for $\delta = 0.1$ and $0.101$ for $\delta = 2.0$, see the last column of Table \ref{TABparameters}. Accordingly, at $\rho = \rho_{\rm max}$ we obtain $\eta_{\rm m} = 0.13$ for $\delta = 0.1$ but $\eta_{\rm m} = 0.02$ for $\delta = 2.0$. The soft dendrimers have a much lower monomer packing fraction at $\rho_{*}$ than the stiffer ones, a result that can be traced to the fact that their radius of gyration is larger.\footnote{This is characteristic for non-compact objects: for polymer chains, e.g., one obtains $\eta_{\rm m} \sim R_g^{-4/3}$ at the overlap concentration \cite{likos:pr:01}.} Thus, we conclude that the density-dependence of the pair interaction can be traced back to the shrinking of the dendrimers, a phenomenon that leads to increased crowding of the monomers in their interior. \section{Total scattering intensities and the factorisation approximation} \label{scatter:sec} In this section we turn our attention to a different question, which is however related to the issues discussed above, namely to the interpretation of scattering data from concentrated dendrimer solutions. As a first step, we consider the form factor $F(q)$, defined by Eq.\ (\ref{fofq:eq}). 
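As a quick numerical check, Eq.\ (\ref{etam:eq}) with the values quoted above reproduces the stated packing fractions. A minimal Python sketch (the function name is ours; the numbers are those given in the text and in Table \ref{TABparameters}):

```python
from math import pi

def monomer_packing_fraction(nu, rho_tau3, sigma_over_tau):
    """eta_m = (pi/6) * nu * (rho tau^3) * (sigma/tau)^3, cf. Eq. (etam)."""
    return (pi / 6.0) * nu * rho_tau3 * sigma_over_tau**3

# nu = 62 monomers per dendrimer, rho_max * tau^3 = 0.605 for both models
eta_stiff = monomer_packing_fraction(62, 0.605, 0.188)  # delta = 0.1
eta_soft = monomer_packing_fraction(62, 0.605, 0.101)   # delta = 2.0
print(round(eta_stiff, 2), round(eta_soft, 2))  # 0.13 0.02
```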
Clearly, $F(q)$ expresses the {\it intramolecular} correlations between the monomers belonging to a certain dendrimer. When scattering from an infinitely dilute solution, $F(q)$ offers the only contribution to the coherent scattering intensity. Since all the information about the monomer correlations is encoded in $F(q)$, great experimental effort is devoted to the determination of this quantity. At low values of $q$, $qR_{g,\infty} < 1$, the form factor delivers information about the overall size of the molecule whereas at higher values of the scattering wavevector, $q \sim 1/a$, where $a$ is the monomer length, information about the monomer correlations and the fractal dimension of the object is hidden \cite{likos:pr:01, benoit, grest:review}. \begin{figure} \begin{center} \begin{minipage}[h]{7.2cm} \includegraphics[width=8cm,clip]{figure07a.eps} \includegraphics[width=8cm,clip]{figure07b.eps} \end{minipage} \end{center} \caption{The form factors measured in the monomer-resolved simulations for one isolated dendrimer molecule ($\rho = 0$, solid line) and at the highest density ($\rho = \rho_{\rm max}$, dotted line). The model dendrimers have maximum thread length (a) $\delta=0.1$ and (b) $\delta=2.0$.} \label{G4_Fq} \end{figure} Although $F(q)$ is experimentally measured in the limit $\rho \to 0$, the same quantity can be defined at any density. At arbitrary concentrations, $F(q)$ will in general change with respect to its form at infinite dilution, due to possible deformations of the macromolecules. In Fig.\ \ref{G4_Fq} we show the form factors for the two model dendrimers at the lowest and at the highest simulated densities. It can be seen there that there is only a small change in both cases, which takes the form of a slight extension of $F(q)$ to higher $q$-values as the concentration increases. This is consistent with the shrinkage of the dendrimers and the corresponding decrease of the gyration radius.
Indeed, in the Guinier regime, $qR_g < 1$, the form factor has a parabolic profile, $F(q) \cong \nu[1 - (qR_g)^2/3]$, and a reduction of $R_g$ manifests itself as a swelling in $q$-space and vice versa \cite{likos:pre:98}. \begin{figure} \begin{center} \begin{minipage}[h]{7.2cm} \includegraphics[width=8cm,clip]{figure08a.eps} \includegraphics[width=8cm,clip]{figure08b.eps} \end{minipage} \end{center} \caption{The total coherent scattering intensity $I(q)$ [Eq.\ (\ref{iofq:eq})] from concentrated $\delta = 0.1$-dendrimer solutions, compared with the result from the factorisation approximation, Eq.\ (\ref{appr3:eq}), at different overall concentrations $\rho$. (a) $\rho = 0.1\rho_{\rm max}$ and (b) $\rho = 0.5\rho_{\rm max}$. Results using both the form factor $F(q)$ at the given density and its counterpart at infinite dilution, $F_0(q)$ are shown for the factorisation approximation.} \label{fact1_0.1:fig} \end{figure} Let us now turn our attention to the total coherent scattering intensity from all monomers, $I(q)$, given by Eq.\ (\ref{iofq:eq}). It is clear from its definition that $I(q)$ can also be measured in the monomer-resolved simulation and this has been done for both dendrimer species, characterised by the maximum thread extensions $\delta = 0.1$ and $\delta = 2.0$. In attempting to model complex polymeric entities as soft colloids, it is a common procedure to separate the intramolecular from the intermolecular correlations and to write down approximations for the quantity $I(q)$ in which the two types of correlations appear in a factorised fashion. Here we are going to put this approach to the test and figure out the limits of its validity as far as dendritic molecules are concerned. A similar test has been carried out by Krakoviak {\it et al.} \cite{krak:epl:02} who compared results from the PRISM model for polymers with simulations and with the factorisation ansatz.
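As a side remark, the Guinier expansion quoted above provides a simple recipe for extracting $R_g$ from a measured form factor: a fit of $\ln F(q)$ versus $q^2$ at small $q$ has slope $-R_g^2/3$. A minimal sketch on synthetic data (the values of `nu` and `Rg` are illustrative, not taken from the simulations):

```python
import numpy as np

nu, Rg = 62, 2.5                        # illustrative values
q = np.linspace(0.01, 0.3, 30)          # stay in the Guinier regime, q*Rg < 1
F = nu * np.exp(-(q * Rg)**2 / 3.0)     # Guinier form; expands to nu*(1 - (q Rg)^2/3)

# ln F = ln nu - (Rg^2/3) q^2: a linear fit in q^2 yields Rg from the slope
slope, intercept = np.polyfit(q**2, np.log(F), 1)
Rg_fit = np.sqrt(-3.0 * slope)
print(round(Rg_fit, 6))  # 2.5
```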
\begin{figure} \begin{center} \begin{minipage}[h]{7.2cm} \includegraphics[width=8cm,clip]{figure09a.eps} \includegraphics[width=8cm,clip]{figure09b.eps} \end{minipage} \end{center} \caption{(a) Same as Figs.\ \ref{fact1_0.1:fig}(a) and (b) but for $\rho = \rho_{\rm max}$. (b) The true structure factor $S(q)$ between the centres of mass at $\rho = \rho_{\rm max}$, as obtained from the monomer-resolved simulations, compared with the apparent structure factor $S_{\rm app}(q) = I(q)/F_0(q)$.} \label{fact2_0.1:fig} \end{figure} As a first approximate step, one assumes that the intramolecular conformations and centre-of-mass correlations decouple from each other. Correspondingly, Eq.\ (\ref{iofq:eq}) takes the approximate form: \begin{equation} \fl I(q) \cong \frac{1}{N\nu}\sum_{i=1}^N\sum_{j=1}^N \sum_{\alpha=1}^{\nu}\sum_{\beta=1}^{\nu} \left\langle \exp\left[-{\rm i}{\bf q}\cdot \left({\bf r}_i - {\bf r}_j\right)\right] \right\rangle \left\langle \exp\left[-{\rm i}{\bf q}\cdot \left({\bf u}_{\alpha}^{i} - {\bf u}_{\beta}^{j}\right)\right] \right\rangle. \label{appr1:eq} \end{equation} The approximation inherent in Eq.\ (\ref{appr1:eq}) above is a reasonable one for dendrimers. Indeed, as it has been shown in Ref.\ \cite{harreis:jcp:03}, the monomer degrees of freedom are correlated at length scales $\sim \sigma$, whereas for the overall densities $\rho$ considered here, the centers of mass are correlated at lengths at least $\sim R_g$ and the two are well-separated from each other. Hence, at the wavevector-scale $q_{\rm CM} \sim 1/R_g$ at which the centre-of-mass $S(q)$ shows structure, the dendrimers still appear as compact objects and the internal fluctuations can be decoupled from the intermolecular ones. The second approximation is now the following. Suppose that we are at sufficiently low densities, so that close approaches between the centres of mass of the dendrimers are very rare and they carry therefore a negligible statistical weight. 
Then, since monomers belonging to different dendrimers stay far apart, it is reasonable to assume that the deviations from their respective centres of mass are uncorrelated. In this case, one can approximately write: \begin{eqnarray} \fl \nonumber \frac{1}{\nu}\sum_{\alpha=1}^{\nu}\sum_{\beta=1}^{\nu} \left\langle \exp\left[-{\rm i}{\bf q}\cdot \left({\bf u}_{\alpha}^{i} - {\bf u}_{\beta}^{j}\right)\right] \right\rangle & \cong & \frac{1}{\nu}\sum_{\alpha=1}^{\nu}\sum_{\beta=1}^{\nu} \left\langle \exp\left(-{\rm i}{\bf q}\cdot {\bf u}_{\alpha}^{i}\right)\right\rangle \left\langle\exp\left({\rm i}{\bf q}\cdot {\bf u}_{\beta}^{j}\right) \right\rangle \\ & = & \frac{1}{\nu}\left\langle \hat\xi_{\bf q}\right\rangle \left\langle\hat\xi_{-\bf q} \right\rangle, \label{appr2:eq} \end{eqnarray} where $\hat\xi_{\bf q}$ is the Fourier transform of the monomer density operator $\hat\xi({\bf u})$ around the centre of mass of an arbitrary dendrimer:\footnote{The quantity $\xi({\bf u})$ defined in Eq.\ (\ref{xiofr:eq}) is simply the expectation value of the operator $\hat\xi({\bf u})$.} \begin{equation} \hat\xi({\bf u}) = \sum_{\alpha = 1}^{\nu}\delta\left( {\bf u}-{\bf u}_{\alpha}^{i}\right). \label{hatxi:eq} \end{equation} Clearly, the right hand side of Eq.\ (\ref{appr2:eq}) has no dependence on the dendrimer index. At the same time, it has been shown in Ref.\ \cite{harreis:jcp:03} that the product $\nu^{-1}\langle\hat\xi_{\bf q}\rangle\langle\hat\xi_{-{\bf q}}\rangle$ is an excellent approximation for the form factor $F(q)$ of the dendrimers, deviations from the exact expression in Eq.\ (\ref{fofq:eq}), $F(q) = \nu^{-1}\langle\hat\xi_{\bf q}\hat\xi_{-{\bf q}}\rangle$, appearing only at high $q$-values that are unreachable in a typical SANS experiment.
The approximation inherent in Eq.\ (\ref{appr2:eq}) has been derived for monomers belonging to different dendrimers ($i \ne j$) and now, in view of the results of Ref.\ \cite{harreis:jcp:03}, it can be also applied to the case $i = j$. Putting everything together, we obtain \begin{equation} \frac{1}{\nu}\sum_{\alpha=1}^{\nu}\sum_{\beta=1}^{\nu} \left\langle \exp\left[-{\rm i}{\bf q}\cdot \left({\bf u}_{\alpha}^{i} - {\bf u}_{\beta}^{j}\right)\right] \right\rangle \cong F(q). \label{rigid:eq} \end{equation} Eqs.\ (\ref{appr1:eq}) and (\ref{rigid:eq}) now yield the oft-employed {\it factorisation approximation}: \begin{equation} I(q) \cong S(q)F(q), \label{appr3:eq} \end{equation} whose validity will be tested in what follows. \begin{figure} \begin{center} \begin{minipage}[h]{7.2cm} \includegraphics[width=8cm,clip]{figure10a.eps} \includegraphics[width=8cm,clip]{figure10b.eps} \end{minipage} \end{center} \caption{Same as Fig.\ \ref{fact1_0.1:fig} but for $\delta = 2.0$-dendrimers.} \label{fact1_2.0:fig} \end{figure} \begin{figure} \begin{center} \begin{minipage}[h]{7.2cm} \includegraphics[width=8cm,clip]{figure11a.eps} \includegraphics[width=8cm,clip]{figure11b.eps} \end{minipage} \end{center} \caption{Same as Fig.\ \ref{fact2_0.1:fig} but for $\delta = 2.0$-dendrimers.} \label{fact2_2.0:fig} \end{figure} The assumptions that went into the derivation of Eq.\ (\ref{appr3:eq}) above become exact when the particles from which one scatters are rigid colloids \cite{klein:96}, in which case individual scattering centres are devoid of a fluctuating nature. In this context, it is important to note that there is an analog of the factorisation approximation that is applied in the theory of concentrated polymer solutions and carries the name ``rigid particle assumption'' \cite{krak:epl:02, cates:epl:01}. 
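The exactness of the factorisation for rigid scatterers noted above is easy to verify explicitly: for identical rigid particles held in the same orientation, the double sum over monomers factorises configuration by configuration into a centre-of-mass part and an internal part, so that $I(q) = S(q)F(q)$ holds identically. A minimal Python sketch with single-snapshot estimators (the random positions serve only as illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
N, nu = 20, 8                             # molecules and monomers per molecule
R = rng.uniform(0.0, 50.0, (N, 3))        # centres of mass
u = rng.uniform(-1.0, 1.0, (nu, 3))       # one rigid internal shape, same for all
pos = (R[:, None, :] + u[None, :, :]).reshape(-1, 3)  # all monomer coordinates

q = np.array([0.3, 0.1, -0.2])
A = np.exp(-1j * (pos @ q)).sum()         # sum over all monomers
B_cm = np.exp(-1j * (R @ q)).sum()        # sum over centres of mass
B_in = np.exp(-1j * (u @ q)).sum()        # sum over internal coordinates

I = np.abs(A)**2 / (N * nu)               # scattering intensity, cf. Eq. (iofq)
S = np.abs(B_cm)**2 / N                   # centre-of-mass structure factor
F = np.abs(B_in)**2 / nu                  # form factor, cf. Eq. (fofq)
print(bool(np.isclose(I, S * F)))         # True: I(q) = S(q) F(q) exactly here
```

For fluctuating dendrimers each molecule carries its own internal coordinates, the monomer sum no longer factorises, and $I(q) = S(q)F(q)$ holds only approximately, which is precisely what the figures test.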
Here, one starts from Eq.\ (\ref{appr1:eq}) and assumes that monomer-monomer correlations between monomers belonging to different polymers are identical to the intramolecular correlations in any chain \cite{krak:epl:02}. Under this assumption, the second factor on the right-hand side of Eq.\ (\ref{appr1:eq}) above takes the form: \begin{equation} \fl \frac{1}{\nu}\sum_{\alpha=1}^{\nu}\sum_{\beta=1}^{\nu} \left\langle \exp\left[-{\rm i}{\bf q}\cdot \left({\bf u}_{\alpha}^{i} - {\bf u}_{\beta}^{j}\right)\right] \right\rangle \cong \frac{1}{\nu}\sum_{\alpha=1}^{\nu}\sum_{\beta=1}^{\nu} \left\langle \exp\left[-{\rm i}{\bf q}\cdot \left({\bf u}_{\alpha}^{i} - {\bf u}_{\beta}^{i}\right)\right] \right\rangle = F(q), \label{cates:eq} \end{equation} and, in conjunction with Eq.\ (\ref{appr1:eq}), the factorisation approximation of Eq.\ (\ref{appr3:eq}) follows once again. Krakoviak {\it et al.} tested the validity of Eq.\ (\ref{appr3:eq}) for polymer solutions, finding that it breaks down for high polymer densities. We have put the validity of Eq.\ (\ref{appr3:eq}) to a stringent test by comparing the directly measured total coherent scattering intensity $I(q)$ with the product $F(q)S(q)$, where for the latter quantity both factors are the ones measured in the same simulation. Results are shown in Figs.\ \ref{fact1_0.1:fig} and \ref{fact2_0.1:fig}(a) for the $\delta = 0.1$-dendrimers as well as in Figs.\ \ref{fact1_2.0:fig} and \ref{fact2_2.0:fig}(a) for the $\delta = 2.0$-dendrimers. It can be seen that the factorisation approximation is valid at the lowest density shown ($\rho = 0.1\rho_{\rm max}$) but that its quality becomes poorer as the concentration of the solution increases. A dramatic breakdown can be seen in Fig.\ \ref{fact2_0.1:fig}(a) for the more compact dendrimers, whereas the breakdown is also clear (but less spectacular) for the more open dendrimers, Fig.\ \ref{fact2_2.0:fig}(a).
We can now trace the physical origins of the breakdown of the factorisation approximation, Eq.\ (\ref{appr3:eq}). There is first of all a weak breakdown of the first assumption, Eq.\ (\ref{appr1:eq}), in which the centre-of-mass coordinates were decoupled from the fluctuating monomers. Indeed, were this approximation to be true, then the form factor $F(q)$ would remain unchanged at all concentrations. This is however not the case, as the results in Fig.\ \ref{G4_Fq} demonstrate: the dendrimers shrink as $\rho$ grows. Yet, the difference between the infinite-dilution form factor, $F_0(q)$, and its counterpart at finite density, $F(q)$, is not sufficient to account for the failure of the factorisation approximation. As can be seen in Figs.\ \ref{fact1_0.1:fig}(b), \ref{fact2_0.1:fig}(a), \ref{fact1_2.0:fig}(b) and \ref{fact2_2.0:fig}(a), the product $S(q)F(q)$ is in even {\it worse} agreement with $I(q)$ than the product $S(q)F_0(q)$. The reason for the breakdown of Eq.\ (\ref{appr3:eq}) lies in the assumption inherent in deriving the approximation of Eq.\ (\ref{rigid:eq}), namely that fluctuations between monomers belonging to different dendrimers are uncorrelated. At sufficiently low densities $\rho$, this is a reasonable assumption. However, in approaching the overlap density $\rho_{*}$, it does not hold any more. As monomers from different dendrimers begin to crowd with one another, their coordinates with respect to their centres of mass become more and more strongly correlated and Eq.\ (\ref{rigid:eq}) loses its validity. In this respect, it is not surprising that the breakdown of Eq.\ (\ref{appr3:eq}) is more dramatic for the $\delta = 0.1$-dendrimers than for the $\delta = 2.0$-ones. In the former case, the monomer packing fraction is higher and the corresponding correlations between monomers belonging to different molecules stronger than in the latter.
To put it in more pictorial terms: at the overlap concentration it is no longer possible to tell to which dendrimer a monomer belongs, see Fig.\ \ref{rho10:fig}. A clear separation between intra- and inter-dendrimer fluctuations is no longer possible. We finally discuss the consequences of the above findings for the interpretation of scattering data obtained from concentrated dendrimer solutions. The validity of Eq.\ (\ref{appr3:eq}) is often taken for granted: the form factor $F(q)$ is measured in a SANS- or SAXS experiment at low concentrations and extrapolated to infinite dilution to obtain the quantity $F_0(q)$. Thereafter, the measured coherent scattering intensity at any concentration, $I(q)$, is divided by $F_0(q)$, the result being interpreted as the structure factor of the system. In order to differentiate it from $S(q)$, we emphasise here that this is only an {\it apparent} structure factor $S_{\rm app}(q)$, given by \begin{equation} S_{\rm app}(q) = \frac{I(q)}{F_0(q)}. \label{sapp:eq} \end{equation} In Figs.\ \ref{fact2_0.1:fig}(b) and \ref{fact2_2.0:fig}(b) we compare the apparent structure factors for the two dendrimer species at the highest simulated density with the true ones. It can be seen that the process of applying Eq.\ (\ref{sapp:eq}) has the effect of producing apparent structure factors that are everywhere lower than the true ones and they even fail to reach the asymptotic value unity in the range considered. Such structure factors from concentrated dendrimer solutions have been published in Refs.\ \cite{topp:macrom:99} and \cite{ramzi:macrom:98}, in which they have been correctly termed `apparent'. It is important here to point out that apparent structure factors can lead to false conclusions regarding the validity of the pair potential approximation in mesoscopic theories of dendrimer solutions.
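The bias introduced by Eq.\ (\ref{sapp:eq}) can be mimicked with a toy calculation: if the molecules shrink at finite density, dividing the intensity by the dilute-limit form factor $F_0(q)$ instead of the true $F(q)$ rescales the structure factor by the factor $F(q)/F_0(q)$. A minimal sketch using a two-monomer `molecule' on a line (all numbers are illustrative, not taken from the simulations):

```python
import numpy as np

nu, q = 2, 0.8
b0 = 1.0                              # monomer half-separation at infinite dilution
b = 0.9 * b0                          # shrunk at high density

F0 = (2.0 * np.cos(q * b0))**2 / nu   # dilute-limit form factor, = 2 cos^2(q b0)
F = (2.0 * np.cos(q * b))**2 / nu     # form factor of the shrunk molecule

S_true = 0.75                         # illustrative true structure factor value
I = S_true * F                        # intensity, assuming factorisation still held
S_app = I / F0                        # apparent structure factor, Eq. (sapp)
print(round(S_app, 3))                # 0.873: biased away from the true value 0.75
```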
Indeed, as we have explicitly shown in this work, many-body effective potentials play only a minor role in concentrated dendrimer solutions; therefore, one can obtain accurate structure factors from theory by working with a density-independent pair potential. If, however, these structure factors were to be compared with the apparent experimental quantities $S_{\rm app}(q)$, discrepancies of the kind shown in Figs.\ \ref{fact2_0.1:fig}(b) and \ref{fact2_2.0:fig}(b) would show up. It would then be possible to argue that these discrepancies are due to the breakdown of the pair potential approximation but, as we have shown here, this conclusion would be unwarranted. The reason for the disagreement between theory and `experiment' would, in this case, lie in the employment of an {\it erroneous} approximation, Eq.\ (\ref{appr3:eq}), in deriving apparent structure factors from the experimental data. It is worth noting that Krakoviak {\it et al.} \cite{krak:epl:02} reached similar conclusions for the case of polymer solutions, although they did not formally introduce an apparent structure factor into their considerations. \section{Summary and concluding remarks} \label{summary:sec} We have carried out extensive, monomer-resolved and effective simulations of model dendrimers in order to calculate correlation functions between the centres of mass of the macromolecules and the individual monomers themselves. By comparing the real-space correlation functions obtained by the two simulation approaches, we found that many-body effective potentials play a minor role up to the overlap density and they can be altogether ignored for open dendrimers with long bond lengths. Our finding for the scattering intensity, on the other hand, is that the factorisation approximation of this quantity into a form- and a structure factor loses its validity as one approaches the overlap concentration.
Structure factors that are obtained from experimental data by dividing the scattering intensity by the form factor can be seriously in error. It appears, therefore, that the extraction of an accurate structure factor from concentrated dendrimer solutions is extremely difficult as one approaches the overlap concentration. We anticipate that this result is also valid for other `polymeric colloids' such as star-shaped polymers and brushes. One strategy to circumvent this inherent difficulty is to use the labeling technique, in which a small, inner part of the molecule is protonated and the rest is deuterated in such a way that the contrast between the outermost part of the molecule and the solvent vanishes. In this way, only the innermost part of the molecule will have contrast with the solvent and scatter coherently. Thus, one can reach concentrations for the whole system that exceed $\rho_{*}$, whereas the labeled parts are still nonoverlapping. Such a technique was successfully applied, e.g., to star polymers \cite{likos:prl:98}. \section*{Acknowledgments} This work has been supported by the Deutsche Forschungsgemeinschaft (DFG). \section*{References}
\section{Introduction} The scalar field self-coupling and the Yukawa couplings in the electroweak theory are believed to vanish in the limit of infinite cut-off, as suggested by the signs of the perturbative $\beta$-functions. But the confirmation of this ``triviality'' of these couplings and a reliable determination of its consequences require nonperturbative methods, because one has to control a very difficult regime -- when the renormalized couplings have maximal possible values which could, in principle, be large. A use of nonperturbative lattice methods for these purposes is desirable. In the pure scalar sector of electroweak interactions these methods have been successful in estimating the upper bound for the Higgs boson mass in the approximation neglecting the gauge and fermion fields \cite{DaNe83}--\cite{LuWe}. They also indicate that the renormalized quartic coupling is lower than the upper bound obtained from the tree level unitarity. This result makes a strongly interacting Higgs sector rather improbable and explains the quantitative agreement with calculations using the renormalized perturbation theory. Weakly coupled gauge fields do not seem to have any unexpected effects on the $\Phi^4$ theory \cite{HaHa86b} and can thus be treated perturbatively. Possible effects of a strong Yukawa coupling remain relatively unexplored, however. The most important phenomenological questions are: \begin{itemize} \item What would be the effect of a strong Yukawa coupling on the upper bound for the Higgs boson mass \cite{Li86}? \item What is the lower bound on the Higgs boson mass which follows from the vacuum stability requirement in the case of a strong Yukawa coupling \cite{Li86,LOWERB}? \item How strong can the Yukawa coupling actually be? What is the maximal fermion mass which can be generated by a Yukawa interaction \cite{EiGo86}? 
\end{itemize} {}From the general point of view of quantum field theory the investigations of models with strong Yukawa coupling attempt to elucidate the following issues: \begin{itemize} \item Are the Yukawa theories in 4 dimensions really trivial or do some nontrivial fixed points exist? \item If they are trivial, how far and in what form Yukawa theories can be used as effective quantum field theories? \item What are their relations to the purely fermionic theories with four-fermion coupling \cite{Na88}--\cite{Zi91}? \end{itemize} Recently numerous explorations of various lattice Yukawa models have been performed \cite{YCONT}--\cite{MIRRORS} (for a recent review see \cite{DeJe92}). In the following discussion, only the models with the so-called lattice parametrization and single-site Yukawa coupling are considered. The studies reveal the existence of three qualitatively different regions of the bare Yukawa coupling $y$. If $y$ is sufficiently small, the lattice Yukawa models behave according to the perturbation theory based on the quasi-classical picture of the spontaneous symmetry breaking (SSB) in the scalar sector. For large $y$ the fermions decouple in the continuum limit so that the physically interesting range of the values of $y$ is restricted from above. In the intermediate region, the phase with nonvanishing scalar field expectation value $\langle \Phi \rangle$ exists even if the nearest neighbour coupling $\kappa$ of the scalar field is weak or even antiferromagnetic ($\kappa < 0$). The corresponding SSB is then generated by the Yukawa coupling. In particular, at $\kappa=0$ the Yukawa models on the lattice correspond to the pure fermion theories with multifermion couplings of the Nambu--Jona-Lasinio type \cite{Na88}--\cite{Zi91} and one realizes that the SSB in such theories can be understood on the lattice as a special case of the SSB caused by the Yukawa interaction. 
In the light of the above-mentioned goals a very important observation \cite{BoDe91a} is that it is this intermediate region where the renormalized Yukawa coupling achieves its maximal values. Thus there is ample motivation to investigate SSB in Yukawa theories on the lattice in the regime where the Yukawa interaction is its driving mechanism and the standard quasi-classical approximation based on the scalar field potential is not adequate. Our recent numerical investigations of the SU(2)$_L\otimes$SU(2)$_R$ lattice Yukawa model with naive lattice fermions have shown \cite{Ja91,BoDe91a} that in the region of intermediate values of $y$ and negative $\kappa$ the scalar field propagators have a very complex form. As it turns out, there are two reasons for this: Firstly, the Yukawa coupling here causes sizeable fermion loop corrections to the scalar propagator. These effects can actually be used to estimate the value of the renormalized Yukawa coupling. Secondly, the antiferromagnetic ordering tendency of the negative scalar field coupling $\kappa$ competes with the ferromagnetic ordering effect of the Yukawa interaction. This shows up at large momenta and is therefore a lattice artefact which, however, has to be taken into account particularly on small lattices in the analysis of the scalar propagators. In this paper we develop reliable methods of analysis of numerical data in Yukawa models and present some results for the renormalized Yukawa and scalar quartic couplings. The main improvements over previous investigations of a similar type are threefold: \begin{itemize} \item[(i)] We have been able to analyze the scalar propagators in a very satisfactory way by including the 1-fermion loop contribution and as a result the Goldstone wave function renormalization constant is now determined quite precisely.
\item[(ii)] To gain experience with finite-size effects we have varied the lattice size up to $12^3 \times 16$ and have found the main effect to be the Goldstone boson dictated $1/L^2$ dependence. \item[(iii)] We have also covered nearly the full range of bare parameters which includes the maximum possible bare Yukawa coupling, staying within the lattice artefact-free region of the coupling parameter space. \end{itemize} However, the fermion number in our simple model with naive fermions is too large. This probably causes the most serious problem we have found, namely that the fermion mass stays much below the scalar mass even for the largest renormalized Yukawa coupling. Therefore phenomenologically relevant large scale investigations of Yukawa theories should be performed in more realistic models. After defining the model in sect.~2 we describe in sect.~3 the spectrum in various phases and point out the appearance of the ``staggered'' scalar states to be interpreted as lattice artefacts occurring on a hypercubic lattice in the vicinity of the phases with antiferromagnetic ordering. In sect.~4 we discuss the physical meaning of the SSB occurring at negative $\kappa$. The complex structure of the scalar propagators and the method of their analysis on finite lattices are described in sects.~5--7. In sect.~8 the fermion and scalar masses are discussed. Some results for the renormalized Yukawa coupling and the renormalized scalar field quartic coupling are presented in sects.~9 and 10, respectively. We summarize our results and conclude in sect.~11. Also a brief appendix elucidating the properties of models with both the ferromagnetic and antiferromagnetic orderings is included.
\section{The SU(2)$_L\otimes$SU(2)$_R$ model with naive fermions} Our starting point is a fermion-scalar system in the continuum, which is defined in euclidean space-time by the action \begin{eqnarray} S_0 &=& \int \mbox{d}^4x \left[ \, \frac{1}{2} \; {\textstyle \frac{1}{2} \mbox{Tr}\,} \left\{ (\partial_{\mu} \Phi_0 )^{\dagger} (\partial^{\mu} \Phi_0) \right\} + \frac{m_0^2}{2} \; \; {\textstyle \frac{1}{2} \mbox{Tr}\,} \left\{ \Phi_0^{\dagger} \Phi_0 \right\} + \frac{g_0}{4!} \; {\textstyle \frac{1}{2} \mbox{Tr}\,} \left\{ \left( \Phi_0^{\dagger} \Phi_0 \right)^2 \right\} \right. \nonumber \\ && \hspace{2cm} + \left. \overline{\Ps}_0 \gamma^{\mu} \partial_{\mu} \Psi_0 + y_0 \overline{\Ps}_0 \left( \Phi_0 P_R + \Phi_0^{\dagger} P_L \right) \Psi_0 \vphantom{ \left( \frac{1}{2} \right)^2 } \right] \; . \label{LA} \end{eqnarray} Here $\Phi_0$ is a 4-component scalar field with a quartic self-coupling and $\Psi_0$ is a fermionic SU(2) doublet field coupled to the scalar field by a Yukawa interaction with the bare Yukawa coupling parameter $y_0$. $P_R$ and $P_L$ are the right- and left-handed chiral projectors. Because both fermions in the doublet couple with the same strength $y_0$ to the scalar field, the action has a global SU(2)$_R$ flavour symmetry. Together with the global SU(2)$_L$ symmetry, which would turn into a local symmetry when gauge fields are included, the action is invariant under the global chiral SU(2)$_L\otimes$SU(2)$_R$ transformations \begin{eqnarray} \Psi_0 & \rightarrow & (\Omega_L P_L + \Omega_R P_R) \Psi_0, \label{PSIT} \\ \overline{\Ps}_0 & \rightarrow & \overline{\Ps}_0 (\Omega_L^{\dagger} P_R + \Omega_R^{\dagger} P_L), \label{PSIBT} \\ \Phi_0 & \rightarrow & \Omega_L^{} \Phi_0 \, \Omega_R^{\dagger} \label{PHIT}, \end{eqnarray} where $\Omega_{L,R} \in$ SU(2)$_{L,R}$. We regularize the model by introducing a 4-dimensional hypercubic lattice with the lattice spacing $a$.
A simple possibility is to keep the {\em continuum parametrization} and just to replace the derivatives by the lattice differences in the action (\ref{LA}). But for a study of the largest renormalized Yukawa coupling it turns out to be very important to rescale the fields \begin{equation} \Phi_0 (x)=\sqrt{2 \kappa} \; \Phi_x / a \;, \;\;\;\;\;\;\; \Psi_0 (x) = \Psi_x / a^{3/2} \label{A} \end{equation} and reparametrize the coupling parameters \begin{equation} (a m_0)^2=\frac{1-2\lambda}{\kappa} - 8\;, \;\;\;\; g_0 =\frac{6 \lambda}{\kappa^2} \;, \;\;\;\; y_0 = \frac{y}{ \sqrt{2 \kappa} }\;, \label{BC} \end{equation} thus ending up with the model in the {\em lattice parametrization} defined by the action \begin{eqnarray} S &=&-\kappa \; \sum_{x \mu} \; {\textstyle \frac{1}{2} \mbox{Tr}\,} \left\{ \Phi^{\dagger}_x \Phi^{}_{x + \hat{\mu}} + \Phi^{\dagger}_{x+ \hat{\mu}} \Phi^{}_x \right\} + \sum_{x} \; {\textstyle \frac{1}{2} \mbox{Tr}\,} \left\{ \Phi_x^{\dagger} \Phi_x +\lambda \left( \Phi_x^{\dagger} \Phi_x - 1\!\!1 \right)^2 \right\} \nonumber \\ & &+ \sum_{x \mu} \frac{1}{2} \left(\overline{\Ps}_x \gamma_{\mu} \Psi_{x + \hat{\mu}} - \overline{\Ps}_{x + \hat{\mu}} \gamma_{\mu} \Psi_x \right) + y \; \sum_x \overline{\Ps}_x ( \Phi_x^{} P_R + \Phi_x^{\dagger} P_L) \Psi_x \; .\label{SS} \end{eqnarray} We note that all the fields and the parameters in this expression are dimensionless. In the following we set the bare quartic coupling $\lambda$ of the scalar field to infinity, which leads to a freezing of the radial mode of the scalar field, $\; {\textstyle \frac{1}{2} \mbox{Tr}\,}\Phi^\dagger_x \Phi_x = 1$, so that $\Phi_x$ can be represented by an SU(2) matrix. The hopping parameter of the scalar field $\kappa$ and the bare Yukawa coupling $y$ are left as free parameters. 
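The reparametrisation (\ref{A})--(\ref{BC}) is straightforward to tabulate; a minimal Python sketch (the function name and the sample parameter values are ours), which also makes explicit that the map is only defined for $\kappa > 0$:

```python
def continuum_couplings(kappa, lam, y):
    """Map lattice parameters (kappa, lambda, y) to continuum ones, cf. Eq. (BC)."""
    if kappa <= 0.0:
        # for kappa <= 0 the continuum parametrization breaks down
        raise ValueError("continuum parametrization undefined for kappa <= 0")
    am0_sq = (1.0 - 2.0 * lam) / kappa - 8.0  # (a m_0)^2
    g0 = 6.0 * lam / kappa**2                 # bare quartic coupling g_0
    y0 = y / (2.0 * kappa)**0.5               # bare Yukawa coupling y_0
    return am0_sq, g0, y0

am0_sq, g0, y0 = continuum_couplings(kappa=0.1, lam=0.01, y=1.0)
print(round(am0_sq, 6), round(g0, 6), round(y0, 4))  # 1.8 6.0 2.2361
```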
As usual in statistical mechanics, in the action (\ref{SS}) also the values $\kappa < 0$ are admissible, though the relationships (\ref{A}) and (\ref{BC}) to the continuum parametrization are not defined for negative $\kappa$. For the sake of simplicity we are using the naive lattice fermions and the model actually involves 16 degenerate fermion doublets. There are some approaches trying to achieve a more realistic fermion spectrum preserving the chiral symmetry of the action. So far none of these approaches have proved satisfactory (see e.g. \cite{BoDe91b} and for recent reviews \cite{DeJe92,CHIRAL}). An analytic treatment of the naive model by the $1/N$ expansion is not feasible in the investigated limit $\lambda\!=\!\infty$. In order to use the Hybrid Monte Carlo algorithm in the numerical simulations we further have to double the number of fermions by squaring the fermion determinant. Thus we are actually simulating the model with 32 fermion doublets expecting that qualitative aspects of Yukawa models are not changed by a large fermion number. One of such properties, found in many lattice Yukawa models with different symmetries and numbers of fermions \cite{YLAT}--\cite{MIRRORS}, is the occurence of SSB even at negative values of the hopping parameter $\kappa$. Here we face the somewhat puzzling fact that the relation (\ref{BC}) between the coupling parameters of the continuum (\ref{LA}) and lattice parametrizations (\ref{SS}) breaks down for negative $\kappa$. As long as physical (renormalized) quantities are considered, the region $\kappa<0$ can have a well defined physical meaning. The physically interesting region of the parameter space seems to be extended in the lattice parametrization and the use of this parametrization is thus crucial. 
This observation motivates definitions of renormalized quantities which do not use the field $\Phi_0$ nor the transformations (\ref{A}) and (\ref{BC}) and are thus applicable also for $\kappa < 0$: The scalar field in lattice parametrization is renormalized in the phase with SSB as \begin{equation} \Phi_{x,R}=\frac{\Phi_x}{\sqrt{Z_{\pi}}} \; , \end{equation} where $Z_{\pi}$ is the wave function renormalization constant of the Goldstone components of the $\Phi$-field propagator. For $\kappa>0$ this is equivalent to the renormalized scalar field in the continuum parametrization \begin{equation} \Phi_{0,R}(x)=\frac{\Phi_0 (x)}{\sqrt{Z_{0,\pi}}} \; \end{equation} when the dimensions of the fields are taken into account, \begin{equation} \Phi_{x,R} = a \; \Phi_{0,R}(x) \;, \label{PhiR} \end{equation} and \begin{equation} Z_{0,\pi} = 2 \kappa \, Z_\pi \; . \label{relZ} \end{equation} The vacuum expectation value of the renormalized scalar field in lattice units is obtained from the magnetization $\langle \Phi \rangle$ \begin{equation} a v_R = \frac{\langle \Phi \rangle}{\sqrt{Z_{\pi}}} \label{vR} \end{equation} and $v_R$ is the vacuum expectation value in physical units. The renormalized couplings in the broken phase may be defined as \begin{equation} y_R = \frac{m_F}{v_{R}} =\frac{(a m_F)}{\langle \Phi \rangle} \; \sqrt{Z_{\pi}} \; , \label{TR} \end{equation} and \begin{equation} \lambda_R = \frac{m_{\sigma}^2}{2v_{R}^2} = \frac{(a m_{\sigma})^2}{2\langle \Phi \rangle^2} \; Z_{\pi} \;,\label{LR} \end{equation} where $m_F$ and $m_\sigma$ are the fermion mass and the $\sigma$ boson mass, respectively, in physical units. \section{Spectrum and continuum limit at negative~$\kappa$} The phase diagram of the SU(2)$_L\otimes$SU(2)$_R$ model (\ref{SS}) with naive fermions shown in fig.~1 \cite{BoDe90a,BoDe91a} consists of several phases and phase regions with ferromagnetic (FM), paramagnetic (PM), antiferromagnetic (AM) and ferrimagnetic (FI) ordering of the scalar field. 
In addition to the weak coupling phases FM(W), PMW and AM(W)\footnote{For simplicity we use the nomenclature ``phases'' also to denote phase regions with weak (W) or strong (S) bare Yukawa coupling and include a bracket in the abbreviation.} we find at large values of $y$ the strong coupling phases FM(S), PMS and AM(S) with nonperturbative behaviour of several fermionic observables. A similar phase structure was also observed in fermion-scalar models with different symmetry groups and/or different formulations of fermions on the lattice \cite{YLAT}--\cite{MIRRORS} provided the lattice parametrization and single-site Yukawa coupling is used. Common features of all these models are (i) the existence of weak and strong Yukawa coupling phases and phase regions, (ii) the continuation, at intermediate values of $y$, of the FM phase into the negative $\kappa$ region and (iii) the existence of points where several phase transition lines meet. The width of the funnel in the phase diagram filled by the FM and FI phases becomes smaller if the number of fermions is decreased (compare, e.g., the phase diagrams in refs.~\cite{BoDe90a,BeHe91}) in accordance with mean field calculations \cite{StTs}. The fermion mass \cite{BoDe89a} obeys within the error bars the tree level relation $am_F \!=\! y\,\langle \Phi \rangle$ in the FM(W) phase and is zero in the PMW phase where $\langle \Phi \rangle=0$. It increases when the FM(S)--PMS phase transition is approached from the FM(S) side, does not feel this phase transition and continues to increase when $\kappa$ is lowered within the PMS phase. The scalar spectrum is quite different in different phases. In the four phases around the point A, where the PMW, AM(W), FM(W) and FI phases meet, the spectrum is indicated in fig.~2. In the phases PMW and AM(W) with zero magnetization $\langle \Phi \rangle$ there exists a scalar $\Phi$ particle quadruplet of degenerate mass $am_{\Phi}$. 
On the other hand, in the phases FM(W) and FI with nonzero magnetization $\langle \Phi \rangle$ there exist three Goldstone bosons called $\pi$ and one massive $\sigma$ boson with masses $am_{\pi}\!=\!0$ and $am_{\sigma}$, respectively. In the $\kappa < 0$ region the scalar propagators show near the momentum $(\pi,\, \pi,\, \pi,\, \pi)$ the presence of further scalar states which we call {\em staggered states} in the following. This effect shows up feebly already at small positive $\kappa$ and as $\kappa$ is lowered it results in a gradually increasing curvature in the scalar propagators at large momenta. The existence of such states is obvious at $y=0$, as here the symmetry of the action (\ref{SS}) under the transformation \begin{equation} \Phi_x \rightarrow \Phi^{st}_x = (-1)^{x_1+x_2+x_3+x_4} \Phi_x , \;\;\; \kappa \rightarrow -\kappa \label{ST} \end{equation} implies the presence of a $\Phi^{st}$ quadruplet in the PM phase as well as of three $\pi^{st}$ and \linebreak one $\sigma^{st}$ states in the AM phase with nonvanishing staggered magnetization \linebreak $\langle \Phi^{st} \rangle = \langle \sum_x (-1)^{x_1+x_2+x_3+x_4} \Phi_x \rangle$. They are visible at low momenta in the 2-point function of the field $\Phi^{st}_x$. We denote the corresponding masses $am_{\Phi}^{st}$, $am_{\pi}^{st}$ and $am_{\sigma}^{st}$, respectively. For $0<y<\infty$ the symmetry (\ref{ST}) is broken explicitly, and these states can appear on the lattice simultaneously with the usual $\Phi$ or $\pi$ and $\sigma$ bosons, as indicated in fig.~2. The question is, which particles will remain in the various possible continuum limits, or in the large cut-off limits if the theory is an effective one (several possibilities are discussed in ref.~\cite{DeJe92}). We want of course to recover in the continuum limit the fermions and the usual bosons simultaneously. 
This includes the renormalized vacuum expectation value of the scalar field $av_R = \langle \Phi_R \rangle$, which is proportional to the gauge boson mass $am_W$. This is possible only in the FM(W) phase in the scaling region of the FM(W)-PMW phase transition. Here the model is of physical relevance from the point of view of the electroweak theory. As $am^{st}_\Phi$ and $am^{st}_\sigma$ vanish only on the critical lines where $\langle \Phi^{st} \rangle$ is vanishing, the staggered states could remain in the continuum limit simultaneously with the fermions and the usual scalars only if the limit is taken at the point A of the phase diagram in fig.~1. The scaling region of this point should therefore for physical reasons probably be avoided. The staggered states are lattice artefacts because they depend substantially on the lattice geometry. But they can have drastic effects on finite lattices and thus have to be taken into consideration in the analysis of the numerical data for the scalar propagator. \section{SSB generated by Yukawa coupling and the relation to four-fermion coupling} The negative $\kappa$ region has no analogue in the continuum parametrization because the transformation equations (\ref{BC}) are not defined for $\kappa<0$. At first glance it seems to be awkward and the question immediately arises whether a sensible continuum limit can be obtained in this region. The fact that for small positive $\kappa$ and for negative $\kappa$ there is a broken symmetry phase FM(W) at large enough Yukawa coupling suggests that the Yukawa coupling must be the driving force for SSB in that region (see also ref.~\cite{Cah91}). This is obviously outside the regime of usual perturbation theory in the continuum. It is therefore very important to investigate this region if one is looking for nonperturbative effects of the Yukawa coupling. 
Of course, the AM and FI phases appearing only for $\kappa < 0$ are probably lattice artefacts as they depend very much on the lattice geometry. But the scaling regions of the FM(W) and PMW phases down to the point A (fig.~1) are worth of consideration. The Osterwalder-Schrader (OS) reflection positivity can be proven presumably only for $\kappa \ge 0$. However, it is a sufficient, but not a necessary condition for unitarity, so that unitarity can still hold. In the numerical computations of the propagators of the theory, our failure to detect any state with a negative norm is assuring. Furthermore, as demonstrated later in this article and also in ref.~\cite{BoDe91a}, the measured values for all the masses and the renormalized couplings continue analytically from positive to negative $\kappa$ across $\kappa=0$. The problems with the transition from $\kappa \ge 0$ to $\kappa < 0$ seem to exist only on the level of bare parameters in the continuum parametrization. One should consider only the renormalized quantities. If one knew the renormalized running coupling $\overline{y}(\mu)$ at all momentum scales $\mu$ one could define also a sensible bare Yukawa coupling $y_B$ as \begin{equation} y_B=\overline{y}(\mu \sim 1/a)\;. \label{YYYY} \end{equation} For small values of $y_0$ and $g_0$, usual perturbation theory is valid and $y_B$ would not differ very much from the coupling parameter $y_0$. At larger values of $y_0$ and in particular in the negative $\kappa$ region the parameter $y_0$ does not any longer have to be close to $y_B$. This so-defined bare coupling $y_B$ would also have in general no simple relation to the bare parameters $\kappa$ and $y$. Fig.~3 schematically illustrates how $y_B$ and $y_0$ may split up from each other as the nonperturbative region is entered along some line of constant physics specified, e.g., by $y_R = \overline{y}(\mu=\mu_{\rm phy}) = const$, where $\mu_{\rm phy}$ is a physical scale. 
The stage is therefore well set to plunge into the negative $\kappa$ region with three issues in mind: \begin{itemize} \item[(i)] Is there a nonperturbative fixed point? \item[(ii)] Even if the theory is trivial, can the couplings be strong at a reasonably large cut-off? \end{itemize} Our previous investigation \cite{BoDe91a} with more naive methods of analysis of the numerical data has already produced tentatively negative answers to these two questions which we want to confirm in this paper. \begin{itemize} \item[(iii)] In any case, it is necessary to find out how far the observables in the $\kappa < 0$ region differ {\em quantitatively} from the $\kappa > 0$ region, for determination of bounds on renormalized couplings. \end{itemize} In the rest of this section we want to point out the connection of a Yukawa theory with a Nambu-Jona-Lasinio (NJL) type model. Integrating out the scalar fields in the partition function defined by the action (\ref{SS}) with $\kappa = \lambda = 0$ produces obviously a pure fermionic theory with local (on-site) four-fermion coupling of strength $\mbox{\small $\frac{1}{2}$} y^2$ -- the NJL model on the lattice. The scalar field is equivalent to the ``auxiliary field'' used in the context of the four-fermion coupling \cite{Na88,BaHi90,GrNe74}. Also for $\lambda > 0$ the effective fermion interaction is local at $\kappa = 0$ and corresponds thus to a multifermion interaction discussed, e.g., in ref.~\cite{Su90}. Thus we conclude that theories of the NJL type are special cases ($\kappa = 0$) of the Yukawa models in the lattice parametrization. But in terms of the renormalized theory, as we have discussed above, $\kappa=0$ is not singular and the same qualitative physics is obtained for the whole FM(W)-PMW scaling region down to the point A unless there is a nonperturbative fixed point somewhere. Recent work \cite{HaHa91,Zi91} using $1/N$ expansion also shows an equivalence between Yukawa models and generalized NJL models. 
Thus we achieve a unification of concepts and language of the SSB: the Yukawa models treated nonperturbatively using the lattice formulation (\ref{SS}) embrace both \begin{itemize} \item the classical Higgs mechanism, in which the SSB is understood in terms of the quasi-classical approximation for the effective potential of the scalar field and perturbation theory (e.g. the region $y \ll 1$ in fig.~1 for any $\lambda$), and \item the NJL type mechanism, operating at small (and possibly negative) $\kappa$ for relatively large values of $y$, which has to be treated by nonperturbative techniques such as the $1/N$ expansion. \end{itemize} Yukawa theories provide a gradual transition between these mechanisms. It seems, therefore, possible to formulate the SSB in the standard model in terms of the NJL mechanism \cite{Na88,BaHi90,Su90}. However, in the light of the above discussion, the distinctions between an elementary and a composite scalar, an auxiliary and a dynamical scalar field and between dynamical symmetry breaking and the usual Higgs mechanism do not seem important. \section{Properties of the scalar propagators} In our numerical simulation we consider $V=L^3T$ lattices with periodic boundary conditions for the scalar fields. Fermionic fields are periodic in space and antiperiodic in euclidean time directions. In the {\bf symmetric (PM) phase} the bosonic spectrum contains the $\Phi$ quadruplet of mass $am_{\Phi}$. The corresponding propagator in the momentum space is \begin{equation} G_{\Phi}(a p) = \left\langle \frac{1}{4V} \sum_{x,y} \; {\textstyle \frac{1}{2} \mbox{Tr}\,} \{ \Phi_x^{\dagger} \Phi_y \} \exp( i a p (x-y) ) \right\rangle \;. 
\label{bp} \end{equation} Neglecting the instability of $\Phi$ at the present precision level, the renormalized mass $am_{\Phi}$ and the wave function renormalization constant $Z_{\Phi}$ can be defined by means of the limit \begin{equation} G_{\Phi} (ap)|_{p^2 \rightarrow 0} = \frac{ Z_{\Phi} }{(am_\Phi)^2 + \widehat{a p}^2} \;, \label{bpp} \end{equation} where the quantity $\widehat{a p}^2=2 \sum_{\mu} (1-\cos(ap_{\mu}))$ is the dimensionless lattice equivalent of the momentum square in the continuum. In the {\bf broken (FM) phase} it is useful to introduce the following notation for the scalar field \begin{equation} \Phi = \sigma 1\!\!1 + i \sum_{j=1}^{3} \pi^j \; \tau^j \;. \label{SIGPI} \end{equation} Here $\tau^j$, $j=1,\ldots,3$ are the usual Pauli matrices and $\sqrt{ \sigma_x^2 + \sum_{j=1}^3 (\pi_x^j)^2 }=1$. The components are chosen such that the magnetization is given by $\langle \Phi \rangle = \langle \sigma \rangle$. Then the longitudinal component is associated with the massive $\sigma$ boson whereas the transverse ones with the three massless Goldstone bosons $\pi$. On the lattice the $\pi$ and $\sigma$ propagators in the momentum space are defined by \begin{eqnarray} G_{\pi}(a p) &=& \left\langle \frac{1}{3V} \sum_{x,y} \sum_{j=1}^{3} \pi_x^j \pi_y^j \exp( i a p (x-y) ) \right\rangle \;, \label{gp} \\ G_{\sigma}(a p) &=& \left\langle \frac{1}{V} \sum_{x,y} \sigma_x \sigma_y \exp( i a p (x-y) ) \right\rangle \;. \label{sp} \end{eqnarray} The only asymptotic states in the FM phase are the massless $\pi$ bosons. Therefore the wave function renormalization constant $Z_{\pi}$ for the scalar field is defined through the following limit of the $\pi$ propagator, \begin{equation} G_{\pi}(ap) |_{p^2 \rightarrow 0} = \frac{Z_{\pi}}{\widehat{a p}^2}\;. \label{gpp} \end{equation} Using the so-defined $Z_\pi$ the renormalized field expectation value $v_R$ is then given by eq.~(\ref{vR}). 
Again at our present precision level it is presumably sufficient to define the renormalized mass $am_{\sigma}$ of the unstable $\sigma$ particle by the relation \begin{equation} G_{\sigma} (ap)|_{p^2 \rightarrow 0} = \frac{ Z_{\sigma} } { (am_{\sigma})^2 +\widehat{a p}^2 } \;. \label{spp} \end{equation} Another point to note is that in the pure $\Phi^4$ theory the renormalized mass defined this way is very close to the physical mass \cite{KuLi88b,LuWe}. In a finite system no spontaneous breakdown of the symmetry can occur and during a Monte Carlo simulation in the broken phase the system drifts through the set of degenerate ground states. This causes a vanishing of noninvariant observables like $\langle \Phi \rangle$. To compensate for this drift, each scalar field configuration is rotated so that $\frac{1}{V} \sum_x \Phi_x = \frac{1}{V} \sum_x \sigma_x$. In the pure $\Phi^4$ theory the rotation technique provides a very good approximation of the infinite volume values of the noninvariant quantities \cite{Ha}. A lot is known about the properties of the scalar propagators in the pure $\Phi^4$ theory which is the limiting case of the model (\ref{SS}) both for $y=0$ and $y=+\infty$ (in the latter case fermions become infinitely heavy and decouple completely from the particle spectrum). When plotting the inverse propagators $G_{\Phi}^{-1}(ap)$, $G_{\pi}^{-1}(ap)$ and $G_{\sigma}^{-1}(ap)$ as functions of $\widehat{a p}^2$ one finds for all propagators and for all possible values of $\widehat{a p}^2$ a straight line behaviour confirming the analysis of the data in terms of free scalar propagators. {}From straight line fits to the inverse propagator data, using the relations (\ref{bpp}), (\ref{gpp}) and (\ref{spp}), one can determine the wave function renormalization constants and the renormalized masses on a finite lattice. 
In the Yukawa model we have determined for various values of $\kappa$ and $y$ the momentum space propagator (\ref{bp}) in the PMW and PMS phases and the propagators (\ref{gp}) and (\ref{sp}) in the FM phase close to the critical lines FM(W)-PMW and FM(S)-PMS (see fig.~1). For very small and very large values of the Yukawa coupling $y$ the results for the propagators are very close to those obtained in the $\Phi^4$ theory, i.~e., when plotting the inverse propagators as functions of $\widehat{a p}^2$ we find approximately a straight line behaviour. However, when entering the intermediate Yukawa coupling region and lowering $\kappa$, where fermions have a strong feedback on the scalar sector and the staggered scalar states (sect.~3) become visible, deviations from the free propagator behaviour at larger values of $\widehat{a p}^2$ are observed. These deviations become more and more pronounced when approaching the multicritical points A and B. In fig.~4 we display for several points in the FM phase the inverse Goldstone propagator $G_{\pi}^{-1}(ap)$ as a function of the quantity $\widehat{a p}^2$. The figures in the left column were obtained at 3 points in the vicinity of the FM(W)--PMW phase transition whereas the figures in the right column correspond to 3 points in the vicinity of the FM(S)--PMS phase transition. The lowest figures were obtained very close to the multicritical points A and B respectively. The figures show that there are three kinds of deviations from a free propagator behaviour: \begin{enumerate} \item The formation of a second pole in $G_{\pi}(ap)$ in the corner of the Brillouin zone with $ap=(\pi,\pi,\pi,\pi)$ when approaching the points A or B within the FM phase. This effect is caused by the staggered states $\Phi^{st}$ which are present on the lattice also in the FM phase (see sect.~3). \item The appearance of dips in $G_{\pi}^{-1}$ around the momenta $\widehat{a p}^2=4,8,12$ (corresponding to $a p_\mu \!=\! 
0$ or $\pi$) in the weak coupling region. These dips are already visible at positive $\kappa$ as can be seen from the first figure in the left column. \item At small $\widehat{a p}$ the inverse propagator has in the weak Yukawa coupling region a curvature, which plays a significant role in the data analysis. \end{enumerate} Similar structures were also discovered for the propagator $G_{\sigma}(ap)$ in the FM phase and the propagator $G_{\Phi}(ap)$ in the PMW and PMS phases near the FM(W)--PMW and FM(S)--PMS phase transitions. As we shall now discuss, the first two effects are actually lattice artefacts, dependent on the geometry of the lattice, but they have to be understood quantitatively in order to extract the physically relevant quantities from the scalar propagator data. The third effect is physical. \section{Staggered scalar states in the FM and PM phases} For an understanding of the two pole phenomenon it is useful to discuss first the situation in the pure $\Phi^4$ theory which is found in the limiting cases $y=0$ and $y=+\infty$. We use the $\sigma$ propagator $G_{\sigma}(ap,\kappa)$ as an example and indicate for a moment explicitly the $\kappa$-dependence of the propagator. Using the transformation (\ref{ST}) we find the relation: \begin{eqnarray} G_{\sigma}(ap,-\kappa) = G_{\sigma}(ap+a\tilde{\pi},\kappa) \label{spst} \end{eqnarray} where $\tilde{\pi}=(\pi,\pi,\pi,\pi)$. As the propagator $G_{\sigma}(ap,\kappa)$ in the pure scalar theory can for all momenta $ap$ be well described by the free scalar Ansatz eq.~(\ref{spp}), $G_{\sigma}(ap,-\kappa)$ has for $\kappa > \kappa_c$ the form \begin{equation} G_{\sigma}(ap,-\kappa) = \frac{ Z_{\sigma} } { (am_{\sigma})^2 +\widehat{ a(p+\tilde{\pi}) }^2 } = \frac{ Z_{\sigma}^{st} } { (am_{\sigma}^{st})^2 + 16 - \widehat{ ap }^2 } \label{sppst} \end{equation} with $m_{\sigma}^{st}=m_{\sigma}$ and $Z_{\sigma}^{st}=Z_{\sigma}$. 
For $am_{\sigma}=0$ the propagator $G_{\sigma}(ap,-\kappa)$ has thus a pole at the momentum $p=\tilde{\pi}$. Similar relations can be obtained also for the propagators $G_{\Phi}$ and $G_{\pi}$. Obviously, at no $\kappa$ the normal and staggered states appear simultaneously as their masses are small only close to the FM--PM and PM--AM phase transitions, respectively, which are distant. For $0 < y < \infty $ the symmetry (\ref{ST}) does not hold any more. Nevertheless, for small and large values of $y$ the spectrum is very similar to the pure $\Phi^4$ theory. But when the phase transition lines approach each other and finally meet at the points A and B we expect that both the normal and the staggered state masses are small simultaneously. In particular, $am^{st}_{\Phi}$ can be small also in the FM phase. In the FM(S) phase it is therefore reasonable to try to fit the numerical results for $G_{\sigma}(ap)$ by the two pole Ansatz \begin{equation} G_{\sigma}(ap) = \frac{Z_{\sigma} }{(am_{\sigma})^2+\widehat{a p}^2 } + \frac{Z_{\Phi}^{st} }{(am_{\Phi}^{st})^2+16-\widehat{a p}^2 } \;.\label{stan} \end{equation} Analogous expressions have been applied also for the propagators $G_{\pi}(ap)$ and $G_{\Phi}(ap)$ in the FM and PM phases respectively. In fig.~5 we show as an example a fit to the Goldstone propagator in the FM(S) phase, where the fit Ansatz for $G_{\pi}(ap)$ is given by eq.~(\ref{stan}) with $am_{\pi}=0$ and $Z_{\sigma}$ replaced by $Z_{\pi}$. Fig.~5 shows that the data are described by the two pole Ansatz very well. We furthermore expect that the staggered mass $am_{\Phi}^{st}$ and the wave function renormalization constant $Z_{\Phi}^{st}$ obtained from the fits to $G_{\pi}(ak)$ and $G_{\sigma}(ap)$ should agree. This expectation is indeed confirmed by the numerical results, for example at $\kappa=-0.65$ and $y=1.8$ the values are $am_\Phi^{st}=1.47(7)$ / $1.37(4)$ and $Z_\Phi^{st}=1.41(7)$ / $1.45(4)$ from the $\sigma / \pi$ propagators. 
As expected, the mass $am_{\Phi}^{st}$ does approach zero when the line AB is approached. It should be also stressed that both terms in the Ansatz (\ref{stan}) have positive sign, so that the second term has the usual form of a pole after shifting the momentum by $(\pi,\, \pi,\, \pi,\, \pi)$ and cannot be interpreted as a ghost. The same formulae describe also the two pole structure of the scalar propagator in the vicinity of the point A. Here, however, a more elaborated Ansatz has to be developed in order to take into account simultaneously also the overlaid finer structure caused by the fermion loop corrections. It is the subject of the next section. \section{One fermion loop contribution to the scalar propagators} The other two features making the scalar propagators different from a free propagator are the appearance of dips at momenta $\widehat{a p}^2 \!=\! 4$, $8$, $12$ and the curvature at small $\widehat{a p}^2$. They occur in the weak coupling phase regions FM(W) and PMW where the fermion masses scale. According to the definitions of the quantities $Z_{\pi}$, $Z_{\sigma}$ and $am_{\sigma}$ in eqs.~(\ref{gpp}) and (\ref{spp}) the scalar propagators have to be analyzed in the limit $\widehat{a p}^2 \rightarrow 0$. However, on small lattices with periodic boundary conditions the smallest nonvanishing momentum is $ap=\frac{2\pi}{T}$ (for $T>L$) which is quite large, e.g., $ap=0.79$ for $T=8$. In the pure $\Phi^4$ theory this is not a serious problem as the inverse scalar propagator can be fitted with a straight line up to $\widehat{a p}^2 = O(10)$ \cite{KuLi88b}. In our Yukawa model the situation is much less favourable: in addition to the dips occuring at $\widehat{a p}^2=4, \; 8$ and $12$ -- for which one might argue that they should simply be ignored as the analysis has to be restricted to the smallest momenta -- we have to face the more serious problem of a significant curvature at small $\widehat{a p}^2$. 
The following further observation makes the situation look even worse: Increasing the $T$-extent of the lattice -- which is the cheapest possibility to have small momenta -- e.g. from $T=6$ up to $T=46$, we still have not found an onset of a linear $\widehat{a p}^2$ dependence. We conclude from this that for sizeable Yukawa coupling an application of the free particle Ansatz for the scalar propagators on finite lattices produces uncontrollable systematic errors. Therefore, we have developed a more sophisticated fit Ansatz for the scalar propagator based on the 1-fermion-loop contribution to the self-energy of the $\pi$ or $\sigma$ bosons. The justification for including only the fermion loop comes from the experience in the pure scalar sector of the model where the scalar loop contributions do not change the linear $\widehat{a p}^2$-shape of the propagators but only lead to wave function and mass renormalizations. On the other hand, as will be shown below, in the case of naive lattice fermions the 1-fermion-loop contribution will cause deviations from the linear $\widehat{a p}^2$-dependence just of the form observed in the data. Let us discuss the example of the Goldstone boson propagator. On finite lattices we may write \begin{equation} G_{\pi,L}^{-1}(ap)=Z_\pi^{-1} \left[(am_{\pi,L})^2+ \widehat{ap}^2 - \Sigma_{\pi,L} (ap;am_{F,L}) \right] \; , \label{GL} \end{equation} where $\Sigma_{\pi,L}$ denotes the 1-fermion-loop contribution to the self-energy of the Goldstone boson in the renormalized perturbation theory. For the purpose of this section the subscript $L$ points out the possible dependence of various quantities on the spatial lattice size $L$. For instance with regard to the Goldstone bosons we take the finite-size of the lattice into account in a naive but simple way -- allowing for a finite mass of the Goldstone boson $am_{\pi,L}$ (see e.g. also \cite{AoShi91}). 
(A treatment based on chiral perturbation theory, successful in the pure $\Phi^4$ theory \cite{Ha}, is in the complex situation with light fermions presumably not applicable.) In the FM phase where the fermions are massive we impose the two necessary normalization conditions on $\Sigma_{\pi}$ in the infinite volume at momentum zero, \begin{eqnarray} \left. \Sigma_{\pi,\infty} (ap;am_{F,\infty}) \right|_{p=0} &=& 0 \; , \nonumber \\ \left. \frac{\partial}{\partial \widehat{ap}^2} \, \Sigma_{\pi,\infty} (ap;am_{F,\infty}) \right|_{p=0} &=& 0 \; . \label{normcond} \end{eqnarray} With this normalization $G_{\pi}$ approaches in the thermodynamic limit the form used for the definition of $Z_{\pi}$, eq.~(\ref{gpp}), provided $(am_{\pi,L})^2 \rightarrow 0$ as $L \rightarrow \infty$. The not-yet-normalized $\Sigma_{\pi,L}^\prime$ calculated from the corresponding Feynman diagram on the lattice is \begin{equation} \Sigma_{\pi,L}^\prime (ap;am_{F,L}) = (-1) \frac{4}{L^3 T} \sum_k \mbox{Tr } \left\{ (i \gamma_5 \tilde{y}_R) \frac{1}{i s\!\!\!/(k)+am_{F,L}} (i \gamma_5 \tilde{y}_R) \frac{1}{i s\!\!\!/(k-ap)+am_{F,L}} \right\} \; . \label{loop} \end{equation} This equation defines the renormalized Yukawa coupling $\tilde{y}_R$ in terms of the 3-point function. Of course, $\tilde{y}_R$ could in principle differ from $y_R$ defined in eq.~(\ref{TR}). In the above $s_{\mu}(k)=\sin k_{\mu}$, the factor $(-1)$ comes from the fermion loop, the trace is to be taken over the Dirac indices. The SU(2) trace together with another factor of 2 from the HMC doubling (see sect.~2) results in the factor 4 whereas the standard fermion doubling is taken into account automatically. 
The sum over the loop momentum $k$ runs over the set of momenta corresponding to a finite $L^3 T$ lattice with periodic boundary conditions in the space direction and antiperiodic boundary conditions in the time direction, \begin{equation} \begin{array}{rlll} k_j &= \frac{2 \pi}{L} n_j \; , &\hspace{0.2cm} n_j=0,1,2, \ldots L-1 &\hspace{0.3cm} j=1,2,3 \;\; , \nonumber \\ k_4 &= \frac{2 \pi}{T}(n_4+\frac{1}{2}) \; , &\hspace{0.2cm} n_4=0,1,2, \ldots T-1 \; . & \label{moms} \end{array} \end{equation} After some simplifications we get \begin{eqnarray} \Sigma_{\pi,L}^\prime (ap;am_{F,L}) &=& \tilde{y}_R^2 \frac{16}{L^3 T} \sum_k \frac{(am_{F,L})^2 + s(k)s(k-ap)} { \left[ (am_{F,L})^2 + s^2(k) \right] \left[ (am_{F,L})^2 + s^2(k-ap) \right]} \nonumber \\ &=:& \tilde{y}_R^2 \, I_{\pi,L}^\prime (ap;am_{F,L})\; , \label{simpl} \end{eqnarray} where we introduce the notation $I_{\pi,L}^\prime$ for the (unnormalized) lattice integral after factorizing out $\tilde{y}_R^2$. In order to recognize the 1-fermion-loop as the reason for the deviations in the scalar propagators it is instructive to discuss the dependence of this expression on the external momentum $ap$. For $ap=(0,0,0,0)$ the denominator has minima when the loop momentum components are $k_{\mu}\!=\!0$ or $\pi$, corresponding to processes involving the physical fermion and its antifermion or some doubler fermion with its own antidoubler. Thus a considerable contribution of $\Sigma_\pi$ to $G_{\pi}$ for small momenta $ap$ can be expected, causing the observed strong curvature of $G_\pi^{-1}$. In addition, $\Sigma_\pi$ also peaks when some components $ap_{\mu}\!=\!\pi$ and others are zero. Then the kinematics allows such intermediate states to be excited which involve e.g. the physical fermion and the antidoubler of momentum $ap$, or any other appropriate pair of doubler and antidoubler whose respective positions of poles differ just by $ap$. 
This is precisely the reason for the dips seen in the inverse scalar propagators near $\widehat{a p}^2=4, \, 8, \, 12$. To fit the propagator data using the Ansatz (\ref{GL}) we have to perform the following steps: First, $\Sigma_{\pi,L}$ is normalized as \begin{eqnarray} \Sigma_{\pi,L}(ap; am_{F,L}) &=& \Sigma_{\pi,L}^\prime (ap;am_{F,L}) - \Sigma_{\pi,\infty}^\prime (ap=0;am_{F,\infty}) \nonumber \\ & &-\widehat{a p}^2 \left( \left. \frac{\partial}{\partial \widehat{a p}^2} \Sigma_{\pi,\infty}^\prime (ap; am_{F,\infty}) \right|_{p=0} \right) \label{normed} \end{eqnarray} so that it satisfies the conditions (\ref{normcond}) in the infinite volume limit. Here the fermion mass $m_{F,L}$ on the given finite lattice is taken from the standard fit to the fermion propagator data. But note that this normalization also requires an estimate of the fermion mass in infinite volume $am_{F,\infty}$ (various attempts to normalize $\Sigma_{\pi,L}$ using only finite volume quantities did not work). In the spontaneously broken phase FM, the major part of the finite-size effects is expected to be due to the massless Goldstone bosons leading to a volume dependence linear in $1/L^2$. So, checking this dependence and then extrapolating $a m_{F,L}$ to $a m_{F,\infty}$ requires, at a given $(\kappa, \, y)$-point, simulations on a sequence of at least three lattices. We have performed runs on lattices $L^3 16$ with $L=6, \, 8, \, 10$ and $12$ with the result that as long as $a m_{F,L}$ itself is not too small the agreement with a linear $1/L^2$ dependence indeed allows an extrapolation to $a m_{F,\infty}$ (see next section). Analogous to eq.~(\ref{simpl}) we define the (normalized) lattice integral $I_{\pi,L}$ with $\tilde{y}_R^2$ factorized out, \begin{equation} \Sigma_{\pi,L}(ap; am_{F,L}) =: \tilde{y}_R^2 I_{\pi,L} (ap; am_{F,L}) \; . 
\label{defI} \end{equation} The $\pi$ propagator fit Ansatz is then \begin{equation} G_{\pi,L}^{-1}(ap) = Z_\pi^{-1} \left[ a m_{\pi,L}^2+\widehat{a p}^2 - \tilde{y}^2_R \, I_{\pi,L}(ap;a m_{F,L}) \right] \label{ANSATZ} \end{equation} with the free parameters $a m_{\pi,L}$, $Z_\pi$ and $\tilde{y}_R$ (we note that through the normalization conditions (\ref{normcond}) this expression also depends on $m_{F,\infty}$). Before describing the results in the next sections let us discuss the quality of the fit. The full Ansatz we use is the superposition of the propagator with the pole at $ap=(0,0,0,0)$ including the 1-fermion-loop contribution and the staggered propagator with the pole at $ap=(\pi,\pi,\pi,\pi)$ as explained in sect.~6 above. This Ansatz is able to describe the scalar propagator data in the complete interval $0 \leq \widehat{ap}^2 \leq 16$. In particular, the curvature at small momenta and also every detail of the peculiar structures near $\widehat{ap}^2=4, \, 8, \, 12$ are perfectly reproduced. This is demonstrated in figs.~6 and 7 for two typical examples of the fits, one at small positive $\kappa$ and one in the close vicinity of point A where the scalar propagators have the most complex form. It should be stressed again that the dips at momenta $\widehat{ap}^2=4, \, 8, \, 12$ are caused by the presence of the doubler fermions and hence should at least look different, if not absent, in models without them. However, in all models the curvature at small momenta will appear for sufficiently strong Yukawa coupling and the second pole at $\widehat{ap}^2=16$ will be present near phases with antiferromagnetic ordering. \section{Fermion and scalar masses at small positive $\kappa$ and negative $\kappa$} We have been able to determine reliably both $am_F$ and $am_\sigma$ simultaneously only for $\kappa \geq 0$. Here $am_\sigma$ is always greater than $am_F$ at least by a factor 3 -- 6. 
This presumably is a consequence of having a large number (32) of fermion doublets. Assuming that a similar mass ratio also holds for $\kappa < 0$, we have mostly performed calculations at points with very small fermion mass, $am_F \simeq 0.1 - 0.3$, in order to keep the $\sigma$-boson mass at least below 1. The determination of $am_F$ in this range of values requires only moderate statistics and can be performed reliably at $\kappa < 0$ even in the vicinity of the point A. An important condition is, however, that the time extent $T$ of the lattice $L^3T$ is at least 16; otherwise $am_F$ is spuriously small and $T$-dependent when determined from the fit by means of the free fermion propagator. The method of analysis and many results have been presented already in ref.~\cite{BoDe91a}. Here we would like to point out the large spatial volume dependence of $am_F$. For $am_F \simeq 0.3$ the value can decrease by 40\% when $L$ increases from 6 to 10. Nevertheless, for $am_F \gsim 0.2$ the decrease is linear in $L^{-2}$, so that one can tentatively extrapolate to $L = \infty$. The observable $\langle \Phi \rangle$ is easily measurable everywhere and also has a linear $L^{-2}$ dependence. The $L^{-2}$ dependence and the extrapolation of both quantities are shown in fig.~8 (we show $av_R$ instead of $\langle \Phi \rangle$ because $Z_{\pi}$ is $L$-independent as discussed in sect.~9). The $\sigma$ boson mass has been determined at several points for $\kappa \geq 0$ on lattices of various sizes. The finite-size effects are compatible with the expected $L^{-2}$ dependence, but the large error bars prevent a verification. Nevertheless, we have used the same volume dependence to extrapolate $am_\sigma$ to $L =\infty$. There are two technical reasons making a reliable determination of $am_\sigma$ at negative $\kappa$ very difficult.
Firstly, the number of iterations for the fermion matrix inversion needed for the field update increases drastically with decreasing $\kappa$. As the determination of the $\sigma$ propagator requires much higher statistics than that of the fermionic propagator, an accumulation of good data in the negative $\kappa$ region, in particular in the vicinity of the point A, is prohibitively expensive. Secondly, the maximum of the curve $G_{\sigma}^{-1}(ap)$ occurs already at rather small momenta (see fig.~7), and only a few data points of the propagator contain the information about $am_\sigma$ and $Z_\sigma$, the rest being dominated by the staggered state. Therefore we have not succeeded in reliably determining $am_\sigma$ for any of our data points at negative $\kappa$. The largest renormalized couplings are expected on the boundary of the scaling region, i.e. relatively far from the critical line. However, at present we do not know the position of this boundary, actually not even its proper definition (e.g. how small $am_F$ and $am_\sigma$ should be in order that the lattice model can be used as an effective continuum theory). We are thus not able to extract upper bounds on masses from our results for renormalized couplings. \section{The renormalized Yukawa coupling} The excellent agreement of the fits with the MC data for the $\pi$ propagator, both for positive and negative $\kappa$, allows us to determine $Z_\pi$ reliably. In comparison to the usual determination of $Z_\pi$ by a naive free-particle fit to the smallest momentum, used e.g. in our earlier publication \cite{BoDe91a}, the present method yields more precise results (the error bars are reduced by factors 3 -- 10). In particular, they are now stable when the $T$-size of the lattice is varied, in spite of the strong curvature near $\hat{p}^2=0$ which is most clearly seen on large-$T$ lattices. No $L$-dependence of $Z_\pi$ has been found.
The actual values are slightly smaller than found in ref.~\cite{BoDe91a}, confirming the conjecture in that paper that a simple linear fit to the smallest momentum on small lattices gives overestimated values of $Z_{\pi}$. Some values of $Z_{\pi}$ obtained in the vicinity of the FM(W)--PMW critical line are plotted in fig.~9. $Z_\pi$ decreases strongly as $\kappa$ decreases and appears to vanish at the multicritical point A. The knowledge of $Z_{\pi}$ allows us to determine, from the available very good data for $am_F$ and $\langle \Phi \rangle$ for various $L$, the renormalized Yukawa coupling $y_R$ by means of the definition (\ref{TR}). Both $am_F$ and $\langle \Phi \rangle$ are strongly $L$-dependent, but their ratio turns out to be practically $L$-independent. We use the observed linear $L^{-2}$ dependence to extrapolate $am_F$ and $av_R$ to infinite volume, obtaining $y_R$ in the $L \rightarrow \infty$ limit (see fig.~8). In fig.~10 we plot these results against the fermionic correlation length $\xi_F=1/am_F$ at $L \!=\! \infty$. The reason for not using the smaller $\xi_{\sigma}=1/am_\sigma$ at $L \!=\! \infty$ for this purpose is that we do not know its values for $\kappa < 0$. The dotted curve in fig.~10, given by \begin{equation} y_R=\frac{1}{\sqrt{\frac{32}{4\pi^2} | \ln a \mu | }} \; , \label{dotc} \end{equation} is obtained by choosing infinite bare Yukawa coupling in the 1-loop formula for the running Yukawa coupling and identifying the scale $\mu$ with $m_F$. As pointed out in \cite{BoDe91a}, this curve described quite well the results for $y_R$ at small positive $\kappa$ and negative $\kappa$. The dotted error bars associated with this curve in the figure indicate the range of values and the error bars of those former results in \cite{BoDe91a}. The dramatic reduction of the error bars in fig.~10 is mostly due to the refined analysis in this paper.
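The linear $1/L^2$ extrapolation used here amounts to a two-parameter least-squares fit. The following Python sketch illustrates the procedure with made-up numbers (not the data of fig.~8), assuming for the final step the tree-level ratio $y_R = am_{F,\infty}/av_{R,\infty}$; the precise definition (\ref{TR}) is given earlier in the paper:

```python
import numpy as np

def extrapolate_inv_L2(Ls, values):
    """Least-squares fit of values(L) = v_inf + c / L**2; returns v_inf."""
    x = 1.0 / np.asarray(Ls, dtype=float) ** 2
    A = np.column_stack([np.ones_like(x), x])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    return coeffs[0]

# Hypothetical finite-volume data (illustrative only, not from the paper):
Ls = [6, 8, 10, 12]
am_F = [0.300, 0.262, 0.244, 0.234]  # decreases roughly linearly in 1/L^2
av_R = [0.660, 0.577, 0.537, 0.515]  # chosen so am_F/av_R is nearly constant

am_F_inf = extrapolate_inv_L2(Ls, am_F)
av_R_inf = extrapolate_inv_L2(Ls, av_R)
y_R = am_F_inf / av_R_inf            # assumed tree-level form of eq. (TR)
```

The near $L$-independence of the ratio $am_F/av_R$ mentioned above shows up here as $y_R$ being insensitive to whether the ratio is taken before or after the extrapolation.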
It is now apparent that $y_R$ is bounded by the dotted curve, suggesting the applicability of 1-loop perturbation theory even close to the point A. In addition, all the $y_R$ results are clearly below the $s$-wave tree level unitarity bound for 32 fermion doublets. This bound is indicated by the horizontal dashed line in fig.~10. These results suggest the conclusion that in our model there is no strong Yukawa coupling regime, at least for $\xi_F > 5$. We remark that initial indications from numerical investigations in the mirror fermion model \cite{MIRRORS} are, however, different. The weakness of $y_R$ in our model in the negative $\kappa$ region comes about as follows: the unrenormalized ratio $a m_F / \langle \Phi \rangle$ actually increases strongly as $\kappa$ decreases, but this increase is compensated by the equally strong decrease of $Z_\pi^{1/2}$ (fig.~9). The fermion loop correction allows us to determine the renormalized Yukawa coupling $\tilde{y}_R$ defined in terms of the vertex function and extracted from the Goldstone propagator data by the Ansatz (\ref{ANSATZ}). In table~\ref{tab:yReta} we compare $y_R$ and $\tilde{y}_R$ for some typical $(\kappa,y)$-points where we were able to extrapolate $am_F$ to $L=\infty$ in order to satisfy the normalization conditions (\ref{normcond}).
\begin{table} \begin{center} \begin{tabular}{|c|r|l|l|l|cl|r|} \hline \hline &Lattice &$\kappa$=0.03 &$\kappa$=0.04 &$\kappa$=-0.65& \phantom{.} &Lattice&$\kappa$=0.00 \\ & & y=0.60 & y=0.60 & y=0.98 & & & y=0.65 \\ \hline \hline $ y _R $&$6^3 24$& & &0.427(5)& & $6^3 24$& 0.55(2) \\ $\tilde{y}_R$& & & &0.49(4) & & & 0.51(1) \\ \hline $ y _R $&$6^3 16$&0.410(6)&0.461(8)&0.412(6)& & $6^3 12$& 0.55(2) \\ $\tilde{y}_R$& &0.401(4)&0.45(1) &0.495(8)& & & 0.51(2) \\ \hline $ y _R $&$8^3 16$&0.410(6)&0.47(2) &0.43(1) & & $8^3 12$& 0.51(1) \\ $\tilde{y}_R$& &0.402(6)&0.450(4)&0.49(2) & & & 0.52(1) \\ \hline $ y _R $&$10^316$&0.40(1) &0.46(1) &0.45(3) & &$10^3 12$& 0.51(3) \\ $\tilde{y}_R$& &0.402(5)&0.45(1) &0.48(7) & & & 0.52(4) \\ \hline \hline \end{tabular} \end{center} \caption{{\em Comparison of $y_R$ from the tree level definition ({\protect\ref{TR}}) to $\tilde{y}_R$ from the fit to the Goldstone propagator ({\protect\ref{ANSATZ}}) at some typical $(\kappa,y)$-points.}} \label{tab:yReta} \end{table} The good agreement found between $y_R$ and $\tilde{y}_R$ indicates that the analysis of the scalar propagator data by taking the 1-fermion loop corrections into account is adequate and that both definitions of the renormalized Yukawa coupling quantitatively agree. This further supports the conclusion that in our model with naive fermions the Yukawa coupling is not strong. Some caution is due, however. It could be that our linear extrapolation of $a m_F$ to $L \!=\! \infty$ underestimates $\xi_F$. Furthermore, as we do not know $am_\sigma$ in the vicinity of the point A, we cannot exclude that $\xi_\sigma$ is as large as $\xi_F$ and that our data points there are actually deep in the scaling region. 
\section{Fermion influence on the scalar mass bound} To study the influence of heavy fermions on the triviality bound for the scalar mass, it would again be most interesting to perform this analysis in the vicinity of the multicritical point A, and the lack of reliable results for $am_\sigma$ there is regrettable. However, provided the renormalized Yukawa coupling does not attain values in the negative $\kappa$ region significantly larger than at positive $\kappa$, as suggested by our results, we expect that it is sufficient to investigate the influence of the Yukawa coupling on the scalar sector in the positive $\kappa$ region. These considerations have motivated our relatively high statistics study of the $\sigma$-pro\-pa\-ga\-tor at $\kappa \gsim 0$. We have fixed $y$ at the value $y=0.6$, for which the critical $\kappa$ is $\kappa_c=0.020(5)$. On lattices of three different spatial sizes, $L^3 16$ with $L=6,8,10$, we have accumulated about 4--5 thousand Hybrid Monte Carlo trajectories at three points $\kappa=0.03, \, 0.04, \, 0.06$. Both scalar propagators have been analyzed by means of the same Ansatz (\ref{ANSATZ}), and the scalar mass $am_{\sigma}$ and $Z_{\pi}$ have been determined. The results for the ratio $m_\sigma/v_R$ are displayed in fig.~11 as a function of the scalar correlation length $\xi_{\sigma}=1/am_{\sigma}$ (open symbols). The finite-size dependence of $a m_\sigma$ is consistent with a linear dependence on $L^{-2}$, though the error bars on $a m_\sigma$ leave room for modification. The tentative extrapolation of the results to infinite volume assuming an $L^{-2}$ dependence is indicated by the full squares. This kind of plot is customarily used to extract a triviality upper bound for the scalar mass in the pure $\Phi^4$ theory. For comparison the data from this theory ($y=0$) on a hypercubic lattice \cite{O4uppb} are also shown (full circles).
The results in the Yukawa model on finite lattices approach the infinite volume results from below (different from the $\Phi^4$ theory \cite{Ha}), and the extrapolated results are, within large error bars, consistent with the pure $\Phi^4$ results. As $\xi_\sigma \simeq 1$ -- $3$ and $\xi_F$ is much larger than $\xi_\sigma$ for all points in fig.~11, one can expect that the edge of the scaling region is contained in this range. So we observe no large influence of the Yukawa coupling on the Higgs mass upper bound in our model for $\kappa>0$. \section{Summary and conclusions} We have explored the region of the largest renormalized Yukawa coupling in a lattice Yukawa model with naive fermions in the broken symmetry phase. In this region the Yukawa interaction is the driving force of the spontaneous symmetry breaking, overwhelming the very weak ferromagnetic or even antiferromagnetic nearest neighbour scalar field coupling. Such competing interactions, together with sizeable fermion loop corrections, result in a complex structure of the scalar propagators. We have demonstrated that this structure can be theoretically understood and even utilized for an extraction of the renormalized Yukawa coupling $y_R$ from the scalar propagator data. The results for $y_R$, showing that the Yukawa coupling is small in the limit of large cutoff, are consistent with the triviality of this coupling. In the physically relevant FM(W) phase the lines of constant $y_R$ seem to flow nearly parallel to the FM(W)-PMW phase transition and for $y > 1$ run out of the FM(W) phase, instead of flowing into some point on this phase transition. In particular, the suspicious point A does not seem to be a nontrivial fixed point. The values of $y_R$ at $\kappa < 0$ do not seem to exceed significantly those at $\kappa \geq 0$, indicating that the $\kappa < 0$ region of the FM(W) phase does not add much to the physical content of the model at $\kappa \geq 0$. We expect these results to be generic for various lattice Yukawa models.
Looked at quantitatively, however, our results may be specific to the chosen model with a large number of fermions (32 doublets). The renormalized Yukawa coupling stays below the tree level unitarity bound and is thus never strong. Correspondingly, no influence of the Yukawa interaction on the upper bound for the $\sigma$-mass could be detected. However, a word of caution is warranted because we have not localized reliably the edge of the scaling region, where the largest values of the renormalized couplings should actually be determined. Furthermore, the model suffers from a drawback caused by the large number of fermions: the fermion mass generated by the Yukawa interaction stays substantially lower than the $\sigma$ boson mass. This makes investigations in the scaling region, with its two very different correlation lengths, difficult, and it is certainly not a generic feature of Yukawa models. There are now suggestions of Yukawa models with a small number (2 and 4) of fermion doublets \cite{Jan92}. The methods developed in this article should be applicable also to these models. Apart from quantitative differences, it would be interesting to see if any of the qualitative conclusions drawn in this paper change. \vspace{2cm} {\bf Acknowledgement.} We thank J.~Smit, M.~Tsypin and F.~Zimmermann for valuable suggestions and H.A.~Kastrup for discussions and continuous support. We have also benefited from discussions with K.~Jansen, M.~Lindner, I.~Montvay, G.~M\"unster, J.~Shigemitsu and J.~Vink. The numerical computations have been performed on the Cray Y-MP/832 of HLRZ J\"ulich and the S-400 of RWTH~Aachen. \vspace{2cm} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \noindent{\large{\bf Appendix A: The metamagnetic Ising model}} Integrating out the fermionic fields leads to a scalar model with nonlocal interaction terms.
In this appendix we discuss a simple type of spin model with nearest-neighbour ($nn$) and next-to-nearest-neighbour ($nnn$) interaction terms, which appears in the literature in the context of metamagnets\footnote{See footnote on p.~60 in \cite{KiCo75}.}. In this model an overlap of the scaling regions associated with the normal and the staggered magnetizations occurs, analogous to the vicinity of the points A and B in Yukawa models. The model is defined by the action \begin{equation} S=-2\kappa_{nn} \sum_{x,\mu>0} \sigma_x \sigma_{x+\mu} -2\kappa_{nnn} \frac{8}{24} \sum_{x,\mu>\nu} \sigma_x \sigma_{x+\mu+\nu}, \end{equation} where $\sigma_x$ are Ising spins. In a mean field approximation the critical lines are given by \cite{KiCo75} \begin{eqnarray} \kappa_{nn}^c + \kappa_{nnn}^c = \kappa^c_{\rm Ising} \;\; &\kappa_{nn}^c>0,\kappa_{nnn}^c>0 & \mbox{\hspace{1cm} for the FM-PM transition line} \nonumber \\ -\kappa_{nn}^c + \kappa_{nnn}^c = \kappa^c_{\rm Ising} \;\; &\kappa_{nn}^c<0,\kappa_{nnn}^c>0 & \nonumber \mbox{\hspace{1cm} for the AM-PM transition line,} \end{eqnarray} where the constant $\kappa^c_{\rm Ising}$ is known from the 4-dimensional Ising model to be about 0.0748. At $\kappa_{nn}^c=0$ the transition lines meet in a multicritical point. The phase diagram found in a numerical Monte Carlo simulation of the 4-dimensional model, shown in fig.~12, has three phases: a ferromagnetic phase (FM) with $\langle\sigma\rangle > 0 ,\langle\sigma^{st}\rangle = 0 $, a paramagnetic phase (PM) with $\langle\sigma\rangle = 0 , \langle\sigma^{st}\rangle = 0 $ and an antiferromagnetic phase (AM) with $\langle\sigma\rangle = 0 ,\langle\sigma^{st}\rangle > 0 $. The phase transition lines separating the symmetric from the broken phases are of second order, and the phase transition separating FM and AM is of first order.
Except for the absence of the FI phase, this phase diagram is similar to the phase diagram of the Yukawa model at weak Yukawa coupling, the triple point being analogous to the point A in fig.~1. In the vicinity of the multicritical point we find by numerical simulation that the inverse propagator of the scalar field in momentum space has two poles, one of them at $ap=(\pi, \pi, \pi, \pi)$, and some structure for intermediate momenta. An example in the ferromagnetic phase near the multicritical point is shown in fig.~13. This shape is well described by the inverse propagator of a gaussian model with the same ($nn$) and ($nnn$) kinetic terms \begin{equation} S_p=-a_{nn} \sum_{\mu>0} \{ 1-\cos k_{\mu} \} -a_{nnn} \frac{8}{24} \sum_{\mu>\nu} \{ 2-\cos(k_{\mu}+k_{\nu})-\cos(k_{\mu}-k_{\nu}) \} + b . \label{eq:Sp} \end{equation} The fit of the Monte-Carlo data with the function (\ref{eq:Sp}) plotted in fig.~13 demonstrates that one can describe the data very well in spite of a nonlinear dependence on $\widehat{a p}^2$. If one includes further interaction terms, the structure of the propagator at intermediate values of $\widehat{a p}$ changes. Of course, integrating out the fermion field in the Yukawa model produces a scalar theory which has an infinite number of nonlocal interaction terms. Taking into account only the first two orders of the $1/y$ expansion of the fermion determinant (see for example \cite{AbShr91}) already leads to a large number of non-single-site terms and also to terms which are no longer bilinear in the scalar fields. A small-$y$ expansion leads to infinite-range interaction terms. The Yukawa model is therefore much more complicated than the metamagnetic Ising model, and so its scalar propagators are analysed in this paper in a different way. In particular, a gaussian model does not describe the data.
But the qualitative feature of the propagators, namely the existence of a second pole at $ap=(\pi, \pi, \pi, \pi)$ if two scaling regions overlap, can be understood considering simple gaussian and Ising models.
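The vanishing of the ($nnn$) kinetic term at the staggered momentum can be verified directly from eq.~(\ref{eq:Sp}): at $k=(\pi,\pi,\pi,\pi)$ one has $\cos(k_\mu+k_\nu)=\cos(k_\mu-k_\nu)=1$, so that term drops out and $S_p=-8a_{nn}+b$ for any $a_{nnn}$. A short numerical sketch (a literal transcription of eq.~(\ref{eq:Sp}) with arbitrary parameter values, added here for illustration):

```python
import numpy as np
from itertools import combinations

def S_p(k, a_nn, a_nnn, b):
    """Gaussian inverse propagator of eq. (Sp), transcribed literally."""
    k = np.asarray(k, dtype=float)
    nn = np.sum(1.0 - np.cos(k))
    nnn = sum(2.0 - np.cos(k[m] + k[n]) - np.cos(k[m] - k[n])
              for m, n in combinations(range(4), 2))
    return -a_nn * nn - a_nnn * (8.0 / 24.0) * nnn + b

# At the staggered momentum the nnn term vanishes identically,
# so the result equals -8*a_nn + b regardless of a_nnn.
k_stag = [np.pi] * 4
print(S_p(k_stag, a_nn=-0.1, a_nnn=0.3, b=0.05))
```

This is the sense in which the staggered pole is controlled by the ($nn$) term and the constant alone.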
\section{Introduction} \begin{figure*}[t] \begin{center} \includegraphics[width=1\linewidth]{Fig/CompareBoundaryDetection.pdf} \end{center} \vspace{-1em} \caption{a) Which visual cues would you draw when sketching out an image? b) Traditional edge detectors \cite{Canny1986ACA} only capture high frequency signals in the image without image understanding. c) Boundary detectors are usually trained on edges derived from closed segment annotations and therefore, they do not include salient inner boundaries by definition \cite{Martin2001ADO,amfm_pami2011}. d) In contrast, our contour drawing (ground truth is shown here) contains both the occluding contours and salient inner edges. For example, the dashed box in the top row contains an open contour ending in a {\em cusp}~\cite{koenderink1990solid,Forsyth2012ComputerV}.} \label{fig:compareboundary} \end{figure*} Edge-like visual representation, appearing in the form of image edges, object boundaries, line drawings and pictorial scripts, is of great research interest in both computer vision and computer graphics. Automatic generation of such representation enables us to understand the geometry of the scene \cite{revaud2015epicflow} and perform image manipulation in this sparse space \cite{Dekel2018SparseS}. This paper studies such representation in the form of {\em contour drawing}, which contains object boundaries, salient inner edges such as occluding contours, and salient background edges. These sets of visual cues convey 3D perspective, length and width, as well as thickness and depth \cite{Sutherland1997Gesture}. Contour drawings are usually based on real-world objects (immediately observed or from memory) and can therefore be considered an expression of human vision. Their counterpart in machine vision is edge and boundary detection. Interestingly, the set of visual cues is different in contour drawings and in image boundaries (Fig \ref{fig:compareboundary}).
Compared to image boundaries, contour drawings tend to have more details inside each object (including occluding contours and semantically-salient features such as eyes, mouths, etc.) and are made of strokes that are loosely aligned to pixels on the image edges. We propose a contour generation algorithm that outputs contour drawings given input images. This generation process involves identifying salient boundaries and is connected with salient boundary detection in computer vision. In fact, we will show that our contour generation algorithm can be re-purposed to perform salient boundary detection and achieve the best performance on the standard benchmark. Another element involved in contour drawing generation is the adoption of a proper artistic style. Fig \ref{fig:teaser} shows that our method successfully captures the style, making it a style transfer application in itself. Moreover, contour drawing is an intermediate representation between image boundary and abstract line drawing. Our study of contour drawing paves the way for machine understanding and generation of abstract line drawings \cite{Eitz2012HowDH,Cole2008WhereDP}. What types of edge-like visual representation are studied in existing work? In non-photorealistic rendering, 2D lines that convey 3D shapes are widely studied. The most important of these might be the {\em occluding contours}, regions where the local surface normal is perpendicular to the viewing direction, and {\em creases}, edges along which the dihedral angle is small \cite{koenderink1990solid}. It is noted by DeCarlo \etal \cite{decarlo2003suggestive} that important details are missing if only those edges are rendered, and their solution is to add {\em suggestive contours}, regions where an occluding contour would appear with a minimal change in viewpoint. Because they have clear mathematical definitions, these edge-like representations can be computed directly with methods from differential geometry, given the 3D model.
In computer vision, a different set of visual cues is defined and inferred from the image alone, without knowledge of the 3D world, namely the {\em image edges} and {\em boundaries}. Image edges correspond to sharp changes in image intensity due to changes in albedo, surface orientation, or illumination~\cite{Forsyth2012ComputerV}. Boundaries, as formally defined by Martin \etal \cite{1273918}, are contours in the image plane that represent a change in pixel ownership from one object or surface to another. Nonetheless, this definition ignores the fact that a contour can also appear on a smooth surface of the same object, for example the {\em cusp} in Fig \ref{fig:compareboundary}. Since much progress has been driven by datasets, in practice the boundaries are ``defined'' by the seminal benchmarks of BSDS300 \cite{1273918} and BSDS500 \cite{amfm_pami2011}. Interestingly, despite their popularity, these datasets were originally designed and annotated as segmentation datasets. This means that boundaries are derived from closed segments annotated by humans \cite{Martin2001ADO}, and yet not all boundaries form closed shapes. The other related line of research revolves around the representation of sketches. Most works study the relationship among the strokes themselves without a reference object or image \cite{Ha2018ANR,Eitz2012HowDH,schneider2014sketch}. While some work on sketch-based image retrieval \cite{sketchy2016,eitz2011sketch} and sketch generation \cite{Ha2018ANR,Song2018LearningTS} indeed models the correspondence between sketch and image, the types of drawings used are far too simple or abstract and do not contain edge-level correspondence, making them unsuitable for training a generic scene sketch generator. A comparison is summarized in Fig \ref{fig:sketchcompare} and Tab \ref{tab:comparesketch}. \begin{figure*}[t!]
\begin{center} \includegraphics[width=1\linewidth]{Fig/SketchComparison.pdf} \end{center} \vspace{-1em} \caption{Comparison with other edge-like representations. Here we order examples \cite{amfm_pami2011,sketchy2016,eitz2012hdhso,Ha2018ANR} by their level of abstraction. Our contour drawings cover more detailed internal boundaries than a boundary annotation, while having much better alignment to actual image contours and much more complexity than other drawing-based representations.} \vspace{+1em} \label{fig:sketchcompare} \end{figure*} \begin{table*}[t!] \centering \adjustbox{width=1\linewidth}{ \begin{tiny} \begin{tabular}{l|c|c|c|c|c} \hline Dataset & Edge Aligned & Multiple Obj & With Image & Vec Graphics & Stroke Order \\ \hline BSDS500 \cite{amfm_pami2011} & \checkmark & \checkmark & \checkmark & \ding{55} & \ding{55} \\ \hline Contour Drawing (ours) & Roughly & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline Sketchy \cite{sketchy2016} & \ding{55} & \ding{55} & \checkmark & \checkmark & \checkmark \\ \hline TU-Berlin \cite{eitz2012hdhso} & \ding{55} & \ding{55} & \ding{55} & \checkmark & \checkmark \\ \hline QuickDraw \cite{Ha2018ANR} & \ding{55} & \ding{55} & \ding{55} & \checkmark & \checkmark \\ \hline \end{tabular} \end{tiny} } \caption{Dataset comparison. Our proposed contour drawing dataset differs from prior work in terms of boundary alignment, multiple objects, corresponding image-sketch pairs, vector graphics encoding, and stroke ordering annotation.} \label{tab:comparesketch} \end{table*} To accommodate our research on contour drawings, we collect a dataset containing 5000 drawings (Sec \ref{sec:dataset}). The challenge in training a contour generator is to resolve the diversity among the contours for the same image obtained from multiple annotators. We address it by proposing a novel loss that allows the network to converge to an implicit consensus, while retaining details (Sec \ref{sec:gensketch}).
Our contour generator can be applied to salient boundary detection. By simply fine-tuning on BSDS500, we achieve state-of-the-art performance (Sec \ref{sec:boundarydetection}). Finally, we show our dataset can be expanded in a cost-free way with a sketch game (Sec \ref{sec:expansion}). Our code and dataset are available online\footnote{\url{http://www.cs.cmu.edu/~mengtial/proj/sketch}}. \section{Collecting Contour Sketches} \label{sec:dataset} We create our novel task with the popular crowd-sourcing platform Amazon Mechanical Turk \cite{Buhrmester2011AmazonsMT}. To collect drawings that are roughly boundary aligned, we allow the Turkers to trace over a faded background image. In order to obtain high-quality drawings, we design a labeling interface with a detailed instruction page including many positive and negative examples. Quality control is realized through manual inspection, treating drawings of the following types as rejection candidates: (1) missing inner boundaries, (2) missing important objects, (3) large misalignment with original edges, (4) unrecognizable content, (5) humans drawn as stick figures, (6) shading on empty areas. Finally, we collect 5000 high-quality drawings on a dataset of 1000 outdoor images crawled from Adobe Stock \cite{adobestock}; each image is paired with exactly 5 drawings. In addition, we have 1947 rejected submissions, which will be used in setting up an automatic quality guard as discussed in Sec \ref{sec:expansion}. \section{Sketch Generation} \label{sec:gensketch} In this section, we propose a new deep learning-based model to generate contour sketches from a given image and evaluate it against competing methods both objectively and subjectively. A unique aspect of our problem is that each training image is associated with multiple ground truth sketches drawn by different annotators.
\subsection{Previous Methods} Early methods of line-drawing generation focus on human faces, where explicit models are built to represent facial features \cite{brennan1984caricature,937657}. Other work focuses on generating the style while leaving the task of deciding which edges to draw to the user \cite{kang2005interactive,Xie2015StrokeBasedSL}. More recently, Ha \etal \cite{Ha2018ANR} used an LSTM to sequentially generate the strokes of simple doodles consisting of several strokes. However, our contour drawings on average contain 44 strokes and around 5,000 control points, far beyond the capacity of existing sequential models. \subsection{Our Method} \begin{figure*}[t] \begin{center} \includegraphics[width=1\linewidth]{Fig/Model.pdf} \end{center} \caption{We train an image-conditioned contour generator with a novel MM-loss (Min-Mean-loss) that accounts for multiple diverse outputs encountered during training (top row). Training directly on the entire set of image-contour pairs generates conflicting gradients. To rectify this, we carefully aggregate the discriminator ``GAN'' loss and the regression ``Task'' loss. The discriminator averages the GAN loss across all image-contour pairs, while the regression loss finds the minimum-cost contour to pair with this image (determined on-the-fly during learning). This ensures that the generator will not simply regress to the ``mean'' contour, which might be invalid. Photo by alexei\_tm -- \url{stock.adobe.com}. } \label{fig:model} \end{figure*} Naturally, the problem of generating contour drawings can be cast as an image translation problem or a classical boundary detection problem. Given the popularity of using conditional Generative Adversarial Networks (cGANs) to generate images from sketches or boundary maps, one might think the apparently easier inverse problem can be solved by reversing the image generation direction.
However, none of the existing cGAN methods \cite{isola2017image,CycleGAN2017,liu2017unsupervised,zhu2017multimodal} have shown results on such a task, and our experiments show that they do not work on sketch generation out-of-the-box. We conjecture that this is because drawings are a sparse, discrete representation compared to textured images, for which gradients may be easier to obtain. Moreover, our dataset has more than one target for each source image (a 1-to-many mapping), and modeling such diversity makes optimization difficult. On the other hand, classical boundary detection approaches linearly combine the different ground truths to form a single target for each input. This form of data augmentation bypasses the need to model diverse outputs and yields a soft target, but it breaks down when the multiple ground truths contain edges that are not perfectly aligned: the soft values no longer encode boundary strength, only how well the edges happen to coincide. Training on such data yields unreasonable output for both our method and existing boundary detection methods. Hence, our problem cannot be trivially solved by training or fine-tuning boundary detectors on the contour drawing dataset. Another issue with the soft representation, as found in \cite{Hou2013BoundaryDB}, is its poor correlation with actual boundary strength. We share the same finding in our experiments, as it is difficult to find a single threshold for the final output that works well for all images. In this work, we use a different cGAN with a novel MM-loss (Fig \ref{fig:model}). \subsubsection{Formulation} We leverage the recently popular framework of adversarial training. In a Generative Adversarial Network (GAN), a random noise vector $z$ is fed into the generation network $G$ to generate an output image $y$. In the conditional setup (cGAN), the generator takes an image $x$ as input and, together with $z$, maps it to a $y$.
The generator $G$ aims to generate ``real'' images conditioned on $x$, while another discriminator network $D$ is adversarially trained to tell the generated images from the actual ground-truth targets. Mathematically, the loss for such an objective can be written as \begin{align} \mathcal{L}_{cGAN}(x, y, z) = \min_G\max_D \mathbb{E}_{x,y}[\log D(x,y)] + \notag \\ \mathbb{E}_{x,z}[\log (1-D(x,G(x,z))] \label{eq:cGAN} \end{align} As found by previous work \cite{Mathieu2016DeepMV,isola2017image}, the noise vector $z$ is usually ignored in the optimization. Therefore, we do not include $z$ in our experiments. We also follow the common approach in cGANs of including a task loss in addition to the GAN loss. This is reasonable since we have a ground-truth target to compare with directly. For our contour generation task, we set the task loss to be the L1 loss, which encourages the sparsity required for contour outputs. The combined loss function now becomes \begin{align} \mathcal{L}_c(x, y) = \lambda \mathcal{L}_{cGAN}(x, y) + \mathcal{L}_{\texttt{1}}(x, y), \label{eq:Lc} \end{align} where the non-negative constant $\lambda$ adjusts the relative strength of the two objectives. Note that when $\lambda = 0$, the model reduces to a simple regression. The above formulation assumes a 1-to-1 mapping between the two domains. However, we have multiple different targets $y_i^{(1)}, y_i^{(2)}, ..., y_i^{(M_i)}$ for the same input $x_i$, making it a 1-to-many mapping problem. Note that the number of targets $M_i$ may vary from example to example. If we ignore the 1-to-many structure, this reduces to a regular 1-to-1 mapping problem with pairs $(x_1, y_1^{(1)})$, ..., $(x_1, y_1^{(M_1)})$, ..., $(x_N, y_N^{(1)})$, ..., $(x_N, y_N^{(M_N)})$ fetched in random order to train the network. Our method instead treats $(x_i, y_i^{(1)}, ..., y_i^{(M_i)})$ as a single training example.
To accommodate the extra targets in each training example, we propose a novel MM-loss (Min-Mean-loss) (Fig \ref{fig:model}). Two different aggregate functions are used for the generator $G$ and the discriminator $D$ respectively. The final loss for each training example becomes \begin{align} \mathcal{L}(x_i, y_i^{(1)}, ..., y_i^{(M_i)}) = \frac{\lambda}{M_i} \sum_{j=1}^{M_i}\mathcal{L}_{cGAN}(x_i, y_i^{(j)}) + \notag \\ \min_{j \in \{1, ..., M_i\}} \mathcal{L}_{\texttt{1}}(x_i, y_i^{(j)}). \label{eq:L} \end{align} The ``mean'' aggregate function asks the discriminator to learn from all modalities in the target domain and to treat those modalities with equal importance. The ``min'' aggregate function allows the generator to adaptively pick the most suitable modality to generate on-the-fly. Therefore, the problem of conflicting gradients caused by different modalities is greatly alleviated. In the diagnostic experiments (Tab \ref{tab:ablation}), we find that training on the consensus drawing outperforms the baseline method, while training on the complete set of sketches with the MM-loss outperforms training just on the consensus. The ``min'' aggregation function is reminiscent of the stochastic multiple-choice loss \cite{lee2016stochastic}, which relies on a single target output but learns multiple network output branches to generate diverse outputs. In our setting, we have a single stochastic output but multiple ground-truth targets, and a part of the network (the discriminator) still uses the set of all ground truths to back-propagate gradients. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{Fig/Consensus.pdf} \end{center} \vspace{-1em} \caption{Finding consensus from diverse drawings. In row 1, we visualize 3 different ground truth drawings corresponding to the same image, followed by their overlay in the fourth column. We match strokes in one drawing to another, removing those strokes that could not be matched (row 2).
The leftover matched strokes (row 3) are used for evaluation. Note that our novel loss allows us to train on the original drawings (row 1) directly, and this outperforms training on the consensus (row 3), as shown in the second-to-last row of Tab \ref{tab:ablation}.} \label{fig:consensus} \end{figure} We use a standard encoder-decoder architecture (ResNet \cite{He2016DeepRL} based) that yields good performance on style translation tasks \cite{Johnson2016PerceptualLF}. Unlike other pixel generation tasks, we find that skip connections between the encoder and the decoder hurt performance. The reason might be that our targets contain mainly object boundaries instead of texture edges, and removing the skip connections suppresses this low-level information. In many pixel-level prediction tasks, skip connections are added to make pixel-accurate predictions. However, we find that pixel accuracy is already achievable by the network itself since our output is sparse, as evidenced by the pixel-accurate predictions of the same model applied to boundary detection (Sec \ref{sec:boundarydetection}). For the discriminator, we use a regular global GAN as opposed to the PatchGAN \cite{pix2pix2017} of related work. Although the PatchGAN helps other networks generate nice textures, it discourages the network from ``thinking'' globally, resulting in many broken edges along a single contour of an object. This problem is alleviated when using the global GAN. An ablation study is provided in Tab \ref{tab:ablation}, with the evaluation metric explained in the next subsection.
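The per-example aggregation of the MM-loss can be illustrated with a short sketch (a NumPy toy, not our actual training code; `gan_losses` stands in for the per-pair cGAN loss values computed by the discriminator, and all names are illustrative):

```python
import numpy as np

def mm_loss(pred, targets, gan_losses, lam=1.0):
    """MM-loss for one training example (illustrative sketch).

    pred:       generated contour map, shape (H, W)
    targets:    list of M ground-truth contour maps, each (H, W)
    gan_losses: per image-contour-pair cGAN loss values (stand-ins)
    lam:        weight of the GAN term (lambda in the paper)
    """
    # "Mean" aggregation: the GAN term averages over all image-contour
    # pairs, so every annotation modality supervises the discriminator.
    gan_term = sum(gan_losses) / len(gan_losses)
    # "Min" aggregation: the L1 term keeps only the closest target, so
    # the generator commits to one valid modality instead of regressing
    # to the (possibly invalid) mean contour.
    l1_term = min(np.abs(pred - t).mean() for t in targets)
    return lam * gan_term + l1_term
```

The ``min'' keeps the generator from averaging across incompatible drawings, while the ``mean'' still exposes the discriminator to every annotation.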
\begin{table}[t] \centering \begin{tabular}{lrrr} \toprule Method & F1-score & Precision & Recall \\ \midrule pix2pix \cite{pix2pix2017} (baseline) & 0.514 & 0.585 & 0.458 \\ \midrule + ResNet generator & 0.561 & 0.620 & 0.512 \\ + our MM-loss & 0.765 & 0.814 & 0.722 \\ + GlobalGAN & 0.773 & 0.720 & 0.835 \\ + augmentation (\textbf{final}) & {\bf 0.826} & 0.861 & 0.794 \\ \midrule - train on consensus & 0.802 & 0.915 & 0.714 \\ - remove GAN loss & 0.778 & 0.889 & 0.692 \\ \bottomrule \end{tabular} \caption{Ablation study of our method on the validation set. The metrics are explained in Sec \ref{sec:sketchobjeval}. We build up our model from a baseline method \cite{pix2pix2017}; the final model uses the ResNet generator without skip connections, a global discriminator, and our proposed MM-loss. Moreover, despite the inconsistency in the non-consensus strokes, training on the original drawings outperforms training on just the consensus strokes (second-to-last row). We conjecture that our MM-loss can resolve conflicting supervision on the fly. The last row also shows that by adopting adversarial training, we outperform pure regression.} \label{tab:ablation} \end{table} \begin{figure*}[t] \begin{center} \includegraphics[width=1\linewidth]{Fig/SketchQualitative.pdf} \end{center} \vspace{-0.5em} \caption{Qualitative results for contour drawing generation. Columns 3 and 5 are results at their optimal thresholds for the entire test set (\ie ODS, for readers familiar with the BSDS evaluation \cite{amfm_pami2011}). Note that it is non-trivial to convert the soft output of HED \cite{xie15hed} (column 4) to clean sketches (column 5) without artifacts (broken edges). The last row shows that our method fails at places with partial occlusion (water splash). In contrast, human annotators are not confused by such occlusion.
Photos from top to bottom by martincp, BlueOrange Studio, and Phil Stev -- \url{stock.adobe.com}.} \label{fig:visualsketch} \end{figure*} \subsection{Evaluation} {\bf Quantitative Evaluation} \label{sec:sketchobjeval} Boundary detection has a well-established evaluation protocol that matches predicted pixels to the ground truth under a given offset tolerance \cite{1273918,amfm_pami2011}. Matching is done with min-cost bipartite assignment \cite{Goldberg1995,Cherkassky1997}. To apply this approach to contour generation, we first need to reconcile the diverse drawing styles in the ground-truth set. Hou \etal \cite{Hou2013BoundaryDB} propose a {\em consensus matching} evaluation of boundary detection that refines the ground truth by matching pixels from one human annotation to another, removing those that are not unanimously matched across all annotators. We follow suit, but match at the stroke level to ensure that strokes are not broken up in the final consensus drawing (Fig \ref{fig:consensus}). In addition, since contour drawings are not exactly aligned with the image boundaries, we double the standard offset tolerance used for boundary evaluation. The evaluation treats each ground-truth pixel as an ``object'' in the precision-recall framework. We split the set of 1000 images with associated sketches into train-val-test sets of 800-100-100. The results are shown in Tab~\ref{tab:sketch} and Fig \ref{fig:visualsketch}. Pix2Pix and our method are trained on our dataset while HED is off-the-shelf. As explained earlier, boundary detection methods cannot work with diverse ground truths and imperfect alignment between the annotations, and fine-tuning them on our dataset yields worse performance.
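The tolerance-based matching behind this metric can be sketched as follows (a simplified greedy matcher; the actual protocol uses min-cost bipartite assignment \cite{Goldberg1995,Cherkassky1997} and our stroke-level consensus, and the function name and point-list interface here are illustrative):

```python
import numpy as np

def f1_with_tolerance(pred_pts, gt_pts, tol):
    """Greedily match predicted contour pixels to ground-truth pixels
    within an offset tolerance, then report (F1, precision, recall).
    A stand-in for the BSDS-style bipartite assignment."""
    gt_free = list(gt_pts)
    matched = 0
    for p in pred_pts:
        for q in gt_free:
            if np.hypot(p[0] - q[0], p[1] - q[1]) <= tol:
                gt_free.remove(q)   # each GT pixel matches at most once
                matched += 1
                break
    precision = matched / max(len(pred_pts), 1)
    recall = matched / max(len(gt_pts), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return f1, precision, recall
```

Doubling `tol` mirrors our relaxation of the standard offset tolerance for loosely aligned drawings.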
\begin{table}[t] \centering \begin{tabular}{lcccc} \toprule Method & F1 & Prec & Rec & Human \\ \midrule pix2pix \cite{pix2pix2017} & 0.536 & 0.625 & 0.469 & 4.0\% \\ gPb-OWT-UCM \cite{amfm_pami2011} & 0.697 & 0.634 & 0.774 & 4.5\% \\ HED \cite{xie15hed} & 0.782 & 0.779 & 0.785 & 13.0\% \\ {\bf Ours} & {\bf 0.822} & 0.879 & 0.773 & {\bf 19.5}\% \\ \bottomrule \end{tabular} \caption{We evaluate contour drawing generation using standard schemes for evaluating boundary detection, modified to allow for larger pixel offsets during matching. Our method outperforms strong baselines for image translation and boundary detection. In the last column, we measure accuracy with an A/B user study. Our generated drawings are able to fool significantly more human subjects. The user study is consistent with our quantitative evaluation, suggesting that boundary detection accuracy is a reasonable proxy for contour drawing generation.} \label{tab:sketch} \end{table} {\bf Perceptual Study} Besides measuring the F1-score, we also evaluate results with A/B testing in a perceptual study (the last column in Tab~\ref{tab:sketch}). For each algorithm, we present AMT Turkers~\cite{Buhrmester2011AmazonsMT} with a tuple consisting of an image, the generated drawing, and a random human drawing for that image, shown for 5 seconds. Turkers are then asked to select the one drawn by a human. Past A/B tests for image generation tend to use a shorter presentation time of 1 second~\cite{pix2pix2017,CycleGAN2017}, making it easier to fool a user. {\bf In-the-Wild Testing} We also test the generalizability of our method on arbitrary Internet images. Note that the results here are obtained by directly applying the top-performing model on the sketch validation set, {\em without any tuning of the hyperparameters}.
The qualitative results in Fig \ref{fig:teaser} show that our model has learned general representations for salient contours in images without content bias, and it incorporates the random perturbations present in human drawings. The generalization power to unseen content suggests that our method can be applied to other tasks, for instance salient boundary detection, which is discussed in the following section. Moreover, the generalization to arbitrary content is also crucial for designing a human-in-the-loop learning scheme for dataset expansion (Sec \ref{sec:expansion}). \section{Application to Boundary Detection} \label{sec:boundarydetection} \begin{figure*}[t] \begin{center} \includegraphics[width=1\linewidth]{Fig/BSDSQualitative.pdf} \end{center} \vspace{-1em} \caption{Qualitative results for salient boundary detection. Our method generates complete objects more frequently than competing methods, yet without over-generating texture edges. The last row shows a challenging case where all methods fail.} \label{fig:visualbsds} \end{figure*} For an algorithm to generate a contour drawing, it first needs to identify the salient edges in an image, which implies that our contour sketches can be re-purposed for salient boundary detection. {\bf Previous Methods} Edge detectors are precursors to boundary detectors. Those methods \cite{kittler1983accuracy,Canny1986ACA} are usually filter-based and closely related to intensity gradients in images. Since Martin \etal \cite{1273918} first raised the problem of boundary detection, many efforts \cite{amfm_pami2011,Dollr2006SupervisedLO,Ren2008MultiscaleIB,Lim2013SketchTA,Dollr2015FastED} have been devoted to building learning methods upon hand-crafted features. Benchmark performance was improved by a series of deep methods \cite{Shen2015DeepContourAD,Bertasius2015DeepEdgeAM,xie15hed}. However, most of these methods merge the set of annotations into one before training, ignoring the annotation inconsistency issue.
In addition, adversarial training has not yet been applied to boundary detection. {\bf Salient Boundaries} Image boundaries may be somewhat ambiguous. This can be seen from the open-ended instructions used to annotate BSDS500 \cite{MartinFTM01, amfm_pami2011} (divide the images into 2 to 20 regions), which often results in inconsistent boundaries labeled by annotators for the same image. Hou \etal \cite{Hou2013BoundaryDB} studied this inconsistency issue in BSDS500 with a series of experiments. They define {\em orphan} labels to be boundaries labeled by only a single annotator. They then ask human subjects if a particular algorithm's false alarm (predicted boundary pixel that is not matched to any ground-truth) is stronger than a randomly-selected orphan. Subjects selected the false alarm 50.2\% of the time. This seems worrisome as nearly half of the false alarms are stronger than a ground-truth boundary. When repeating this experiment with a consensus boundary pixel rather than an orphan boundary, subjects select the false alarm only 10.9\% of the time. Motivated by this observation, Hou \etal suggest evaluating results using only consensus boundary pixels as the ground-truth. \begin{table}[t] \centering \begin{tabular}{lccc} \toprule Method & F1-score & Precision & Recall \\ \midrule gPb-OWT-UCM \cite{amfm_pami2011} & 0.591 & 0.512 & 0.698 \\ DeepContour \cite{Kokkinos2010BoundaryDU} & 0.615 & 0.540 & 0.714 \\ DeepEdge \cite{Bertasius2015DeepEdgeAM} & 0.651 & 0.608 & 0.700 \\ HED \cite{xie15hed} & 0.665 & 0.580 & 0.778 \\ RCF \cite{Liu2017RicherCF} & 0.693 & 0.621 & 0.784 \\ Ours (w/o pre-training) & 0.637 & 0.627 & 0.646 \\ {\bf Ours (final)} & {\bf 0.707} & 0.706 & 0.708 \\ \bottomrule \end{tabular} \caption{Salient (consensus) boundary detection on the BSDS500 dataset, a standard benchmark set up by \cite{Hou2013BoundaryDB}. 
} \label{tab:strongbsds} \vspace{-1em} \end{table} Qualitatively, we find such boundary pixels correspond to salient contours on object boundaries. This criterion appears to be quite reliable in BSDS, but weak boundaries (on which not all annotators agree) may not be. Many of the weak boundaries may be artifacts caused by the original segmentation-like annotation protocol in BSDS. As a result, the standard BSDS benchmark favors algorithms that tend to over-generate boundary predictions so as to have high recall of weak boundaries. Therefore, we focus on salient boundary detection and adopt the evaluation criteria proposed by \cite{Hou2013BoundaryDB}. {\bf Results} BSDS500 is also a dataset with a 1-to-many mapping, so we can directly apply our method to this task. When we train our method on BSDS500, we experiment with two settings: with and without pre-training on our contour drawing dataset. The results are summarized in Tab \ref{tab:strongbsds} and Fig \ref{fig:visualbsds}. When our model is trained only on BSDS500, it performs worse than HED \cite{xie15hed} \& RCF \cite{Liu2017RicherCF}, which are pre-trained on ImageNet. But after fine-tuning, our method outperforms HED \& RCF by a sizeable margin. Interestingly, our fine-tuned model learns to generate contours with precise pixel alignment. \section{Cost-Free Data Expansion} \label{sec:expansion} The edge alignment in our contour drawings is not as perfect as in BSDS500, but it is this very imperfection that makes data collection much easier, and therefore much more scalable. In this ``deep'' era, both BSDS500 and our current dataset are considered ``small''. Compared to the boundary annotations in BSDS500, our sketch annotations typically contain more details, yet sketching is a much easier and more interesting task than annotating precise boundaries, since we only require loose alignment.
During data collection, we frequently received comments like ``this is really fun'', ``I like this {\em game}'' and ``I enjoyed the task''. Motivated by such comments, we further extended the interface into a sketch drawing game with the goal of collecting large-scale data for free. {\bf Prior Work} Von Ahn and Dabbish \cite{Ahn2004LabelingIW} first pointed out that games can be used to label images. Their game asks a pair of players to guess the label of the same image and extracts the common input as the final label. Deng and colleagues \cite{Deng2013FineGrainedCF} built another game in which they mask an image of an object and ask the player which fine-grained class (e.g. the species of a bird) is present. The player is allowed to erase blobs of the mask, at some penalty, to better inspect the image. Several other games, WhatsMySketch \cite{Eitz2012HowDH} and QuickDraw \cite{Ha2018ANR}, are built around letting the player draw a sketch for an artificial intelligence system to recognize. DrawAFriend \cite{Limpaecher2013RealtimeDA} lets players trace images of celebrities or their friends and send the finished drawings to a friend, who guesses the identity. {\bf Gaming Interface} The fact that many data-collection games make use of sketches reinforces our observation that drawing itself is an engaging task. Therefore, we develop a game app for scalable data collection (Fig \ref{fig:sketchgame}; a demo video can be found on the project website). The challenge here is to set up a game reward system that provides the player with real-time feedback while also serving as an automatic quality-control mechanism for the collected data. In our game, we first process the image with a boundary detector or a contour generator (as described in Sec \ref{sec:gensketch}), and randomly sample points on the generated boundary map.
We then define those points to be reward points; when a player's stroke matches a reward point, he or she receives the reward associated with that point. We also randomly sample points that are far enough from the generated boundary map to be penalty points, \ie, when a player's stroke comes too near a penalty point, he or she loses some points. The players thus get instant feedback on how well they draw, and the total score they obtain is a measure of sketch quality. Note that all reward points are hidden from the players. Since both the reward points and the penalty points are sparse, this does not force the player to precisely follow the algorithm's output. We then set a cut-off score (as a percentage of the total available reward) to make the final decision on whether to accept a sketch. {\bf Evaluation} To evaluate this reward system, or {\em AI score} for short, we conduct two experiments. In the first experiment, we treat the sketches from our initial data collection phase (Sec \ref{sec:dataset}) as ground truth, where 5000 of them are manually marked qualified and 1947 unqualified. The AI score identifies 90\% of the unqualified sketches, showing its capability of rejecting poor submissions. In the second experiment, we collected 100 additional sketches using the game interface and manually marked them as qualified or unqualified based on the standard in Sec \ref{sec:dataset}. On this new test, the AI score identifies 96\% of the unqualified sketches. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{Fig/SketchGame.pdf} \end{center} \vspace{-1em} \caption{Our sketch game for automatic large scale data collection. The game implements real-time reward/penalty feedback and an automatic quality control mechanism.} \label{fig:sketchgame} \end{figure} In the future, we plan to release the game to the public to build a free sketch collection machine.
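The reward/penalty scoring rule described above can be sketched in a few lines (an illustrative toy under our own names and constants, not the released game's code; the point-sampling step from the generated boundary map is assumed done upstream):

```python
import numpy as np

def score_drawing(stroke_pts, reward_pts, penalty_pts, tol=3.0,
                  reward=1.0, penalty=1.0):
    """Score a player's strokes against hidden reward/penalty points."""
    score = 0.0
    hit = set()
    for p in stroke_pts:
        for i, q in enumerate(reward_pts):
            # each hidden reward point pays out at most once
            if i not in hit and np.hypot(p[0] - q[0], p[1] - q[1]) <= tol:
                hit.add(i)
                score += reward
        # stroke points near a penalty point lose score
        if any(np.hypot(p[0] - q[0], p[1] - q[1]) <= tol for q in penalty_pts):
            score -= penalty
    return score

def accept(score, total_reward, cutoff=0.5):
    """Accept a sketch that collects a cut-off fraction of the reward."""
    return score >= cutoff * total_reward
```

Because the sampled points are sparse and hidden, a player can deviate locally from the algorithm's boundary map without being penalized.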
As the collection process continues, we can keep updating our sketch generation models, which allows us to generate more accurate reward points for the game, closing the loop into a never-ending sketch collection and generation system. \section{Conclusion} In this work, we examine the problem of generating contour drawings from images, introducing a dataset, benchmark criteria, and a generation model. From a graphics perspective, this problem generates aesthetically pleasing line drawings. From a vision perspective, contour sketches allow for the exploration of open contours arising from geometric occlusion events. We show that such data can be used to learn low-level representations that can be fine-tuned to produce state-of-the-art results for salient boundary detection. Looking forward, contour sketches appear to be a scalable alternative for collecting geometric visual annotations (potentially through game interfaces).
\section{Introduction} The G-equation is \begin{equation}\label{e.geqn} u_t = A|\nabla u| + V(t,x) \cdot \nabla u \ \hbox{ in } \ \mathbb{R}^d \times [t_0,\infty) \ \hbox{ with } \ u(x,t_0) = u_0(x) \end{equation} where $V$ is a space-time stationary ergodic random field on $\mathbb{R}_t \times \mathbb{R}^d_x $ that is Lipschitz continuous with $|V| \leq M$, \[ \nabla \cdot V(t,x) = 0, \ \hbox{ and } \ |\mathbb{E}[V](t,x)| <A.\] This is a simple model for premixed turbulent combustion. In this interpretation, the super-level sets of $u$ are ``burnt regions'' and the sub-level sets are ``unburnt regions'', $V$ models a turbulent fluid flow, and $A$ is the laminar flame speed. Usually $u$ is called $G$ in the applied literature, which explains the name of the equation. In mathematical terms this is a (geometric) Hamilton-Jacobi equation with convex Hamiltonian $H(p,t,x) = A|p| + V(t,x) \cdot p$. The difficulty of the problem comes from the lack of coercivity: it may be that $M \gg A$. The key consequences of coercivity are Lipschitz estimates (in the time independent case) and reachability estimates for controlled trajectories (in general). These estimates, derived from coercivity, play a fundamental role in homogenization results for Hamilton-Jacobi equations, but they are not available for the G-equation. Nonetheless, the formal intuition is that the Hamiltonian associated with the G-equation is ``coercive on average'' since $\mathbb{E} H(p,t,x) = A|p| + \mathbb{E}[V] \cdot p$ is coercive. Of course, one cannot just take expectations on both sides of \eref{geqn} and hope to derive something, since $V$ and $\nabla u$ are not independent. Nonetheless, as we will show here, the primary consequences of coercivity (Lipschitz/reachability estimates) are indeed recovered at the length/time scale $T(t,x)$ (a stationary random field) at which the space-time averages of $V$ centered at $(t,x)$ become less than $A$. We now put the above in more precise terms.
The G-equation \eref{geqn} has a natural control interpretation with trajectories \[ \dot{X}_t = V(t,X_t) + \alpha_t \ \hbox{ with any measurable control } \ |\alpha_t| \leq A.\] It turns out that \begin{equation}\label{e.controlformula} u(t,x) = \sup_{\{x_0 : \ x\in R_t(t_0,x_0)\}} u_0(x_0), \end{equation} where $R_t$ is called the reachable set, defined for $t \in \mathbb{R}$ by \[ R_{t}(t_0,x_0) = \left\{ x \in \mathbb{R}^d: \begin{array}{c} \hbox{ there exists a controlled trajectory $X$ on $[t_0,t]$}\\ \hbox{with $X_{t_0} = x_0$ and $X_t = x$} \end{array}\right\}. \] The reachable set from a given space-time point is the main object of interest in this study. The indicator function ${\bf 1}_{R_t(t_0,x_0)}(t,x)$ is a special solution of \eref{geqn}; in PDE language it is like a nonlinear version of a fundamental solution. We say that there is a finite waiting time if there is a stationary random field $T: \mathbb{R}^d \times \mathbb{R} \to [0,\infty)$ that is finite almost surely and for which the following delayed coercivity condition holds: there exists a universal $c>0$ such that \begin{equation}\label{e.waitcoercivity} B_{c(A-|\mathbb{E}[V]|)|t-t_0|}(x_0) \subset R_{t}(t_0,x_0) \ \hbox{ for all } \ |t -t_0|\geq T(t_0,x_0). \end{equation} In the time independent case $V(t,x) = V(x)$, by some simple manipulations of the control formula \eref{controlformula} using \eref{waitcoercivity}, it follows that solutions of the G-equation are Lipschitz continuous at length/time scales larger than the waiting time: \[ |u(t,x) - u(s,y)| \leq \|\nabla u_0\|_\infty[2MT(x)\vee T(y) + |x-y|+M|t-s|].\] Thus, this can be thought of as a large scale regularity property. In general such results play an important role in quantitative homogenization theory. The waiting time estimate \eref{waitcoercivity} has been proved previously in space-time periodic media \cite{XinYu,CardaliaguetNolenSouganidis} and in stationary ergodic (time independent) media \cite{NolenNovikov,CardaliaguetSouganidis}.
Recently, Burago, Ivanov and Novikov \cite{BuragoIvanovNovikov} proved \eref{waitcoercivity} in space-time uniformly ergodic environments, a class which at least includes periodic, almost periodic, and some finite range dependence random velocity fields with special structure. We give the first proof of \eref{waitcoercivity} in the most general setting for homogenization theory, space-time stationary ergodic random environments, building on the new ideas of \cite{BuragoIvanovNovikov}. This also gives a new proof of finite waiting time in time independent media, which was proved some time ago by Cardaliaguet and Souganidis \cite{CardaliaguetSouganidis}. In what follows we make some simplifications and consider only the case $A = 1$ and $\mathbb{E}[V] = 0$. The general case presented above can be recovered by rescaling the time variable and ``trading'' some of the control to make $\mathbb{E}[V] = 0$. The starting point for understanding the spreading of the reachable set $R_t$ follows from the divergence free condition and the isoperimetric inequality. Integrating \eref{geqn} over $\mathbb{R}^d$, since ${\bf 1}_{R_t}$ itself is a solution of the G-equation, \[ \frac{d}{dt}|R_t| = \int_{\partial R_t} 1 + V(t,x) \cdot n \ dS = |\partial R_t| \geq d\omega_d^{1/d}|R_t|^{1-\frac{1}{d}}.\] Integrating this differential inequality from $t_0$ to $t$ yields \begin{equation}\label{e.Rdvolgrowth} |R_t| \geq \omega_d (t-t_0)^d. \end{equation} In combination with the uniform upper bound $M$ on the speed of trajectories one can obtain \[ |R_t(t_0,x_0) \cap B_{M(t-t_0)}(x_0)| \geq \omega_d(t-t_0)^d.\] This estimate, however, contains no information on how the reachable set spreads.
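For completeness, the integration step leading to \eref{Rdvolgrowth} can be spelled out (a routine ODE comparison; this elaboration is ours):

```latex
% Chain rule on |R_t|^{1/d}, using the isoperimetric inequality above:
\frac{d}{dt}\,|R_t|^{1/d}
  \;=\; \frac{1}{d}\,|R_t|^{\frac{1}{d}-1}\,\frac{d}{dt}|R_t|
  \;\ge\; \omega_d^{1/d}.
% Integrate from t_0 to t and drop the nonnegative initial term:
|R_t|^{1/d} \;\ge\; |R_{t_0}|^{1/d} + \omega_d^{1/d}\,(t-t_0)
  \;\ge\; \omega_d^{1/d}\,(t-t_0).
% Raising to the d-th power recovers |R_t| \ge \omega_d\,(t-t_0)^d.
```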
In what follows we write \[ \Box_r(x) = x+(-\tfrac{r}{2},\tfrac{r}{2})^d. \] Let us consider a localization of the reachable set growth estimate on a large open box $\Box_r = \Box_r(x_0)$: \[ \frac{d}{dt} |\Box_r \cap R_t |_d =|\Box_r \cap \partial R_t|_{d-1} - \textup{flux}(V,R_t \cap \partial \Box_r). \] If the flux term were not present, the relative isoperimetric inequality would show that $\Box_r$ is completely filled by $R_t$ in a time proportional to $r$. Thus the issue lies with the flux through $R_t \cap \partial \Box_r$. The clever new arguments introduced by Burago, Ivanov, and Novikov \cite{BuragoIvanovNovikov} show how to control this flux in terms of only the uniform convergence of the spatial averages. Given $\varepsilon>0$ define \begin{equation}\label{e.uniformmean} r^*_\varepsilon = \sup \left\{ r >0: \ \sup_{x,t} \left|\frac{1}{|\Box_r|_d}\int_{\Box_r(x)} V(t,y) \ dy\right| \geq \varepsilon\right\}. \end{equation} There is a universal $\varepsilon_0(d,M)$ such that if $r^*_{\varepsilon_0}<+\infty$ then \cite{BuragoIvanovNovikov} obtain a finite waiting time $T$ (independent of $t,x$) such that \eref{waitcoercivity} holds. This is similar to, although slightly weaker than, uniform ergodicity, the condition that the ergodic averages converge to the mean uniformly in space-time. This makes heuristic sense since we imagine that the problem is coercive on average at length scale $r^*_{\varepsilon_0}$. This condition does hold for periodic, almost periodic, and also for a non-trivial class of finite range of dependence random velocity fields $V$ \cite[Remark 6.5]{BuragoIvanovNovikov}. Still, the condition is fairly restrictive in the context of random media: the uniform space-time convergence of the ergodic averages contained in the condition $r^*_\varepsilon<+\infty$ for small $\varepsilon>0$ will not hold in general, even for random velocity fields with good mixing properties.
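A discrete caricature of the quantity \eref{uniformmean} may help fix ideas: given one component of $V$ sampled on a space-time grid, compute the worst-case box average at each scale and scan for the largest scale at which it still exceeds $\varepsilon$ (a NumPy toy with $d=1$ in space; the names and the scan are ours, not from \cite{BuragoIvanovNovikov}):

```python
import numpy as np

def max_box_avg(V, n):
    """Largest absolute average of V over spatial boxes of n grid cells:
    a discrete stand-in for the sup over (x, t) in the definition of
    r^*_eps.  V holds one velocity component, shape (T, X)."""
    T, X = V.shape
    best = 0.0
    for t in range(T):
        for x0 in range(X - n + 1):
            best = max(best, abs(V[t, x0:x0 + n].mean()))
    return best

def r_star(V, eps):
    """Grid analogue of r^*_eps: the largest box size whose worst-case
    average still exceeds eps (0 if no scale does)."""
    T, X = V.shape
    for n in range(X, 0, -1):   # scan from the largest scale down
        if max_box_avg(V, n) >= eps:
            return n
    return 0
```

An oscillating mean-zero field has small averages at large scales, so `r_star` stays bounded, which is the discrete shadow of $r^*_\varepsilon < +\infty$.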
In the present paper we greatly generalize the class of velocity fields to which these methods are applicable. We are able to prove \eref{waitcoercivity} in the class of space-time stationary ergodic velocity fields; this is the weakest assumption which is widely used in studying homogenization. We will only need to control a much weaker quantity than \eref{uniformmean}. First define the space-time boxes \[ Q_r(t,x) = (t,x) + (-\tfrac{r}{2}, \tfrac{r}{2}) \times \Box_r . \] Fix $N> 1$ and define the empirical averages \begin{equation}\label{e.en} E_N[V;Q_r] = \sup_{(k,n) \in \mathbb{Z} \times \mathbb{Z}^d , \ |(k,n)|_\infty < N/2}\left|\frac{1}{|Q_{r/N}|}\int_{Q_{r/N}(k\frac{r}{N},n\frac{r}{N})} V(t,x) \ dxdt\right|. \end{equation} Define $E_N[V;Q_r(t,x)]$ analogously for $(t,x) \in \mathbb{R}_t \times \mathbb{R}^d_x$. From the ergodic theorem and $\mathbb{E}[V] = 0$, the following limit holds on an event of full probability \[\lim_{r \to \infty} E_N[V;Q_r(t,x)] = 0 \ \hbox{ for all } \ (t,x) \in \mathbb{R}_t \times \mathbb{R}^d_x.\] Then define \begin{equation}\label{e.rnequantity} r^*_{N,\varepsilon}(t,x) = \sup \left\{ r >0: \ E_N[V;Q_r(t,x)] \geq \varepsilon\right\}. \end{equation} This quantity is stationary in $(t,x)$ and, by the ergodic theorem (see Akcoglu and Krengel \cite{AkcogluKrengel}), it is finite almost surely for every $\varepsilon >0$. We show that $r^*_{N,\varepsilon}$ controls the waiting time for a sufficiently large, universal, $N$. \begin{theorem}\label{t.main} Suppose that $V$ is a space-time stationary ergodic random field, uniformly bounded $\|V\|_\infty \leq M$, and Lipschitz continuous. Then there are dimensional constants $c(d),C(d)>0$ such that \[ B_{c|t-t_0|}(x_0) \subset R_{t}(t_0,x_0) \ \hbox{ for all } \ |t -t_0|\geq T(t_0,x_0):= Cr^*_{N,\varepsilon}(t_0,x_0) \] for $\varepsilon = C^{-1}M^{-(d-1)}$ and $N = \lceil C M^{5d+2}\rceil$. In particular $T$ is finite almost surely.
\end{theorem} Since the dependence of $\varepsilon$ and $N$ on $M$ is quite explicit, this would likely make it possible to consider unbounded velocity fields with finite moments, at least under some mixing conditions. This bound also makes it possible to derive mixing estimates and tail bounds on the waiting time if we assume mixing conditions on $V$. In particular one can get an explicit upper bound (depending on $M$) for the length scale where coercivity first holds with high probability. We give an example result in this direction. The statement will be slightly imprecise, since we do not want to explicitly define this mixing condition yet. When we say a constant is universal below, we mean it depends at most on the dimension and the hidden constants in the mixing condition; see \sref{mixing} for a precise description. \begin{corollary}\label{c.mixing} Suppose that $V$ satisfies an $\alpha$-mixing condition with stretched exponential decay with stretching exponent $\beta>0$ and unit length scale. Then there are universal constants $c,C>0$ such that \[\P(T(t,x) \geq \tau) \leq C\exp\left[-c(\tfrac{\tau}{\ell(M)})^{\beta'}\right]\] and $T$ is $\alpha$-mixing with stretched exponential decay with exponent $\beta' = \frac{(d+1)\beta}{d+1+\beta}$ and length scale $\ell(M) = CM^{2(d-1)/\beta'}(1+|\log M|)^C$. \end{corollary} We expect that this quantitative regularity will have applications to quantitative homogenization of the G-equation, especially in the time independent case. \subsection{Literature} The G-equation was introduced by Williams \cite{Williams}; it is a popular simple model for flame propagation in turbulent combustion \cite{Peters}.
In the mathematical community, popular questions have been related to homogenization \cite{XinYu,CardaliaguetNolenSouganidis,NolenNovikov,CardaliaguetSouganidis,BuragoIvanovNovikov0,BuragoIvanovNovikov,henderson2018brownian} and to quantifying front speed enhancement as $M \to \infty$ \cite{MR2847532,MR3132416,MR3228465,MR3237880,MR3549020,MR3580814}. The topic of asymptotic flame speed enhancement has led to some very interesting mathematics, including connections with dynamical systems; however, this direction seems to be less relevant to the present paper, so we will focus on explaining the works studying homogenization. In the homogenization results listed below, progressively more general coercivity estimates have been developed. What we want to emphasize about our coercivity estimate, in comparison to previous results, is that it allows the most general assumption on the random media while still having explicit dependence on the random field $V$. The first works on homogenization of the G-equation were by Cardaliaguet, Nolen and Souganidis \cite{CardaliaguetNolenSouganidis} (considering space-time periodic media) and, at the same time, by Xin and Yu \cite{XinYu} (considering time independent periodic media). In stationary ergodic media (time independent), in $d=2$, Nolen and Novikov \cite{NolenNovikov} proved homogenization assuming the existence of a stream function with a certain growth condition; this would follow from a sufficiently strong mixing condition on the field. A key step there is a waiting time estimate; their proof strongly uses the $2$-$d$ structure (a scalar stream function, and periodic trajectories which are boundaries of open sets). Their bound on the waiting time does explicitly depend on the spatial averages of $V$ via the stream function. Next Cardaliaguet and Souganidis \cite{CardaliaguetSouganidis} obtained a very general homogenization result, covering stationary ergodic media in all dimensions in the time independent case.
As an important step they proved a new waiting time bound in stationary ergodic (time independent) media; the proof is abstract, using ergodicity, so, unlike our result, the dependence of the waiting time on $V$ is not explicit. Finally we come to the works of Burago, Ivanov and Novikov \cite{BuragoIvanovNovikov0,BuragoIvanovNovikov}, which, as explained above, are the inspiration for the present work. Their main results were the delayed coercivity condition \eref{waitcoercivity} under the uniform convergence of the means \eref{uniformmean}, and a homogenization result in space-time finite range dependence media with a special structure. Our new result, building on \cite{BuragoIvanovNovikov}, is the first bound on the waiting time in the most general setting of space-time stationary ergodic media. \subsection{Acknowledgments} Thank you to Takis Souganidis for many helpful conversations and for bringing the result \cite{BuragoIvanovNovikov} to my attention. Thank you to Inwon Kim and Chris Henderson for helpful comments on the manuscript. \subsection{Support} The author appreciates the support of the Friends of the Institute for Advanced Study and NSF RTG grant DMS-1246999. \section{Notation} For \sref{bdryflux} the only relevant parameters of the problem, which will appear in the estimates, are $d$ and $M = \|V\|_\infty$. Dependence on $d$ will be omitted in general; we write $c$, $C$ for positive constants depending at most on the dimension, which may change from instance to instance. All dependence on $M$ will be made explicit. In \sref{mixing} we will introduce some additional parameters related to the mixing condition; the dependence of the constants $c,C$ on these parameters will also be omitted. We will need to measure various lower-dimensional subsets $E$ of $\mathbb{R}^n$. We denote by $\mathcal{H}^m$ the $m$-dimensional Hausdorff measure. We usually write $|E|_m = \mathcal{H}^m(E)$ when it is not too confusing.
We will also make use of the perimeter: for an open $\Omega \subset \mathbb{R}^n$ and a set $E \subset \mathbb{R}^n$ \[ \textup{Per}(E,\Omega) = \sup \left\{ \int_{E \cap \Omega} \nabla \cdot \varphi \ dx: \ \varphi \in C^1_c(\Omega) \ \hbox{ and } \ |\varphi| \leq 1\right\}.\] This can also be defined on closed sets $K$ as the infimum of $\textup{Per}(E,\Omega)$ over open $\Omega \supset K$. It can also be defined similarly for $\Omega$ in a flat $m$-dimensional slice of $\mathbb{R}^n$, or a finite union of such slices. We will use this to compute perimeters on $d$-dimensional boundaries of boxes in dimension $d+1$. See \cite{BuragoIvanovNovikov} for more details and references on the geometric measure theoretic tools needed for the G-equation. \section{General strategy} This section outlines the general strategy of \cite{BuragoIvanovNovikov} to integrate the local volume growth ODE \begin{equation}\label{e.volumegrowth} \frac{d}{dt} |\Box_r(x_0) \cap R_t |_d =|\Box_r(x_0) \cap \partial R_t|_{d-1} - \textup{flux}(V,R_t \cap \partial \Box_r(x_0)). \end{equation} The argument proceeds in three steps: Step 1, filling a small fraction $\alpha|\Box_r(x_0)|$; Step 2, filling from $\alpha|\Box_r(x_0)|$ to $(1-\alpha)|\Box_r(x_0)|$; Step 3, filling the small complement. Step 1: Set $t_1 = t_0 + \frac{r}{2M}$; then we claim \begin{equation}\label{e.alphadef} |\Box_r \cap R_t|_d \geq \alpha |\Box_r|_d \ \hbox{ with } \ \alpha = \frac{\omega_d}{(2M)^d}. \end{equation} By the control formula and $|V| \leq M$ we have $R_{t} \subset \Box_r$ for $t_0 \leq t \leq t_1$. Thus $|R_t| = |\Box_r \cap R_t|_d$ and \eref{Rdvolgrowth} proves the claim. Step 2: If the boundary flux term could be ignored, then we could just integrate \eref{volumegrowth}, starting from \eref{alphadef}, using the relative isoperimetric inequality: \begin{equation}\label{e.relisoperimetric} |\Box_r \cap \partial R_t|_{d-1} \geq \lambda_1(d) \min\{ |\Box_r \cap R_t|_d, | \Box_r \setminus R_t|_d\}^{\frac{d-1}{d}}.
\end{equation} The central difficulty is to show that the boundary flux is appropriately small, and this is where the beautiful new ideas of \cite{BuragoIvanovNovikov} come in. In \sref{bdryflux} we show how to modify those ideas to handle a much more general class of velocity fields. Step 3: Suppose that $|R_{t_2} \cap \Box_{2r}|_d \geq (1-2^{-d}\alpha)|\Box_{2r}|_d$ at some time $t_2$. Let $y_0 \in \Box_r$. Then by Step 1 above, applied backwards in time from $(t_2+\tfrac{r}{2M},y_0)$, \[ |R_{t_2}(t_2+\tfrac{r}{2M},y_0) \cap \Box_{2r}|_d \geq \alpha |\Box_r|_d \geq 2^{-d}\alpha |\Box_{2r}|_d. \] Thus, for every $y_0 \in \Box_{r}(x_0)$ \[ R_{t_2}(t_2+\tfrac{r}{2M},y_0) \cap R_{t_2}(t_0,x_0) \neq \emptyset\] and so \[ \Box_r(x_0) \subset R_{t_2+\frac{r}{2M}}(t_0,x_0).\] Thus the proof of \tref{main} will be complete if we can prove the following proposition. \begin{proposition} Suppose that $r \geq r^*_{N,\varepsilon}(t,x)$ with $\varepsilon = c(d)M^{-(d-1)}$ and $N = C(d)M^{5d+2}$. Let $t_1(\Box_r)$ and $t_2(\Box_r)$ be as defined above ($t_2 = +\infty$ if no such time exists). Then $t_2(\Box_r)$ is finite and \[ t_2 - t_1 \leq C(d)r.\] \end{proposition} \subsection{Regularity} The regularity of the reachable set is already discussed in \cite{BuragoIvanovNovikov} and \cite{CardaliaguetSouganidis}; from \eref{volumegrowth} one can derive that $R_t$ is a finite perimeter set for almost every time. For our purposes we need a bit more. In particular we need to understand the regularity of the space-time reachable set $R$, of which $R_\tau = R \cap \{t = \tau\}$ are the time slices. We argue formally here and justify the argument below by regularization; we also consider only the case $t \geq t_0$, as the case $t \leq t_0$ follows by symmetry.
The indicator function of the reachable set ${\bf 1}_{R_t}$ solves the G-equation in the viscosity sense \[ \partial_t {\bf 1}_{R_t} = |\nabla {\bf 1}_{R_t}| + V(t,x) \cdot \nabla {\bf 1}_{R_t} \ \hbox{ in } \ \mathbb{R}^d \times [t_0,\infty)\] with initial data $R_{t_0} = \{x_0\}$. First, from the control interpretation, it is immediate that \[ R_t \subset B_{(1+M)(t-t_0)}(x_0).\] Taking absolute values on both sides of the G-equation \begin{equation}\label{e.1+M} |\partial_t {\bf 1}_{R_t}| \leq (1+M) |\nabla {\bf 1}_{R_t}|. \end{equation} If instead we integrate the G-equation over $\mathbb{R}^d$ and use the divergence theorem (recall $\nabla \cdot V = 0$) \[ \partial_t|R_t|_d = \int_{\mathbb{R}^d} |\nabla{\bf 1}_{R_t}|.\] Integrating in time \[ \omega_d(1+M)^d(t-t_0)^d \geq |R_t|_d - |R_{t_0}|_d = \int_{t_0}^t\int_{\mathbb{R}^d} |\nabla{\bf 1}_{R_\tau}| \ dxd\tau.\] Then using the pointwise inequality \eref{1+M} we obtain also \[ \int_{t_0}^t\int_{\mathbb{R}^d} |\partial_t{\bf 1}_{R_\tau}|+ |\nabla_x{\bf 1}_{R_\tau}| \ dxd\tau \leq \omega_d(1+M)^{d}(2+M)(t-t_0)^d.\] In summary, we have the following result: \begin{lemma} For each $(t_0,x_0)$ the reachable set $R(t_0,x_0)$ is a finite perimeter set of $\mathbb{R}_t \times \mathbb{R}^d_x$ with \[ \textup{Per}(R(t_0,x_0),[t_0-\tau,t_0+\tau] \times \mathbb{R}^d)\leq 2\omega_d(1+M)^{d}(2+M)\tau^d \ \hbox{ for all } \ \tau \geq 0.\] \end{lemma} \begin{proof} Apply all of the above arguments to the solution of the G-equation with initial data \[ u^\delta(t_0,x) = \varphi(|x-x_0|/\delta)\] where $\varphi$ is smooth, $\varphi(0) =1$, and $\varphi$ is supported in $B_1(0)$. Then, by the control formulation, $u^\delta$ converges pointwise to ${\bf 1}_{R_t}$ as $\delta \to 0$. The bound of the Lemma holds for the space-time BV norm of $u^\delta$; since the BV norm is lower semi-continuous along sequences converging in $L^1$, the result follows. \end{proof} We mention one other piece of regularity information, which is a lower continuity estimate on $|\Box_r \cap R_t|_d$.
\begin{lemma}\label{l.lowercont} For each $(t_0,x_0)$, each $\Box_r$, and $t \geq t_0$ \[ \frac{d}{dt}|\Box_r \cap R_t(t_0,x_0)|_d \geq -C(d)Mr^{d-1}.\] \end{lemma} \begin{proof} This follows from \eref{volumegrowth}, bounding the first term below by $|\Box_r \cap \partial R_t|_{d-1} \geq 0$ and the flux term by $|\textup{flux}(V,R_t \cap \partial \Box_r)| \leq M |\partial \Box_r|_{d-1} = 2dMr^{d-1}$. Again, a regularization as in the previous proof makes this rigorous. \end{proof} \section{Controlling the boundary flux}\label{s.bdryflux} The following arguments are adaptations of \cite{BuragoIvanovNovikov}. The aim is to control the flux term using the fact that the space-time averages of $V$ are small at the length scale $r\geq r^*_{N,\varepsilon}$ (at least down to scale $r/N$). We provide a heuristic description and then move to formal statements. The first claim is that the averages of $V$ on $d$-dimensional boundary faces of $\Box_r \times [t-r,t+r]$ are small. This is not true at scale $r/N$, but at some intermediate scale $Lr/N$ it follows from the mean value theorem and the divergence free condition. This is made precise in the following Lemma. \begin{lemma}\label{l.faceflux} Suppose that $r \geq r^*_{N,\varepsilon}(t,x)$, let $1 \leq L < N$ be an integer, let $F$ be a $(d-1)$-dimensional cube contained in $\Box_r(x)$ with side lengths $LN^{-1}r$, and let $I$ be a time interval of length at least $LN^{-1}r$ contained in $[t-r/2,t+r/2]$. Then \[\left|\int_I\textup{flux}(V,F)\ d \tau \right | \leq C(d)(\varepsilon+ML^{-1}) |F\times I|_{d}.\] \end{lemma} The next issue is that we are not looking at $\textup{flux}(V,\partial \Box_r \times [t-r,t+r])$ but at $\textup{flux}(V,R \cap (\partial \Box_r \times [t-r,t+r]))$. Looking at sub-faces of size $LN^{-1}r$ tiling the boundary, we see that if $R$ takes up most of the measure of a face, or only a small portion, then there is no problem. The only issue is on the sub-faces where $R$ and $R^C$ both take up a nontrivial portion of the measure.
However, in this case, by the relative isoperimetric inequality there must be a corresponding proportion of the total perimeter $\textup{Per}(R,\partial \Box_r \times I)$. This makes it possible to control the flux through $R \cap (\partial \Box_r \times [t-r,t+r])$ by the total flux (already small) plus a term involving the perimeter $\textup{Per}(R,\partial \Box_r \times I)$. This is not precisely how the argument goes; there are some nice tricks introduced in \cite{BuragoIvanovNovikov}, which we re-use. We make a technical note before stating the Lemma. It is convenient to establish our bounds actually on a space-time rectangle $\Box_r \times [t_0-\gamma r, t_0 + \gamma r]$, with a dimensional constant $\gamma = 1 + \frac{2d}{\lambda_1(d)}$, where $\lambda_1(d)$ is the constant of the relative isoperimetric inequality in the cube as in \eref{relisoperimetric}. \begin{lemma}\label{l.bdryflux} Let $[t-\gamma r,t+\gamma r] \times \Box_r(x) $ be a space-time rectangle with side length $r \geq r^*_{N,\varepsilon}(t,x)$, let $L \in \{1,\dots,N\}$, and let $I$ be a subinterval of $[t-\gamma r,t+\gamma r]$ of length $LN^{-1}r$. Then \[\left|\int_I\textup{flux}(V,R_\tau \cap \partial \Box_r(x)) \ d\tau\right| \leq C(d)MLN^{-1} r\textup{Per}(R,\partial \Box_r \times I) +C(d)(\varepsilon+ML^{-1}) | I \times \partial \Box_r|_d.\] \end{lemma} It is not a priori obvious, but it turns out to be optimal to choose $L = \lceil M\varepsilon^{-1} \rceil$. First we give the proof of \lref{bdryflux} using \lref{faceflux}. \begin{proof}[Proof of \lref{bdryflux}] Let $F$ be a $(d-1)$-dimensional sub-face of $\partial \Box_r(x)$ with side lengths $LN^{-1}r$ and $I$ a subinterval of $[t-\gamma r,t+\gamma r]$ of the same length.
First note that \[ \textup{flux}(V,F) = \textup{flux}(V,F \cap R_\tau) + \textup{flux}(V,F \setminus R_\tau),\] and so, applying \lref{faceflux} with radius $\gamma r$, and by the reverse triangle inequality \begin{align*} \left||\int_{I}\textup{flux}(V,F \cap R_\tau) \ d\tau | -|\int_I\textup{flux}(V,F \setminus R_\tau)\ d\tau|\right| &\leq |\int_I\textup{flux}(V,F) \ d \tau | \\ &\leq C(\varepsilon+ML^{-1}) |F\times I|_d. \end{align*} By the relative isoperimetric inequality on $F \times I$ (see \cite[Theorem A.5]{BuragoIvanovNovikov}) \begin{align*} \min\{|(F \times I) \cap R |_d,|(F \times I) \setminus R|_d\} &\leq CLN^{-1}r\textup{Per}(R,F \times I). \end{align*} Therefore \[ |\int_I\textup{flux}(V,F \cap R_\tau)\ d \tau| \leq C(\varepsilon+ML^{-1}) |F\times I|_d+CMLN^{-1}r\textup{Per}(R,F \times I).\] Summing over a partition of $\partial \Box_r(x)\times I$ by sub-faces $F\times I$ \[ |\int_{I}\textup{flux}(V,\partial \Box_r(x) \cap R_\tau )\ d\tau| \leq C(\varepsilon+ML^{-1}) |\partial \Box_r(x) \times I|_d+CMLN^{-1}r\textup{Per}(R,\partial \Box_r \times I).\] \end{proof} \begin{proof}[Proof of \lref{faceflux}] Let $F$ be as in the statement of the Lemma; we can assume that $F = [0,\frac{Lr}{N}]^{d-1} \times \{x_d = 0\}$ and $I = [0,\frac{rL}{N}]$. Then $I \times F$ is contained in a union of $(L+1)^{d}$ of the $N^{d+1}$ space-time cubes of width $r/N$ partitioning $\Box_r \times [t-\frac{r}{2},t+\frac{r}{2}]$. Let $P$ be the union of the spatial projections of these cubes, \[ P = y+[-\tfrac{r}{2N},(L+\tfrac{1}{2})\tfrac{r}{N}]^{d-1}\times[-\tfrac{r}{2N},\tfrac{r}{2N}] \ \hbox{ for some } \ |y|_\infty \leq \tfrac{r}{2N},\] and let $J$ be the union of the temporal projections, \[ J = s + [-\tfrac{r}{2N},(L+\tfrac{1}{2})\tfrac{r}{N}] \ \hbox{ for some } \ |s| \leq \tfrac{r}{2N}.\] By the definition of $r^*_{N,\varepsilon}(t,x)$ \[ \left|\frac{1}{|J \times P|_{d+1}} \int_{J \times P} V(t,x) \cdot e_d \ dtdx\right| \leq \varepsilon.
\] Then, by Fubini and the mean value theorem, there is a face $F' = P \cap \{x_d = h\}$ with $h \in y_d+[-\frac{r}{2N},\frac{r}{2N}]$ such that \[ \left|\frac{1}{|J \times F'|_d} \int_{J \times F'} V(t,x) \cdot e_d \ dtdx\right| \leq \varepsilon.\] Applying the divergence theorem in the region $P \cap \{x_d \in [0,h]\}$ (assume $h>0$, the other case is symmetric) at each fixed time and using $\nabla \cdot V = 0$ \begin{align*} \left|\int_{J \times (P \cap \{x_d = 0\}) } V(t,x) \cdot e_d \ d\mathcal{H}^{d-1}\right| &= \left| \int_{J \times F'}V(t,x) \cdot e_d \ dtd\mathcal{H}^{d-1} + \int_{J \times (\partial P \cap \{0 < x_d < h\})} V(t,x) \cdot n \ dtd\mathcal{H}^{d-1}\right| \\ &\leq \varepsilon |J \times F'|_d + CMh((L+1)r/N)^{d-2}|J| \\ &\leq (\varepsilon+CM/L)|J \times F'|_d. \end{align*} Finally $ |F'|- |F| \leq C\frac{1}{L}|F|$ and $ |J| - |I| \leq C\frac{1}{L}|I|$ and, similarly, \[ \left|\int_{ J \times (P \cap \{x_d = 0\})} V(t,x) \cdot e_d \ dtd\mathcal{H}^{d-1}(x) - \int_{I \times F}V(t,x) \cdot e_d \ dtd\mathcal{H}^{d-1}(x)\right|\leq C\frac{M}{L}|I \times F|_d, \] and, combining, we get the result. \end{proof} \subsection{Volume growth differences} Integrating the ODE \eref{volumegrowth} and using the flux bound \lref{bdryflux}, we have proven that for $r \geq r^*_{N,\varepsilon}(t_0,x_0)$ and $I = [t',t]$ a subinterval of $[t_0,t_0+\gamma r]$ of length at least $Lr/N$ (with $L = \lceil M\varepsilon^{-1}\rceil$, as it will be fixed for the rest of the section) \begin{equation}\label{e.odeperRHS} |\Box_r(x_0) \cap R_t |_d - |\Box_r(x_0) \cap R_{t'}|_d \geq \int_{t'}^{t}|\Box_r \cap \partial R_\tau|_{d-1} \ d\tau -C\varepsilon |I \times \partial \Box_r|_d -CM^2\varepsilon^{-1}N^{-1}r\textup{Per}(R,I \times\partial \Box_r ).
\end{equation} Our goal is the following estimate. \begin{lemma}\label{l.volumegrowthode} Let $1>\varepsilon>0$ and $M^{-1}\wedge\frac{1}{4} > \beta > 0$ be fixed, let $N= \lceil \beta^{-2} \varepsilon^{-3}M^3\rceil$, let $r \geq r^*_{N,\varepsilon}(t_0,x_0)$, and let $I = [t',t]$ be a subinterval of $[t_0,t_0+\gamma r]$ of length $|I| = \beta r$. Then \begin{equation}\label{e.volode} |\Box_r(x_0) \cap R_t|_d - |\Box_r(x_0) \cap R_{t'}|_d \geq \int_{I}|\Box_r \cap \partial R_\tau|_{d-1} \ d \tau -C\varepsilon r^{d-1}|I|_1 . \end{equation} \end{lemma} This is basically achieved by averaging over a small range of $r$ and applying the mean value theorem to find a value of $r$ where the term $\textup{Per}(R,I \times\partial \Box_r )$ is of ``typical'' size. In the proof we will apply the following co-area formula several times: \begin{lemma}[Federer co-area formula]\label{l.coarea} Let $f : \mathbb{R}^n \to \mathbb{R}$ be a Lipschitz function, and $E \subset \mathbb{R}^n$ be an $\mathcal{H}^k$-rectifiable set. Then the function $\lambda \mapsto \mathcal{H}^{k-1}(E \cap f^{-1}(\lambda))$ is Lebesgue measurable, $E \cap f^{-1}(\lambda)$ is $\mathcal{H}^{k-1}$-rectifiable for Lebesgue a.e. $\lambda \in \mathbb{R}$, and \[ \int_{E} |\nabla_{\tan} f(y)| d \mathcal{H}^{k}(y) = \int_{\mathbb{R}} \mathcal{H}^{k-1}(E \cap f^{-1}(\lambda)) \ d\lambda,\] where $\nabla_{\tan} f(y)$ is the component of $\nabla f(y)$ tangent to $E$ at $y$. \end{lemma} \begin{proof}[Proof of \lref{volumegrowthode}] By the co-area formula \lref{coarea} applied to $\partial R$ with $f(t,x) = |x|_\infty$ and using $|\nabla_{\tan} f| \leq 1$, for some $\delta>0$ to be chosen, \begin{equation}\label{e.dtod-1} \mathcal{H}^{d}((I \times \Box_{(1+\delta )r} ) \cap \partial R) \geq \int_{r}^{(1+\delta )r}\textup{Per}(R,I \times \partial \Box_\rho) \ d\rho.
\end{equation} Again from the co-area formula \lref{coarea}, now applied to $\partial R$ with $f(t,x) = t$, \begin{equation}\label{e.coareat} \frac{1}{(1+M^2)^{1/2}}\mathcal{H}^{d}((I \times \Box_{(1+\delta)r}) \cap \partial R ) \leq \int_{I} \mathcal{H}^{d-1}(\Box_{(1+\delta)r} \cap \partial R_\tau) d\tau. \end{equation} In more detail, the normal direction $n = (n_t,n_x)$ to $\partial R$ at $(t,x)$ satisfies \[ |n_x| = \frac{1}{(1+|V(t,x) \cdot \hat{n}_x|^2)^{1/2}} \geq \frac{1}{(1+M^2)^{1/2}}\] and so, using $\nabla_{t,x} f(t,x) = (1,0)$, \[ |\nabla_{\tan} f|(t,x) =(1-|\nabla f(t,x)\cdot n|^2)^{1/2} = (1-|n_t|^2)^{1/2} = |n_x| \geq \frac{1}{(1+M^2)^{1/2}}. \] Plugging this inequality into the co-area formula \lref{coarea} gives \eref{coareat}. Next we estimate $\int_{I} \mathcal{H}^{d-1}(\Box_{(1+\delta)r} \cap \partial R_\tau) \ d\tau$, as in \cite[Lemma 4.2]{BuragoIvanovNovikov}, by integrating \eref{volumegrowth} on $I$, bounding the flux term by $M|\partial \Box_r|_{d-1}$ and the volume difference by the total volume of $\Box_r$: \begin{equation}\label{e.totalperimeter} r^d+CMr^{d-1}|I| \geq \int_{I} \mathcal{H}^{d-1}(\Box_{(1+\delta)r} \cap \partial R_\tau) \ d\tau. \end{equation} Since $|I| \leq M^{-1}r$ the left hand side is bounded above by $C r^d$. Combining the previous inequalities \eref{totalperimeter}, \eref{coareat}, and \eref{dtod-1}, \[ CMr^d \geq \int_{r}^{(1+\delta)r} \textup{Per}(R,I \times \partial \Box_\rho) \ d\rho\] and so, for some $r \leq \rho \leq (1+\delta)r$, \[ \textup{Per}(R,I \times \partial \Box_\rho) \leq CM\delta^{-1}r^{d-1}.\] Plugging this into the difference equation \eref{odeperRHS} \begin{align*} |\Box_\rho \cap R_t |_d- |\Box_\rho \cap R_{t'} |_d&\geq\int_{I}|\Box_\rho \cap \partial R_\tau|_{d-1}\ d\tau -C\varepsilon |\partial \Box_\rho|_{d-1} |I|_1 -CM^2\varepsilon^{-1}N^{-1}r\textup{Per}(R,I \times \partial \Box_\rho )
\\ &\geq \int_{I}|\Box_r \cap \partial R_\tau|_{d-1} \ d\tau -C\varepsilon r^{d-1}|I|_1-CM^3\delta^{-1}\varepsilon^{-1}N^{-1}r^{d}. \end{align*} Then using \[ ||\Box_\rho \cap R_t |_d - |\Box_r \cap R_{t}|_d| \leq |\Box_\rho \setminus \Box_r|_d \leq C\delta r^d\] and choosing $\delta = \beta \varepsilon$ and $N = \beta^{-2}M^3\varepsilon^{-3}$ to match the size of all the error terms \[ |\Box_r \cap R_t |_d- |\Box_r \cap R_{t'} |_d\geq\int_{I}|\Box_r \cap \partial R_\tau |_{d-1} \ d\tau - C\varepsilon r^{d-1}|I|_1.\] Note that with these choices $LN^{-1} =\beta^2\varepsilon^2M^{-2} \leq \beta$, using $\beta \leq 1/4$, $M \geq 1/2$ and $\varepsilon < 1$; thus the application of \eref{odeperRHS} above was justified, since $|I|_1 = \beta r \geq LN^{-1}r$. \end{proof} \subsection{Integrating the volume growth differences}\label{s.integration} This section considers the difference equation \eref{volode} for the growth of $|R_t \cap \Box_r(x_0)|$. All of the necessary analysis was already carried out by \cite{BuragoIvanovNovikov}; we provide a proof anyway, for completeness and to be clear about the choice of the constants $\varepsilon$ and $N$. By \lref{volumegrowthode} and the relative isoperimetric inequality, the difference inequality \begin{equation}\label{e.volode1} |\Box_r(x_0) \cap R_t|_d - |\Box_r(x_0) \cap R_{t'}|_d \geq \lambda_1(d)\int_{t'}^t \min\{ |\Box_r \cap R_\tau|_d,| \Box_r \setminus R_\tau|_d\}^{\frac{d-1}{d}} \ d \tau -C\varepsilon r^{d-1}|I|_1 \end{equation} holds for intervals $I = [t',t]$ of length $|I| = \beta r$, as long as $N = N(\varepsilon,\beta)$ is as in \lref{volumegrowthode} and $r \geq r^*_{N,\varepsilon}(t_0,x_0)$. Here $\lambda_1(d)$ is the relative isoperimetric constant for cubes in $\mathbb{R}^d$. We will compare $|\Box_r(x_0) \cap R_t|$ with $r^d \phi(t/r)$, where $\phi$ solves the ODE \begin{equation}\label{e.phiode} \phi'(t) = \frac{1}{2}\lambda_1 \min\{\phi(t),1-\phi(t)\}^{\frac{d-1}{d}} \ \hbox{ with } \ \phi(0) = 0.
\end{equation} The ODE \eref{phiode} comes, after rescaling, from \eref{volode1} if the error term is ignored and $t-t'$ is taken arbitrarily small. The factor of $\frac{1}{2}$ in front is used to absorb the error terms; any constant smaller than $1$ could be used, which would only affect the choice of the parameters $\varepsilon$ and $N$. Of course \eref{phiode} does not have uniqueness, so we specify the solution we are interested in \[ \phi(t) = \begin{cases} at^d &\hbox{$0 \leq t \leq b$}\\ 1- a(2b-t)^d &\hbox{$b \leq t \leq 2b$} \end{cases} \] with $a = (\frac{\lambda_1}{2d})^d$ and $b = (\frac{1}{2a})^{1/d} = \frac{d}{\lambda_1}$ (indeed $\phi'(t) = dat^{d-1}$ matches $\frac{1}{2}\lambda_1 (at^d)^{\frac{d-1}{d}}$ exactly when $a^{1/d} = \frac{\lambda_1}{2d}$). Note that $|\phi'(t)| \leq d a b^{d-1}$, which is just another dimensional constant. \begin{lemma} Suppose that $r \geq r^*_{N,\varepsilon}(t_0,x_0)$ with $N = C(d)M^{5d+2}$ and $\varepsilon = c(d)M^{-(d-1)}$ for appropriately large/small dimensional constants. Let $t_1 < t_2$ be, respectively, the first time that $ |\Box_r \cap R_t|_d \geq \alpha |\Box_r|_d$ and the first time that $ |\Box_r \cap R_t|_d \geq (1-2^{-d}\alpha) |\Box_r|_d$. Then \[ |\Box_r \cap R_t|_d \geq r^d \phi((t-t_1)/r) \ \hbox{ for } \ t_1 \leq t \leq t_2,\] in particular $t_2$ exists and $t_2 \leq t_1 + \frac{2d}{\lambda_1(d)} r$. \end{lemma} Now we can also specify that $\gamma = 1 + \frac{2d}{\lambda_1(d)}$, since $t_1 \leq t_0+\frac{r}{2M} \leq t_0 + r$, and with the above Lemma $t_2 \leq t_1 + \frac{2d}{\lambda_1(d)}r$. \begin{proof} Fix $1>\varepsilon>0$ and $\frac{1}{M} \wedge \frac{1}{4} > \beta > 0$ to be made precise in the course of the proof. Let $N = \beta^{-2}M^3\varepsilon^{-3}$ and $r \geq r^*_{N,\varepsilon}(t_0,x_0)$, so that \lref{volumegrowthode}, and hence \eref{volode1}, hold on intervals $I$ of length $|I| = \beta r$.
Let $\psi(t) = r^d\phi((t-t_1')/r)$, started at a time $t_0 \leq t_1' \leq t_1$ chosen so that $\psi(t_1) = \frac{\alpha}{2}|\Box_r|_d$; precisely, \[ t_0 \leq t_1' = t_1 - \frac{d\alpha^{1/d}}{2\lambda_1}r.\] Then $\psi(t_1) <\alpha| \Box_r| \leq | \Box_r \cap R_{t_1}|$. Let $t_1 < t_* \leq t_2$ be the first time that the inequality $\psi(t) < |\Box_r \cap R_t|_d$ fails (if such a time exists). Note that equality does hold at $t_*$ by \lref{lowercont}. First we get a lower bound on $t_* - t_1$. Note that, from the lower continuity estimate for $|\Box_r \cap R_t|_d$ \lref{lowercont} and the upper bound on the derivative of $\phi$, and hence of $\psi$, \[ - M r^{d-1}(t_*-t_1)\leq |\Box_r \cap R_{t_*}|_d - |\Box_r \cap R_{t_1}|_d \leq \psi(t_*) - \alpha|\Box_r|_d \leq C r^{d-1}(t_*-t_1) - \tfrac{\alpha}{2}|\Box_r|_d. \] Rearranging this, \[ (t_* - t_1) \geq cM^{-1}\alpha r.\] We will apply \eref{volode1} on an interval of length $\beta r$ with \begin{equation}\label{e.betaconstraint} \beta \leq cM^{-1}\alpha, \end{equation} to be specified more precisely below (actually matching this upper bound, up to a dimensional constant, will be the right choice). With this choice the ordering $|\Box_r \cap R_\tau| \geq \psi(\tau)$ holds on the interval $t_* - \beta r \leq \tau \leq t_*$. Note that for all $t_* - \beta r \leq \tau \leq t_*$ \begin{equation}\label{e.complementbd} r^d - \psi(\tau) - C\beta r^{d} \leq r^d - \psi(t_*) \leq | \Box_r \setminus R_{t_*}|_d \leq | \Box_r \setminus R_{\tau}|_d +M\beta r^d, \end{equation} again from \lref{lowercont} and the bound on $\psi'$.
Also note that on $t_* - \beta r \leq \tau \leq t_*$ \begin{equation}\label{e.psi'} \psi'(\tau) \geq \frac{1}{2}\lambda_1(d)(2^{-d}\alpha)^{\frac{d-1}{d}}r^{d-1} \end{equation} since $t_* - \beta r \geq t_1$ by \eref{betaconstraint}, so $\psi(\tau) \geq \frac{\alpha}{2}|\Box_r|_d$, and, by definition, $t_* \leq t_2$ so \[ \psi(\tau) \leq \psi(t_*) = |\Box_r \cap R_{t_*}|_d \leq (1-2^{-d}\alpha)|\Box_r|_d.\] Now finally we have all the necessary setup to compute the ``slope'' at the touching time \begin{align*} \psi(t_*) - \psi(t_* - \beta r) &\geq |\Box_r(x_0) \cap R_{t_*}|_d - |\Box_r(x_0) \cap R_{t_*-\beta r}|_d \\ \hbox{\small{by \eref{volode1}, \eref{betaconstraint}}} \ \ &\geq \lambda_1(d)\int_{t_*-\beta r}^{t_*} \min\{ |\Box_r \cap R_\tau|_d,| \Box_r \setminus R_\tau|_d\}^{\frac{d-1}{d}} \ d \tau -C\varepsilon r^{d-1}|I| \\ \hbox{\small{by \eref{complementbd}}} \ \ &\geq \lambda_1(d)\int_{t_*-\beta r}^{t_*} \min\{ \psi(\tau) ,r^d - \psi(\tau) - CM\beta r^d\}^{\frac{d-1}{d}} \ d \tau - C \varepsilon r^{d-1}|I| \\ \hbox{\small{by subadditivity}} \ \ &\geq \int_{t_*-\beta r}^{t_*}\lambda_1(d)\min\{ \psi(\tau) ,r^d - \psi(\tau) \}^{\frac{d-1}{d}}-C [M^{\frac{d-1}{d}}\beta^{\frac{d-1}{d}}+\varepsilon] r^{d-1}\ d \tau \\ \hbox{\small{by \eref{psi'}}} \ \ &\geq \int_{t_*-\beta r}^{t_*} \psi'(\tau)+ [2^{-d}\lambda_1(d)\alpha^{\frac{d-1}{d}} -C(M^{\frac{d-1}{d}}\beta^{\frac{d-1}{d}}+\varepsilon)] r^{d-1}\ d \tau \\ & > \int_{t_*-\beta r}^{t_*} \psi'(\tau) d\tau, \end{align*} which is a contradiction as long as we have fixed \[ \beta = c(d)M^{-1}\alpha \ \hbox{ and } \ \varepsilon = c(d)\alpha^{\frac{d-1}{d}}\] so that the term in square brackets in the second to last line is strictly positive. Recalling the definition of $\alpha$ from \eref{alphadef} we see that, for $M \geq 1$, $\beta =c(d) M^{-(d+1)}$ and $\varepsilon = c(d) M^{-(d-1)}$. This also finally specifies $N = C(d)M^{5d+2}$.
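For the reader's convenience we record the elementary exponent arithmetic behind this last specification, with all dimensional constants suppressed: since $\alpha \sim M^{-d}$, \[ \varepsilon \sim \alpha^{\frac{d-1}{d}} \sim M^{-(d-1)}, \qquad \beta \sim M^{-1}\alpha \sim M^{-(d+1)}, \] and therefore \[ N = \beta^{-2}M^3\varepsilon^{-3} \sim M^{2(d+1)}\, M^{3}\, M^{3(d-1)} = M^{5d+2}. \]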
Also note that the constraint \eref{betaconstraint} is satisfied, up to good choices of the dimensional constants $c(d)$. \end{proof} \section{Bounds and mixing estimates}\label{s.mixing} Given a mixing condition on $V$ it is straightforward to apply known results and derive from \tref{main} tail bounds and mixing estimates on $T$. We will work with the notion of $\alpha$-mixing, since that is a standard notion which is more general than finite range dependence. For each Borel set $U \subset \mathbb{R}_t \times \mathbb{R}^d_x$ define the cylinder $\sigma$-algebras generated by $V$ \[ \mathcal{F}(U) = \sigma(V(t,x): (t,x) \in U).\] For a pair of $\sigma$-algebras $\mathcal{F}_1$ and $\mathcal{F}_2$ the $\alpha$-mixing coefficient is defined by \[ \alpha(\mathcal{F}_1,\mathcal{F}_2) = \sup_{A \in \mathcal{F}_1 ,B \in \mathcal{F}_2} |\P(A \cap B) - \P(A)\P(B)|. \] Say that $V$ is $\alpha$-mixing in $(t,x)$ if for all diameters $D>0$ the coefficients (abusing notation) \[ \alpha(r,D) = \sup\{ \alpha( \mathcal{F}(U),\mathcal{F}(U')): \ U, U' \ \hbox{ Borel sets with $d(U,U') \geq r$ and diameter $\leq D$} \}\] satisfy $\lim_{r \to \infty} \alpha(r,D) = 0$. We make the assumption of stretched exponential decay of the $\alpha$-mixing coefficients: for some exponent $\beta>0$, length scale $\ell>0$, and parameters $A,B>0$ we assume \[ \alpha(r,D) \leq A(1+D)^B\exp(- \ell^{-\beta}r^\beta),\] and say that $V$ has $\alpha$-mixing with stretched exponential decay with exponent $\beta$ and length scale $\ell$. We take $\ell = 1$, since the general case can be derived by rescaling. The constants $c,C$ which appear in the remainder of the section will depend on $A,B$ as well as $d$, and we will not keep track of this dependence any further.
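We will also use, implicitly below, the classical covariance inequality for $\alpha$-mixing; we record it here for convenience (this is a standard fact and not specific to the present setting): if $X$ is $\mathcal{F}_1$-measurable and $Y$ is $\mathcal{F}_2$-measurable with $|X|, |Y| \leq 1$, then \[ |\mathbb{E}[XY] - \mathbb{E}[X]\mathbb{E}[Y]| \leq 4\,\alpha(\mathcal{F}_1,\mathcal{F}_2). \]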
In this case a concentration estimate holds for the spatial averages: from \cite[Proposition 1.9]{DuerinckxGloria}, for any $0 < \varepsilon < 1/2$ \begin{equation}\label{e.concentration} \P\left( \left|\frac{1}{|Q_r|}\int_{Q_r(t_0,x_0)} V(t,x) \ dt dx\right| \geq \varepsilon\right) \leq C \exp(-c \varepsilon^2|\log \varepsilon|^{-\beta'}r^{\beta'}) \end{equation} with $\beta' = \frac{(d+1)\beta}{d+1+\beta} < \beta$. Of course the constants in the concentration estimate depend on the constants in the $\alpha$-mixing assumption, but we will not keep track of this dependence. We now aim to use this estimate to bound the tails of $T(t_0,x_0)$. By stationarity we can work with $T = T(0,0)$. Recall that, up to the constant factor $C$ from \tref{main}, \[ T = r^*_{N,\varepsilon} = \sup \{ r : E_N[V;Q_r] \geq \varepsilon\}\] with $\varepsilon = c(d)M^{-(d-1)}$ and $N = \lceil C(d)M^{5d+2} \rceil$ and $E_N$ defined in \eref{en}. Applying \eref{concentration} with a union bound gives \[ \P(E_N[V;Q_r] \geq \varepsilon) \leq CM^{(5d+2)(d+1)} \exp(-c M^{-2(d-1)}|\log M|^{-\beta'}r^{\beta'}). \] We want to control $T$ by a union bound, so we need to discretize in $r$. The discretization error is \[ |E_N[V;Q_{\lambda r}] - E_N[V;Q_{r}]| \leq CM^{(5d+2)(d+1)}(\lambda-1)^d \leq \varepsilon/2\] if we choose $\lambda = 1+cM^{-\frac{(5d+2)(d+1)}{d}-2}$. Thus \begin{align*} \P(T \geq \tau ) &\leq \sum_{\lambda^k \geq \tau} \P(E_N[V;Q_{\lambda^kr_0}] \geq \varepsilon/2) \\ &\leq \sum_{\lambda^k \geq \tau} CM^{(5d+2)(d+1)} \exp(-c M^{-2(d-1)}|\log M|^{-\beta'}\lambda^{k\beta'}) \\ & \leq C \exp(-c M^{-2(d-1)}|\log M|^{-C}\tau^{\beta'}), \end{align*} where finally we absorbed all the polynomial powers of $M$ in front by changing the power of the logarithm inside the exponential. This proves the desired tail bounds on $T$. Next we consider the mixing estimate on $T$.
Define the localization \[ r_*^R(t,x) = \sup\{ 0 < r \leq R: E_N[V,Q_r(t,x)] \geq \varepsilon\}\] and, for a bounded Borel set $U \subset \mathbb{R}_t \times \mathbb{R}^d_x$, \[ r_*(U) = \sup_{(t,x) \in U} r_*(t,x).\] Note that \[ r^R_*(t,x) \in \mathcal{F}(U+Q_R) \ \hbox{ for all } \ (t,x) \in U.\] On the event $\{r_*(U) < R\}$ the localizations $r_*^R(t,x)$ and the actual values of $r_*(t,x)$ agree on $U$. More precisely, for all $(t,x) \in U$, \[ r^R_*(t,x) {\bf 1}_{\{r_*(U) < R\}} = r_*(t,x) {\bf 1}_{\{r_*(U) < R\}}. \] This event has high probability since, by a standard discretization, a union bound and the tail bounds established above for $r_*(t,x)$, \[\P(r_*(U) \geq R ) \leq C(1+\textup{diam}(U))^{d+1} \exp(-c M^{-2(d-1)}|\log M|^{-C}R^{\beta'}).\] Let $U$ and $U'$ be Borel sets with diameter at most $D$ and set $R = \frac{1}{3\sqrt{d+1}}d(U,U')$. Then $U+Q_R$ and $U'+Q_R$ are at distance at least $\frac{1}{3}d(U,U')$ apart and, by the mixing condition on $V$, \begin{align*} \alpha(\mathcal{F}(U+Q_R),\mathcal{F}(U'+Q_R)) &\leq \alpha(R,D) \\ &\leq C(1+D+2R)^C \exp(-c R^{\beta}) \\ &\leq C(1+D)^C \exp(-cR^{\beta'}) \end{align*} where the constants $c,C$ may have changed in the last inequality and we used $\beta' < \beta$ to absorb the $C|\log R|$ term in the exponential. Let $X \in \sigma(r_*|_{U})$ and $Y \in \sigma(r_*|_{U'})$ be random variables of the form (abusing notation) $X = X(r_*(t_1,x_1),\dots,r_*(t_n,x_n))$ for some $X: \mathbb{R}^n \to \mathbb{R}$ with $|X| \leq 1$ and points $(t_j,x_j) \in U$, and $Y = Y(r_*(s_1,y_1),\dots,r_*(s_m,y_m))$ for some $Y: \mathbb{R}^m \to \mathbb{R}$ with $|Y| \leq 1$ and points $(s_j,y_j) \in U'$. Let $X^R$ and $Y^R$ be the same functions applied to the localized $r_*^R$ at the same points. Then $X$ and $X^R$ agree on $\{r_*(U) < R\}$, and $Y$ and $Y^R$ agree on $\{r_*(U') < R\}$.
\begin{align*} |\mathbb{E}[XY] -\mathbb{E}[ X]\mathbb{E}[Y]| &\leq |\mathbb{E}[ X^RY^R]-\mathbb{E}[ X^R]\mathbb{E}[Y^R]|+|\mathbb{E}[ XY] - \mathbb{E}[ X^RY^R]|\\ &\quad \quad \quad + | \mathbb{E}[ X^R]\mathbb{E}[Y^R]-\mathbb{E}[X]\mathbb{E}[Y]| \\ &\leq \alpha(R,D)+4\P(r_*(U)>R) + 4\P(r_*(U')>R)\\ &\leq C(1+D)^C \exp(-c M^{-2(d-1)}|\log M|^{-C}d(U,U')^{\beta'}) \end{align*} This establishes the $\alpha$-mixing rate for the field $T = r_*$. \bibliographystyle{plain}
\section{Introduction} The {\em Virasoro algebra} $\mathcal{L}$ is an infinite-dimensional complex Lie algebra with the basis $\{L_m,C \mid m\in \mathbb{Z}\}$ and the Lie bracket defined as follows: \begin{equation*} [L_m,L_n]= (n-m)L_{m+n}+\delta_{m+n,0}\frac{m^{3}-m}{12}C\ {\rm and}\ [L_m,C]=0\ {\rm for}\ m,n\in\mathbb{Z}, \end{equation*} which is a one-dimensional central extension of the Witt algebra. It is well known that $\mathcal{L}$ is a very important infinite-dimensional Lie algebra both in mathematics and in mathematical physics (see, e.g., \cite{IK,LL,KR}). The theory of weight modules over $\mathcal{L}$ has been well developed (see \cite{IK}). One of the most important weight modules is the highest weight module, which depends on the triangular decomposition of $\mathcal{L}$. In fact, any irreducible weight module over the Virasoro algebra with a nonzero finite-dimensional weight space is a Harish-Chandra module (see \cite{MZ1}), i.e., a module whose weight subspaces are all finite-dimensional. The classification of irreducible Harish-Chandra modules over $\mathcal{L}$ has been achieved (see \cite{M,S,MP}). Afterwards, several families of irreducible weight modules with infinite-dimensional weight spaces were also investigated (see, e.g., \cite{CM,LZ0,LLZ}). The first such modules were constructed by taking the tensor product of some highest weight modules and some intermediate series modules (see \cite{Z}), whose irreducibility was completely determined in \cite{CGZ}. Other work has focused on non-weight $\mathcal{L}$-modules, which have attracted much attention in the past few years, such as Whittaker modules (see, e.g., \cite{OW,LGZ,MW,MZ}), $\mathbb{C}[L_0]$-free modules, irreducible modules from Weyl modules and a class of non-weight modules including highest-weight-like modules (see, e.g., \cite{LZ,TZ,TZ1,LZ2,GLZ,CG}). In the present paper, we shall study non-weight $\mathcal{L}$-modules.
More precisely, we are going to construct a family of new irreducible $\mathcal{L}$-modules from the tensor products of a finite number of $\mathcal{L}$-modules $\Omega(\lambda_i,\alpha_i)=\mathbb{C}[\partial_i]$ (see \cite{LZ2}) and the $\mathcal{L}$-module $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)=V\otimes\mathbb{C}[\partial_0]$ (see \cite{LZ}). We notice that most of the tensor product modules over the Virasoro algebra defined in \cite{H,CGZ,LGW,Z} are related to the locally finite modules $\mathrm{Ind}(M)$ (see \cite{MZ}). However, the class of tensor product modules considered in this paper does not include locally finite modules. Here follows a brief summary of this paper. In Section $2$, we recall some known results for later use. In Section $3$, the irreducibility of the modules $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ is determined. Section 4 is devoted to giving the necessary and sufficient conditions for two irreducible $\mathcal{L}$-modules $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ and $\mathcal{M}\big(W,\Omega(\mu_0,\beta_0)\big)\otimes\bigotimes_{j=1}^n\Omega(\mu_j,\beta_j)$ to be isomorphic. In Section 5, we compare the irreducible $\mathcal{L}$-modules constructed in the present paper with the known irreducible non-weight $\mathcal{L}$-modules and show that all irreducible $\mathcal{L}$-modules $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ (with $V$ infinite-dimensional and $m\neq1$) are new. Throughout this paper, we denote by $\mathbb{C},\mathbb{C}^*,\mathbb{Z}$, $\mathbb{Z}_+$ and $\mathbb{N}$ the sets of complex numbers, nonzero complex numbers, integers, nonnegative integers and positive integers, respectively. All vector spaces are assumed to be over $\mathbb{C}$.
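Before proceeding, we record a worked instance of the bracket defined above (our illustration; note the $(n-m)$ sign convention used in this paper):

```latex
% Worked instance of the bracket (our illustration, using this paper's $(n-m)$ convention).
\[ [L_2,L_{-2}] = (-2-2)L_{0} + \frac{2^{3}-2}{12}C = -4L_0 + \frac{1}{2}C, \qquad
   [L_1,L_2] = (2-1)L_{3} = L_3, \]
% the central term $C$ contributing only when $m+n=0$.
```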
\section{Preliminaries} In this section, we first recall two classes of non-weight Virasoro modules $\Omega(\lambda,\alpha)$ and $\mathcal{M}\big(V,\Omega(\lambda,\alpha)\big)$ defined in \cite{LZ2} and \cite{LZ} respectively. Note that both of them are $\mathbb{C}[L_0]$-free modules. For $\lambda\in\mathbb{C}^*,\alpha\in\mathbb{C}$, the non-weight $\mathcal{L}$-module $\Omega(\lambda,\alpha)=\mathbb{C}[\partial]$ is defined by $$L_mf(\partial)=\lambda^m(\partial-m\alpha)f(\partial-m)\ {\rm and}\ Cf(\partial)=0\quad {\rm for}\ m\in\mathbb{Z}, f(\partial)\in\mathbb{C}[\partial].$$ It is worth pointing out that $\Omega(\lambda,\alpha)$ is irreducible if and only if $\alpha \neq0$ (see \cite{LZ2}). For $r\in\mathbb{Z}_+$, denote by $\mathcal{L}_{r}$ the ideal of $\mathcal{L}_{+}=\mathrm{span}_{\mathbb{C}}\{L_i\mid i\geq0\}$ generated by $L_i$ for all $i>r$. Let $\bar \mathcal{L}_{r}$ be the quotient algebra $\mathcal{L}_{+}/ \mathcal{L}_{r}$ and $\bar L_i$ the image of $L_i$ in $\bar \mathcal{L}_{r}$. Assume that $V$ is an $\bar \mathcal{L}_{r}$-module. For any $\lambda\in\mathbb{C}^*, \alpha\in\mathbb{C}$, define an $\mathcal{L}$-action on the vector space $\mathcal{M}\big(V,\Omega(\lambda,\alpha)\big)=V\otimes \mathbb{C} [\partial]$ as follows: \begin{eqnarray*} && L_m\big(v\otimes f(\partial)\big)=v\otimes\lambda^m(\partial-m\alpha)f(\partial-m)+\sum_{i=0}^r\Big(\frac{m^{i+1}}{(i+1)!}\bar L_i\Big)v\otimes \lambda^mf(\partial-m),\\ && C\big(v\otimes f(\partial)\big)=0\quad {\rm for}\ m\in\mathbb{Z},v\in V, f(\partial)\in\mathbb{C}[\partial]. \end{eqnarray*} \begin{remark}\label{rem2.1} Let $r\in\mathbb{Z}_+$ and $V$ be an irreducible $\bar\mathcal{L}_{r}$-module. \begin{itemize}\lineskip0pt\parskip-1pt \item[\rm(1)] $V$ must be infinite-dimensional if ${\rm dim} V> 1,$ since any irreducible finite-dimensional module over the solvable Lie algebra $\bar\mathcal{L}_{r}$ is one-dimensional by Lie's Theorem.
\item[\rm(2)] If $r=0,$ then $V$ must be one-dimensional, i.e., $V=\mathbb{C} v$ with $\bar L_iv=0$ for any $i\in\mathbb{N}$ and $\bar L_0v =\beta v$ for some $\beta\in\mathbb{C}$. In this case $V$ is denoted by $V_{\beta}$. We know that $\mathcal{M}\big(V,\Omega(\lambda,\alpha)\big)$ is reducible if and only if $V\cong V_\alpha$ $\mathrm{(}$see {\rm \cite{LZ}}$\mathrm{)}$. \end{itemize} \end{remark} The following result will be used in the sequel. \begin{prop}{\rm \cite{LZ0}}\label{pro2.3} Let $P$ be a vector space over $\mathbb{C}$ and $P_1$ a subspace of $P$. Suppose that $\mu_1,\mu_2,\ldots,\mu_s\in\mathbb{C}^*$ are pairwise distinct, $v_{i,j}\in P$ and $f_{i,j}(t)\in\mathbb{C}[t]$ with $\mathrm{deg}\,f_{i,j}(t)=j$ for $i=1,2,\ldots,s$, $j=0,1,2,\ldots,k.$ If $$\sum_{i=1}^{s}\sum_{j=0}^{k}\mu_i^mf_{i,j}(m)v_{i,j}\in P_1\quad{\it for\ all}\ m\in\mathbb{Z},$$ then $v_{i,j}\in P_1$ for all $i,j$ $($the result also holds for $m>K,K\in\mathbb{Z}\cup\{-\infty\}$$)$. \end{prop} Now we introduce some notation. Let $m\in\mathbb{N},\lambda_i\in\mathbb{C}^*,\alpha_i\in\mathbb{C}$ for $i=1,\ldots,m$. Denote $\Omega(\lambda_i,\alpha_i)=\mathbb{C}[\partial_i]$. It will be convenient to identify the module $\bigotimes_{i=0}^m\Omega(\lambda_i,\alpha_i)$ with the polynomial algebra $\mathbb{C}[\partial_0,\ldots,\partial_m]$. Then we write $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)=V\otimes\mathbb{C}[\partial_0,\partial_1,\partial_2,\ldots,\partial_m]$ for $m\in\mathbb{N}$. \begin{remark}\label{re2.1} Let $r\in\mathbb{Z}_+$ and $V$ be an irreducible $\bar\mathcal{L}_{r}$-module.
\begin{itemize}\lineskip0pt\parskip-1pt \item[\rm(1)] It is clear that $$\mathcal{M}\big(V_\beta,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\cong \bigotimes_{i=0}^m\Omega(\lambda_i,\alpha_i-\delta_{i,0}\beta)$$ for any $\lambda_i\in\mathbb{C}^*,\alpha_i\in\mathbb{C},i=0,1,\ldots,m$, $m\in\mathbb{N}$ $\mathrm{(}$see {\rm \cite{TZ1}}$\mathrm{)}$. \item[\rm (2)] By Remark \ref{rem2.1} and $\mathrm{(}1\mathrm{)}$, we only need to consider the case $r\geq1$ in the sequel. Then we may assume that $\bar L_r$ is injective on $V,$ since otherwise $\bar L_r V=0$ by Lemma 2 in {\rm \cite{LLZ}} and $V$ reduces to an $\bar \mathcal{L}_{r-1}$-module. \end{itemize} \end{remark} \section{Irreducibilities} In this section, we will investigate the irreducibility of the $\mathcal{L}$-modules $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$. Consider the tensor product $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$. Define a total order $``\prec"$ on the set $$\{v\otimes\partial_0^{p_0}\partial_1^{p_1}\cdots\partial_m^{p_m}\mid\mathbf{p}=(p_0,p_1,\ldots,p_m)\in\mathbb{Z}_+^{m+1},m\in\mathbb{N},0\neq v\in V\}$$ by \begin{eqnarray*}&&u\otimes\partial_0^{p_0}\partial_1^{p_1}\cdots\partial_m^{p_m}\prec v\otimes\partial_0^{q_0}\partial_1^{q_1}\cdots\partial_m^{q_m}\\ &\Longleftrightarrow& \exists k\in\mathbb{Z}_+\ \mathrm{such\ that} \ p_k<q_k\ \mathrm{and} \ p_n=q_n \ \mathrm{for} \ n<k.\end{eqnarray*} Then each $0\neq w\in\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ can be uniquely written in the form $$w=\sum_{\mathbf{p}\in I} v_{\mathbf{p}}\otimes\partial_0^{p_0}\partial_1^{p_1}\cdots\partial_m^{p_m},$$ where $I$ is a finite subset of $\mathbb{Z}_+^{m+1}$ and all $v_{\mathbf{p}}$ are nonzero vectors of $V$.
Now we define the degree of $w$ to be $\mathbf{p}=(p_0,p_1,\ldots,p_m)$, where $v_{\mathbf{p}}\otimes\partial_0^{p_0}\partial_1^{p_1}\cdots\partial_m^{p_m}$ is the term with the maximal order in the sum, and denote it by $\mathrm{deg}(w)$. Notice that $\mathrm{deg}(v\otimes1)=\mathbf{0}=(\underbrace{0,0,\ldots,0}_{m+1})$. \begin{lemm}{\rm \cite{TZ1}}\label{lemm3.1} Let $m\in\mathbb{N},\lambda_i,\alpha_i\in\mathbb{C}^*$ for $i=0,1,\ldots,m$. Then $\bigotimes_{i=0}^m\Omega(\lambda_i,\alpha_i)$ is irreducible if and only if $\lambda_0,\ldots,\lambda_m$ are pairwise distinct. \end{lemm} In order to prove the main theorem of this section, we first give the following lemma. \begin{lemm}\label{lemm1} Let $m\in\mathbb{N}, \alpha_0\in\mathbb{C},\lambda_0,\lambda_i,\alpha_i\in\mathbb{C}^*$ for $i=1,2,\ldots,m$ with $\lambda_0,\ldots,\lambda_m$ pairwise distinct and $v$ be a nonzero vector in an infinite-dimensional irreducible $\bar \mathcal{L}_{r}$-module $V$. Then $v\otimes1$ generates the module $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i),$ where $1=\underbrace{1\otimes1\otimes\cdots\otimes1}_{m+1}.$ \end{lemm} \begin{proof} As we mentioned in Section $2$, we identify $\bigotimes_{i=0}^m\Omega(\lambda_i,\alpha_i)$ with $\mathbb{C}[\partial_0,\partial_1,\ldots,\partial_m]$. Denote $M=V\otimes\mathbb{C}[\partial_0,\partial_1,\ldots,\partial_m]$. Let $W$ be the submodule of $M$ generated by $v\otimes1$. Note that \begin{eqnarray}\label{4.1} \nonumber L_k\big(v\otimes1\big) &=& \Big(v\otimes\lambda_0^k(\partial_0-k\alpha_0)+\sum_{i=0}^{r}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)v\otimes \lambda_0^k\Big)\otimes1\otimes\cdots\otimes1+\\ &&v\otimes1\otimes\big(\sum_{i=1}^m1\otimes\cdots\otimes\lambda_i^k(\partial_i-k\alpha_i)\otimes\cdots\otimes1\big)\in W\ {\rm for}\ k\in\mathbb{Z}. \end{eqnarray} By Proposition \ref{pro2.3}, $v\otimes\partial_i\in W$ for any $0\le i\le m$. 
Replacing $v\otimes 1$ by $v\otimes\partial_i\in W$ in \eqref{4.1}, one obtains $v\otimes \partial_i\partial_j\in W$ for $0\le i,j\le m$. Inductively, $v\otimes\partial_0^{p_0}\cdots\partial_m^{p_m}\in W$ for all $p_i\in\mathbb{Z}_+,i=0,\ldots,m$. Then it follows from \begin{eqnarray*} &&W\ni L_k\big(v\otimes (\partial_0+k)^{p_0}\partial_1^{p_1}\cdots\partial_m^{p_m}\big)\\ &=& \Big(v\otimes\lambda_0^k(\partial_0-k\alpha_0)\partial_0^{p_0}+\sum_{i=0}^{r}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)v\otimes \lambda_0^k\partial_0^{p_0}\Big)\partial_1^{p_1}\cdots\partial_m^{p_m}+\\ &&v\otimes(\partial_0+k)^{p_0}\otimes\big(\sum_{i=1}^m\partial_1^{p_1}\cdots\lambda_i^k(\partial_i-k\alpha_i) (\partial_i-k)^{p_i}\cdots\partial_m^{p_m}\big) \end{eqnarray*} and Proposition \ref{pro2.3} that $\bar L_iv\otimes \partial_0^{p_0}\partial_1^{p_1}\cdots\partial_m^{p_m}\in W$ for all $i=0,1,\ldots,r$. Now the irreducibility of $V$ gives $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\subseteq W$. Thus, $M=W$. \end{proof} \begin{lemm}\label{lemm2} Let $r,s\in\mathbb{Z}_+,\alpha_0\in\mathbb{C},\lambda,\alpha_1\in\mathbb{C}^*$ and $V$ be an infinite-dimensional irreducible $\bar \mathcal{L}_{r}$-module. Denote by $W_s$ the vector subspace of $\Omega(\lambda,\alpha_0)\otimes \Omega(\lambda,\alpha_1)$ spanned by $\{f(\partial_0)(\partial_0+\partial_1)^n\mid n\in\mathbb{Z}_+,0\leq\mathrm{deg}(f)\leq s\}$ or $\{(\partial_0+\partial_1)^nf(\partial_1)\mid n\in\mathbb{Z}_+,0\leq\mathrm{deg}(f)\leq s\}$. Then $V\otimes W_s$ is a submodule of $\mathcal{M}\big(V,\Omega(\lambda,\alpha_0)\big)\otimes\Omega(\lambda,\alpha_1)$.
\end{lemm} \begin{proof} Without loss of generality, we may assume $\lambda=1.$ For any $0\neq v\in V,n\in\mathbb{Z}_+,f(\partial_0)\in W_s,k\in\mathbb{Z}$, we have \begin{eqnarray*} &&L_k\big(v\otimes f(\partial_0)(\partial_0+\partial_1)^n\big) \\ &=&L_k\big(v\otimes\sum_{i=0}^n\binom{n}{i}f(\partial_0)\partial_0^{i}\partial_1^{n-i}\big) \\&=&v\otimes\sum_{i=0}^n\binom{n}{i}(\partial_0-k\alpha_0)f(\partial_0-k)(\partial_0-k)^{i}\partial_1^{n-i} +\\&& \sum_{j=0}^{r}\big(\frac{k^{j+1}}{(j+1)!}\bar L_j\big)v\otimes\sum_{i=0}^n\binom{n}{i}f(\partial_0-k)(\partial_0-k)^{i}\partial_1^{n-i} +\\&& v\otimes\sum_{i=0}^n\binom{n}{i}f(\partial_0)\partial_0^{i}(\partial_1-k\alpha_1)(\partial_1-k)^{n-i} \end{eqnarray*} \begin{eqnarray*} \\&=&v\otimes\Big((\partial_0-k\alpha_0)f(\partial_0-k)(\partial_0+\partial_1-k)^n+f(\partial_0)(\partial_1-k\alpha_1)(\partial_0+\partial_1-k)^n\Big)+ \\&&\sum_{j=0}^{r}\big(\frac{k^{j+1}}{(j+1)!}\bar L_j\big)v\otimes f(\partial_0-k) (\partial_0+\partial_1-k)^n \\&=&v\otimes\Big(\partial_0\big(f(\partial_0-k)-f(\partial_0)\big)-k\alpha_0 f(\partial_0-k)\Big)(\partial_0+\partial_1-k)^n+ \\&&v\otimes\Big(f(\partial_0)(\partial_0+\partial_1-k)^{n+1}+k(1-\alpha_1)f(\partial_0)(\partial_0+\partial_1-k)^n\Big)+ \\&&\sum_{i=0}^{r}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)v\otimes f(\partial_0-k) (\partial_0+\partial_1-k)^n\in V\otimes W_s, \end{eqnarray*} by noting that $(\partial_0+\partial_1-k)^n\in W_s.$ By a similar calculation, we obtain $L_k\big(v\otimes(\partial_0+\partial_1)^n f(\partial_1)\big)\in V\otimes W_s.$ Thus, $V\otimes W_s$ is a submodule of $\mathcal{M}\big(V,\Omega(\lambda,\alpha_0)\big)\otimes\Omega(\lambda,\alpha_1)$, completing the proof. \end{proof} \begin{coro}\label{coro3} Let $r,s\in\mathbb{Z}_+,\alpha_0\in\mathbb{C},\lambda_0,\alpha_1\in\mathbb{C}^*$ and $V$ be an infinite-dimensional irreducible $\bar \mathcal{L}_{r}$-module.
Assume that $V\otimes W_s$ is the subspace of $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\Omega(\lambda_0,\alpha_1)$, where $W_s$ is spanned by $\{f(\partial_0)(\partial_0+\partial_1)^n\mid n\in\mathbb{Z}_+,0\leq\mathrm{deg}(f)\leq s\}$ or $\{(\partial_0+\partial_1)^nf(\partial_1)\mid n\in\mathbb{Z}_+,0\leq\mathrm{deg}(f)\leq s\}$. Then $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\Omega(\lambda_0,\alpha_1)$ has a chain of $\mathcal{L}$-submodules $$V\otimes W_0\subset V\otimes W_1\subset\cdots\subset V\otimes W_s\subset\cdots$$ such that $V\otimes W_s/V\otimes W_{s-1}\cong \mathcal{M}\big(V,\Omega(\lambda_0,s+\alpha_0+\alpha_1)\big)$ as $\mathcal{L}$-modules for each $s\geq1$. \end{coro} \begin{proof} For $s,n\in\mathbb{Z}_+,0\neq v\in V$, it follows from Lemma \ref{lemm2} that \begin{eqnarray*} &&L_k\big(v\otimes \partial_0^s(\partial_0+\partial_1)^n\big) \\ &=&L_k\big(v\otimes\sum_{i=0}^n\binom{n}{i}\partial_0^{s+i}\partial_1^{n-i}\big) \\&=&v\otimes\sum_{i=0}^n\binom{n}{i}\lambda_0^k(\partial_0-k\alpha_0)(\partial_0-k)^{s+i}\partial_1^{n-i} + \\&& \sum_{j=0}^{r}\big(\frac{k^{j+1}}{(j+1)!}\bar L_j\big)v\otimes\sum_{i=0}^n\binom{n}{i}\lambda_0^k(\partial_0-k)^{s+i}\partial_1^{n-i} +\\&& v\otimes\sum_{i=0}^n\binom{n}{i}\partial_0^{s+i}\lambda_0^k(\partial_1-k\alpha_1)(\partial_1-k)^{n-i} \\&=&v\otimes\lambda_0^k\Big((\partial_0-k\alpha_0)(\partial_0-k)^s(\partial_0+\partial_1-k)^n+\partial_0^s(\partial_1-k\alpha_1)(\partial_0+\partial_1-k)^n\Big)+ \\&&\sum_{j=0}^{r}\big(\frac{k^{j+1}}{(j+1)!}\bar L_j\big)v\otimes \lambda_0^k(\partial_0-k)^s (\partial_0+\partial_1-k)^n \\&\equiv&v\otimes\lambda_0^k\partial_0^s\big(\partial_0+\partial_1-k(s+\alpha_0+\alpha_1)\big)(\partial_0+\partial_1-k)^n+ \\&&\sum_{i=0}^{r}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)v\otimes\lambda_0^k \partial_0^s (\partial_0+\partial_1-k)^n \quad (\mathrm{mod}\ V\otimes W_{s-1}).
\end{eqnarray*} Similarly, we have \begin{eqnarray*} L_k\big(v\otimes (\partial_0+\partial_1)^n\partial_1^s\big)&\equiv& v\otimes\lambda_0^k\big(\partial_0+\partial_1-k(s+\alpha_0+\alpha_1)\big)(\partial_0+\partial_1-k)^n\partial_1^s \\&&+\sum_{i=0}^{r}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)v\otimes\lambda_0^k (\partial_0+\partial_1-k)^n \partial_1^s \quad (\mathrm{mod}\ V\otimes W_{s-1}). \end{eqnarray*} These computations establish the $\mathcal{L}$-module isomorphism $$V\otimes W_s/V\otimes W_{s-1}\cong \mathcal{M}\big(V,\Omega(\lambda_0,s+\alpha_0+\alpha_1)\big).$$ \end{proof} Now we are ready to prove the main theorem of this section, which gives a characterization of the irreducibility of $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$. \begin{theo}\label{th1} Let $m\in\mathbb{N},r\in\mathbb{Z}_+,\alpha_0\in\mathbb{C},\lambda_0,\lambda_i,\alpha_i\in\mathbb{C}^*$ for $i=1,2,\ldots,m$ with $\lambda_0,\ldots,\lambda_m$ pairwise distinct and $V$ be an irreducible $\bar \mathcal{L}_{r}$-module. Then the $\mathcal{L}$-module $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ is reducible if and only if $V\cong V_{\alpha_0}$. \end{theo} \begin{proof} Denote $\mathcal W=\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$. First, we consider the case when $V$ is finite-dimensional. Then $V\cong V_{\beta}$ for some $\beta\in\mathbb{C}$ and $\mathcal W\cong \bigotimes_{i=0}^m\Omega(\lambda_i,\alpha_i-\delta_{i,0}\beta)$ by Remark \ref{rem2.1}(2) and Remark \ref{re2.1}(1). According to Lemma \ref{lemm3.1}, $\bigotimes_{i=0}^m\Omega(\lambda_i,\alpha_i-\delta_{i,0}\beta)$ is reducible if and only if $\alpha_0=\beta$. Now assume that $V$ is infinite-dimensional. Let $r^\prime$ be the maximal (positive) integer such that $\bar L_{r^\prime}$ is an injective linear transformation of $V$ (see Remark \ref{re2.1}(2)).
Let $W$ be a nonzero submodule of $\mathcal W$ and $w$ a nonzero element of $W$ with the minimal degree. We claim $\mathrm{deg}(w)=\mathbf{0}$, namely, $w=v\otimes 1\in W$ for some $0\neq v\in V$. Then by Lemma \ref{lemm1}, $W=\mathcal W$, which shows that $\mathcal W$ is irreducible in this case. Suppose on the contrary that $\mathrm{deg}(w)>\mathbf{0}$. Write \begin{equation*} w=\sum_{\mathbf{p}\in I} v_{\mathbf{p}}\otimes \partial_0^{p_0}\partial_1^{p_1}\cdots\partial_m^{p_m}\in W, \end{equation*} where $I$ is a finite subset of $\mathbb{Z}_+^{m+1}$ and $\{v_{\mathbf{p}}\mid {\bf p}\in I\}$ is a linearly independent subset of $V$. Let $v_{\mathbf{q}}\otimes \partial_0^{{q}_0}\partial_1^{{q}_1}\cdots\partial_m^{{q}_m}$ be maximal among the terms in the sum with respect to $``\prec"$. Let $i^\prime$ be minimal such that $q_{i^\prime}>0$. Note that for any $k\in\mathbb{Z}_+$, \begin{eqnarray*} && L_k\big(\sum_{\mathbf{p}\in I} v_{\mathbf{p}} \otimes\partial_0^{p_0}\partial_1^{p_1}\cdots\partial_m^{p_m}\big)\\ &=& \sum_{\mathbf{p}\in I}\Big(\Big( v_{\mathbf{p}}\otimes\lambda_0^k(\partial_0-k\alpha_0)(\partial_0-k)^{p_0}+\sum_{i=0}^{r}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)v_{\mathbf{p}}\otimes \lambda_0^k(\partial_0-k)^{p_0}\Big)\otimes\partial_1^{p_1}\cdots\partial_m^{p_m}\\ &&+~v_{\mathbf{p}}\otimes\partial_0^{p_0}\otimes\sum_{i=1}^m\Big(\partial_1^{p_1}\otimes\cdots\otimes\lambda_i^k(\partial_i-k\alpha_i) (\partial_i-k)^{p_i}\otimes\cdots\otimes\partial_m^{p_m}\Big)\Big)\in W. 
\end{eqnarray*} Applying Proposition \ref{pro2.3}, we can see that a nonzero element of the following form $$w^\prime=\left\{\begin{array}{llll} (-1)^{q_0}\bar L_rv_{{\mathbf{q}}}\otimes \partial_1^{{q}_1}\cdots\partial_m^{{q}_m}+\mathrm{lower}\ \mathrm{terms}&\mbox{if}\ i^\prime=0,\\[4pt] (-1)^{q_{i^\prime}+1}\alpha_{i^\prime}v_{{\mathbf{q}}}\otimes \partial_0^{{q}_0}\partial_1^{{q}_1}\cdots\partial_{i^\prime}^0\cdots\partial_m^{{q}_m}+\mathrm{lower}\ \mathrm{terms}&\mbox{if}\ 1\leq i^\prime\leq m, \alpha_{i^\prime}\neq 0, \end{array}\right. $$ also lies in $W.$ This would lead to a contradiction, since ${\rm deg}(w^\prime)<{\rm deg}(w)$. \end{proof} The following is an immediate consequence of Lemma \ref{lemm2} and Theorem \ref{th1}. \begin{coro} Let $m\in\mathbb{N},r\in\mathbb{Z}_+,\alpha_0\in\mathbb{C},\lambda_0,\lambda_i,\alpha_i\in\mathbb{C}^*$ for $i=1,2,\ldots,m$ and $V$ be an irreducible $\bar \mathcal{L}_{r}$-module such that $V\ncong V_{\alpha_0}$. Then the $\mathcal{L}$-module $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ is irreducible if and only if $\lambda_0,\ldots, \lambda_m$ are pairwise distinct. \end{coro} \section{Isomorphism classes} In this section, we will determine the necessary and sufficient conditions for two classes of irreducible $\mathcal{L}$-modules $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ and $\mathcal{M}\big(W,\Omega(\mu_0,\beta_0)\big)\otimes\bigotimes_{j=1}^n\Omega(\mu_j,\beta_j)$ to be isomorphic. \begin{lemm}{\rm \cite{TZ1}}\label{lemm4.1} Let $m,n\in\mathbb{N},r,s\in\mathbb{Z}_+$, $\lambda_i,\alpha_i,\mu_j,\beta_j\in\mathbb{C}^*$ for $i=0,\ldots,m,j=0,\ldots,n$ with all $\lambda_i$ and all $\mu_j$ respectively pairwise distinct. 
Then $\bigotimes_{i=0}^m\Omega(\lambda_i,\alpha_i) \cong\bigotimes_{j=0}^n\Omega(\mu_j,\beta_j)$ as $\mathcal{L}$-modules if and only if $m=n$ and there exists $\sigma\in S_{m+1}$ $\mathrm{(}$the symmetric group on $m+1$ letters$\mathrm{)}$ such that $(\lambda_i,\alpha_i)=(\mu_{\sigma(i)},\beta_{\sigma(i)})$ for $i=0,1,\ldots,m.$ \end{lemm} \begin{defi}\label{defi-new} Let $r\in\mathbb{Z}_+,$ $\alpha\in\mathbb{C}$ and $V$ be an $\bar\mathcal{L}_{r}$-module. Denote by $V^\alpha$ the $\bar\mathcal{L}_{r}$-module obtained from $V$ by modifying the $\bar L_0$-action to $\bar L_0+\alpha\,{\rm id}_{V}.$ \end{defi} Now we are ready to present the main theorem of this section. \begin{theo}\label{th2} Let $m,n\in\mathbb{N},r,s\in\mathbb{Z}_+$, $\gamma_1,\gamma_2,\alpha_0,\beta_0\in\mathbb{C}$, $\lambda_0,\mu_0,\lambda_i,\alpha_i,\mu_j,\beta_j\in\mathbb{C}^*$ for $i=1,\ldots,m,j=1,\ldots,n$ with $\lambda_0,\ldots,\lambda_m$ and $\mu_0,\ldots,\mu_n$ respectively pairwise distinct. Assume that $V$ is an irreducible $\bar \mathcal{L}_{r}$-module and $W$ is an irreducible $\bar \mathcal{L}_{s}$-module. Then $$\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i) \cong\mathcal{M}\big(W,\Omega(\mu_0,\beta_0)\big)\otimes\bigotimes_{j=1}^n\Omega(\mu_j,\beta_j)$$ as $\mathcal{L}$-modules if and only if one of the following holds: \begin{itemize}\parskip-3pt \item[{\rm (a)}] $m=n$, $V\cong V_{\gamma_1},W\cong W_{\gamma_2}$ with $\alpha_0\neq\gamma_1,\beta_0\neq\gamma_2$, and there exists $\sigma\in S_{m+1}$ such that $(\lambda_i,\alpha_i-\delta_{i,0}\gamma_1)=(\mu_{\sigma(i)},\beta_{\sigma(i)}-\delta_{\sigma(i),0}\gamma_2)$ for $i=0,1,\ldots,m;$ \item[{\rm (b)}] $m=n,r=s,$ $V^{\alpha_0}\cong W^{\beta_0}$ as infinite-dimensional $\bar\mathcal{L}_r$-modules and there exists $\sigma\in S_m$ such that $(\lambda_0,\lambda_i,\alpha_i)=(\mu_0,\mu_{\sigma(i)},\beta_{\sigma(i)})$ for $i=1,\ldots,m.$ \end{itemize} \end{theo} \begin{proof} The sufficiency is trivial.
For convenience, we write $\Omega(\lambda_i,\alpha_i)=\mathbb{C}[\partial_i]$, $\Omega(\mu_j,\beta_j)=\mathbb{C}[{\partial}_j]$, $V_1=\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ and $V_2=\mathcal{M}\big(W,\Omega(\mu_0,\beta_0)\big)\otimes\bigotimes_{j=1}^n\Omega(\mu_j,\beta_j)$, respectively. Let $\phi$ be an isomorphism from $V_1$ to $V_2$. For any fixed nonzero element $v\otimes 1\in V_1$, assume \begin{equation}\label{ass222} \phi(v\otimes 1)=\sum\limits_{\mathbf{p}\in I}w_{\mathbf{p}}\otimes{\partial_0}^{p_0}{\partial_1}^{p_1}\cdots{\partial_n}^{p_n}, \end{equation} where $I$ is a finite subset of $\mathbb{Z}_+^{n+1}$ and all $w_{\mathbf{p}}$ are nonzero vectors of $W$. For any $k\in\mathbb{Z}$, it follows from $\phi\big(L_k(v\otimes 1)\big)=L_k\phi(v\otimes 1)$ that \begin{eqnarray}\label{4.42} &&\lambda_0^k\phi(v\otimes\partial_0\otimes1\cdots\otimes1)-\lambda_0^kk\alpha_0\phi(v\otimes1) +\sum_{i=0}^{r}\lambda_0^kk^{i+1}\phi\Big(\big(\frac{1}{(i+1)!}\bar L_i\big)v\otimes 1\Big)+\nonumber \\&&\phi\Big(v\otimes1\otimes\sum_{i=1}^m\lambda_i^k(1\otimes\cdots\otimes\partial_i\otimes\cdots\otimes1)\Big)-\sum_{i=1}^m\lambda_i^kk\alpha_i\phi(v\otimes1)\nonumber \\&=&\sum_{\mathbf{p}\in I}\Big(\Big( w_{\mathbf{p}}\otimes\mu_0^k({\partial}_0-k\beta_0)({\partial}_0-k)^{p_0}+\sum_{i=0}^{s}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)w_{\mathbf{p}}\otimes \mu_0^k(\partial_0-k)^{p_0}\Big)\otimes\partial_1^{p_1}\cdots\partial_n^{p_n}\nonumber \\ &&~~~~~~~~~+\,w_{\mathbf{p}}\otimes{\partial}_0^{p_0}\otimes\sum_{j=1}^n\big({\partial}_1^{p_1} \otimes\cdots\otimes\mu_j^k({\partial}_j-k\beta_j) ({\partial}_j-k)^{p_j}\otimes\cdots\otimes{\partial}_n^{p_n}\big)\Big). \end{eqnarray} If $V$ is finite-dimensional, then $V\cong V_{\gamma_1}$ for some $\gamma_1\in\mathbb{C}$ and the power of $k$ on the left hand side of \eqref{4.42} is less than $2$. 
By Proposition \ref{pro2.3}, we observe from \eqref{4.42} that $m=n$ and $W\cong W_{\gamma_2}$ for some $\gamma_2\in\mathbb{C}$. Then by Lemma \ref{lemm4.1}, there exists $\sigma\in S_{m+1}$ such that $(\lambda_i,\alpha_i-\delta_{i,0}\gamma_1)=(\mu_{\sigma(i)},\beta_{\sigma(i)}-\delta_{\sigma(i),0}\gamma_2)$ for $\alpha_0\neq\gamma_1,\beta_0\neq\gamma_2$ and $i=0,1,\ldots,m$. This is (a). Now consider the case that $V$ is infinite-dimensional. Then by Proposition \ref{pro2.3}, $$m=n,\ p_j=0\ \ \forall\, 1\le j\le m\ {\rm and}\ \exists \sigma\in S_m \ {\rm such\ that}\ (\lambda_0,\lambda_i,\alpha_i)=(\mu_0,\mu_{\sigma(i)},\beta_{\sigma(i)}),\forall 1\le i\le m.$$ Expanding both sides of $\phi\big(L_{n-k}L_k(v\otimes 1)\big)=L_{n-k}L_k\phi(v\otimes 1)$ and then comparing the highest degree of $k$, we deduce that $r=s,p_0=0$ by Proposition \ref{pro2.3}. These remarks allow us to define an injective linear map $\varphi:V\rightarrow W$ such that $\phi(v\otimes 1)=\varphi (v)\otimes 1$ for $v\in V$. Thus, \eqref{4.42} can be rewritten as \begin{eqnarray*} &&\lambda_0^k\phi(v\otimes\partial_0\otimes1\cdots\otimes1)-\lambda_0^kk\alpha_0\phi(v\otimes1) +\sum_{i=0}^{r}\lambda_0^kk^{i+1}\phi\Big(\big(\frac{1}{(i+1)!}\bar L_i\big)v\otimes 1\Big)+\nonumber \\&&\phi\Big(v\otimes1\otimes\sum_{i=1}^m\lambda_i^k(1\otimes\cdots\otimes\partial_i\otimes\cdots\otimes1)\Big)-\sum_{i=1}^m\lambda_i^kk\alpha_i\phi(v\otimes1) \\&=&\mu_0^k\varphi (v)\otimes{\partial}_0\otimes1\cdots\otimes1-\mu_0^kk\beta_0\varphi (v)\otimes1 +\sum_{i=0}^{r}\mu_0^kk^{i+1}\big(\frac{1}{(i+1)!}\bar L_i\big)\varphi (v)\otimes 1+\nonumber \\&&\varphi (v)\otimes1\otimes\sum_{j=1}^m\mu_j^k(1\otimes\cdots\otimes{\partial}_j\otimes\cdots\otimes1) -\sum_{j=1}^m\mu_j^kk\beta_j\varphi (v)\otimes1, \end{eqnarray*} which together with Proposition \ref{pro2.3} implies $$\varphi\big((\bar L_i-\alpha_0\delta_{i,0})v\big)=(\bar L_i-\beta_0\delta_{i,0})\varphi(v), \forall i=0,1,\ldots, r,$$ i.e., $V^{\alpha_0}\cong W^{\beta_0}$. This is (b).
\end{proof} \section{New irreducible modules} In this section, we compare the tensor product $\mathcal{L}$-modules $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ with other known irreducible non-weight Virasoro modules. By Remark \ref{re2.1}, we only consider the case when $V$ is infinite-dimensional in the following. Let us first recall irreducible non-weight $\mathcal{L}$-modules from \cite{H,CH,LGW,LLZ,TZ1,MZ,LZ2}. For any $\lambda,\alpha\in\mathbb{C}^*$ and $h(t)\in\mathbb{C}[t]$ such that ${\rm deg}\, h(t)=1$, $\Phi(\lambda,\alpha,h)=\mathbb{C}[s,t]$ carries the structure of an $\mathcal{L}$-module: $L_mf(s,t)=\sum_{j=0}^\infty \lambda^m(-m)^j S^jf(s,t)$ and $Cf(s,t)=0$, where $$S^j=\frac s{j!}\partial_s^j-\frac 1{(j-1)!}\partial_s^{j-1}\big(t(\eta-\partial_t)+h(\alpha)\big)-\frac{1}{(j-2)!}\partial_s^{j-2}\alpha(\eta-\partial_t)\quad{\rm for}\ j\in\mathbb{Z}_+,$$ $\partial_t=\frac{\partial}{\partial t}, \partial_s=\frac{\partial}{\partial s}$ and $\eta=\frac{h(t)-h(\alpha)}{t-\alpha}\in\mathbb{C}^*$. Here, $\big(\frac\partial {\partial s}\big)^{-1}=0$, $k!=1$ for $k<0$ and $\binom{i}{j}=0$ for $j> i$ or $j<0$. Let $V$ be an irreducible $\mathcal{L}$-module for which there exists $R_V\in\mathbb{Z}_+$ such that $L_m$ is locally nilpotent on $V$ for all $m\geq R_V$. It was shown respectively in \cite{H,LGW,TZ1} that the tensor product $\mathcal{L}$-modules $\bigotimes_{i=1}^n\Phi(\lambda_i,\alpha_i,h_i(t))\otimes V$, $\bigotimes_{i=1}^n\Phi(\lambda_i,\alpha_i, h_i(t))\otimes\bigotimes_{i=1}^m\Omega(\mu_i,\beta_i)\otimes V$ and $\bigotimes_{i=1}^{n}\Omega(\lambda_i,\alpha_i)\otimes V$ are irreducible if $\lambda_1,\ldots,\lambda_n,\mu_1,\ldots,\mu_m$ are pairwise distinct. 
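A small clarifying remark (ours, immediate from the degree assumption on $h$): the quantity $\eta$ above is a nonzero constant, not a rational function of $t$.

```latex
% Remark (ours): writing $h(t)=at+b$ with $a\in\mathbb{C}^*$ (possible since ${\rm deg}\,h(t)=1$),
\[ \eta=\frac{h(t)-h(\alpha)}{t-\alpha}=\frac{a(t-\alpha)}{t-\alpha}=a\in\mathbb{C}^*. \]
```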
Let $b\in\mathbb{C}$ and $A$ be an irreducible module over the associative algebra $\mathcal{K}=\mathbb{C}[t^{\pm1}, t\frac{d}{dt}].$ The action of $\mathcal{L}$ on $A_b:=A$ is given by $$Cv=0, L_nv=(t^{n+1}\frac{d}{dt}+nbt^n)v\quad \mbox{for}\ n\in\mathbb{Z},v\in A.$$ It was proved in \cite{LZ2} that $A_b$ is an irreducible $\mathcal{L}$-module if and only if one of the following conditions holds: (1) $b\neq 0\ \mbox{or}\ 1$; (2) $b=1$ and $t\frac{d}{dt} A =A$; (3) $b=0$ and $A$ is not isomorphic to the natural $\mathcal{K}$-module $\mathbb{C}[t,t^{-1}]$. Let $V$ be an $\bar\mathcal{L}_{r}$-module. For any $a\in\mathbb{C}$ and $\gamma(t)=\sum_{i}c_it^{i}\in\mathbb{C}[t,t^{-1}]$, define the action of $\mathcal{L}$ on $V\otimes \mathbb{C}[t,t^{-1}]$ as follows \begin{eqnarray*}\label{l6.1} &L_m(v\otimes t^n)=\Big(a+n+\sum_{i=0}^r\frac{m^{i+1}}{(i+1)!} \bar L_i\Big)v\otimes t^{m+n}+\sum_{i}c_iv\otimes t^{n+i},\\ &C(v\otimes t^n)=0 \quad {\rm for}\ m,n\in\mathbb{Z}, v\in V. \end{eqnarray*} Then $V\otimes \mathbb{C}[t,t^{-1}]$ carries the structure of an $\mathcal{L}$-module under the above given actions, which is denoted by ${\mathcal M}(V,\gamma(t))$. Note from \cite{LLZ} that ${\mathcal M}(V,\gamma(t))$ is a weight $\mathcal{L}$-module if and only if $\gamma(t)\in \mathbb{C}$ and also that the $\mathcal{L}$-module ${\mathcal M}(V,\gamma(t))$ for $\gamma(t)\in\mathbb{C}[t,t^{-1}]$ is irreducible if and only if $V$ is irreducible. Let $\mathbb{C}[[t]]$ be the algebra of formal power series. Denote $e^{mt}=\sum_{i=0}^\infty\frac{(mt)^i}{i!}\in\mathbb{C}[[t]]$ and $\mathcal{H}=\mathrm{span}\{L_i\mid i\geq-1\}$. Assume that $V$ is an $\mathcal{H}$-module. 
For any $\mu,\lambda\in\mathbb{C}^*$ and $ \alpha\in\mathbb{C}$, define an $\mathcal{L}$-action on the vector space $\mathcal{M}\big(V,\mu,\Omega(\lambda,\alpha)\big)=V\otimes \mathbb{C} [\partial]$ as follows \begin{eqnarray*} &\label{Lm2.1} L_m\big(v\otimes f(\partial)\big)=v\otimes\lambda^m(\partial-m\alpha)f(\partial-m)+(\mu^me^{mt}-1)\frac{d}{dt}v\otimes \lambda^mf(\partial-m),\\& \label{C1232.3} C\big(v\otimes f(\partial)\big)=0\quad {\rm for}\ m\in\mathbb{Z},v\in V, f(\partial)\in\mathbb{C}[\partial]. \end{eqnarray*} Let $\mathcal C$ denote the category of all nontrivial $\mathcal{H}$-modules $V$ satisfying the following condition: for any $0\neq v\in V$ there exists $r_v\in\mathbb{Z}_+$ such that $L_{r_v+i}v=0$ for all $i\geq 1$. Given any irreducible $\mathcal{H}$-module $V$ in $\mathcal C$, it was proved in \cite{CH} that the $\mathcal{L}$-module $\mathcal{M}\big(V,\mu,\Omega(\lambda,\alpha)\big)$ is irreducible if and only if $\alpha\neq0$ and $\mu\neq 1$. \begin{prop}\label{5.1} Let $m,r\in\mathbb{N},\alpha_0\in\mathbb{C},1\neq \mu,\lambda_0,\lambda_i,\alpha_i,\alpha,\lambda\in\mathbb{C}^*$ for $i=1,2,\ldots,m$ with $\lambda_0,\ldots, \lambda_m$ pairwise distinct. Assume that $V$ is an irreducible $\bar \mathcal{L}_{r}$-module and $V^\prime$ is an irreducible $\mathcal{H}$-module in $\mathcal{C}$. Then $$\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\ncong\mathcal{M}\big(V^\prime,\mu,\Omega(\lambda,\alpha)\big).$$ \end{prop} \begin{proof} Suppose that $$\phi: \mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\rightarrow \mathcal{M}\big(V^\prime,\mu,\Omega(\lambda,\alpha)\big)$$ is an isomorphism of $\mathcal{L}$-modules. Take any $0\neq v\in V$ and assume that \begin{eqnarray* \phi(v\otimes1)=\sum_{i=0}^pu_i\otimes \partial^i \end{eqnarray*}with $u_p\neq0$. 
Let $r^\prime$ be the minimal nonnegative integer such that $L_{r^\prime+k}u_i=0$ for all $i=0,1,\ldots, p$ and $k\geq 1$. It follows from $\phi\big(L_k(v\otimes1)\big)=L_k(\sum_{i=0}^pu_i\otimes \partial^i)$ that \begin{eqnarray}\label{5.22} &&\lambda_0^k\phi(v\otimes\partial_0\otimes1\cdots\otimes1)-\lambda_0^kk\alpha_0\phi(v\otimes1) +\sum_{i=0}^{r}\lambda_0^kk^{i+1}\phi\Big(\big(\frac{1}{(i+1)!}\bar L_i\big)v\otimes 1\Big)+ \nonumber \\&&\nonumber \phi\Big(v\otimes1\otimes\sum_{i=1}^m\lambda_i^k(1\otimes\cdots\otimes\partial_i\otimes\cdots\otimes1)\Big)-\sum_{i=1}^m\lambda_i^kk\alpha_i\phi(v\otimes1)\\ &=& \sum_{i=0}^p\Big\{ \lambda^ku_i\otimes (\partial-k\alpha)(\partial-k)^i+\lambda^k(\mu^k\sum_{j=0}^{r^{\prime}}\frac{k^j}{j!}L_{j-1}-L_{-1})u_i\otimes (\partial-k)^i\Big\}. \end{eqnarray} By (\ref{5.22}) and Proposition \ref{pro2.3}, we obtain $m=1$ and the following two cases. \begin{case} $\lambda_0=\lambda\mu,\lambda_1=\lambda$. \end{case} Note that $p=0,r=r^{\prime}-1$ in this case. So \eqref{5.22} can be simplified into \begin{eqnarray*} &&\lambda_0^k\phi(v\otimes\partial_0\otimes1)-\lambda_0^kk\alpha_0\phi(v\otimes1) +\sum_{i=0}^{r}\lambda_0^kk^{i+1}\Big(\frac{1}{(i+1)!} \varphi(\bar L_iv)\otimes 1\Big)+ \nonumber \\&&\nonumber \lambda_1^k\phi(v\otimes1\otimes\partial_1)-\lambda_1^kk\alpha_1\varphi(v)\otimes1\\ &=& \lambda^k u_0\otimes (\partial-k\alpha)+\lambda^k(\mu^k\sum_{j=0}^{r+1}\frac{k^j}{j!}L_{j-1}-L_{-1})u_0\otimes 1. \end{eqnarray*} Comparing the coefficient of $\lambda_0^k$, we get $\phi(v\otimes\partial_0\otimes1)=L_{-1}u_0\otimes 1$. 
It follows from $\phi\big(L_k(v\otimes\partial_0\otimes1)\big)=L_k(L_{-1}u_0\otimes 1)$ that \begin{eqnarray}\label{5.2} &&\lambda_0^k\phi(v\otimes(\partial_0-k\alpha_0)(\partial_0-k)\otimes1)+\sum_{i=0}^r\frac{k^{i+1}}{(i+1)!}\bar L_iv\otimes \lambda_0^k(\partial_0-k)\otimes1+\nonumber \\&& v\otimes \partial_0\otimes\lambda_1^k(\partial_1-k\alpha_1) \nonumber\\ &=& \lambda^k L_{-1}u_0\otimes (\partial-k\alpha)+\lambda^k(\mu^k\sum_{j=0}^{r+1}\frac{k^j}{j!}L_{j-1}-L_{-1})L_{-1}u_0\otimes 1. \end{eqnarray} Then a contradiction is obtained by comparing the highest degree of $k$ on both sides of \eqref{5.2}. \begin{case} $\lambda_0=\lambda,\lambda_1=\lambda\mu$. \end{case} In this case, we have $p=r=1,r^\prime=0$. For any $n,k\in\mathbb{Z}$, by $\phi\big(L_kL_{n-k}(v\otimes1)\big)=L_kL_{n-k}\sum_{i=0}^p(u_i\otimes \partial^i)$, we obtain \begin{eqnarray*} &&\phi\Big(v\otimes\lambda_0^n(\partial_0-k\alpha_0)(\partial_0-(n-k)\alpha_0-k)\otimes1+ \\&&\sum_{i=0}^r\frac{k^{i+1}}{(i+1)!}\bar L_iv\otimes \lambda_0^n(\partial_0-(n-k)\alpha_0-k)\otimes1+ \\&& v\otimes \lambda_0^{n-k}(\partial_0-(n-k)\alpha_0)\otimes\lambda_1^k(\partial_1-k\alpha_1)+\\&& \sum_{i=0}^r\frac{(n-k)^{i+1}}{(i+1)!}\bar L_iv\otimes \lambda_0^n(\partial_0-k\alpha_0)\otimes1+\\&& \sum_{i=0}^r\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)\sum_{i=0}^r\big(\frac{(n-k)^{i+1}}{(i+1)!}\bar L_i\big)v\otimes \lambda_0^n\otimes1+\\&& \sum_{i=0}^r\frac{(n-k)^{i+1}}{(i+1)!}\bar L_iv\otimes \lambda_0^{n-k}\otimes\lambda_1^k(\partial_1-k\alpha_1)+ \\&& v\otimes \lambda_0^{k}(\partial_0-k\alpha_0)\otimes\lambda_1^k(\partial_1-(n-k)\alpha_1)+ \\&& \sum_{i=0}^r\frac{k^{i+1}}{(i+1)!}\bar L_iv\otimes \lambda_0^k\otimes\lambda_1^{n-k}(\partial_1-(n-k)\alpha_1)+ \\&& v\otimes 1\otimes\lambda_1^k(\partial_1-(n-k)\alpha_1-k)(\partial_1-k\alpha_1)\Big) \nonumber\\ &=&\sum_{i=0}^p\Big(u_i\otimes \lambda^n(\partial-k\alpha)(\partial-(n-k)\alpha-k)(\partial-n)^i+\\&& (\mu^ke^{kt}-1)\frac{d}{dt}u_i\otimes 
\lambda^n(\partial-(n-k)\alpha-k)(\partial-n)^i+\\&& (\mu^{(n-k)}e^{(n-k)t}-1)\frac{d}{dt}u_i\otimes \lambda^n(\partial-k\alpha)(\partial-n)^i+\\&& (\mu^{(n-k)}e^{(n-k)t}-1)\frac{d}{dt}(\mu^ke^{kt}-1)\frac{d}{dt}u_i\otimes \lambda^n(\partial-n)^i\Big). \end{eqnarray*} Comparing the coefficients of $k^4$ on both sides, we see that this case is impossible. Therefore, the two modules cannot be isomorphic. \end{proof} \begin{remark} When $r=0$ in Proposition \ref{5.1}, the results can be found in Proposition 5.1 of {\rm\cite{CH}}. \end{remark} \begin{prop} Let $r,r^\prime\in\mathbb{Z}_+$. Assume that $V$ is an irreducible $\bar \mathcal{L}_{r}$-module and $V^\prime$ is an irreducible $\bar \mathcal{L}_{r^\prime}$-module. Then $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ is not isomorphic to any of the following irreducible $\mathcal{L}$-modules: \begin{eqnarray*} &&{\mathcal M}(V^\prime,\gamma(t)), \ \bigotimes_{j=1}^n\Omega(\mu_j,\beta_j)\otimes M,\ \bigotimes_{j=1}^n\Phi\big(\mu_j,\beta_j, h_j(t)\big)\otimes M, \\&& \bigotimes_{i=1}^n\Phi\big(\mu_i,\beta_i,h_i(t)\big)\otimes\bigotimes_{j=1}^q\Omega(\nu_j,\gamma_j)\otimes M, \ A_b, \ M, \end{eqnarray*} where $m,n,q\geq1,\lambda_0,\lambda_i,\alpha_i,\mu_i,\beta_i\in\mathbb{C}^*, \alpha_0,b\in\mathbb{C},$ $h_i(t)\in\mathbb{C}[t]$ with ${\rm deg}\ h_i(t)=1,$ $\lambda_0,\ldots,\lambda_m,$ $\mu_1,\ldots,\mu_n,\nu_1,\ldots,\nu_q$ being pairwise distinct$,$ and $M$ is an irreducible $\mathcal{L}$-module for which there exists $R_M\in\mathbb{Z}_+$ such that $L_m$ is locally nilpotent on $M$ for all $m\geq R_M$. \end{prop} \begin{proof} Suppose that $$\phi: \mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\rightarrow {\mathcal M}(V^\prime,\gamma(t))$$ is an isomorphism of $\mathcal{L}$-modules.
Take any $0\neq v\in V$ and assume that \begin{eqnarray*} \phi(v\otimes1)=\sum_{i=1}^n\sum_j b_{i,j} w_i\otimes t^{m+j} \end{eqnarray*} for $m\in\mathbb{Z},n\in\mathbb{N},b_{i,j}\in\mathbb{C},w_i\in V^\prime$. For any $k\in\mathbb{Z}$, by $ \phi(L_k(v\otimes1))=\sum_{i=1}^n\sum_j b_{i,j} L_k( w_i\otimes t^{m+j}),$ we have \begin{eqnarray}\label{5.11} &&\nonumber \phi\Big(\Big(v\otimes\lambda_0^k(\partial_0-k\alpha_0)+\sum_{i=0}^{r}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)v\otimes \lambda_0^k\Big)\otimes1\otimes\cdots\otimes1\Big)+\\ &&\nonumber \phi\Big(v\otimes1\otimes\sum_{i=1}^m\big(1\otimes\cdots\otimes\lambda_i^k(\partial_i-k\alpha_i)\otimes\cdots\otimes1\big)\Big) \\&=&\sum_{i=1}^n\sum_j b_{i,j} \Big(\big(a+m+j+\sum_{e=0}^{r^\prime}\frac{k^{e+1}}{(e+1)!} \bar L_e\big)w_i\otimes t^{k+m+j}+\sum_{l}c_lw_i\otimes t^{m+j+l}\Big). \end{eqnarray} Using Proposition \ref{pro2.3} in \eqref{5.11}, we get $\phi\big( \bar L_rv\otimes 1\big)=0$ or $\phi\big(v\otimes 1\big)=0$, which contradicts $\bar L_rv\otimes 1\neq0$ and $v\otimes 1\neq0$. Now we prove $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\ncong \bigotimes_{j=1}^n\Omega(\mu_j,\beta_j)\otimes M$. By a similar discussion as before, we suppose that $$\phi: \mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\rightarrow \bigotimes_{j=1}^n\Omega(\mu_j,\beta_j)\otimes M$$ is an isomorphism of $\mathcal{L}$-modules. Take any $0\neq v\in V$ and assume that \begin{eqnarray}\label{5.44} \phi(v\otimes1)=\sum\limits_{\mathbf{p}\in I}v_{\mathbf{p}}\otimes{\partial_1}^{p_1}\cdots{\partial_n}^{p_n}, \end{eqnarray} where $v_{\mathbf{p}}\in M$ and $I$ is a finite subset of $\mathbb{Z}_+^{n}$.
For any $k\in\mathbb{Z}$, by $\phi(L_k(v\otimes1))=\sum\limits_{\mathbf{p}\in I}L_k(v_{\mathbf{p}}\otimes{\partial_1}^{p_1}\cdots{\partial_n}^{p_n}),$ it is easy to get \begin{eqnarray}\label{5.55} &&\nonumber \phi\Big(\Big(v\otimes\lambda_0^k(\partial_0-k\alpha_0)+\sum_{i=0}^{r}\big(\frac{k^{i+1}}{(i+1)!}\bar L_i\big)v\otimes \lambda_0^k\Big)\otimes1\otimes\cdots\otimes1\Big)+\\ &&\nonumber \phi\Big(v\otimes1\otimes\sum_{i=1}^m\big(1\otimes\cdots\otimes\lambda_i^k(\partial_i-k\alpha_i)\otimes\cdots\otimes1\big)\Big) \\&=&\sum\limits_{\mathbf{p}\in I}v_{\mathbf{p}}\otimes\sum_{j=1}^n\big({\partial_1}^{p_1}\cdots\mu_j^k(\partial_j-k\beta_j)(\partial_j-k)^{p_j}\cdots{\partial_n}^{p_n}\big). \end{eqnarray} Using Proposition \ref{pro2.3} in \eqref{5.55}, we get $n=m+1$ and there exists $j^\prime$ such that $\lambda_0=\mu_{j^\prime}$ and $p_{j^\prime}=r+1$, $p_j=0$ for $j=1,\ldots,j^\prime-1,j^\prime+1,\ldots,n$. Then \eqref{5.44} can be rewritten as \begin{eqnarray*} \phi(v\otimes1)=\sum\limits_{\mathbf{p}\in I}v_{\mathbf{p}}\otimes1\otimes\cdots\otimes\partial_{j^\prime}^{r+1}\otimes\cdots\otimes1. \end{eqnarray*} For any $k,l\in\mathbb{Z}$, expanding $\phi(L_kL_{l-k}(v\otimes1))=L_kL_{l-k}(\sum\limits_{\mathbf{p}\in I}v_{\mathbf{p}}\otimes1\otimes\cdots\otimes\partial_{j^\prime}^{r+1}\otimes\cdots\otimes1)$ and comparing the highest degree of $k$, we get $\phi\big( \bar L_rv\otimes 1\big)=0$, which is a contradiction. Then by a method similar to that of Proposition 5.2 in \cite{CH}, we obtain that $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ is not isomorphic to any of the irreducible $\mathcal{L}$-modules: \begin{eqnarray*} \bigotimes_{j=1}^n\Phi\big(\mu_j,\beta_j, h_j(t)\big)\otimes M,\ \bigotimes_{i=1}^n\Phi\big(\mu_i,\beta_i, h_i(t)\big)\otimes\bigotimes_{j=1}^q\Omega(\nu_j,\gamma_j)\otimes M.
\end{eqnarray*} Similarly, we conclude that $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\ncong A_b$. Since the action of $L_m$ for each $m\in\mathbb{Z}$ on $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ is not locally nilpotent, one has $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)\ncong M$. This completes the proof. \end{proof} Now we can conclude this section with the following corollary. \begin{coro} Let $m\in\mathbb{N},r\in\mathbb{Z}_+,\alpha_0\in\mathbb{C},\lambda_0,\lambda_i,\alpha_i\in\mathbb{C}^*$ for $i=1,2,\ldots,m$ with $\lambda_0,\ldots, \lambda_m$ pairwise distinct and $V$ be an irreducible $\bar \mathcal{L}_{r}$-module. Suppose that $V$ is infinite-dimensional. Then $\mathcal{M}\big(V,\Omega(\lambda_0,\alpha_0)\big)\otimes\bigotimes_{i=1}^m\Omega(\lambda_i,\alpha_i)$ is not isomorphic to any irreducible $\mathcal{L}$-module in {\rm\cite{H,CH,LGW,LLZ,TZ1,MZ,LZ2}}. \end{coro} \section*{Acknowledgements} This work was supported by the National Natural Science Foundation of China (No. 11801369, 11501417, 11871421, 11431010, 11971350) and the Scientific Research Foundation of Hangzhou Normal University (No. 2019QDL012). \small
\section{Introduction} One of the main driving factors for the success of the current deep learning research paradigm is that intelligent systems can obtain open-world perceptions from large amounts of data \cite{brown2020language,he2016deep} through high-capacity function approximators (neural networks, for instance). Deep reinforcement learning (DRL) follows this research paradigm, using neural networks for function approximation to solve sequential decision-making problems, and has achieved great success in a wide range of domains such as board games \cite{alpha}, video games \cite{nature_dqn,starcraft}, and robot manipulation \cite{sac}. However, it remains a challenging problem to create a DRL agent that maintains high sample efficiency, robustness, and low computing resource consumption \cite{drl_matters,andrychowicz2021matters}. Many factors influence the training and optimization of DRL algorithms: high variance, sensitivity to hyper-parameters, heavy computational resource consumption coupled with sample inefficiency, and training instability induced by the randomness of the learning process and the environment. Some DRL algorithms attempt to tackle these issues from the perspective of ensemble methods, i.e., by introducing more value functions or more policy functions \cite{wiering2008ensemble,osband2016deep,chua2018deep,kurutach2018model,saphal2021seerl,sunrise}. Ensemble methods in DRL work well when the models are sufficiently diverse, which is tricky because the policies and value functions grow increasingly similar as training proceeds. Furthermore, training ensemble methods in DRL requires several times the computing resources, introduces more hyper-parameters, and depends on non-algorithmic tricks and subtle hyper-parameter fine-tuning.
Because of these requirements, compared to current model-free DRL methods, some tricky issues remain to be addressed when applying ensemble methods to DRL algorithms in practice. We ask: can we find a novel but simple way to ensemble DRL algorithms that solves the resource consumption problem? We find that one network might be enough. In this paper, we propose the \underline{M}inimalist \underline{E}nsemble \underline{P}olicy \underline{G}radient framework (MEPG) to deal with the aforementioned issues of ensemble methods in DRL. The source of our insight is that ensemble methods can be achieved by integrating multiple models into a single model. In this way, the heavy resource consumption of ensemble learning methods is well addressed, as a single model does not consume more computational resources than model-free deep reinforcement learning algorithms. In addition, this framework introduces only one additional hyper-parameter over modern model-free DRL algorithms. We implement our insight through a minimalist ensemble consistent Bellman update that utilizes dropout \cite{srivastava2014dropout}, which is widely used in deep learning but rarely in DRL. Each dropout operator acting on the neural network is equivalent to a new sub-model (or sub-network). The complete model without the dropout operator is equivalent to all the sub-networks acting together, i.e., an ensemble of all the models. However, applying the dropout operator directly to the value functions creates a source-target mismatch, i.e., the left and right sides of the Bellman equation no longer correspond to the same value function. Therefore, we introduce the minimalist ensemble consistent Bellman update, in which the same dropout operator acts on both sides of the Bellman equation at the same time. Furthermore, we show theoretically that the policy evaluation of the MEPG framework is equivalent to a deep Gaussian Process \cite{damianou2013deep}.
We apply our MEPG framework to two simple DRL algorithms, DDPG \cite{ddpg} and SAC \cite{sac}, and our experimental results show that the resulting algorithms match or outperform state-of-the-art ensemble DRL and model-free DRL algorithms. We highlight that our framework introduces neither auxiliary loss functions nor additional computational consumption compared to the base algorithm. Moreover, MEPG uses far fewer parameters than modern model-free DRL algorithms. We also highlight that techniques from deep neural networks may contribute to reinforcement learning, which offers a new perspective for improving DRL algorithms. Our contributions are summarized as follows. Firstly, we propose a general ensemble reinforcement learning framework, called MEPG, which is simple and easy to implement. Without introducing any additional loss functions, this framework solves the problem that ensemble reinforcement learning methods consume a large amount of computational resources. Secondly, we provide a theoretical analysis of this framework: the policy evaluation process in the MEPG framework is mathematically equivalent to a deep Gaussian Process. Thirdly, we experimentally demonstrate the effectiveness of the MEPG framework by combining it with the simple DDPG and SAC algorithms. The results show that the MEPG framework matches or exceeds state-of-the-art ensemble DRL and model-free DRL methods with fewer computational resources. Moreover, this framework can be combined with any DRL algorithm. \section{Related Work} We now briefly discuss off-policy DRL algorithms, ensemble DRL methods, and the dropout operator in DRL algorithms, and compare these works with our framework. \subsection{Off-policy DRL algorithms} There is a clear trail in the development of off-policy deep RL. Deep Q-networks (DQN) \cite{dqn} was the first successful off-policy deep RL algorithm applied to video games.
It has long been recognized that overestimation in Q-learning can severely impair performance \cite{thrun1993issues}. Double Q-learning \cite{hasselt2010double,ddqn} was proposed to solve the overestimation issue for discrete action spaces. The overestimation issue also exists in continuous control algorithms such as Deterministic Policy Gradient (DPG) \cite{dpg} and its deep variant Deep Deterministic Policy Gradient (DDPG) \cite{ddpg}. \cite{td3} introduced clipped double Q-learning (CDQ), which reduces the overestimation issue in DDPG. \cite{sac} proposed the Soft Actor Critic (SAC) algorithm, which is based on the maximum entropy reinforcement learning framework \cite{ziebart2010modeling} and combined with CDQ, resulting in a stronger algorithm. However, the CDQ approach introduces a slight underestimation issue \cite{lan2019maxmin,he2020wd3}. We apply the MEPG framework to the DDPG and SAC algorithms, which matches or surpasses the performance of the compared methods even without introducing any technique for obtaining a precise value function. \subsection{Ensemble DRL algorithms} Ensemble methods, a popular idea in machine learning, use multiple learning algorithms to obtain better performance than could be obtained from any of the constituent algorithms alone. Ensemble methods are also used in DRL \cite{wiering2008ensemble,osband2016deep,chua2018deep} for different purposes. \cite{kurutach2018model} showed that modeling error can be reduced by ensemble methods in model-based DRL. \cite{lan2019maxmin,redq} tackled the estimation issue and gave unbiased estimation methods for the value function from the perspective of ensembles of functions. \cite{rem} proposed Random Ensemble Mixture, which introduces a convex combination of multiple Q-networks to approximate the optimal Q-function. \cite{saphal2021seerl} proposed a method for model training and selection in a single run.
\cite{sunrise} proposed the SUNRISE framework, which uses an ensemble-based weighted Bellman backup and UCB exploration \cite{auer2002finite}. We are inspired by ensemble learning, but unlike these methods, we use only a single model instead of multiple models to achieve our ensemble purpose. \subsection{Dropout in DRL algorithms} It was recognized early on that dropout \cite{srivastava2014dropout} can improve the performance of DRL algorithms in video games \cite{lample2017playing,vinyals2019grandmaster}, because the dropout operator prevents complex co-adaptations of units in neural networks, where a feature detector is only helpful in the context of several other feature detectors. Therefore, the effectiveness of dropout in pixel-level-input DRL is somewhat similar to that in supervised learning tasks with image inputs. \cite{kamienny2020privileged} proposed a privileged information dropout (PI-Dropout) method to improve sample efficiency and performance, but it requires prior knowledge, i.e., an optimal representation of the inputs. Our work does not require any auxiliary information. \cite{ICRA2019SafeRL,wu2021uncertainty} obtain neural network model uncertainty with dropout through multiple forward passes given the same inputs. The closest work to ours is DQN+dropout \cite{gal2016dropout} in a discrete environment, which also utilizes the uncertainty of the Q-function induced by dropout. Another approach to measuring uncertainty is to use the variance produced by multiple Q-functions \cite{sunrise}. Our approach differs from previous works in that we neither use multiple Q-functions to estimate model uncertainty nor aim to prevent pixel-level feature co-adaptation. Instead, we use the model ensemble property introduced by the dropout operator.
\section{Background} We consider the standard RL paradigm as a Markov Decision Process (MDP), characterized by a 6-tuple $(\mathcal{S, A, R, } P,\rho_0 ,\gamma)$ \cite{rl}: a state space $\mathcal{S}$, an action space $\mathcal{A}$, a reward function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, a transition probability $P(s_{t+1} \mid s_t, a_t)$ specifying the probability of transitioning from state $s_t$ to $s_{t+1}$ given action $a_t$, an initial state distribution $\rho_0$, and a discount factor $\gamma \in [0,1)$. The agent learns a policy, stochastic or deterministic, from interacting with the environment. At each time step, the agent generates an action $a$ w.r.t. the policy $\pi$ based on the current state $s$ and sends $a$ to the environment. The agent then receives a reward signal $r$ and a new state $s'$ from the environment. Through repeated interactions, a trajectory $\tau = \{s_0, a_0, s_1, a_1, \cdots\}$ is generated. The optimization goal of RL is to maximize the expected cumulative discounted reward $J = \mathbb{E}_{\tau} [R_0]$, where $R_t = \sum_{i=t}^T \gamma^{i-t} r(s_i,a_i)$. Two functions play important roles in RL: the state value function $V^\pi(s) = \mathbb{E}_\tau [ R_0 | s_0=s ]$ and the action value function (Q-function) $Q^\pi (s, a) = \mathbb{E}_\tau [ R_0 |s_0=s,a_0=a]$, a.k.a. the critic, which represent how good a state is and how good a specific action is, respectively. According to the Bellman equation \cite{bellman1954theory}, the action value function satisfies \begin{equation} Q^{\pi}(s,a) = r(s,a) + \gamma \mathbb{E}_{s' \sim P(\cdot|s,a), a'\sim \pi(\cdot \mid s')}[Q^\pi(s',a')]. \label{eq: bellman equation Q} \end{equation} In Q-learning, the value function can be learned in the style of Equation (\ref{eq: bellman equation Q}) \cite{rl}. \subsection{Deterministic Policy Gradient} For a large state space $\mathcal{S}$, we can utilize neural networks to represent the policy and Q-function.
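As a toy illustration of the quantities defined above, the discounted return $R_t = \sum_{i=t}^T \gamma^{i-t} r(s_i,a_i)$ can be computed by a backward recursion over a finite trajectory (a minimal sketch with made-up rewards; the function name is ours):

```python
def discounted_returns(rewards, gamma):
    """Backward recursion R_t = r_t + gamma * R_{t+1} over a finite trajectory."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Three steps of reward 1 with gamma = 0.5:
# R_2 = 1, R_1 = 1 + 0.5*1 = 1.5, R_0 = 1 + 0.5*1.5 = 1.75
assert discounted_returns([1.0, 1.0, 1.0], 0.5) == [1.75, 1.5, 1.0]
```

The Bellman equation restates exactly this recursion in expectation, $R_t = r_t + \gamma R_{t+1}$, which is why the one-step target $r + \gamma Q(s',a')$ suffices for temporal difference learning.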
To learn a policy w.r.t. its value function, it is necessary to introduce the policy gradient theorem \cite{policy_gradient}. In actor critic style DRL algorithms, the policy can be updated by the Deterministic Policy Gradient \cite{dpg,ddpg} \begin{equation} \nabla_\phi J_{\text{DDPG}} ^ \pi (\phi) = \mathbb{E}_{s\sim P_\pi} [\nabla_a Q^\pi(s,a;\theta)|_{a=\pi(s;\phi)} \nabla_\phi \pi(s;\phi)], \label{eq: dpg} \end{equation} where the actor $\pi$ and critic $Q$ are parameterized by $\phi$ and $\theta$, respectively. The Q network is updated by temporal difference learning with a frozen auxiliary target network $Q^\pi(s,a;\theta')$ \begin{equation} \begin{aligned} J^Q_{\text{DDPG}} (\theta) = & \mathbb{E}_{(s,a)\sim \mathcal{B}} \Big[ \frac{1}{2} \Big( Q(s,a; \theta) \\ &- \big(r(s,a) + \gamma \mathbb{E}_{a' \sim \pi(s';\phi')}[Q(s',a';\theta')] \big)\Big)^2 \Big], \end{aligned} \label{eq: learn value function} \end{equation} where $\mathcal{B}$ is a replay buffer storing transition tuples $(s,a, r, s')$. The target network is updated by $\theta' \leftarrow \eta \theta + (1-\eta) \theta'$ at each time step, where $\eta$ is a small constant. In actor critic style algorithms, the critic is usually used to evaluate how good the current policy $\pi$ is; learning a precise critic is thus called policy evaluation. Accordingly, making the policy better is called policy improvement. \cite{td3} introduced the TD3 algorithm, which takes the minimum value of a pair of critics as the target of the Bellman update. \subsection{Soft Actor Critic} \cite{sac} introduced the Soft Actor Critic (SAC) algorithm based on the maximum entropy RL framework \cite{ziebart2010modeling}, which encourages exploration by adding the policy entropy to the optimization objective. The policy optimization objective is \begin{equation} J^\pi_{\text{SAC}}(\phi) = \mathbb{E}_{s \sim \mathcal{B}} [\mathbb{E}_{a\sim \pi(\cdot \mid s, \phi)}[\alpha \log (\pi(a \mid s;\phi)) - Q(s,a; \theta)]].
\label{eq: sac policy objective} \end{equation} The soft Q-function can be obtained by minimizing the soft Bellman residual \begin{equation} \begin{aligned} J^Q_{\text{SAC}}(\theta) = \mathbb{E}_{(s,a)\sim \mathcal{B}} \Big[ \frac{1}{2} \Big( &Q(s,a;\theta) \\ &- \big(r(s,a) + \gamma \mathbb{E}_{s'\sim P}[V(s'; \theta')] \big) \Big)^2\Big], \end{aligned} \label{eq: sac Q objective} \end{equation} where $V(s;\theta') = \mathbb{E}_{a\sim \pi(\cdot | s;\phi)} [Q(s,a;\theta') - \alpha \log \pi(a|s;\phi)]$. The temperature parameter $\alpha$ can be given as a hyper-parameter or learned by minimizing \begin{equation} J^{\alpha}_{\text{SAC}}(\alpha) = \mathbb{E}_{a\sim \pi^*} [- \alpha \log \pi^* (a|s; \alpha, \phi) -\alpha \mathcal{H}], \label{eq: sac alpha objective} \end{equation} where $\mathcal{H}$ is a pre-given target entropy. \section{Minimalist Ensemble Policy Gradient} In this section, we first propose the minimalist ensemble consistent Bellman update and then the minimalist ensemble policy gradient (MEPG) framework. Second, we apply the MEPG framework to model-free off-policy DRL algorithms, integrating multiple value functions into a single model without introducing any loss functions or auxiliary tasks. In principle, our framework can be applied to any modern DRL algorithm; in this paper, we deploy the MEPG framework in the popular DRL method DDPG and in vanilla SAC. Third, we show that the policy evaluation process in the MEPG framework is mathematically equivalent to a deep Gaussian Process. \subsection{Minimalist ensemble consistent Bellman update} It is intractable to integrate many models in ensemble DRL due to limited computation resources. Thus, we consider integrating as many value functions as possible into a single model. To this end, we utilize shared parameters to achieve our purpose and deploy the dropout operator \cite{srivastava2014dropout} in our framework.
Each dropout operator acting on the neural network is equivalent to a sub-model (or sub-network). The complete model without the dropout operator is equivalent to all the sub-networks acting together, i.e., an ensemble of all the models. Thus, the ensemble property holds. A feed-forward operation of a standard neural network for layer $l$ and hidden unit $i$ can be described as \begin{equation} \begin{aligned} z_i^{l+1} = \mathbf{w}_i^{l+1} \mathbf{x}^l + b_i^{l+1}, \quad x_i^{l+1} = f(z_i^{l+1}), \end{aligned} \end{equation} where $f$ is an activation function, and $\mathbf{w}$ and $b$ represent the weights and bias, respectively. We adopt the following dropout feed-forward operation \begin{equation} \begin{aligned} &m_j^l\sim \text{Bernoulli}(1-p), \quad \tilde{\mathbf{x}}^l = \mathbf{m}^l \odot \mathbf{x}^l, \\ &z_i^{l+1} = \frac{1}{1-p} \big(\mathbf{w}_i^{l+1} \tilde{\mathbf{x}}^l + b_i^{l+1}\big), \quad x_i^{l+1} = f(z_i^{l+1}), \end{aligned} \end{equation} where $p$ is the probability that an element is set to zero and $\odot$ represents the Hadamard product. The scale factor $\frac{1}{1-p}$ is added to ensure that the expected output of each unit is the same as without the dropout operator $\mathcal{D}_\mathbf{m}^{l}$. However, if the dropout operator acts directly on the Bellman equation in this form, the algorithm cannot learn a precise Q-function due to the mismatch between the left and right sides of Bellman equation (\ref{eq: bellman equation Q}). As a result, the algorithm fails to learn the value function, i.e., fails to evaluate the policy $\pi$, and the whole learning process fails. To tackle this issue, we introduce the minimalist ensemble consistent Bellman update. Let $\mathcal{D}_\mathbf{m}^l$ be a dropout operator acting on layer $l$ with mask $\mathbf{m} \sim \text{Bernoulli}(1-p)$.
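The masked feed-forward operation above can be sketched in pure Python as follows (a minimal sketch; the function and variable names are ours). Passing a fixed mask reuses the identical sub-network, which is exactly the mechanism the consistent update relies on:

```python
import random

def dropout_forward(x, p, rng, mask=None):
    """Inverted dropout: zero each unit with probability p and
    rescale survivors by 1/(1-p) so the expected output is unchanged.
    Supplying a previously sampled mask re-selects the same sub-network."""
    if mask is None:
        mask = [1.0 if rng.random() >= p else 0.0 for _ in x]
    return [m * v / (1.0 - p) for m, v in zip(mask, x)], mask

rng = random.Random(0)
x = [1.0] * 8
y1, m = dropout_forward(x, 0.5, rng)          # fresh mask = a new sub-network
y2, _ = dropout_forward(x, 0.5, rng, mask=m)  # same mask = the identical sub-network
assert y1 == y2
```

Sampling a new mask on every call yields a different sub-network each time; holding the mask fixed makes the computation deterministic, which is what allows the same operator $\mathcal{D}_\mathbf{m}^l$ to appear on both sides of one Bellman update.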
We define the form of the minimalist ensemble consistent Bellman update as \begin{equation} \begin{aligned} \mathcal{D}_\mathbf{m}^{l} J^Q (\theta) = & \mathbb{E}_{(s,a)\sim \mathcal{B}} \Big[ \frac{1}{2} \Big(\mathcal{D}_\mathbf{m}^lQ(s,a; \theta) - \\ &\big(r(s,a) + \gamma \mathbb{E}_{a' \sim \pi(s';\phi')}[\mathcal{D}_\mathbf{m}^lQ(s',a';\theta')]\big)\Big)^2 \Big], \end{aligned} \label{eq: consistent dropout bellman update} \end{equation} which means we apply the same mask matrix $\mathbf{m}$ to both sides of the Bellman equation. The minimalist ensemble consistent Bellman update thus eliminates the mismatch without destroying the diversity of the value functions, so that good value functions are learned and a good policy can then be derived. The diversity and ensemble properties of the value functions hold due to the dropout operator. \begin{algorithm}[t] \caption{ME-DDPG} \label{algo: me-ddpg} \textbf{Initialize}: actor network $\pi$, critic network $Q$ with parameters $\phi, \theta$, target networks $\phi' \leftarrow \phi$, $\theta' \leftarrow \theta$, and replay buffer $\mathcal{B}$\\ \textbf{Parameters}: $T, p, \eta, d,$ and $t=0$ \begin{algorithmic}[1] \STATE Reset the environment and receive initial state $s$ \WHILE{$t < T$} \STATE Select action with noise $a = \pi(s;\phi) + \epsilon, \epsilon \sim \mathcal{N}(0, \sigma^{2}) $, and receive reward $r$, new state $s'$ \STATE Store transition tuple $(s, a, r, s')$ to $\mathcal{B}$ \STATE Sample mini-batch of $N$ transitions $(s, a, r, s')$ from $\mathcal{B}$ \STATE $\tilde{a} \leftarrow \pi(s';\phi') + \epsilon$, $\epsilon \sim \text{clip}(\mathcal{N}(0, \tilde{\sigma}^2), -c, c)$ \STATE Compute target for the Q-function: \STATE $y \leftarrow r + \gamma \mathcal{D}_\mathbf{m}^{l}Q(s', \tilde{a};\theta')$ \STATE Update $\theta$ by one step gradient descent using: \STATE $\nabla_\theta J(\theta) = N^{-1} \sum \nabla_\theta \frac{1}{2}\Big(y-\mathcal{D}_\mathbf{m}^{l}Q_{\theta}(s,a)\Big)^2$ \IF{$t$ mod $d$} \STATE Update
$\phi$ by one step of gradient ascent using the Deterministic Policy Gradient: \STATE $\nabla_\phi J(\phi) =N^{-1} \sum\nabla_aQ(s, a;\theta)|_{a=\pi(s;\phi)} \nabla_\phi\pi(s;\phi)$ \STATE Update target networks: \STATE $\theta' \leftarrow \eta \theta + (1-\eta) \theta'$, $\phi' \leftarrow \eta \phi + (1-\eta) \phi'$ \ENDIF \STATE $t\leftarrow t+1$ \STATE $s\leftarrow s'$ \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{MEPG framework} The MEPG framework is formulated by using the aforementioned minimalist ensemble consistent Bellman update in the policy evaluation phase and the conventional policy gradient methods in the policy improvement phase. We apply the MEPG framework to the DDPG algorithm and SAC algorithm, called ME-DDPG and ME-SAC respectively. For the policy improvement phase in ME-DDPG, we utilize the original Deterministic Policy Gradient method (Equation (\ref{eq: dpg})) without $\mathcal{D}_\mathbf{m}^l$. For ME-SAC, we keep the original policy update style in SAC. For the policy evaluation phase, we adopt the aforementioned minimalist ensemble consistent Bellman update in Equation (\ref{eq: consistent dropout bellman update}). In ME-DDPG, we take two tricks from TD3 algorithm \cite{td3}, which include target policy smoothing regularization and delayed policy updates. And for ME-SAC, we only use one critic with delayed policy updates. ME-DDPG is summarized in Algorithm \ref{algo: me-ddpg}. ME-SAC is summarized in the technical appendix. \begin{figure*}[t] \centering \subfigure[Ant]{ \includegraphics[width=1.8in]{./bullet_results/AntBulletEnv-v0.pdf} } \hspace{-0.3in} \subfigure[Walker2D]{ \includegraphics[width=1.8in]{./bullet_results/Walker2DBulletEnv-v0.pdf} } \hspace{-0.3in} \subfigure[HalfCheetah]{ \includegraphics[width=1.8in]{./bullet_results/HalfCheetahBulletEnv-v0.pdf} } \hspace{-0.3in} \subfigure[Hopper]{ \includegraphics[width=1.8in]{./bullet_results/HopperBulletEnv-v0.pdf} } \caption{Performance curves on gym PyBullet suite. 
The shaded area represents half a standard derivation of the average evaluation over 10 trials. For visual clarity, we use slight exponential smoothing. Results show that the performance of the MEPG framework (ME-DDPG and ME-SAC) can match or outperform that of the tested algorithms.} \label{fig: performance learning process curve} \end{figure*} \subsection{Theoretical Analysis}\label{sec: theoretical analysis} In this subsection, we provide theoretical insights into the MEPG framework. Let $\hat{Q}^\pi$ be the output of action value function (a neural network), and a loss function $\mathcal{F}(\cdots)$. Let $\mathbf{W}_i$ be the weight matrix of $M_i \times M_{i-1}$ dimensions and the bias $ \mathbf{b}_i $ of layer $i$ of dimension $M_i$ for $i\in\{1,2,\cdots, L\}$. We define Bellman backup operator as \begin{equation} \mathcal{T} Q^{\pi}(s,a) \overset{\text{def}}{=} r(s,a) + \gamma \mathbb{E}_{s' \sim P(\cdot|s,a), a'\sim \pi(s')}[Q^\pi(s',a')]. \label{app eq: bellman backup} \end{equation} Let $Q^\pi_{\text{True}}$ be the fixed point of Bellman equation (\ref{eq: bellman equation Q}) w.r.t. policy $\pi$. In the DRL setting, the critic network, which characters action value, is used to find the fixed point of Bellman equation w.r.t. current policy $\pi$ through multiple Bellman update. This optimization method is a different paradigm from supervised learning. For convenient, we denote the input and output sets for critic as $\mathcal{X}$ and $\mathcal{Q}$ where $\mathcal{X} \subseteq \mathcal{S} \times \mathcal{A}$. For input $x_i$, the output of the action value function is $\hat{Q}_i$. We only discuss policy evaluation problems, i.e., how to approximate the true Q-function given policy $\pi$, thus we omit the $\pi$ in the following statement. For a more detailed analysis on the dropout operator behind deep learning, we would recommend the readers check \cite{gal2016dropout} for detailed analysis on this topic. 
And our analysis on approximating Bellman backup is based on \cite{gal2016dropout}. We often utilize modern optimization techniques like Adam \cite{kingma2015adam}, which introduces the $L_2$ regularization term in the learning process. Thus the objective for policy evaluation in conventional DRL can be formulated as \begin{equation} \begin{aligned} \mathcal{L}_{\text{critic}} &= \mathbb{E}_{Q}\big[ \mathcal{F}(\mathcal{T}\hat{Q}, \hat{Q}) \big] + \lambda \sum_{i=1}^L \Big(\| \mathbf{W}_i\|_2^2 + \| \mathbf{b}_i\|_2^2\Big) \\ & \approx \frac{1}{N} \sum_{i=1}^N \mathcal{F}(\mathcal{T}\hat{Q}_i, \hat{Q}_i) + \lambda \sum_{i=1}^L \Big(\| \mathbf{W}_i\|_2^2 + \| \mathbf{b}_i\|_2^2\Big). \end{aligned} \label{app eq: loss function for critic} \end{equation} By minimizing Equation (\ref{app eq: loss function for critic}), we find the fixed point of Bellman equation, and then solve the policy evaluation problem. With the application of the dropout operator, the units of the neural network, are set to zero with probability $p$. Next we consider a deep Gaussian Process (GP) \cite{damianou2013deep}. Now we are given a covariance function \begin{equation} \mathbf{C}(\mathbf{x},\mathbf{y}) = \int\limits_{\mathbf{w},b} d\mathbf{w}db \: p(\mathbf{w}) p(b) f(\mathbf{w}^\top \mathbf{x} + b) f(\mathbf{w}^\top \mathbf{y}+b), \label{app eq: covariance} \end{equation} where $f$ is element-wise non-linear function. Equation (\ref{app eq: covariance}) is a valid covariance function. Assume layer $i$ have a parameter matrix $ \mathsf{W}_i $ with dimension $\mathsf{M}_i \times \mathsf{M}_{i-1}$ and we include all parameters in a set $\mathsf{\omega} = \{\mathsf{W}\}_{i=1}^L$. Let $p(\mathbf{w})$ in Equation (\ref{app eq: covariance}) is the distribution of each row of $ \mathsf{W}_i $. We assume that the dimension of the vector $\mathbf{m}_i$ for each GP layer is $\mathsf{M}_i$. 
Given some precision parameter $\varepsilon > 0$, the predictive probability of the Deep GP model is \begin{equation} \begin{aligned} &p(Q \mid \mathbf{x}, \mathcal{X,Q}) = \int \limits_{\omega}d\omega \: p(Q \mid x, \omega) p(\omega \mid \mathcal{X,Q}) \\ &p(Q \mid \mathbf{x}, \omega) = \mathcal{N} (Q; \hat{Q}(x; \omega), \varepsilon \mathbf{I}_D) \\ &\hat{Q}(\mathbf{x}; \omega) = \sqrt{\frac{1}{\mathsf{M}_L}} \mathsf{W}_L f\Big(\cdots \sqrt{\frac{1}{\mathsf{M}_1}} \mathsf{W}_2 f (\mathsf{W}_1 \mathbf{x} +\mathbf{m}_1) \cdots \Big) \end{aligned} \end{equation} We utilize $ q(\omega) $ to approximate the intractable posterior $p(\omega; \mathcal{X,Q})$. Note that $q(\omega)$ is a distribution over matrices whose columns are randomly set to zero. We define $q(\omega)$ as \begin{equation} \begin{aligned} \hat{\mathsf{G}}_i &= \mathsf{G}_i \odot \text{diag} ( [z_{i,j}]_{j=1}^{\mathsf{M}_i}) \\ z_{i,j} &\sim \text{Bernoulli} (1 - p_i), \: \text{for} \: i \in \{1, \cdots, L\}, \: j\in\{1, \cdots, \mathsf{M}_{i-1}\} \end{aligned} \end{equation} where $\mathsf{G}_i$ is a matrix as variational parameters. The variable $z_{i,j}=0$ means unit $j$ in layer $i-1$ being zero as an input to layer $i$, which recovers the dropout operator in neural networks. To learn the distribution $q(\omega)$, we minimize the KL divergence between $q(\omega) $ and $ p(\omega) $ of the full Deep GP \begin{equation} J_{\text{GP}} = - \int \limits_{\omega} d\omega \, q(\omega) \log p(\mathcal{Q} \mid \mathcal{X}, \omega) + \text{D}_{\text{KL}} (q(\omega) \| p(\omega)). \label{app eq: deep gausian loss function} \end{equation} The first term in Equation (\ref{app eq: deep gausian loss function}) can be approximated by Monte Carlo method. 
We can approximate the second term in Equation (\ref{app eq: deep gausian loss function}), and obtain $\sum_{i=1}^L (\frac{p_i l^2}{2} \| \mathsf{G}_i \|_2^2 + \frac{l^2}{2} \| \mathbf{m_i}\|_2^2 ) $ with prior length scale $l$ (see section 4.2 in \cite{gal1506dropout_appendix}). Given the precision parameter $\varepsilon > 0$, the objective of deep GP can be formulated as \begin{equation} \begin{aligned} \mathcal{L}_{\text{GP}} \propto &\frac{1}{N\varepsilon } \sum_{i=1}^N - \log p(Q_i \mid \mathbf{x}_i ; \hat{\omega}) \\ &+ \frac{1}{N \varepsilon}\sum_{i=1}^L \Bigg( \frac{p_i l^2}{2} \|\mathsf{G}_i \|_2^2 + \frac{l^2}{2} \|\mathbf{m}_i \|_2^2 \Bigg ). \end{aligned} \end{equation} We can recover Equation (\ref{app eq: loss function for critic}) by setting $ \mathcal{F}(\mathcal{T}\hat{Q}, \hat{Q}) = - \log p(Q_i \mid x_i ; \hat{\omega}) $. Note that the sampled $\hat{\omega}$ leads to the realization of the Bernoulli distribution $z_{i,j}$, which is equivalent to the binary variable $z_{i,j}$ in the dropout operator. The above analysis shows that the policy evaluation process in the MEPG framework with dropout operator can be viewed as a deep Gaussian Process. The uncertainty introduced by dropout operator is still explicitly implied in the model. That means the uncertainty arises from the inherent properties of the model. Thus, the diversity and ensemble properties of value functions hold due to the dropout operator. \begin{table*}[h] \caption{The average of top five maximum average returns over five trials of one million time steps for various algorithms. The maximum value for each task is bolded. 
"InvPen", "InvDou" and "InvPenSwingup" are shorthand for "InvertedPendlum", "InvertedDoublePendlum" and "InvertedPendulumSwingup" respectively.} \label{table: performance table 4 results} \renewcommand\tabcolsep{3.0pt} \begin{center} \begin{tabular}{ccccccccc} \toprule \textbf{Algorithm}&\textbf{Ant}&\textbf{HalfCheetah}&\textbf{Hopper}&\textbf{Walker2D}&\textbf{InvPen}&\textbf{InvDouPen}&\textbf{InvPenSwingup}&\textbf{Reacher}\\ \midrule ME-SAC &\textbf{2906.98} &\textbf{3113.21} &2532.98 &\textbf{1870.53} &1000.0 &9359.96 &\textbf{893.71} &24.51\\ ME-DDPG &2841.04 &2582.32 &\textbf{2546.56} &1770.11 &1000.0 &\textbf{9359.98} &893.02 &24.34\\ SUNRISE &1234.89 &2017.94 &2386.46 &1269.78 &1000.0 &9359.93 &893.2 &\textbf{27.44}\\ REDQ &2155.07 &1734.5 &2215.06 &1085.09 &1000.0 &9359.16 &891.37 &26.02\\ TD3 &2758.38 &2360.2 &2190.12 &1868.65 &1000.0 &9359.66 &890.97 &24.99\\ SAC &2009.36 &2567.7 &2317.64 &1776.34 &1000.0 &9358.78 &892.36 &24.13\\ DDPG &2446.75 &2501.24 &1918.42 &1142.71 &1000.0 &9358.94 &546.65 &24.78\\ \bottomrule \end{tabular} \end{center} \end{table*} \section{Experimental Results} In this subsection, we answer the follow questions: \begin{itemize} \item How good MEPG framework is compared with SOTA model-free and ensemble DRL algorithms? \item What is the contribution of each component to the algorithm? \item How does the training time cost and number of parameters of our algorithm compared to other tested algorithms? \item How sensitive is MEPG to hyper-parameters? \end{itemize} To evaluate our framework MEPG, we conduct experiments on open source PyBullet suite \cite{bullet3}, interfaced through Gym simulator \cite{gym}. We highlight the fact that the PyBullet suite is usually considered harder to train than MuJoCo suite \cite{raffinSmoothExplorationRobotic2021}. 
Given the recent discussion in DRL reproducibility \cite{drl_matters,andrychowicz2021matters}, we strictly control all random seeds, and our results are reported over 5 trials unless otherwise stated, with the same setting and fair evaluation metrics. More experimental results and more details can be found in the technical appendix. \subsection{Evaluation} To evaluate the MEPG framework, we measure ME-DDPG and ME-SAC performance on PyBullet tasks compared with the state-of-the-art model-free algorithm and recently proposed ensemble DRL algorithms such as SUNRISE \cite{sunrise} and REDQ \cite{redq}. For TD3\footnote{Code: \url{https://github.com/sfujim/TD3}}, SUNRISE\footnote{Code: \url{https://github.com/pokaxpoka/sunrise}} and REDQ\footnote{Code: \url{https://github.com/watchernyu/REDQ}}, we use the authors' implementation with default hyper-parameters. We keep the same hyper-parameters for DDPG and SAC. For a fair comparison, we replace one-time policy update every twenty times critic optimization with one-time policy update every two-time critic optimization in REDQ. We run each task for one million time steps and perform one gradient step after each interaction step. Every $5,000$ time steps, we execute an evaluation step over 10 episodes without any exploration operation in every algorithm. Our performance comparison results are presented in Table \ref{table: performance table 4 results} and the learning curves are in Figure \ref{fig: performance learning process curve}. The results show that our proposed algorithm ME-DDPG and ME-SAC are just as good as, if not better than, state-of-the-of methods without any auxiliary tasks. For the Pendulum family of environments, all algorithms are equally good. Our framework is best in terms of learning speed and final performance for the Ant, Walker2D and Hopper environments. We highlight that our proposed algorithms ME-DDPG and ME-SAC achieve or surpass the best performance while consuming fewer computational resources. 
More experimental results and details are in the technical appendix. \subsection{Ablation Studies}\label{sec: ablation study} We conduct ablation studies to understand the contribution of each individual component. We show the ablation results for ME-DDPG and ME-SAC in Table \ref{table: ablation for me-ddpg 4 env}, where we compare the performance of removing or adding specific component from ME-DDPG or ME-SAC. Firstly, we investigate three key components, i.e., dropout (DO), Target Policy Smoothing (TPS), and Delay Update (DU) in ME-DDPG. In practice, we do not utilize Clipped Double Q-learning (CDQ) from TD3. The results show the effectiveness of each component varies in different tasks. CDQ usually has some help that is not particularly obvious but is not consistent as there are some environments that overestimation may encourage exploration and thus lead to better performance. DO, DU and TPS help a lot. Secondly, we perform a similar ablation analysis for the ME-SAC algorithm, where "FIX-ENT" means we adopt a fixed entropy coefficient $\alpha$. The ablation results of the ME-DDPG algorithm for CDQ, DU, and DO are also applicable to the ME-SAC method. Here we find that automatic adjustment of entropy is extremely helpful. In addtion, for both ME-DDPG and ME-SAC, the minimalist ensemble consistent Bellman update helps a lot and brings performance improvements. Additional experimental results and learning curves can be found in the technical appendix. \begin{table}[htbp] \caption{The average of the top five maximum average returns over five trials of one million time steps for ablation studies. $\pm$ means adding or removing corresponding component. The maximum value for each task is bolded. 
"MED", "MES" and "HCheetah", "Walker" are shorthand for "ME-DDPG", "ME-SAC" "HalfCheetah", and "Walker2D" respectively.} \label{table: ablation for me-ddpg 4 env} \renewcommand\tabcolsep{3.0pt} \begin{center} \begin{tabular}{ccccc} \toprule \textbf{Algorithm}&\textbf{Ant}&\textbf{HCheetah}&\textbf{Hopper}&\textbf{Walker}\\ \midrule MED &2841.04 &\textbf{2582.32} &2546.56 &1770.11\\ ME+CDQ &\textbf{2892.90} &2003.77 &\textbf{2584.1} &\textbf{2094.6}\\ MED-DO &2001.56 &2522.97 &2326.2 &1774.61\\ MED-DU &2610.23 &2575.64 &2330.16 &1784.34\\ MED-TPS &2623.13 &2605.99 &2328.32 &1675.31\\ DDPG &2446.75 &2501.24 &1918.42 &1142.71\\ \midrule MES &2906.98 &\textbf{3113.21}&\textbf{2532.98} &1870.53\\ MES+CQD &\textbf{3039.11} &2209.46 &2408.86 &\textbf{2060.8}\\ MES-DU &2605.52 &2718.17 &2211.27 &1631.99\\ MES+FIXENT &818.03 &733.59 &1632.85 &843.01\\ SAC &2009.36 &2567.7 &2317.64 &1776.34\\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Run time and number of parameters} We evaluate the run time of one million time steps of training for each tested RL algorithm. In addition, we also quantify the number of parameters of neural networks corresponding to the tested algorithms in different environments. For a fair comparison, we keep the same update steps in REDQ. And we cancel the evaluation process in this test. We show the run time test results in Table \ref{table: run time} and parameters quantification in Table \ref{table: number of parameters}. The MEPG framework consumes the shortest time among the same skeleton algorithms. Unsurprisingly, We find our algorithm ME-DDPG and ME-SAC act favorably compared to other algorithms we tested in terms of wall-clock training time. Our MEPG framework not only runs in one-third to one-seventh the time of ensemble learning methods, but even less than the training time of model-free DRL algorithms. All run time experiments are conducted on a single GeForce GTX 1070 GPU and an Intel Core i7-8700K CPU at 2.4GHZ. 
The number of parameters of the MEPG framework is between 14\% and 27\% of the number of parameters of the tested ensemble algorithms, but our algorithms perform better than the tested ones. \begin{table}[htbp] \caption{Run time comparison of training each RL algorithm.} \label{table: run time} \renewcommand\tabcolsep{3.0pt} \begin{center} \begin{tabular}{ccccc} \toprule \textbf{Algorithm}&\textbf{Ant}&\textbf{HalfCheetah}&\textbf{Hopper}&\textbf{Walker2D}\\ \midrule ME-DDPG &47m &44m &42m &45m\\ TD3 &57m &55m &52m &55m\\ DDPG &1h 1m &58m &56m &58m\\ \midrule ME-SAC &1h 14m &1h 13m &1h 10m &1h 12m\\ SAC &1h 17m &1h 15m &1h 12m &1h 15m\\ REDQ &3h 43m &3h 41m &3h 29m &3h 35m\\ SUNRISE &7h 24m &7h 15m &7h 10m &7h 16m\\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[htbp] \caption{The number of parameters is given in millions. For all tested environments, we utilize the same network architectures and the same hyper-parameters. The amount of parameters differs because different environments have various state and action dimensions, which results in different input and output dimensions of the neural network.} \label{table: number of parameters} \centering \renewcommand\tabcolsep{3.0pt} \begin{tabular}{ccccc} \toprule \textbf{Algorithm}&\textbf{Ant}&\textbf{HalfCheetah}&\textbf{Hopper}&\textbf{Walker2D}\\ \midrule ME-DDPG &0.302M &0.297M &0.283M &0.293M\\ ME-SAC &0.226M &0.223M &0.212M &0.220M\\ TD3 &0.453M &0.446M &0.425M &0.440M\\ SAC &0.377M &0.372M &0.354M &0.367M\\ REDQ &1.586M &1.564M &1.489M &1.543M\\ SUNRISE &1.132M &1.117M &1.063M &1.101M\\ \bottomrule \end{tabular} \end{table} \subsection{Hyper-parameter Sensitivity} The MEPG framework only introduces one hyper-parameter $p$, which means the probability of setting a unit of neural network to zero. We take eleven $p$ values in interval $[0, 1]$. 
For visual simplicity, we normalize the data by $\text{Ret}^{\text{env}}_p = \text{Ret}^{\text{env}}_p / \max_p \{\text{Ret}^{\text{env}}_p \}$, where $\text{Ret}^{\text{env}}_p$ means the average of top five maximum average returns in a run over five trials of one million time steps for various p-value on \texttt{env} environment. We show the results in Figure \ref{fig: sensitive heat map}. Each cell represents the result of different p-value on various environments. The experimental results show that even large p-values, i.e., implicitly integrating enough models, do not cause the learning process to fail. Overall, the difference in performance between the different p-values is not very significant. For the Pendulum family of environments, the performance of the MEPG framework induced by various $p$ values are equally good. Relatively speaking, smaller p-value give better performance. Our MEPG framework is not extremely hyper-parameter sensitive. Therefore we set $p=0.1$ as the default hyper-parameter setting for the ME-DDPG and ME-SAC algorithms. \begin{figure}[t] \centering \subfigure[ME-DDPG]{ \includegraphics[width=1.7in]{./sensitivity/heat_map/me_ddpg_heat_map.pdf} } \hspace{-0.3in} \subfigure[ME-SAC]{ \includegraphics[width=1.7in]{./sensitivity/heat_map/me_sac_heat_map.pdf} } \caption{Performance of ME-SAC and ME-DDPG algorithms given different p-values in different environments. Each cell represents the average of the top five maximum average returns in a run over five trials of one million time steps for various p-value. For most environments and p-values, both algorithms are able to learn successfully. Besides, the difference in performance between the different p-values is not very significant. Relatively speaking, smaller p-values give better performance. The performance of both algorithms perform best when $p=0.1$. 
} \label{fig: sensitive heat map} \end{figure} \section{Conclusion} Ensemble RL methods introduce multiple value and policy functions, aiming to mitigate instability in Q-learning and to learn a robust policy. As a result, it raises with it the problem of large mount of resource consumption issue and introduce more hyper-parameters. Before applying the ensemble DRL algorithms to the real world, we need to solve the aforementioned tricky problems. Therefore, in this paper, we propose a novel ensemble reinforcement learning framework, called \underline{M}inimalist \underline{E}nsemble \underline{P}olicy \underline{G}radient (MEPG). The ensemble property of the MEPG framework is induced by minimalist ensemble consistent Bellman update that utilizes the dropout operator. Then we show that the policy evaluation in MEPG framework is mathematically equivalent to a deep Gaussian Process. Next we verify the effectiveness of MEPG in the PyBullet control suite, which is generally considered harder than the commonly used MuJoCo suite. Moreover, the experimental results show that the performance of ensemble DRL algorithms can be easily outperformed by MEPG framework. Besides performance, the MEPG framework have tremendous advantages in terms of run time and number of parameters compared to the tested model-free and ensemble DRL algorithms. The core technique we adopt is the minimalist ensemble consistent Bellman update, wherein the dropout operator is a popular method in deep learning. For the reinforcement learning community, however, the neural network is generally considered as a function approximator under continuous control tasks. As we all know, the neural network is a complex and subtle tool, and reinforcement learning researcher may concentrate on the role of neural networks in DRL. Thus, we highlight that the potential of neural networks may not have been fully exploited, as exemplified by our MEPG framework. 
\section{Acknowledgments} We thank Jing Cui for her helpful comments. \bibliographystyle{unsrt}
2,877,628,089,973
arxiv
\section{Introduction} The dualization in Boolean lattices is a central problem in algorithmic enumeration as it is equivalent to the enumeration of the minimal transversals of a hypergraph, the minimal dominating sets of a graph, and to many other generation problems \cite{eiter1995identifying,kante2014enumeration}. It~is also a problem of practical interest in database theory, logic, artificial intelligence and pattern mining \cite{kavvadias1993horn,eiter1995identifying,gunopulos1997data,eiter2003new,elbassioni2002algorithm,nourine2012extending}. To~date, it is still open whether this problem can be solved in output-polynomial time. We say that an enumeration algorithm is running in {\em output-polynomial time} if its running time is bounded by a polynomial depending on the sizes of both the input and output data. If the running times between two consecutive outputs is bounded by a polynomial depending on the size of the input, then we say that the algorithm is running with {\em polynomial delay}. We refer the reader to \cite{johnson1988generating,creignou2019complexity,strozecki2019survey} for more details on the complexity of enumeration algorithms. As of now, the best known algorithm for the dualization in Boolean lattices is due to Fredman and Khachiyan and runs in output quasi-polynomial time~\cite{fredman1996complexity}. When generalized to arbitrary lattices, the problem is of practical interest in lattice-oriented machine learning through hypothesis generation~\cite{kuznetsov2004complexity,babin2017dualization}, and in pattern mining~\cite{nourine2012extending}. It~was recently proved in \cite{babin2017dualization} that the dualization in this context is impossible in output-polynomial time, unless {\sf P=NP}. In~\cite{defrain2019dualization}, it was shown that this result holds even when the premises in the implicational base (coding the lattice) are of size at most two. 
In the case of premises of size one---when the lattice is distributive---the problem is still open. The best known algorithm is due to Babin and Kuznetsov and runs in output sub-exponential time \cite{babin2017dualization}. Output quasi-polynomial time algorithms are known for several subclasses, including distributive lattices coded by products of chains \cite{elbassioni2009algorithms}, or those coded by the ideals of an interval order \cite{defrain2019dualization}. In this paper, we propose equivalent formulations for the dualization in distributive lattices, in terms of graphs, hypergraphs, and posets. In the new framework, a poset on vertices is given together with the input (hyper)graph. Then, the task is of enumerating minimal ideals of the poset with the desired property, i.e., transversality or domination. We show that the obtained problems are equivalent to the dualization in distributive lattices, even when considering various combined restrictions on graph classes and poset types, including bipartite, split, and co-bipartite graphs, and variants of neighborhood inclusion posets; see Theorems~\ref{thm:maintrans} and~\ref{thm:maindom}. Moreover, we believe that these equivalent problems may be simpler to attack using graph structure. For combined restrictions on graph classes and poset types that are not considered in Theorems~\ref{thm:maintrans} and~\ref{thm:maindom}, we show that the problem gets tractable relying on existing algorithms from the literature; see Theorems~\ref{thm:split} and~\ref{thm:bipartite}. A summary of these results is given in Figure~\ref{fig:sum}. The rest of the paper is organized as follows. In Section~\ref{sec:preliminaries} we introduce necessary concepts and definitions. In Sections~\ref{sec:transideal} and~\ref{sec:domideal}, we generalize the two problems of enumerating minimal transversals and minimal dominating sets to the dualization in distributive lattices. 
In Section~\ref{sec:tractable}, we exhibit tractable cases of the problem. We discuss future work in Section~\ref{sec:conclusion}. \section{Preliminaries}\label{sec:preliminaries} We refer to~\cite{diestel2005graph} for graph terminology not defined below; all graphs considered in this paper are undirected, finite and simple. A {\em graph} $G$ is a pair $(V(G),E(G))$ where $V(G)$ is the set of {\em vertices} and $E(G)\subseteq \{\{x,y\} \mid x,y\in V(G), x\neq y\}$ is the set of {\em edges}. Edges are usually denoted by $xy$ (or $yx$) instead of $\{x,y\}$. A {\em clique} in a graph $G$ is a set of vertices $K$ such that every two vertices in $K$ are adjacent. An {\em independent set} in a graph $G$ is a set of vertices $S$ such that no two vertices in $S$ are adjacent. The {\em subgraph} of $G$ induced by $X\subseteq V(G)$, denoted by $G[X]$, is the graph $(X,E(G)\cap \{\{x,y\} \mid x,y\in X,\ x\neq y\})$; $G-X$ is the graph $G[V(G)\setminus X]$. If $xy$ is an edge, $G-xy$ denotes the graph $(V(G),E(G)\setminus \{x,y\})$. We note $N(x)$ the set of {\em neighbors} of $x$ defined by $N(x)=\{y\in V(G)\mid xy\in E(G)\}$. We note $N[x]$ the set of {\em closed neighbors} of $x$ defined by $N[x]= N(x)\cup\{x\}$. If it is not clear from the context, we add the subscript $G$ to denote the neighborhood in $G$, as in $N_G[x]$. Two vertices $x,y$ are called {\em twin} if $N[x]=N[y]$, and {\em false twin} if $N(x)=N(y)$. For a given set $X\subseteq V(G)$, we respectively denote by $N[X]$ and $N(X)$ the two sets defined by $N[X]=\bigcup_{x\in X} N[x]$ and $N(X)=N[X]\setminus X$. Let $G$ be a graph. We say that $G$ is {\em bipartite} (resp.~{\em co-bipartite}) if $V(G)$ can be partitioned into two independent sets (resp.~two cliques). If $V(G)$ can be partitioned into one independent set and one clique, then $G$ is called {\em split}. Let $D,X\subseteq V(G)$ be two subsets of vertices of $G$. We say that $D$ \emph{dominates} $X$ if $X\subseteq N[D]$. 
It is (inclusion-wise) minimal if $X\not\subseteq N[D\setminus \{x\}]$ for any $x\in D$. A (minimal) \emph{dominating set} of $G$ is a (minimal) dominating set of $V(G)$. The set of all minimal dominating sets of $G$ is denoted by $\D(G)$, and the problem of enumerating $\D(G)$ given $G$ is denoted by \textsc{Dom-Enum}{}. The set of all minimal dominating sets of a given subset $X$ of vertices of $G$ is denoted by $\D_G(X)$. Let $x$ be a vertex of $D$. We say that $x$ has {\em private neighbor} $y$ w.r.t.~$D$ in $G$ if $y\in N[D]$ and $y\not\in N[D\setminus \{x\}]$. The set of private neighbors of $x\in D$ in $G$ is denoted by $\priv(D,x)$. It is easy to see that $D$ is a minimal dominating set of $G$ if and only if $D$ dominates $G$ and $\priv(D,x)\neq \emptyset$ for every $x\in D$. Also, note that a set in $\D_G(X)$ may contain vertices in $G-X$ as long as these vertices have private neighbors in~$X$. We refer to~\cite{berge1984hypergraphs} for hypergraph terminology not defined below. A {\em hypergraph} $\H$ is a pair $(V(\H),\E(\H))$ where $V(\H)$ is the set of {\em vertices} (or {\em groundset}) and $\E(\H)$ is a set of non-empty subsets of $V(\H)$ called {\em edges} (or {\em hyperedges}). In this paper, and as is custom, we will often describe a hypergraph by its set of edges only, and will denote $\e\in \H$ in place of $\e\in \E(\H)$. If $x$ is a vertex of $\H$, we denote by $\E_x$ the set of edges incident to $x$ defined by $\E_x=\{\e\in \E(\H) \mid x\in \e\}$. A {\em transversal} in a hypergraph $\H$ is a set of vertices $T$ that intersects every edge of $\H$. It is minimal if it does not contain any transversal as a proper subset. The set of all minimal transversals of $\H$ is denoted by $Tr(\H)$, and the problem of enumerating $Tr(\H)$ given $\H$ is denoted by \textsc{Trans-Enum}{}. A hypergraph $\H$ is called {\em Sperner} if $\e\not\subseteq \e'$ for any two distinct hyperedges $\e,\e'\in \H$. 
It is well known that hypergraphs can be considered Sperner when dealing with \textsc{Trans-Enum}{}. In the following, we denote by $\N(G)$ the Sperner hypergraph of closed neighborhoods of $G$ defined by $\N(G)=\Min_\subseteq\{N[x]\mid x\in V(G)\}$. It is not hard to see that \textsc{Dom-Enum}{} is a particular case of \textsc{Trans-Enum}{}, where the minimal dominating sets of $G$ are exactly the minimal transversals of $\N(G)$. Recently in \cite{kante2014enumeration}, it was shown that the two problems are in fact equivalent, even when restricted to co-bipartite graphs. The result in \cite{kante2014enumeration} is based on the following construction. For~any hypergraph $\H$, the {\em bipartite incidence graph} of $\H$ is the graph $I(\H)$ with bipartition $X=V(\H)$ and $Y=\{y_\e \mid \e\in \E(\H)\}$, and where there is an edge between $x\in X$ and $y_\e\in Y$ if $x$ belongs to~$\e$ in $\H$. The construction of a bipartite incidence graph is given in Figure~\ref{fig:inc-bipartite}. \begin{figure}[t] \center \includegraphics[scale=1.2]{fig-inc-bipartite.pdf} \caption{The bipartite incidence graph $I(\H)$ of bipartition $X=V(\H)$ and $Y=\{y_\e \mid \e\in \H\}$ for the hypergraph $\H=\{\e_1,\e_2,\e_3,\e_4\}$ where $\e_1=\{x_1,x_2,x_5\}$, $\e_2=\{x_1,x_2,x_3\}$, $\e_3=\{x_3,x_4,x_5\}$ and $\e_4=\{x_5,x_6\}$. Then $xy_e\in E(I(\H))$ if and only if $x\in e$.}\label{fig:inc-bipartite} \end{figure} A {\em partial order} on a set $X$ (or {\em poset}) is a binary relation $\leq$ on $X$ which is reflexive, anti-symmetric and transitive, denoted by $P=(X,\leq)$. Two elements $x,y$ of $X$ are said to be {\em comparable} if $x \leq y$ or $y \leq x$, otherwise they are said to be {\em incomparable}. If $x<y$ and there is no element $z$ such that $x<z<y$ then we say that $y$ {\em covers} $x$. 
Posets are represented by their \emph{Hasse diagram} in which each element is a vertex in the plane, and where there is a line segment or curve that goes upward from $x$ to $y$ whenever $y$ covers $x$. See Figure~\ref{fig:dual} for an example. A subset of a poset in which every pair of elements is comparable is called a {\em chain}. A subset of a poset in which no two distinct elements are comparable is called an {\em antichain}. A poset is an {\em antichain poset} (resp.~\emph{total order}) if the set of its elements is an antichain (resp.~chain). The {\em poset induced by} $S\subseteq X$, denoted $P[S]$, is the suborder restricted to the elements of $S$ only; $P-S$ is the poset $P[X\setminus S]$. A~set $I\subseteq X$ is an {\em ideal} of $P$ if $x\in I$ and $y\leq x$ imply $y\in I$. If $x\in I$ and $x\leq y$ imply $y\in I$, then $I$ is called a {\em filter} of $P$. Note that the complement of an ideal is a filter, and vice versa. With~every $x\in P$ we associate the {\em principal ideal of $x$} (or simply {\em ideal of $x$}) denoted by $\downarrow\,x$ and defined by $\downarrow x=\{y\in X \mid y\leq x\}$. The {\em principal filter of} $x\in X$ is the dual $\uparrow x=\{y\in X \mid x\leq y\}$. The set of all subsets of $X$ is denoted by $2^X$, and the set of all ideals of $P$ by~$\mathcal{I}(P)$. Clearly, $\I(P)\subseteq 2^X$ and $\I(P)=2^X$ whenever $P$ is an antichain poset. If $S$ is a subset of $X$, we respectively denote by $\downarrow S$ and $\uparrow S$ the two sets defined by $\downarrow S=\bigcup_{x\in S} \downarrow x$ and $\uparrow S=\bigcup_{x\in S} \uparrow x$. We~respectively denote by $\Min(S)$ and $\Max(S)$ the sets of minimal and maximal elements of $S$ w.r.t.~$\leq$. The next definition is central in this paper. \begin{definition} Let $P=(X,\leq)$ be a partial order and $B^+$, $B^-$ be two antichains of $P$. We~say that $B^+$ and $B^-$ are {\em dual} in $P$ whenever $\downarrow B^+\,\cup \uparrow B^-=X$ and $\downarrow B^+\,\cap \uparrow B^-=\emptyset$. 
\end{definition} Note that deciding whether two antichains $B^+$ and $B^-$ of $P$ are dual can be done in polynomial time in the size of $P$ by checking whether ${B^-=\Min(P-\!\downarrow B^+)}$, or equivalently whether ${B^+=\Max(P-\!\uparrow B^-)}$. The notations $B^+$ and $B^-$ in fact come from these equalities. However, the task becomes difficult when the poset is not fully given, but only an implicit coding---of possibly logarithmic size in the size of $P$---is given. This is usually the case when considering dualization problems in lattices. A {\em lattice} is a poset in which every two elements have a {\em supremum} (also called a {\em join}) and an {\em infimum} (also called a {\em meet}); see \cite{davey2002introduction,gratzer2011lattice}. In this paper however, only the next two characterizations from \cite{birkhoff1940lattice} will suffice. We denote by {\em Boolean lattice} any poset isomorphic to $(2^X,\subseteq)$ for some set $X$; such a lattice is also called a {\em hypercube}. We denote by {\em distributive lattice} any poset isomorphic to $(\I(P),\subseteq)$ for some partially ordered set $P=(X,\leq)$. Then, $X$ and $P$ are called {\em implicit codings} of the lattice and we denote by $\L(X)$ and $\L(P)$ the two lattices coded by $X$ and $P$. Clearly, every Boolean lattice is a distributive lattice where $P$ is an antichain poset, as $\I(P)=2^X$ for such $P$. In~fact, it can be easily seen that each strict comparability $x<y$ in $P$ removes from $(2^X,\subseteq)$ the Boolean lattice given by the interval $[\{y\},X\setminus \{x\}]$, i.e., the sets containing $y$ but not $x$. Finally, observe that $\L(P)$ may be of exponential size in the size of $P$: this is in particular the case when the lattice is Boolean, i.e., when $P$ is an antichain poset. An example of a distributive lattice coded by the ideals of a poset is given in Figure~\ref{fig:dual}. 
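When the poset is given explicitly, the definition can be tested directly. The following Python sketch (ours, illustrative; the poset is represented by a hypothetical comparison predicate \texttt{leq}) checks the two defining conditions $\downarrow B^+\,\cup \uparrow B^-=X$ and $\downarrow B^+\,\cap \uparrow B^-=\emptyset$:

```python
def are_dual(elements, leq, b_plus, b_minus):
    """Test whether the antichains b_plus and b_minus are dual in the
    poset (elements, leq): their down-set and up-set must partition X."""
    down = {x for x in elements if any(leq(x, b) for b in b_plus)}
    up = {x for x in elements if any(leq(b, x) for b in b_minus)}
    return down | up == set(elements) and not (down & up)

# Example in the Boolean lattice (2^{1,2}, subset): the down-set of {1}
# and the up-set of {2} partition the four subsets.
X = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
subset = lambda a, b: a <= b
print(are_dual(X, subset, [frozenset({1})], [frozenset({2})]))  # True
```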
\begin{figure} \center \includegraphics[scale=1.2]{fig-dual.pdf} \caption{A poset $P=(X,\leq)$ (left) that codes the lattice $\L(P)=(\I(P),\subseteq)$ (right), and the border (curved line) formed by the two dual antichains $\B^+=\{\{x_1,x_2\}, \{x_2,x_4\}\}$ and $\B^-=\{\{x_1,x_2,x_3\}, \{x_1,x_2,x_4\}\}$ of $\L(P)$. For better readability, ideals are denoted by the indexes of their elements in the lattice, i.e., $123$ stands for $\{x_1,x_2,x_3\}$.} \label{fig:dual} \end{figure} In this paper, we are concerned with the following decision problem and one of its two generation versions. \begin{decproblem} \problemtitle{Dualization in Distributive Lattices given by the Ideals of a Poset (\textsc{Dual}{})} \probleminput{A poset $P=(X,\leq)$ and two antichains $\B^+,\B^-$ of $\L(P)$.} \problemquestion{Are $\B^+$ and $\B^-$ dual in $\L(P)$?} \end{decproblem} \begin{problemgen} \problemtitle{Generation version of \textsc{Dual}{} (\textsc{Dual-Enum}{})} \probleminput{A poset $P=(X,\leq)$ and an antichain $\B^+$ of $\L(P)$.} \problemquestion{The dual antichain $\B^-$ of $\B^+$ in $\L(P)$.} \end{problemgen} We stress the fact that the lattice $\L(P)$ is not given in any of the two problems defined above. Only $P$ is given, which is a crucial point. Hence, \textsc{Dual-Enum}{} can be reformulated without any mention of the lattice, namely as the enumeration of all inclusion-wise minimal ideals of $P$ that are not a subset of any ideal in $\B^+$, i.e., as the enumeration of the set \[ \B^-=\Min_\subseteq\{I\in \I(P) \mid I\not\subseteq B~\text{for any}~B\in \B^+\}. \] Then, computing a first solution to this problem is easy, as we start from $I=X$ as an ideal, and remove its maximal elements until it is a minimal ideal such that $I\not\subseteq B$ for any $B\in\B^+$. However, it is still open whether the problem can be solved in output quasi-polynomial time. 
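The greedy computation of a first solution described above can be sketched as follows (Python, ours and purely illustrative; \texttt{leq} is a hypothetical comparison predicate for $P$, the members of $\B^+$ are given as sets, and it is assumed that $X$ itself is contained in no member of $\B^+$):

```python
def first_minimal_ideal(X, leq, b_plus):
    """Greedily shrink I = X by deleting maximal elements, as long as
    the resulting ideal is still contained in no member of b_plus."""
    I = set(X)
    while True:
        maximal = [v for v in I
                   if not any(leq(v, w) for w in I if w != v)]
        for x in maximal:
            if not any(I - {x} <= B for B in b_plus):
                I.remove(x)
                break  # recompute the maximal elements of the new ideal
        else:
            return I  # no maximal element is removable: I is minimal

# Chain x1 < x2 < x3 with B+ = {{x1}}: the unique answer is {x1, x2}.
print(first_minimal_ideal({1, 2, 3}, lambda a, b: a <= b, [{1}]))
```

On the chain example the unique solution $\{x_1,x_2\}$ is returned; deleting a maximal element always keeps the set an ideal, which is why the greedy procedure is correct.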
To~date, the best known algorithm runs in output sub-exponential time $2^{O(n^{0.67} \log^3 N)}$ where $N=|\B^+|+|\B^-|$, and where $P$ is given as an $n\times n$ matrix \cite{babin2017dualization}. Output quasi-polynomial time algorithms running in time $\poly(N,n) + N^{o(\log N)}$ are known for several subclasses, including distributive lattices coded by products of chains \cite{elbassioni2009algorithms}, or distributive lattices coded by the ideals of an interval order \cite{defrain2019dualization}. If the poset is an antichain, i.e., if the lattice is Boolean, then this problem calls for enumerating every inclusion-wise minimal subset of $X$ that is not a subset of any $B\in\B^+$, or equivalently, that intersects every set in $\H=\{X\setminus B \mid B \in \B^+\}$. Under such a formulation, it is easily seen that the dualization in Boolean lattices is equivalent to \textsc{Trans-Enum}{} (hence to \textsc{Dom-Enum}{}), where $Tr(\H)=\B^-$; see~\cite{nourine2014dualization,nourine2016encyclopedia}. In~this case, the best known algorithm runs in output quasi-polynomial time $N^{o(\log N)}$ where $N=|\B^+|+|\B^-|$, and the existence of an output-polynomial time algorithm remains open after decades of research \cite{eiter1995identifying,fredman1996complexity,eiter2008computational}. Due to its equivalence with \textsc{Dom-Enum}{}, the complexity of the dualization in Boolean lattices has been refined under various restrictions on graph classes and parameters. Among these results, output-polynomial time algorithms were given for degenerate \cite{eiter2003new}, line \cite{kante2012neighbourhood,golovach2015incremental}, split \cite{kante2014enumeration}, chordal \cite{kante2015chordal}, triangle-free graphs \cite{bonamy2019triangle}, graphs of bounded clique-width \cite{courcelle2009linear}, LMIM-width \cite{Golovach2018}, etc. 
Other classes of graphs remain open, including co-bipartite graphs (as the problem in this case is as hard as \textsc{Trans-Enum}{}, hence as hard as in general graphs \cite{kante2014enumeration}), or unit disk graphs~\cite{kante2008minimal,golovach2016enumerating}. The aim of this work is to generalize \textsc{Trans-Enum}{} and \textsc{Dom-Enum}{} to the dualization in distributive lattices, in order to obtain similar finer characterizations of the difficulty of the latter problem, using (hyper)graph parameters. \section{Transversal-ideals}\label{sec:transideal} We generalize \textsc{Trans-Enum}{} to the enumeration of the minimal ideals of a poset with the transversal property. We show that the obtained problem is equivalent to the dualization in distributive lattices. Let $\H$ be a hypergraph and $P_\H$ be a partial order on vertices of $\H$. Let $I$ be a subset of vertices of $\H$. We say that $I$ is a {\em transversal-ideal} of $\H$ w.r.t.~$P_\H$ if it is an ideal of $P_\H$, and a transversal of~$\H$. It~is minimal if it does not contain any transversal-ideal as a proper subset. We denote by $ITr(\H,P_\H)$ the set of minimal transversal-ideals of $\H$ w.r.t.~$P_\H$, and define the problem of generating $ITr(\H,P_\H)$ as follows. \begin{problemgen} \problemtitle{Minimal Transversal-Ideals Enumeration (\textsc{ITrans-Enum}{})} \probleminput{A hypergraph $\H$ and a partial order $P_\H$ on vertices of $\H$.} \problemquestion{The set $ITr(\H,P_\H)=\Min_\subseteq\{I\in \I(P_\H) \mid I\ \text{is a transversal of}\ \H\}$.} \end{problemgen} \begin{figure} \center \includegraphics[page=1,scale=1.2]{fig-running-example.pdf} \caption{A hypergraph $\H=\{e_1,e_2,e_3,e_4\}$ (left) and a partial order $P_\H$ on vertices of $\H$ (right), where $e_1=\{x_1,x_2,x_5\}$, $e_2=\{x_1,x_2,x_3\}$, $e_3=\{x_3,x_4,x_5\}$ and $e_4=\{x_5,x_6\}$. The minimal transversal-ideals for this instance are $I_1=\{x_2,x_3,x_5\}$ and $I_2=\{x_2,x_3,x_6\}$. 
Note that $I_1$ is the ideal of two minimal transversals $T_1=\{x_2,x_5\}$ and $T_2=\{x_3,x_5\}$.} \label{fig:running-example-H} \end{figure} An instance of this problem is given in Figure~\ref{fig:running-example-H}. Observe that as for \textsc{Dual-Enum}{}, computing a first solution to \textsc{ITrans-Enum}{} is easily done by starting with $I=V(\H)$ as a transversal-ideal, and greedily reducing it until it is minimal. It is worth pointing out that in the case where $P_\H$ is an antichain poset, the minimal transversal-ideals of $\H$ w.r.t.~$P_\H$ are exactly the minimal transversals of $\H$, and the two problems \textsc{ITrans-Enum}{} and \textsc{Trans-Enum}{} are equivalent. In the general case, however, a minimal transversal-ideal of $\H$ w.r.t.~$P_\H$ may contain several minimal transversals of $\H$; see Figure~\ref{fig:running-example-H} for an example. If~$P_\H$ is a total order, then $\H$ admits a unique minimal transversal-ideal no matter the number of minimal transversals. Consequently, the size of $Tr(\H)$ may be exponential in the size of $ITr(\H,P_\H)$. It is easily observed that \textsc{Dual-Enum}{} is a particular case of \textsc{ITrans-Enum}{}, where ${P=P_\H=(X,\leq)}$ and where to every ideal $B\in \B^+$ corresponds a hyperedge $\e=X\setminus B$ of $\H$, as in that case an ideal $I$ is a transversal-ideal of $\H$ (i.e., $I\cap\e\neq\emptyset$ for all $\e\in \H$) if and only if $I\not\subseteq B$ for any $B\in \B^+$. In other words, \textsc{Dual-Enum}{} appears as a particular case of \textsc{ITrans-Enum}{} in which $\H$ defines a collection of filters of $P_\H$. However, $\H$ may not be a collection of filters of $P_\H$ in general, and \textsc{ITrans-Enum}{} may appear as a tougher problem at first glance. Nevertheless, we~show that the two problems are equivalent by showing that hypergraphs that do not have this property can be closed w.r.t.~the poset with no impact on the set of solutions to enumerate. 
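The set $ITr(\H,P_\H)$ to enumerate can be described operationally by the following brute-force Python sketch (ours, exponential, for illustration only). On the edge set of Figure~\ref{fig:running-example-H} with the chain $x_1<\dots<x_6$ as poset, it returns the single ideal $\{x_1,\dots,x_5\}$, illustrating the uniqueness observed above for total orders; with an antichain poset it returns the minimal transversals of $\H$:

```python
from itertools import combinations

def minimal_transversal_ideals(vertices, edges, leq):
    """Brute-force ITr(H, P_H): inclusion-wise minimal ideals of the
    poset (vertices, leq) that intersect every edge of H."""
    def is_ideal(S):
        return all(y in S for x in S for y in vertices if leq(y, x))
    cands = [set(c) for r in range(len(vertices) + 1)
             for c in combinations(vertices, r)
             if is_ideal(set(c)) and all(set(c) & set(e) for e in edges)]
    return [I for I in cands if not any(J < I for J in cands)]

H = [{1, 2, 5}, {1, 2, 3}, {3, 4, 5}, {5, 6}]
chain = lambda a, b: a <= b          # total order x1 < ... < x6
print(minimal_transversal_ideals(list(range(1, 7)), H, chain))
```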
For any hypergraph $\H$ and poset $P_\H$, we denote by $\uparrow \H$ the {\em filter-closed hypergraph} of $\H$ w.r.t.~$P_\H$ defined by $V(\uparrow\H)=V(\H)$ and $\E(\uparrow\H)=\Min_\subseteq\{\uparrow \e\mid \e\in \E(\H)\}$. Observe that $|\uparrow \H|\leq |\H|$. \begin{lemma}\label{lemma:hypergraph-closure} Let $I$ be an ideal of $P_\H$. Then $I$ is a transversal-ideal of $\H$ if and only if it is a transversal-ideal of $\uparrow \H$. In particular, $ITr(\H,P_\H)=ITr(\uparrow \H, P_\H)$. \end{lemma} \begin{proof} Let $I$ be an ideal of $P_\H$ and $\e$ be an edge of $\H$. We show that $I$ intersects $\e$ if and only if it intersects $\uparrow \e$. Clearly if $I$ intersects $\e$ then it intersects $\uparrow \e$ as $\e\subseteq \uparrow \e$. Let us assume that $I$ intersects $\uparrow \e$ and let $x\in I\cap \uparrow \e$. Then there exists $y\in \e$ such that $y\leq x$. Since $I$ is an ideal, $y\in I$. Thus $I\cap \e\neq\emptyset$. Hence $ITr(\H,P_\H)=ITr(\uparrow \H, P_\H)$. \end{proof} \begin{lemma}\label{lemma:incident-edge-incl} If $\uparrow \H=\H$ then $x\leq y$ implies $\E_x\subseteq \E_y$ for all $x,y\in V(\H)$. \end{lemma} \begin{proof} Let $\H$ be such that $\uparrow \H=\H$, let $x,y\in V(\H)$ be such that $x\leq y$, and let $E\in \E_x$. Since $E=\uparrow E$ and $x\leq y$, $y\in E$. Hence the desired result. \end{proof} In what follows, we say that $P_\H$ is a poset of {\em incident edge inclusion} of $\H$ if $x\leq y$ implies $\E_x\subseteq \E_y$. By Lemma~\ref{lemma:incident-edge-incl}, every partial order $P_\H$ is a poset of incident edge inclusion of $\uparrow\H$. We conclude with the following result. \begin{theorem}\label{thm:maintrans} \textsc{Dual-Enum}{} and \textsc{ITrans-Enum}{} are equivalent, even when restricted to posets of incident edge inclusion. 
\end{theorem} \begin{proof} It follows from the equivalence between $I\not\subseteq B$ for any $B\in \B^+$ and $I\cap (X\setminus B)\neq\emptyset$ for all $B\in \B^+$ that \textsc{Dual-Enum}{} is a particular case of \textsc{ITrans-Enum}{}, where $\H=\{X\setminus B \mid B\in \B^+\}$ and $ITr(\H,P_\H)=\B^-$. We show that \textsc{ITrans-Enum}{} reduces to \textsc{Dual-Enum}{}. Let $(\H,P_\H)$ be an instance of the first problem and $\G=\uparrow \H$ be the filter-closed hypergraph of $\H$ w.r.t.~$P_\H$. Clearly, $\G$ can be computed in polynomial time in the sizes of $\H$ and $P_\H$, and $\B^+=\{X\setminus \e \mid \e\in \E(\G)\}$ defines an antichain of $\L(P_\H)$. By Lemma~\ref{lemma:hypergraph-closure}, $ITr(\H,P_\H)=ITr(\G,P_\H)$. As~$ITr(\G,P_\H)=\Min_\subseteq\{I\in \I(P_\H) \mid I\not\subseteq B~\text{for any}~B\in \B^+\}$, we deduce that $\B^-=ITr(\G,P_\H)$ where $\B^-$ is the dual antichain of $\B^+$ in $\L(P_\H)$. Hence \textsc{ITrans-Enum}{} can be solved using an algorithm for \textsc{Dual-Enum}{} on $P_\H$ and $\B^+$. \end{proof} \section{Dominating-ideals}\label{sec:domideal} We generalize \textsc{Dom-Enum}{} to the enumeration of the minimal ideals of a poset with the domination property. We show that the obtained problem is equivalent to the dualization in distributive lattices. This will allow us to study the complexity of the problem under various restrictions on graph classes and poset types. Let $G$ be a graph and $P_G$ be a partial order on vertices of $G$. Let $D$ be a subset of vertices of $G$. We say that $D$ is a {\em dominating-ideal} of $G$ w.r.t.~$P_G$ if it is an ideal of $P_G$ and a dominating set of~$G$. It~is minimal if it does not contain any dominating-ideal as a proper subset. Note that a dominating-ideal $I$ is minimal if and only if $\priv(I,x)\neq \emptyset$ for all $x\in \Max(I)$. 
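This minimality criterion is straightforward to test. The following Python sketch (ours, illustrative; the graph is given as a hypothetical adjacency dict and the poset by a comparison predicate) checks whether a given non-empty set is a minimal dominating-ideal, here on the path $a$--$b$--$c$ equipped with the single relation $a<b$:

```python
def is_minimal_dominating_ideal(adj, leq, I):
    """Check the criterion: I is an ideal, I dominates G, and every
    maximal element of I has a private neighbour w.r.t. I."""
    V, N = set(adj), (lambda v: adj[v] | {v})
    if not I or set().union(*(N(v) for v in I)) != V:
        return False                       # I does not dominate G
    if any(leq(y, x) and y not in I for x in I for y in V):
        return False                       # I is not an ideal
    def priv(x):                           # private neighbours of x
        return N(x) - set().union(set(), *(N(d) for d in I if d != x))
    maximal = [x for x in I if not any(leq(x, y) for y in I if y != x)]
    return all(priv(x) for x in maximal)

path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
below = lambda x, y: x == y or (x, y) == ('a', 'b')   # poset a < b
print(is_minimal_dominating_ideal(path, below, {'a', 'b'}))  # True
```

Note that $\{b\}$ is a minimal dominating set of this path, but it is not an ideal of the poset $a<b$, so the minimal dominating-ideal here is $\{a,b\}$ instead.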
We denote by $\mathcal{ID}(G,P_G)$ the set of minimal dominating-ideals of $G$ w.r.t.~$P_G$, and define the problem of generating $\mathcal{ID}(G,P_G)$ as follows. \begin{problemgen} \problemtitle{Minimal Dominating-Ideals Enumeration (\textsc{IDom-Enum}{})} \probleminput{A graph $G$ and a partial order $P_G$ on vertices of $G$.} \problemquestion{The set $\mathcal{ID}(G,P_G)=\Min_\subseteq\{I\in \I(P_G) \mid I\ \text{dominates}\ G\}$.} \end{problemgen} \begin{figure} \center \includegraphics[page=2,scale=1.2]{fig-running-example.pdf} \caption{A graph $G$ (left) and a partial order $P_G$ (right) on vertices of $G$ such that $\N(G)=\H$, where $\H$ is the hypergraph defined in Figure~\ref{fig:running-example-H}. The minimal dominating-ideals for this instance are $I_1=\{x_2,x_3,x_5\}$ and $I_2=\{x_2,x_3,x_6\}$.} \label{fig:running-example-G} \end{figure} An instance of this problem is given in Figure~\ref{fig:running-example-G}. Observe that as in the classical case where $P_G$ is an antichain, \textsc{IDom-Enum}{} naturally appears as a particular case of \textsc{ITrans-Enum}{} where $\mathcal{ID}(G,P_G)=ITr(\N(G),P_G)$. The rest of this section is devoted to the proof of their equivalence. In the following, we say that $P_G$~is a {\em neighborhood inclusion poset} of~$G$ if $x\leq y$ implies $N[x]\subseteq N[y]$, and that $P_G$~is a {\em weak neighborhood inclusion poset} of~$G$ if at least one of $N[x]\subseteq N[y]$ and $N[x]\supseteq N[y]$ holds whenever $x\leq y$. Clearly, every neighborhood inclusion poset is a weak neighborhood inclusion poset. It can be seen that the first restriction is closely related to the one of Lemma~\ref{lemma:incident-edge-incl}, as every neighborhood inclusion poset of a graph corresponds to an incident edge inclusion poset of $\N(G)$. Hence, neighborhood inclusion posets naturally appear when considering dualization problems in distributive lattices. The aforementioned equivalences are the following. 
\begin{theorem}\label{thm:maindom} \textsc{ITrans-Enum}{} and \textsc{IDom-Enum}{} are equivalent, even when restricted to: \begin{enumerate} \item bipartite graphs;\label{item:main1} \item split graphs and weak neighborhood inclusion posets; and\label{item:main2} \item co-bipartite graphs and neighborhood inclusion posets.\label{item:main3} \end{enumerate} \end{theorem} \begin{proof} Clearly, \textsc{IDom-Enum}{} is a particular case of \textsc{ITrans-Enum}{} where $\mathcal{ID}(G,P_G)=ITr(\N(G),P_G)$. We show that \textsc{ITrans-Enum}{} reduces to \textsc{IDom-Enum}{}. Let $(\H,P_\H)$ be a non-trivial instance (such that $\H\neq\emptyset$) of \textsc{ITrans-Enum}{}. Note that by Lemma~\ref{lemma:hypergraph-closure}, we can restrict ourselves to the case where $\H=\uparrow \H$. Hence by Lemma~\ref{lemma:incident-edge-incl}, $x\leq y$ in $P_\H$ implies $\E_x\subseteq \E_y$. Consider the bipartite incidence graph $I(\H)$ of $\H$ of bipartition $X=V(\H)$ and $Y=\{y_\e \mid \e\in \E(\H)\}$, where $xy_\e\in E(I(\H))$ if and only if $x\in X$, $y_\e \in Y$ and $x\in \e$; see~Section~\ref{sec:preliminaries} and Figure~\ref{fig:inc-bipartite}. A first observation is the following: \begin{observation}\label{obs:openneighborhoodinclusion} Let $x,y\in P_\H$. Then $x\leq y$ implies $N(x)\subseteq N(y)$ in $I(\H)$. \end{observation} The remainder of the proof is separated into three parts: we will adapt the construction of the bipartite incidence graph according to each item of the theorem. Let us first consider Item~\ref{item:main1}. Let $G$ be the graph obtained from $I(\H)$ by adding a single vertex $v$ connected to every vertex of $X$. Then $G$ is bipartite with bipartition $X$ and $Y\cup\{v\}$. Let $P_G$ be the poset obtained from $P_\H$ by making every $y\in Y$ greater than every $x\in X$, and $v$ incomparable with every other vertex, i.e., $P_G= P_\H \cup \{x<y \mid x\in X,\ y\in Y\}$. We prove the following. \begin{claim}\label{claim:bipartite} Let $I\subsetneq V(\H)$. 
Then $I\in ITr(\H,P_\H)$ if and only if $I\cup\{v\}\in \mathcal{ID}(G,P_G)$. \end{claim} \begin{proof}[Proof of the claim] Let $I\subsetneq V(\H)$ such that $I\in ITr(\H,P_\H)$. As $\H$ is non-empty, $I\neq \emptyset$. By construction, $I$ is an ideal of $P_G$ and it is a minimal dominating-ideal of the subset $Y$, i.e., $Y\subseteq N[I]$ and $Y\not\subseteq N[I\setminus \{x\}]$ for any $x\in \Max(I)$. By hypothesis $I\neq V(\H)$, hence $I$ does not dominate $G$, and $I\cup\{v\}$ does; $v$ is here to dominate the elements of $X$ that are not in the transversal. Since $v$ is not adjacent to $Y$, it does not steal private neighbors from the vertices of $I$. Hence $I\cup \{v\}$ is a minimal dominating-ideal of $G$. % Conversely, let $I\subsetneq V(\H)$ be such that $I\cup\{v\}\in \mathcal{ID}(G,P_G)$. As $\H$ is non-empty and $v$ is not adjacent to $Y$, the set $I$ is non-empty and dominates $Y$; moreover it is an ideal of $P_\H$. Since $I\neq X$, $v$ has a private neighbor in $X$, and $\priv(I\cup\{v\},x)\subseteq Y$ for every $x\in I$. By minimality of $I\cup\{v\}$, every $x\in \Max(I)$ thus has a private neighbor in $Y$. Hence $I$ is a minimal transversal-ideal of $\H$. \renewcommand{\qed}{\cqedsymbol} \end{proof} Let us now consider Item~\ref{item:main2}. Let $G$ be the graph obtained from $I(\H)$ by completing $X$ into a clique, and by adding a single vertex $v$ connected to every vertex of the graph, i.e., $v$ is universal in $G$. Then $G$ is split with clique $X\cup\{v\}$ and independent set~$Y$. Let $P_G$ be the poset obtained from $P_\H$ by making every $y\in Y$ greater than~$v$, i.e., $P_G=P_\H \cup \{v<y \mid y\in Y\}$. We prove the following two claims. \begin{claim}\label{claim:weak} $P_G$ is a weak neighborhood inclusion poset on~$G$. \end{claim} \begin{proof}[Proof of the claim] Clearly, $x\leq y$ either implies $x,y\in X$, or both $x=v$ and $y\in Y$. In~the first case, it follows from Observation~\ref{obs:openneighborhoodinclusion} that $N[x]\subseteq N[y]$ as $X\cup \{v\}$ induces a clique. In~the other case, $N[v]\supseteq N[y]$ as $v$ is universal. 
\renewcommand{\qed}{\cqedsymbol} \end{proof} \begin{claim}\label{claim:split} Let $I\subseteq V(G)$. Then $I\in ITr(\H,P_\H)$ if and only if $I\in \mathcal{ID}(G,P_G)$ and $I\neq \{v\}$. \end{claim} \begin{proof}[Proof of the claim] Let $I\subseteq V(\H)$ such that $I\in ITr(\H,P_\H)$. As $\H$ is non-empty, $I\neq \emptyset$. By construction, $I$ is an ideal of $P_G$, $I\neq\{v\}$, and it is a minimal dominating-ideal of $Y$. As $I$ dominates $X\cup\{v\}$, it is a minimal dominating-ideal of $G$. Conversely, let $I\subseteq V(G)$ be such that $I\in \mathcal{ID}(G,P_G)$ and $I\neq \{v\}$. Note that $y\not\in I$ for any $y\in Y$ as $v\in \downarrow y$ and $v$ dominates $G$. Moreover $v\not\in I$: otherwise, as $v$ is universal, any maximal element of $I\setminus \{v\}$ would have no private neighbor, and minimality would yield $I=\{v\}$, which is excluded. Hence $I\subseteq X$. Since $X$ induces a clique, $\priv(I,x)\subseteq Y$ for all $x\in I$. Hence $I$ is a minimal transversal-ideal of $\H$. \renewcommand{\qed}{\cqedsymbol} \end{proof} We now consider Item~\ref{item:main3}. Let $G$ be the graph obtained from $I(\H)$ by adding a single vertex $v$ connected to every vertex of $X$, and by completing both $X$ and $Y$ into cliques. Then $G$ is co-bipartite with cliques $X\cup\{v\}$ and $Y$. Let $P_G=P_\H$. We prove the following two claims. \begin{claim}\label{claim:strong} $P_G$ is a neighborhood inclusion poset on~$G$. \end{claim} \begin{proof}[Proof of the claim] Let $x,y\in P_G$ such that $x\leq y$. It follows from Observation~\ref{obs:openneighborhoodinclusion} that $N[x]\subseteq N[y]$ as $X\cup \{v\}$ induces a clique. \renewcommand{\qed}{\cqedsymbol} \end{proof} \begin{claim}\label{claim:co-bipartite} Let $I\subseteq V(G)$. Then $I\in ITr(\H,P_\H)$ if and only if $I \in\mathcal{ID}(G,P_G)$ and $I\not\in \{\{x,y\} \mid x\in X\cup \{v\},\ y\in Y\}$. \end{claim} \begin{proof}[Proof of the claim] Let $I\subseteq V(\H)$ such that $I\in ITr(\H,P_\H)$. As $\H$ is non-empty, $I\neq \emptyset$. By construction, $I$ is an ideal of $P_G$, $I\not\in \{\{x,y\} \mid x\in X\cup \{v\}$, $y\in Y\}$, and it is a minimal dominating-ideal of $Y$. 
As $I$ dominates $X\cup\{v\}$, it is a minimal dominating-ideal of $G$. Conversely, let $I\subseteq V(G)$ be such that $I\in \mathcal{ID}(G,P_G)$ and $I\not\in \{\{x,y\} \mid x\in X\cup \{v\},\ y\in Y\}$. Note that $y\not\in I$ for any $y\in Y$: otherwise, as $v$ is not adjacent to any vertex in $Y$, $I$ must contain one vertex of $X\cup \{v\}$ to dominate $v$. Then $I$ contains no other vertex, as such a pair already dominates $G$, and $I\in \{\{x,y\} \mid x\in X\cup \{v\},\ y\in Y\}$, a case excluded by hypothesis. Moreover, $v\not\in I$ as otherwise $I$ must contain some $x\in X$ to dominate $Y$, and $N[v]\subseteq N[x]$, so that $v$ would have no private neighbor. Hence $I\subseteq X$. Since $X$ induces a clique, $\priv(I,x)\subseteq Y$ for all $x\in I$. Hence $I$ is a minimal transversal-ideal of $\H$. \renewcommand{\qed}{\cqedsymbol} \end{proof} The proof of the theorem follows from Claims~\ref{claim:bipartite}, \ref{claim:weak}, \ref{claim:split}, \ref{claim:strong} and \ref{claim:co-bipartite}, observing that $G$ is constructed from $I(\H)$ in polynomial time in the sizes of $\H$ and $P_\H$, and that $ITr(\H,P_\H)$ can be enumerated with polynomial delay from $\mathcal{ID}(G,P_G)$ on the constructed graph and poset. Indeed, in the case of Item~\ref{item:main1} only one extra solution (namely $I=V(\H)$) has to be handled separately. In the case of Item~\ref{item:main2}, only one solution (namely $I=\{v\}$) has to be discarded. In the case of Item~\ref{item:main3}, at most $|V(G)|^2$ solutions (namely all subsets of $V(G)$ of size two) have to be discarded. This concludes the proof. \end{proof} \section{Tractable cases for dominating-ideals enumeration}\label{sec:tractable} In the following, we show that the combined restrictions left by Theorem~\ref{thm:maindom} are tractable (see Figure~\ref{fig:sum}), using existing algorithms and techniques from the literature for the enumeration of minimal dominating sets in split and triangle-free graphs \cite{kante2014enumeration,bonamy2019triangle}. 
Our results rely on the following important property. \begin{proposition}\label{prop:domantichain} Let $G$ be a graph and $P_G$ be a weak neighborhood inclusion poset on~$G$. Then, every minimal dominating set of $G$ is an antichain of $P_G$. Hence there is a bijection between minimal dominating sets of $G$ and their ideals in $P_G$. If in addition $P_G$ is a neighborhood inclusion poset, then $\Max(D)$ dominates $G$ whenever $D$ does. \end{proposition} \begin{proof} Let $D$ be a dominating set of $G$ and $x,y\in D$ be two comparable elements of $P_G$. If~$P_G$ is a weak neighborhood inclusion poset, then either $N[x]\subseteq N[y]$ or $N[x]\supseteq N[y]$. Thus, either $D\setminus \{x\}$ or $D\setminus \{y\}$ dominates $G$ and we deduce that every minimal dominating set of $G$ is an antichain of $P_G$. If $P_G$ is a neighborhood inclusion poset and $x\leq y$, then $D\setminus \{x\}$ dominates $G$ and we deduce that $\Max(D)$ dominates $G$. Since the set of antichains and the set of ideals of a poset are in bijection, we conclude that minimal dominating sets of $G$ and their ideals in $P_G$ are in bijection. \end{proof} A consequence of Proposition~\ref{prop:domantichain} is the following equality. \begin{equation}\label{eqn:main} \mathcal{ID}(G,P_G)=\Min_\subseteq\{\downarrow D \mid D\in \D(G)\}.\tag{$5$} \end{equation} Note that instances that satisfy this property are not trivially tractable, as two of the constructed instances in the proof of Theorem~\ref{thm:maindom} satisfy Proposition~\ref{prop:domantichain}, despite the fact that the problem on such instances is \textsc{Dual-Enum}{}-hard, hence \textsc{Trans-Enum}{}-hard. \subsection{Split graphs and neighborhood inclusion posets} In \cite{kante2014enumeration}, the authors give a polynomial delay algorithm to enumerate minimal dominating sets in split graphs. 
Their algorithm relies on the two observations that if $G$ is a split graph of maximal independent set $S$, and clique $C$, then the set of intersections of minimal dominating sets of $G$ with $C$ is in bijection with $\D(G)$, and it forms an independence system. A pair $(X,\S)$ where $\S\subseteq 2^X$ is an {\em independence system} if $\emptyset\in \S$ and if $S\in \S$ implies that $S'\in \S$ for all $S'\subseteq S$. We show that these observations can be generalized in our case, giving a polynomial delay algorithm to enumerate $\mathcal{ID}(G,P_G)$ whenever $G$ is split and $P_G$ is a neighborhood inclusion poset. In what follows, we follow the notations of \cite{kante2014enumeration} to denote the intersection of a dominating set $D$ with some set $W\subseteq V(G)$, namely $D_W=D\cap W$. We extend this notation to the set of minimal dominating sets as follows: \[ \D_W(G)\eqdef\{D_W \mid D\in \D(G)\}. \] \begin{proposition}[\cite{kante2014enumeration}]\label{prop:kante-characterization} Let $G$ be a split graph with maximal independent set $S$, clique~$C$, and let $D$ be a minimal dominating set of $G$. Then $D_S=S\setminus N(D_C)$. Furthermore, $\D_C(G)=\{A \subseteq C \mid \forall x \in A,\ \priv(A,x)\cap S\neq\emptyset\}$ and \begin{enumerate} \item $\D_C(G)$ and $\D(G)$ are in bijection,\label{item:k-c-i1} \item $(C,\D_C(G))$ is an independence system.\label{item:k-c-i2} \end{enumerate} \end{proposition} In the following, we consider a split graph $G$ and a neighborhood inclusion poset $P_G$. As $P_G$ is a neighborhood inclusion poset, Equality~\eqref{eqn:main} applies. The next proposition allows us to consider a decomposition of $G$ into a maximal independent set $S$, and a clique $C$, such that $S\subseteq \Min(P_G)$. \begin{proposition}\label{prop:hypothesis-SminP} Let $G$ be a split graph and $P_G$ be a neighborhood inclusion poset on~$G$. 
Then there exists a decomposition of $G$ into a maximal independent set $S$, and a clique $C$, such that $S\subseteq \Min(P_G)$. \end{proposition} \begin{proof} Let $S,C$ be a decomposition of $G$ that maximizes the independent set $S$. If $x\in S$ and $x\not\in \Min(P_G)$, then there exists some $y_x\in C$ such that $y_x\leq x$, $N[x]=N[y_x]$, and thus such that $(S\setminus \{x\})\cup\{y_x\}$ and $(C\setminus \{y_x\})\cup\{x\}$ still form a decomposition of $G$ that maximizes the independent set. \end{proof} We now define \[ \D_C(G,P_G)\eqdef\{D_C \mid D\in \D(G)~\text{and}~\downarrow D\in \mathcal{ID}(G,P_G)\}, \] and show that Proposition~\ref{prop:kante-characterization} extends to this set. \begin{lemma}\label{lemma:subindependencesystem} Let $G$ be a split graph with maximal independent set $S$ and clique~$C$, and $P_G$ be a neighborhood inclusion poset on~$G$. Then $\D_C(G,P_G)$ and $\mathcal{ID}(G,P_G)$ are in bijection, and $\D_C(G,P_G)\subseteq \D_C(G)$. \end{lemma} \begin{proof} The bijection between $\D_C(G,P_G)$ and $\mathcal{ID}(G,P_G)$ follows from Propositions~\ref{prop:domantichain}, \ref{prop:kante-characterization} and Equality~\eqref{eqn:main}, where to every $A\in \D_C(G,P_G)$ corresponds a unique $I\in \mathcal{ID}(G,P_G)$ such that $I=\downarrow(A\cup (S\setminus N(A)))$, and to every $I\in \mathcal{ID}(G,P_G)$ corresponds a unique $A\in \D_C(G,P_G)$ such that $A=\Max(I)\cap C$. The inclusion $\D_C(G,P_G)\subseteq \D_C(G)$ follows from Equality~\eqref{eqn:main}, as $A\in \D_C(G,P_G)$ implies $A=D_C$ for some $D\in \D(G)$ such that $\downarrow D\in \mathcal{ID}(G,P_G)$. \end{proof} \begin{lemma}\label{lemma:split-IS} Let $G$ be a split graph with maximal independent set $S\subseteq \Min(P_G)$ and clique~$C$, and $P_G$ be a neighborhood inclusion poset on~$G$. Then $(C,\D_C(G,P_G))$ is an independence system that can be enumerated with polynomial delay given $G$ and $P_G$. 
\end{lemma} \begin{proof} We first show that $(C,\D_C(G,P_G))$ is an independence system, by proving that if $A\in \D_C(G,P_G)$ and $A$ is not empty, then removing any element in $A$ yields another set in $\D_C(G,P_G)$. Let $\emptyset\neq A\subseteq C$ such that $A\in \D_C(G,P_G)$. Let us assume toward a contradiction that there exists $x\in A$ such that $A\setminus\{x\}\not\in \D_C(G,P_G)$. By~Lemma~\ref{lemma:subindependencesystem}, since $A\in \D_C(G,P_G)$ and since $\D_C(G)$ is an independence system, both $A$ and $A\setminus \{x\}$ belong to $\D_C(G)$. Let $D,D'\in \D(G)$ such that $A=D_C$ and $A\setminus\{x\}=D'_C$. By Proposition~\ref{prop:kante-characterization}, $D'=D\setminus \{x\}\cup \{s_1,\dots,s_k\}$ where $\{s_1,\dots,s_k\}=\priv(A,x)\cap S$. As by hypothesis $A\setminus\{x\}\not\in \D_C(G,P_G)$, there exists $D^*\in \D(G)$ such that $\downarrow D^*\subsetneq \downarrow D'$. Now, note that $\Min(P_G)\cap D'\subseteq D^*$, as otherwise there exists $w\in \Min(P_G)\cap D'\setminus D^*$, hence $D^*\subseteq \downarrow (D'\setminus \{w\})$, and we deduce that $\downarrow (D'\setminus \{w\})$ dominates $G$. But then by Proposition~\ref{prop:domantichain}, $\Max(\downarrow (D'\setminus \{w\}))=D'\setminus \{w\}$ dominates $G$, which contradicts the fact that $D'$ is a minimal dominating set. Therefore $\{s_1,\dots,s_k\}\subseteq D^*$ and as $\downarrow D^*\subsetneq \downarrow D'$, there exist $u\in D^*$ and $v\in D'\setminus \Min(P_G)\setminus D^*$ such that $u<v$. Note that $v\in D$ (as $v\not\in \Min(P_G)$ and $S\subseteq \Min(P_G)$, so that $v\in D'_C=A\setminus \{x\}\subseteq D$) and $v\neq x$ (as $x\not\in D'$). Let $D^\circ=(D^*\setminus\{s_1,\dots,s_k\})\cup \{x\}$. Clearly $D^\circ$ dominates $G$. As $\downarrow D^*\subseteq \downarrow D'$, $\downarrow D^\circ \subseteq \downarrow D$. Moreover $\downarrow D^\circ\subsetneq \downarrow D$ as $v\in D$ and $v\not\in D^\circ$. This contradicts the hypothesis that $A\in \D_C(G,P_G)$. Hence $A\setminus \{x\}\in \D_C(G,P_G)$. 
Now, note that testing whether some arbitrary set $A\subseteq C$ belongs to $\D_C(G,P_G)$ can be done in polynomial time in the sizes of $G$ and $P_G$: first compute the unique $D\in \D(G)$ such that {$D_C=A$}, using Proposition~\ref{prop:kante-characterization}, and test whether $\downarrow D\in \mathcal{ID}(G,P_G)$ by checking if ${\priv(\downarrow D, x)}\neq \emptyset$ for every $x\in D$. Hence, $\D_C(G,P_G)$ can be enumerated with polynomial delay by adding vertices of $C$ one by one from the empty set to maximal elements of $\D_C(G,P_G)$, checking at each step whether the new set belongs to $\D_C(G,P_G)$. Repetitions are avoided with a linear order on vertices of $C$; see~\cite{kante2014enumeration} for further details on the enumeration of an independence system. \end{proof} We obtain a polynomial delay algorithm to enumerate $\mathcal{ID}(G,P_G)$ whenever $G$ is a split graph and $P_G$ is a neighborhood inclusion poset on $G$. The algorithm first computes a decomposition $S,C$ that maximizes the independent set, makes $S$ a subset of $\Min(P_G)$ using Proposition~\ref{prop:hypothesis-SminP}, and enumerates the independence system $(C,\D_C(G,P_G))$ with polynomial delay using Lemma~\ref{lemma:split-IS}. For every $A\in \D_C(G,P_G)$, it outputs the unique corresponding $I=\downarrow D$ such that ${D_C=A}$ using Lemma~\ref{lemma:subindependencesystem}. This can clearly be done with polynomial delay. We conclude with the following result. \begin{theorem}\label{thm:split} There is a polynomial delay algorithm for \textsc{IDom-Enum}{} whenever $G$ is split and $P_G$ is a neighborhood inclusion poset. \end{theorem} \subsection{Triangle-free graphs and weak neighborhood inclusion posets} In \cite{bonamy2019triangle}, the authors give an output-polynomial algorithm to enumerate minimal dominating sets in triangle-free graphs, i.e., graphs with no induced clique of size three. These graphs include bipartite graphs.
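As a brief aside, the vertex-by-vertex scheme used in the proof of Lemma~\ref{lemma:split-IS} is an instance of the generic traversal that enumerates any independence system with polynomial delay, given a polynomial-time membership oracle (here, the membership test described in the proof). The following Python sketch is purely illustrative; the function names and the toy oracle are ours, not from \cite{kante2014enumeration}.

```python
def enumerate_independence_system(ground, is_member):
    """Enumerate every member of a downward-closed set system
    ("independence system") over `ground`, given a membership oracle.

    Every nonempty member A has the unique parent A \\ {max(A)}, which
    is again a member by downward closure; hence the search tree below
    visits each member exactly once, and the delay between two outputs
    is a polynomial number of oracle calls.
    """
    ground = sorted(ground)

    def grow(current, start):
        yield frozenset(current)
        for i in range(start, len(ground)):
            candidate = current + [ground[i]]
            if is_member(candidate):
                yield from grow(candidate, i + 1)

    yield from grow([], 0)


# Toy oracle: independent sets of the path 1-2-3 form an independence system.
edges = {(1, 2), (2, 3)}

def independent(vertices):
    return all((u, v) not in edges and (v, u) not in edges
               for u in vertices for v in vertices if u < v)

members = list(enumerate_independence_system([1, 2, 3], independent))
# members: {}, {1}, {1,3}, {2}, {3}
```

In the split-graph setting the oracle would be the polynomial-time test for membership in $\D_C(G,P_G)$, and the toy oracle above is only a stand-in.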
We rely on the algorithm of \cite{bonamy2019triangle} to show that $\mathcal{ID}(G,P_G)$ can be enumerated in output-polynomial time in the same graph class, whenever $P_G$ is a weak neighborhood inclusion poset on~$G$. Our argument is based on the next observation. \begin{proposition}\label{prop:star-in-poset} Let $G$ be a triangle-free graph and $P_G$ be a weak neighborhood inclusion poset on~$G$. Then $P_G$ is of height at most two, and it is partitioned into an antichain $A$ of isolated elements (that are both minimal and maximal in $P_G$), and a family $\S$ of $k$ disjoint stars% \footnote{$S_i$ induces a star in the Hasse diagram of $P_G$.} $S_1,\dots,S_k$ with respective centers $u_1,\dots,u_k$ such that either $S_i=\downarrow u_i$ or $S_i=\uparrow u_i$, for all $i\in [k]$. Furthermore, vertices in $S_i\setminus \{u_i\}$ are of degree one in $G$. \end{proposition} \begin{proof} This situation is depicted in Figure~\ref{fig:posetstars}. We first show that $P_G$ is of height at most two. Suppose that there exist $x,y,z$ such that $x<y<z$. Then $xy,xz,yz\in E(G)$ which contradicts the fact that $G$ is triangle-free. \begin{figure} \center \includegraphics[scale=1.2]{fig-posetstars.pdf} \caption{The situation of Proposition~\ref{prop:star-in-poset}.} \label{fig:posetstars} \end{figure} Let us now prove the rest of the proposition. Let $A=\Min(P_G)\cap\Max(P_G)$, and $B=P_G\setminus A$. Let $S\subseteq B$ be a connected component in the Hasse diagram of $P_G$, and let $x,y\in S$ such that $x<y$. Two symmetric cases arise depending on whether $N[x]\subseteq N[y]$ or $N[x]\supseteq N[y]$. % If $N[x]\subseteq N[y]$ then $x$ is of degree one in $G$ (or else the other neighbor of $x$ would be connected to both $x$ and $y$ and would induce a triangle in $G$). Moreover, every other element $z\neq x$ that is comparable with $y$ verifies $N[z]\subseteq N[y]$ (or else it verifies $N[z]\supseteq N[y]$ and $xyz$ induces a triangle in~$G$), hence is of degree one (by the previous remark).
Also, it verifies $z<y$ as $P_G$ is of height at most two. Hence $S$ induces a star of center $y$ in the Hasse diagram of $P_G$, such that $S=\downarrow y$, and where every vertex in $S\setminus \{y\}$ is of degree one in $G$. % The other case $N[x]\supseteq N[y]$ leads to the symmetric situation where $S=\uparrow x$ and where every vertex in $S\setminus \{x\}$ is of degree one in $G$. \end{proof} In the following, we denote by $\{v_i^1,\dots,v_i^l\}$ the set of branches of some star $S_i\in \S$, $i\in [k]$, and by $u_i$ its center. Then, we denote by $G_{re}$ and $P_{G_{re}}$ the {\em reduced graph and poset} obtained from $G$ and $P_G$, where every star $S_i\in \S$ had its branches $\{v_i^1,\dots,v_i^l\}$ contracted into a single element $v_i$, and where every edge $u_iu_j$ that connects two distinct stars $S_i,S_j$ in $G$ has been removed. We denote by $B_u$ and $B_v$ the sets $B_u=\{u_1,\dots,u_k\}$ and $B_v=\{v_1,\dots,v_k\}$. The resulting graph is detailed below and is given in Figure~\ref{fig:decomposition1}. Observe that $P_{G_{re}}$ is partitioned into an antichain $A$ of isolated elements (that are both minimal and maximal in $P_{G_{re}}$, and left untouched by our transformation), and a set $B=B_u\cup B_v=\{u_1,v_1,\dots,u_k,v_k\}$ of $k$ disjoint chains $u_iv_i$ (such that either $u_i<v_i$ or $v_i<u_i$), $i\in [k]$. The graph $G_{re}$ is partitioned into one triangle-free graph induced by $A$ (left untouched by our transformation), and an induced matching $\{u_1v_1,\dots,u_kv_k\}$ ($B_u$ and $B_v$ induce two independent sets), where $v_i$ is disconnected from $A$, and $u_i$ is arbitrarily connected to $A$, for every $i\in [k]$. Clearly, $G_{re}$ and $P_{G_{re}}$ can be constructed in polynomial time in the sizes of $G$ and $P_G$. The following property is implicit in \cite{kante2014enumeration} and can also be found in the Ph.D.~thesis of Mary \cite{mary2013enumeration}. 
\begin{figure} \center \includegraphics[scale=1.2]{fig-decomposition.pdf} \caption{The decomposition $(A,B)$ of a reduced triangle-free graph $G_{re}$.} \label{fig:decomposition1} \end{figure} \begin{proposition}[\cite{mary2013enumeration,kante2014enumeration}]\label{prop:redundant} Let $G$ be a graph and $uv$ be an edge of $G$. Then $\D(G)=\D(G-uv)$ whenever there exists $u'\neq u$, $v'\neq v$ such that $N_{G-uv}[u']\subseteq N_{G-uv}[u]$ and $N_{G-uv}[v']\subseteq N_{G-uv}[v]$. Such an edge $uv$ is called redundant. \end{proposition} \begin{lemma}\label{lemma:redundant-bij} There is a bijection between $\mathcal{ID}(G,P_G)$ and $\mathcal{ID}(G_{re},P_{G_{re}})$. \end{lemma} \begin{proof} Let $S$ be a star of Proposition~\ref{prop:star-in-poset} of center $u$ and branches $v^1,\dots,v^l$. Then, observe that $v^1,\dots,v^l$ are false twins in $G$, i.e., $N(v^i)=N(v^j)=\{u\}$ for all $i,j\in [l]$. It is easy to see that a minimal dominating set contains $v^i$ for one such $i$ if and only if it contains the whole set $\{v^1,\dots,v^l\}$ as a subset. Hence, the contraction of all branches $\{v^1,\dots,v^l\}$ of $S$ into a representative vertex $v$ in both $G$ and $P_G$ has no impact on the complexity of enumerating minimal dominating sets: one can replace $v$ by $\{v^1,\dots,v^l\}$ for every $D\in \D(G)$ such that $\downarrow D\in \mathcal{ID}(G,P_G)$ and $v\in D$ to obtain solutions of the graph before contraction. As for the deleted edges $u_iu_j$, $i,j\in [k]$, $i\neq j$, they are all redundant as $N_{G-u_iu_j}[v_p]\subseteq N_{G-u_iu_j}[u_p]$ for all $i,j,p\in [k]$, $i\neq j$. By Proposition~\ref{prop:redundant}, they can be removed from $G$ with no effect on domination. \end{proof} \begin{proposition}\label{prop:min-Bu} For every minimal dominating set $D$ such that $\downarrow D \in \mathcal{ID}(G_{re},P_{G_{re}})$, $\Min(P_{G_{re}})\cap B_u\subseteq D$. \end{proposition} \begin{proof} Let $u\in B_u\cap \Min(P_G)$ and $v\in B_v$ be the unique vertex such that $u<v$.
Since $v$ is of degree one in $G$, it must be dominated by either itself, or $u$. Since $u<v$, a dominating-ideal that contains $v$ is not minimal. Hence $\Min(P_{G_{re}})\cap B_u\subseteq D$ for every minimal dominating set $D$ such that $\downarrow D \in \mathcal{ID}(G_{re},P_{G_{re}})$. \end{proof} Let $G$ be a graph and $W,D$ be two subsets of vertices of $G$. Recall that $\D_G(W)$ denotes the set of minimal dominating sets of subset $W$ in $G$; see~Section~\ref{sec:preliminaries}. We now rely on an implicit result from \cite{bonamy2019triangle}, made explicit in~\cite{bonamy2019kt}. \begin{theorem}[\cite{bonamy2019triangle,bonamy2019kt}]\label{thm:triangle-free} There is an algorithm that, given a graph $G$ and a set $W\subseteq V(G)$ such that $G[W]$ is triangle-free, enumerates $\D_G(W)$ in total time $\poly(|G|)\cdot|\D_G(W)|^2$ and polynomial space. \end{theorem} Let us define the set $B_w=\Min(B)=\{w_1,\dots,w_k\}$. Note that $w_i=\Min_\leq\{u_i,v_i\}$ for all $i\in [k]$. We now consider the set \[ A'\eqdef{}A\setminus \bigcup_{i=1}^k N[w_i]. \] Clearly, $G_{re}[A']$ is triangle-free. Hence, $\D_{G_{re}}(A')$ can be enumerated in output-polynomial time $\poly(|{G_{re}}|)\cdot|\D_{G_{re}}(A')|^2$ using the algorithm of Theorem~\ref{thm:triangle-free}. We now show how to compute $\mathcal{ID}({G_{re}},P_{G_{re}})$ given $\D_{G_{re}}(A')$. \begin{figure} \center \includegraphics[scale=1.2,page=2]{fig-decomposition.pdf} \caption{The situation of Lemma~\ref{lemma:triangle-free}.} \label{fig:decomposition2} \end{figure} \begin{lemma}\label{lemma:triangle-free} Let $D$ be a minimal dominating set of $G_{re}$. Then $\downarrow D\in \mathcal{ID}({G_{re}},P_{G_{re}})$ if and only if $D= D^*\cup \{w_i \mid v_i\not\in N[D^*]\}$ for some $D^*\in \D_{G_{re}}(A')$. \end{lemma} \begin{proof} The situation of this lemma is depicted in Figure~\ref{fig:decomposition2}. We show the first implication.
Let $D\in \D(G_{re})$ such that $\downarrow D\in \mathcal{ID}({G_{re}},P_{G_{re}})$, and let $D^*=D\setminus \{w_i \mid w_i \in D\}$. Clearly, $D^*$ dominates $A'$. Let $t\in D^*$. We show that it has a private neighbor in $A'$. Let $a$ be a private neighbor of $t$ (w.r.t.~$D$) such that $a\not\in A'$. If no such $a$ exists, then the claim follows, as in that case $t$ must have a private neighbor in $A'$. Else, $a$ belongs to $N[w_i]$ for some $i\in [k]$. If~$w_i=u_i$ then by Proposition~\ref{prop:min-Bu} $w_i\in D$ which contradicts the fact that $a$ is a private neighbor of $t$. If~$w_i=v_i$, then $a\in \{u_i,v_i\}$. Since either $u_i$ or $v_i$ belongs to $D$ (as $v_i$ is of degree one), it must be that either $t=u_i$ or $t=v_i$. As $t\neq w_i=v_i$, we know that $t=u_i$. In that case, $t$ has another private neighbor $a'\neq a$ that is non-adjacent to $v_i$ (or else $\downarrow D$ is not a minimal dominating-ideal as $t=u_i$ can be replaced by $v_i$, $a\in N[v_i]$, and $v_i<u_i$). Finally, if $a'$ belongs to $N[w_{j}]$ for some $j\in [k]$, then $w_{j}=u_{j}$ (as $N[v_i]=\{u_i,v_i\}$ and $B$ is an induced matching, hence $a'\neq u_j$) which by Proposition~\ref{prop:min-Bu} is absurd, as $w_j\in D$. Hence $a'\in A'$, which proves our claim. Hence $D^*$ minimally dominates~$A'$, i.e., $D^*\in \D_{G_{re}}(A')$. Now, note that $w_i\in D$ if and only if $v_i \not\in N[D^*]$. Indeed, if $v_i \not\in N[D^*]$ then $w_i \in D$ (as otherwise $w_i \not\in D$, by Proposition~\ref{prop:min-Bu} $w_i=v_i$, hence $u_i \in D$, $u_i\in D^*$, and $v_i\in N[D^*]$ which is absurd). If $v_i \in N[D^*]$, then $u_i\in D^*$, $w_i=v_i$, and $w_i\not\in D$ or else $\{u_i,v_i\}\subseteq D$ which is absurd since $D$ is an antichain. Hence $D= D^*\cup \{w_i \mid v_i\not\in N[D^*]\}$ which concludes the first implication. We show the other implication. Let $D^*\in \D_{G_{re}}(A')$ and $D=D^*\cup \{w_i \mid v_i\not\in N[D^*]\}$.
Clearly $D$ dominates ${G_{re}}$ as for all $i\in[k]$, either $v_i\in N[D^*]$ and therefore $u_i\in D^*$ (as $v_i$ is disconnected from $A'$) and $N[w_i]$ is dominated, or $v_i\not\in N[D^*]$ and $w_i$ dominates $N[w_i]$. Note that if $t\in D^*$ then it has private neighbors in $A'$ that are not adjacent to any $w_i$ (by construction), hence such that no ideal $I\subsetneq \downarrow (D\setminus \{t\})$ can dominate $G_{re}$. If $t\in D\setminus D^*$ then $t=w_i$ for some $i\in [k]$, it has $v_i$ for private neighbor, and it is minimal in $P_{G_{re}}$. Hence $\downarrow D$ is a minimal dominating-ideal of $G_{re}$. \end{proof} We conclude that there is an output-polynomial algorithm to enumerate the set $\mathcal{ID}(G,P_G)$ whenever $G$ is triangle-free and $P_G$ is a weak neighborhood inclusion poset. The algorithm first computes $G_{re}$ and $P_{G_{re}}$ in polynomial time in the sizes of $G$ and $P_G$, and then enumerates $\mathcal{ID}(G,P_G)$ using Lemmas~\ref{lemma:redundant-bij} and~\ref{lemma:triangle-free}. \begin{theorem}\label{thm:bipartite} There is an algorithm that, given a triangle-free graph~$G$ and a weak neighborhood inclusion poset $P_G$, enumerates $\mathcal{ID}(G,P_G)$ in output-polynomial time. \end{theorem} We note that as $G_{re}[A]$ can yield any triangle-free graph in our construction, improving the algorithm of Theorem~\ref{thm:bipartite} to run with polynomial delay constitutes a challenging open question~\cite{bonamy2019triangle}. \section{Conclusion}\label{sec:conclusion} In this paper, we generalized the two problems of enumerating the minimal transversals of a hypergraph, and the minimal dominating sets of a graph, to the enumeration of the minimal ideals of a poset with the desired property, i.e., transversality and domination.
We showed that the obtained problems are equivalent to the dualization in distributive lattices, even when considering various combined restrictions on graph classes and poset types, including bipartite, split, and co-bipartite graphs, and variants of neighborhood inclusion posets; see Theorems~\ref{thm:maintrans} and~\ref{thm:maindom}. This study allowed us to consider the complexity of the problem under new parameters. For combined restrictions that are not considered in Theorem \ref{thm:maindom}, we showed that the problem is tractable, relying on existing algorithms from the literature; see Theorems~\ref{thm:split} and~\ref{thm:bipartite}. A summary of the obtained complexities is given in Figure~\ref{fig:sum}. \begin{figure} \small \center \begin{tabular}{ | C{3cm} | C{2.3cm} | C{2.3cm} | C{2.3cm} |N} \hline Graph classes & N.I. posets & Weak N.I. posets & Arbitrary posets &\\ \hline \hline Bipartite & {\sf OutputP} & {\sf OutputP} & \textsc{D}-hard \\ \hline Split & {\sf PolyD} & \textsc{D}-hard & \textsc{D}-hard \\ \hline Co-bipartite & \textsc{D}-hard & \textsc{D}-hard & \textsc{D}-hard \\ \hline \end{tabular} \caption{Summary of the complexity results obtained in Theorems~\ref{thm:maindom},~\ref{thm:split} and~\ref{thm:bipartite} under combined restrictions on graph classes and poset types. {\sf OutputP} stands for output-polynomial, and {\sf PolyD} for polynomial delay. N.I.~stands for neighborhood inclusion, and \textsc{D}-hard for \textsc{Dual-Enum}{}-hard.}\label{fig:sum} \end{figure} We leave open the complexity status of distributive lattice dualization in general. We point out that the results of Theorems~\ref{thm:split} and \ref{thm:bipartite} characterize pairs of antichains (coded by the graph) and distributive lattices (coded by the poset) for which the dualization is tractable.
For future work, we would be interested in characterizations that only depend on the poset, in order to obtain classes of lattices for which the dualization is tractable, as in \cite{defrain2019dualization,elbassioni2009algorithms}, using graph structures presented in this paper.
\section{Introduction} Fix \(0<\beta<1\). Let \(W\) be a wedge in \(\mathbf{R}^3\) of angle \(2\pi\beta\) delimited by two planes. Let \(p\) be a point in the interior of the wedge which is equidistant from the two faces and is located at distance \(1\) from the edge of \(W\). Take \(f\) to be the associated Green's function for the Laplacian with pole at \(p\) and zero normal derivative at the boundary of \(W\). The Gibbons-Hawking ansatz produces a hyperk\"ahler 4-manifold with boundary \(\overline{P}\) endowed with a circle action which preserves all the structure and has the closure of \(W\) as its space of orbits. The rotation that takes one face of \(W\) to the other is lifted to an isometry \(F\) of \(\overline{P}\), fixing the points over the edge of \(W\). We identify points on the boundary of \(\overline{P}\) which correspond under \(F\) to obtain a smooth manifold \(P\) without boundary, endowed with a metric \(g_{RF}\) which has cone angle \(2\pi\beta\) in transverse directions to the points fixed by \(F\). The upshot is that the direction of the edge of \(W\) defines a global complex structure \(I\) on \(P\) with respect to which \(g_{RF}\) is K\"ahler; and the complex manifold \((P, I)\) is indeed a very familiar one. \begin{theorem} \label{THEOREM} \(g_{RF}\) defines a Ricci-flat K\"ahler metric on \(\mathbf{C}^2\); it is invariant under the \(S^1\)-action \( e^{i\theta} (z, w)= (e^{i\theta}z, e^{-i\theta}w) \), it has cone angle \(2 \pi \beta \) along the conic \( C= \{zw=1\} \) and its volume form is \begin{equation*} \mbox{Vol}(g_{RF})= (\beta^2/2) |1-zw|^{2\beta -2} \Omega \wedge \overline{\Omega}, \end{equation*} where \( \Omega = (1/ \sqrt{2}) dzdw \).
Moreover \begin{enumerate} \item \label{Item 1} At points on \(C\) it has cone singularities in a \(C^{\alpha}\) sense -as defined in \cite{DonaldsonKMCS}- with H\"older exponent \( \alpha =1 \) if \( 0< \beta \leq 1/2 \) and \( \alpha = (1/ \beta) -1 \) if \( 1/2< \beta <1 \) \item \label{Item 2} It is asymptotic to the Riemannian cone \(\mathbf{C}_{\beta} \times \mathbf{C}_{\beta} \) at rate \(-4\) if \( 0 < \beta \leq 1/2 \) and \(-2/ \beta \) if \( 1/2 < \beta <1 \) \item \label{Item 3} Its energy is finite and is given by \begin{equation*} E(g_{RF}) = 1 - \beta^2 \end{equation*} \end{enumerate} \end{theorem} We collect some background material on the Green's function of a wedge and the Gibbons-Hawking ansatz in Section \ref{background}. The proof of Theorem \ref{THEOREM} is done in Section \ref{proof thm section}; the identification of \((P, I)\) with \(\mathbf{C}^2\) is already in \cite{DonaldsonKMCS}, we repeat the argument filling in small details. The original content of this article rests on the three items \ref{Item 1}, \ref{Item 2} and \ref{Item 3}, which are proved in Sections \ref{complex structure section}, \ref{asymptotics section} and \ref{energy section} respectively. Finally, in Section \ref{additiona comments}, we discuss the sectional curvature of the metrics \(g_{RF}\) and the limits when \(\beta \to 0 \). The interest in Theorem \ref{THEOREM} comes from the blow-up analysis of the K\"ahler-Einstein (KE) equations in the context of solutions with cone singularities. In the case of smooth KE metrics on complex surfaces the solutions can only degenerate -in the non-collapsed regime- by developing isolated orbifold points, and the blow-up limits at these are the well-known ALE spaces. In the conical case a new feature arises when the curves along which the metrics have singularities degenerate. In this setting, the \(g_{RF}\) furnish a model for blow-up limits at a point where a sequence of smooth curves develops an ordinary double point.
Models for blow-up limits of sequences in which the curves develop an ordinary \(d\)-tuple point are constructed in \cite{martin}. The energy of a Riemannian manifold \((M, g)\) is defined as \[ E(g) = \frac{1}{8\pi^2} \int_M | \mbox{Rm}(g)|^2 dV_g , \] where \(\mbox{Rm}(g)\) denotes the curvature operator of the metric. In our case \(g_{RF}\) is smooth on \(\mathbf{C}^2 \setminus C\) and we integrate on this region. Next we clarify the meaning of the first two items in Theorem \ref{THEOREM}, but first let us introduce some notation. We write \( \mathbf{C}_{\beta} \) for the complex numbers endowed with the singular metric \( \beta^2 |\xi|^{2\beta -2} |d\xi|^2 \). We recognize it as the standard cone of total angle \(2\pi\beta\). Indeed, if we introduce the `cone coordinates' \begin{equation} \xi = r^{1/\beta} e^{i\theta} \end{equation} then \(\beta^2 |\xi|^{2\beta -2} |d\xi|^2 = dr^2 + \beta^2 r^2 d\theta^2 .\) There are two flat model metrics which are relevant to us. The first is \(\mathbf{C}_{\beta} \times\mathbf{C}\), which captures the local behavior of \(g_{RF}\) at points on the conic. In complex coordinates \(g_{loc} = \beta^2 |z_1|^{2\beta -2} |dz_1|^2 + |dz_2|^2 . \) The second is \(\mathbf{C}_{\beta} \times\mathbf{C}_{\beta}\), which models the asymptotic behavior of \(g_{RF}\) at infinity. In complex coordinates \( g_F = \beta^2 |u|^{2\beta -2} |du|^2 + \beta^2 |v|^{2\beta -2} |dv|^2 .\) We introduce `spherical coordinates' by setting \[ \rho^2 = |u|^{2\beta} + |v|^{2\beta} . \] The function \(\rho\) measures the intrinsic distance to \(0\) and it is easy to check that \(g_F = d\rho^2 + \rho^2 \overline{g} \) where \(\overline{g}\) is a metric on the 3-sphere with cone angle \(2\pi\beta\) along the Hopf circles determined by the complex lines \(\{u=0\}\) and \(\{v=0\}\). We proceed with the explanation of Theorem \ref{THEOREM}. \begin{itemize} \item Item \ref{Item 1}.
Let \(p \in C \) and \( (z_1, z_2) \) be complex coordinates centered at \(p\) such that \(C=\{z_1=0\}\). Let \(z_1=r_1^{1/\beta}e^{i\theta_1}\). Set \(\epsilon_1 = dr_1 + i\beta r_1 d\theta_1 \) and \(\epsilon_2=dz_2\). The K\"ahler form associated to \(g_{RF}\) can be written as \[ \omega_{RF} = i\sum_{j,k} a_{j\overline{k}} \epsilon_j \overline{\epsilon_k} \] for smooth functions \(a_{j\overline{k}}\) on the complement of \(\{z_1 =0\}\). We say that $g_{RF}$ is \(C^{\alpha}\) if, for every $p \in C$ and holomorphic coordinates centered at \(p\) as above, the $a_{j\overline{k}}$ extend to $\{z_1=0\}$ as \(C^{\alpha}\) functions in the cone coordinates \( (r_1 e^{i \theta_1}, z_2 ) \). We also require the matrix $(a_{j\overline{k}} (p))$ to be positive definite and that $a_{1\overline{2}} =0$ when \(z_1=0\). In particular these conditions imply that \( A^{-1}g_{loc} \leq g_{RF} \leq A g_{loc} \) for some \( A >0 \). \item Item \ref{Item 2}. We say that \(g_{RF}\) is asymptotic to \(g_F\) at rate \(-\mu \), for some \( \mu>0\), if there is a closed ball \(B \subset \mathbf{C}^2 \) and a map \(\Phi : \mathbf{C}^2 \setminus B \to \mathbf{C}^2 \) which is a diffeomorphism onto its image and with the property that \[ | \Phi^{*}g_{RF} - g_F |_{g_F} \leq A \rho^{-\mu}, \hspace{3mm} | \Phi^{*}I - I |_{g_F} \leq A \rho^{-\mu} \] for some constant \(A>0\); where \(I\) denotes the standard complex structure of \(\mathbf{C}^2 \). We write \( (z, w)= \Phi (u,v) \), so that necessarily \( \Phi (\{uv=0\}) \subset \{zw=1\} \). \end{itemize} \subsection*{Acknowledgments} I learned about the Gibbons-Hawking ansatz from my supervisor, Simon Donaldson, back in 2011 during the first year of my PhD. I would like to thank Simon for his beautiful teaching and the European Research Council Grant 247331 for financial support. This article was written during my visit to the 2017 Summer Program at IMPA, Rio de Janeiro. I would also like to thank IMPA for providing me with excellent working conditions.
\section{Background} \label{background} \subsection{Potential theory on a wedge} \label{green function section} Consider the wedge \[ W = \{(re^{i\tilde{\theta}}, s) \in \mathbf{R}^3 \hspace{3mm} \mbox{s.t.} \hspace{2mm} -\pi \beta < \tilde{\theta} < \pi \beta \} . \] Write \(S=\{0\} \times \mathbf{R}\) for the edge of \(W\) and let \(p=(1,0,0)\). It is a fact that there is a unique continuous and positive function \( \tilde{\Gamma}_p : \overline{W} \setminus \{p\} \to \mathbf{R}_{>0} \) which tends to \(0\) at infinity and solves the boundary value problem $$\left\{\begin{array}{cc} \displaystyle \Delta \tilde{\Gamma}_p= \delta_p & \mbox{ on } W, \\ \\ \displaystyle \frac{\partial \tilde{\Gamma}_p}{\partial\nu}=0 & \mbox{ on } \partial W . \\ \end{array}\right.$$ The first equation is meant to be interpreted in the sense of distributions, \(\Delta\) is the standard Euclidean Laplacian and \(\delta_p \) is the Dirac delta at \(p\). In the second equation \(\nu\) denotes the outward unit normal vector, which is well defined on the complement of the edge. In other words, \(\tilde{\Gamma}_p\) is the Green's function for the Laplace operator with pole at \(p\) associated to the Neumann boundary value problem over \(W\). Standard elliptic regularity theory -Weyl's lemma- implies that \(\tilde{\Gamma}_p\) is smooth on \( W \setminus \{p\} \). Indeed, on \(W\) we can write \begin{equation} \label{representation green} \tilde{\Gamma}_p (x) = \frac{1}{4\pi|x-p|} + F \end{equation} for some smooth harmonic function \(F\). The behavior of \(\tilde{\Gamma}_p\) around points on the edge is more subtle. We use a rotation to identify the faces of \(W\). Write \(\tilde{\theta}=\beta \theta\). We are led to a metric on \(\mathbf{R}^3\) with cone angle \(2\pi\beta\) along \(S=\{0\}\times \mathbf{R}\), \begin{equation} g_{\beta} = dr^2 + \beta^2 r^2 d\theta^2 + ds^2 . \end{equation} Write \(\Delta_{\beta}\) for its Laplacian.
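The cone-coordinate identity \( \beta^2 |\xi|^{2\beta -2} |d\xi|^2 = dr^2 + \beta^2 r^2 d\theta^2 \) behind this change of variables, already used in the introduction, can be checked symbolically. The following \texttt{sympy} snippet is a verification sketch and not part of the argument.

```python
import sympy as sp

r, beta = sp.symbols('r beta', positive=True)

# Cone coordinates: xi = r**(1/beta) * exp(I*theta), so that
#   dxi = (1/beta) r**(1/beta - 1) e^{i theta} dr + i r**(1/beta) e^{i theta} dtheta.
# Conformal factor of the cone metric: beta^2 |xi|^(2 beta - 2).
conformal = beta**2 * (r**(1/beta))**(2*beta - 2)

# Coefficients of dr^2 and dtheta^2 in beta^2 |xi|^(2 beta - 2) |dxi|^2:
coeff_dr2 = conformal * (r**(1/beta - 1) / beta)**2
coeff_dtheta2 = conformal * (r**(1/beta))**2

# The metric should reduce to dr^2 + beta^2 r^2 dtheta^2:
assert sp.simplify(coeff_dr2 - 1) == 0
assert sp.simplify(coeff_dtheta2 - beta**2 * r**2) == 0
```

The \(ds^2\) term of \(g_{\beta}\) is untouched by the change of variables, so the two assertions above cover the whole identity.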
We let \(\Gamma_p(re^{i\theta}, s)= \tilde{\Gamma}_p (re^{i\tilde{\theta}}, s)\). The function \(\Gamma_p\) is continuous on \(\mathbf{R}^3 \setminus\{p\}\), smooth on the complement of \(S \cup\{p\}\) and solves the distributional equation \[ \Delta_{\beta} \Gamma_p = \delta_p . \] It is shown in \cite{DonaldsonKMCS} that \(\Gamma_p\) is \(\beta\)-smooth at points of \(S\), which means that it is a smooth function of the variables \(r^{1/\beta} e^{i\theta}, r^2, s \). Moreover, we have a `polyhomogeneous' expansion \begin{equation} \label{expansion} \Gamma_p = \sum_{j, k \geq 0} a_{j, k} (s) r^{(k/\beta) + 2j} \cos (k\theta) \end{equation} with $a_{j, k}$ smooth functions of \(s\), and which converges uniformly when \(r \leq 1/4\). Allowing the point \(p\) to vary we obtain, in the usual way, the function \( G(p, q)= \Gamma_p (q) \) which provides an inverse for \(\Delta_{\beta} \psi = \phi\) by letting \(\psi(x)= \int G(x, y) \phi (y) dV_{\beta}(y) \). The symmetries and dilations of \(g_{\beta}\) are reflected in the identities \[ G(T_l p, T_l q) = G(p, q), \hspace{2mm} G(R_{\gamma}p, R_{\gamma}q)= G(p, q), \hspace{2mm} G( m_{\lambda} p, m_{\lambda} q) = \lambda^{-1} G(p, q) \] where \( T_l(r, \theta, s) = (r, \theta, s+l) \), \(R_{\gamma} (r, \theta, s) = (r, \theta + \gamma, s) \) and \( m_{\lambda} (r, \theta, s) = (\lambda r, \theta, \lambda s) \) for \(\lambda>0\). There is also the symmetry property \(G(p, q)= G(q, p)\). It follows from the \(\beta\)-smoothness that there is \(\kappa>0\) such that \begin{enumerate} \item \(|G(0, p)| \leq \kappa \) for every \(p\) with \(|p|=1\) \item \(|G(x_1, p)-G(x_2, p)| \leq \kappa |x_1 - x_2|^{1/\beta} \) whenever \(|p|=1\) and \(|x_1|, |x_2| \leq 1/2\) \end{enumerate} It is easy to write the Green's function with the pole located at \(S\), \[ G (0, x)=\frac{1}{4 \pi \beta |x|} .
\] By homogeneity, if \(|x| \geq 2 |p| \) \begin{equation} |G(x, p)- G(x, 0)| = |x|^{-1} |G(|p||x|^{-1}, x|x|^{-1}) - G(0, |x|^{-1}x)| \leq \kappa |x|^{-1-1/\beta} . \end{equation} In particular we see that \(\Gamma_p\) decays as \(|x|^{-1}\). We include an observation regarding formula \ref{representation green} which will be useful for us later on \begin{lemma} \label{positive lemma} \(F>0\) \end{lemma} \begin{proof} Since \(F\) is harmonic it is enough to show that it is positive on \(\partial(B \cap W )\) for any sufficiently large ball \(B\). Since \( \tilde{\Gamma}_p \) is asymptotic to \(1/4\pi\beta|x|\) it follows that \(F>0\) on \(\partial B \cap W \). Note that \(F\) restricted to the edge is equal to \( (\beta^{-1}-1)/ 4\pi |x-p| >0 \). The fact that the normal derivative of \(\tilde{\Gamma}_p\) is zero at the boundary of \(W\) implies that \(F\) has no critical points when restricted to these planes, it then follows that \(F>0\) on \(\partial W \cap B\). \end{proof} To finish this section and for the sake of completeness we comment a bit more on the expansion \ref{expansion}. The coefficients $a_{j, k}$ are given in terms of Bessel's functions and we want to indicate how these arise. The technique is separation of variables. We write \begin{equation} \label{separation var} G(r, \theta, s; r', \theta', s') = \sum_{k=0}^{\infty} G_k(r, r', R) \cos k(\theta-\theta') \end{equation} with \(R=|s-s'|\). We decompose \begin{equation*} \triangle_{\beta} = \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{\beta^2 r^2} \frac{\partial^2}{\partial \theta^2} + \frac{\partial^2}{\partial s^2} = L + \frac{\partial^2}{\partial s^2} . 
\end{equation*} The point is that -for any integer \(k \geq 0 \) and \(\lambda \geq 0 \)- the function \(\phi= J_{\nu}(\lambda r)e^{ik\theta} \) is an eigenfunction for \(L\) with \(L\phi = - \lambda^2 \phi \); where \(\nu= k/\beta \) and \begin{equation} \label{Bessel funct} J_{\nu}= \sum_{j=0}^{\infty} (-1)^j \frac{(z/2)^{\nu + 2j}}{j! (\nu+j)!} \end{equation} is the Bessel function -with \((\nu+j)!\) interpreted as \(\Gamma(\nu+j+1)\) when \(\nu\) is not an integer- which solves \(f^{''} + z^{-1}f' + (1-\nu^2 z^{-2})f =0 \). This leads to a formula for the heat kernel associated to \(\Delta_{\beta}\) \[ (4\pi t)^{-1/2} e^{-R^2/4t} \sum_{k=0}^{\infty} \left( \pi^{-1} \int_{0}^{\infty} e^{-\lambda^2 t} J_{\nu}(\lambda r) J_{\nu}(\lambda r') d\lambda \right) \cos k(\theta - \theta') . \] The Green's function is obtained by integration of the heat kernel with respect to the time parameter and this gives us \begin{equation} \label{formal expresion} G_k (r, r', R)= \int_{0}^{\infty} \int_{0}^{\infty} (4\pi t)^{-1/2} e^{-\lambda^2 t -R^2/4t} J_{\nu}(\lambda r) J_{\nu}(\lambda r') d\lambda dt . \end{equation} We fix \((r', \theta', s')=(1, 0, 0)\), substitute \ref{formal expresion} into \ref{separation var}, expand the Bessel functions into the power series \ref{Bessel funct} and exchange the integral with the summation, to obtain a formal polyhomogeneous expansion as in \ref{expansion}. The validity of the expression is guaranteed provided we check uniform convergence. In order to do this the integral \ref{formal expresion} has to be properly manipulated, suitable bounds must be derived and some expertise with Bessel's functions is required -see \cite{DonaldsonKMCS}-. \subsection{The Gibbons-Hawking ansatz} \label{GH section} This well-known construction provides a `local' correspondence between positive harmonic functions on domains in \( \mathbf{R}^3 \) and hyperk\"ahler structures with \(S^1\) symmetry. More precisely, let \(x_1, x_2, x_3\) be standard coordinates on \(\mathbf{R}^3\) and let \(f\) be a positive harmonic function on a domain \(\Omega \subset \mathbf{R}^3\).
Consider an \(S^1\)-bundle over \(\Omega\) equipped with a connection \footnote{By a connection we mean an \(S^1\)-invariant \(1\)-form on the total space which gives \(1\) when contracted with the derivative of the \(S^1\)-action. Its curvature is \(d\alpha \) and it is a general fact that it is the pull-back by the bundle projection of a closed \(2\)-form on the base whose de Rham cohomology class represents \(-2\pi c_1\). We shall often suppress the pull-back by the bundle projection in our formulas.} \(\alpha\) which satisfies the Bogomolny equation \begin{equation} \label{Bogomolony} d \alpha = - \star df . \end{equation} The hyperk\"ahler structure is then defined by means of the three 2-forms \begin{equation} \omega_i = \alpha dx_i + f dx_j dx_k , \end{equation} here and in the rest of the article we use the convention that the indices \((i, j, k)\) vary over the cyclic permutations of \((1, 2, 3)\). The Bogomolny equation \ref{Bogomolony} is indeed equivalent to the \(\omega_i\) being closed. The bundle projection is then characterized as the hyperk\"ahler moment map for the \(S^1\)-action. \begin{example} \label{euclidean metric} A basic case is that of the Euclidean metric on \(\mathbf{R}^4 \cong \mathbf{C}^2\) equipped with the \(S^1\)-action \(e^{i\theta}(z_1, z_2)=(e^{i\theta}z_1, e^{-i\theta}z_2)\) which preserves its standard hyperk\"ahler structure. The hyperk\"ahler moment map agrees with the Hopf map \begin{equation} \label{Hopf map} H(z_1, z_2)= \left( z_1 z_2, \frac{|z_1|^2-|z_2|^2}{2} \right) . \end{equation} Removing \(0\) gives an \(S^1\)-bundle with first Chern class equal to \(-1\); and it is straightforward to check that \begin{equation} f= \frac{1}{2|x|}, \hspace{3mm} \alpha = \mbox{Re} \left( i \frac{\overline{z_2}dz_2 - \overline{z_1}dz_1}{|z_1|^2 + |z_2|^2}\right) .
\end{equation} \end{example} A unit vector \(v\) in \(\mathbf{R}^3\) determines a parallel complex structure on the hyperk\"ahler manifold, by sending the horizontal lift of the constant vector field \(v\) to the derivative of the \(S^1\)-action. We now review an explicit construction of holomorphic functions for these complex structures in a particular case which will be relevant for us; our reference is Section 5.3 in \cite{DonaldsonKMCS}. Assume that \(\Omega\) is the product of a \emph{simply connected} domain \(U\subset \mathbf{R}^2\) with the \(x_3\) axis. We will consider the complex structure determined by the \(x_3\) direction. We trivialize the bundle and denote the circle coordinate by \(e^{it}\), so that \[ \alpha = dt + \sum_{j=1}^3 a_j dx_j . \] We can change gauge by \( \tilde{t}= t - \int_0^{x_3} a_3(x_1, x_2, q) dq \) -or in other words parallel translate in the \(x_3\) direction- and assume that \(a_3 \equiv 0\). The Bogomolny equation \ref{Bogomolony} amounts to \begin{equation} \label{bogomolony local} \frac{\partial a_2}{\partial x_3} = f_1, \hspace{3mm} \frac{\partial a_1}{\partial x_3}= -f_2, \hspace{3mm} \frac{\partial a_1}{\partial x_2} - \frac{\partial a_2}{\partial x_1} = f_3 ; \end{equation} where \(f_i\) means \(\partial f / \partial x_i \). We further assume that \( f_3 =0 \) on \( \{x_3 =0\} \); in other words, \(\alpha\) restricts to a flat connection over \(U\). Since \(U\) is simply connected we can perform a gauge transformation \( \tilde{t} = t - \phi (x_1, x_2) \) and assume that \( a_1 = a_2 = 0 \) on the slice $\{x_3=0\}$.
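The last gauge transformation can be made explicit; the following is a sketch of this step. Since \(f_3 = 0\) on \(\{x_3=0\}\), the third equation in \ref{bogomolony local} gives \(\partial a_1/\partial x_2 = \partial a_2 / \partial x_1 \) there, so the restriction of \(a_1 dx_1 + a_2 dx_2\) to \(U \times \{0\}\) is closed. As \(U\) is simply connected we can take \[ \phi (x_1, x_2) = - \int_{\gamma} \left( a_1 dx_1 + a_2 dx_2 \right) \] along any path \(\gamma \subset U \times \{0\}\) from a fixed base point, and the gauge change \(\tilde{t} = t - \phi\) replaces \(a_i\) by \(a_i + \partial \phi / \partial x_i = 0\) on the slice.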
The horizontal lifts of the coordinate vectors \(\partial/\partial x_i\) are given by $$ \tilde{\frac{\partial}{\partial x_1}}= \frac{\partial}{\partial x_1} - a_1 \frac{\partial}{\partial t}, \hspace{3mm} \tilde{\frac{\partial}{\partial x_2}}= \frac{\partial}{\partial x_2} - a_2 \frac{\partial}{\partial t} , \hspace{3mm} \tilde{\frac{\partial}{\partial x_3}}= \frac{\partial}{\partial x_3} ; $$ and the complex structure is defined as \begin{equation} I \tilde{\frac{\partial}{\partial x_1}} = \tilde{\frac{\partial}{\partial x_2}}, \hspace{3mm} I \frac{\partial}{\partial x_3} = -f \frac{\partial}{\partial t} . \end{equation} The Cauchy-Riemann equations for a function $h$ to be holomorphic w.r.t. $I$ are then given by \begin{equation} \label{Cauchy Riemann} \frac{\partial h}{\partial x_1} + i \frac{\partial h}{\partial x_2} = (a_1 + i a_2) \frac{\partial h}{\partial t}, \hspace{4mm} \frac{\partial h}{\partial x_3}= if \frac{\partial h}{\partial t} . \end{equation} We look for a function $h$ which has weight one for the circle action, so that \( {\partial h}/{\partial t} = ih \). We use separation of variables and write $h = \tilde{h} e^{it}$ with $\tilde{h} = \tilde{h} (x_1, x_2, x_3)$. The second equation in \ref{Cauchy Riemann} gives us $\partial \tilde{h}/ \partial x_3 = -f \tilde{h}$; so that \begin{equation} \label{hol1} h= h_0 e^{-u} e^{it}, \hspace{3mm} u = \int_0^{x_3} f(x_1, x_2, q) dq, \hspace{3mm} h_0 = h_0(x_1, x_2) . \end{equation} Recall that $a_1 = a_2 = 0$ on the slice $\lbrace x_3=0 \rbrace$. Let $h_0$ be any solution of the equation \begin{equation} \label{hol2} \frac{\partial h_0}{\partial x_1} + i \frac{\partial h_0}{\partial x_2} = 0 , \end{equation} in other words a holomorphic function of $x_1 + i x_2$. \begin{lemma} \label{hol function lemma} The function $h$ defined by \ref{hol1} and \ref{hol2} solves \ref{Cauchy Riemann}. \end{lemma} \begin{proof} The Cauchy-Riemann equations \ref{Cauchy Riemann} form an over-determined system.
It follows from the definition of $h$ that it solves the second equation in \ref{Cauchy Riemann} and that, a priori, it solves the first equation in \ref{Cauchy Riemann} only on the slice \(\{x_3 =0\} \). The point is that \(h\) is a solution thanks to the integrability condition provided by the Bogomolny equation \ref{Bogomolony}, as the following computation shows $$\frac{\partial h}{\partial x_1} + i \frac{\partial h}{\partial x_2} = e^{-u}e^{it} \left( \frac{\partial h_0}{\partial x_1} + i \frac{\partial h_0}{\partial x_2} - h_0 \frac{\partial u}{\partial x_1} - h_0 i \frac{\partial u}{\partial x_2} \right) = i h \left(- \frac{\partial u}{\partial x_2} + i \frac{\partial u}{\partial x_1} \right)$$ and it follows from \ref{bogomolony local} that \[ \frac{\partial u}{\partial x_1} = a_2 , \hspace{3mm} \frac{\partial u}{\partial x_2} = -a_1 . \] \end{proof} Similarly, $h_0 e^u e^{-it}$ defines a holomorphic function with weight $-1$ for the circle action. Given any holomorphic function \(h_0\) of the complex variable \(x_1 + i x_2\) on \(U\), the pair \begin{equation} z= h_0 e^{-u}e^{it}, \hspace{4mm} w=h_0 e^u e^{-it} \end{equation} defines an \(S^1\)-equivariant holomorphic map from \( (\Omega \times S^1, I) \) to a domain in \(\mathbf{C}^2 \) equipped with the circle action \(e^{it}(z, w)= (e^{it}z, e^{-it}w)\). \begin{example} The Taub-NUT metric. Let \(c>0\) and consider the harmonic function \[f = 2c + \frac{1}{2|x|} . \] Let \(\alpha\) be the connection on the Hopf bundle \( H: \mathbf{R}^4 \setminus \{0\} \to \mathbf{R}^3 \setminus \{0\} \) given in Example \ref{euclidean metric}, so that \( d\alpha = - \star df \). We look at the complex structure \(I\) on \(\mathbf{R}^4\) determined by the \(x_3\)-axis. \footnote{On \( \mathbf{R}^4 \) we have standard coordinates \(s_1, s_2, s_3, s_4 \).
For notational convenience we write \(z_1=s_1 + is_2\) and \(z_2 = s_3 + is_4\), but \(z_1, z_2\) are \emph{not} necessarily complex coordinates for \( (\mathbf{R}^4, I)\).} We want a suitable trivialization of the bundle which fits into the context of Lemma \ref{hol function lemma}. Note that it follows from \ref{Hopf map} that \[ 2|x| = |z_1|^2 + |z_2|^2, \] so \( |z_1| = (|x| - x_3)^{1/2} \) and \(|z_2| =(|x| + x_3)^{1/2} \). Write \( \xi= x_1 + i x_2 = |\xi| e^{i\theta} \) and let \(U\) be the complement of the negative real axis in the \( \xi \)-plane, so that \(-\pi < \theta < \pi\). Over \( \Omega \) we have the following trivialization \[ \Phi (x, e^{it})=(z_1, z_2)= \left( (|x|-x_3)^{1/2}e^{i\theta/2}e^{it}, (|x| + x_3)^{1/2}e^{i\theta/2}e^{-it} \right) \] and it is easy to check that \[ \alpha = dt - \frac{x_3}{2|x|}d\theta , \] which clearly satisfies \(a_3 \equiv 0\) and \(a_1 = a_2 = 0\) when \(x_3=0\). \footnote{The restriction of the Hopf connection to the punctured plane $\{x_3 =0\}\setminus \{(0,0)\}$ is a flat connection with \emph{non-trivial} holonomy; this forces us to introduce a cut in the plane in order to define the desired trivialization with \(a_1=a_2=0\).} We apply Lemma \ref{hol function lemma} with \(h_0 = \sqrt{\xi}\). We can easily compute \[ u= \int_0^{x_3} \left( 2c + \frac{1}{2 \sqrt{|\xi|^2 + q^2}} \right) dq = 2cx_3 + \frac{1}{2} \log \left( x_3 + \sqrt{x_3^2 + |\xi|^2} \right) -\frac{1}{2} \log |\xi| \] and therefore \begin{equation} \label{Taub Nut coordinates} z= e^{c(|z_1|^2 - |z_2|^2)} z_1, \hspace{3mm} w= e^{c(|z_2|^2 -|z_1|^2)}z_2 . \end{equation} It is then clear that the map \( (z, w) = (z(z_1, z_2), w(z_1, z_2)) \) extends to define a biholomorphism between \( (\mathbf{R}^4, I) \) and \(\mathbf{C}^2\). As a final remark we relate the diffeomorphism \ref{Taub Nut coordinates} to LeBrun's expression for the K\"ahler potential of the Taub-NUT metric -see \cite{LeBrunTaubNut}-.
Indeed, \( \omega = \alpha dx_3 + f dx_1dx_2 \) and it is easy to check that \( \omega = i \partial \overline{\partial} (|x| + c (x_1^2 + x_2^2 + 2x_3^2))\). Since \(|x|= (1/2)(|z_1|^2 + |z_2|^2) \) and \( x_1^2 + x_2^2 + 2 x_3^2 = (1/2) (|z_1|^4 + |z_2|^4) \) we obtain \[ \omega = \frac{i}{2} \partial \overline{\partial} \left( |z_1|^2 + |z_2|^2 + c(|z_1|^4 + |z_2|^4) \right) , \] with \(|z_1|, |z_2|\) determined implicitly in terms of \(z, w\) by means of \ref{Taub Nut coordinates}. \end{example} \subsection{4-dimensional Riemannian geometry} \label{4d section} Let \((M, g)\) be an oriented Riemannian four-manifold and let \(\mbox{Rm}(g) : \Lambda^2 \to \Lambda^2 \) be its curvature operator. The decomposition \(\Lambda^2 = \Lambda^{+} \oplus \Lambda^{-} \) of the 2-forms into self-dual and anti-self-dual parts given by the Hodge star operator of \(g\) determines the well-known decomposition of \( \mbox{Rm}(g) \) into four three by three blocks \[ \left( \begin{array}{c|c} \frac{s}{12} + W^{+} & \mathring{r} \\ \hline \mathring{r} & \frac{s}{12} + W^{-} \end{array} \right) . \] These blocks can also be interpreted in terms of the curvature \(F_{\nabla}\) of the Levi-Civita connection on the bundles \( \Lambda^{+}, \Lambda^{-} \). Indeed, if \( \theta_1, \theta_2, \theta_3\) is an orthonormal triple of anti-self-dual forms we can write \[ F_{\nabla} (\theta_i) = F_j \otimes \theta_k - F_k \otimes \theta_j \] for some 2-forms \(F_1, F_2, F_3\). We write the anti-self-dual parts as \(F_{i}^{-} = \sum_{j=1}^{3} c_{ij} \theta_j \). The fact is that \( (c_{ij})_{1 \leq i, j \leq 3} \) agrees with the block \( s/12 + W^{-} \); and similarly for the other blocks. The curvature 2-forms are given by \(F_i = dT_i - T_j \wedge T_k\) where \(T_1, T_2, T_3 \) are the connection 1-forms \( \nabla (\theta_i) = T_j \otimes \theta_k - T_k \otimes \theta_j \). The torsion-free property gives us \begin{equation} \label{cartan} d\theta_i = T_j \wedge \theta_k - T_k \wedge \theta_j .
\end{equation} A simple algebraic fact -see Proposition 2.3 in \cite{FineGauge}- is that the system of equations \ref{cartan} for \(T_1, T_2, T_3\) has the unique solution \begin{equation} \label{formula connection} 2T_i = \star \psi_i + \star (\star \psi_j \wedge \theta_k) - \star (\star \psi_k \wedge \theta_j) , \end{equation} where \(\psi_i = d\theta_i \) and \(\star\) denotes the Hodge operator of \(g\). This fact can be interpreted as a characterization of the Levi-Civita connection on \(\Lambda^{-}\) as the unique connection which is metric and torsion-free, somewhat analogous to Cartan's lemma. We use the previous discussion to compute the energy distribution \(|\mbox{Rm}(g)|^2 \) of a metric \(g= f dx^2 + f^{-1} \alpha^2\) given by the Gibbons-Hawking ansatz. There is an orthonormal frame of self-dual 2-forms \( \omega_i = \alpha dx_i + f dx_j dx_k \); since \(d \omega_i =0\) the only non-vanishing part of the curvature operator is \(W^{-}\), the anti-self-dual Weyl curvature tensor, and it is a general fact that \(W^{-}\) is symmetric and trace-free. We consider the orthonormal frame of \(\Lambda^{-}\) given by \(\theta_i = \alpha dx_i - f dx_j dx_k \); so \(d \theta_i = -2f_i dx_1 dx_2 dx_3 \) and \ref{formula connection} gives us \[ T_i = \frac{f_i}{f^2}\alpha - \frac{f_j dx_k -f_k dx_j}{f} . \] We compute the curvature forms and express their anti-self-dual components with respect to the \(\theta_i\) frame to obtain \[ c_{ij} = \frac{f_{ij}}{f^2} - 3 \frac{f_i f_j}{f^3} + \delta_{ij} \frac{|Df|^2}{f^3} .\] We can write this more succinctly as \begin{equation} \label{curv op GH met} W^{-} = (-f/2) \mathring{\mbox{Hess}} (f^{-2}) , \end{equation} where \(\mathring{\mbox{Hess}}\) denotes the standard Euclidean trace-free Hessian. In order to achieve our goal we proceed by straightforward computation, \[ |\mbox{Rm}(g)|^2 = \sum_{i,j} c_{ij}^2 = 6 f^{-6} |Df|^4 + f^{-4} \sum_{i,j} f_{ij}^2 - 6 f^{-5} \sum_{i,j} f_{ij}f_if_j .
\] Let \(\Delta\) be the Euclidean Laplacian, so that \(\Delta f =0 \). It follows easily that \[ \Delta f^{-1} = 2f^{-3} |Df|^2, \hspace{3mm} (1/2) \Delta \Delta f^{-1} = 12 f^{-5} |Df|^4 + 2 f^{-3} \sum_{i,j} f_{ij}^2 - 12 f^{-4} \sum_{i,j} f_{ij}f_if_j . \] Comparing we obtain the formula -see Remark 2.4 in \cite{GrossWilson}- \begin{equation} \label{energy distribution} |\mbox{Rm}(g)|^2 = \frac{1}{4 f} \Delta \Delta f^{-1} . \end{equation} \section{Proof of Theorem} \label{proof thm section} We go back to the construction of the metric \(g_{RF}\) mentioned in the Introduction. It is convenient to perform the gluing right at the beginning. Equivalently, we start with \begin{equation} g_{\beta} = dr^2 + \beta^2 r^2 d\theta^2 + ds^2. \end{equation} Write \(\star_{\beta}\) for its Hodge operator, which acts on \(1\)-forms as \begin{equation*} \star_{\beta} dr = \beta r d\theta ds, \hspace{3mm} \star_{\beta} \beta r d\theta = ds dr, \hspace{3mm} \star_{\beta} ds = \beta r drd\theta . \end{equation*} Let $p=(1,0, 0)$ and let $\Gamma_p$ be the Green's function for \(\Delta_{\beta}\) with pole at \(p\) -see Subsection \ref{green function section}-. We take \(f=2\pi\Gamma_p \), so that \( -\star_{\beta} df \) integrates to \(2\pi\) over spheres centered at \(p\). Note that \begin{equation*} d \star_{\beta} df = (\triangle_{\beta} f) dV_{\beta} =0 \end{equation*} where \(dV_{\beta} = \beta r dr d\theta ds \) denotes the volume form. Let \( \Pi : P_0 \to \mathbf{R}^3 \setminus (S \cup \{p\}) \) be the \(S^1\)-bundle with \( c_1 (P_0)=-1 \). We shall show that there is a connection \(\alpha\) on \(P_0\) with curvature \(-\star_{\beta} df \) and with trivial holonomy along small loops that shrink to \(S\); note that these two conditions determine \(\alpha\) uniquely up to gauge equivalence. We consider the metric \begin{equation} g_{RF}= f g_{\beta} + f^{-1} \alpha^2 . \end{equation} It is clear that \(g_{RF}\) is locally hyperk\"ahler.
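As a sanity check on the normalization of \(f\): near \(p\) the metric \(g_{\beta}\) is Euclidean and \(f = \frac{1}{2|x-p|} + O(1)\), with the \(O(1)\) part smooth and harmonic -compare with \ref{greens} below-, so \[ -\int_{S_{\epsilon}(p)} \langle Df, \nu \rangle \, dA = -4\pi \epsilon^2 \, \frac{d}{d\epsilon} \left( \frac{1}{2\epsilon} \right) = 2\pi , \] the smooth harmonic part contributing zero flux; this is consistent with \(c_1(P_0)=-1\).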
On the other hand we can extend \(P_0\) to \( P = P_0 \sqcup(\mathbf{R} \times S^1) \sqcup \{\tilde{p} \} \) so that the \(S^1\)-action extends smoothly to \(P\), acting freely on \( P \setminus \{ \tilde{p} \} \) and fixing \(\tilde{p} \). The map \(\Pi\) also extends smoothly as the orbit projection \(\Pi : P \to \mathbf{R}^3 \) with \( \Pi (\tilde{p}) = p \). We shall see that \(g_{RF}\) extends smoothly over \(\tilde{p}\) and as a metric with cone singularities along \( \Pi^{-1} (S) \cong \mathbf{R} \times S^1 \). \subsection{Complex structure} \label{complex structure section} Let \(\mathbf{R}_{\ast}^2\) be the \(re^{i\theta}\)-plane with the point \((1,0)\) removed. Define the 1-form on \(\mathbf{R}_{\ast}^2 \times \mathbf{R} \) \begin{equation} \alpha_0 = a_1 dr + a_2 \beta r d\theta \end{equation} where \begin{equation} \label{connection} a_1 = -\frac{1}{\beta r} \int_0^s \frac {\partial f}{\partial \theta} (r, \theta, q) dq, \hspace{4mm} a_2 = \int_0^s \frac{\partial f}{\partial r} (r, \theta, q) dq . \end{equation} It is clear that $\alpha_0$ is smooth on $(\mathbf{R}_{\ast}^2 \times \mathbf{R})\setminus S$; and the functions $a_1, a_2$ extend continuously by $0$ over $S$ as $C^{\alpha}$ functions for $\alpha = \beta^{-1} -1$. The computations that follow are done over the complement of $S$.
\begin{claim} \begin{equation} d\alpha_0 = - \star_{\beta} df \end{equation} \end{claim} \begin{proof} We have that $$ d \alpha_0 = -\frac{\partial a_2}{\partial s} \beta r d\theta ds + \frac{\partial a_1}{\partial s} ds dr + \left( \frac{\partial a_2}{\partial r} + \frac{1}{r}a_2 - \frac{1}{\beta r}\frac{\partial a_1}{\partial \theta} \right) \beta r dr d\theta$$ and $$ \star_{\beta} df = \frac{\partial f}{\partial r} \beta r d\theta ds + \frac{1}{\beta r} \frac{\partial f}{\partial \theta} ds dr + \frac{\partial f}{\partial s} \beta r dr d\theta$$ It is clear from \ref{connection} that ${\partial a_2}/{\partial s} = {\partial f}/{\partial r}$ and ${\partial a_1} /{\partial s} =- ({1}/{\beta r}) {\partial f}/{\partial \theta}$. It follows by symmetry that ${\partial f}/{\partial s} =0$ when $s=0$, so that $ {\partial f}/{\partial s} = \int_0^s \frac{\partial^2 f}{\partial s^2}(., t) dt$. Using that $\triangle_{\beta} f =0$ we get $$ \frac{\partial a_2}{\partial r} + \frac{1}{r}a_2 - \frac{1}{\beta r}\frac{\partial a_1}{\partial \theta} = \int_{0}^{s} \left( \frac{\partial^2 f}{\partial r^2} + \frac{1}{r} \frac{\partial f}{\partial r} + \frac{1}{\beta^2 r^2} \frac{\partial^2 f}{\partial \theta^2} \right) (r, \theta, q) dq = - \int_0^s \frac{\partial^2 f}{\partial s^2}(r, \theta, q) dq = - \frac{\partial f}{\partial s} $$ \end{proof} Consider the product $\mathbf{R}_{\ast}^2 \times \mathbf{R} \times S^1$ and write points in the circle factor as $e^{it}$. 
Define the connection \(1\)-form $\alpha = dt + \alpha_0$ and the metric \begin{equation} g_{RF} = f g_{\beta} + f^{-1} \alpha^2 . \end{equation} The horizontal lifts of $\partial/\partial r, \partial/\partial \theta, \partial/\partial s$ are $$ \tilde{\frac{\partial}{\partial r}}= \frac{\partial}{\partial r} - a_1 \frac{\partial}{\partial t}, \hspace{3mm} \tilde{\frac{\partial}{\partial \theta}}= \frac{\partial}{\partial \theta} - a_2 \beta r \frac{\partial}{\partial t} , \hspace{3mm} \tilde{\frac{\partial}{\partial s}}= \frac{\partial}{\partial s} . $$ We consider the complex structure determined by the \(s\)-axis \begin{equation} I \tilde{\frac{\partial}{\partial r}} = \frac{1}{\beta r} \tilde{\frac{\partial}{\partial \theta}}, \hspace{3mm} I \frac{\partial}{\partial s} = -f \frac{\partial}{\partial t} . \end{equation} The associated 2-form is \begin{equation} \omega_{RF} = g_{RF} (I . , .) = \alpha ds + f \beta r dr d\theta . \end{equation} \begin{claim} $(g_{RF}, \omega_{RF}, I)$ defines a K\"ahler structure on $\mathbf{R}_{\ast}^2 \times \mathbf{R} \times S^1$. \end{claim} \begin{proof} The equation $d\alpha = - \star_{\beta} df$ implies that $d\alpha ds = - \frac{\partial f}{\partial s} \beta r drd\theta ds$ and this gives $d\omega_{RF} = 0$. To prove that $I$ is integrable one can check that $$ \left[ \tilde{\frac{\partial}{\partial r}} + i \frac{1}{\beta r} \tilde{\frac{\partial}{\partial \theta}}, \frac{\partial}{\partial s} -i f \frac{\partial}{\partial t} \right] =0 . $$ Strictly speaking we don't need to do this since we are going to find complex coordinates in what follows. \end{proof} The Cauchy-Riemann equations for a function $h$ to be holomorphic w.r.t. $I$ are given by \begin{equation} \label{CR} \frac{\partial h}{\partial r} + i \frac{1}{\beta r} \frac{\partial h}{\partial \theta} = (a_1 + i a_2) \frac{\partial h}{\partial t}, \hspace{4mm} \frac{\partial h}{\partial s}= if \frac{\partial h}{\partial t} .
\end{equation} We look for a function $h$ which has weight one for the circle action; this goes as in Lemma \ref{hol function lemma}. Let $h_0$ be any solution of the equation \begin{equation} \label{h2} \frac{\partial h_0}{\partial r} + i \frac{1}{\beta r} \frac{\partial h_0}{\partial \theta} = 0 , \end{equation} that is, \(h_0\) is a holomorphic function of the variable $r^{1/\beta} e^{i\theta}$. Set \begin{equation} \label{h1} h= h_0 e^{-u} e^{it}, \hspace{3mm} u = \int_0^s f(r, \theta, q) dq . \end{equation} \begin{claim} The function $h$ defined by \ref{h1} and \ref{h2} solves \ref{CR}. Similarly, holomorphic functions with weight $-1$ for the circle action are given by $h_0 e^u e^{-it}$. \end{claim} \begin{proof} $$\frac{\partial h}{\partial r} + i \frac{1}{\beta r} \frac{\partial h}{\partial \theta} = e^{-u}e^{it} \left( \frac{\partial h_0}{\partial r} + i \frac{1}{\beta r} \frac{\partial h_0}{\partial \theta} - h_0 \frac{\partial u}{\partial r} - h_0 \frac{i}{\beta r} \frac{\partial u}{\partial \theta} \right) = i h \left(- \frac{1}{\beta r} \frac{\partial u}{\partial \theta} + i \frac{\partial u}{\partial r} \right)$$ and $ - \frac{1}{\beta r} \frac{\partial u}{\partial \theta} + i \frac{\partial u}{\partial r} = a_1 + i a_2$. \end{proof} Consider the segment in the \(re^{i\theta}\)-plane given by the points in the real line which are $\geq 1$, let $U$ be the complement of that segment and \(U^{\ast}= U \setminus \{0\} \). Write $c=\beta^{-1}$. The function $1-r^c e^{i\theta}$ maps $U$ to the complement of the negative real axis, so \begin{equation}\label{h0} h_0 = (1-r^c e^{i\theta})^{1/2} \end{equation} is a well-defined function on $U$ which satisfies \ref{h2}. From now on we set $h_0$ to be given by \ref{h0} and define \begin{equation} z= h_0 e^{-u}e^{it}, \hspace{4mm} w=h_0 e^u e^{-it} . \end{equation} Let \(V \subset \mathbf{C}^2 \) be the open set of points \((z, w)\) such that \(zw\notin \mathbf{R}_{\leq 0} \). Write \(C=\{zw =1\}\).
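That \ref{h0} satisfies \ref{h2} can be checked directly. Writing \(\zeta = r^c e^{i\theta}\), \[ \left( \frac{\partial}{\partial r} + \frac{i}{\beta r} \frac{\partial}{\partial \theta} \right) \zeta = c r^{c-1}e^{i\theta} - \frac{1}{\beta} r^{c-1} e^{i\theta} = 0 , \] since \(c=\beta^{-1}\); hence any holomorphic function of \(\zeta\), and in particular \(h_0 = (1-\zeta)^{1/2}\) on \(U\), solves \ref{h2}.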
\begin{claim} The map \(H =(z, w)\) gives a biholomorphism between \( (U^{*} \times \mathbf{R} \times S^1, I) \) and \(V \setminus C \). Moreover \(H\) extends to a homeomorphism from \( U \times \mathbf{R} \times S^1 \) onto \(V\), with \(H (\{0\} \times \mathbf{R} \times S^1 ) = C \). \end{claim} \begin{proof} First we provide an inverse for \(H\), showing the homeomorphism part. The pair \((r, \theta)\) is determined by \( r^c e^{i\theta}= 1- zw \). The function \(u(s)=\int_{0}^{s} f(r, \theta, q) dq \) is increasing, since \(u' =f >0\), and \(\lim_{s \to \pm \infty} u(s)= \pm \infty\); so \((s, e^{it})\) is given by \(e^{-u}e^{it}=h_0^{-1}z\). Next we compute the Jacobian of the map \(H\). Let \(\eta_1 = dr + i \beta r d\theta \) and \( \eta_2 = ds - i f^{-1}\alpha \), so that \(\{\eta_1, \eta_2\}\) is a basis of the \((1, 0)\)-forms on \( U^{*} \times \mathbf{R} \times S^1 \). It is straightforward to check that \[ dz = z \left( h_0^{-1} \frac{\partial h_0}{\partial r} - a_2 - i a_1 \right) \eta_1 - zf \eta_2 \] \[ dw = w \left( h_0^{-1} \frac{\partial h_0}{\partial r} + a_2 + i a_1 \right) \eta_1 + wf \eta_2 . \] The determinant of the linear map that takes \(\{\eta_1, \eta_2\}\) to \(\{dz, dw\}\) is \(-fcr^{c-1}e^{i\theta}\) and is non-zero on \( U^{*} \times \mathbf{R} \times S^1 \). \end{proof} We can compose the inverse of \(H\) with the projection of the trivial \(S^1\)-bundle \( \mbox{pr} (re^{i\theta}, s, e^{it}) = (re^{i\theta}, s) \) to obtain the map \(\Pi = \mbox{pr} \circ H^{-1} : V \to \mathbf{R}^3 \). We want to show that \(\Pi\) extends to all of \(\mathbf{C}^2\), as an orbit map for the \(S^1\)-action \(e^{it}(z, w)= (e^{it}z, e^{-it}w)\). Clearly the \(r, \theta\) coordinates of \(\Pi\) extend, since \(r^c e^{i\theta} = 1-zw\). The key step is to extend the function \(s\). \begin{claim} \(s \) extends to \(\mathbf{C}^2\), smoothly on the complement of \(C\).
The map \(\Pi : \mathbf{C}^2 \to \mathbf{R}^3 \) is an orbit projection for the \(S^1\)-action \(e^{it}(z, w)= (e^{it}z, e^{-it}w)\) with \(\Pi(0)=p\) and \(\Pi(C)= \{0\}\times\mathbf{R}\). \end{claim} \begin{proof} The coordinate \(s\) is determined by \(e^{-u}e^{it}=h_0^{-1}z\); since \(|h_0|=|z|^{1/2}|w|^{1/2}\) we obtain \(e^{-u} =|z|^{1/2} |w|^{-1/2} \) and taking logarithms \begin{equation} \label{function s} \int_{0}^{s} f(re^{i\theta}, q)dq = \frac{1}{2} \log \left( \frac{|w|}{|z|} \right) . \end{equation} It is then clear that in the complement of \(\{zw = 0\} \) the map \(\Pi\) extends with the desired properties. We assume that \(|z w| < \epsilon\) for some small \(\epsilon\). Since \(r^c e^{i\theta} = 1-zw\), we can suppose that \(-\pi<\theta<\pi\). Let \(\tilde{\theta}=\beta \theta \), so that \begin{equation} \label{greens} f(re^{i\theta}, q) = \frac{1}{2 \sqrt{|re^{i\tilde{\theta}}-1|^2 +q^2}} +\frac{F}{2} \end{equation} for some smooth harmonic \emph{positive} function \(F= F(re^{i\tilde{\theta}}, q)\) -see Lemma \ref{positive lemma}-. Write \( \xi= re^{i\tilde{\theta}} -1 = (1-zw)^{\beta}-1 = -\beta zw \psi \) for some function \(\psi\), holomorphic in \(zw\), with \(\psi(0)=1 \). We plug \ref{greens} into \ref{function s} to obtain \[ \log \left( \frac{s+\sqrt{s^2 + |\xi|^2}}{|\xi|} \right) + \int_{0}^{s} F = \log \left( \frac{|w|}{|z|} \right) . \] We exponentiate and re-arrange terms to obtain \begin{equation} \label{global s} 2s= \beta |w|^2 |\psi| e^{-\int_{0}^{s}F} - \beta |z|^2 |\psi| e^{\int_{0}^{s}F} ; \end{equation} note that in the standard case of \(\beta=1\), we have \(F \equiv 0\), \(\psi \equiv 1\) and therefore \(2s = |w|^2 - |z|^2\) -see \ref{Hopf map}-. The claim follows from \ref{global s}. The map \(\Pi\) sends the \(\{z=0\}\) complex line to the ray \(\{(1, 0, s): s>0\}\) via \(2s e^{\int_{0}^{s}F(1, 0, q)dq} = \beta |w|^2\), and the fact that \(F>0\) implies that \(s \to se^{\int_{0}^{s}F}\) is a diffeomorphism of \([0, \infty)\).
Similarly, \(\Pi\) sends \(\{w=0\}\) to \(\{(1, 0, s): s<0\}\) via \(2s e^{\int_{s}^{0}F(1, 0, q)dq} =- \beta |z|^2\). \end{proof} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{BundleProjection} \caption{} \label{fig:bundleprojection} \end{figure} In the \(z, w\) coordinates, \(I\) is the standard complex structure. Since \(I ds = f^{-1}\alpha\), it follows that \(\alpha\) extends as a connection \(1\)-form on \(\mathbf{C}^2 \setminus \{0\}\) which is smooth on the complement of the conic. It is then standard to show -see for example \cite{AKL}- that \(g_{RF}\) extends smoothly over \(0\), since \(g\) is Euclidean in a neighborhood of \(p\), \(f\) differs from the Newtonian potential by a smooth harmonic function and \(\alpha\) is a smooth connection in a punctured neighborhood of \(0\). The function \(s\) is a moment map for the circle action, in the sense that \(\omega_{RF}(Y, \cdot)= ds \) where \(Y=2 \mbox{Im} \left( w \partial/ \partial w - z \partial / \partial z \right)\). We have \(s>0\) when \(|w|>|z|\) and \(s<0\) when \(|z|>|w|\). The metric \(g_{RF}\) is invariant under the map \((z, w) \to (w, z) \), which lifts \((re^{i\theta}, s) \to (re^{i\theta}, -s)\). See Figure \ref{fig:bundleprojection}. The statement about the cone singularities along \(C\) also follows easily. Fix a point of \(C\); we take holomorphic coordinates \(z_1 = zw -1\) and \(z_2 = w\), say, so \(C= \{z_1=0\}\). Write \(z_1=r^c e^{i\theta}\) and let \(\epsilon=dr+i\beta rd\theta\); we want to show that \(\omega_{RF}\) has \(C^{\alpha}\) coefficients with respect to the basis \(\{\epsilon \overline{\epsilon}, \epsilon \overline{dz_2}, dz_2 \overline{\epsilon}, dz_2 \overline{dz_2} \}\).
Comparing with our previous notation we see that \(\epsilon = \eta_1 \) and \[\omega_{RF} = (if/2) (\epsilon \overline{\epsilon} + \eta_2 \overline{\eta_2} ) , \] where \( \eta_2 = \gamma_1 dz_2 - \gamma_2 \epsilon \) with \[ \gamma_1 = (w f)^{-1}, \hspace{3mm} \gamma_2 = f^{-1} \left( h_0^{-1} \frac{\partial h_0}{\partial r} + a_2 + i a_1 \right) . \] The first item in Theorem \ref{THEOREM} then follows from \(h_0^{-1} \partial h_0 / \partial r = -c r^{c-1} e^{i\theta}/(2zw) \) and the formula \ref{connection} for \(a_1, a_2\). To conclude this first part we compute the volume form of \(g_{RF}\). We write \( \Omega = (1/\sqrt{2}) dz dw \). \begin{claim} \begin{equation} \label{volume form} \omega_{RF}^2 = \beta^2 |1-zw|^{2\beta-2} \Omega \wedge \overline{\Omega} \end{equation} \end{claim} \begin{proof} It is immediate from the previous computation of the Jacobian of the map \(H=(z, w)\) that \[ \Omega \wedge \overline{\Omega} = (1/2)f^2c^2r^{2c-2}\eta_1 \eta_2 \overline{\eta_1 \eta_2}. \] We use that \[ \eta_1 \overline{\eta_1} = -2i\beta r dr d\theta, \hspace{2mm} \eta_2 \overline{\eta_2} = 2i f^{-1}ds \alpha, \hspace{2mm} \omega_{RF}^2=2f\beta r \alpha ds dr d\theta \] and \( r= |1 -zw|^{\beta} \) to conclude the claim. \end{proof} \subsection{Asymptotics} \label{asymptotics section} There is a simple explanation, in terms of the complex curve \(C\), for the exponent \(-2 /\beta\) in Item \ref{Item 2} of Theorem \ref{THEOREM}. Among the diffeomorphisms \(F\) of \( \mathbf{C}^2 \) which, outside a compact set, take the conic \(C=\{zw=1\}\) to its asymptotic lines \(\{zw=0\}\), the ones which are closest to being holomorphic satisfy \( |\overline{\partial} F (x)| = O (|x|^{-2}) \). On the other hand \(\rho^2 = |z|^{2\beta}+ |w|^{2\beta}\) and therefore \( |\overline{\partial} F (x)| = O (\rho^{-2/ \beta})\).
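To compare the two rates explicitly, note that if \(|x|\) denotes the Euclidean norm of \(x \in \mathbf{C}^2\) then \[ \rho^2 = |z|^{2\beta} + |w|^{2\beta} \asymp \left( |z|^2 + |w|^2 \right)^{\beta} = |x|^{2\beta} , \] with constants depending only on \(\beta\); so a bound \(O(|x|^{-2})\) is the same as a bound \(O(\rho^{-2/\beta})\).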
Our proof of Item \ref{Item 2} is based on the simple observation that applying the Gibbons-Hawking ansatz to \(2 \pi \Gamma_0\) gives rise to \( \mathbf{C}_{\beta} \times \mathbf{C}_{\beta} \). The asymptotic behavior of \(g_{RF} \) then follows from the fact that \(\Gamma_p \) is asymptotic to \(\Gamma_0\). The `matching map' \(\Phi \) is a suitable bundle map. We begin by writing \(g_F = \beta^2 |u|^{2\beta-2} |du|^2 + \beta^2 |v|^{2\beta -2} |dv|^2\) as a Gibbons-Hawking metric. We use the cone coordinates \(u= \rho_1^{1/\beta}e^{i\psi_1}\) and \(v= \rho_2^{1/\beta}e^{i\psi_2}\) so that \[ g_F = d\rho_1^2 + \beta^2 \rho_1^2 d\psi_1^2 + d\rho_2^2 + \beta^2 \rho_2^2 d\psi_2^2 . \] Let \(\Pi_0 : \mathbf{C}^2 \to \mathbf{R}^3 \) be defined as \[ \Pi_0 (u, v) = \left( \beta \rho_1 \rho_2 e^{i (\psi_1 + \psi_2)}, \beta \frac{\rho_2^2 - \rho_1^2}{2} \right) . \] This is an orbit map for the \(S^1\)-action \(e^{it}(u, v)=(e^{it}u, e^{-it}v)\). If we let \( x= \Pi_0 (u, v) \) then \( |x|^2 = (\beta^2 / 4)(\rho_1^2 +\rho_2^2)^2 \); equivalently, writing \(\rho^2 = \rho_1^2 + \rho_2^2\), \begin{equation} \label{distance relation} \beta \rho^2 = 2|x| . \end{equation} The derivative of the action is \(Y = \partial / \partial \psi_1 - \partial / \partial \psi_2 \) and \(|Y|^2_{g_F}= \beta^2 \rho^2 = 2 \beta|x| \). We let \( \alpha_0 = |Y|_{g_F}^{-2} g_F(Y, \cdot) = \rho^{-2} (\rho_1^2 d\psi_1 - \rho_2^2 d\psi_2) \). It requires a simple computation to check that \begin{equation} g_F = f_0 g + f_0^{-1} \alpha_0^2, \hspace{3mm} \mbox{with} \hspace{1mm} f_0 = \frac{1}{2 \beta |x|} . \end{equation} Note that \(f_0 = 2\pi \Gamma_0 \). Now consider \((\mathbf{C}^2, g_{RF}) \) together with \(\Pi : \mathbf{C}^2 \to \mathbf{R}^3\). We let \( B \subset \mathbf{R}^3 \) be a closed ball of radius \(2\), say, so that \(p \in B \).
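For completeness, we record the algebra behind \ref{distance relation}; it is a direct expansion of the two components of \(\Pi_0\): \[ |x|^2 = \beta^2 \rho_1^2 \rho_2^2 + \frac{\beta^2}{4} \left( \rho_2^2 - \rho_1^2 \right)^2 = \frac{\beta^2}{4} \left( 4\rho_1^2 \rho_2^2 + \rho_1^4 - 2\rho_1^2\rho_2^2 + \rho_2^4 \right) = \frac{\beta^2}{4} \left( \rho_1^2 + \rho_2^2 \right)^2 , \] so \(2|x| = \beta (\rho_1^2 + \rho_2^2) = \beta \rho^2 \).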
The bundles \(\Pi_0\) and \(\Pi\) are isomorphic on the complement of \(B\), so there is an \(S^1\)-equivariant diffeomorphism \(\Phi : \Pi_0^{-1} (\mathbf{R}^3\setminus B ) \to \Pi^{-1} (\mathbf{R}^3\setminus B )\) which induces the identity on \(\mathbf{R}^3 \setminus B\); in particular note that \(\Phi (\{uv =0\}) \subset \{zw =1\} \). \(\Phi^{\ast} \alpha \) is a connection on \(\Pi_0\), therefore \( \Phi^{\ast} \alpha - \alpha_0 = \eta \) with \(\eta \) a 1-form on \(\mathbf{R}^3 \setminus B \). Moreover \[ d \eta = -2\pi \star_{\beta} d (\Gamma_p - \Gamma_0) . \] On the other hand, since \( |\Gamma_p - \Gamma_0| = O(|x|^{-1-1/\beta}) \), we can assume -after changing gauge if necessary- that \(|\eta| = O (|x|^{-1-1/\beta}) \). We evaluate \(g_{RF}\) in the orthonormal basis of \(g_F\) given by \(v_1= f_0^{1/2} Y\) and \(v_2, v_3, v_4 \) the horizontal lifts of \( f_0^{-1/2} \partial/ \partial r, f_0^{-1/2} (\beta r)^{-1} \partial/ \partial \theta, f_0^{-1/2} \partial/ \partial s \); our goal is to show that \(g_{RF} (v_i, v_j) = \delta_{ij} + O(|x|^{-1 / \beta}) \), which is the same -due to \ref{distance relation}- as \(|g_{RF}-g_F|_{g_F} = O(\rho^{-2/\beta}) \). Note that \( \alpha (v_1)= f_0^{1/2} \) and \(\alpha (v_j)= f_0^{-1/2} O(|x|^{-1-1/\beta}) \) for \(j=2, 3, 4\). This follows from straightforward computation, but before doing that we state a simple observation. \begin{lemma} Let \(f\) be positive with \(f^{-1}= O(|x|^{a}) \) and let \( f-g = O(|x|^{-b}) \), where \(0 < a< b\). Then \(g/f = 1 + O(|x|^{-(b-a)})\). \end{lemma} Indeed \[ g/f = f/f + (g-f)/f . \] In particular, we conclude that \(f /f_0 = 1 + O (|x|^{-1/\beta}) \).
We proceed with the proof; let \(2 \leq j, k \leq 4\) with \(j \ne k\): \[g_{RF}(v_1, v_1)= f^{-1}f_0 = 1 + O(|x|^{-1/\beta}), \hspace{2mm} g_{RF}(v_j, v_j)= f_0^{-1} f + O(|x|^{-2/\beta}) = 1 + O(|x|^{-1/\beta}) \] \[g_{RF}(v_1, v_j)= f^{-1} O(|x|^{-1-1/\beta})= O(|x|^{-1/\beta}) , \hspace{2mm} g_{RF}(v_j, v_k)= O(|x|^{-2/\beta}) . \] Similarly, we can show that \( |\Phi^{\ast}\omega_{RF} - \omega_F|_{g_F} = O(\rho^{-2/\beta}) \) and therefore \( |\Phi^{\ast}I - I|_{g_F} = O(\rho^{-2/\beta}) \). \begin{remark} We can include derivatives in the statement of Item \ref{Item 2}, as \(| \nabla_X (\Phi^{*}g_{RF}-g_F)|_{g_F} = O(\rho^{-2/\beta -1}) \) and so on; but care must be taken not to differentiate more than once in directions transverse to the cone singularities. \end{remark} \subsection{Energy} \label{energy section} It follows from \ref{curv op GH met} that the curvature operator of \(g_{RF}\) is given, up to the \(-f/2\) factor, by the trace-free part of the Hessian of \(f^{-2}\) -with respect to the \(g_{\beta}\) metric-. In particular, close to the conic the curvature behaves as \(r^{1/\beta-2}\) and this is unbounded when \(\beta>1/2\). The norm-square of the curvature operator is \(O(r^{2/\beta-4})\). Comparison with the integral \(\int_{0}^{1} r^{2/ \beta-3}dr < \infty \) shows that \(|\mbox{Rm}(g_{RF})|^2 \) is locally integrable. According to our formula \ref{energy distribution}, \[ |\mbox{Rm}(g_{RF})|^2 = \frac{1}{4f} \Delta_{\beta} \Delta_{\beta} f^{-1} .\] We want to compute \(\int_{\mathbf{C}^2} |\mbox{Rm}(g_{RF})|^2\). We note that \( \Pi: (\mathbf{C}^2, g_{RF}) \to (\mathbf{R}^3, f \cdot g)\) is a Riemannian submersion whose fiber over \(x\) is a circle of length \( 2\pi f^{-1/2}(x)\). The volume form of \( f \cdot g\) is \(f^{3/2}dV_{\beta}\) and it is easy to conclude that \begin{equation} \label{en proof 1} \| \mbox{Rm}(g_{RF})\|_{L^2}^2 = (\pi /2) \int_{\mathbf{R}^3} \Delta_{\beta} \Delta_{\beta} f^{-1} dV_{\beta} .
\end{equation} In order to compute this quantity we use Stokes' theorem \[ \int_{\Omega} \Delta_{\beta} \Delta_{\beta} f^{-1} dV_{\beta} = \int_{\partial \Omega} \langle D \Delta_{\beta} f^{-1}, \nu \rangle dA_{\beta} \] for an increasing sequence of domains \(\Omega\). There are two key lemmas. \begin{lemma} Let \(C_r\) be a bounded cylinder consisting of points which are at distance \(r\) from the singular set \(S= \{0\} \times \mathbf{R} \). Then \[ \lim_{r \to 0} \int_{C_r} \langle D \Delta_{\beta} f^{-1}, \nu \rangle dA_{\beta} =0 \] \end{lemma} \begin{proof} The lemma is a consequence of the \(\beta\)-smoothness of \(f^{-1}\) together with the fact that \(\Delta_{\beta} f =0 \). Indeed, since \(f\) is harmonic, \( \Delta_{\beta} f^{-1} = 2 f^{-3} |Df|^2 \). The \(\beta\)-smoothness then gives us \(|D \Delta_{\beta} f^{-1}| = O (r^{2\beta^{-1} -3}) \) and therefore \(\int_{C_r} \langle D \Delta_{\beta} f^{-1}, \nu \rangle dA_{\beta} = O (r^{2\beta^{-1} -2}) \). \end{proof} \begin{lemma} Let \(S_R\) denote the sphere of points which are at distance \(R\) from \(0\). Then \[ \lim_{R \to \infty} \int_{S_R} \langle D \Delta_{\beta} (f^{-1} -f_0^{-1}), \nu \rangle dA_{\beta} =0 \] \end{lemma} \begin{proof} \[\Delta_{\beta} (f^{-1}-f_0^{-1}) = 2f^{-3}|Df|^2 - 2f_0^{-3}|Df_0|^2 = 2f_0^{-3} |Df_0|^2 \left( (f/f_0)^{-3} (|Df|/|Df_0|)^2 -1 \right) \] Here \(f_0^{-3}|Df_0|^2=O(|x|^{-1}) \) and \(f/f_0 = 1 + O(|x|^{-1/\beta}) \). On the other hand, \( | |Df|^2 - |Df_0|^2 | \leq (|Df| + |Df_0|) | D(f-f_0)| \) implies that \(|Df|^2/|Df_0|^2 = 1 + O (|x|^{1-1/\beta})\). We conclude that \( \Delta_{\beta}(f^{-1}-f_0^{-1}) = O(|x|^{-1/\beta}) \). Note that \(\nu\) is tangential to \(S\), so that $\langle D \Delta_{\beta} (f^{-1} -f_0^{-1}), \nu \rangle = O(|x|^{-1/\beta -1}) $. We deduce that the integral is \(O(R^{1-1/\beta})\), which tends to zero since \(\beta < 1\). 
\end{proof} It follows easily from these results that \begin{equation} \label{en proof 2} \int_{\mathbf{R}^3} \Delta_{\beta} \Delta_{\beta} f^{-1} dV_{\beta} = \lim_{R \to \infty} \int_{S_R(0)} \langle D \Delta_{\beta} f_0^{-1}, \nu \rangle dA_{\beta} - \lim_{\epsilon \to 0} \int_{S_{\epsilon}(p)} \langle D \Delta_{\beta} f^{-1}, \nu \rangle dA_{\beta} . \end{equation} Finally, \begin{itemize} \item \[\lim_{\epsilon \to 0} \int_{S_{\epsilon}(p)} \langle D \Delta_{\beta} f^{-1}, \nu \rangle dA_{\beta} = -16 \pi .\] Indeed, \( g\) is isometric to the Euclidean metric in a neighborhood of \(p\) and we reduce to the standard situation where \(f= 1/(2|x|) \). We compute in spherical coordinates to obtain \(\Delta |x| = 2 |x|^{-1} \) and \( \int_{S} \langle D(1/|x|), \nu \rangle dA= -4\pi, \) for any sphere \(S\) centered at \(0\). \item \[\lim_{R \to \infty} \int_{S_R(0)} \langle D \Delta_{\beta} f_0^{-1}, \nu \rangle dA_{\beta} = -\beta^2 16 \pi .\] This goes along the same lines as in the previous item, replacing the Euclidean metric with \(g\). The extra factor \(\beta^2\) comes from \(f_0^{-1}= 2 \beta |x| \) and \(dA_{\beta}= \beta dA\). \end{itemize} We put \ref{en proof 1} and \ref{en proof 2} together -the right hand side of \ref{en proof 2} equals \(16\pi (1-\beta^2)\)- to obtain our desired formula \begin{equation} \| \mbox{Rm}(g_{RF}) \|^2_{L^2} = 8 \pi^2 (1-\beta^2) . \end{equation} \section{Additional comments} \label{additiona comments} \subsection*{Curvature and directions} As we mentioned, the curvature of \(g_{RF}\) is unbounded at points of the conic when \(\beta>1/2\); and it is conjectured that this is the case for any K\"ahler-Einstein metric. We can also ask in which directions the curvature blows up. We can make precise the concept of a direction at points of the curve \(C\). We consider the \(\mathbf{CP}^1\)-bundle of directions, \(P\), over \(\mathbf{C}^2\), whose fiber over \(x\) is \(\mathbf{P}(T_x \mathbf{C}^2)\). 
The metric \(g_{RF}\) is smooth in the complement of \(C\) and taking orthogonal complements defines an automorphism, \(\perp\), of \(P\) over that region. The point is that \(\perp\) extends continuously over all of \(P\) and therefore there is a well-defined notion of a normal direction to the curve \(C\). On the other hand, it is a general fact that on Einstein \(4\)-manifolds the sectional curvatures of mutually orthogonal planes agree; indeed, the Einstein condition is equivalent to the commutativity of the curvature operator with the Hodge star. We can provide a simple proof for the Ricci-flat K\"ahler case: Take normal coordinates at \(p\), so that \(g_{1\overline{1}}g_{2\overline{2}} - |g_{1\overline{2}}|^2 = e^F \) for some pluri-harmonic function \(F\) whose gradient vanishes at \(p\). Differentiate with respect to \(z_1\), \(\overline{z_1}\) and evaluate at \(p\) to obtain \( g_{1\overline{1}, 1\overline{1}} + g_{2\overline{2}, 1\overline{1}} =0\); similarly, differentiating with respect to \(z_2,\overline{z_2}\) we obtain \( g_{1\overline{1}, 2\overline{2}} + g_{2\overline{2}, 2\overline{2}} =0\). It follows that \( g_{1\overline{1}, 1\overline{1}} = g_{2\overline{2}, 2\overline{2}} \); which is to say that the sectional curvatures of the \(\partial/ \partial z_1 \) and \(\partial/ \partial z_2 \) planes at \(p\) agree. We go back to our setting: \(g_{RF}\) is smooth in tangent directions to \(C\) and it induces on \(C\) the metric of a rotationally symmetric negatively curved cylinder. The sectional curvature of \(g_{RF}\) remains bounded as we approach the curve in either tangential or normal directions. The upshot is that if \(p \in C \) and we identify \(P_p \cong \mathbf{CP}^1 \), with \(\perp\) the standard \(\xi \to -1/\overline{\xi} \) and the tangent and normal directions to the curve corresponding to the North and South poles, then the sectional curvature is invariant under \(\perp\), bounded around the poles and unbounded around the equator. 
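Returning for a moment to the boundary fluxes of Section \ref{energy section}: the model computation \( \int_{S} \langle D(1/|x|), \nu \rangle dA = -4\pi \), valid for any sphere centered at the origin, is also easy to confirm numerically. The following sketch (the grid sizes and finite-difference step are arbitrary choices) approximates the flux by a midpoint rule:

```python
import math

def flux_of_inverse_radius(R=1.0, n_theta=400, n_phi=400, h=1e-6):
    """Midpoint-rule quadrature of <grad(1/|x|), nu> over a sphere of radius R.

    The gradient is taken by central finite differences along the outward
    normal nu = x/|x|; the result approximates -4*pi for any radius R.
    """
    f = lambda x, y, z: 1.0 / math.sqrt(x * x + y * y + z * z)
    total = 0.0
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            # Point on the sphere and outward unit normal.
            nx = math.sin(th) * math.cos(ph)
            ny = math.sin(th) * math.sin(ph)
            nz = math.cos(th)
            x, y, z = R * nx, R * ny, R * nz
            # Central difference for the directional derivative along nu.
            d = (f(x + h * nx, y + h * ny, z + h * nz)
                 - f(x - h * nx, y - h * ny, z - h * nz)) / (2 * h)
            total += d * R * R * math.sin(th) * dth * dph
    return total

print(flux_of_inverse_radius(1.0))  # close to -4*pi
print(flux_of_inverse_radius(3.0))  # radius independent
```

The radius independence reflects the fact that \(1/|x|\) is harmonic away from the origin, so the flux only detects the point charge there.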
\subsection*{Quotients and limits when \( \beta \to 0 \)} \footnote{I want to thank Hans-Joachim Hein for discussions related to the content of this section.} We study the case when \(\beta=1/n\) with \(n\) a positive integer \(\geq 2\). The Green's function is explicit in this case, given by the Neumann reflection trick \[ \Gamma_p (x) = \frac{1}{4\pi} \sum_{j=0}^{n-1} \frac{1}{|x-p_j|}, \] with \(x= (r e^{i\tilde{\theta}}, s)\) and \(p_j=(e^{2\pi i(j/n)}, 0)\). Let \(X\) be the standard \(A_{n-1}\)-ALE space determined by \(p_0, \ldots, p_{n-1}\). The orthogonal transformation that fixes the \(s\)-axis and rotates by \(\pi/n\) has a lift as an isometry of \(X\); this lift is unique if we require that it fixes the points over the axis. This isometry generates a \(\mathbf{Z}_n\)-action and the quotient space is then identified with \((\mathbf{C}^2, g_{RF})\). When \(n=2\), \(X\) is the Eguchi-Hanson space and there is an explicit K\"ahler potential for \(g_{RF}\) which -up to a constant factor- is given by \(\phi = (|z|^2 + |w|^2 + |1-zw| +1)^{1/2}\). We let \(g_n \) denote the metric \(g_{RF}\) with \(\beta=1/n\). We want to know the possible Gromov-Hausdorff limits of the sequence \(\{g_n\}\) as \(n \to \infty\). First of all we note that the metrics \(g_n\) are complete, in the sense that Cauchy sequences with respect to the induced distance converge. Indeed, the standard proof for Gibbons-Hawking spaces applies in our case -see \cite{AKL}-, the point being that the bundle projection is a Riemannian submersion onto a complete space. Since we are dealing with non-compact spaces we must choose points and talk about pointed Gromov-Hausdorff limits. We choose the points to be the ones fixed by the \(S^1\)-action. 
As we shall see, the curvature of \(g_n\) blows up at this point and if we re-scale in order to keep it bounded we obtain the Taub-NUT metric in the limit, in symbols \( (\mathbf{C}^2, \lambda_n g_n, 0) \to (\mathbf{C}^2, g_{TN}, 0) \) with \(\lambda_n \approx |\mbox{Rm}(g_n)|\) and \[ 0< \lim_{n \to \infty} \frac{|\mbox{Rm}(g_n)|(0)}{n \log n} < \infty . \] We consider a unit circle in the plane with \(n\) equally spaced points. If we fix one of these points and consider the sum of the inverse distances to the others, then -up to a constant factor- the sum is \(n(1+1/2+\ldots+1/n)\). We go back to the sequence \(g_n\), with the marked points mapping to \(0 \in \mathbf{R}^3\) and conclude that in a small ball we can write the harmonic functions as \(1/(2|x|) + (n \log n) F_n \) with \(F_n\) converging uniformly to a positive constant. It is then easy to derive the claims made in the previous paragraph. It is worth pointing out that in the previous limit we are magnifying a neighborhood of \(0\) and pushing off the cone singularities to infinity. On the other hand \[ \lim_{n \to \infty} \| \mbox{Rm}(g_{n}) \|_{L^2}^2 = \lim_{n \to \infty} 8\pi^2 (1-1/n^2)= 8 \pi^2 = \| \mbox{Rm}(g_{TN}) \|_{L^2}^2 .\] So the metrics \(g_n\) become nearly flat, as \(n \to \infty\), around the conic. It is also tempting -by approximating large circles with a line- to compare the metrics \(g_n\) for \(n\) large with `the' Ooguri-Vafa metric \cite{GrossWilson}, obtained from the Gibbons-Hawking ansatz applied to the potential of infinitely many charges lying on a line and equally spaced. However, as a word of caution, it must be said that the Ooguri-Vafa metric is \emph{not} complete. The Ooguri-Vafa metric is in fact a one-parameter family of metrics, parametrized by the distance between the charges. 
Scaling the metrics at the fixed point of the \(S^1\)-action as the parameter tends to zero recovers the Taub-NUT metric in the limit; we can use the triangle inequality to relate this sequence to \(\lambda_n g_n\). However, one can ask for a more precise correspondence, relating their associated harmonic functions -both admitting asymptotic expansions in terms of Bessel functions-. We can also scale the metrics \(g_{RF}\) so that their volume forms are \(|1-zw|^{2\beta-2} \Omega \wedge \overline{\Omega}\) and then take the point-wise limit of these tensors as \(\beta \to 0\); proceeding in this way results in a degenerate limit \(g_{\infty} \geq 0\). It is not known whether there is a K\"ahler metric on the complement of the conic with volume form \(|1-zw|^{-2} \Omega \wedge \overline{\Omega}\). Note that such a metric would necessarily be complete and Ricci-flat, and that \(\pi_1(\mathbf{C}^2 \setminus C) \cong \mathbf{Z}\). \subsection*{Variants} As mentioned in \cite{DonaldsonKMCS}, there are many variants of the construction. Finite sums of Green's functions \(\Gamma_p\) at different points give rise to Ricci-flat metrics with cone singularities on \(A_n\) manifolds. It is also possible to consider several parallel wedges and obtain metrics on \(\mathbf{C}^2\) with cone singularities along disjoint conics. Another variant is to add a positive constant term to the Green's function to obtain analogs of (multi)-Taub-NUT spaces. More interesting is the case of a curve \(C \subset \mathbf{C}^2\) which is invariant under an \(S^1\)-action different from the one we considered. For example, $\{w=z^2\}$ and \(\{wz^2=1\}\) are invariant under \((e^{it}z, e^{2it}w)\) and \((e^{it}z, e^{-2it}w)\) respectively. We can ask for \(S^1\)-invariant Ricci-flat metrics with cone singularities along the curve; but a suitable extension of the Gibbons-Hawking ansatz to the context of Seifert fibrations or \(S^1\)-actions which rotate the complex volume form seems not to be available. 
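The \(n(1+1/2+\cdots+1/n)\) growth used in the Quotients subsection -the sum of inverse distances from one of \(n\) equally spaced points on the unit circle to the others- is easy to check numerically. A sketch (function names are ours; the limiting constant \(1/\pi\) is not needed for the argument, only boundedness of the ratio):

```python
import math

def inverse_distance_sum(n):
    """Sum of inverse distances from the point 1 to the other n-th roots of unity.

    Roots j steps apart on the unit circle are at distance 2*sin(pi*j/n).
    """
    return sum(1.0 / (2.0 * math.sin(math.pi * j / n)) for j in range(1, n))

def harmonic(n):
    """The partial harmonic sum 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# The ratio to n*(1 + 1/2 + ... + 1/n) stays bounded (it tends to 1/pi).
for n in (100, 1000, 10000):
    print(n, inverse_distance_sum(n) / (n * harmonic(n)))
```

The slowly varying ratio reflects the comparison of \(\sum_{j=1}^{n-1} \csc(\pi j/n)\) with \(n \log n\), which is all the argument in the text requires.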
\bibliographystyle{plain}
\section{Introduction} Within the context of the heterotic superstring and heterotic M-theory \cite{Lukas:1997fg, Lukas:1998yy, Lukas:1998tt, Donagi:1999ez}, there have been a number of vacuum states whose four-dimensional low energy effective field theory \cite{Ovrut:1979pk} has the exact spectrum of the MSSM--with or without right-handed neutrino chiral multiplets--and, to prohibit rapid proton decay, contains R-parity \cite{Font:1989ai,Martin:1997ns,Martin:1992mq}--either as a discrete symmetry or as a subgroup of an anomaly free $U(1)$ extension of the standard model gauge group \cite{Krauss:1988zc,Aulakh:1999cd, Aulakh:2000sn, Babu:2008ep, Feldman:2011ms, FileviezPerez:2011dg,Aulakh:1982yn, Hayashi:1984rd, Mohapatra:1986aw, Masiero:1990uj}. One such vacuum was presented in \cite{Braun:2005zv, Braun:2005nv, Ambroso:2009jd, Ambroso:2010pe} and will be referred to as the $B-L$ MSSM. Be that as it may, this is only the first step in finding a realistic heterotic string vacuum. Any such theory must also be compatible with all presently observed low-energy phenomenology; that is, it must spontaneously break electroweak (EW) symmetry at the observed scale, must be compatible with the newly discovered Higgs particle with mass $\sim 125$~GeV \cite{Aad:2014aba, Chatrchyan:2013lba}, have all sparticle masses above the present observational lower bounds and--assuming R-parity is contained in an additional $U(1)$ symmetry--spontaneously break that Abelian group with an associated gauge boson mass in excess of the present experimental lower bound. In a series of papers \cite{Ovrut:2012wg, Marshall:2014kea, Marshall:2014cwa, Ovrut:2014rba, Ovrut:2015uea}, the $B-L$ MSSM was examined in detail--using a random statistical sampling of the initial set of soft supersymmetry (SUSY) breaking parameters--and the results confronted with these phenomenological requirements. 
It was shown, {\it within a restricted region of the compactification moduli space,} that the $B-L$ MSSM easily passed each of these requirements for a large and basically uncorrelated set of initial conditions. Furthermore, this analysis led to a series of low energy predictions--for example, directly relating the lightest stop decay channels and branching ratios to the neutrino mass hierarchy and mixing angles--thus linking LHC experimental results to neutrino measurements. That is, the $B-L$ MSSM is a possible candidate for a phenomenologically acceptable theory of the real world--a statement that will be directly testable as its structure and predictions are confronted with upcoming data from the LHC, neutrino experiments and cosmological observations. For these reasons, in this paper we extend the results of \cite{Ovrut:2012wg, Marshall:2014kea, Marshall:2014cwa, Ovrut:2014rba, Ovrut:2015uea} to a {\it more general, and more natural, region} of the $B-L$ MSSM moduli space. To quantify this, we should briefly discuss the structure of the $B-L$ MSSM. It arises in the observable sector of heterotic M-theory compactified to four dimensions on a Schoen Calabi-Yau (CY) threefold \cite{Braun:2004xv} with first homotopy group $\pi_{1}={\mathbb{Z}}_{3} \times {\mathbb{Z}}_{3}$. This manifold admits a specific slope-stable holomorphic vector bundle \cite{Donagi:2000zf} with structure group $SU(4)\subset E_{8}$, as well as two Wilson lines--each wrapped over a two-cycle associated with a different ${\mathbb{Z}}_{3}$ homotopy factor. This theory has three, in principle distinct, mass scales--$M_{U}$ at which the gauge bundle spontaneously breaks $E_{8}$ to $SO(10)$ and two Wilson line mass scales, which we denote by $M_{\chi_{3R}}$ and $M_{\chi_{B-L}}$, associated with the inverse radii of their respective two-cycles. 
In our previous analysis, we worked {\it in a restricted region of CY moduli space} where the radius of one two-cycle is distinctly smaller than that of the other; that is, we chose $M_{U} \simeq M_{\chi_{B-L}} > M_{\chi_{3R}}$. By taking the scale of separation of the two Wilson lines to be approximately an order of magnitude, one can exactly unify all gauge coupling parameters-- thus specifying boundary conditions in the renormalization group equations (RGEs). However, this specificity comes at the cost of introducing an {\it additional scaling regime}. Between $M_{\chi_{B-L}}$ and $M_{\chi_{3R}}$ the effective theory is that of the ``left-right'' model \cite{Ovrut:2012wg,Babu:2008ep,FileviezPerez:2008sx,Hayashi:1984rd} with gauge group $SU(3)_{C} \times SU(2)_{L} \times SU(2)_{R} \times U(1)_{B-L}$ and a specific particle spectrum that can be computed from string theory. It is only for energy-momentum below the lightest Wilson line mass $M_{\chi_{3R}}$ that one obtains the spectrum and $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y} \times U(1)_{B-L}$ gauge symmetry of the $B-L$ MSSM. In this paper, we generalize and simplify the phenomenological analysis of the $B-L$ MSSM by working in a {\it generic} region of CY moduli space where the radii of the two Wilson lines and the average radius of the CY manifold are all approximately equal: that is, with $M_{U} \simeq M_{\chi_{B-L}} \simeq M_{\chi_{3R}}$. This generalization is significant in that 1) {\it the region of moduli space is much larger and more ``natural''} than that used previously and 2) {\it the ``left-right'' scaling region is eliminated}, with the $B-L$ MSSM emerging immediately below the compactification scale--thus simplifying the scaling regimes. Of course, the four $B-L$ MSSM gauge couplings will no longer unify near the scale of the CY radius. This does somewhat complicate the RG analysis. 
However, it opens the door for a discussion of unification of all gauge couplings with the gravitational coupling at the ``string scale''--as has been discussed by many authors in \cite{Kaplunovsky:1992vs, Kaplunovsky:1995jw, Mayr:1993kn, Dienes:1995sq, Dienes:1996du, Kiritsis:1996dn, Dolan:1992nf, Nilles:1997vk, Nilles:1998uy, Ghilencea:2001qq, Klaput:2010dg, deAlwis:2012bm, Bailin:2014nna}. More specifically, such unification should take place at tree level. However, at the one-loop (and higher) level one expects such unification to be split by ``threshold'' corrections. These are due to several effects, such as the inclusion of field theory thresholds at each of the SUSY, $B-L$ and ``unification'' scales, and genus-one string theory corrections. Since in this analysis the latter are expected to be the largest, we will focus exclusively on them. By running the four $B-L$ MSSM gauge parameters up to the string scale, we will 3) {\it statistically compute the heavy string threshold corrections for each gauge coupling}. Furthermore, we will statistically compute the hypercharge gauge threshold and, by subtracting various thresholds, analyze the moduli dependent sub-component of each. Finally, there is yet another important benefit of analyzing the $B-L$ MSSM at the generic region of its CY moduli space--although we will not pursue this in the present paper. In our previous work and the present analysis, the scale of the soft SUSY breaking parameters is chosen to be in the TeV region. This is done in order for our low energy phenomenological predictions to be LHC accessible. 
However, unlike in the case of unified gauge couplings enforced by splitting the Wilson line masses discussed in \cite{Ovrut:2012wg, Marshall:2014kea, Marshall:2014cwa, Ovrut:2014rba, Ovrut:2015uea}--which, for reasons elucidated in the Conclusion, is essentially restricted to the TeV region--for the simultaneous Wilson line masses discussed in this paper, the SUSY breaking mass scale can be taken to be arbitrarily large. This has a number of important applications, both in particle phenomenology and in early universe cosmology \cite{Lukas:1998qs}. We will pursue this in future work \cite{inPreparation}. The present paper is structured as follows. In Section II we review the salient parts of the $B-L$ MSSM theory, presenting the spectrum, the supersymmetric and the soft SUSY breaking Lagrangians, discussing the generic structure of spontaneous $B-L$ and EW symmetry breaking and setting our notation. Section III is devoted to defining the exact meaning of the ``simultaneous'' Wilson line analysis presented in this paper--as opposed to the ``split'' Wilson line approach in our previous work. It then discusses, in detail, the four relevant mass scales from ``unification'' to electroweak symmetry breaking. The statistical definitions of the ``unification'' mass and gauge coupling are given in our present context. In Section IV, the three scaling regimes--for both the ``right-side-up'' and ``upside-down'' scenarios--along with the associated gauge coupling beta function parameters are presented. The RG running of the Yukawa couplings, including their transition at the SUSY scale, is discussed. Section V gives a brief review of the ``statistical'' approach to setting the initial soft supersymmetry breaking parameters at the ``unification'' scale presented in detail in our previous work. In Section VI, the experimental constraints on the sparticle masses, the heavy vector boson mass and the lightest neutral Higgs mass are presented. 
We then solve the RGEs--for randomly chosen initial soft SUSY breaking parameters-- sequentially from the ``unification'' scale down through the EW breaking scale subject to these constraints. Plots--and the exact number--of the initial points that sequentially satisfy the experimental constraints are given; ending with the robust number of phenomenologically ``valid'' black points that satisfy all experimental constraints. A brief analysis of both the LSP and non-LSP spectra is then given. In Section VII, we briefly discuss fine-tuning in the $B-L$ MSSM. In Section VIII, we statistically calculate the heavy string threshold corrections for each of the four $B-L$ MSSM gauge couplings, as well as the hypercharge gauge threshold, and analyze the moduli dependent differences of these quantities. Finally, in Section IX, we present our conclusions. \section{The Minimal SUSY $B-L$ Model} \label{sec:model} In this section, we briefly review the minimal anomaly free extension of the MSSM with gauge group \begin{eqnarray} SU(3)_C\times SU(2)_L\times U(1)_{3R}\times U(1)_{B-L} \ , \label{eq:458} \end{eqnarray} whose structure was motivated by heterotic string theory in \cite{Braun:2005nv} and by phenomenological considerations in \cite{Barger:2008wn,Everett:2009vy}. Although this model has been discussed in our previous papers \cite{Ovrut:2012wg, Marshall:2014kea, Marshall:2014cwa, Ovrut:2014rba, Ovrut:2015uea}, we outline its main features in this section for specificity and to set our notation. The Abelian gauge factors $U(1)_{3R}\times U(1)_{B-L}$ can be rotated into physically equivalent charge bases, such as $U(1)_Y\times U(1)_{B-L}$. However, as shown in~\cite{Ovrut:2012wg}, this comes at the cost of introducing kinetic mixing between the gauge fields. We therefore prefer to work in the basis $U(1)_{3R}\times U(1)_{B-L}$. 
The gauge covariant derivative is \begin{equation} D=\partial-ig_{3R}I_{3R}W_{3R}-ig_{BL}\frac{I_{BL}}{2}B^\prime \ , \end{equation} where $I_{3R}$, $I_{BL}$ and $g_{3R}$, $g_{BL}$ are the generators and couplings for the $U(1)_{3R}$ and $U(1)_{B-L}$ groups respectively. The gauge boson associated with $U(1)_{B-L}$ is denoted $B^\prime$ to distinguish it from the gauge boson associated with $U(1)_Y$, which is normally denoted $B$. The factor of $\frac{1}{2}$ in the last term is introduced by redefining the gauge coupling $g_{BL}$, thus simplifying many equations. A radiatively induced vacuum expectation value (VEV) of a right-handed sneutrino will break the Abelian factors $U(1)_{3R} \times U(1)_{B-L}$ to $U(1)_Y$, in analogy with the way the MSSM Higgs fields break $SU(2)_L\times U(1)_Y$ to $U(1)_{EM}$. This process is referred to as ``$B-L$'' symmetry breaking, although technically it breaks a specific combination of the groups generated by $I_{3R}$ and $I_{BL}$, leaving invariant the usual hypercharge group generated by \begin{equation} Y = I_{3R} + \frac{I_{BL}}{2} \ . \end{equation} The particle content of the model is simply that of the MSSM plus three right-handed neutrino chiral multiplets. 
This amounts to three generations of matter superfields \begin{eqnarray} Q=\trix{c}u\\d\end{array}\right)\sim({\bf 3}, {\bf 2}, 0, \frac{1}{3}) & \begin{array}{rl}u^c\sim&(\bar{\bf 3}, {\bf 1}, -1/2, -\frac{1}{3}) \\ d^c\sim&(\bar{\bf 3}, {\bf 1}, 1/2, -\frac{1}{3})\end{array} \ , \nonumber\\ L=\trix{c}\nu\\e\end{array}\right)\sim({\bf 1}, {\bf 2}, 0, -1)&\begin{array}{rl}\nu^c\sim&({\bf 1}, {\bf 1}, -1/2, 1)\\ e^c\sim&({\bf 1}, {\bf 1}, 1/2, 1)\end{array} \ , \label{eq:246} \end{eqnarray} along with the usual two Higgs supermultiplets \begin{eqnarray} H_u=\trix{c}H_u^+\\H_u^0\end{array}\right)&\sim&({\bf 1}, {\bf 2}, 1/2, 0) \ ,\nonumber\\ H_d=\trix{c}H_d^0\\H_d^-\end{array}\right)&\sim&({\bf 1}, {\bf 2}, -1/2, 0) \label{eq:247} \end{eqnarray} where we have displayed their $SU(3)_{C}\times SU(2)_{L} \times U(1)_{3R} \times U(1)_{B-L}$ quantum numbers. The superpotential of the $B-L$ MSSM is given by \begin{eqnarray} W=Y_u Q H_u u^c - Y_d Q H_d d^c -Y_e L H_d e^c +Y_\nu L H_u \nu^c+\mu H_u H_d \ , \end{eqnarray} where both generational and gauge indices have been suppressed. In principle, the Yukawa couplings are three-by-three complex matrices. However, the observed smallness of the CKM mixing angles and CP-violating phase implies that the quark Yukawa matrices can be approximated as diagonal and real for the purposes of RG evolution in this paper. The charged lepton Yukawa coupling can be made diagonal and real by moving the PMNS angles and phases into the neutrino Yukawa couplings. The small size of neutrino masses implies that the neutrino Yukawa couplings can be neglected for the purposes of RG evolution in this paper. The smallness of first- and second-generation fermion masses implies that the first- and second-generation quark and charged lepton Yukawa couplings can also be neglected. The $\mu$-parameter can be chosen to be real without loss of generality. 
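As a quick consistency check of the charge assignments displayed above, one can verify that $Y = I_{3R} + \frac{I_{BL}}{2}$ reproduces the standard MSSM hypercharges for every supermultiplet. A sketch in exact rational arithmetic (the field labels and expected values are transcribed from the quantum numbers listed above):

```python
from fractions import Fraction as F

# (I_3R, B-L) charges as listed for the matter and Higgs supermultiplets.
fields = {
    "Q":    (F(0),     F(1, 3)),
    "u^c":  (F(-1, 2), F(-1, 3)),
    "d^c":  (F(1, 2),  F(-1, 3)),
    "L":    (F(0),     F(-1)),
    "nu^c": (F(-1, 2), F(1)),
    "e^c":  (F(1, 2),  F(1)),
    "H_u":  (F(1, 2),  F(0)),
    "H_d":  (F(-1, 2), F(0)),
}

# Standard hypercharges in the Y = I_3R + (B-L)/2 normalization.
expected_Y = {
    "Q": F(1, 6), "u^c": F(-2, 3), "d^c": F(1, 3),
    "L": F(-1, 2), "nu^c": F(0), "e^c": F(1),
    "H_u": F(1, 2), "H_d": F(-1, 2),
}

for name, (i3r, bl) in fields.items():
    Y = i3r + bl / 2
    assert Y == expected_Y[name], name
print("all hypercharges reproduced")
```

In particular the right-handed neutrino $\nu^c$ comes out with $Y=0$, as it must for a singlet that acquires the $B-L$ breaking VEV while preserving hypercharge.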
The soft supersymmetry breaking Lagrangian is \begin{align} \begin{split} -\mathcal L_{\mbox{\scriptsize soft}} = & \left( \frac{1}{2} M_3 \tilde g^2+ \frac{1}{2} M_2 \tilde W^2+ \frac{1}{2} M_R \tilde W_R^2+\frac{1}{2} M_{BL} \tilde {B^\prime}^2 \right. \\ & \left. \hspace{0.4cm} +a_u \tilde Q H_u \tilde u^c - a_d \tilde Q H_d \tilde d^c - a_e \tilde L H_d \tilde e^c + a_\nu \tilde L H_u \tilde \nu^c + b H_u H_d + h.c. \right) \\ & + m_{\tilde Q}^2|\tilde Q|^2+m_{\tilde u^c}^2|\tilde u^c|^2+m_{\tilde d^c}^2|\tilde d^c|^2+m_{\tilde L}^2|\tilde L|^2 +m_{\tilde \nu^c}^2|\tilde \nu^c|^2+m_{\tilde e^c}^2|\tilde e^c|^2 \\ &+m_{H_u}^2|H_u|^2+m_{H_d}^2|H_d|^2 \ , \end{split} \label{home5} \end{align} where generation and gauge indices have been suppressed. The $a$-parameters and sfermion soft-mass terms can, in principle, be Hermitian matrices in family space. However, this tends to lead to unobserved CP violation. Therefore, we proceed assuming that they are diagonal and real. Furthermore, as discussed in Section V, we assume that the $a$-parameters are proportional to the Yukawa couplings. This implies that all the $a$-parameters can be neglected except for the (3,3) component of the quark and charged lepton $a$-parameters. The $b$-parameter can be chosen to be both real and positive without loss of generality. Although the gaugino soft masses can be complex in principle, this tends to lead to unobserved flavor and CP violation. Therefore, we proceed assuming that they are real. The $B-L$ symmetry is spontaneously broken by the VEV in a right-handed sneutrino, which carries the appropriate $I_{3R}$ and $I_{B-L}$ charges to break those symmetries while preserving hypercharge symmetry. This VEV is brought about by a sneutrino soft-mass term becoming tachyonic\footnote{Throughout this paper, we use the term ``tachyon'' to describe a scalar particle with a negative mass squared parameter.} at the TeV scale due to the RGE evolution. 
As discussed in~\cite{Mohapatra:1986aw, Ghosh:2010hy, Barger:2010iv}, this VEV will be purely in one of the three right-handed sneutrino generations -- not in a linear combination of them. Furthermore, the three generations of right-handed sneutrinos can be relabeled without loss of generality. Therefore, we henceforth assume that it is the third-generation right-handed sneutrino that acquires a VEV. Electroweak symmetry is broken by VEVs in the neutral components of the up and down Higgs multiplets. The electroweak breaking VEVs and the $B-L$ breaking VEV together lead to small VEVs in all three generations of left-handed sneutrinos. The above VEVs will be denoted by \begin{equation} \left< \tilde \nu^c_3 \right> \equiv \frac{1}{\sqrt 2} v_R, \ \ \left<\tilde \nu_i\right> \equiv \frac{1}{\sqrt 2} {v_L}_i, \ \ \left< H_u^0\right> \equiv \frac{1}{\sqrt 2}v_u, \ \ \left< H_d^0\right> \equiv \frac{1}{\sqrt 2}v_d, \end{equation} where $i=1,2,3$ is the generation index. The neutral gauge boson that becomes massive due to $B-L$ symmetry breaking is referred to as $Z_R$. Defining $v^{2}=v_{u}^{2}+v_{d}^{2}$, and assuming that $v^{2} \ll v_{R}^{2}$, $Z_{R}$ acquires to leading order a mass of \begin{equation} M_{Z_R}^2 = \frac{1}{4}\left(g_{3R}^2+g_{BL}^2 \right) v_R^2\ . \label{eq:237} \end{equation} The hypercharge gauge coupling is given by \begin{equation} \label{eq:Y.3R.BL} g_Y = g_{3R} \sin \theta_R = g_{BL}\cos \theta_R \ , \end{equation} where \begin{equation} \cos \theta_R = \frac{g_{3R}}{\sqrt{g_{3R}^2+g_{BL}^2}} \ . \end{equation} The smallness of the neutrino masses implies, first, that the neutrino Yukawa couplings are small and, second, that the left-handed sneutrino VEVs are much smaller than the electroweak scale. 
In this limit, the minimization conditions of the potential simplify to \begin{align} \label{eq:MC.vR} v_R^2=&\frac{-8m^2_{\tilde \nu_{3}^c} + g_{3R}^2\left(v_u^2 - v_d^2 \right)}{g_{3R}^2+g_{BL}^2} \ , \\ {v_L}_i=&\frac{\frac{v_R}{\sqrt 2}(Y_{\nu_{i3}}^* \mu v_d-a_{\nu_{i3}}^* v_u)} {m_{\tilde L_{i}}^2-\frac{g_2^2}{8}(v_u^2-v_d^2)-\frac{g_{BL}^2}{8}v_R^2} \ , \\ \label{eq:EW.mu} \frac{1}{2} M_Z^2 =&-\mu^2+\frac{m_{H_u}^2\tan^2\beta-m_{H_d}^2}{1-\tan^2\beta} \ , \\ \label{eq:EW.b} \frac{2b}{\sin2\beta}=&2\mu^2+m_{H_u}^2+m_{H_d}^2 \ . \end{align} Noting from above that $ |v_{u}^{2}-v_{d}^{2}|\ll|m_{\tilde\nu_3^c}^{2}|$, equations \eqref{eq:237} and \eqref{eq:MC.vR} can be combined to give \begin{equation} \label{eq:MZR.mnuc} M_{Z_R}^2 = -2 m_{\tilde \nu^c_3}^2\ . \end{equation} The VEV in the third-generation right-handed sneutrino induces spontaneous bilinear R-parity violation through the operators \begin{equation} \label{W.brpv} W \supset \epsilon_i \, L_i \, H_u - \frac{1}{\sqrt 2 }{Y_e}_i \, {v_L}_i \, H_d^- \, e^c_i \ , \end{equation} where \begin{equation} \epsilon_i \equiv \frac{1}{\sqrt 2} {Y_\nu}_{i3} v_R \ . \end{equation} Bilinear R-parity violation has been discussed extensively, including its relevance to neutrino masses. See, for example, some early works~\cite{Mukhopadhyaya:1998xj,Chun:1998ub, Chun:1999bq, Hirsch:2000ef}. The Lagrangian of this model contains additional bilinear terms due to the sneutrino VEVs: \begin{align} \begin{split} \label{L.n} \mathcal{L} \supset & - \frac{1}{2}{v_L}_i^* \left[ g_2 \left(\sqrt 2 \, e_i \tilde W^+ + \nu_i \tilde W^0\right) - g_{BL} \nu_i \tilde B' \right] \\ & -\frac{1}{2} v_R \left[-g_{3R} \nu_3^c \tilde W_R + g_{BL} \nu_3^c \tilde B' \right]+ \text{h.c.} \end{split} \end{align} The R-parity violating terms in this model have a variety of interesting consequences that have been studied in a number of different contexts. 
These include LHC studies~\cite{Barger:2008wn, Everett:2009vy, FileviezPerez:2012mj, Perez:2013kla}, predictions for neutrinos~\cite{Mohapatra:1986aw, Ghosh:2010hy, Barger:2010iv}, and connections between the two~\cite{Marshall:2014kea, Marshall:2014cwa}. It has been shown that the R-parity violation can give rise to Majorana neutrino masses, with the lightest left-handed neutrino being massless. There is also a pair of sterile right-handed neutrinos that can have cosmological implications~\cite{Perez:2013kla}. The minimal supersymmetric $B-L$ model, reviewed in this section, will be referred to simply as the $B-L$ MSSM throughout the rest of this paper. We now turn to connecting the phenomenology of the $B-L$ MSSM to its high-scale origins. Specifically, we are considering the possibility that the $B-L$ MSSM is the observable sector of the low-energy effective theory of an $E_8\times E_8$ heterotic string theory. In this context, the $B-L$ MSSM gauge group unifies into an $SO(10)$ gauge group, which is itself the commutant of the $SU(4)$ structure group of the observable sector $E_{8}$ vector bundle on the CY threefold. We have previously studied the $B-L$ MSSM in this context~\cite{Ovrut:2015uea}. In this paper, however, we will study the effects of string threshold corrections on gauge unification. This requires a discussion of gauge unification--to which we now turn. \section{Journey From the ``Unification'' Scale} \label{sec:unif} This section outlines the scales and scaling regimes associated with the evolution of the $B-L$ MSSM from ``unification'' to the electroweak scale. Compactification to four dimensions yields a unified gauge group, $SO(10)$, at mass scale $M_{U}$. This unified gauge group is broken by two Abelian Wilson lines, denoted by $\chi_{3R}$ and $\chi_{B-L}$. The mass scales associated with these Wilson lines, $M_{\chi_{3R}}$ and $M_{\chi_{B-L}}$ respectively, depend on the inverse radii of the 2-cycles over which they are wrapped. 
These, in turn, depend on the chosen point in the CY moduli space. Generically, one expects that the two Wilson line masses are approximately the same and close to the $SO(10)$ unification scale. That is, one ``naturally'' expects \begin{equation} M_{U} \simeq M_{\chi_{B-L}} \simeq M_{\chi_{3R}} \label{s1} \end{equation} % over a wide region of the CY moduli space. However, as one moves away from these generic points the Wilson line mass scales need not remain the same. This leads to an intermediate regime between the two scales associated with the Wilson lines. The particle content and gauge group in this intermediate regime depends on which Wilson line has a higher associated mass. If $M_{U} \simeq M_{\chi_{B-L}} > M_{\chi_{3R}}$, the particle content and gauge group of the intermediate regime is that of a ``left-right'' model. If $M_{U} \simeq M_{\chi_{3R}} > M_{\chi_{B-L}}$, the particle content and gauge group of the intermediate regime is similar to that of a ``Pati-Salam'' model. In each case, the lower-mass Wilson line breaks the model in the intermediate regime to the $B-L$ MSSM. In fact, it was shown in~\cite{Ovrut:2012wg} that exact gauge coupling unification at one-loop {\it requires} that these scales be different. For specificity of the RGE calculation, it was convenient to {\it impose} precise gauge coupling unification. Hence, in~\cite{Ovrut:2012wg} we studied the two cases with separated Wilson line masses--even though this can occur only in special regions of moduli space. Under the assumption that the soft SUSY breaking masses are of TeV order--to ensure that sparticle masses are potentially LHC accessible--we found that gauge coupling unification dictates that the Wilson line scales must be separated by approximately an order of magnitude or less in either case. Additionally, we found that both cases lead to similar low energy phenomenology.
Hence, for specificity, we carried out our analysis using the first of these symmetry breaking patterns; that is, the intermediate regime containing the ``left-right'' model. We refer the reader to~\cite{Ovrut:2012wg} for that analysis. Here, for concreteness, we simply show in Figure \ref{fig:MU.MI.MSUSY} the relationship of the $M_{U} \simeq M_{\chi_{B-L}}$ unification scale to that of the mass $M_{\chi_{3R}}$ of the second Wilson line in the ``left-right'' model case. This is plotted as a function of $M_{{\mbox{\scriptsize SUSY}}}$ -- defined below in Eq. \eqref{eq:358}. \begin{figure} \centering \includegraphics[scale=1.2]{figureOne.png} \caption{The $M_{U} \simeq M_{\chi_{B-L}}$ unification mass and $M_{\chi_{3R}}$ as functions of the SUSY scale in the ``left-right'' scenario.} \label{fig:MU.MI.MSUSY} \end{figure} In this paper, we turn to the analysis of the {\it generic} region of moduli space where equation \eqref{s1}, that is, $M_{U} \simeq M_{\chi_{B-L}} \simeq M_{\chi_{3R}}$, is satisfied, thereby {\it giving up exact gauge unification}. Be that as it may, to enable direct comparison of our new simultaneous Wilson line results with those from the split Wilson lines analyzed in~\cite{Ovrut:2015uea}, we continue to use the same notation for all quantities. In particular, it is important to use identical notation for the $B-L$ gauge coupling. Thus far in this paper, we have discussed the gauge parameter $g_{BL}$, which couples to the $\frac{I_{BL}}{2}$ generator. However, as was discussed in~\cite{Ovrut:2012wg}, this gauge coupling has to be properly normalized so as to unify with the other gauge parameters in the split Wilson line scenarios. The appropriate coupling was denoted $g_{BL}'$ and defined by \begin{equation} g_{BL}' = \sqrt{\frac{2}{3}} g_{BL} \ .
\end{equation} Even though the four gauge couplings, including $g_{BL}' $, will not unify in the simultaneous Wilson line scenario in this paper, we will continue to use this parameter when appropriate. Note that $g_{BL}'$ couples to the $\sqrt{\frac{3}{8}} I_{BL}$ generator and will appear in the RGEs. For quantities of physical interest, such as physical masses, $g_{BL}$ will be used. To fully understand the evolution of this model from ``unification'' to the electroweak scale, it should be noted that there are four relevant mass scales of interest. All four are described in the following:\\ \noindent\textbf{$M_U$: the unification scale and the common mass scale of the two Wilson lines.} Since, as discussed above, exact unification of the four gauge couplings no longer occurs for simultaneous Wilson lines, it is essential to give a justification--and an explicit definition--of what we mean by the ``unification mass'' in the present context. In~\cite{Ovrut:2015uea}, every phenomenologically valid point in the space of randomly chosen initial soft supersymmetry breaking parameters corresponds to an explicit unification mass $M_{U}$ and unified coupling $\alpha_{u}$. Both the unification scale and unification parameter vary for different valid points. The associated statistical histograms for these quantities are shown in Figures \ref{fig:a} and \ref{fig:b} respectively, along with their average values. These are found to be \begin{figure} \centering \includegraphics[scale=1.2]{thresholdHistogramUnificationScale.png} \caption{A histogram of the unification scale for the 53,512 phenomenologically valid points in the split Wilson line ``left-right'' unification scheme.
The average unification scale is $\left<M_U\right>=3.15\times10^{16}$ GeV.} \label{fig:a} \end{figure} \begin{figure} \centering \includegraphics[scale=1.2]{thresholdHistogramAlpha.png} \caption{A histogram of the unified gauge coupling for the 53,512 valid points in the split Wilson line ``left-right'' unification scheme. The average value of the unified gauge coupling is $\left<\alpha_u\right>=0.0498$.} \label{fig:b} \end{figure} \begin{equation} \left< M_{U} \right> = 3.15 \times 10^{16}~\mbox{GeV} ~~, \quad \left< \alpha_{u} \right> = 0.0498 \ . \label{s2} \end{equation} In this paper, we will refer to the average values $\left< M_{U} \right>$ and $\left< \alpha_{u} \right>$ as the ``unification'' mass and ``unified'' gauge coupling--and RG evolve the gauge parameters between this scale and the electroweak scale. The values of the four distinct couplings $\alpha_{3},~ \alpha_{2}, ~\alpha_{3R}$ and $ \alpha_{BL}^{\prime}$ at $\left< M_{U} \right>$ will be determined for each statistical choice of soft supersymmetry breaking parameters. Henceforth, for specificity, we will always take this unification scale and both Wilson line masses to be strictly identical; that is \begin{equation} \left<M_{U}\right> = M_{\chi_{3R}} = M_{\chi_{B-L}} \ . \label{hani1} \end{equation} % \\ \noindent\textbf{$M_{B-L}$: the mass at which the right-handed sneutrino VEV triggers $U(1)_{3R}\times U(1)_{B-L} \to U(1)_Y$ symmetry breaking.} Physically, this corresponds to the mass of the neutral gauge boson $Z_R$ of the broken symmetry and, therefore, the scale of $Z_R$ decoupling. Specifically \begin{equation} M_{Z_R} = M_{B-L}. \label{eq:249} \end{equation} Note that $M_{Z_R}$ itself depends on parameters evaluated at $M_{B-L}$. This results in a transcendental equation that can be solved for $M_{B-L}$ using numerical methods. The boundary condition relating the hypercharge coupling to the gauge couplings of $U(1)_{3R}$ and $U(1)_{B-L}$ at this scale is nontrivial.
It is given by \begin{equation} \label{eq:1.3R.BL} g_1 = \sqrt{\frac{5}{3}} g_{3R} \sin \theta_R = \sqrt{\frac{5}{2}} g_{BL}'\cos \theta_R \ , \end{equation} where \begin{equation} \cos \theta_R = \frac{g_{3R}}{\sqrt{g_{3R}^2+\frac{3}{2} g_{BL}'^2}} \ . \label{home1} \end{equation} As with the $B-L$ gauge coupling, the hypercharge coupling has been rescaled to allow for unification in the split Wilson line scenarios. The rescaled hypercharge gauge coupling, $g_1$, is defined by \begin{equation} g_1 = \sqrt{\frac{5}{3}} g_Y \ . \label{home2} \end{equation} \\ \noindent\textbf{$M_\text{SUSY}$: the soft SUSY breaking scale.} This is the scale at which all sparticles are integrated out, with the exception of the right-handed sneutrinos, which are associated with $B-L$ breaking and, therefore, are integrated out at the $B-L$ scale~\cite{Ovrut:2015uea}. While the sparticles do not all have the same mass, we use the scale of stop decoupling as a representative scale associated with all sparticles. That is, \begin{equation} M_\text{SUSY} = \sqrt{m_{\tilde t_1}\ m_{\tilde t_2}}. \label{eq:358} \end{equation} The scale of stop decoupling is the best choice for the SUSY scale since the stops give the dominant radiative corrections to phenomenologically important quantities such as the electroweak scale and the Higgs mass. See, for example,~\cite{Gamberini:1989jw} for more details. Note that the physical stop masses depend on quantities evaluated at $M_{\mbox{\scriptsize SUSY}}$. Therefore, this equation must be solved using iterative numerical methods for the correct value of $M_{\mbox{\scriptsize SUSY}}$.\\ \noindent\textbf{$M_{\text{EW}}$: the electroweak scale.} This is the well-known scale associated with the $Z$ and $W$ gauge bosons of the standard model (SM). We identify this scale with the mass of the $Z$ boson, as is conventional. That is, \begin{equation} M_{\text{EW}} = M_Z.
\end{equation} \section{The Physical Regimes and the RG Scaling of the Supersymmetric Parameters} Having defined the relevant mass scales, we turn to a brief discussion of the RG evolution that occurs between them. The gauge coupling RGEs are \begin{equation} \frac{d}{d t} \alpha_a^{-1} = -\frac{b_a}{2 \pi} \ , \end{equation} where $a$ indexes the associated gauge groups. The slope factors $b_a$ are different in the different scaling regimes. \begin{itemize} \item $\left<M_{U}\right> - {\rm max}(M_{\text{SUSY}}, M_{B-L})$: We refer to this regime as the ``$B-L$ MSSM regime'' because the particle content and gauge group are those of the $B-L$ MSSM. The $b_a$ factors are \begin{equation} b_3 = -3 ,\ b_2 = 1,\ b_{3R}= 7,\ b_{BL'}= 6 \ . \label{red1} \end{equation} \end{itemize} Note that the hierarchy between the SUSY and $B-L$ scales depends on the point chosen in the initial parameter space. The remaining two regimes depend on which of the following two cases occurs: $M_{B-L} > M_{\text{SUSY}}$--the ``right-side-up'' hierarchy--or $M_{\text{SUSY}} > M_{B-L}$--the ``upside-down'' hierarchy. \\ \noindent \underline{right-side-up hierarchy}: \begin{itemize} \item $M_{B-L} - M_{\text{SUSY}}$: In this regime, the gauge group and particle content are those of the MSSM plus two right-handed neutrino supermultiplets. The gauge couplings in this regime evolve with the slope factors \begin{equation} b_3 = -3,\ b_2 = 1,\ b_{1}=\frac{33}{5} \ . \end{equation} We refer to this regime as the ``MSSM'' regime. \item $M_{\text{SUSY}} - M_{\text{EW}}$: In this regime, the sparticles are integrated out, leaving the SM with an additional two sterile neutrinos. It has the well-known slope factors \begin{equation} b_3 = -7,\ b_2 = -\frac{19}{6},\ b_{1}=\frac{41}{10} \ . \label{eq:644} \end{equation} We refer to this regime as the ``SM'' regime.
\end{itemize} \noindent \underline{upside-down hierarchy}: \begin{itemize} \item $M_{{\mbox{\scriptsize SUSY}}} - M_{B-L}$: In this regime, sparticles, with the exception of the right-handed sneutrinos, are integrated out. However, $B-L$ is still a good symmetry. This yields a non-SUSY $SU(3)_C \times SU(2)_L \times U(1)_{3R} \times U(1)_{B-L}$ model, which also includes three generations of right-handed sneutrinos--the third of which acts as the $B-L$ Higgs. The slope factors are \begin{equation} b_3 = -7,\ b_2 = -\frac{19}{6},\ b_{3R}= \frac{53}{12}, \ b_{BL'} = \frac{33}{8}. \end{equation} \item $M_{B-L} - M_{\text{EW}}$: This regime is identical to the SM regime with slope factors given in Eq.~(\ref{eq:644}). \end{itemize} The boundary conditions imposed on the gauge couplings are that the three $\alpha_{i}$ couplings of the SM take their experimental values at $M_Z$~\cite{PDG}: \begin{equation} \label{eq:alpha.ew} \alpha_3(M_Z)=0.118,\ \alpha_2(M_Z)=0.0337,\ \alpha_1(M_Z)=0.0170 \ . \end{equation} These experimental values will then be scaled up through the various regimes: $M_{EW} \rightarrow M_{SUSY}$, $M_{SUSY} \rightarrow M_{B-L}$ (for the right-side-up hierarchy) or $M_{EW} \rightarrow M_{B-L}$, $M_{B-L} \rightarrow M_{SUSY}$ (for the upside-down hierarchy), followed by scaling through the $B-L$ MSSM regime to $\left<M_{U}\right>$ using the beta functions listed above. The ``splitting'' of $\alpha_{1}$ to $\alpha_{3R}$ and $\alpha^{\prime}_{BL}$ at $M_{B-L}$ is achieved using the boundary conditions \eqref{eq:1.3R.BL}, \eqref{home1}. In previous work~\cite{Ovrut:2012wg}, exact unification conveniently specified $\sin^2\theta_R\approx 0.6$. However, in the present scenario we are not requiring exact unification. Hence, this specificity is lost and $\sin^2\theta_R$ is a free parameter.
We proceed by simply setting % \begin{equation} \sin^2\theta_R=0.6 \label{burt1} \end{equation} % in order to make the results of this paper more directly comparable to those of~\cite{Ovrut:2012wg}. An example of the running of the gauge couplings from the electroweak scale to $\left<M_{U}\right>$, as well as the values of the couplings $\alpha_{3}(\left<M_{U}\right>),~ \alpha_{2}(\left<M_{U}\right>), ~\alpha_{3R}(\left<M_{U}\right>)$ and $ \alpha_{BL}^{\prime}(\left<M_{U}\right>)$, is presented in Figure \ref{fig:c} using a phenomenologically acceptable point in the space of initial soft SUSY breaking parameters. % \begin{figure} \centering \includegraphics[scale=1.2]{nonUnification.png} \caption{Running gauge couplings for one of the valid points in our main scan, discussed below, with $M_{\mbox{\scriptsize SUSY}}=2350$ GeV, $M_{B-L}=4670$ GeV and $\sin^{2}\theta_R = 0.6$. In this example, $\alpha_3(\left<M_U\right>)=0.0377$, $\alpha_2(\left<M_U\right>)=0.0377$, $\alpha_{3R}(\left<M_U\right>)=0.0433$, and $\alpha_{BL^\prime}(\left<M_U\right>)=0.0360$.} \label{fig:c} \end{figure} As with the gauge couplings, the Yukawa couplings run differently under the RG through each of the above scaling regimes. Before discussing them, we must first decide which Yukawa couplings are relevant to our analysis. As discussed in \cite{Ovrut:2015uea}, we begin by inputting the experimentally determined Yukawa couplings derived from the fermion masses at the electroweak scale. For the purposes of this paper, the SM Yukawa couplings, which are three-by-three matrices in flavor space, can all be approximated to be zero except for the three-three elements which give mass to the third-generation SM fermions. The experimentally determined initial conditions are \begin{equation} y_t=0.955,\quad y_b=0.0174,\quad y_\tau=0.0102. \label{home3} \end{equation} For details on relating fermion masses to Yukawa couplings, see~\cite{Djouadi:2005gi}.
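In each regime the one-loop RGEs integrate to $\alpha_a^{-1}(\mu) = \alpha_a^{-1}(\mu_0) - \frac{b_a}{2\pi}\ln(\mu/\mu_0)$, so the running of Figure \ref{fig:c} can be reproduced in a few lines. The following Python fragment is an illustrative sketch (not the code used for the scan; the function and variable names are ours), chaining the regimes of the right-side-up hierarchy with the slope factors and boundary conditions quoted above:

```python
from math import log, pi

def run_inv(alpha_inv, b, mu_lo, mu_hi):
    """One-loop running: alpha^-1 -> alpha^-1 - b/(2 pi) ln(mu_hi/mu_lo)."""
    return alpha_inv - b / (2 * pi) * log(mu_hi / mu_lo)

# Scales (GeV) for the example point of the figure, right-side-up hierarchy
MZ, MSUSY, MBL, MU = 91.19, 2350.0, 4670.0, 3.15e16
a3, a2, a1 = 1 / 0.118, 1 / 0.0337, 1 / 0.0170      # inverse couplings at MZ

# SM regime: MZ -> MSUSY
a3, a2, a1 = (run_inv(a3, -7.0, MZ, MSUSY),
              run_inv(a2, -19.0 / 6.0, MZ, MSUSY),
              run_inv(a1, 41.0 / 10.0, MZ, MSUSY))
# MSSM regime: MSUSY -> MBL
a3, a2, a1 = (run_inv(a3, -3.0, MSUSY, MBL),
              run_inv(a2, 1.0, MSUSY, MBL),
              run_inv(a1, 33.0 / 5.0, MSUSY, MBL))
# Split alpha_1 at MBL; inverse couplings scale inversely to the couplings.
# For sin^2(theta_R) = 0.6 both pieces are simply equal to alpha_1 there.
s2 = 0.6
a3R = a1 * (5.0 * s2 / 3.0)
aBL = a1 * (5.0 * (1.0 - s2) / 2.0)
# B-L MSSM regime: MBL -> <MU>
a3 = run_inv(a3, -3.0, MBL, MU)
a2 = run_inv(a2, 1.0, MBL, MU)
a3R = run_inv(a3R, 7.0, MBL, MU)
aBL = run_inv(aBL, 6.0, MBL, MU)

print(round(1 / a3, 4), round(1 / a2, 4), round(1 / a3R, 4), round(1 / aBL, 4))
# -> 0.0377 0.0377 0.0433 0.036
```

Note that for $\sin^2\theta_R = 0.6$ the matching at $M_{B-L}$ is trivial, $\alpha_{3R} = \alpha_{BL}' = \alpha_1$, which is also why this value allowed exact unification in the split Wilson line case.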
We use lower case $y$ to denote Yukawa couplings in the non-SUSY regime. The one-loop RGEs for these Yukawa couplings were presented in Appendix A of \cite{Ovrut:2015uea}, to which we refer the reader. The Yukawa couplings in Eqn. \eqref{home3} can be evolved to the $\left<M_{U}\right>$ scale as follows. For the right-side-up scenario, the RGEs from $M_{EW} \rightarrow M_{SUSY}$ are given in Eqs.~(A5)--(A7). There are non-trivial boundary conditions at the SUSY scale given by \begin{eqnarray} y_{t}(M_{\mbox{\scriptsize SUSY}})&=&Y_{t}(M_{\mbox{\scriptsize SUSY}})\sin\beta\nonumber\\ y_{b,\tau}(M_{\mbox{\scriptsize SUSY}})&=& Y_{b,\tau}(M_{\mbox{\scriptsize SUSY}})\cos\beta. \label{home4} \end{eqnarray} From $M_{SUSY} \rightarrow M_{B-L}$, these parameters evolve as in Eqs.~(A12)--(A13). The boundary condition at the $B-L$ scale is trivial. Finally, from $M_{B-L} \rightarrow \left<M_{U}\right>$ the $B-L$ MSSM RGEs are given in Eqs.~(A14)--(A16). For the upside-down case, one evolves from $M_{EW} \rightarrow M_{B-L}$ using Eqs.~(A5)--(A7), as previously. However, between $M_{B-L} \rightarrow M_{SUSY}$ the RGEs are given in Eqs.~(A8)--(A10). Finally, above the SUSY scale one uses the $B-L$ MSSM equations given in Eqs.~(A14)--(A16). \section{The Soft Supersymmetry Breaking Parameters} The remaining parameters of the $B-L$ MSSM are the dimensionful coefficients appearing in Eqn.~\eqref{home5} which are responsible for softly breaking supersymmetry. Their RGEs in each physical regime were presented in detail in \cite{Ovrut:2015uea} and will not be discussed in this paper. Here, we simply note that flavor and CP-violation experimental results place well-known limits on these quantities. Generically, the implications of these constraints are, approximately, as follows: \begin{itemize} \item Soft sfermion mass matrices are diagonal. \item The first two generations of squarks are degenerate in mass. \item The trilinear $a$-terms are diagonal.
\item The gaugino masses and trilinear $a$-terms are real. \end{itemize} It is typically assumed that the soft trilinear $a$-terms are proportional to the Yukawa couplings. That is, $a = Y A$ for each fermion species. Each $A$ is real and associated with the SUSY scale. Each $Y$ factor is a dimensionless matrix in family space. This condition effectively makes all non-third-generation trilinear terms insignificant. The above constraints are summarized as \begin{align} \begin{split} & m_{\tilde q}^2 = \text{diag}\left(m^{2}_{\tilde q_1},m^{2}_{\tilde q_1}, m^{2}_{\tilde q_3} \right)~~,~~ \tilde q = \tilde Q, \, {\tilde u}^{c}, \, {\tilde d}^{c} \ , \\ & m_{\tilde \ell}^2 = \text{diag}\left(m^{2}_{\tilde \ell_1},m^{2}_{\tilde \ell_2}, m^{2}_{\tilde \ell_3} \right)~,~~ \tilde \ell = \tilde L, \, {\tilde e}^{c} \ , \tilde \nu^c \ , \\ & a_f = Y_fA_f ~~~~,~~ \ f = t,\, b, \, \tau \ . \end{split} \end{align} These constraints can be implemented at the scale $\left<M_{U}\right>$, since RG evolution to the SUSY scale will not spoil these relations. Note that we do not assume that the first and second generation slepton masses are degenerate, unlike the squark masses, since this is not required by experiments. The degeneracy or non-degeneracy of the first and second generation sleptons will not, however, greatly affect the results of this paper. We now turn to the input values for the SUSY breaking parameters. Unlike the cases of the gauge and Yukawa couplings, these soft SUSY breaking parameters are not experimentally determined. In \cite{Ovrut:2015uea}, we introduced a novel way to analyze the initial parameter space of a SUSY model. We will follow the same approach in the present analysis of simultaneous Wilson lines. Specifically, we run a statistical scan of input parameters at the scale $\left<M_U\right>$. The randomly generated input parameters are then RG evolved to the SUSY scale.
We conduct an analysis of which of these high-scale initial conditions lead to realistic physics. Although the soft SUSY breaking Lagrangian contains over 100 dimensionful parameters, the phenomenologically motivated assumptions discussed briefly above only allow significant values for 24 of them. These, along with $\tan \beta$ and the sign of certain parameters, are presented in the first column of Table~\ref{tbl:scan}. The high-scale initial values of the 24 relevant dimensionful SUSY breaking parameters are determined as follows. We make the assumption that there is only one overall scale associated with SUSY breaking, requiring that these parameters be separated by no more than an order of magnitude, or so, from each other. To quantify this, we demand that any dimension one soft SUSY breaking parameter be chosen at random within the range \begin{equation} \left(\frac{M}{f}, Mf\right) \ , \end{equation} where $M$ is the overall scale of SUSY breaking and $f$ is a dimensionless number satisfying $1\leq f \lesssim 10$. We will further insist that any such parameter be evenly scattered around $M$; that is, that $M$ be the average of the randomly generated values. In \cite{Ovrut:2015uea}, we found that in the case of split Wilson line masses, the maximal number of phenomenologically acceptable ``valid'' initial points was obtained by statistically scattering within the interval defined by \begin{eqnarray} M=2700 \text{ GeV} ,\ f=3.3 \ . \label{cat1} \end{eqnarray} To allow direct comparison of the results of this paper to those of \cite{Ovrut:2015uea}, we will continue to use these values in the present context. This is shown in the second column of Table~\ref{tbl:scan}, along with the scattering interval associated with $\tan\beta$ and the allowed signs of various parameters.
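The scattering procedure above is easy to sketch. Sampling uniformly in the logarithm of the parameter is an illustrative choice on our part (the text fixes only the interval and the central value $M$); it makes $M$ the median and geometric mean of the draws:

```python
import math
import random

def sample_soft_parameter(M=2700.0, f=3.3, rng=random):
    """Draw one dimensionful soft parameter in (M/f, M*f).  Uniform-in-log
    sampling is an illustrative assumption; it centres the draws on M in
    the sense that M is their median and geometric mean."""
    return M * f ** rng.uniform(-1.0, 1.0)

random.seed(7)
draws = [sample_soft_parameter() for _ in range(100_000)]
geo_mean = math.exp(sum(math.log(x) for x in draws) / len(draws))

# All draws stay inside the interval, and their geometric mean is close to M
print(all(2700 / 3.3 <= x <= 2700 * 3.3 for x in draws))   # -> True
print(round(geo_mean))  # close to M = 2700
```

Any distribution centred on $M$ within the interval would serve equally well for the qualitative discussion here.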
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|} \hline Parameter & Range \\ \hline \hline \quad $m_{\tilde q_1} = m_{\tilde q_2}, \ m_{\tilde q_3}: \quad \tilde q = \tilde Q, \tilde u^c, \tilde d^c$ \quad & (820, 8900) GeV \\ $m_{\tilde \ell_1}, m_{\tilde \ell_2}, \ m_{\tilde \ell_3}: \quad \tilde \ell = \tilde L, \tilde e^c, \tilde \nu^c$ & \ (820, 8900) GeV \\ $m_{H_u}, m_{H_d}$ & (820, 8900) GeV \\ $\left|A_f\right|: \quad f = t,b, \tau$ & (820, 8900) GeV \\ $\left|M_a\right|: \quad a = 3R, BL^\prime, 2, 3$ & (820, 8900) GeV \\ $\tan \beta$ & (1.2, 65) \\ Sign of $\mu, a_f, M_a: \quad f=t,b,\tau \quad a=3R, BL^\prime, 2, 3$ & [-,+] \\ \hline \end{tabular} \end{center} \caption{The parameters and their ranges scanned in this study. The ranges for the soft SUSY breaking parameters are taken to be those of \cite{Ovrut:2015uea}.} \label{tbl:scan} \end{table}% \section{The Parameter Scan and Results} The technical details of our statistical scan over the interval of soft supersymmetry breaking parameters, the complete set of all RG equations, the evolution of all parameters under the RGEs and a discussion of the sparticle and Higgs masses were presented in detail in both the text and Appendices of \cite{Ovrut:2015uea}. We will not repeat them here and refer the reader to that paper. In this section, we will simply apply these methods to the more ``natural'' case of simultaneous Wilson lines satisfying Eqn.~\eqref{hani1}. As we did in \cite{Ovrut:2015uea}, we will perform a scan over 10 million random initial points, searching for those ``valid'' points that satisfy all present experimental lower bounds on the masses of the different types of SUSY particles and the $B-L$ gauge boson. These lower bounds are presented in Table~\ref{tbl:mass.bounds}.
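The sequence of checks applied to each scan point in the following analysis can be sketched schematically as follows (this is an illustration, not the scan code; the dictionary keys are hypothetical names for the computed low-scale quantities, masses in GeV):

```python
def is_valid(point):
    """Sequential validity checks for one scan point.  Bounds follow
    Table tbl:mass.bounds and the Higgs mass window quoted below."""
    return (point["BL_broken"]                       # B-L symmetry broken
            and point["M_ZR"] > 2500.0               # Z_R above its lower bound
            and point["EW_broken"]                   # electroweak symmetry broken
            and point["sparticle_bounds_ok"]         # all sparticle bounds satisfied
            and abs(point["m_h"] - 125.36) < 0.82)   # Higgs in the quoted window

good = dict(BL_broken=True, M_ZR=4100.0, EW_broken=True,
            sparticle_bounds_ok=True, m_h=125.1)
print(is_valid(good), is_valid(dict(good, M_ZR=2100.0)))   # -> True False
```

The ordering matters only for bookkeeping (it defines the red/yellow/green/purple/cyan/black classes used in the figures below); the final ``valid'' set is the intersection of all checks.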
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|} \hline Particle(s) & Lower Bound \\ \hline \hline Left-handed sneutrinos & 45.6 GeV \\ $\quad$Charginos, sleptons $\quad$& 100 GeV \\ Squarks, except for stop or sbottom LSP's & 1000 GeV \\ Stop LSP (admixture) & 450 GeV \\ Stop LSP (right-handed) & 400 GeV \\ Sbottom LSP & 500 GeV \\ Gluino & 1300 GeV \\ $Z_R$ & 2500 GeV \\ \hline \end{tabular} \end{center} \caption{The different types of SUSY particles and the lower bounds implemented in this paper.} \label{tbl:mass.bounds} \end{table}% In addition, we will impose the requirement that the Higgs mass be within the $2\sigma$ allowed range from the value measured at the ATLAS experiment at the LHC~\cite{Aad:2014aba,Chatrchyan:2013lba}: \begin{eqnarray} m_{h^0}=125.36\pm 0.82\mbox{ GeV}. \end{eqnarray} Since the initial soft SUSY breaking parameter space is 24-dimensional, graphically displaying the results is, in principle, very difficult. However, as was discussed in both the text and Appendices of \cite{Ovrut:2015uea}, much of the scaling behavior of the parameters is controlled by the two $S$-terms, $S_{BL^\prime}$ and $S_{3R}$, defined by \begin{eqnarray} \label{eq:S.BLA} S_{BL^\prime}&=&\mbox{Tr$\;$}(2m_{\tilde Q}^2-m_{\tilde u^c}^2-m_{\tilde d^c}^2-2m_{\tilde L}^2+m_{\tilde \nu^c}^2+m_{\tilde e^c}^2) \ ,\\ \label{eq:S.RA} S_{3R}&=&m_{H_u}^2-m_{H_d}^2+\mbox{Tr$\;$}\left(-\frac{3}{2}m_{\tilde u^c}^2+\frac{3}{2}m_{\tilde d^c}^2-\frac{1}{2} m_{\tilde \nu^c}^2+\frac{1}{2} m_{\tilde e^c}^2\right) \ , \end{eqnarray} where ``Tr'' implies a sum over the three families. It follows that our results can be reasonably displayed in the two-dimensional $S_{BL^\prime}(\left<M_{U}\right>)$ - $S_{3R}(\left<M_{U}\right>)$ plane.
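Since the $S$-terms of Eqs. \eqref{eq:S.BLA} and \eqref{eq:S.RA} are simple linear combinations of the soft masses, they are straightforward to compute for any scan point. A minimal sketch (the helper names are ours) also illustrates the point, made below, that universal soft masses sit exactly at the origin of the $S$-term plane:

```python
def tr(m):
    """Trace of a 3x3 soft mass-squared matrix (sum over the three families)."""
    return m[0][0] + m[1][1] + m[2][2]

def s_terms(m2):
    """The two S-terms; `m2` maps sfermion labels to 3x3 soft mass-squared
    matrices, with the Higgs entries plain numbers."""
    S_BL = (2 * tr(m2['Q']) - tr(m2['uc']) - tr(m2['dc'])
            - 2 * tr(m2['L']) + tr(m2['nuc']) + tr(m2['ec']))
    S_3R = (m2['Hu'] - m2['Hd'] - 1.5 * tr(m2['uc']) + 1.5 * tr(m2['dc'])
            - 0.5 * tr(m2['nuc']) + 0.5 * tr(m2['ec']))
    return S_BL, S_3R

def diag(m):
    """Diagonal 3x3 soft mass-squared matrix with common soft mass m (GeV)."""
    return [[m**2 if i == j else 0.0 for j in range(3)] for i in range(3)]

# Universal soft masses give vanishing S-terms ...
uni = {k: diag(2700.0) for k in ('Q', 'uc', 'dc', 'L', 'nuc', 'ec')}
uni.update(Hu=2700.0**2, Hd=2700.0**2)
print(s_terms(uni))                       # -> (0.0, 0.0)

# ... while splitting, e.g., the right-handed sneutrino masses does not
S_BL, S_3R = s_terms(dict(uni, nuc=diag(4000.0)))
print(S_BL > 0.0, S_3R < 0.0)             # -> True True
```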
We begin by presenting in Fig.~\ref{fig:1204} all 10 million initial points in the $S_{BL^\prime}(\left<M_{U}\right>)$ - $S_{3R}(\left<M_{U}\right>)$ plane in order to explore, sequentially, which points satisfy the first two fundamental checks that we require; that is, $B-L$ breaking and the experimental $Z_{R}$ mass lower bound. Points that do not break $B-L$ are shown in red, points that satisfy $B-L$ breaking but not the $Z_R$ mass bound are in yellow, and points that break $B-L$ symmetry and satisfy the $Z_R$ mass bound are shown in green. We find that out of the 10 million initial points, \begin{itemize} \item 1,629,001--the green and yellow points--break $B-L$ symmetry. \item 697,886--the green points--break $B-L$ with $M_{Z_{R}}>$ 2.5 TeV. \end{itemize} \begin{figure} \centering \includegraphics[scale=1.2]{bLScatterPlot.png} \caption{\small Points from the main scan in the $S_{BL^\prime}(\left<M_{U}\right>)$ - $S_{3R}(\left<M_{U}\right>)$ plane. Red indicates no $B-L$ breaking; in the yellow region $B-L$ is broken but the $Z_R$ mass is not above its 2.5 TeV lower bound, while green points have both $B-L$ breaking and $M_{Z_R}$ above this bound. The figure expresses the fact that, despite there being 24 parameters at the UV scale scanned in our work, $B-L$ physics is essentially dependent on only two combinations of them--the two $S$-terms. Note that the green points obscure some yellow and red points behind them. Similarly, the yellow points obscure some red points.} \label{fig:1204} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[scale=1.2]{eWScatterPlot.png} \caption{\small A plot encompassing the green region in Fig.~\ref{fig:1204}. The green points in this plot correspond to those which appropriately break $B-L$ symmetry, but which do not break electroweak symmetry. However, the purple points, in addition to breaking $B-L$ symmetry with an appropriate $Z_{R}$ mass, also break EW symmetry.
Note that a small density of green points that do not break EW symmetry is obscured by the purple points.} \label{fig:1205} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[scale=1.2]{boundsScatterPlot.png} \caption{\small A plot of the ``valid'' points in our main scan. The green and purple points correspond to the green and purple points in Fig.~\ref{fig:1205}. The cyan points additionally satisfy all sparticle mass lower bounds. The black points are fully valid. That means that, in addition to satisfying all previous checks, they reproduce the correct Higgs mass within the stated tolerance. The distribution of points indicates that while $B-L$ breaking prefers large $S$-terms, sfermion mass constraints prefer them to be not too large. Again, the cyan and black points may obscure a low density of other points not satisfying their constraint.} \label{fig:1148} \end{figure} This plot shows that $B-L$ breaking consistent with present experiments is a robust phenomenon. Furthermore, it shows the strong dependence of $B-L$ breaking and the $Z_R$ mass on the values of the $S$-terms. There is a line in the $S_{BL^\prime}$ - $S_{3R}$ plane--between the yellow and red regions--below which $B-L$ breaking is not possible. Note that this includes the origin, which corresponds to vanishing $S$-terms and, hence, universal soft masses. This shows that at least a small splitting from sparticle universality is required for $B-L$ breaking. Another line exists--between the green and yellow regions--below which $Z_R$ is always lighter than its experimental lower bound. Proceeding sequentially, we present in Fig.~\ref{fig:1205} the initial points in the $S_{BL^\prime}(\left<M_{U}\right>)$ - $S_{3R}(\left<M_{U}\right>)$ plane that, in addition to breaking $B-L$ with a $Z_{R}$ mass above the experimental bound, also break EW symmetry. The entire colored region encompasses the green points shown in Fig.~\ref{fig:1204}.
Those points that also break EW symmetry are displayed in purple. This plot indicates that most of the points that break $B-L$ with a $Z_{R}$ mass above the experimental bound also break EW symmetry. Note that a small density of green points that do not break EW symmetry is obscured by the purple points. Specifically, we find that out of the 697,886 green points that break $B-L$ with $M_{Z_{R}}>$ 2.5 TeV, \begin{itemize} \item 485,952--the purple points--also break EW symmetry. \end{itemize} In Fig.~\ref{fig:1148}, we reproduce Fig.~\ref{fig:1205} but now, in addition, sequentially indicate the points that are consistent with the remaining checks--that is, all lower bounds on sparticle masses satisfied and, finally, that they reproduce the Higgs mass within the experimental uncertainty. Points that appropriately break $B-L$ symmetry but do not satisfy electroweak symmetry breaking are still shown in green. Points that, additionally, do break electroweak symmetry are again shown in purple. Such points that also satisfy all lower bounds on sparticle masses, but do not match the known Higgs mass, are now indicated in cyan. Finally, points that satisfy all checks, including the correct Higgs mass, are shown in black. These are the ``valid'' points. The density of black points indicates that there is a surprisingly high number of initial parameters that satisfy all present low energy experimental constraints. Specifically, we find that out of the 485,952 purple points that appropriately break $B-L$ symmetry as well as EW symmetry, \begin{itemize} \item 228,278--the cyan points--also satisfy all sparticle lower mass bounds. \item 44,884--the black points--satisfy all sparticle lower mass bounds and also give the measured value of the Higgs mass. \end{itemize} The distribution of black points can be explained from the fact that, while $B-L$ breaking favors non-zero $S$-terms, very large $S$-terms can affect the RGE evolution of sfermion masses adversely.
Since the effect of the $S$-terms depends on the charge of the sfermion in question, some sfermions will become quite heavy while others become light or tachyonic. Therefore, in general, the valid points in our scan are a compromise between large $S$-terms, needed for a $Z_R$ mass above its lower bound, and small $S$-terms needed to keep the sfermion RGEs under control.\\ \noindent {\bf The LSP Spectrum:} An important property of the initial SUSY parameter space in determining low-energy phenomenology is the identity of the LSP. Recall that when R-parity is violated, no restrictions exist on the identity of the LSP; for example, it can carry color or electric charge. Our main scan provides an excellent opportunity to examine the possible LSP's and the probability of their occurrence. To this end, a histogram of possible LSP's is presented in Fig.~\ref{fig:1039}--with the possible LSP's indicated along the horizontal axis, and $\log_{10}$ of the number of valid points with a given LSP on the vertical axis. The notation here is a bit condensed, but is specified in more detail in Table~\ref{tbl:LSP}. The notation is devised to highlight the phenomenology of the different LSP's, specifically their decays\footnote {Recall that when R-parity is violated, as it is in this paper, the LSP can decay to lighter non-supersymmetric states.}, which are also presented in Table~\ref{tbl:LSP}. \begin{figure} \centering \includegraphics[scale=1.2]{lspHistogram.png} \caption{\small A histogram of the LSP's in the main scan showing the percentage of valid points with a given LSP. Sparticles which did not appear as LSP's are omitted. The y-axis has a log scale. The dominant contribution comes from the lightest neutralino, as one might expect. The notation for the various states, as well as their most likely decay products, are given in Table~\ref{tbl:LSP}.
Note that we have combined left-handed first and second generation sneutrinos into one bin, and that each generation makes up about 50\% of the LSP's. The same is true for the first and second generation right-handed sleptons and sneutrinos. } \label{fig:1039} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|} \hline \ Symbol \ & Description & Decay \\ \hline \hline $\tilde \chi^0_{\tilde B}$ & A bino-like neutralino, mostly rino ($\tilde W_R$) or mostly blino ($\tilde B'$). & \multirow{4}{*}{$\ell^\pm W^\mp$, $\nu Z$, $\nu h$} \\ $\tilde \chi^0_{\tilde W}$ & Mostly wino neutralino. & \\ $\tilde \chi_{\nu^c}$ & Mostly third-generation right-handed neutrino. & \\ $\tilde \chi^0_{\tilde H}$ & Mostly Higgsino neutralino. & \\ \hline $\tilde \chi^{\pm}_{\tilde W}$ & Mostly wino charginos. & \multirow{2}{*}{$\nu W^\pm$, $\ell^{\pm} Z$, $\ell^{\pm} h$} \\ $\tilde \chi^{\pm}_{\tilde H}$ & Mostly Higgsino charginos. & \\ \hline $\tilde g$ & Gluino. & $t \bar t \nu$, $t \bar b \ell^-$ \\ \hline $\tilde t_{ad}$ & Left- and right-handed stop admixture. & $\ell^+ b$ \\ \hline $\tilde t_{r}$ & Mostly right-handed stop (over 99\%). & $t \nu$, $\tau^+ b$ \\ \hline $\tilde q_R$ & Right-handed first and second generation squarks. & $\ell^+ j$, $\nu j$ \\ \hline $\tilde b_L$ & Mostly left-handed sbottom. & $b \nu$ \\ \hline $\tilde b_R$ & Mostly right-handed sbottom. & $b \nu$, $\ell^- t$ \\ \hline \multirow{2}{*}{$\tilde \nu_{L_{1,2}}$} & First and second generation left-handed sneutrinos. & \multirow{3}{*}{\parbox[t]{3cm}{$b \bar b$, $W^+ W^-$, $ZZ$, \\ $t \bar t$, $\ell'^+ \ell^-$, $hh$, $\nu \nu$}} \\ & LSP's are split evenly among these two generations. & \\ $\tilde \nu_{L_3}$ & Third generation left-handed sneutrino. & \\ \hline $\tilde \nu_{R_{1,2}}$ & First and second generation right-handed sneutrinos.
& $\nu \nu$ \\ \hline \multirow{2}{*}{$\tilde \tau_L$} & \multirow{2}{*}{Third generation left-handed stau.} & $t \bar b$, $W^- h$, \\ & & $e \nu$, $\mu \nu$, $\tau \nu$ \\ \hline \multirow{2}{*}{$\tilde e_R, \tilde \mu_R$} & First and second generation right-handed sleptons. & \multirow{2}{*}{$e \nu$, $\mu \nu$} \\ & LSP's are split evenly between these two generations. & \\ \hline $\tilde \tau_R$ & Third generation right-handed stau. & $t \bar b$, $e \nu$, $\mu \nu$, $\tau \nu$ \\ \hline \end{tabular} \end{center} \caption{The notation used for the states in Fig.~\ref{fig:1039} and their probable decays. More decays are possible in certain situations, depending on kinematics and the parameter space. Gluino decays are especially dependent on the NLSP, here assumed to be a neutralino. Here, the word ``mostly'' means it is the greatest contribution to the state. The symbol $\ell$ represents any generation of charged leptons. The left-handed sneutrino decay into $\ell'^+ \ell^-$ indicates a lepton flavor violating decay--that is, $\ell'^+$ and $\ell^-$ do not have the same flavor. Note that $j$ is a jet--indicating a light quark.} \label{tbl:LSP} \end{table} The most common LSP in our main scan is the lightest neutralino, $\tilde \chi_1^0$. However, not all $\tilde \chi_1^0$ states are created equal. LHC production modes for the lightest neutralino depend significantly on the composition of the neutralino--a bino LSP cannot be directly produced at the LHC, but the other neutralino LSP's can. This is the basis we use for the division of these states. The state $\tilde \chi_{\tilde B}^0$ designates a mostly rino or mostly blino neutralino, $\tilde \chi_{\tilde W}^0$ a mostly wino neutralino and $\tilde \chi_{\tilde H}^0$ a mostly Higgsino neutralino. Here, the subscript indicates the component giving the greatest contribution to that state.
As an unrealistic example, if $\tilde \chi_1^0$ is 34\% wino, 33\% bino and 33\% Higgsino, it is still labeled $\tilde \chi_{\tilde W}^0$. The chargino LSP's are similarly separated into wino-like and Higgsino-like charginos, and the stop and sbottom divisions are as in our earlier papers, references~\cite{Marshall:2014kea,Marshall:2014cwa}. Note that this notation for the stops, $\tilde t_{ad}$ and $\tilde t_r$, is only used to describe stop LSP's. For non-LSP stops, we use the conventional notation $\tilde t_1$ and $\tilde t_2$. To make Fig.~\ref{fig:1039} more readable, we have made an effort to combine bins that have similar characteristics. The first and second generation left-handed sneutrinos are combined into one bin, where about 50\% of the LSP's are first generation sneutrinos. The same holds true for the first and second generation right-handed sleptons, while the first generation right-handed sneutrino is always chosen to be lighter than the second generation right-handed sneutrino. This similarity between the first and second generation sleptons is expected, since their corresponding Yukawa couplings are not large enough to distinguish them through the RG evolution. For both sleptons and squarks, more LSP's exist for the third generation--as expected from the effects of the third-generation Yukawa couplings, which tend to decrease sfermion masses in the RGE evolution. The myriad of possible LSP's leads to a rich collider phenomenology. This phenomenology is not the main focus of this paper, but it is worthwhile to briefly review it here. In models where R-parity violation is parameterized by bilinear R-parity breaking, such as the $B-L$ MSSM, SUSY particles are still pair produced and cascade decay to the LSP. At this point, the bilinear R-parity violating terms allow the LSP to decay.
While only a few studies have been done on the phenomenology of the minimal $B-L$ MSSM~\cite{FileviezPerez:2012mj, Perez:2013kla,Marshall:2014kea,Marshall:2014cwa}, there have been several works on the phenomenology of explicit bilinear R-parity violation, which has some similarities to this model. See~\cite{Porod:2000hv, Hirsch:2003fe, Graham:2012th, Graham:2014vya} for general discussions. Table~\ref{tbl:LSP} provides some basic information on the most probable decay modes of each of the possible LSP's. Note that $\ell$ signifies a charged lepton of any generation and $j$ a jet--implying a light quark. Some interesting aspects of Table~\ref{tbl:LSP} were discussed in \cite{Ovrut:2015uea}.\\ \noindent {\bf The Non-LSP Spectrum:} To get a sense of the non-LSP spectrum, we produce histograms of the masses of the sparticles associated with the valid points in the main scan. In the following histograms, there will be quite a few pairs of fields that will be highly degenerate; these will be represented by only one curve. This includes $SU(2)_L$ sfermion partners, which are only split by small electroweak terms. First generation squarks are also degenerate with second generation squarks with the same isospin, due to phenomenological constraints. A consequence of this is that all first and second generation left-handed squarks are highly degenerate. \begin{figure} \centering \includegraphics[scale=0.6]{spectrumHistogram0.png} \includegraphics[scale=0.6]{spectrumHistogram1.png} \includegraphics[scale=0.6]{spectrumHistogram2.png} \caption{\small Histograms of the squark masses from the valid points in the main scan. The first- and second-family left-handed squarks are shown in the top-left panel. Because they come in $SU(2)_{L}$ doublets, and the first- and second-family squarks must be degenerate, all four of these squarks have nearly identical mass and the histograms coincide. The first- and second-family right-handed squarks are shown in the top-right panel.
The right-handed down squarks are generally lighter than their up counterparts because of the effect of the $U(1)_{3R}$ charge in the RGEs. The third family squarks are shown in the bottom panel. } \label{fig:1052} \end{figure} \begin{figure} \centering \includegraphics[scale=0.6]{spectrumHistogram3.png} \includegraphics[scale=0.6]{spectrumHistogram4.png} \includegraphics[scale=0.6]{spectrumHistogram5.png} \caption{\small Histograms of the sneutrino and slepton masses associated with the valid points in the main scan. First- and second-family entries are in the top-left panel, along with the third family left-handed sneutrino. Staus are in the top-right panel with mass-ordered labeling. In the bottom panel, the first- and second-family right-handed sneutrinos are labeled such that $\tilde \nu_{R1}$ is always lighter than $\tilde \nu_{R2}$.} \label{fig:hist.sleptons} \end{figure} \begin{figure} \centering \includegraphics[scale=0.6]{spectrumHistogram6.png} \includegraphics[scale=0.6]{spectrumHistogram7.png} \includegraphics[scale=0.6]{spectrumHistogram8.png} \includegraphics[scale=0.6]{spectrumHistogram9.png} \caption{\small The CP-even component of the third-family right-handed sneutrino, heavy Higgses, neutralinos, charginos and the gluino in the valid points from our main scan. The CP-even component of the third-generation right-handed sneutrino is degenerate with $Z_R$. The $\tilde \chi^0_5$ and $\tilde \chi^0_6$ are typically Higgsinos.} \label{fig:1054} \end{figure} Figure~\ref{fig:1052} shows histograms of the squark masses. Because they come in $SU(2)_{L}$ doublets and the first- and second-family squarks must be degenerate, all four of the first- and second-family left-handed squarks have nearly identical mass and the histograms coincide. The degeneracy of first- and second-family squarks is also evident in the right-handed squark masses.
The first and second family right-handed down squarks are generally lighter than their up counterparts because of the effect of the $U(1)_{3R}$ charge in the RGEs. Figure~\ref{fig:hist.sleptons} shows histograms of the masses of the sneutrinos and sleptons. The third-family sleptons and left-handed sneutrinos tend to be lighter because of the influence of the $\tau$ Yukawa coupling. The right-handed sneutrinos are labeled such that $\tilde \nu_{R_1}$ is always lighter than $\tilde \nu_{R_2}$. Figure~\ref{fig:1054} presents histograms of the CP-even component of the third-generation right-handed sneutrino, the heavy Higgses, the neutralinos, the charginos, and the gluino. The CP-even component of the third-generation right-handed sneutrino is degenerate with $Z_R$. It is always heavier than 2.5 TeV because we have imposed the collider bound on $Z_R$. The neutralinos and charginos are labeled from lightest to heaviest as is canonical in SUSY models. The $\tilde \chi^0_5$ and $\tilde \chi^0_6$ are typically Higgsinos. We emphasize that all of the above histograms are calculated using our main scan; that is, for the choice of $M=2700$ GeV and $f=3.3$. We remind the reader that these values were chosen in \cite{Ovrut:2015uea} so as to maximize the number of valid points and are repeated in this paper to enable simple comparison with the split Wilson mass results. However, the mass scale of these histograms is heavily dependent on the choice of $M$. Smaller (larger) values for $M$ will move the above distributions distinctly toward lighter (heavier) sparticle masses. Plots of the physical particle spectra for two valid points are presented in Fig.~\ref{fig:156}.
These two points are selected from the pool of valid points from the main scan based on the simple criterion that they are the valid points with the largest right-side-up and upside-down hierarchies, respectively; that is, the largest splittings between the $B-L$ and SUSY scales in the two possible hierarchies. \begin{figure}[!htbp] \centering \includegraphics[scale=0.6]{maxHierarchySpectrum.png} \includegraphics[scale=0.6]{minHierarchySpectrum.png} \caption{\small Two sample physical spectra with a right-side-up hierarchy and an upside-down hierarchy. The $B-L$ scale is represented by a black dot-dash-dot line. The SUSY scale is represented by a black dashed line. The electroweak scale is represented by a solid black line. The label $\tilde u_L$ actually labels the nearly degenerate $\tilde u_L$ and $\tilde c_L$ masses. The labels $\tilde u_R$, $\tilde d_L$ and $\tilde d_R$ similarly label the nearly degenerate first- and second-family masses.} \label{fig:156} \end{figure} Plots of the high-scale boundary values for two sample valid points from our main scan are presented in Fig.~\ref{fig:160}. While these look like Fig.~\ref{fig:156}, they do not correspond to physical masses but, rather, mass parameters at $\left<M_{U}\right>$. These two valid points are selected from the pool of valid points from the main scan based on a simple criterion. The two plots show the valid points with the largest and smallest amount of splitting in the initial values of the scalar soft mass parameters. The amount of splitting is defined as the standard deviation of the initial values of the 20 scalar soft mass parameters. \begin{figure}[!htbp] \centering \includegraphics[scale=0.6]{largestStdSpectrum.png} \includegraphics[scale=0.6]{smallestStdSpectrum.png} \caption{\small Example high-scale boundary conditions at $\left<M_{U}\right>$ for the two valid points with the largest and smallest amount of splitting.
The label $\tilde Q_1$ actually labels the nearly degenerate $\tilde Q_1$ and $\tilde Q_2$ soft masses. The labels $\tilde u^c$ and $\tilde d^c$ similarly label the nearly degenerate first- and second-family masses.} \label{fig:160} \end{figure} \section{Fine-Tuning} \begin{figure} \centering \includegraphics[scale=1.2]{fineTuningHistogram.png} \caption{The blue line in the histogram shows the amount of fine-tuning required for valid points in the main scan of the simultaneous Wilson line $B-L$ MSSM. Similarly, the green line specifies the amount of fine-tuning necessary for the valid points of the R-parity conserving MSSM--computed using the same statistical procedure as for the $B-L$ MSSM with $M=2700$ GeV and $f=3.3$. The $B-L$ MSSM shows slightly less fine-tuning, on average, than the MSSM.} \label{fig:fineTuning} \end{figure} A detailed discussion of the little hierarchy problem, fine-tuning and the Barbieri-Giudice (BG) method of quantifying the degree of fine-tuning was presented in \cite{Ovrut:2015uea}. Here, we simply give the results in the simultaneous Wilson line scenario discussed in this paper. Unlike the quantities presented above, which can differ substantially from the split Wilson line results in \cite{Ovrut:2015uea}, the BG fine-tuning histogram for simultaneous Wilson lines in the $B-L$ MSSM is very similar to that of the split Wilson line scenario. Be that as it may, for completeness, we present it here--along with the fine-tuning histogram for the R-parity conserving MSSM--in Fig.~\ref{fig:fineTuning}. Note that the highest percentage of valid points require fine-tuning of the order of 1/4,000 -- 1/5,000. However, there remain a small number of points with fine-tuning less than 1/1,000. As in the split Wilson line scenario, the simultaneous Wilson line $B-L$ MSSM manifests somewhat less fine-tuning than the R-parity conserving MSSM.
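The BG measure referenced above can be made concrete with a short numerical sketch. The quadratic ``toy'' expression for $M_Z^2$ below is purely illustrative--it is not the $B-L$ MSSM minimization condition--and the function and parameter names are our own:

```python
import math

def bg_fine_tuning(mz2, params, rel_step=1e-4):
    """Barbieri-Giudice sensitivity Delta = max_i |d ln M_Z^2 / d ln a_i|,
    estimated with central finite differences. Fine-tuning is quoted as 1/Delta."""
    delta = 0.0
    for i, a in enumerate(params):
        up = list(params); up[i] = a * (1 + rel_step)
        dn = list(params); dn[i] = a * (1 - rel_step)
        d_ln_mz2 = math.log(mz2(up) / mz2(dn))
        d_ln_a = math.log(up[i] / dn[i])
        delta = max(delta, abs(d_ln_mz2 / d_ln_a))
    return delta

# Illustrative toy relation (NOT the B-L MSSM one) with mildly cancelling
# inputs mu and m_soft, in GeV: M_Z^2 = -2 mu^2 + 2 m_soft^2.
toy_mz2 = lambda p: -2.0 * p[0] ** 2 + 2.0 * p[1] ** 2

delta = bg_fine_tuning(toy_mz2, [1000.0, 1004.0])
fine_tuning = 1.0 / delta
```

Sharper cancellations between the inputs drive $\Delta$ up and the quoted fine-tuning $1/\Delta$ down; this is how values of order 1/4,000 arise in Fig.~\ref{fig:fineTuning}.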
\section{String Threshold Corrections} As discussed in the Introduction and Section III, and graphically illustrated for a valid initial point in Figure \ref{fig:c}, the four gauge couplings of the $B-L$ MSSM do not unify at $\left<M_{U}\right>$ for simultaneous Wilson line masses; that is, when \begin{equation} \left<M_{U}\right> = M_{\chi_{3R}} = M_{\chi_{B-L}} \ . \label{hani2} \end{equation} % However, as described in the Introduction, the $B-L$ MSSM arises on the observable orbifold plane of heterotic $M$-theory compactified on a Schoen Calabi-Yau threefold with $\pi_{1}={\mathbb{Z}}_{3} \times {\mathbb{Z}}_{3}$ and a holomorphic vector bundle with $SU(4) \subset E_{8}$ structure group. That is, the $B-L$ MSSM is a low energy effective theory of heterotic string theory. Hence, as discussed in numerous papers \cite{Kaplunovsky:1992vs, Kaplunovsky:1995jw, Mayr:1993kn, Dienes:1995sq, Dienes:1996du, Dolan:1992nf, Kiritsis:1996dn, Nilles:1997vk, Nilles:1998uy, Ghilencea:2001qq, Klaput:2010dg, deAlwis:2012bm, Bailin:2014nna}, it is expected that at {\it string tree level} all four gauge couplings, along with the dimensionless gravitational parameter \begin{equation} \sqrt{8\pi\frac{G_{N}}{\alpha^{\prime}}} \label{hani3} \end{equation} where $G_{N}$ is Newton's constant and $\alpha^{\prime}$ is the string Regge slope, unify to a single parameter $g_{\rm string}$ at a ``string unification'' scale \begin{equation} M_{\rm string}=g_{\rm string} \times 5.27 \times 10^{17}~ \mbox{GeV} \ . \end{equation} The string coupling parameter $g_{\rm string}$ is set by the value of the dilaton, and is typically of ${\cal{O}}(1)$. A common value in the literature, see for example \cite{Dienes:1996du,Bailin:2014nna,Nilles:1998uy}, is $g_{\rm string}= 0.7$ which, for specificity, we will use henceforth. 
Therefore, we take $\alpha_{\rm string}$ and the string unification scale to be \begin{equation} \alpha_{\rm string}=\frac{g_{\rm string}^{2}}{4\pi} = 0.0389, \quad M_{\rm string}=3.69 \times 10^{17}~ \mbox{GeV} \label{hani4} \end{equation} respectively. Note that $M_{\rm string}$ is approximately an order of magnitude larger than $\left<M_{U}\right>$. Below $M_{\rm string}$, however, the couplings begin to evolve according to the RGEs of effective field theory. This adds another--a fourth--scaling regime to the three discussed at the beginning of Section IV. This new regime is \begin{itemize} \item $M_{\rm string}$ -- $\left<M_{U}\right>$: The effective field theory in this regime remains that of the $B-L$ MSSM with the couplings ${\alpha}_{a}$, $a=3,2,3R, BL^{\prime}$ and the slope factors \begin{equation} b_3 = -3~,~b_2 = 1~, ~b_{3R}= 7~,~ b_{BL'}= 6 \label{hani5} \end{equation} as in Eqn.~\eqref{red1}. However, the RGEs are now altered to become\footnote{The RGE for the $a$-th gauge coupling generically contains the term $k_{a} \alpha_{\rm string }^{-1}$ on the right-hand side, where $k_{a}$, $a=3,2,3R,BL^{\prime}$ are the associated string affine levels. However, these are all unity for the scaled gauge couplings of the $B-L$ MSSM.} \begin{equation} 4\pi {\alpha_{a}}^{-1}(p)=4\pi \alpha_{\rm string }^{-1}-b_{a}\ln\big(\frac{p^2}{M_{\rm string}^{2}}\big) + {\tilde{\Delta}}_{a}. \label{hani6} \end{equation} Note that the one-loop running couplings no longer unify exactly at $M_{\rm string}$. Rather, they are ``split'' by dimensionless threshold effects. These arise predominantly from massive genus-one string modes that contribute to the correlation function $\left<F^{a}_{\mu\nu}F^{a \mu\nu}\right>$ and, hence, to the ${\alpha}_{a}$ gauge couplings. This is depicted graphically in Figure \ref{fig:rehan}.
\end{itemize} \begin{figure} \centering \includegraphics[scale=.75]{standalone-3.png} \caption{The worldsheet correlation function $\left<F^{a}_{\mu\nu}F^{a\mu\nu}\right>$ on the genus-one string worldsheet is a typical example of heavy string threshold correction terms that need to be calculated.} \label{fig:rehan} \end{figure} Recall that in this paper we have found $44,884$ valid initial points in the space of soft supersymmetry breaking dimensionful couplings--each of which satisfies all low energy phenomenological criteria. For {\it each} of these points, we can calculate--by scaling from the electroweak scale to $\left<M_{U}\right>$--the four gauge couplings $\alpha_{3}(\left<M_{U}\right>),~ \alpha_{2}(\left<M_{U}\right>), ~\alpha_{3R}(\left<M_{U}\right>)$ and $\alpha_{BL}^{\prime}(\left<M_{U}\right>)$. Note that in this analysis, we have defined an ``average'' SUSY scale $M_{SUSY}$, the $B-L$ breaking scale $M_{B-L}$, as well as an ``average'' unification scale $\left<M_{U}\right>$, in (28), (24) and (22) respectively. The RGEs have been scaled through the requisite intermediate regimes with the appropriate beta-function coefficients. That is, we have already taken into account the predominant threshold effects associated with each of these scales. For statistically ``average'' valid initial points--the vast majority of the phenomenologically acceptable initial soft SUSY breaking parameters--possible additional threshold effects arising from the ``splitting'' of particle masses around these scales are expected to be relatively small--and will be systematically ignored relative to the heavy string thresholds. That is, the ${\tilde{\Delta}}_{a}$, $a=3, 2, 3R, BL^{\prime}$ parameters in (50) will closely approximate the four heavy string gauge thresholds.
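To make the extraction of these thresholds explicit, the one-loop relation \eqref{hani6} can be inverted numerically. In the sketch below, the inverse couplings at $\left<M_{U}\right>$ are illustrative placeholders only--the actual values differ for each valid initial point in the scan:

```python
import math

# Fixed string-scale inputs quoted in the text.
g_string = 0.7
alpha_string = g_string**2 / (4 * math.pi)   # ~0.0390
M_string = g_string * 5.27e17                # ~3.69e17 GeV
M_U = 3.15e16                                # GeV, average unification scale

# One-loop slope factors b_a of the B-L MSSM in this regime.
b = {"3": -3, "2": 1, "3R": 7, "BL'": 6}

# Inverse gauge couplings alpha_a^{-1} at <M_U>: ILLUSTRATIVE placeholders,
# not the output of the actual scan.
alpha_inv = {"3": 24.0, "2": 25.5, "3R": 26.0, "BL'": 26.5}

def string_threshold(a):
    """Solve the one-loop RGE for the heavy string threshold:
    Delta_a = 4 pi / alpha_a(M_U) - 4 pi / alpha_string
              + b_a * ln(M_U^2 / M_string^2)."""
    return (4 * math.pi * alpha_inv[a]
            - 4 * math.pi / alpha_string
            + b[a] * math.log(M_U**2 / M_string**2))

for a in b:
    print(a, round(string_threshold(a), 2))
```

Repeating this for each of the $44,884$ valid points, with the inverse couplings obtained by scaling up from the electroweak scale, yields the statistical distribution of the four thresholds.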
With this input, using eqn.~\eqref{hani5}, $p=\left<M_{U}\right>=3.15 \times 10^{16}~$GeV and $\alpha_{\rm string}$, $M_{\rm string}$ given in \eqref{hani4}, one can calculate the associated heavy string thresholds from~\eqref{hani6}; that is, \begin{equation} {\tilde{\Delta}}_{a}=4\pi {\alpha_{a}}^{-1}(\left<M_{U}\right>)-4\pi \alpha_{\rm string }^{-1}+b_{a}\ln\big(\frac{\left<M_{U}\right>^2}{M_{\rm string}^{2}}\big) \label{hani7} \end{equation} for each $a=3,2,3R, BL^{\prime}$. Of course, these thresholds are expected to differ for each different valid initial point. It follows that one should analyze the thresholds statistically--graphing the dispersion of each as one runs over the $44,884$ valid initial points. The histograms associated with each of these four thresholds are presented in Figure~\ref{fig:thresh1}. \begin{figure} \centering \includegraphics[scale=1.2]{thresholdHistogram3.png} \includegraphics[scale=1.2]{thresholdHistogramL.png} \includegraphics[scale=1.2]{thresholdHistogramR.png} \includegraphics[scale=1.2]{thresholdHistogramBl.png} \caption{\small Histograms of each of the heavy string thresholds ${\tilde{\Delta}}_{a}$, $a=3,2,3R,BL'$ arising from the $44,884$ phenomenologically valid points of our statistical survey. Each threshold value is plotted against the percentage of valid points giving rise to it. The bin width is 0.1.} \label{fig:thresh1} \end{figure} To better understand the relationship of these different thresholds, we find it useful to plot all four of them in a single histogram. This is presented in Figure~\ref{fig:thresh2}. \begin{figure} \centering \includegraphics[scale=1.2]{thresholdHistogramAll.png} \caption{\small All four histograms in Fig.
\ref{fig:thresh1} combined into a single graph to elucidate their relative occurrence and values.} \label{fig:thresh2} \end{figure} It is also useful to calculate the string threshold associated with the Abelian hypercharge coupling $\alpha_{1}$ defined, using \eqref{eq:1.3R.BL} and \eqref{home1}, by \footnote{As with the other $B-L$ MSSM gauge couplings, this scaled hypercharge coupling has string affine level $k_{1}=1$.} % \begin{equation} \alpha^{-1}_{1}=\frac{3}{5}\alpha^{-1}_{3R}+\frac{2}{5} \alpha^{-1}_{BL'} \ . \label{rehan2} \end{equation} % The associated statistical histogram is given in Figure \ref{fig:thresh3}. \begin{figure} \centering \includegraphics[scale=1.0]{thresholdHistogramY.png} \caption{\small Histogram of the string hypercharge threshold ${\tilde{\Delta}}_{1}$ arising from the $44,884$ phenomenologically valid points of our statistical survey. Each threshold value is plotted against the percentage of valid points giving rise to it. The bin width is 0.1.} \label{fig:thresh3} \end{figure} It is well-known \cite{Kaplunovsky:1992vs,Dienes:1996du,Dolan:1992nf,Mayr:1993kn} that each string threshold breaks into two parts, \begin{equation} {\tilde{\Delta}}_{a}= {\mathbb{Y}}+\Delta_{a} \ , \label{rehan3} \end{equation} where $ {\mathbb{Y}}$ is a ``universal'' piece independent of the gauge group and $\Delta_{a}$ records the contributions of all massive string states as they propagate around the genus-one string worldsheet torus diagram shown in Figure \ref{fig:rehan}. Note again that all string affine levels are unity in our normalization. As discussed in \cite{Dienes:1996du}, an explicit calculation of the universal piece $\mathbb{Y}$ is difficult due to the presence of infrared divergences. However, the $\Delta_{a}$ threshold terms, although moduli dependent, can be directly calculated from string theory using a formulation given by V. Kaplunovsky in \cite{Kaplunovsky:1992vs} and by Kaplunovsky and Louis in \cite{Kaplunovsky:1995jw}. 
Such calculations are heavily model dependent \cite{Dienes:1995sq,Klaput:2010dg,Bailin:2014nna,Kiritsis:1996dn} and, to date, have not been carried out in the $B-L$ MSSM context. Be that as it may, it is useful to present our experimental predictions for ${\tilde{\Delta}}_{1}-{\tilde{\Delta}}_{2}$, ${\tilde{\Delta}}_{1}-{\tilde{\Delta}}_{3}$, and ${\tilde{\Delta}}_{2}-{\tilde{\Delta}}_{3}$ --from which information about $\Delta_{3}$, $\Delta_{2}$ and $\Delta_{1}$ can be inferred. The statistical results for these three quantities are presented in Figure \ref{fig:thresh4}. \begin{figure} \centering \includegraphics[scale=1.2]{thresholdHistogramY3.png} \includegraphics[scale=1.2]{thresholdHistogramYL.png} \includegraphics[scale=1.2]{thresholdHistogramL3.png} \caption{\small Histograms of our statistical predictions for the values of ${\tilde{\Delta}}_{1}-{\tilde{\Delta}}_{2}$, ${\tilde{\Delta}}_{1}-{\tilde{\Delta}}_{3}$, and ${\tilde{\Delta}}_{2}-{\tilde{\Delta}}_{3}$. The third of these plots looks different because the quantity $\tilde\Delta_2-\tilde\Delta_3$ falls in a very narrow range. The bin width in all three plots is 0.1.} \label{fig:thresh4} \end{figure} It would be very interesting to compare these results to direct calculations using \cite{Kaplunovsky:1992vs,Kaplunovsky:1995jw} in the $B-L$ MSSM context. We will not attempt that in the present paper. \section{Conclusion} Our previous work on the $B-L$ MSSM used the constraint of exact gauge unification. That is, we chose $M_U\simeq M_{\chi_{B-L}} > M_{\chi_R}$ and, by appropriately separating the Wilson line masses, forced the gauge couplings to unify exactly at the scale $M_U$. This hypothesis enhanced the specificity of the calculation. For example, it set all gauge couplings to a unified value $\alpha_{u}$ at $M_{U}$ and determined $\sin^2\theta_R$ at the SUSY transition mass. 
However, the splitting of the Wilson lines limits the analysis to a restricted region of CY moduli space where the associated two-cycles have considerably different radii. Furthermore, it introduces a new scaling regime between $M_{U} \simeq M_{\chi_{B-L}}$ and $M_{\chi_R}$--in our example a ``left-right'' model with a specific spectrum. For soft SUSY breaking masses in the TeV range, this constraint was reasonable since the separation between the Wilson line masses--and, hence, the difference in the two-cycle radii--was less than an order of magnitude. However, if one tries to take larger values for the soft SUSY breaking masses, the difference in the Wilson line masses grows rapidly. For example, for $10^{4}$ TeV soft masses, the Wilson lines must be separated by a factor of $10^{3}$--reducing the calculation to an extremely unnatural region of CY moduli space. For even larger values of the soft masses, the calculation breaks down completely. It follows that if one wishes to discuss the $B-L$ MSSM for large soft SUSY breaking masses--see our discussion below--then it becomes necessary to analyze the theory for the more natural case of simultaneous, or nearly simultaneous, Wilson lines. Hence, in this paper, we have carried out the analysis of the $B-L$ MSSM under the more natural hypothesis of equal Wilson line scales; that is, we assumed that $M_U\simeq M_{\chi_{B-L}}\simeq M_{\chi_R}$. Although this hypothesis does not allow exact gauge unification, it is well-known--see the previous section--that string threshold corrections can be responsible for such non-unification, providing a consistent theoretical framework for this analysis. Our results indicate that a substantial region of the initial soft SUSY breaking parameter space is consistent with all low-energy experimental data--see Figs.~\ref{fig:1204}-\ref{fig:1148}. 
We also presented our results for important physical quantities: histograms of the LSP species--see Fig.~\ref{fig:1039}--histograms of the sparticle masses--Figs.~\ref{fig:1052}-\ref{fig:1054}--plots of sample mass spectra at the SUSY scale--see Fig.~\ref{fig:156}--and plots of sample initial mass data at the average unification scale--Fig.~\ref{fig:160}. Although different in some details from the previous exact gauge unification analysis, the results presented in this paper share many features with it. In addition, they show that all low-energy phenomenological constraints can be satisfied for a wide range of initial soft SUSY breaking parameters over a more natural and generic region of CY moduli space. The fundamental difference between the calculations in this paper and those of our previous work is that the gauge couplings of the $B-L$ MSSM can no longer unify at an average mass associated with the inverse CY radius. Rather, the gauge couplings remain split at that scale--a splitting that allows one to compute the heavy string threshold corrections due to heavy states on the genus-one string worldsheet. These string threshold corrections and their differences are analyzed statistically, and predict what one should obtain using a more direct worldsheet formalism. It would be interesting to see the results of such formal string calculations done in the heterotic $B-L$ MSSM context. Finally, having achieved these more generic results, we are now in a position to arbitrarily raise the mass scale of the soft SUSY breaking parameters--without encountering the above mentioned difficulties. For example, it is now possible to raise the soft SUSY breaking scale up to $10^{12}$ or $10^{13}$ GeV--values consistent with the mass scale in ``split supersymmetry'' theories \cite{ArkaniHamed:2004fb, Giudice:2004tc, ArkaniHamed:2004yi}.
As we will show in a subsequent publication, this freedom will allow us to discuss a new theory of inflation within the context of the $B-L$ MSSM. This work will appear elsewhere \cite{inPreparation}. \section*{Acknowledgments} We would like to thank Sogee Spinner for helpful conversations. R. Deen and B.A. Ovrut are supported in part by the DOE under contract No. DE-SC0007901. A. Purves was supported by DOE contract No. DE-SC0007901 when much of this work was carried out.
\section{INTRODUCTION} \IEEEPARstart{T}{he} development of small-size, light-weight, low-power and low-cost satellites has witnessed significant growth in the last few years. An important class of small satellites, which is being used by academia, industry and government as a platform for space exploration and research, is CubeSats. These satellites are a special category of nanosatellites defined in terms of 10 cm $\times$ 10 cm $\times$ 10 cm sized units (approx. 1.3 kg each) called ``U\textquotesingle s''. Although a 1U CubeSat can be extended to higher configurations (i.e., 1.5, 2, 3, 6, and 12U) if more capability is required, it is crucial to resist the creep toward larger and more expensive CubeSat missions, as this defeats the primary goal of maintaining low-cost approaches as the cornerstone of CubeSat development \cite{academy}. Small satellites, deployed as a sensor network in space, have an advantage over conventional satellites in space exploration because of their potential to perform coordinated observations, high-resolution measurements, and identification of Earth\textquotesingle s assets, inclusive of its space environment. Low-latency communications between these satellites result in improved availability for observation, telecommunications and reconnaissance applications \cite{conney}. One fundamental reason for the shift from using large and expensive satellites to multiple low-cost small satellites is the resulting inherent intelligence of the distributed multi-satellite nodes, which has potential for autonomous operations. Another driving motivation for the development of large constellations of small satellites is the desire for rapid revisit rates or persistence from low-earth-orbit (LEO) satellites. Such satellites have the potential to provide inter-satellite data relay, providing a highly survivable mesh of nodes capable of relaying data before downlink to ground stations \cite{conney}.
Enabling cooperation among these distributed multi-satellite nodes requires inter-satellite communication (ISC). Presently, the dominant research and development for implementing inter-satellite communication links (ISLs) consists of using either radio frequency (RF) or highly directed lasers. The latter will require a highly accurate pointing satellite control system, while the former is not suitable for systems with sensitive electronics onboard or in applications where high data rates are required due to the limited available spectrum. It is also mechanically challenging to deploy large parabolic antennas on small satellites equipped with RF radios in order to support high data rates. The pointing accuracy needed for laser communication presents a challenge to the form factor of the pico-/nano class of satellites due to the stringent size, mass, and power (SMaP) restrictions imposed by the platform. Lasers produce a narrow and focused beam of light that could fall out of the field-of-view (FOV) of a small satellite receiver due to slight movements. Furthermore, for formation flying systems in LEO, the ISLs are much shorter than links between satellites in geostationary orbit; thus, the use of lasers and the highly accurate pointing they provide can be considered superfluous \cite{wood}, \cite{amanor}. To minimize the SMaP constraints imposed by the platform, along with the need for reduced pointing accuracy and to achieve high data transmission rates, we propose a visible light communication (VLC) subsystem for the pico-/nano class of satellites for ISC. These multiple small satellite missions will benefit from VLC\textquotesingle s ability to transmit higher data rates with smaller, light-weight nodes, while avoiding the usual interference problems associated with RF, as well as the apparent radio spectrum scarcity below the 6 GHz band \cite{rajagopal}. Furthermore, the electronics required for achieving precision pointing accuracy for laser communication systems will be avoided.
With approximately 300 THz of free bandwidth available for VLC, high-capacity data transmission rates could be provided over short distances using arrays of LEDs. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{fig_1.JPG} \caption{Network of Small Satellites: Adapted from \cite{amanor2}} \label{fig 1} \end{figure} This paper is an extensive treatment of the preliminary study presented in \cite{amanor} and \cite{amanor4s}. In these conference papers, we proposed a high-level description of a VLC system for ISC among small satellites. In this paper, we develop the physical layer requirements and design concepts for a VLC-based communication subsystem for ISC in small satellite networks. The proposed system addresses the SMaP constraints of small satellites and the challenges associated with RF and laser ISLs in small satellite networks. The remainder of the paper is structured as follows. Section II covers background information on related work on ISC. The design considerations and proposed system description are presented in Sections III and IV, respectively. Section V examines the VLC link physical model, while Section VI treats the solar background noise model. The characteristics of the VLC modulated signal are discussed in Section VII, followed by an example power budget design in Section VIII. The performance evaluation of an analytical model of the proposed system is treated in Section IX, and concluding remarks are presented in Section X. \section{BACKGROUND} Most of the launched and projected missions of multiple small satellite systems employed RF or laser ISLs \cite{radakrishnan2}. Among these missions, the most ambitious one is QB-50, which uses RF ISLs and consists of a network of CubeSats that will study the Earth\textquotesingle s upper thermosphere, measuring oxygen levels and electron behavior, among others.
All 50 CubeSats were supposed to be launched together in February 2016, but due to the unavailability of the launch vehicle, the plan was revised and 28 CubeSats were deployed from the International Space Station (ISS) in May 2017, followed by the launch of another 8 CubeSats from an Indian Polar Satellite Launch Vehicle (PSLV) in late May 2017. Notwithstanding the dominance of RF and laser ISLs in most multiple small satellite missions, recent advancements in LED technology have triggered renewed interest in VLC as a viable alternative to RF and laser for LOS communication links of moderate scope. Visible light communication systems exploit the optical bandwidth available within the visible light band (i.e., 380 nm to 750 nm) for data communications. LED-based transmitter sources have a relative advantage over RF and laser transmission sources due to their low power requirements, light weight, and small footprint. In \cite{wood}, the feasibility of LEDs for short-range ISLs was examined for a hypothetical low-end ISL. The work discussed methods for minimizing background illumination, but did not quantitatively evaluate solar background illumination and its impact on ISL performance. The fundamental analysis of a VLC system using LED lights for indoor applications was discussed in \cite{komine}. In \cite{nakajima}, a VLC system using LEDs was successfully demonstrated between a satellite and the ground. The ShindaiSat, Shinshu University Satellite, is a VLC experimental satellite for on-orbit technology demonstration using LED light as a communication link. To achieve this feat, the ShindaiSat used a relatively large micro-satellite bus measuring 400 mm $\times$ 400 mm $\times$ 450 mm and weighing 35 kg. This contrasts sharply with the form factor of CubeSats. In \cite{arruego}, LEDs were evaluated and flown in orbit for intra-satellite communication between internal assemblies onboard satellites.
These lamps combine very low power consumption with an extremely long operational life, maintaining the same chromaticity throughout their operation without significant changes. The above studies and on-orbit demonstrations of LED-based VLC underscore the potential feasibility of this technology for ISC. However, investigating the feasibility of LEDs for ISC without a quantitative evaluation of the solar background radiation that reaches the receiver field-of-view (FOV) and its impact on the SNR leaves a research gap that needs to be addressed. This is because the radiative flux that the Sun delivers to the Earth system, including LEO, within the visible light spectrum (i.e., 380 nm - 750 nm) is about 595 W/$\m^2$. This background illumination is large enough to ``drown'' the received information signal from an LED source. By modeling the solar background power and numerically evaluating its impact on the ISL, this paper seeks to fill the research gap in previous studies, and to demonstrate the feasibility of using LED-based visible light links for ISC in future multiple small satellite space missions. \begin{figure*}[!t] \centering \includegraphics[width=12cm,height=2.7cm]{fig_2.JPG} \caption{Fraunhofer Lines within Visible Light Spectrum \cite{britannica}: \textsl{wavelength} (nm)} \label{fig2} \end{figure*} \section{KEY DESIGN CONSIDERATIONS} In general, a satellite, whether small or large, is composed of several functional subsystems including communications, attitude determination and control, tracking, telemetry and command (TTC), as well as electrical power supply. For small satellites, it is crucial for the subsystem\textquotesingle s designer to take into account the overall system\textquotesingle s SMaP constraints in order to avoid any violation of the stringent size and volume restrictions. The characteristics of the operational environment must also be considered.
In this section, we summarize the critical design issues of an LED-based VLC system for ISC among small satellites. \subsection{APPROACH TO MITIGATE BACKGROUND RADIATION} The radiative energy per cross-sectional unit area that the Sun emits to the Earth system across all wavelengths of the electromagnetic spectrum, based on Planck\textquotesingle s radiation formula, is 1360 W/$\m^2$. The equivalent radiative energy within the visible light band is approximately 595 W/$\m^2$. This background power is severe enough to degrade the SNR at the receiver and thus poses a threat to reliable visible light communications. However, at certain frequencies within the visible band, portions of the Sun\textquotesingle s output spectrum are absorbed by chemical elements present in the Sun, and in the process these elements leave a characteristic fingerprint on the solar spectrum in the form of dark lines (i.e., Fraunhofer lines). The power, and hence the resulting noise, from the Sun at these frequencies is reduced. Placing photodetectors behind optical filters selected to match Fraunhofer lines can enable clear signal detection even when the detector is directly facing the Sun \cite{wood}. At the most intense Fraunhofer lines, the solar background falls below 10 percent of its continuum values \cite{gelbwachs}. This work is inspired by the background radiation mitigation concept espoused in \cite{wood} and the receiver design approach proposed in \cite{barry} to develop a noise-resistant inter-satellite communication system for small satellites using LED(s) at the transmitter and a photodetector at the receiver. The LED is chosen such that its peak transmission energy (or peak wavelength) lies at the center of a Fraunhofer line, while the optical front-end of the receiver consists of a filter whose passband matches the Fraunhofer line spectral width. Some prominent Fraunhofer lines are illustrated in Fig. 2.
\subsection{DOPPLER EFFECTS} A fundamental problem that needs to be addressed for ISC is the Doppler effect and its impact on the ISLs. A Doppler shift causes the received signal frequency of a source to differ from the sent frequency due to motion that increases or decreases the distance between the source and receiver. For our proposed application, the background signal from the Sun and the information carrying signal from the transmitting satellite will both experience some form of Doppler shift at the receiver due to the continuous motion of the receiver that either increases or decreases the distance between the receiver and the two sources. The impact of Doppler effects on the performance of inter-satellite links in LEO has been studied in \cite{liu2} and \cite{yang}. The normalized wavelength shift between a transmitting and receiving satellite is given by \cite{liu2}: \begin{equation}\Delta \lambda= \frac{\lambda_s}{c} \frac {d}{dt}| r(t,\tau)| \end{equation} where \begin{equation}\Delta \lambda= {\lambda_d - \lambda_s} \end{equation} and \begin{math}\Delta \lambda \end{math} stands for the normalized Doppler wavelength shift; \begin{math}\lambda_d \end{math} and \begin{math}\lambda_s \end{math} are the wavelengths of the received signal and emitted signal, respectively. The term \begin{math}\ r(t,\tau) \end{math} represents the actual propagation range of the signal from the source satellite to the destination. It follows from (1) that a normalized Doppler shift of 0.015 nm corresponds to spacecraft moving at a relative velocity of 9 km/s for a 500 nm emitted light signal, while a shift of 0.05 nm corresponds to a relative velocity of 30 km/s. Shifts smaller than 0.001 nm are assumed to be insignificant \cite{kerr}. In this work, we focus on intra-orbit ISLs, where the distance between satellites is fixed and Doppler effects are negligible.
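As a quick numerical check of (1), the shifts quoted above follow directly from the range-rate relation; the short Python sketch below (an illustration, not part of the paper's analysis) reproduces them:

```python
# Normalized Doppler wavelength shift for a LEO inter-satellite link,
# following eq. (1): delta_lambda = (lambda_s / c) * d|r|/dt,
# where d|r|/dt is the relative (range-rate) velocity.

C = 3.0e8  # speed of light (m/s)

def doppler_shift_nm(lambda_s_nm: float, rel_velocity_ms: float) -> float:
    """Wavelength shift (nm) for emitted wavelength lambda_s (nm)
    and relative velocity (m/s)."""
    return lambda_s_nm * rel_velocity_ms / C

# 500 nm source, 9 km/s relative velocity -> 0.015 nm shift
print(doppler_shift_nm(500.0, 9e3))   # 0.015
# 500 nm source, 30 km/s relative velocity -> 0.05 nm shift
print(doppler_shift_nm(500.0, 30e3))  # 0.05
```

Both values match the figures quoted in the text, and both comfortably exceed the 0.001 nm significance threshold of \cite{kerr}.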
\subsection{PROPOSED LED PEAK WAVELENGTHS FOR TRANSMISSION} A limited number of Fraunhofer lines offer a natural low background noise channel for VLC. The wavelengths and bandwidths of the most intense Fraunhofer lines are shown in Table I. We selected Fraunhofer lines with bandwidths greater than 250 GHz in order to ensure that Doppler shifts are accommodated within the Fraunhofer linewidth. The LED signal transmissions will be centered on these Fraunhofer lines, and the bandwidths are broad enough to accommodate any Doppler shifts that may cause marginal shifts of the targeted Fraunhofer line without the need for additional on-board electronics to provide retuning. The Fraunhofer lines in Table I possess significant bandwidth that can be exploited for high data rate ISLs. In particular, the Fraunhofer lines at 393.3682 nm and 396.8492 nm wavelengths are broad enough to guarantee stable transmissions even in the presence of sizable Doppler shifts. We did not consider Fraunhofer lines in the range 490 nm to 590 nm in the selection of potential frequencies (shown in Table I) for the proposed system due to their proximity to the Sun\textquotesingle s peak, which is close to the 500 nm wavelength mark. For Si PIN photodiodes, transmissions along Fraunhofer lines below 390 nm wavelength may suffer from poor detector responsivity, and therefore would not be appropriate for applications where very weak signals reach the detector. The proximity of these lines to the ultraviolet region also poses a hazard to terrestrial applications, but this may not be an issue for space applications.
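The bandwidth values in Table I are consistent with the usual narrow-band conversion \begin{math}\Delta f \approx c\Delta\lambda/\lambda^2\end{math}; the short sketch below (illustrative only, with line data taken from Table I) reproduces two of the tabulated entries:

```python
# Convert a Fraunhofer line's spectral width (nm) into an equivalent
# bandwidth (GHz) via the narrow-band relation
# delta_f = c * delta_lambda / lambda^2.

C = 3.0e8  # speed of light (m/s)

def linewidth_ghz(center_nm: float, width_nm: float) -> float:
    lam = center_nm * 1e-9            # line center (m)
    dlam = width_nm * 1e-9            # spectral width (m)
    return C * dlam / lam**2 / 1e9    # bandwidth in GHz

# Ca line at 393.3682 nm, 2.0253 nm wide -> ~3926.6 GHz
print(round(linewidth_ghz(393.3682, 2.0253), 1))
# H-alpha line at 656.2808 nm, 0.4020 nm wide -> ~280.0 GHz
print(round(linewidth_ghz(656.2808, 0.4020), 1))
```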
\begin{table}[ht] \caption{The Most Intense Solar Fraunhofer Lines with Bandwidth greater than 250 GHz \cite{lang, sethi}} \centering \begin{tabular}{c c c c} \hline\hline Wavelength\ & Spectral Width\ & Bandwidth\ & Element\ \\ nm & nm & GHz \\[0.5ex] \hline 381.5851 & 0.1272 & 262.1 & Fe \\ 382.0436 & 0.1712 & 351.9 & Fe \\ 382.5891 & 0.1519 & 311.3 & Fe \\ 383.2310 & 0.1685 & 344.2 & Mg \\ 383.8302 & 0.1920 & 391.0 & Mg \\ 385.9922 & 0.1554 & 312.9 & Fe \\ \cellcolor{gray!25}393.3682 & \cellcolor{gray!25}2.0253 & \cellcolor{gray!25}3926.6 & \cellcolor{gray!25}Ca \\ \cellcolor{gray!25}396.8492 & \cellcolor{gray!25}1.5467 & \cellcolor{gray!25}2946.3 & \cellcolor{gray!25}Ca \\ 410.1748 & 0.3133 & 558.7& H \\ 434.0475 & 0.2855 & 454.6& H \\ 486.1342 & 0.3680 & 467.2& H \\ 656.2808 & 0.4020 & 280.0& H \\ [1ex] \hline \end{tabular} \label{table:table1} \end{table} \begin{figure*}[!b] \centering \includegraphics[width=15cm,height=5.0cm]{fig_3.JPG} \caption{Conceptual Architecture of Full Duplex VLC System for ISC for Small Satellites} \label{fig3} \end{figure*} \subsection{LED SPECIFICATION} \begin{table}[!ht] \caption{Spectral Colors Emitted By Specific Wavelengths} \centering \begin{tabular}{c c c c} \hline\hline Wavelength\ & Spectral Width\ & Bandwidth\ & Color\ \\ nm & nm & GHz \\[0.5ex] \hline \cellcolor{violet!15}381.5851 & \cellcolor{violet!15}0.1272 & \cellcolor{violet!15}262.1 & \cellcolor{violet!30} Violet\\ \cellcolor{violet!15}382.0436 & \cellcolor{violet!15}0.1712 & \cellcolor{violet!15}351.9 & \cellcolor{violet!30} Violet\\ \cellcolor{violet!15}382.5891 & \cellcolor{violet!15}0.1519 & \cellcolor{violet!15}311.3 & \cellcolor{violet!30} Violet\\ \cellcolor{violet!15}383.2310 & \cellcolor{violet!15}0.1685 & \cellcolor{violet!15}344.2 & \cellcolor{violet!30} Violet\\ \cellcolor{violet!15}383.8302 & \cellcolor{violet!15}0.1920 & \cellcolor{violet!15}391.0 & \cellcolor{violet!30} Violet\\ \cellcolor{violet!15}385.9922 & \cellcolor{violet!15}0.1554 & 
\cellcolor{violet!15}312.9 & \cellcolor{violet!30} Violet \\ \cellcolor{blue!25}393.3682 & \cellcolor{blue!25}2.0253 & \cellcolor{blue!25}3926.6 & \cellcolor{blue!55} Blue\\ \cellcolor{blue!25}396.8492 & \cellcolor{blue!25}1.5467 & \cellcolor{blue!25}2946.3 & \cellcolor{blue!55} Blue\\ \cellcolor{blue!25}410.1748 & \cellcolor{blue!25}0.3133 & \cellcolor{blue!25}558.7& \cellcolor{blue!55} Blue \\ \cellcolor{blue!25}434.0475 & \cellcolor{blue!25}0.2855 & \cellcolor{blue!25}454.6& \cellcolor{blue!55} Blue \\ \cellcolor{blue!25}486.1342 & \cellcolor{blue!25}0.3680 & \cellcolor{blue!25}467.2& \cellcolor{blue!55} Blue \\ \cellcolor{red!35}656.2808 & \cellcolor{red!35}0.4020 & \cellcolor{red!35}280.0& \cellcolor{red!55} Red\\ [1ex] \hline \end{tabular} \label{table:table_colors} \end{table} For our proposed system, LEDs with peak wavelengths centered in the blue and/or red wavelengths can be utilized in the transmitter. Table II is an approximation of the spectral colors emitted by the wavelengths in Table I. Note that the boundaries depicted in Table II are not precise. The color of an LED is determined by the wavelength of the light emitted, which in turn depends on the semiconductor materials used in the manufacture of the LED. Thus, it is technically possible to manufacture LEDs for most wavelengths in the visible light band \cite{whitepaper}. The technology for creating red and green LEDs is generally viewed as mature. Aluminium gallium arsenide (AlGaAs) and gallium phosphide (GaP) can be used to manufacture red and green LEDs, respectively. With the development of aluminum indium gallium phosphide (AlInGaP), gallium nitride (GaN), and indium gallium nitride (InGaN), LEDs can be produced for a broad range of colors in the visible light spectrum. These new materials are now replacing GaP and AlGaAs as the semiconducting materials of choice for most commercial LEDs.
These materials are durable and can withstand high temperatures, which makes them ideal for space applications. \subsection{PHOTODETECTORS} Several factors influence the choice of a detector for a given application. Key among these are the light power level, the wavelength range of the incident light, the electrical bandwidth of the detector amplifier, and the mechanical requirements of the application, such as size or temperature range of operation. Also important are cost and the space environment. Most often, these criteria will limit the options for a given application. Avalanche photodiodes (APDs) and PIN photodiodes have been used in many experimental studies on free space optical communication, including VLC \cite{wood, barry, lee, kharraz}. APDs are advantageous over PIN photodiodes in applications where the dominant noise is the electrical noise in the pre-amplifier, rather than shot noise \cite{barry}. They have superior advantages in fiber optic systems, where the only source of shot noise is the photodetector dark current and the signal itself is weak. However, in free space optical communication systems, the background light is generally large enough that the resulting shot noise overshadows the thermal noise produced within the amplifiers and load resistors internal to the detection system (primarily in the front end), even with a PIN diode, thus limiting the usefulness of APDs for free-space optical wireless communication systems. \section{PROPOSED SYSTEM DESCRIPTION} Fig. 3 depicts a block diagram representation of the proposed LED-based VLC system for ISC. The main sub-systems in the transmitter block are the modulator, optical driver and LED emitter; while the optical front-end, Si PIN photodetector (PD), transimpedance amplifier (TIA) and demodulator constitute the main elements in the receiver. The primary concept of the design is the utilization of Fraunhofer lines as natural low background noise channels for signal transmission.
The design of the receiver optical front-end follows the approach proposed in \cite{barry} in order to take advantage of a high gain, wide FOV front-end. We provide further elaboration on the proposed transmitter and receiver front-end architectures. \subsection{TRANSMITTER FRONT-END CONCEPT} A single high-power LED or a bank of LEDs in series can be employed in VLC transmitter systems using On-Off Keying (OOK), which relies mainly on switching the light source on and off. However, OOK is a binary modulation scheme with low spectral efficiency. Thus, OOK can only provide limited data rates. Generally, optical transmitter front-ends using single high-power LEDs (or banks of LEDs) are not optimized for higher-order modulation and multi-carrier schemes. In \cite{haas}, the authors proposed an LED(s) transmitter front-end that is optimized for high data rates and can be used for higher-order modulation and multi-carrier schemes. They employed a discrete power-level stepping technique, which allows utilization of the full dynamic range of LEDs by avoiding non-linearity issues. In this paper and for our simulations, we assume the transmitter front-end consists of a single LED or a bank of low-power LEDs with an equivalent amount of output optical power. \subsection{RECEIVER FRONT-END OPTICS} The receiver front-end is designed as shown in Fig. 4. A narrow-band optical filter is bonded to the outer surface of a hemispherical concentrator in order to achieve a high gain, wide FOV optical front-end. It was shown in \cite{barry} that, under certain conditions, the gain of the hemispherical front-end is nearly omni-directional, which makes it a more useful configuration to deploy in a wide FOV application. It is also more robust to receiver movements and FOV misalignments compared to a planar optical front-end.
We use a PIN photodiode because the background light is generally large enough that the resulting shot noise dominates the thermal noise produced within the electrical front-end. \begin{figure}[!ht] \centering \includegraphics[width=3.5in]{fig_4.JPG} \caption{Receiver Front-End Architecture} \label{fig 4} \end{figure} \section{VLC LINK PHYSICAL MODEL} We can model the line-of-sight (LOS) link between any two adjacent satellites in a trailing formation or within a cluster according to the generic LOS VLC scenario shown in Fig. 5. The distance between the LED emitter and detector is denoted by\begin{math}\ d \end{math}, while the detector aperture radius and physical area are represented by\begin{math}\ r \end{math} and\begin{math}\ A_{\pd} \end{math}, respectively. The angle of incidence with respect to the receiver axis is\begin{math}\ \psi \end{math}, and the angle of irradiance with respect to the transmitter perpendicular axis is\begin{math}\ \varphi \end{math}. Angle\begin{math}\ \varphi \end{math} is referred to as the viewing angle, as it indicates how focused the beam is when emitted from the LED. \begin{figure}[!ht] \centering \includegraphics[width=3.5in]{fig_5.JPG} \caption{LOS VLC Link Model: Adapted from \cite{cui}} \label{fig 5} \end{figure} In LOS optical links, the relationship between the received optical power\begin{math}\ P_r \end{math} and the transmitted optical power\begin{math}\ P_t \end{math} can be represented by \cite{komine, amanor} \begin{equation}\ P_r = H(0)P_t \end{equation} The quantity\begin{math}\ H(0) \end{math} represents the channel DC gain, and it is the single most important quantity for characterizing LOS optical links.
As shown in \cite{barry2}, the channel gain in LOS optical links can be estimated fairly accurately by considering only the LOS propagation path and can be expressed as \begin{equation} H(0)= \begin{cases} \frac{(m+1)}{2\pi d^2} A_{\pd} \cos^m(\varphi) T_s g(\psi) \cos(\psi), &: 0\leq\psi\leq \psi_c\\ 0,&: \psi > \psi_c \end{cases} \end{equation} where \begin{math}\ m \end{math} is the order of Lambertian emission (i.e., a number which describes the shape of the radiation characteristics). The filter transmission coefficient (or gain) and concentrator gain are represented by the parameters \begin{math}\ T_s \end{math} and \begin{math}\ g(\psi) \end{math}, respectively, while the concentrator FOV semi-angle is denoted by \begin{math}\ \psi_c \end{math}. The Lambertian order \begin{math}\ m \end{math} is related to the semi-angle at half illuminance of an LED, \begin{math}\ \phi_{\frac{1}{2}} \end{math}, and is given by \cite{cui}, \cite{barry2} \begin{equation}\ m = \frac{-\ln2}{ \ln(\cos(\phi_{\frac{1}{2}}))} \end{equation} By using a hemispherical lens (i.e., a non-imaging concentrator) with internal refractive index\begin{math}\ n \end{math}, we can achieve a gain of \cite{barry} \begin{equation} g(\psi)= \begin{cases} \frac{n^2}{\sin^2 \psi_c}, &: 0\leq\psi\leq \psi_c\\ 0,&: \psi > \psi_c \end{cases} \end{equation} A hemisphere can achieve \begin{math}\ \psi_c \approx \frac{\pi}{2} \end{math} and \begin{math}\ g(\psi) \approx n^2 \end{math} over its entire FOV provided the hemisphere is sufficiently large in relation to the detector, i.e., \begin{math}\ R > n^2r \end{math}, where \begin{math}\ r \end{math} and \begin{math}\ R \end{math} represent the detector and hemisphere radii, respectively \cite{barry2}. For a given receiver FOV, the effective signal-collection area \begin{math}\ A_{\eff}(\psi) \end{math} of the detector is given by \begin{math}\ A_{\eff}(\psi) = A_{\pd} \cos \psi \end{math}, where \begin{math}\ |\psi| < FOV \end{math}.
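To make (4)-(6) concrete, the sketch below evaluates the channel DC gain for the geometry later assumed in Table IV (a 0.5 km link, 7.84 $\cm^2$ detector, $30^o$ half-power semi-angle, $30^o$ irradiance and $15^o$ incidence angles, $n = 1.5$, $35^o$ FOV semi-angle, $T_s = 1$); this is an illustrative calculation, not a result quoted from the paper:

```python
import math

# Channel DC gain H(0) for a LOS VLC link, following eqs. (4)-(6).
# Parameter values are the ones assumed in Table IV; the numbers
# printed here are illustrative, not results reported in the paper.

def lambertian_order(half_angle_deg: float) -> float:
    # m = -ln 2 / ln(cos(phi_1/2)), eq. (5)
    return -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))

def concentrator_gain(n: float, fov_deg: float) -> float:
    # g(psi) = n^2 / sin^2(psi_c) inside the FOV, eq. (6)
    return n**2 / math.sin(math.radians(fov_deg))**2

def channel_dc_gain(d, area, irr_deg, inc_deg, half_angle_deg, n, fov_deg, Ts=1.0):
    if inc_deg > fov_deg:          # outside the FOV the gain is zero
        return 0.0
    m = lambertian_order(half_angle_deg)
    g = concentrator_gain(n, fov_deg)
    return ((m + 1) / (2 * math.pi * d**2) * area
            * math.cos(math.radians(irr_deg))**m
            * Ts * g * math.cos(math.radians(inc_deg)))

H0 = channel_dc_gain(d=500.0, area=7.84e-4,       # 0.5 km link, 7.84 cm^2 detector
                     irr_deg=30.0, inc_deg=15.0,  # irradiance / incidence angles
                     half_angle_deg=30.0, n=1.5, fov_deg=35.0)
print(H0)                    # channel DC gain, ~9.6e-9
print(-10 * math.log10(H0))  # optical path loss, ~80 dB
```

The resulting optical path loss of roughly 80 optical dB over 0.5 km illustrates why the background-noise budget of the following sections matters.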
For non-Lambertian emission sources, (4) does not hold. For such sources, where the LEDs have particular beam shaping components, knowledge of the reshaped beam spatial distribution function \begin{math}\ g_s (\theta) \end{math} is needed in order to calculate the path loss \cite{cui}. Following from (3), on a dB scale the average received optical power \begin{math}\ P_r \end{math} is the transmitted power reduced by the optical path loss, i.e., \begin{math}\ P_r(\dB) = P_t(\dB) + 10\log_{10}H(0) \end{math}, where the channel has an optical path loss of \begin{math}\ -10\log_{10}H(0) \end{math} optical decibels. The electrical signal component at the receiver side is given by \cite{lee} \begin{equation}\ S = (\gamma P_r)^2 \end{equation} where \begin{math}\ \gamma \end{math} is the photodetector responsivity. Depending on the desired transmitter power, an array of standard LEDs can be used as the transmitter. When multiple spatially distributed LEDs transmit to a single receiver, we can obtain the total optical power by summing (or superimposing) the received power of all single LOS links within the receiver FOV \cite{cui}. For the situation where two signals from different satellites are within the receiver's FOV, we can distinguish between these signals in the medium access control (MAC) layer. The MAC layer provides functionality for coordinating access to the shared wireless channel and utilizing protocols that facilitate the quality of communications over the medium. Interested readers are referred to \cite{radakrishnan2}, where various multiple access techniques applicable to ISC for small satellite systems are discussed.
\begin{table*}[!t] \caption{Comparison of Different Models for Solar Flux Estimation} \centering \begin{tabular}{c c c c c c c c} \hline\hline No.\ &Wavelength Interval\ & Observed Solar Flux \ & Solar Flux for a BB Sun \ & Proposed Model for Solar\ \\ - & (nm) & @ 1 AU (W/$\m^2$) & @ 5780K (W/$\m^2$) & Flux @ 1 AU (W/$\m^2$) \\[0.5ex] \hline 1 & 240 - 400 & 118 & 158 & 157.18 \\ 2 & 400 - 800 & 643 & 630 & 627.98 \\ 3 & 800 - 1310 & 348 & 349 & 347.68 \\ 4 & 1310 - 1860 & 148 & 123 & 122.92 \\ 5 & 1860 - 2480 & 52 & 51 & 50.61 \\ 6 & 2480 - 3240 & 29 & 24 & 24.13 \\ 7 & 3240 - 4500 & 17 & 14 & 13.95 \\ 8 & 4500 - 8000 & neglected & 7.7 & 7.70 \\ 9 & 8000 - 12000 & dust band & 1.3 & 1.30 \\ 10 & 12000 - 24000 & 15 $\mu m $ $CO_2$ band & 0.9 & 0.50 \\ 11 & 24000 - 60000 & neglected & 0 & 0.07 \\ 12 & 60000 - 1000000 & neglected & 0 & 0.00 \\ \hline \end{tabular} \label{table:table2} \end{table*} \section{The Noise Model} In this work, we consider the Sun as the main source of background illumination from the environment. We model the Sun as a blackbody using Planck's blackbody radiation model, in which the spectral irradiance of the source is a function of wavelength and temperature \cite{lee}, i.e., \begin{equation}\ W(\lambda,T) = \frac{2\pi h_p c^2}{ \lambda^5} \frac{1}{(e^{\frac{h_pc}{\lambda k T}}-1)} \end{equation} where \begin{math}\ \lambda \end{math} is the wavelength, \begin{math}\ c \end{math} is the speed of light, \begin{math}\ h_p \end{math} is Planck\textquotesingle s constant, \begin{math}\ k \end{math} is Boltzmann’s constant and \begin{math}\ T \end{math} is the average temperature of the Sun\textquotesingle s surface.
Following the approach of Spencer \cite{spencer}, we developed a simple yet fairly accurate analytical model that describes the irradiance that falls within the spectral range of the receiver optical filter \begin{equation}\ E_{det} \approx 2.15039 \times 10^{-5} d_f t_f \int_{\lambda_a}^{\lambda_b} W(\lambda,T) d\lambda \end{equation} where \begin{math}\ d_f \end{math} and \begin{math}\ t_f \end{math} are coefficients that represent the \emph{day of the year} and \emph{time of day}, respectively. For this work, we assume the maximum value for\begin{math}\ t_f \end{math}, which is 1.0. We validated our model by evaluating (9) for different wavelength intervals and compared the results with observed solar fluxes (W/$\m^2$) taken from the 1985 Wehrli Standard Extraterrestrial Solar Irradiance Spectrum and a Blackbody (BB) Sun model from NASA \cite{nasa},\cite{nasamodel}. The BB Sun produces an integrated flux over these intervals of 1359 W/$\m^2$ at 1 astronomical unit (AU), compared to 1355 W/$\m^2$ for the observed Sun. Our model produces an integrated flux of 1354 W/$\m^2$ over the same wavelength intervals, as shown in Table III. The background noise power detected by the optical receiver physical area can be computed as \cite{barry}: \begin{equation}\ P_{bg} =E_{det} T_s A_{\pd} n^2 \end{equation} where \begin{math}\ T_s \end{math} is the filter transmission coefficient and\begin{math}\ n \end{math} is the internal refractive index of the concentrator at the receiver\textquotesingle s optical front-end. The total input noise variance \begin{math}\ N \end{math} is the sum of the variances of the shot noise and thermal noise \cite{barry}: \begin{equation}\ N = \sigma^2_{\shot} + \sigma^2_{\thermal} \end{equation} We neglect the effects of intersymbol interference (ISI) based on the assumption that the inter-satellite link between any two adjacent satellites in a leader-follower or cluster formation is not susceptible to multipath propagation.
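A minimal numerical sketch of (8)-(10) follows; it assumes $T = 5778$ K for the Sun, $d_f = t_f = 1$, and the 0.4020 nm filter band at 656.28 nm used later in Table IV. It estimates the continuum (pre-Fraunhofer-reduction) background; inside a deep line the detected power would be roughly an order of magnitude lower:

```python
import math

# Planck blackbody spectral irradiance, eq. (8), and detected solar
# background power, eqs. (9)-(10). T = 5778 K is assumed; the factor
# 2.15039e-5 is the geometric scaling from eq. (9).

H_P = 6.626e-34   # Planck's constant (J s)
C   = 3.0e8       # speed of light (m/s)
K_B = 1.381e-23   # Boltzmann's constant (J/K)

def planck_w(lam: float, T: float = 5778.0) -> float:
    """Spectral irradiance W(lambda, T) in W/m^3 at the Sun's surface."""
    return (2 * math.pi * H_P * C**2 / lam**5
            / (math.exp(H_P * C / (lam * K_B * T)) - 1))

def band_irradiance(lam_a: float, lam_b: float, steps: int = 1000) -> float:
    """E_det: irradiance at the detector within [lam_a, lam_b], eq. (9),
    with d_f = t_f = 1, via simple midpoint-rule integration."""
    dlam = (lam_b - lam_a) / steps
    total = sum(planck_w(lam_a + (i + 0.5) * dlam) for i in range(steps)) * dlam
    return 2.15039e-5 * total

E_det = band_irradiance(656.0798e-9, 656.4818e-9)   # 0.4020 nm filter band
P_bg = E_det * 1.0 * 7.84e-4 * 1.5**2               # eq. (10): Ts * A_pd * n^2
print(E_det)   # in-band continuum irradiance, ~0.6 W/m^2
print(P_bg)    # detected background power, ~1e-3 W
```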
The shot noise variance is given by \cite{lee} \begin{equation}\ \sigma^2_{\shot} = 2q\gamma(P_r + I_2 P_{bg})B \end{equation} where \begin{math}\ q \end{math} is the electronic charge, \begin{math}\ B \end{math} is the equivalent noise bandwidth, \begin{math}\ \gamma \end{math} represents the photodetector responsivity, and \begin{math}\ I_2 \end{math} is the noise bandwidth factor for a rectangular transmitter pulse. Following the analysis in \cite{barry}, the thermal noise variance can be expressed by: \begin{equation}\ \sigma^2_{\thermal} = \frac{8\pi k T_{\A}}{G} \eta A_{\pd} I_2 B^2 + \frac{16\pi^2 k T_{\A} \Gamma}{g_m} \eta^2 A^2_{\pd} I_3 B^3 \end{equation} where \begin{math}\ k \end{math} is Boltzmann’s constant, \begin{math}\ T_{\A} \end{math} is the absolute temperature, \begin{math}\ G \end{math} is the open-loop voltage gain,\begin{math}\ \eta \end{math} is the fixed capacitance of photodetector per unit area, \begin{math}\ \Gamma \end{math} is the FET channel noise factor, \begin{math}\ g_m \end{math} is the FET transconductance and \begin{math}\ I_3 \end{math} is the noise bandwidth factor for a full raised-cosine pulse shape \cite{barry}. Finally, the electrical SNR at the receiver, which is a key metric for measuring the quality of the communication link, can be determined by \begin{equation}\ \SNR =\frac{S}{N}= \frac{(\gamma P_r)^2}{\sigma^2_{\shot} + \sigma^2_{\thermal}} \end{equation} \section{CHARACTERISTICS OF THE VLC MODULATED SIGNAL} A key difference between VLC and RF communications is in the way data is encoded or conveyed. While data can be encoded in the amplitude or phase of an RF signal, signal intensity is the primary parameter used for conveying information in VLC systems \cite{pathak, tsonev}. The implication is that phase and amplitude modulation techniques cannot be applied in VLC; rather the data has to be encoded in the varying intensity of the emitting light pulses \cite{tsonev}. 
At the receiver side, direct detection, in which the signal is recovered from changes in the instantaneous power of the transmitted signal, is the dominant approach \cite{medina}. Thus, IM/DD schemes are the main modulation/demodulation methods used in VLC systems. A further attribute of an IM/DD system is that the modulating signal must be both {\em real valued} and {\em unipolar} \cite{tsonev}. This distinctive feature of VLC, as an IM/DD system, has profound consequences on the type of modulation scheme to use. In other words, many full-fledged modulation schemes used in RF communications are inapplicable in VLC systems. Additionally, unlike RF communication systems, the modulation scheme for a VLC system is generally required to support dimming and flicker mitigation \cite{pathak}. Dimming is particularly important for applications where illumination is not a primary requirement, as it can be used as a technique for conserving energy and increasing battery life. Nevertheless, dimming should not result in degradation of the communication performance. Besides dimming, an additional requirement for VLC modulation schemes is resistance to flickering. Flickering is the human-perceivable fluctuation in the brightness of light; it is usually caused by long runs of 0s or 1s in the data sequence, which reduce the rate at which the light intensity changes \cite{pathak}. Flickering was shown in \cite{berman} to be a likely cause of adverse physiological changes in humans. However, for a space-based application, flickering may not be an issue. \section{LINK BUDGET DESIGN} Unlike RF communication links, not much work has been done in the formulation and analysis of link budgets for visible light links between CubeSats. The closest work in the literature is the seminal work in \cite{popescu}, which examined the power budgets for inter-satellite links between CubeSat radios. However, the link budget parameters for an RF link differ from those of a VLC link.
While the propagation path loss of an RF link is dependent on the radio signal frequency, the path loss for LOS optical links is assumed to be independent of wavelength. Following from (3), (7), (11) and (14), the SNR per bit can be expressed as \cite{ghassemlooy},\cite{sklar}: \begin{equation}\ \SNR =\frac{E_b}{N_o}= \frac{[\gamma H(0) P_t ]^2}{N} \frac{B}{R} \end{equation} where \begin{math}\ B \end{math} is the bandwidth in Hz over which noise is measured, \begin{math}\ R \end{math} represents the desired bit-rate to be supported by the link in bits per second (bps), and \begin{math} \frac{E_b}{N_o} \end{math} is the bit-energy per noise-spectral-density. Note also that \begin{math} N = N_o B \end{math} \cite{sklar}, where \begin{math} N_o \end{math} is the maximum single-sided noise power spectral density in W/$\Hz$, and it is generally assumed to be uniform. Equation (15) can be expressed on a logarithmic dB scale, which is a more appropriate form for the analysis of the link power budget \begin{equation}\ \SNR (\dB) = 10 \log_{10} \left( \frac{[\gamma H(0) P_t ]^2}{N} \frac{B}{R} \right) \end{equation} \begin{flalign} \SNR (\dB) &= 10 \log_{10} \gamma^2 + 10 \log_{10} H(0)^2 + 10 \log_{10} (P_t)^2 + && \\\nonumber & 10 \log_{10} B -10 \log_{10} N - 10 \log_{10}R && \end{flalign} From (17), it is possible to estimate the minimum transmitter power required to achieve a targeted SNR. To ensure a resilient link, the link budget usually includes other terms to account for additional losses, as well as a link margin. \section{PERFORMANCE EVALUATION AND RESULTS} For our system model, we consider two 1U CubeSats in direct LOS and in a leader-follower configuration. We assume that the satellites are deployed in nearly circular low Earth orbits and that the distance between the CubeSats is fixed. We used the numerical values in Table IV for the simulation of our analytical model.
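As an example of using (15)-(17) in a budget calculation, the sketch below inverts (15) to estimate the minimum transmitter power for a hypothetical 12 dB SNR target; the channel gain and noise variance values are placeholders of a plausible order for a 0.5 km link, not figures taken from the paper:

```python
import math

# Minimum optical transmit power from the link budget, inverting
# eq. (15): P_t = sqrt(SNR * N * R / B) / (gamma * H0).
# H0 and N are placeholder values; the 12 dB target is hypothetical.

def min_tx_power(snr_db: float, H0: float, N: float,
                 gamma: float, R: float, B: float) -> float:
    snr = 10 ** (snr_db / 10)                    # dB -> linear
    return math.sqrt(snr * N * R / B) / (gamma * H0)

P_t = min_tx_power(snr_db=12.0,   # hypothetical target SNR
                   H0=1e-8,       # placeholder channel DC gain
                   N=5e-18,       # placeholder total noise variance
                   gamma=0.51,    # detector responsivity (A/W)
                   R=0.5e6,       # bit rate (bps), here equal to B
                   B=0.5e6)       # noise bandwidth (Hz)
print(P_t)   # required optical transmit power in watts, ~1.7 W
```

In an actual budget, margin terms and additional losses would be added on the dB scale of (17) before solving for the transmit power.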
The optical filter at the receiver\textquotesingle s front-end is tuned to the deep Fraunhofer line at 656.2808 nm wavelength with a line width of 0.4020 nm. We assumed a concentrator radius of 2.0 cm and a PIN photodiode with an active physical area of 7.84 $\cm^2$ (Hamamatsu Si Photodiode S3584). \begin{table}[ht] \caption{Simulation Model Parameter Assumptions} \centering \begin{tabular}{l c} \hline\hline Parameter\ & Value\ \\ \hline Semi-angle at Half Power, \begin{math}\ \Phi_{\frac{1}{2}} \end{math} & \begin{math}\ 30^o \end{math} \\ LED Peak Wavelength, \begin{math}\ \lambda_{\peak} \end{math} & 656.2808 nm \\ Concentrator FoV Semi-angle, \begin{math}\ \psi_c \end{math} & \begin{math}\ 35^o \end{math}\\ Filter Transmission Coefficient, \begin{math}\ T_o \end{math} & 1.0 \\ Incidence Angle, \begin{math}\ \varphi \end{math} & \begin{math}\ 30^o \end{math} \\ Irradiance Angle, \begin{math}\ \psi \end{math} & \begin{math}\ 15^o\end{math} \\ Detector Responsivity, \begin{math}\ \gamma \end{math} & \begin{math}\ 0.51 \end{math}\\ Refractive Index of Lens, \begin{math}\ n \end{math} & 1.5 \\ Radius of Concentrator, \begin{math}\ R \end{math} & 2.0 cm \\ Detector Active Area, \begin{math}\ A_{\pd} \end{math} & 7.84\begin{math}\ \cm^2 \end{math} \\ Desired Electrical Bandwidth, \begin{math}\ B \end{math} & 0.5 MHz \\ Optical Filter Bandwidth, \begin{math}\ \Delta \lambda \end{math} & 0.4020 nm \\ Optical Filter Lower Limit, \begin{math}\ \lambda_1 \end{math} & 656.0798 nm \\ Optical Filter Upper Limit, \begin{math}\ \lambda_2 \end{math} & 656.4818 nm \\ Open Loop Voltage Gain, \begin{math}\ G \end{math} & 10 \\ FET Transconductance, \begin{math}\ g_m \end{math} & 30 mS \\ FET Channel Noise Factor, \begin{math}\ \Gamma \end{math} & 0.82 or 1.5 \\ Capacitance of Photodetector per Unit Area, \begin{math}\ \eta \end{math} & 38 pF\begin{math}\ /\cm^2 \end{math} \\ Link Distance, \begin{math}\ d \end{math} & 0.5 km \\ Noise Bandwidth Factor for White Noise, \begin{math}\ I_2 \end{math} &
0.562 \\ Noise Bandwidth Factor for \begin{math}\ f^2 \end{math} noise, \begin{math}\ I_3 \end{math} & 0.0868 \\ Boltzmann Constant, \begin{math}\ k \end{math} & \begin{math}\ 1.3806\times10^{-23}\J/\K\end{math}\\ Absolute Temperature, \begin{math}\ T_{\A} \end{math} & 300 K \\ \hline\hline \end{tabular} \label{table:table1} \end{table} In this section, we investigated the impact of solar background illumination on the SNR at the receiver, and then conducted a comparative evaluation of the ISC link performance for five different IM/DD schemes, namely, on-off keying non-return-to-zero (OOK-NRZ), pulse position modulation (PPM), digital pulse interval modulation (DPIM), DC biased optical OFDM (DCO-OFDM) and asymmetrically clipped optical OFDM (ACO-OFDM). These schemes were considered based on the individual merits they bring to small satellites. These include bandwidth and power efficiency, reduced implementation complexity, as well as robustness to ISI. We also assessed the performance of the VLC link with and without the use of forward error correction (FEC). Table V is a summary of methods for determining the BER and bandwidth requirements for the above modulation schemes. The BER has been expressed as a function of SNR to simplify the analysis and allow a quantitative comparison of the different schemes. 
\begin{table}[!ht] \caption{Methods for BER and Bandwidth Requirements \cite{trisno,mesleh,elganimi,hayes,amanorphd}} \centering \begin{tabular}{c c c } \hline\hline Modulation\ & BER\ & Bandwidth \ \\ Scheme & & Requirement \\[0.5ex] \hline \\[0.1ex] OOK-NRZ & \begin{math}\ \frac{1}{2} \erfc(\frac{1}{2\sqrt{2}} \sqrt{\SNR}) \end{math} & \begin{math}\ R_b \end{math} \\ \\[0.1ex] L-PPM & \begin{math}\ \frac{1}{2} \erfc(\frac{1}{2\sqrt{2}} \sqrt{\SNR \frac{L}{2} \log_2 L })\end{math} & \begin{math}\ R_b \frac{L}{\log_2 L} \end{math} \\ \\[0.1ex] DPIM & \begin{math}\ \frac{1}{2} \erfc(\frac{1}{2\sqrt{2}} \sqrt{\SNR \frac{L_{avg}}{2} \log_2 L })\end{math} & \begin{math}\ R_b \frac{L_{avg}}{\log_2 L} \end{math} \\ \\[0.1ex] DCO-OFDM & \begin{math}\ \frac{\sqrt{M} -1}{\sqrt{M} \log_2 \sqrt{M}} \erfc(\sqrt{\frac{3 \SNR}{2(M-1)} }) \end{math} & \begin{math}\ \frac{R_b(N+N_g)}{(\frac{N}{2}-1)\log_2M} \end{math} \\ \\[0.1ex] ACO-OFDM & \begin{math}\ \frac{\sqrt{M} -1}{\sqrt{M} \log_2 \sqrt{M}} \erfc(\sqrt{\frac{3 \SNR}{2(M-1)} }) \end{math} & \begin{math}\ \frac{R_b(N+N_g)}{(\frac{N}{4}-1)\log_2M} \end{math} \\ \\[0.1ex] \hline\hline \end{tabular} \label{table:table3} \end{table} \subsection{Impact of Solar Background on SNR} For a transmitted optical power of 2W, Fig. 6 shows the SNR for different values of the concentrator FoV. Clearly, the impact of the concentrator FoV on the SNR is apparent. A \begin{math}\ 10^o \end{math} reduction in the FoV semi-angle translates into an improvement in the SNR of about 3.5 dB. It is important, following the analysis in \cite{barry}, that the concentrator FoV semi-angle, \begin{math}\ \psi_c \end{math}, be greater than the incidence angle, \begin{math}\ \varphi \end{math}, in order to achieve a concentrator gain \begin{math}\ g(\psi) \end{math} of \begin{math}\ n^2 \end{math} or greater. Fig. 7 shows that doubling the link distance results in a drastic degradation of SNR.
It is also evident from (3), (4) and (14) that doubling the transmitted optical power or halving the active detector area has a profound impact on SNR. However, for a given small satellite configuration, the SMaP constraints limit the extent to which \emph{power} and \emph{detector area} can be increased. Using the minimum desired bandwidth for a given application will also yield an improved SNR. Ultimately, the task of the communication system designer is to trade off these critical parameters in order to achieve the desired performance. \begin{figure}[!t] \centering \includegraphics[width=3.3in, height=5.6cm]{fig_6.JPG} \caption{Impact of Solar Background on SNR for Link Distance of 0.5 km, Transmitted Optical Power Output of 2W and Electrical Bandwidth of 0.5 MHz} \label{fig 6} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.3in, height=5.6cm]{fig_7.JPG} \caption{SNR Plot for Link Distance of 1.0 km, Transmitted Optical Power Output of 2W, and Electrical Bandwidth of 0.5 MHz for Different Concentrator FoVs} \label{fig 7} \end{figure} \subsection{Analysis of Different IM/DD Schemes} For a targeted BER of 10\textsuperscript{-6}, Table VI depicts the required transmitted optical power for the different IM/DD modulation schemes. The results show that for higher levels of L (i.e., L \begin{math} \geq 4 \end{math}), PPM requires less optical power than OOK-NRZ to achieve the same error performance. Similarly, for L=8, DPIM requires 65 percent less optical power than OOK. Moreover, unlike PPM, DPIM requires no symbol synchronization, thus yielding a less complicated receiver structure. Compared to multi-carrier modulation schemes such as DCO-OFDM and ACO-OFDM, DPIM (L=8) requires about 67 percent less power than ACO-OFDM (M=16) for the same BER. As illustrated in Fig. 8, at low to moderate data-rates, PPM and DPIM exhibit better error properties than DCO-OFDM and ACO-OFDM.
However, at very high data-rates, the multi-carrier schemes (i.e., DCO-OFDM and ACO-OFDM) are more resilient to noise and offer superior capabilities in terms of throughput. \begin{figure}[!ht] \centering \includegraphics[width=3.4in, height=5.6cm]{fig_8.JPG} \caption{BER Plot for Link Distance of 0.5 km, Transmitted Optical Power Output of 4W, and Electrical Bandwidth of 2.5 MHz} \label{fig 8} \end{figure} The disadvantage of these schemes is the associated high transmitted optical power. Clearly, for low to moderate data-rates, the higher power requirement of DCO-OFDM puts it at a relative disadvantage to the power-efficient modulation schemes required for small satellites, where mass and volume of onboard electronics are restricted. The simplified receiver structure of DPIM, coupled with its relatively good power efficiency and bandwidth requirements, makes it an attractive choice for ISC for small satellites at moderate data-rates. For very high data-rates, the multi-carrier schemes can be considered at the expense of high transmitted optical power.
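The closed-form expressions of Table V are easy to evaluate directly. The sketch below (Python; function names are ours) is consistent with the SNR targets of Table VI, e.g. 4-PPM and $M=4$ optical OFDM both reaching a BER near $10^{-6}$ at 13.54 dB while OOK-NRZ needs about 6 dB more.

```python
import math

def ber_ook(snr):
    """OOK-NRZ (Table V): 0.5 * erfc( sqrt(SNR) / (2*sqrt(2)) )."""
    return 0.5 * math.erfc(math.sqrt(snr) / (2 * math.sqrt(2)))

def ber_ppm(snr, L):
    """L-PPM: 0.5 * erfc( sqrt(SNR * (L/2) * log2(L)) / (2*sqrt(2)) )."""
    return 0.5 * math.erfc(math.sqrt(snr * (L / 2) * math.log2(L))
                           / (2 * math.sqrt(2)))

def ber_oofdm(snr, M):
    """DCO-/ACO-OFDM with square M-QAM subcarriers (Table V)."""
    rm = math.sqrt(M)
    return ((rm - 1) / (rm * math.log2(rm))
            * math.erfc(math.sqrt(3 * snr / (2 * (M - 1)))))

snr_lin = 10 ** (13.54 / 10)   # 13.54 dB, the L=4 / M=4 entry of Table VI
# ber_ppm(snr_lin, 4) and ber_oofdm(snr_lin, 4) are ~1e-6; ber_ook(snr_lin) is not.
```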
\begin{table}[!t] \caption{Required Transmitted Optical Power for Link Distance of 0.5 km, Assumed Bandwidth of 0.5 MHz and Targeted BER of 10\textsuperscript{-6}} \centering \begin{tabular}{c c c c } \hline\hline Modulation \ & \ & SNR\ & TX Optical Power \ \\ Scheme & & (dB) & @ 5 Percent Background \\[0.5ex] \hline OOK-NRZ & & 19.56 & 2.2 W \\ \hline L-PPM & L=2 & 19.56 & 2.2 W \\ & L=4 & 13.54 & 1.1 W \\ & L=8 & 8.77 & 0.6 W \\ \hline DPIM & L=2 & 18.59 & 1.97 W \\ & L=4 & 14.12 & 1.18 W \\ & L=8 & 10.40 & 0.77 W \\ \hline DCO-OFDM & M=4 & 13.54 & 1.1 W + DC Bias \\ & M=16 & 20.42 & 2.4 W + DC Bias \\ & M=64 & 26.56 & 5.0 W + DC Bias \\ \hline ACO-OFDM & M=4 & 13.54 & 1.1 W \\ & M=16 & 20.42 & 2.4 W \\ & M=64 & 26.56 & 5.0 W \\ \hline\hline \end{tabular} \label{table:table2} \end{table} \subsection{Uncoded versus Coded Performance Evaluation} In this sub-section, we examined the impact of forward error correction (FEC) on the performance of the VLC link. We used the uncoded transmission characteristic of a 16-QAM constellation, which can be applied in ACO-OFDM and DCO-OFDM schemes. The simulation was carried out in MATLAB for a range of bit-energy per noise-spectral-density Eb/No (i.e., SNR per bit) values from 7 dB to 12 dB. For the coded case, we used a Reed-Solomon encoder and decoder pair consisting of an RS(15,11) code.
The code has two-symbol error correction capability and a generator polynomial given by: \begin{equation} \label{eq:1} \begin{aligned} g(X)= X^4 + (\alpha^3 + \alpha^2 + 1)X^3 + (\alpha^3 + \alpha^2)X^2 \\ + (\alpha^3)X + (\alpha^2 + \alpha + 1), \end{aligned} \end{equation} where \begin{math} \alpha \end{math} is a root of the primitive polynomial \begin{math} p(X) \end{math} in GF(16): \begin{equation} p(X)= X^4 + X + 1 \end{equation} Thus, the generator polynomial $ g(X) $ can be expressed as: \begin{equation} g(X)= X^4 + 13X^3 + 12X^2 + 8X + 7 \end{equation} We further examined the impact of redundancy on the BER by comparing the performance of an RS(15,13) encoder/decoder pair against the above encoder/decoder pair and the uncoded modulation case. The generator polynomial of the RS(15,13) code is given by \begin{equation} g_2(X)= X^2 + (\alpha^2 + \alpha)X + \alpha^3 \end{equation} where \begin{math} \alpha \end{math} is a root of the primitive polynomial \begin{math} p(X) \end{math} in GF(16), \\ i.e., \begin{equation} g_2(X)= X^2 + 6X + 8 \end{equation} \begin{figure}[!ht] \centering \includegraphics[width=3.4in, height=5.6cm]{fig_9.JPG} \caption{Bit Error Rate versus Eb/No (i.e., SNR per bit) } \label{fig 9} \end{figure} Fig. 9 depicts the simulation results of the uncoded and coded cases. For an Eb/No of 12 dB, the error probability of RS(15, 11) improved by a factor of more than 100 compared to the uncoded case. Clearly, the added redundancy resulted in faster signaling, less energy per channel symbol, and hence more errors out of the demodulator, which the decoder was able to correct. It is evident from the profile of RS(15, 13) that the higher the redundancy (i.e., the lower the code rate), the better the bit-error performance. However, the implementation complexity of an RS encoder rises with increasing redundancy. Additionally, there must be a corresponding expansion in bandwidth to accommodate the redundant bits for any real-time communications application.
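The generator polynomials above can be reproduced mechanically from GF(16) arithmetic. The sketch below (helper names are ours) builds $g(X)=\prod_{m=1}^{2t}(X+\alpha^m)$ with $\alpha=2$ a root of $p(X)=X^4+X+1$; addition in GF($2^4$) is bitwise XOR.

```python
# GF(16) arithmetic with primitive polynomial p(X) = X^4 + X + 1, used to
# reproduce the Reed-Solomon generator polynomials quoted above.

def gf_mul(a, b):
    """Multiply two GF(2^4) elements (integers 0..15) modulo X^4 + X + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0b10011      # reduce by X^4 + X + 1
        b >>= 1
    return r

def rs_generator(n_roots):
    """g(X) = (X + alpha)(X + alpha^2)...(X + alpha^n_roots), alpha = 2.
    Coefficients returned highest degree first; '+' in GF(2^4) is XOR."""
    g, root, alpha = [1], 2, 2
    for _ in range(n_roots):
        nxt = [0] * (len(g) + 1)
        for i, coeff in enumerate(g):
            nxt[i] ^= coeff                     # X * g(X)
            nxt[i + 1] ^= gf_mul(coeff, root)   # root * g(X)
        g, root = nxt, gf_mul(root, alpha)
    return g

print(rs_generator(4))   # RS(15,11): X^4 + 13X^3 + 12X^2 + 8X + 7
print(rs_generator(2))   # RS(15,13): X^2 + 6X + 8
```

The two printed coefficient lists match the quoted $g(X)$ and $g_2(X)$ exactly.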
\section{CONCLUSIONS} A major limitation of small satellites is their restricted form factor, which constrains the size, mass, and power of the electronics that can be carried onboard. For a given small satellite configuration, these restrictions limit the range and throughput that can actually be achieved across the ISL. In this paper, we proposed an LED-based VLC system for ISC that addresses the SMaP constraints of small satellites and discussed essential physical layer requirements and design concepts for the realization of high performance visible light ISLs. The proposed system can be deployed within a constellation of small satellites, and it is capable of establishing reliable communication links in the presence of steady background solar radiation through the use of natural low-background noise channels. The major contributions of this work include the following: \begin{enumerate} \item This work is the first to provide a quantitative assessment of solar background illumination on ISLs between small satellites. \item We investigated the use of natural low background noise channels (i.e., Fraunhofer lines) for VLC systems of medium scope using hypothetical LEDs whose peak wavelength coincides with the chosen Fraunhofer lines. \item We developed an analytical model of the ISL and evaluated the impact of solar background illumination on its performance for both uncoded and coded IM/DD schemes. \item The work discussed the design and formulation of the power link budget for VLC ISLs. \item We discussed physical layer design issues and attempted to provide recommendations on key issues to be considered in the development of a VLC-based communication subsystem for multiple small satellite systems.
\end{enumerate} Using a transmitted optical power of 4W and DPIM modulation, a receiver bandwidth of 3.5 MHz is needed to achieve a data rate of 2.0 Mbits/s over a moderate link distance of 0.5 km at an uncoded BER of 10\textsuperscript{-6}, which is the performance requirement for a stable communication link. This data rate is sufficient to support navigation, command and health data as well as science data. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} Magellanic Cloud PNe (MCPNe) are the nearest large population of extragalactic PNe, where the low reddening and favourable viewing angle allow for population-wide studies not possible in the Milky Way. The most important of these is the extragalactic standard candle [O~III] PNLF, in which MCPNe serve as the benchmark population (Ciardullo et al. 2010). However, until a detailed multi-wavelength analysis of MCPNe to remove contaminating non-PNe is performed, their acclaimed benchmark status could be considered uncertain. The deep $YJK_s$ photometry of the VMC survey will allow for a thorough appraisal of the majority of MCPNe for the first time. \firstsection \section{A multi-wavelength study of 102 LMC PNe and non-PNe} Miszalski et al. (2011b) performed a multi-wavelength analysis of 102 objects using VMC data (Cioni et al. 2011) and a host of mid-infrared and optical observations. A total of 46/67 or 69\%\footnote{This includes RP227 mistakenly omitted from Table 6 of Miszalski et al. (2011b).} of objects in our sample from Reid \& Parker (2006b) were reclassified as non-PNe (HII regions, field stars, emission line stars, symbiotic stars and a young stellar object). These results are numerically dominated by the complex 30 Doradus region, and some reclassifications may be explained by the relatively low survey resolution of Reid \& Parker (2006a). Miszalski et al. (2011b) developed a range of diagnostic diagrams (e.g. Fig. \ref{fig:fig1}) to guide future analysis of larger samples of MCPNe with the growing VMC dataset (Miszalski et al. in preparation). The inclusion of OGLE-III $I$-band (Udalski et al. 2008a,b) and VMC $K_s$ lightcurves was a powerful means to firmly identify contaminating variable stars (e.g. the Mira RP793 with $\Delta K_s\sim0.4$ mag and a 503.4 day period from OGLE-III).
None of the variables identified had the multi-wavelength properties of typical PNe, which underscores the importance of a clean PN population before searching for binary central stars. The large fraction of giant companions claimed by Shaw et al. (2009) is instead explained by the numerous field stars and emission line stars found by Miszalski et al. (2011b). \begin{figure} \centering \includegraphics[scale=0.7]{fig.ps} \caption{\emph{(left)} The VMC `ant diagram' showing the position of PNe and various non-PNe in the sample analysed by Miszalski et al. (2011b). \emph{(right)} Colour-composites of four bona-fide PNe made from stacked $K_s$ (red), $J$ (green) and $Y$ (blue) VMC images.} \label{fig:fig1} \end{figure} \firstsection \section{Towards an improved census of LMC PNe} The Reid \& Parker (2010) [O~III] PNLF of LMC PNe is a substantial advance over previous work; however, further effort is required to improve this important PNLF. Firstly, only a small fraction of the sample has been cleaned of non-PNe, and secondly, there remain many undiscovered PNe to be added. Miszalski et al. (2011a) found 2--3 new PNe in a $63\times63$ arcmin$^2$ region not listed by Reid \& Parker (2006b) and concluded that perhaps 50--75 new PNe remain to be found in the inner $5\times5$ deg$^2$ of the LMC. Reid \& Parker (these proceedings) report on new PNe found outside this zone using optical selection criteria. Optical searches may not find all PNe, as demonstrated by MNC4 (Miszalski et al. 2011a), whose [WC9] or later central star is too cool to ionise [O~III]. With more VMC data we will build NIR PNLFs to complement the [O~III] PNLF.
\section{Introduction} In the theory of Fuchsian groups, one of the important old problems is the ``\hbox{discreteness} problem": given two elements in ${\rm PSL}(2, \R)$, whether or not the group generated by them is discrete. For an elaborate account of this problem, see Gilman \cite{gilman}. \hbox{Algorithmic} solutions to this problem were given by Rosenberger \cite{r}, Gilman and Maskit \cite{gm}, and Gilman \cite{gilman}. The J\o{}rgensen inequality \cite{j} is one of the major results related to this problem. J\o{}rgensen \cite{j} obtained an inequality that the generators of a discrete, non-elementary, two-generator subgroup of ${\rm SL}(2, \C)$ necessarily satisfy. Wada \cite{w} used this inequality to provide an effective algorithm that helps the software OPTi to test discreteness of subgroups, as well as to draw deformation spaces of discrete groups. A two-generator discrete subgroup of isometries of the hyperbolic space is called an \emph{extreme group} if it satisfies equality in the J\o{}rgensen inequality. Investigation of extreme groups in ${\rm SL}(2, \C)$ was initiated by J\o{}rgensen and Kiikka \cite{jk}. Following that, there have been many investigations to classify the two-generator extreme groups in ${\rm SL}(2, \C)$; see, e.g., \cite{gm, gr}. In a series of papers, Sato et al. \cite{sato0}--\cite{sato5} have investigated this problem in great detail and provided a conjectural list of the parabolic-type extreme groups. Callahan \cite{cal} has provided a counterexample to that conjecture. Callahan has also classified all non-compact arithmetic extreme groups that were not in the list of Sato et al. The problem of classifying parabolic-type J\o{}rgensen groups in ${\rm SL}(2, \C)$ is still open. Recently, Vesnin and Masley \cite{vm} have investigated extremality of other J\o{}rgensen type inequalities in ${\rm SL}(2, \C)$. The problem of classifying extreme J\o{}rgensen groups in higher dimensions has not seen much investigation to date.
The aim of this paper is to address this problem for J\o{}rgensen type inequalities in $\S$, where $\H$ is the division ring of the real quaternions and $\S$ is the group of $2 \times 2$ quaternionic matrices with Dieudonn\'e determinant $1$. It is well-known that $\S$ acts on the five dimensional real hyperbolic space $\h^5$ by M\"obius transformations (or linear fractional transformations); for a proof see \cite{kg}. The isometries of $\h^5$ are classified by their fixed points as elliptic, parabolic and hyperbolic (or loxodromic). This classification can be characterized algebraically by conjugacy invariants of the isometries, see \cite{p, ps, kg, cao} for more details. The J\o{}rgensen inequality was generalized to higher dimensions by Martin \cite{martin}, who formulated it by identifying the hyperbolic space with the upper half space or the unit ball in $\R^{n+1}$. Hence, in Martin's generalization, the isometries are $(n+1)\times(n+1)$ real matrices. Generalizing the approach of using $2 \times 2$ real and complex matrices in low dimensions, Ahlfors \cite{ahlfors} used Clifford algebras to investigate higher dimensional M\"obius groups. In this approach, the isometry group of the hyperbolic $n$-space can be identified with a group of $2 \times 2$ matrices over the Clifford numbers; see Ahlfors \cite{ahlfors} and Waterman \cite{waterman} for more details. Using the Clifford algebraic formalism, a generalization of the J\o{}rgensen inequality was obtained by Waterman \cite{waterman}. However, it may be difficult to deal with the Clifford matrices due to the complicated multiplicative structure of the Clifford numbers. Using the real quaternions there is an intermediate approach between the complex numbers and the Clifford numbers, which should provide the closest generalization of the low dimensional results for four and five dimensional M\"obius groups. The Clifford group that acts by isometries on the hyperbolic $4$-space is a proper subgroup of $\S$.
So, Waterman's result restricts to this case. Kellerhals \cite{kel2} has used this quaternionic Clifford group to investigate collars in $\h^4$. Recently, Tan et al. \cite{t} have obtained a generalization of the classical Delambre-Gauss formula for right-angled hexagons in hyperbolic $4$-space using the quaternionic Clifford group of Ahlfors and Waterman. The Clifford group that acts on $\h^5$, however, is not a subgroup of $\S$. In fact, the group $\S$ is not in the list of the Clifford groups of Ahlfors and Waterman. However, following the approach of Waterman, it is not hard to formulate J\o{}rgensen type inequalities for pairs of isometries in $\S$. Kellerhals \cite{kel} derived a J\o{}rgensen inequality for two-generator discrete subgroups of $\S$ where one of the generators is either unipotent parabolic or hyperbolic. Using similar methods to those of Waterman, we give here slightly generalized versions of the J\o{}rgensen inequalities in $\S$ when one of the generators is either semisimple or fixes a point on the boundary, see \thmref{jss} and \thmref{jg} in \secref{jse}. As corollaries we derive the formulations of Kellerhals and Waterman in the quaternionic set up, see \corref{kele} and \corref{wat} respectively. We also formulate a J\o{}rgensen type inequality for strictly hyperbolic elements that is very close to the original formulation of J\o{}rgensen, see \corref{jh}. We recall here that a strictly hyperbolic element, or a stretch, is conjugate to a diagonal matrix that has real diagonal entries different from $0, ~1$ or $-1$. We also give as corollaries two weaker versions of the inequality when one generator is semisimple. We investigate the extremality of these J\o{}rgensen inequalities in \secref{sext}. We extend the results of J\o{}rgensen and Kiikka to the quaternionic set up, see \thmref{ext1}, \corref{extc1} and \thmref{extt2}.
We also obtain necessary conditions for a two-generator subgroup of $\S$ to be extremal, see \corref{extp1} and \corref{extc2}. \section{Preliminaries} \subsection{The Quaternions} Let $\H$ denote the division ring of quaternions. Recall that every element of $\H$ is of the form $a_{0}+a_{1}i+a_{2}j+a_{3}k$, where $a_{0},a_{1},a_{2},a_{3}\in \R$, and $i,j,k$ satisfy the relations $i^{2}=j^{2}=k^{2}=-1$, $ij=-ji=k$, $jk=-kj=i$, $ki=-ik=j$, and $ijk=-1$. Any $a\in {\H}$ can be written as $a=a_{0}+a_{1}i+a_{2}j+a_{3}k=(a_{0}+a_{1}i)+(a_{2}+a_{3}i)j=z+wj$, where $z=a_{0}+a_{1}i,~ w=a_{2}+a_{3}i\in \C$. For $a\in \H$ with $a=a_{0}+a_{1}i+a_{2}j+a_{3}k$, we define $\Re(a)=a_{0}$, the real part of $a$, and $\Im(a)=a_{1}i+a_{2}j+a_{3}k$, the imaginary part of $a$. Also, define the conjugate of $a$ as $\overline {a}= \Re(a)-\Im(a)$. If $\Re(a)=0$, then we call $a$ a vector in $\H$, which we can identify with ${\R}^{3}$. The norm of $a$ is $|a|=\sqrt{a_0^{2}+a_1^{2}+a_2^{2}+a_3^{2}}$. \subsubsection{{Useful Properties}} We note the following properties of the quaternions that will be of use later: \begin{enumerate} \item {For $x\in{\R},~ a\in{\H}$, we have $ax=xa$.} \item {For $a\in{\C}$, $aj=j\overline{a}$.} \item {For $a,b\in{\H}$, $|ab|=|a||b|=|ba|$, and if $a\neq 0$, then $a^{-1}=\frac{\overline{a}}{|a|^2}$.} \end{enumerate} Two quaternions $a,b$ are said to be \emph{similar} if there exists a non-zero quaternion $c$ such that $b=c^{-1}ac$, and we write this as $a\backsim b$. Obviously `$\backsim$' is an equivalence relation on ${\H}$; we denote by $[a]$ the class of $a$. It is easy to verify that $a \backsim b$ if and only if $\Re(a)=\Re(b)$ and $|a|=|b|$. Equivalently, $a \backsim b$ if and only if $\Re(a)=\Re(b)$ and $|\Im (a)|=|\Im (b)|$. Thus the similarity class of every quaternion $a$ contains a pair of complex conjugates with absolute value $|a|$ and real part equal to $\Re(a)$.
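The similarity criterion is easy to check numerically. In the sketch below (helper names are ours), quaternions are represented as 4-tuples and $b=c^{-1}ac$ is verified to share the real part and norm of $a$, with the complex representative $re^{i\theta}$ recovered from $r=|a|$ and $\cos\theta=\Re(a)/|a|$.

```python
import math
import random

# Quaternions as 4-tuples (a0, a1, a2, a3) = a0 + a1*i + a2*j + a3*k.

def qmul(p, q):
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qnorm(q):
    return math.sqrt(sum(x * x for x in q))

def qinv(q):
    # a^{-1} = conj(a) / |a|^2
    n2 = sum(x * x for x in q)
    return (q[0] / n2, -q[1] / n2, -q[2] / n2, -q[3] / n2)

random.seed(0)
a = (1.0, 2.0, -1.0, 0.5)                       # a = 1 + 2i - j + 0.5k
c = tuple(random.uniform(-1, 1) for _ in range(4))
b = qmul(qmul(qinv(c), a), c)                   # b = c^{-1} a c is similar to a

# Similar quaternions share real part and norm; the class of a thus contains
# the complex representative r e^{i*theta} with:
r, theta = qnorm(a), math.acos(a[0] / qnorm(a))
```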
Let $a$ be similar to $re^{i \theta}$, $\theta \in [-\pi, \pi]$. In most cases, we will adopt the convention of calling $|\theta|$ the \emph{argument} of $a$ and will denote it by $\arg(a)$. According to this convention, $\arg( a )\in [0, \pi]$, unless specified otherwise. \medskip Suppose a quaternion $q$ is similar to a complex number $z=re^{ i \alpha}$. Since $\Re(q)=\Re(z)$ and $|q|=|z|$, it follows that $|\Im q|=|\Im z|=|r\sin \alpha|$, i.e. $|\sin \alpha|=\frac{|\Im q|}{|q|}$. \subsection{Matrices over the quaternions} Let ${\rm M{(2, \H)}}$ denote the set of all $2\times2$ matrices over the quaternions. If $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$, then we can associate to it the `quaternionic determinant' $\det(A)=|ad-aca^{-1}b|$. A matrix $A\in{\rm M{(2, \H)}}$ is invertible if and only if $\det(A)\neq0$. Also, note that for $A,B\in {\rm M{(2, \H)}}, ~ \det(AB)=\det(A)\det(B)$. Now set $$\S=\bigg\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in {\rm M}_2(\H):\det{\begin{pmatrix}a&b\\c&d\end{pmatrix}} =|ad-aca^{-1}b|=1\bigg \}.$$ The group $\S$ acts as the orientation-preserving isometry group of the hyperbolic $5$-space $\h^5$. We identify the extended quaternionic plane $\hat \H=\H \cup \infty$ with the conformal boundary $\s^4$ of the hyperbolic $5$-space. The group $\S$ acts on $\hat \H$ by M\"obius transformations: $$\begin{pmatrix}a&b\\c&d\end{pmatrix}: Z \mapsto (aZ+b)(cZ+d)^{-1}.$$ The action is extended over $\h^5$ by Poincar\'e extensions. \subsection{Classification of elements of $\S$} Every element $A$ of $\S$ has a fixed point on the closure of the hyperbolic space ${\overline \h}^5$, and this gives us the usual classification of elliptic, parabolic and hyperbolic (or loxodromic) elements in $\S$. Further, it follows from the Lefschetz fixed point theorem that every element of $\S$ has a fixed point on the conformal boundary. Up to conjugacy, we can take that fixed point to be $\infty$, and hence every element in $\S$ is conjugate to an upper-triangular matrix.
We would like to note here that an elliptic or hyperbolic element $A$ is conjugate to a matrix of the form $$\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$$ where $\lambda, \mu \in \C$. If $|\lambda|=|\mu|(=1)$, then $A$ is elliptic. Otherwise it is hyperbolic. In the hyperbolic case $|\lambda| \neq 1 \neq |\mu|$ and $|\lambda||\mu|=1$. A hyperbolic or loxodromic element will be called \emph{strictly hyperbolic} if it is conjugate to a real diagonal (non-identity) matrix. A parabolic isometry is conjugate to an element of the form $$\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}, ~ |\lambda|=1.$$ For more details of the classification and algebraic criteria to detect them, see \cite{cao, kg, p, ps}. \subsection{Conjugacy invariants} According to Foreman \cite{foreman}, the following three functions are conjugacy invariants of $\S$: For $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\S$, \begin{eqnarray*} \beta={\beta}_A&=&|d|^{2}\Re(a)+|a|^{2}\Re(d)-\Re(\overline{a}bc)-\Re(bc\overline{d})\\&=&\Re[(ad-bc)\overline{a}+(da-cb)\overline{d}],\\ \gamma={\gamma}_A&=&|a|^{2}+|d|^{2}+4\Re(a)\Re(d)-2\Re(bc)\\&=&|a|^{2}+|d|^{2}+2[\Re(a\overline{d})+\Re(ad)]-2\Re(bc)\\&=&|a+d|^{2}+2\Re(ad-bc),\\ \delta={\delta}_A&=&\Re(a)+\Re(d)=\Re(a+d) \end{eqnarray*} Parker and Short \cite{ps} defined another two quantities for each $A\in \S$ as follows: \begin{eqnarray*} \sigma={\sigma}_A &=&cac^{-1}d-cb, \quad\mbox{when } c\neq 0,\\ &=&bdb^{-1}a, \quad\mbox{when } c=0,\ b\neq 0,\\ &=&(d-a)a(d-a)^{-1}d, \quad\mbox{when } b=c=0,\ a\neq d,\\ &=&a\overline{a}, \quad\mbox{when } b=c=0,\ a=d,\\ \tau={\tau}_A&=&cac^{-1}+d, \quad\mbox{when } c\neq 0,\\ &=&bdb^{-1}+a, \quad\mbox{when } c=0,\ b\neq 0,\\ &=&(d-a)a(d-a)^{-1}+d, \quad\mbox{when } b=c=0,\ a\neq d,\\ &=&a+\overline{a}, \quad\mbox{when } b=c=0,\ a=d. \end{eqnarray*} It can be proved that in each case $|\sigma|^{2}=\alpha=1$, where $$\alpha={\alpha}_A=|a|^2|d|^2+|b|^2|c|^2-2\Re(a\overline{c}d\overline{b}).$$ We are going to show that $\sqrt{\alpha}=\det(A)=|ad-aca^{-1}b|=|\sigma|$.
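Before proving this identity, it can be checked numerically. The sketch below (helper names are ours) represents a quaternionic matrix by its four entries and compares $\sqrt{\alpha}$ with $\det(A)=|ad-aca^{-1}b|$:

```python
import math
import random

# A 2x2 quaternionic matrix is given by four quaternions (a, b, c, d),
# each quaternion a 4-tuple (x0, x1, x2, x3) = x0 + x1*i + x2*j + x3*k.

def qmul(p, q):
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def qnorm(q):
    return math.sqrt(sum(x * x for x in q))

def qinv(q):
    n2 = sum(x * x for x in q)
    return tuple(x / n2 for x in qconj(q))

def qsub(p, q):
    return tuple(x - y for x, y in zip(p, q))

def det(a, b, c, d):
    """Dieudonne determinant |ad - a c a^{-1} b|."""
    return qnorm(qsub(qmul(a, d), qmul(qmul(qmul(a, c), qinv(a)), b)))

def alpha(a, b, c, d):
    """alpha = |a|^2|d|^2 + |b|^2|c|^2 - 2 Re(a conj(c) d conj(b))."""
    cross = qmul(qmul(qmul(a, qconj(c)), d), qconj(b))[0]
    return (qnorm(a) * qnorm(d)) ** 2 + (qnorm(b) * qnorm(c)) ** 2 - 2 * cross

random.seed(1)
a, b, c, d = (tuple(random.uniform(-2, 2) for _ in range(4)) for _ in range(4))
# sqrt(alpha(a, b, c, d)) and det(a, b, c, d) agree, as the lemma below proves.
```

A simple exact check: for $a=d=1$, $b=c=j$ one has $\det(A)=|1-j^2|=2$ and $\alpha=4$.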
\begin{lemma}If $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in{\rm M{(2, \H)}}$, then $\sqrt{\alpha}=\det(A)=|ad-aca^{-1}b|=|\sigma|$. \end{lemma} \begin{proof} We observe that \begin{eqnarray*} (\det(A))^2&=&|ad-aca^{-1}b|^2=(ad-aca^{-1}b)\overline{(ad-aca^{-1}b)}\\ &=&(ad-aca^{-1}b)(\overline{d}\overline{a}-\overline{b}{\overline{a}}^{-1}\overline{c}\overline{a})\\ &=&|a|^2|d|^2+|b|^2|c|^2-ad\overline{b}{\overline{a}}^{-1}\overline{c}\overline{a}-aca^{-1}b\overline{d}\overline{a}\\ &=&|a|^2|d|^2+|b|^2|c|^2-2\Re(aca^{-1}b\overline{d}\overline{a})\\ &=&|a|^2|d|^2+|b|^2|c|^2-2\Re(c\overline{a}b\overline{d})=|a|^2|d|^2+|b|^2|c|^2-2\Re(a\overline{c}d\overline{b})=\alpha. \end{eqnarray*} This completes the proof. \end{proof} \subsection{Some Observations} It can be checked that ${\alpha}={\alpha}_A=|l_{ij}|^2=|r_{ij}|^2, 1 \leq i,j \leq 2$, where the $l_{ij}$, $r_{ij}$ are defined as follows: \begin{align*} l_{11}&=da-dbd^{-1}c & l_{12}&=bdb^{-1}a-bc \\ l_{21}&=cac^{-1}d-cb & l_{22}&=ad-aca^{-1}b\\ r_{11}&=ad-bd^{-1}cd & r_{12}&=db^{-1}ab-cb\\ r_{21}&=ac^{-1}dc-bc & r_{22}&=da-ca^{-1}ba \end{align*} \begin{theorem} \cite{kel} Let $M=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in{\rm M{(2, \H)}}$ be such that $\det(M)\neq 0$. Then $M$ is invertible with $$M^{-1}=\begin{pmatrix}{l_{11}}^{-1}d&-{l_{12}}^{-1}b\\-{l_{21}}^{-1}c&{l_{22}}^{-1}a\end{pmatrix}=\begin{pmatrix}d{r_{11}}^{-1}&-b{r_{12}}^{-1}\\-c{r_{21}}^{-1}&a{r_{22}}^{-1}\end{pmatrix}.$$ \end{theorem} \subsection{Notations} For our convenience we use the following notations: \begin{align*} d\sptilde&=l_{11}^{-1}d, & c\sptilde&=l_{21}^{-1}c,& b\sptilde&=l_{12}^{-1}b,& a\sptilde&=l_{22}^{-1}a\\ d_{\sptilde}&=dr_{11}^{-1},& c_{\sptilde}&=cr_{21}^{-1},& b_{\sptilde}&=br_{12}^{-1},& a_{\sptilde}&=ar_{22}^{-1} \end{align*}\\ Kellerhals \cite{kel} has proved some interesting properties of these quantities, given by the following lemma: \begin{lemma} \cite{kel} Let $M=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in {\rm M{(2, \H)}}$ be invertible. Then we have the
following properties: \begin{enumerate} \item $ad_{\sptilde}-bc_{\sptilde}=1=da_{\sptilde}-cb_{\sptilde},~ {d\sptilde}a-{b\sptilde}c=1={a\sptilde}d-{c\sptilde}b$. \item $a{d\sptilde}-b{c\sptilde}=1=d{a\sptilde}-c{b\sptilde}, ~{d_{\sptilde}}a-{b_{\sptilde}}c=1={a_{\sptilde}}d-{c_{\sptilde}}b$. \item $a{b\sptilde}=b{a\sptilde},c{d\sptilde}=d{c\sptilde},~{a\sptilde}c={c\sptilde}a,{b\sptilde}d={d\sptilde}b$. \item $ab_{\sptilde}=ba_{\sptilde},cd_{\sptilde}=dc_{\sptilde},~a_{\sptilde}c=c_{\sptilde}a,b_{\sptilde}d= d_{\sptilde}b$. \end{enumerate} \end{lemma} \section{J\o{}rgensen inequality for $\S$}\label{jse} The following theorem gives a J\o{}rgensen inequality for a two-generator subgroup of $\S$ when one of the generators is semisimple. \begin{theorem}\label{jss}Let $ S=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and $T=\begin{pmatrix}{\lambda}&0\\0&{\mu}\end{pmatrix}$, where $\lambda$ is not similar to $\mu$, generate a discrete non-elementary subgroup of $\S$. Then $$\{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\}(1+|bc|)\geq 1.$$ \end{theorem} \begin{proof} Let us suppose that $K_0=\{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\}(1+|bc|)<1.$\\ Consider the Shimizu-Leutbecher sequence defined inductively by $$S_0=\begin{pmatrix}a_0&b_0\\c_0&d_0\end{pmatrix}=S=\begin{pmatrix}a&b\\c&d\end{pmatrix}, ~S_{n+1}=\begin{pmatrix}a_{n+1}&b_{n+1}\\c_{n+1}&d_{n+1}\end{pmatrix}=S_nTS_n^{-1}.$$ Now, \begin{eqnarray}\label{sls}S_{n+1}&=&S_nTS_n^{-1}=\begin{pmatrix}a_n&b_n\\c_n&d_n\end{pmatrix}\begin{pmatrix}\lambda&0\\0&\mu\end{pmatrix} \begin{pmatrix}d\sptilde_n&-b\sptilde_n\\-c\sptilde_n&a\sptilde_n\end{pmatrix}\\ &=&\begin{pmatrix}a_n\lambda&b_n\mu\\c_n\lambda&d_n\mu\end{pmatrix}\begin{pmatrix}d\sptilde_n&-b\sptilde_n\\-c\sptilde_n&a\sptilde_n\end{pmatrix}\\ &=&\begin{pmatrix}a_n\lambda d\sptilde_n-b_n\mu c\sptilde_n&-a_n\lambda b\sptilde_n+b_n\mu a\sptilde_n\\ c_n\lambda d\sptilde_n-d_n\mu c\sptilde_n&-c_n\lambda b\sptilde_n+d_n\mu a\sptilde_n\end{pmatrix}\\
&=&\begin{pmatrix}a_{n+1}&b_{n+1}\\c_{n+1}&d_{n+1}\end{pmatrix} \end{eqnarray}\\ So,\begin{align*} a_{n+1}&=a_n\lambda d\sptilde_n-b_n\mu c\sptilde_n,& b_{n+1}&=-a_n\lambda b\sptilde_n+b_n\mu a\sptilde_n\\ c_{n+1}&=c_n\lambda d\sptilde_n-d_n\mu c\sptilde_n,& d_{n+1}&=-c_n\lambda b\sptilde_n+d_n\mu a\sptilde_n \end{align*}\\ Now, we have \begin{eqnarray*} |b_{n+1}||c_{n+1}|&=&|(-a_n\lambda b\sptilde_n+b_n\mu a\sptilde_n)(c_n\lambda d\sptilde_n-d_n\mu c\sptilde_n)|\\ &=&|a_nb_nc_nd_n||\lambda-a_n^{-1}b_n\mu a\sptilde_n{b\sptilde_n}^{-1}||\lambda-c_n^{-1}d_n\mu c\sptilde_n{d\sptilde_n}^{-1}| \end{eqnarray*} By an easy computation, we see that \begin{eqnarray*} |\lambda-a_n^{-1}b_n\mu a\sptilde_n{b\sptilde_n}^{-1}|&=&|\Re\lambda+\Im \lambda-\Re\mu-a_n^{-1}b_n(\Im\mu)a\sptilde_n{b\sptilde_n}^{-1}|, \hbox{ since }\; a_nb\sptilde_n=b_na\sptilde_n\\ &=&|(\Re\lambda -\Re\mu)+\Im \lambda -a_n^{-1}b_n(\Im\mu)a\sptilde_n{b\sptilde_n}^{-1}|\\ &=&\sqrt{(\Re\lambda -\Re\mu)^2 +|\Im \lambda -a_n^{-1}b_n(\Im\mu)a\sptilde_n{b\sptilde_n}^{-1}|^2}\\ &\leq&\sqrt{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2}.\end{eqnarray*} Similarly, we may deduce that $|\lambda-c_n^{-1}d_n\mu c\sptilde_n{d\sptilde_n}^{-1}|\leq\sqrt{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2}$.\\ Therefore, \begin{equation} |b_{n+1}||c_{n+1}|\leq |a_nb_nc_nd_n|\{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\}\end{equation} Since $|a_nd_n|\leq 1+|b_nc_n|$, this implies \begin{equation}\label{ine1} |b_{n+1}||c_{n+1}| \leq \{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\}(1+|b_nc_n|)|b_nc_n|. 
\end{equation} Since $K_0=\{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\}(1+|bc|)<1$, by induction we obtain $|b_{n+1}c_{n+1}|\leq K_0^n|bc|$, so that $b_nc\sptilde_n\rightarrow 0$ as $n\rightarrow \infty$, and hence $a_nd\sptilde_n=1+b_nc\sptilde_n\rightarrow 1$ as $n\rightarrow\infty$.\\ Since $|a_{n+1}|=|a_n\lambda d\sptilde_n -b_n\mu c\sptilde_n|,\;|d_{n+1}|=|-c_n\lambda b\sptilde_n +d_n\mu a\sptilde_n|$, we have\\ $|\lambda||a_nd\sptilde_n|-|\mu||b_nc\sptilde_n|\leq|a_{n+1}|\leq|\lambda||a_nd\sptilde_n|+|\mu||b_nc\sptilde_n| \Rightarrow |a_{n+1}|\rightarrow |\lambda|,\;as\; n\rightarrow\infty$.\\ Similarly, we have $|d_{n+1}| \rightarrow |\mu|,\;\hbox{ as }\; n\rightarrow\infty$.\\ Again, we have \begin{eqnarray*}|b_{n+1}|&=&|-a_n\lambda b\sptilde_n +b_n\mu a\sptilde_n|=|a_nb\sptilde_n||\lambda-a_n^{-1}b_n\mu a\sptilde_n{b\sptilde_n}^{-1}|\\&\leq&|a_nb_n|\sqrt{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2}\\&\leq&{K_0}|a_n||b_n|\rightarrow {K_0}|b_n|,\;\hbox{ since }\; |a_n|\rightarrow 1.\\&\leq&{K_0^n}|b| \rightarrow 0,\;\hbox{ since }\; K_0<1. \end{eqnarray*} Thus $|b_n|\rightarrow 0\;\hbox{ as }\; n\rightarrow \infty$, i.e. $b_n\rightarrow 0 \;\hbox{ as }\; n\rightarrow \infty$.\\ Similarly, we may show that $c_n \rightarrow 0 \;\hbox{ as }\; n \rightarrow \infty$.\\ Thus the sequence $\{S_n\}$ has a convergent subsequence, and since the subgroup $\langle S,T\rangle$ is discrete, we arrive at a contradiction. This proves the theorem. \end{proof} \begin{cor}\label{kele} Let $ S=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and $T=\begin{pmatrix}{\lambda}&0\\0&{\mu}\end{pmatrix}\in\S$, where $\lambda$ is not similar to $\mu$, generate a discrete non-elementary subgroup $\langle S, T \rangle$ of $\S$. Then $$2(\cosh{\tau}-\cos(\alpha+\beta))(1+|bc|)\geq 1,$$ where $\alpha=arg(\lambda),~\beta=arg(\mu)$, $\tau=2 \log |\lambda|$. \end{cor} \begin{proof}Without loss of generality, assume $|\lambda|=r\geq 1$.
Observe that, \begin{eqnarray*} & &(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\\ &=&(r \cos\alpha -\frac{1}{r} \cos\beta)^2 +(r|\sin\alpha|+\frac{1}{r}|\sin\beta|)^2\\ &=&r^2+\frac{1}{r^2}-2\big(\cos\alpha \cos\beta -|\sin\alpha||\sin\beta|\big)\\ &=&2(\cosh \tau-\cos(\alpha+\beta)), \hbox{ where } ~ r=e^{\frac{\tau}{2}}, ~ \tau \geq 0. \end{eqnarray*} This completes the proof. \end{proof} \begin{remark} \label{jre1} Kellerhals \cite[Proposition 3]{kel2} proved the above result assuming $T$ hyperbolic, i.e. when $\tau \neq 0$. However, it follows from the above that Kellerhals's result carries over to the elliptic case as well, i.e. when $\tau=0$. We have also avoided normalizing the constant term $(1+|bc|)$ in the inequality, so as to keep it sharp. \thmref{jss} also extends Waterman's Theorem 9 in \cite{waterman} when restricted to the quaternionic setup. Note that $\S$ is not a Clifford group and hence Theorem 9 of Waterman does not restrict to $\S$. For example, the element $$T=\begin{pmatrix} e^{i \theta} & 0 \\ 0 & e^{i \phi} \end{pmatrix},$$ does not belong to the Clifford group ${\rm SL}_2(C_2)$, see \cite[p. 95]{waterman}, but it belongs to the group $\S$. This class of elements is also covered by \thmref{jss}. \end{remark} The next result generalizes J\o{}rgensen's inequality in $\S$ for strictly \hbox{hyperbolic} elements with some given conditions. The formulation resembles the original inequality by J\o{}rgensen. \begin{cor}\label{jh} Let $A,B\in \S$ be such that both $A$ and the commutator $[A, B]$ are strictly hyperbolic.
If $\langle A,B\rangle$ is a non-elementary discrete subgroup of $\S$, then $$|\delta^2_A-4|+|\delta_{ABA^{-1}B^{-1}}-2|\geq 1.$$ \end{cor} \begin{proof} Let $A=\begin{pmatrix}k&0\\0&k^{-1}\end{pmatrix}$, where $k>1$ and $B=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ with $c\neq 0$.\\ So, $\delta_A=k+k^{-1}\Rightarrow |\delta^2_A-4|=|(k+k^{-1})^2-4|=|k-k^{-1}|^2$.\\ Now,\begin{eqnarray*} AB&=&\begin{pmatrix}k&0\\0&k^{-1}\end{pmatrix}\begin{pmatrix}a&b\\c&d\end{pmatrix}=\begin{pmatrix}ka&kb\\k^{-1}c&k^{-1}d\end{pmatrix}\\ ABA^{-1}B^{-1}&=&\begin{pmatrix}ka&kb\\k^{-1}c&k^{-1}d\end{pmatrix}\begin{pmatrix}k^{-1}&0\\0&k\end{pmatrix}\begin{pmatrix}c^{-1}d\sigma^{-1}c&-a^{-1}b\sigma^{-1}cac^{-1}\\-\sigma^{-1}c&\sigma^{-1}cac^{-1}\end{pmatrix}\\ &=&\begin{pmatrix}a&k^2b\\k^{-2}c&d\end{pmatrix}\begin{pmatrix}c^{-1}d\sigma^{-1}c&-a^{-1}b\sigma^{-1}cac^{-1}\\-\sigma^{-1}c&\sigma^{-1}cac^{-1}\end{pmatrix}\\ &=&\begin{pmatrix}ac^{-1}d\sigma^{-1}c-k^2b\sigma^{-1}c&(k^2-1)b\sigma^{-1}cac^{-1}\\(k^{-2}-1)d\sigma^{-1}c&d\sigma^{-1}cac^{-1}-k^{-2}ca^{-1}b\sigma^{-1}cac^{-1}\end{pmatrix}. \end{eqnarray*} So, we have, \begin{eqnarray*}\delta_{ABA^{-1}B^{-1}}&=&\Re(ac^{-1}d\sigma^{-1}c-k^2b\sigma^{-1}c)+\Re(d\sigma^{-1}cac^{-1}-k^{-2}ca^{-1}b\sigma^{-1}cac^{-1})\\ &=&\Re(ac^{-1}d\sigma^{-1}c)-k^2\Re(b\sigma^{-1}c)+\Re(d\sigma^{-1}cac^{-1})-k^{-2}\Re(ca^{-1}b\sigma^{-1}cac^{-1})\\ &=&2\Re(cac^{-1}d\overline{\sigma})-(k^2+k^{-2})\Re(b\overline{\sigma}c)\\ &=&2(1+\Re(cb\overline{\sigma}))-(k^2+k^{-2})\Re(b\overline{\sigma}c), \hbox{ since }\; \sigma=cac^{-1}d-cb.\\ &=&2-(k^2+k^{-2}-2)\Re(b\overline{\sigma}c)\\ &=&2-(k-k^{-1})^2\Re(b\overline{\sigma}c). \end{eqnarray*} This implies that $|\delta_{ABA^{-1}B^{-1}}-2|=|k-k^{-1}|^2|\Re(b\overline{\sigma}c)|$. 
Since $ABA^{-1}B^{-1}$ is strictly hyperbolic, we have $$b\overline{\sigma}c=b\overline{d}c\overline{a}-|bc|^2\Rightarrow \Re(b\overline{\sigma}c)=\Re(b\overline{d}c\overline{a})-|bc|^2=\Re(a\overline{c}d\overline{b})-|bc|^2.$$ Also, we have $b\overline{\sigma}cac^{-1}=0\Rightarrow b\overline{(cac^{-1}d-cb)}cac^{-1}=0\Rightarrow |b|^2(|ad|^2-\overline{b}a\overline{c}d)=0\Rightarrow |ad|^2=\overline{b}a\overline{c}d \hbox{ since }\; bc\neq 0$, for otherwise $\langle A,B \rangle $ becomes elementary. This shows that $b\overline{\sigma}c=|ad|^2-|bc|^2=\Re(b\overline{\sigma}c)$. Thus, $$|\delta^2_A-4|+|\delta_{ABA^{-1}B^{-1}}-2|=|k-k^{-1}|^2(1+|bc|).$$ Now the result follows from \thmref{jss}. \end{proof} The following two corollaries give weaker versions of \thmref{jss}. \begin{cor}\label{jss2} Suppose $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$ and $T=\begin{pmatrix} \lambda& 0\\ 0& \mu\end{pmatrix}$ generate a non-elementary discrete subgroup in $\S$. Then we have $$\beta (T) L^k \geq 1,$$ where $$\beta (T) = \displaystyle{\sup_{e,f \neq 0,\infty} |(\lambda -e \mu e^{-1})(\lambda - f \mu f^{-1})|},$$ $$L = 1+|\mu| \;\;\text{and} \;\;k= [1+|bc|] +1,$$ $[~.~]$ denotes the greatest integer function. \end{cor} \begin{proof} Since $L>1$, $k>2$, note that $1+|bc| \leq k \leq L^k$. Let $$K=(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2.$$ Using conjugation if necessary, suppose without loss of generality that $\lambda$, $\mu$ are complex numbers. Note that, both $\beta(T)$ and $K$ are invariant if we conjugate the matrix $T$ in the above theorems to a diagonal matrix in $\S$ over the complex numbers. Note that $$|\lambda-j \mu j^{-1}|= \sqrt{ (\Re \lambda - \Re \mu)^2 + (|\Im \lambda| + |\Im \mu|)^2},$$ hence $K \leq \beta(T)$. Further note that a diagonal element $T \in \S$ can be conjugated to a diagonal matrix $T' \in {\rm SL}(2, \C)$ and the conjugation is done using a diagonal element in $\S$.
So, given $\langle S, T \rangle$ as in the above results, if we conjugate it to $\langle DSD^{-1}, DTD^{-1} \rangle$, where $D$ is a diagonal matrix in $\S$, then the conjugation makes $DTD^{-1}$ a diagonal matrix over $\C$. Further, it is easy to check that if $DSD^{-1}=\begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}$, then $|b' c'|=|bc|$. Thus conjugation of $\langle S, T \rangle$ by a diagonal matrix in $\S$ does not change the left hand sides of the above inequalities. Since $\langle S, T \rangle$ is discrete, non-elementary, $K(1+|bc|)\geq 1$. Hence $$\beta(T)L^k \geq K(1+|bc|) \geq 1.$$ \end{proof} \begin{cor}\label{jssc2} Suppose $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$ and $T=\begin{pmatrix} \lambda& 0\\ 0& \mu\end{pmatrix}$ generate a non-elementary discrete subgroup in $\S$. Then we have $$\beta (T)(1+|bc|)\geq 1,$$ where $\beta (T) = \displaystyle{\sup_{e,f \neq 0,\infty} |(\lambda -e \mu e^{-1})(\lambda - f \mu f^{-1})|}$. \end{cor} \medskip The next theorem gives a J\o{}rgensen inequality for a two-generator subgroup where one of the generators fixes $\infty$. \begin{theorem}\label{jg} Suppose $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$ and $T=\begin{pmatrix} \lambda& \eta \\ 0& \mu \end{pmatrix}$, where $\Re \lambda= \Re \mu \neq 0$, $|\lambda|\leq 1\leq |\mu|$, generate a non-elementary discrete subgroup in $\S$. Suppose \\ \hbox{$S(\lambda , \mu) = |\mu| (|\Im \lambda| + | \Im \mu| ) \leq \frac{1}{4\sqrt 2}$}. Then we have $$|c| \sqrt{|\tau_0| |t_0|} \geq \frac{1+\sqrt{1-4\sqrt 2 S(\lambda ,\mu)}}{2},$$ where $\tau_0 = \lambda(- c^{-1}d) +\eta +(c^{-1}d) \mu\;\hbox{and, }t_0 = \lambda(ac^{-1}) +\eta - (ac^{-1}) \mu$. \end{theorem} \begin{proof} Let $\alpha=\arg \lambda,~\beta = \arg \mu$. Denote $r = |\lambda|$, where we see that $r^2 \cos \alpha= \cos \beta$.
Consider the Shimizu-Leutbecher sequence $$S_0 =S, ~S_{n+1}=S_n T {S_n}^{-1}, \; \hbox{ where }S_n =\begin{pmatrix}a_n &b_n\\ c_n &d_n\end{pmatrix}.$$ Now, \begin{eqnarray*} S_{n+1} &=& S_n T {S_n}^{-1}\\ &=& \begin{pmatrix} a_n&b_n\\c_n &d_n\end{pmatrix} \begin{pmatrix} \lambda & \eta\\0& \mu\end{pmatrix} \begin{pmatrix}d\sptilde_n&-b\sptilde_n\\-c\sptilde_n&a\sptilde_n\end{pmatrix}\\ &=&\begin{pmatrix}a_n\lambda d\sptilde_n- a_n \eta c\sptilde_n-b_n\mu c\sptilde_n&-a_n\lambda b\sptilde_n+ a_n \eta a\sptilde_n +b_n\mu a\sptilde_n\\ c_n\lambda d\sptilde_n - c_n \eta c\sptilde_n - d_n \mu c\sptilde_n&-c_n\lambda b\sptilde_n+ c_n \eta a\sptilde_n+d_n\mu a\sptilde_n \end{pmatrix} \end{eqnarray*} Define $\tau_n, t_n$ by \begin{eqnarray} \tau_n &=& {\lambda} (-{c_n}^{-1} d_n) + \eta + ({c_n}^{-1} d_n)\mu\\ t_n &=& \lambda(a_n {c_n}^{-1}) + \eta-( a_n {c_n}^{-1}) \mu. \end{eqnarray} Since $\Re \lambda=\Re\mu$ by assumption, using this we obtain \begin{eqnarray} \tau_n &=& \Im{\lambda} (-{c_n}^{-1} d_n) + \eta + ({c_n}^{-1} d_n)\Im \mu\\ t_n &=& \Im \lambda(a_n {c_n}^{-1}) + \eta-( a_n {c_n}^{-1}) \Im \mu. \end{eqnarray} We see that \begin{eqnarray*} c_{n+1} &=& c_n\lambda d\sptilde_n-c_n \eta c\sptilde_n - d_n\mu c\sptilde_n\\ &=& c_n (\Im \lambda (d\sptilde_n{c\sptilde_n}^{-1}) - \eta - ({c_n}^{-1}d_n) \Im \mu)c\sptilde_n\\ &=& -c_n \{\Im \lambda(-{c_n}^{-1}d_n) + \eta + ( c_n^{-1}d_n)\Im \mu \} c\sptilde_n\\ &=& - c_n \tau_n c\sptilde_n \\ &\Rightarrow& |c_{n+1}| = |\tau_n c_n| |c_n|.
\end{eqnarray*} \begin{eqnarray*} d_{n+1} &=& -c_n\lambda b\sptilde_n+ c_n \eta a\sptilde_n+d_n\mu a\sptilde_n \\ &=& \Re \lambda (d_n a\sptilde_n - c_n b\sptilde_n) + c_n \{ \Im \lambda (- b\sptilde_n {a\sptilde_n}^{-1}) + \eta +({c_n}^{-1} d_n) \Im \mu\}a\sptilde_n\\ &=& \Re \lambda + c_n \{ \Im \lambda ({a_n}^{-1}{c\sptilde_n}^{-1} - {c_n}^{-1}d_n) +\eta + ({c_n}^{-1} d_n) \Im \mu\}a\sptilde_n \\&=& r \cos \alpha + c_n \tau_n a\sptilde_n + c_n \Im \lambda~~{ a_n}^{-1}{c\sptilde_n}^{-1} a\sptilde_n \end{eqnarray*} By similar computations, we have $$ a_{n+1} = r \cos \alpha - a_n \tau_n c\sptilde_n + {c\sptilde_n}^{-1} \Im \mu ~c\sptilde_n.$$ Using the above equalities, we see that \begin{eqnarray*} \tau_{n+1} &=& \Im \lambda (- {c_{n+1}^{-1}}d_{n+1}) + \eta + ({c_{n+1}}^{-1} d_{n+1}) \Im \mu \\ &=& \Im \lambda\{{c\sptilde_n}^{-1} {\tau_n}^{-1} {c_n}^{-1}(r \cos \alpha + c_n \tau_n a\sptilde_n + c_n \Im \lambda~ {a_n}^{-1}{c\sptilde_n}^{-1} a\sptilde_n)\} +\\ &{}& \eta - \{{c\sptilde_n}^{-1} {\tau_n}^{-1} {c_n}^{-1}(r \cos \alpha + c_n \tau_n a\sptilde_n + c_n \Im \lambda~ {a_n}^{-1}{c\sptilde_n}^{-1} a\sptilde_n)\} \Im \mu\\ &=& t_n + r \cos \alpha \;\Im \lambda~ {c\sptilde_n}^{-1}{\tau_n}^{-1}{c_n}^{-1} + \Im \lambda~ ~{c\sptilde_n}^{-1}{\tau_n}^{-1} \Im \lambda~ ~{a_n}^{-1}{c\sptilde_n}^{-1}a\sptilde_n - \\&{}&r \cos \alpha ~{c\sptilde_n}^{-1}{\tau_n}^{-1}{c_n}^{-1} \Im \mu - {c\sptilde_n}^{-1}{\tau_n}^{-1} \Im \lambda~~ {a_n}^{-1} {c\sptilde_n}^{-1}{a\sptilde_n} \Im \mu \\ \Rightarrow |\tau_{n+1}| &\leq& |t_n| + \frac{(r^2 |\sin \alpha| + |\sin \beta|)(|\cos \alpha| + |\sin \alpha|)}{|\tau_n {c_n}^2|}, \hbox{ using, } |a\sptilde_n|=|a_n||l_{22}^{-1}|,\\ & & |l_{22}^{-1}|=\det S_n =1, \hbox{ and, }~ |\Im \lambda|=|\lambda||\sin \alpha|,~ |\Im \mu|=|\mu||\sin \beta|\\ \\ \Rightarrow |\tau_{n+1}c_{n+1}| &\leq& |\tau_n c_n||t_n c_n| + \sqrt 2 S(\lambda , \mu), ~\hbox{ since, }|\cos \alpha|+|\sin \alpha|\leq \sqrt 2, ~ \text{where, }\end{eqnarray*} \begin{eqnarray*}
S(\lambda , \mu) &=& (|\sin \alpha| + |\mu|^2 |\sin \beta|)\\ & = & (\frac{|\Im \lambda|}{|\lambda|} + |\mu | |\Im \mu|)\\ &=& |\mu|(|\Im \lambda| + |\Im \mu|). \end{eqnarray*} Similarly we also have, $$|t_{n+1} c_{n+1}| \leq |\tau_n c_n||t_n c_n| + \sqrt 2 S(\lambda , \mu)$$ In a similar way we can have \begin{eqnarray*} |d_{n+1}| &\leq& |\tau_n c_n| |a_n| + 2r\\ |a_{n+1}| &\leq& |\tau_n c_n| |a_n| + \frac{2}{r}\\ \end{eqnarray*} Also, $|b_{n+1}| \leq |a_n|^2 + r S(\lambda , \mu) |a_n| |b_n|$\\ Consider the sequence $$ x_0 = |c| \sqrt {|\tau_0| |t_0|}, ~ x_{n+1} = x_n^2 + \sqrt 2 S(\lambda , \mu).$$ If $ 0 \leq x_0 < \frac{1+\sqrt{1-4 \sqrt 2 S(\lambda , \mu)}}{2} \leq 1$, then $\{x_n\}$ is a monotonically decreasing sequence of real numbers and is bounded above by $ \frac{1+\sqrt{1-4 \sqrt 2 S(\lambda , \mu)}}{2}$ and converges to $\frac{1-\sqrt{1-4 \sqrt 2 S(\lambda , \mu)}}{2}$. Hence $$|t_n c_n| < \frac{1+\sqrt{1-4 \sqrt 2 S(\lambda , \mu)}}{2}\leq 1, \hbox{ and }$$ $$|\tau_n c_n|<\frac{1+\sqrt{1-4 \sqrt 2 S(\lambda , \mu)}}{2}\leq 1.$$ On a subsequence, $|t_n c_n|$ and $|\tau_n c_n|$ converge to values at most $\frac{1-\sqrt{1-4 \sqrt 2 S(\lambda , \mu)}}{2}$. Hence on a subsequence $|a_n|$, $|b_n|$, $|c_n|$, $|d_n|$ converge. In particular, $|c_n| \to 0$. Note that $c_n \neq 0$ unless $c=0$, and $c$ cannot be zero as the group $\langle S, T \rangle$ is non-elementary by assumption. This proves the theorem. \end{proof} \begin{cor} Suppose $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$ and $T=\begin{pmatrix} \lambda& \eta \\ 0& \mu \end{pmatrix}$, where $\Re \lambda= \Re \mu\neq 0 $, $\eta \neq 0$, $|\lambda|\leq 1\leq |\mu|$, \hbox{generate} a non-elementary discrete subgroup in $\S$.
Suppose, $$S'(\lambda , \mu) = \frac {|\mu|}{|\eta|^2}(|\Im \lambda| + | \Im \mu| ).$$ Then we have $$|c| \sqrt{|\tau'_0| |t'_0|} \geq \frac{1+\sqrt{1-4 \sqrt 2 |\eta|^2 S'(\lambda ,\mu)}}{2|\eta|},$$ where $\tau'_0 = \lambda(- c^{-1}d){\eta}^{-1}+ 1 +(c^{-1}d)\mu {\eta}^{-1}\;\hbox{and, }t'_0 = \lambda(ac^{-1}){\eta}^{-1} +1 - (ac^{-1})\mu {\eta}^{-1}$. \end{cor} \begin{proof} If $\eta \neq 0$, we write $\tau_0= \tau'_0 \eta$ and $t_0=t'_0 \eta$. Then the result follows from the inequality in \thmref{jg}. \end{proof} \begin{cor}\label{rez} Suppose $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$ and $T=\begin{pmatrix} \lambda& \eta \\ 0& \mu \end{pmatrix}$, where $\Re \lambda=\Re \mu=0$, $|\lambda|\leq 1\leq |\mu|$, generate a non-elementary discrete subgroup in $\S$. Suppose \hbox{$S(\lambda , \mu) = |\mu| (|\Im \lambda| + | \Im \mu| ) \leq \frac{1}{4}$}. Then we have $$|c| \sqrt{|\tau_0| |t_0|} \geq \frac{1+\sqrt{1- 4S(\lambda ,\mu)}}{2},$$ where $\tau_0 = \lambda(- c^{-1}d) +\eta +(c^{-1}d) \mu\;\hbox{and, }t_0 = \lambda(ac^{-1}) +\eta - (ac^{-1}) \mu$. \end{cor} \begin{proof} In this case, we proceed as in the proof of the previous theorem. The only difference from the previous proof is essentially the following bound: \begin{eqnarray*} |\tau_{n+1}| &\leq& |t_n| + \frac{(r^2 |\sin \alpha| + |\sin \beta|) |\sin \alpha|}{|\tau_n {c_n}^2|}, \hbox{ using, } |a_n \sptilde|=|a_n||l_{22}^{-1}|,\\ & & |l_{22}^{-1}|=\det S_n =1, \hbox{ and, }~ |\Im \lambda|=|\lambda||\sin \alpha|,~ |\Im \mu|=|\mu||\sin \beta|\\ \\ \Rightarrow |\tau_{n+1}c_{n+1}| &\leq& |\tau_n c_n||t_n c_n| + S(\lambda , \mu), ~ \text{where, }\end{eqnarray*} \begin{eqnarray*} S(\lambda , \mu) &=& (|\sin \alpha| + |\mu|^2 |\sin \beta|)\\ & = & (\frac{|\Im \lambda|}{|\lambda|} + |\mu | |\Im \mu|)\\ &=& |\mu|(|\Im \lambda| + |\Im \mu|). \end{eqnarray*} Noting this bound, the rest is similar. 
\end{proof} Given any parabolic transformation in $\S$, it is conjugate to a transformation of the form $$T=\begin{pmatrix} \lambda& 1\\ 0& \lambda\end{pmatrix}, ~ |\lambda|=1,$$ and moreover, one can choose $\Re(\lambda)=0$ up to conjugacy. Thus \corref{rez} gives Waterman's result \cite[Theorem 8]{waterman} in $\S$. \begin{cor}\label{wat} If $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix},~T=\begin{pmatrix} \lambda& 1\\ 0& \lambda\end{pmatrix}$, $|\lambda|=1$, generate a non-elementary discrete subgroup in $\S$ with $T$ parabolic fixing $\infty$, then $$|c| \sqrt{|T(ac^{-1}) - ac^{-1}|}\sqrt{|T(-c^{-1}d) - (-c^{-1}d)|} \geq \frac{1+\sqrt{1-8 |\Im \lambda|}}{2}.$$ \end{cor} \begin{proof} Note that $T(a c^{-1})=(\lambda (a c^{-1})+ 1)\lambda^{-1}$ and $T(-c^{-1} d)=(\lambda (-c^{-1} d) + 1)\lambda^{-1}$. Now, $T(ac^{-1})-(ac^{-1})=(\lambda (ac^{-1})+ 1 -(ac^{-1}) \lambda) \lambda^{-1}=t_0 \lambda^{-1}$. Similarly, $ T(-c^{-1}d) - (-c^{-1}d)=\tau_0 \lambda^{-1}$. Since $|\lambda|=1$, the result follows. \end{proof} Recently, Erlandsson and Zakeri \cite{ez} have proved a more geometric version of \thmref{jg}. Their geometric inequality does not depend on any quantity like $S(\lambda, \mu)$. Also, in the asymptotic case, it covers some of the two-generator groups whose discreteness remains inconclusive by \corref{wat}. However, the inequality of Erlandsson and Zakeri does not involve the algebraic coefficients of the matrices. In that sense, the theorems in this paper give a more explicit algorithm involving the matrix coefficients to test discreteness. \medskip Using a similar argument as in the proof of \thmref{jg}, we can also prove the following theorem, which gives a J\o{}rgensen inequality for a two-generator subgroup where one of the generators has a fixed point $0$.
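The convergence arguments above rest on the scalar recursion $x_{n+1}=x_n^2+\sqrt 2\, S(\lambda,\mu)$. As a numerical sanity check (a sketch only, not part of any proof; the constant $0.2$ below is an arbitrary stand-in for $\sqrt 2\, S(\lambda,\mu)$ satisfying the hypothesis), one can verify that the iterates decrease monotonically to the smaller fixed point $\frac{1-\sqrt{1-4\sqrt 2\, S(\lambda,\mu)}}{2}$ of $x\mapsto x^2+\sqrt 2\, S(\lambda,\mu)$ whenever $x_0$ lies between the two fixed points:

```python
import math

# c stands in for sqrt(2) * S(lambda, mu); 0.2 is an arbitrary illustrative
# choice satisfying the hypothesis 4 * c <= 1.
c = 0.2
p_minus = (1 - math.sqrt(1 - 4 * c)) / 2   # smaller (attracting) fixed point
p_plus  = (1 + math.sqrt(1 - 4 * c)) / 2   # larger (repelling) fixed point

x = 0.6                                    # any x_0 strictly between the fixed points
for _ in range(200):
    assert p_minus <= x <= p_plus          # iterates stay trapped in [p_minus, p_plus] ...
    x_new = x * x + c
    assert x_new <= x                      # ... and decrease monotonically
    x = x_new

print(abs(x - p_minus) < 1e-9)             # True: the limit is the smaller fixed point
```

Starting exactly at the larger fixed point instead yields a constant sequence, which is the extremal situation.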
\begin{theorem}\label{jlt} Suppose $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$ and $T=\begin{pmatrix} \lambda& 0 \\ \eta& \mu \end{pmatrix}$, where $\Re \lambda= \Re \mu =\kappa$, $|\lambda|\leq 1\leq |\mu|$, generate a non-elementary discrete subgroup in $\S$. Suppose, \hbox{$ S(\lambda , \mu) = |\mu| (|\Im \lambda| + | \Im \mu| ) \leq \epsilon$}. Then $$|c| \sqrt{|\tau_0| |t_0|} \geq \frac{1+\sqrt{1-\epsilon^{-1} S(\lambda ,\mu)}}{2},$$ where $\tau_0 = \mu (- b^{-1} a) +\eta +(b^{-1} a) \lambda ,\;t_0 = \mu (d b^{-1}) +\eta - (d b^{-1}) \lambda$ and $\epsilon=\frac{1}{4 \sqrt 2}$ or $\frac{1}{4}$ depending upon $\kappa \neq 0$ or $\kappa=0$. \end{theorem} \section{Extremality of J\o{}rgensen Inequality} \label{sext} The following theorem generalizes Theorem 1 of J\o{}rgensen and Kikka \cite{jk}. \begin{theorem}\label{ext1} Let $ S=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and $T=\begin{pmatrix}{\lambda}&0\\0&{\mu}\end{pmatrix} \in\S$. Suppose, $\langle S, T \rangle$ is discrete, non-elementary and for $\alpha=arg(\lambda),~\beta=arg(\mu)$, $\tau=2 \log |\lambda|$, $$\{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\}(1+|bc|)= 1.$$ We consider the Shimizu-Leutbecher sequence $$S_0=S, \hspace{.5 cm} S_{n+1}= S_n T S_n^{-1}.$$ Then $T$ and $S_{n+1}=S_n T S_n^{-1}$ generate a non-elementary discrete group and $$\{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\}(1+|b_n c_n|)= 1.$$ \end{theorem} \begin{proof} We consider the Shimizu-Leutbecher sequence $$S_0=S, \hspace{.5 cm} S_{n+1}= S_n T S_n^{-1}.$$ From relation \eqnref{sls} we get \begin{align*} a_{n+1}&=a_n\lambda d\sptilde_n-b_n\mu c\sptilde_n,& b_{n+1}&=-a_n\lambda b\sptilde_n+b_n\mu a\sptilde_n\\ c_{n+1}&=c_n\lambda d\sptilde_n-d_n\mu c\sptilde_n,& d_{n+1}&=-c_n\lambda b\sptilde_n+d_n\mu a\sptilde_n \end{align*}\\ and we also have, \begin{eqnarray*} |b_{n+1}||c_{n+1}|&=&|(-a_n\lambda b\sptilde_n+b_n\mu a\sptilde_n)(c_n\lambda d\sptilde_n-d_n\mu c\sptilde_n)|\\ &=&|a_nb_nc_nd_n||\lambda-a_n^{-1}b_n\mu
a\sptilde_n{b\sptilde_n}^{-1}||\lambda-c_n^{-1}d_n\mu c\sptilde_n{d\sptilde_n}^{-1}|. \end{eqnarray*} This implies (see \eqnref{ine1} in the proof of \thmref{jss}) \begin{equation}\label{ee1} |b_{n+1}c_{n+1}| \leq \{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\} (1+|b_n c_n|) | b_n c_n|. \end{equation} Let \begin{equation}\label{k} K=(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2. \end{equation} Construct the sequence $w_n$ where $$w_0=|bc|, \hspace{.2in} w_{n}=|b_n c_n|.$$ It follows from \eqnref{ee1} that $w_{n+1} \leq K w_n (1+w_n)$. Now note that $K(1+w_0)=1$. Now $w_0 \neq 0$, for otherwise, $S$ and $T$ will have a common fixed point. Hence $K<1$. Observe that $$1 \leq K(1+w_1) \leq K(1+Kw_0(1+w_0)) \leq K(1+w_0)=1,$$ and hence $K(1+w_1)=1$. By induction it follows that $K(1+w_n)=1$ for all $n \geq 0$. Since $K<1$, it follows that $w_n \neq 0$ for all $n$ and hence the result follows. \end{proof} The following corollary generalizes Theorem 2 of J\o{}rgensen and Kikka \cite{jk}. \begin{cor}\label{extc1} Let $ S=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and $T=\begin{pmatrix}{\lambda}&0\\0&{\mu}\end{pmatrix} \in\S$. If $\langle S, T \rangle$ is discrete, non-elementary and $$\{(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\}(1+|bc|)=1,$$ then $T$ is elliptic of order at least seven. \end{cor} \begin{proof} If possible suppose $T$ is hyperbolic. As in the above proof, it follows from the extremal relation that $K<1$. Now, let $\arg \lambda=\alpha$ and $\arg \mu=\beta$. Then \begin{eqnarray*} K&=& (\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2\\ &=& |\lambda|^2 + |\mu|^2 +2 (|\Im \lambda| | \Im \mu|-\Re \lambda \Re \mu)\\ &=& |\lambda|^2 + |\mu|^2 + 2 (|\sin \alpha| |\sin \beta|-\cos \alpha \cos \beta)\\ &=& |\lambda|^2 + |\mu|^2-2\cos(\alpha+ \beta). \end{eqnarray*} Let $|\lambda|=e^{\frac{\tau}{2}}$.
Then using $\cosh(\tau)=\frac{e^{\tau} + e^{-\tau} }{2}$, observe that \begin{eqnarray*} K & = & e^{\tau} + e^{-\tau} - 2 \cos(\alpha+ \beta)\\ &\geq& e^{\tau} + e^{-\tau}+2=(e^{\frac{\tau}{2} }+e^{-\frac{\tau}{2} })^2. \end{eqnarray*} Since $e^{\frac{\tau}{2} }+e^{-\frac{\tau}{2} }>1$, this implies $K>1$. This is a contradiction. Hence $T$ must be elliptic. Since $T$ is elliptic, $\tau=0$. Now, $K<1$ implies $\cos(\alpha+\beta)>\frac{1}{2}$. Thus $0<\alpha+\beta<\frac{\pi}{3}$. This implies that the order of $T$ must be at least seven. This completes the proof. \end{proof} \begin{cor}\label{extcc2} Let $ S=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and $T=\begin{pmatrix}{\lambda}&0\\0&{\mu}\end{pmatrix} \in\S$. If $\langle S, T \rangle$ is discrete, non-elementary and $$\beta(T)(1+|bc|) = 1,$$ then $T$ is elliptic of order at least seven. \end{cor} \begin{proof} Suppose, up to conjugacy, $\lambda$, $\mu$ are complex numbers. Then $K \leq \beta(T)$. Since $\langle S, T \rangle$ is discrete, we must have $K(1+|bc|) \geq 1$. Hence the equality in the hypothesis implies $K(1+|bc|)=1$. The result now follows from the above corollary. \end{proof} \begin{cor}\label{extsc3} Let $ S=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and $T=\begin{pmatrix}{\lambda}&0\\0&{\mu}\end{pmatrix} \in\S$. If $\langle S, T \rangle$ is discrete, non-elementary and $$\beta(T) L^k=1,$$ where $k = [1+|bc|] +1 >2$ and $L= 1+|\mu|>1$, then $T$ is elliptic of order at least seven. \end{cor} \begin{proof} Up to conjugacy, we assume $\lambda$, $\mu $ are complex numbers. It is enough to show that $K(1+|bc|) =1$, where $K=(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2$. Since $\langle S,T \rangle$ is a discrete non-elementary subgroup of $\S$, we have $K(1+|bc|) \geq 1$. Now note that $$K \leq \beta(T)=\frac{1}{L^{k}} \leq \frac{1}{1+|bc|}.$$ This implies $K(1+|bc|) \leq 1$. Hence, $K(1+|bc|)=1$. The result now follows from \corref{extc1}.
\end{proof} The following characterizes non-extreme groups. \begin{cor}\label{extp1} Let $ S=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and $T=\begin{pmatrix}{\lambda}&0\\0&{\mu}\end{pmatrix}$ generate a discrete non-elementary subgroup in $\S$. Suppose $$||ad|-1|> \big({\cot}^2 \big(\frac{\alpha + \beta}{2}\big) - 3\big).$$ Then $\langle S, T \rangle$ is not an extreme group. \end{cor} \begin{proof} If possible suppose $\langle S, T \rangle$ satisfies equality in the J\o{}rgensen inequality. Note that it follows from the equality in the J\o{}rgensen inequality that \begin{equation} \label{bc} |bc|=\frac{1-K}{K}, \end{equation} where $K=(\Re\lambda -\Re\mu)^2 +(|\Im \lambda|+|\Im\mu|)^2$. The condition $|\sigma|=|ad-aca^{-1}b|=1$ implies, \begin{eqnarray*} 1 & \leq & |ad|+|bc| \\ \Rightarrow |ad|& \geq & 1-|bc| \\ &=& K(1+|bc|)-|bc| \\ &=& K+ (K-1) \frac{(1-K)}{K}\\ &=&2-\frac{1}{K}. \end{eqnarray*} This implies \begin{equation} \label{ad} |ad|\geq 1-|bc|. \end{equation} Also we have from $|\sigma|=1$ that $|ad|-|bc|\leq 1$. This implies $|ad|\leq 1+|bc|$. Combining this with \eqnref{ad} we get $$||ad|-1|\leq |bc|.$$ Now we see that $K=2(1-\cos (\alpha + \beta))$ and, \begin{eqnarray*} |bc|&=& \frac{1-K}{K}= \frac{2\cos (\alpha + \beta) -1}{2-2\cos (\alpha + \beta)}\\ &=& \frac{\cos^2 ({\frac{\alpha + \beta}{2}})-3 \sin^2 (\frac{\alpha + \beta}{2})}{4\sin^2 (\frac{\alpha + \beta}{2})}\\ &=& \frac{\cot^2 (\frac{\alpha + \beta}{2})-3}{4}\\ &\leq& (\cot^2 (\frac{\alpha + \beta}{2}) -3).\end{eqnarray*} Hence, we have $$||ad|-1| \leq (\cot^2 (\frac{\alpha + \beta}{2}) -3),$$ which is a contradiction. This proves the result. \end{proof} \medskip \begin{theorem}\label{extt2} Suppose $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$ and $T=\begin{pmatrix} \lambda& \eta \\ 0& \mu\end{pmatrix}$, $\Re \lambda = \Re \mu=\kappa$, $|\lambda|\leq 1\leq |\mu|$, generate a non-elementary discrete subgroup in $\S$.
Suppose, $$|c| \sqrt{|\tau_0||t_0|} = \frac{1+\sqrt{1- \epsilon^{-1} S(\lambda , \mu)}}{2},$$ where $S(\lambda , \mu)=|\mu|(|\Im \lambda|+| \Im \mu| )\leq \epsilon$ and $\epsilon=\frac{1}{4 \sqrt 2}$ or $\frac{1}{4}$ depending on $\kappa \neq 0$ or $\kappa=0$. We consider the Shimizu-Leutbecher sequence $$S_0=S, \hspace{.5 cm} S_{n+1}= S_n T S_n^{-1}.$$ Then, for each $n$, $\langle S_{n} , T \rangle$ is a non-elementary discrete subgroup of $\S$ and $$|c_n| \sqrt{|\tau_n||t_n|} = \frac{1+\sqrt{1- \epsilon^{-1} S(\lambda , \mu)}}{2},$$ where $\tau_n = \lambda (-{c_n}^{-1}d_n)+\eta + ({c_n}^{-1}d_n)\mu ,\;t_n=\lambda (a_n {c_n}^{-1})+\eta - (a_n{c_n}^{-1}) \mu$. \end{theorem} \begin{proof} We prove the result assuming $\kappa \neq 0$. The case $\kappa=0$ is similar. Consider the Shimizu-Leutbecher sequence $S_0 =S, ~S_{n+1}=S_n T {S_n}^{-1}$, where $S_n =\begin{pmatrix}a_n &b_n\\ c_n &d_n\end{pmatrix}$\\{}\\ Now,\begin{eqnarray*} S_{n+1} &=& S_n T {S_n}^{-1}\\ &=& \begin{pmatrix} a_n&b_n\\c_n &d_n\end{pmatrix} \begin{pmatrix} \lambda & \eta\\0& \mu\end{pmatrix} \begin{pmatrix}d\sptilde_n&-b\sptilde_n\\-c\sptilde_n&a\sptilde_n\end{pmatrix}\\ &=&\begin{pmatrix}a_n\lambda d\sptilde_n- a_n \eta c\sptilde_n-b_n\mu c\sptilde_n&-a_n\lambda b\sptilde_n+ a_n \eta a\sptilde_n +b_n\mu a\sptilde_n\\ c_n\lambda d\sptilde_n-c_n \eta c\sptilde_n - d_n\mu c\sptilde_n&-c_n\lambda b\sptilde_n+ c_n \eta a\sptilde_n+d_n \mu a\sptilde_n\end{pmatrix} \end{eqnarray*} Define $\tau_n , t_n$ by \begin{eqnarray} {\tau}_n &=& {\lambda} (-{c_n}^{-1} d_n) +\eta + ({c_n}^{-1} d_n) \mu \\ t_n &=& \lambda (a_n {c_n}^{-1}) + \eta - (a_n {c_n}^{-1}) \mu \end{eqnarray} We see that \begin{eqnarray*} c_{n+1} &=& c_n\lambda d\sptilde_n-c_n \eta c\sptilde_n - d_n\mu c\sptilde_n\\ &=& - c_n ( \lambda (-d\sptilde_n{c\sptilde_n}^{-1}) +\eta + ({c_n}^{-1}d_n ) \mu)c\sptilde_n\\ &=& - c_n \tau_n c\sptilde_n\\ \text{So,}\;|c_{n+1}| &=& |\tau_n c_n| |c_n| \end{eqnarray*} In a similar way we can have
\begin{eqnarray*} |d_{n+1}| &\leq& |\tau_n c_n| |a_n| + 2r\\ |a_{n+1}| &\leq& |\tau_n c_n| |a_n| + \frac{2}{r}\\ \end{eqnarray*} Also, $|b_{n+1}| \leq |a_n|^2 + r S(\lambda , \mu) |a_n||b_n|$, as in the proof of \thmref{jg}. Also, we have \begin{eqnarray*} |\tau_{n+1} c_{n+1}| &\leq& |\tau_n c_n| |t_n c_n| + \sqrt 2 S(\lambda , \mu)\\ |t_{n+1} c_{n+1}| &\leq& |\tau_n c_n| |t_n c_n| + \sqrt 2 S(\lambda , \mu) \end{eqnarray*} \\ Consider the sequence $$ x_0 = |c| \sqrt {|\tau_0| |t_0|}, ~ x_{n+1} = x_n^2 + \sqrt 2 S(\lambda , \mu), ~ \hbox{ where, }S(\lambda , \mu) \leq \frac{1}{4\sqrt 2}.$$ Note that $\{x_n\}$ is a monotonically decreasing sequence of real numbers and is bounded above by $ \frac{1+\sqrt{1-4 \sqrt 2 S(\lambda , \mu)}}{2}$. By the hypothesis $x_0=\frac{1+\sqrt{1-4 \sqrt 2 S(\lambda, \mu)}}{2}$. Hence $\{x_n\}$ must be a constant sequence. In particular, $c_n \neq 0$ for all $n$ and hence, $S_n$ and $T$ cannot have a common fixed point. Thus $\langle S_n, T \rangle$ is non-elementary. \end{proof} \begin{cor}\label{extc2} Let $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix} ,~T=\begin{pmatrix} \lambda& \eta\\ 0& \mu\end{pmatrix}$,\;where $\Re \lambda =\Re \mu$, $|\lambda|\leq 1\leq |\mu|$, generate a discrete, non-elementary subgroup of $\S$. Suppose \begin{eqnarray*}{\tau}_0 &=& {\lambda} (-{c}^{-1} d) + \eta + ({c}^{-1} d) \mu,\\ t_0 &=& \lambda (a {c}^{-1}) + \eta - (a {c}^{-1}) \mu. \end{eqnarray*} If $$\frac{|\tau_0-t_0|}{|\tau_0 t_0|} > |\bar c d + a \bar c|,$$ then $\langle S, T \rangle$ is not extreme. \end{cor} \begin{proof} Without loss of generality, assume $|\lambda|\leq 1$. Suppose $\langle S, T \rangle$ is extreme. Then $|c|^2 |\tau_0 t_0|={\kappa_0}^2$, where $\kappa_0=\frac{1+\sqrt{1-4 \sqrt 2 S(\lambda , \mu)}}{2}$.
Note that $$\tau_0-t_0=-\lambda(c^{-1} d + ac^{-1}) + (c^{-1} d + ac^{-1}) \mu=-\Im \lambda(c^{-1} d + ac^{-1})+(c^{-1} d + ac^{-1}) \Im \mu.$$ Thus \begin{eqnarray*} |\tau_0-t_0| &\leq& (|\Im \lambda|+|\Im \mu|)( |c^{-1}d+ac^{-1}|)\\ & \leq &(|\Im \lambda|+|\Im \mu|) {\frac{1} {|c|^2} } | \bar c d + a \bar c| \cdot \frac{|c|^2|\tau_0 t_0|}{{\kappa_0}^2}. \end{eqnarray*} This implies $$\frac{|\tau_0-t_0|}{|\tau_0 t_0|}\leq S(\lambda, \mu) \frac{|\bar c d + a \bar c|}{{\kappa_0}^2}.$$ Now note that $S(\lambda, \mu) \leq \frac{1}{4 \sqrt 2}<\frac{1}{4}$ and $\kappa_0\geq \frac{1}{2}$, hence $\frac{S(\lambda, \mu)}{\kappa_0^2} \leq 1$. So, $$\frac{|\tau_0-t_0|}{|\tau_0 t_0|} \leq |\bar c d + a \bar c|.$$ This proves the result. \end{proof} If we choose $T=\begin{pmatrix} \lambda& 0 \\ \eta& \mu\end{pmatrix}$, then analogous results to \thmref{extt2} and \corref{extc2} follow using similar arguments as above. \begin{cor}\label{extc3} Let $S=\begin{pmatrix} a&b\\ c&d \end{pmatrix} ,~T=\begin{pmatrix} \lambda& 0\\ \eta& \mu\end{pmatrix}$, $\Re \lambda =\Re \mu$, generate a discrete, non-elementary subgroup of $\S$. Suppose \begin{eqnarray*}\tau_0 &=&\mu (- b^{-1} a) +\eta +(b^{-1} a) \lambda ,\\t_0 &=& \mu (d b^{-1}) +\eta - (d b^{-1}) \lambda.\end{eqnarray*} If $$\frac{|\tau_0-t_0|}{|\tau_0 t_0|} > |\bar b d + a \bar b|,$$ then $\langle S, T \rangle$ is not extreme. \end{cor} \subsection{Examples of Extreme Groups} Let us consider $~S=\begin{pmatrix}a&0\\c&d\end{pmatrix},~T=\begin{pmatrix}\lambda&c^{-1}j\\0&\mu\end{pmatrix} \in \S$ with $|c|\geq 1$ and $\Im\lambda =\Im\mu=0$. Suppose that the subgroup $\langle S,T \rangle$ in $\S$ is non-elementary and discrete. For example, if $a=d=c=1$ and $\lambda=\mu=1$ then this is the case.
Then we see that $\tau_0 = c^{-1}j$ and $t_0= c^{-1}j$, so $|c|\sqrt{|\tau_0||t_0|} = 1$, whereas $S(\lambda , \mu)=0$ gives $\frac{1+\sqrt{1-4\sqrt 2 S(\lambda , \mu)}}{2}= \frac{2}{2}= 1$. Hence $|c|\sqrt{|\tau_0||t_0|} = 1=\frac{1+\sqrt{1-4\sqrt 2 S(\lambda , \mu)}}{2}$.
\section{Introduction \label{sec:intro}} Exoplanet imaging survey instruments reach deep contrast performance by attenuating the stellar PSF using a coronagraph~\citep[e.g.][]{oppenheimer2012,macintosh2014,beuzit2008,liu2010}. Many designs have significantly reduced sensitivity within a $5~\lambda/D$ angular region around the host star, where $\lambda$ is the wavelength and $D$ is the telescope diameter. High resolution, non-occulting methods, like non-redundant mask (NRM) interferometry~\citep[e.g.,][]{baldwin1986,tuthill2000}, complement high contrast methods by probing small spatial scales at moderate contrast. NRM coupled with adaptive optics can reach contrasts of about 6 magnitudes at $\lambda/B$, with reduced contrast below $\lambda/B$~\citep{lacour2011}. Here $B$ is the longest baseline spanned by the mask (typically close to the telescope diameter). This complementary high resolution approach can reveal structures close to bright point sources, which is particularly exciting for young protoplanetary systems. The NRM is especially suited for multiplicity studies at $<2\lambda/D$ scales. Combined with polarimetry, polarized structures can be resolved close to the host star. High resolution imaging can play an important role in bridging the gap between companion point source detection methods. Very high contrast methods probe the outer architectures of solar systems and have little or no overlap with astrometry or radial velocity detection sensitivities (the latter in part due to differences in age sensitivities between RV and imaging). High resolution methods like NRM are sensitive to objects at intermediate separations, especially for sources over $100$ pc away.
NRM on large ground-based telescopes has been used to resolve structure in the gaps of transitional disks~\citep[e.g.,][]{kraus2012, biller2012, sallum2015}, has helped push multiplicity studies to closer separations~\citep[e.g.][]{kraus2008, sana2014, duchene2018}, and, in combination with radial velocity, has tracked the orbits of close binaries to determine dynamical masses for young stellar binaries~\citep{rizzuto2016}. NRM has also been used for image reconstruction of massive stars~\citep[e.g.,][]{tuthill1999, norris2012} and disks~\citep[e.g.,][]{cheetham2015,sallum2015tcha}. Combining aperture masking with an adaptive optics (AO) system provides stable observations that take advantage of both the image quality provided by AO and the self-calibrating observables measured with a non-redundant pupil. By splitting the pupil into a set of unique hole-to-hole baseline pairs, fringe phases and amplitudes can be measured uniquely, where each fringe is formed from a pupil baseline. In addition, phase closure calibrates hole-based phase errors that arise from atmospheric fluctuations and instrument non-common path aberrations~\citep{jennison1958}. In the case of extreme adaptive optics (ex-AO), which uses thousands of actuators to correct small corrugations in the wavefront, interference fringes are stable over many seconds of integration, making fainter sources more accessible through this method. In this paper we present results from observations with the Gemini Planet Imager NRM and discuss the performance and post-processing in detail. Our analysis provides a comparison with aperture masking on other ground-based instruments and demonstrates complementarity with upcoming space-based NRM on JWST-NIRISS~\citep{doyon2012}. We confirm our results using two different data pipelines and detail the data reduction procedures. With the release of this article, we make our primary pipeline public, along with examples of the analyses in this paper.
\section{Implementing NRM on The Gemini Planet Imager} \subsection{The Gemini Planet Imager's non-redundant mask} GPI has a 10-hole non-redundant mask (Fig. \ref{fig:g10s40}) in its apodizer wheel, a warm pupil located after the deformable mirror. We provide the mask hole coordinates with respect to the primary mirror in Table \ref{tab:maskgeom}, including the outer diameter physical size in the apodizer wheel where the mask sits. This pupil mask transmits roughly 6.2\% of the light compared to a completely unocculted pupil (not considering spiders, secondary obstructions, or Lyot stops). The mask forms 45 unique baselines (spatial frequencies), which correspond to 45 fringes in the image plane. $\lambda/B$ spans $\sim45-330$ mas in H band. There are 120 total combinations of hole triplets that form closing triangles, and a set of 36 unique triangles that don't repeat any baseline. \begin{figure} \centering \includegraphics[height=1.5in]{fig1a.jpg} \includegraphics[width=1.5in]{fig1b.pdf} \caption{\textbf{Left:} The 10-hole non-redundant mask on the Gemini Planet Imager. \textbf{Right:} Associated spatial frequency coverage, where the longest baseline is $6.68~$m.} \label{fig:g10s40} \end{figure} \begin{table}[htbp!] \caption{Mask hole dimensions measured in mm from center. \label{tab:maskgeom}} \begin{center} \begin{tabular}{c|c} X & Y \\ \hline -1.061& -3.882 \\ -0.389& -5.192 \\ 2.814& -0.243 \\ 3.616& 0.995 \\ -4.419& 2.676 \\ -1.342& 5.077 \\ -4.672& -2.421 \\ 4.157& -2.864 \\ 5.091& 0.920 \\ 1.599& 4.929 \\ \end{tabular} \end{center} Hole diameter: 0.920 mm \\Gemini S outer diameter (OD): 7.770 m (after baffling) \\Apodizer outer diameter in this re-imaged pupil plane: 11.68 mm. 
(Lenox Laser, Glen Arm, MD).\\ Projections of the in-pupil coordinates are magnified by a factor of $\sim650$ onto the primary.\\ \end{table} GPI’s focal plane masks are implemented as mirrors that reflect the off-axis light to the science channel and pass the on-axis starlight through a central hole. In NRM mode, we use a mirror with no hole, so the full field of view passes to the IFS. However, in coronagraph mode the central starlight is sent to a tip/tilt sensor for additional low-order correction. Therefore, all \textit{non-coronagraphic} observations do not benefit from this additional tip/tilt correction. Small jitter in the image leads to slight smearing of fringes and reduced contrast. This is worsened in poorer weather conditions, including high winds. We discuss this in detail in Section \ref{sec:contrast}. The NRM pupil position for GPI has been measured and fixed to lie entirely within the pupil and not overlap with any defective actuators or spider supports. The in-pupil mask coordinates are listed in Table \ref{tab:maskgeom} and are converted to projected coordinates on the primary mirror by the factor between the pupil and primary outer diameter (OD): $7770.1/11.998$\footnote{Future calibration may change this magnification slightly.}. The position should not need to be adjusted, but any vignetting can be investigated with the pupil-viewing camera. A detailed discussion of the procedure to determine the mask orientation and adjust its position can be found in \cite{greenbaum2014spie}. Baseline coordinates are computed as $U_{i,j} = X_i - X_j$, $V_{i,j} = Y_i - Y_j$ for hole combinations $[i,j]$, where $X$ and $Y$ are the mask hole positions in the pupil (Table \ref{tab:maskgeom}). In the coordinate system used in this work, to reach the detector orientation the mask coordinates were rotated clockwise by $114.7^{\circ}$.
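The baseline construction and rotation just described can be sketched in Python (a minimal illustration using the Table \ref{tab:maskgeom} coordinates; treating clockwise as a negative angle is our convention for this sketch, not necessarily the pipeline's):

```python
import itertools

import numpy as np

# Mask hole coordinates (mm) in the re-imaged pupil plane (mask geometry table).
HOLES = np.array([
    [-1.061, -3.882], [-0.389, -5.192], [ 2.814, -0.243], [ 3.616,  0.995],
    [-4.419,  2.676], [-1.342,  5.077], [-4.672, -2.421], [ 4.157, -2.864],
    [ 5.091,  0.920], [ 1.599,  4.929],
])

def baselines(holes):
    """U_ij = X_i - X_j, V_ij = Y_i - Y_j for all hole pairs [i, j]."""
    return np.array([holes[i] - holes[j]
                     for i, j in itertools.combinations(range(len(holes)), 2)])

def rotate_cw(uv, theta_deg):
    """Rotate baseline vectors clockwise by theta_deg (clockwise taken as a
    negative angle in the usual counterclockwise-positive convention)."""
    t = np.deg2rad(-theta_deg)
    u, v = uv[:, 0], uv[:, 1]
    return np.column_stack([u * np.cos(t) - v * np.sin(t),
                            u * np.sin(t) + v * np.cos(t)])

# 45 unique baselines, rotated to the detector orientation.
uv = rotate_cw(baselines(HOLES), 114.7)
```

Ten holes yield the 45 unique baselines quoted above, and the rotation preserves baseline lengths, so the spatial frequency coverage is unchanged.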
Converting the initial baseline vectors $[U_0, V_0]$ into vectors $[U_r, V_r]$ rotated by $\theta$ consists of the operation: $U_{r} = U_0\cos(\theta) - V_0\sin(\theta)$, $V_{r} = U_0\sin(\theta) + V_0\cos(\theta)$. \subsection{Observing Sequences and Calibration} Uncorrected wavefront and non-common path errors lead to residual phase errors. At least one nearby, single, unresolved calibration source should be observed close in time to each sequence. In spectroscopic mode it is less important to choose a calibration source that matches the target color because the individual wavelength slices are close to monochromatic. A calibration source should match the target brightness in the wavefront sensing filter (approximately I band). Multiple calibration sources in a survey-like program can provide a good estimate of systematic calibration errors~\citep[e.g.,][]{kraus2008}, as long as the sources are observed consecutively, in similar conditions. However, in the case of observing individual science targets it may not be practical or efficient to obtain many calibration sources. In current operations, it takes approximately 10 minutes to slew to and acquire a new target. This makes back-and-forth switching between target and calibration source time-consuming. We have adopted the strategy of observing the target in a full sequence followed by one or two calibration sources. A polarimetric sequence additionally involves looping through four half waveplate angles (HWPAs) per ``integration." While this increases the total integration on source compared to the spectroscopic mode, polarimetric images are broadband so each integration is generally shorter. Choosing the exposure time for a single integration is a balance between observation efficiency and minimizing fringe smearing. Typically, we aim for an exposure time that provides at minimum $3000$ counts in the peak of the raw detector image and at maximum $14000$ counts to avoid saturation.
The total number of photons collected should satisfy the desired contrast sensitivity. We discuss systematics that degrade contrast sensitivity beyond photon noise in Section \ref{sec:contrast}. Table \ref{tab:blims} lists the approximate maximum brightness for NRM observations in each filter combination. All brightness limits and estimated exposure times are approximate and were derived empirically from commissioning observations. An empirically-determined exposure time calculator is available in the \pkg{ImPlaneIA} pipeline\footnote{https://github.com/agreenbaum/ImPlaneIA}. \begin{table}[htbp!] \caption{Gemini Planet Imager approximate maximum brightness limits for all NRM settings. All values are in apparent magnitude in the specified band.} \label{tab:blims} \begin{center} \begin{tabular}{l|c|c|c|c|c} MODE & Y & J & H & K1 & K2 \\ \hline\hline Spectroscopic & 1.8 & 2.2 & 1.8 & 1.8 & 1.8 \\ Polarimetric & 3.0 & 3.0 & 3.0 & 3.0 & 3.0 \end{tabular} \end{center} \end{table} \section{Observations and data reduction} All observations discussed in this paper were taken on the Gemini Planet Imager with its 10-hole non-redundant mask, as part of program GS-ENG-GPI-COM. A summary of the observations, all taken in stationary pupil mode, is contained in Table \ref{tab:obs}. The observations presented in this paper focus mostly on point sources in a range of conditions to determine contrast limits and polarization precision, as well as two binary systems at different contrast ratios. \begin{table*}[htbp] \caption{Summary of observations presented in this paper, indicating date string, source name, observing mode, total integration time, and various observatory parameters. All observations are taken in stationary pupil mode so that the sky rotates with respect to the detector.
\label{tab:obs}} \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline \hline Date & Source & Mode & Single& $N_{exp}$ & $=$ Total\footnote{Single integration $\times$ Number of exposures = Total integration} & Seeing\footnote{DIMM (Differential Image Motion Monitor)} & WFE\footnote{Residual WFE (wavefront error) measured from GPI's AO system.} & Airmass & Wind\footnote{Ground-layer wind measurement} & sky rot \tabularnewline YYMMDD& & & [s] & & [s]& ($\arcsec$) & [nm] & & [m/s] & [deg] \tabularnewline \hline 131211 & HR 2690 & NRM Spect - H & 59.6 & 8 & 476.8& 0.67 & 116.15 & 1.23 & 0.49 & 0.021 \tabularnewline & HR 2716 & NRM Spect - H & 59.6 & 8 & 476.8 & 0.58 & 121.80 & 1.23 & 0.36 & 0.37 \tabularnewline & HR 2839 & NRM Spect - H & 43.6 & 8 & 348.8 & 0.51 & 134.2 & 1.23 & 0.40 & 0.234 \tabularnewline 140324 & HD 63852 & NRM Spect - H & 1.5 & 20 & 30.0 & 0.87 & 81.99 & 1.17 & 0.41 & 0.67 \tabularnewline 140511 & HD 63852 & NRM Spect - H & 1.5 & 20 & 30.0 & 0.81 & 160.94 & 1.55 & 11.5 & 0.5 \tabularnewline & Internal & NRM Spect - H & 1.5 & 63 & 94.5 & N/A & 32.82 & N/A & N/A & N/A \tabularnewline 140512 & HD 142527 & NRM Spect - J & 59.6 & 9 & 536.4 & 1.4 & 190.57 & 1.03 & 8.5 & 11.4 \tabularnewline & HD 142695 & NRM Spect - J & 53.8 & 8 & 430.4 & 1.4 & 177.77 & 1.04 & 8.6 & 5.0 \tabularnewline 160504& HIP 74604 & NRM pol - K1 & 4.4 & 40 & 176.0 & 2.19 & 144.23 & 1.08 & 4.6 & 1.5 \tabularnewline \end{tabular} \end{center} \end{table*} During commissioning in December of 2013 we observed the known binary HR 2690 ($\Delta H\sim2$) and two unresolved calibration sources HR 2716 and HR 2839. This sequence of observations was chosen to demonstrate the recovery of a moderate contrast binary system for proof of concept. In March of 2014 we observed bright single source HD 63852 to estimate contrast limits compared to the ideal case of the internal source. In May of 2014 we returned to this source, providing a comparison between observing epochs. 
We also observed HD 142527, which contains an M-dwarf companion, HD 142527 B ($\Delta J \sim4.6$), to demonstrate deeper contrast retrieval of a known binary companion. For this dataset we observed two calibration sources, HD 142695 and HD 142384, though the latter was found to be a close binary after our observations~\citep{lebouquin2014}. Details of the analysis are in \S \ref{sec:spect}. In May of 2016 we took polarimetric observations of bright unresolved sources to determine the calibration limit and assess systematic biases. We present one example, HIP 74604, our best dataset, and discuss polarimetric sensitivity in \S \ref{sec:pol}. \subsection{Raw data reduction \label{sec:drp}} The data are processed from raw 2D detector exposures into datacubes of images at each wavelength or polarization through the GPI Data Reduction Pipeline (DRP)~\citep{drp}. Wavelength calibration is performed with Argon arc lamp exposures. Shifts in the location of the spectra due to flexure are calibrated by arc lamp exposures taken close in time to each set of observations~\citep{wolff2014}. For polarimetry data we use the recipe template for polarization data taken with the NRM called ``Basic NRM Polarization Datacube Extraction," which performs the polarimetric spot calibration, smooths the polarization calibration, subtracts a dark background, corrects for 2D flexure, removes microphonics noise, and interpolates bad pixels in the raw frame before assembling the polarization data cube. Details of DRP primitives can be found in online documentation\footnote{http://docs.planetimager.org/pipeline/}. \subsection{Extracting Fringe observables} \label{sec:datareduction} We measure fringe phases and amplitudes from reduced datacubes using two different aperture masking pipelines: the Sydney University pipeline, written in IDL, and a pipeline implementing the Lacour-Greenbaum (LG) algorithm~\citep{greenbaum2015}, written in Python. The former analyzes images in the Fourier domain.
The latter measures fringes in the image plane. The Fourier plane approach used in the Sydney pipeline measures the phases and square-visibilities directly from the Fourier transform of the image. First, images are multiplied by a super-Gaussian window function of the form $e^{-kx^4}$, which has the effect of smoothing in the Fourier plane. Then, images are Fourier transformed, which separates the information from different baselines into distinct regions. The phases and visibilities are measured for all points within a radius of three Fourier sampling elements around the predicted frequency for each baseline. To calculate the square-visibilities and phases for each baseline, these measurements are combined by weighting with a matched filter. Closure phases are formed by considering sets of 3 baselines that form a closing triangle (i.e. the vector sum of their frequencies is zero). Rather than use the weighted phases for each baseline, a number of measurements are instead calculated from each set of 3 pixels (within a small area around the predicted frequency of each baseline) that forms a closing triangle. These are then combined by weighting with a matched filter~\citep[e.g.][]{monnier1999}. This matched filter approach relies on pre-computing the expected Fourier-plane profile of NRM images using fixed values for the size of the pupil mask holes, plate scale, and wavelength for each IFS channel. The image plane pipeline assumes a plate scale and monochromatic wavelength (spectroscopic mode) or defined bandpass (polarimetric/broadband mode) and fits $A' \sin ( k\cdot \Delta x_{i,j}) + B' \cos(k \cdot \Delta x_{i,j}) $ to each fringe generated by a particular hole-pair baseline, where $A' = A\sin(\Delta\phi)$ and $B' = A\cos(\Delta\phi)$, $\Delta\phi$ is the fringe phase shift, and $A = \sqrt{A'^2 + B'^2}$ is the fringe amplitude. Here, $k=(u,v)$ is the 2D coordinate in the image plane. This algorithm is described in detail in \S3 of \cite{greenbaum2015}.
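As a simplified 1D illustration of this image-plane fit (the pipeline fits all 45 fringes in 2D simultaneously; the grid, fringe frequency, and noise level below are arbitrary choices), the two fringe coefficients follow from a linear least-squares solve:

```python
import numpy as np

# Simulate one fringe, I(x) = a*cos(k*x + phi), on a 1D pixel grid.
rng = np.random.default_rng(0)
x = np.arange(256)
k, a_true, phi_true = 2 * np.pi * 0.05, 0.5, 0.3
data = a_true * np.cos(k * x + phi_true) + 1e-3 * rng.standard_normal(x.size)

# Linear least squares for A'*sin(kx) + B'*cos(kx).
basis = np.column_stack([np.sin(k * x), np.cos(k * x)])
(A, B), *_ = np.linalg.lstsq(basis, data, rcond=None)

amp = np.hypot(A, B)        # fringe amplitude, recovers a_true
phase = np.arctan2(-A, B)   # fringe phase, recovers phi_true
```

Because the model is linear in the two coefficients, no nonlinear optimization is needed; amplitude and phase then follow from the quadrature sum and the arctangent.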
The sub-pixel centering of the image is measured by computing $x$ and $y$ tilt in the numerical Fourier transform of the image. This centroid is used to sample the model onto oversampled detector pixels, which are then binned to the detector scale. For NRM+polarimetry (or broadband) images, for which there is dispersion in the PSF, we use filter transmission files available in the GPI DRP and an approximate source spectrum to model the dispersion. We compared the two pipelines and confirmed that they yield consistent results. We show results from the image-plane pipeline in this paper. An image-plane pipeline using the LG algorithm, \pkg{ImPlaneIA}~\citep{greenbaum2018soft, pipeline_2018}, is available publicly\footnote{https://github.com/agreenbaum/ImPlaneIA} with further documentation and examples. \subsection{Calibration and analysis of fringe observables \label{sec:observables}} Both the Sydney and LG pipelines use similar analysis tools following calculation of fringe observables to produce the results shown in this paper. For spectroscopic data we compute an average closure phase and standard error over the set of integrations for each closing triangle in each wavelength slice. This produces $N_{\mathrm{triangles}}\times N_{\lambda}$ observables. In this case $N_{\mathrm{triangles}}=120$ in one datacube slice, and $N_{\lambda}=37$. In general, we do not see a large amount of field rotation in our observation sequences (see Table \ref{tab:obs}), so we compute an average position and consider an average parallactic angle. For our observations of HD 142527, which contain $\sim11^\circ$ of rotation, we compared the results when accounting for sky rotation by splitting exposures into smaller groups (see \S \ref{sec:spect} for more details). We subtract the measured average closure phases of the calibration source(s) from our science target closure phases and add errors in quadrature.
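A minimal sketch of this averaging and calibration step (the array shapes and units of degrees are illustrative, not the pipeline's internal layout):

```python
import numpy as np

def average_cp(cps):
    """Mean closure phase and standard error over a sequence of exposures.

    cps: array [n_exposures, n_triangles, n_wavelengths] of closure
    phases (degrees).
    """
    mean = cps.mean(axis=0)
    stderr = cps.std(axis=0, ddof=1) / np.sqrt(cps.shape[0])
    return mean, stderr

def calibrate_cp(cp_sci, err_sci, cp_cal, err_cal):
    """Subtract calibrator closure phases; combine errors in quadrature."""
    return cp_sci - cp_cal, np.sqrt(err_sci**2 + err_cal**2)
```

Averaging first and then subtracting the calibrator means a single bad exposure inflates the standard error rather than silently biasing the calibrated phases.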
Binary detection and contrast limits rely on a model for the fringe visibility of a binary point source: \begin{eqnarray} V_{u,v} = \frac{1+ r e^{-2\pi i(\alpha\cdot u + \delta \cdot v)}}{1+r} \label{eqn:visibilities} \end{eqnarray} where $r$ is the contrast ratio between the secondary and primary, $u,v$ are the baseline coordinates of a given hole pair, and $\alpha,\delta$ are the sky coordinates of the secondary relative to the primary. The absolute orientation is calibrated in the standard way for GPI data, accounting for the orientation of the lenslet array ($+23.5^\circ$), detector, and instrument position angle (PA). Plate scale and PA calibrations have been performed by the observation of astrometric calibrators, yielding a pixel scale of $14.166\pm0.007$ mas pixel$^{-1}$ and a north offset of $-0.1^\circ \pm0.13^\circ$ \citep{konopacky2014, derosa2015}. The derotation angle in degrees to place North up is $\mathrm{AVPARANG} - \mathrm{AVCASSANG} +23.4$. AVPARANG and AVCASSANG are header keywords in GPI data files. In practice, closure phase errors estimated from the data are often underestimated, especially when only one or two calibration sources are observed and systematic errors cannot be properly determined. We scale the errors by a factor $\sqrt{N_{holes}/3}$ to account for redundancy from repeating baselines. We also add a constant error term to the closure phases so that the reduced $\chi^2$ is close to 1. The binary detection limits reported in this paper are estimated from the calibrated closure phase errors based on a signal-to-noise ratio (SNR) threshold, where \begin{eqnarray} \label{eqn:snr} SNR &=& \sqrt{\sum_{i=1}^{N_{CP}}CP_{i,\alpha,\delta,r}^2/\sigma_{i,CP}^2} \end{eqnarray} Model closure phases are calculated from Equation \ref{eqn:visibilities}. Model phases scale roughly linearly with contrast ratio $r$.
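Equations \ref{eqn:visibilities} and \ref{eqn:snr} translate directly into code. The sketch below assumes triangles are stored so that the three baseline vectors of each triangle sum to zero, with spatial frequencies in cycles per radian and offsets in radians (our bookkeeping conventions, not the pipeline's):

```python
import numpy as np

def binary_cp(tri_uv, alpha, delta, r):
    """Model closure phases (radians) of a binary from the visibility model.

    tri_uv: [n_tri, 3, 2] spatial frequencies (cycles/radian) of the three
    baselines in each closing triangle, oriented so each triangle sums to
    zero; alpha, delta: offsets of the secondary (radians); r: contrast ratio.
    """
    phase = -2 * np.pi * (alpha * tri_uv[..., 0] + delta * tri_uv[..., 1])
    vis = (1 + r * np.exp(1j * phase)) / (1 + r)
    # Closure phase = sum of fringe phases around each closing triangle.
    return np.angle(vis).sum(axis=-1)

def detection_snr(tri_uv, alpha, delta, r, sigma_cp):
    """SNR of a binary signal against calibrated closure phase errors."""
    cp = binary_cp(tri_uv, alpha, delta, r)
    return np.sqrt(np.sum(cp**2 / sigma_cp**2))
```

Because the model phases are roughly linear in $r$, a limiting contrast at a chosen threshold follows by rescaling, e.g. \texttt{5 * r / detection\_snr(...)} for a 5$\sigma$ limit.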
We estimate the contrast ratio detection limit at SNR=5 as: \begin{eqnarray} \label{eqn:r} && r_5 = \frac{5\times r_{model}}{SNR} \end{eqnarray} To generate contrast curves we compute $r_5$ over a range of separations and position angles. Sensitivity varies somewhat with position angle based on mask geometry. GPI's mask has fairly uniform visibility coverage, improved further in spectroscopic mode by the wavelength axis. In polarimetric mode, the light is split with a Wollaston prism into two orthogonal polarizations. A half-wave plate optic is used to rotate the angle of polarization during observation~\citep{perrin2015}. This enables a differential measurement between orthogonal polarizations for both fringe amplitude and fringe phase. We compute differential visibilities and differential closure phases following \cite{norris2015}. With 4 half-wave plate (HWP) rotations at 0, 22.5, 45, and 67.5 degrees, we can build up two layers of calibration. First, we calibrate orthogonal polarizations in a single image: \begin{eqnarray} CP_{ortho-diff} &=& CP_{channel 1} - CP_{channel 2} \nonumber \\ V_{ortho-diff} &=& \frac{V_{channel 1}}{V_{channel 2}} \label{eqn:diffchan} \end{eqnarray} Next we calibrate orthogonal HWP rotations, for example, HWPA=$0^\circ$ with HWPA=$45^\circ$: \begin{eqnarray} CP_{0-45} &=& CP_{diff-0} - CP_{diff-45} \nonumber \\ V_{0-45} &=& \frac{V_{diff-0}}{V_{diff-45}} \label{eqn:diffang} \end{eqnarray} This removes instrumental effects, which contribute to all polarization states. \section{Spectroscopic mode \& binary contrast performance} \label{sec:spect} The spectroscopic mode on GPI provides a nearly monochromatic image at a set of wavelengths across each filter. The lack of bandwidth smearing makes fringe extraction straightforward in this configuration. The extra wavelength dimension provides many more baselines for a single observation, $N_{\lambda}\times N_{baselines}$ compared to $N_{baselines}$, where $N_{\lambda}$ is typically 37 for GPI.
\begin{figure} \centering \includegraphics[width=3.2in]{fig2.pdf} \caption{Contrast limits at SNR=5 for the two spectroscopic mode calibrated datasets analyzed in this paper, for HR 2690 ($\sim8$ minutes in H band) and HD 142527 ($\sim9$ minutes in J band). The righthand vertical axis shows the corresponding companion mass for a given apparent H magnitude of 6.5 for the primary, assuming an age of 1 Myr at 140 pc using the AMES-Cond models \citep{baraffe2003}.} \label{fig:typicalcontrast} \end{figure} \cite{zimmerman2012} demonstrated improved contrast from the set of IFS+NRM images compared to the combined dataset using the P1640 IFS. We find similar results when we analyze phase errors measured over all wavelength channels of the full datacube compared to data collapsed over the wavelength axis. For the collapsed data we model the PSF as polychromatic considering the approximate H-band filter throughput profile for GPI. The rest of the analysis is identical to the typical GPI case described in \S\ref{sec:datareduction}. \begin{figure*} \centering \includegraphics[width=5in]{fig3.pdf} \caption{Performance comparison based on internal source data for $\lambda$-collapsed (black) and IFS data (blue). The contrast is estimated at SNR=5. The first half of the data are calibrated with the second half, overestimating performance. The right panel shows snapshots of the data. The IFS provides twice the sensitivity and smooths out low-sensitivity windows at 200 and 400 mas.} \label{fig:ifu} \end{figure*} In Figure \ref{fig:ifu} we show an estimated contrast curve for an example dataset taken with the GPI internal source (light blue curve), which uses all wavelength channels. The contrast curve is computed according to Equations \ref{eqn:snr} and \ref{eqn:r} after scaling the errors by the baseline redundancy.
We also scale the errors by a factor $\sqrt{37/17}$, which roughly accounts for the fact that we measure $37$ wavelength channels interpolated over about 17 pixels. The full set of datacubes is split into two halves of exposures and calibrated against each other. This likely overestimates the sensitivity, but we consider the relative performance between data taken in different observing conditions. When the data are summed into one polychromatic image, contrast sensitivity is a factor of $\sim2-3$ worse. The spectroscopic mode is ideal for detection of faint companions to bright host stars, providing increased signal to noise overall. The additional spatial frequency coverage reduces regions of very low sensitivity that arise from the baseline configuration (i.e. the peaks of the collapsed cube curve at $\sim200$ and $\sim400$ mas). \subsection{Analyzing IFS Data - Simulation Example} \label{sec:spect_procedure} Spectral mode datasets can provide robust binary detection, constraining a companion's position at multiple wavelengths. We explore errors and biases on parameter estimation with simulated data of a binary source. The data are simulated from shifting and adding point source images measured from GPI's internal light source. Using internal source data ensures there is no resolved structure in the primary and also that the data still represent aspects of the GPI PSF that are not modeled (e.g., vibrations, detector effects). In general, this example will underestimate typical errors for two reasons: the bright internal source PSF is much more stable and the secondary companion is simulated from the same data as the PSF calibrator (as though one had a ``perfect" calibrator). We use this as an example to demonstrate the approach and provide more practical examples in \S\ref{sec:hr2690} and \S\ref{sec:142527}. The simulated faint companion is 45.5 mas away ($\sim1.2\lambda/D$, $\sim1.0\lambda/B_{max}$) at a position angle of 18.4$^\circ$.
We simulate an example flux ratio spectrum between two Phoenix models~\citep[e.g.,][]{allard2003} at $\mathrm{T}=3240~\mathrm{K}$ and $\mathrm{T}=5363~\mathrm{K}$ at 10 Myr. We measure the flux ratio spectrum in the following steps: \begin{enumerate} \item Fit for average flux ratio and common position over all $N_{\lambda}\times N_{CP}$ observables by MCMC. \item Find the flux ratio that minimizes $\chi^2_{binary}$ at the fixed position determined by the median position parameters recovered in Step 1. \item Applying the result from Steps 1 and 2 as a starting guess, use MCMC to fit a common position and $N_\lambda$ flux ratios (for each wavelength channel) -- a total of $N_{\lambda}+2$ parameters. \end{enumerate} \textbf{Fit for average flux ratio and common position:} We first fit for three parameters in the binary model: position angle, separation, and average contrast, using observables from all wavelength channels with \pkg{emcee}~\citep{dfm2013, dfm2013soft}. Our posteriors are localized around the solution; however, the differences between our simulated parameters and the recovered ones are larger than $1\sigma$, indicating that the errors may be underestimated. \textbf{Generate an initial estimate for flux ratio spectrum}: Next we fix the median position and fit for the contrast that minimizes $\chi^2_{binary}$ in each wavelength channel. This will provide a good starting guess for a finer fit of the spectrum and position. While it may not be essential to do this step, it is relatively fast to compute and can be a useful diagnostic before running a full MCMC fit for all parameters. Flux ratio errors in each channel are calculated by including all points on the $\chi^2$ grid where $\chi^2 < 1+\chi^2_{min}$. This is similar to the procedure in \cite{gauchet2016} for computing detection maps.
However, instead of computing reduced $\chi^2$, we find that using raw $\chi^2$, with errors scaled by a factor $\sqrt{N_{holes}/3}$ to account for baseline redundancy, produces fractional errors consistent with the fractional true error, defined as: $$f_{true} = \frac{s_{simulated} - s_{recovered}}{s_{simulated}}$$ where $s_{simulated}$ and $s_{recovered}$ are the simulated and recovered spectra in contrast, respectively. This method provides a good estimate of the spectrum across the band for a moderate contrast binary and is relatively quick to compute, but does not take into account the position parameter errors. \textbf{Simultaneous fitting of spectrum and relative astrometry:} Finally, we fit for the flux ratio in each wavelength channel and common position of the companion using \pkg{emcee}. We apply a long burn-in of 5000 iterations with 150 walkers, and run the fit for an additional 5000 iterations. After an initial run, we add a constant closure phase error in quadrature to the closure phase errors so that the reduced $\chi^2$ is roughly equal to 1, in this case $0.1^\circ$ of additional error. We then recompute this full step. We summarize the results of this procedure in Table \ref{tab:simastrometry} and Figure \ref{fig:sim_spectrum_recovery_mcmc}. In this case, the astrometry changes slightly between the two fits and the true error is larger than the computed errorbars (which are significantly lower than expected for properly calibrated on-sky observations). For the recovered spectrum the contrast in each channel is correct within the errorbars, with a small bias towards lower flux. \begin{table} \label{tab:simastrometry} \caption{Summary of input parameters and results from initial 3-parameter fit, and the full fit of astrometry and all wavelength channels simultaneously.} \begin{center} \begin{tabular}{|l|c|c|c|} \hline & Separation & PA & Avg.
Contrast \tabularnewline \hline Input & $45.4 \mathrm{mas} $ & $18.4^\circ $ & 0.03975 \tabularnewline 3-param & $45.47 \pm 0.03$ & $18.33 \pm 0.02$ & $0.0406 \pm 0.0001$ \tabularnewline Full fit & $45.24 \pm 0.03$ & $18.36 \pm 0.03$ & $0.0395 \pm 0.003$\footnote{The average contrast error is computed by adding the error in each channel in quadrature. This is an overestimation given covariance between frames.} \tabularnewline \hline \end{tabular} \end{center} \end{table} \begin{figure} \centering \includegraphics[width=3.5in]{fig4.pdf} \caption{The resulting spectrum measured by MCMC fit over 39 parameters (flux ratio in 37 wavelength channels, separation, and position angle). The orange dots represent the simulated spectrum and the blue stars represent the spectrum recovered by this method with $1\sigma$ errors. The dashed orange line in the bottom panel shows the fractional error between the simulated and recovered spectrum, while the gray region shows the fractional error bounds. \label{fig:sim_spectrum_recovery_mcmc}} \end{figure} \subsection{Spectral Channel Correlations} \begin{figure} \centering \includegraphics[width=3.4in]{fig5.pdf} \caption{Phase correlations over spectral channel with respect to channels 6 (top), 18 (middle), and 30 (bottom). The internal source data (black squares) show low levels of correlation except in the nearest neighboring channels. On-sky images in March 2014 (blue circles), which saw better conditions, and in May 2014 (pink diamonds), which saw worse conditions, show larger correlation between channels.} \label{fig:simcorr} \end{figure} Following \cite{zimmerman2012} we can describe the correlation of closure phases between spectral channels.
The average correlation is defined as: \begin{eqnarray} C(q,w_1;q,w_2) &=& \frac{\langle(\Psi_{q,w_1} - \overline{\Psi}_{q,w_1})(\Psi_{q,w_2} - \overline{\Psi}_{q,w_2})\rangle}{\sigma_{\Psi_{q,w_1}}\sigma_{\Psi_{q,w_2}}} \label{eqn;corr}\\ \overline{C}(w_1;w_2) &=& \frac{1}{N_{CP}}\sum_{q=0}^{N_{CP}-1}C(q,w_1;q,w_2) \label{eqn:avgcorr} \end{eqnarray} where $\Psi_{q,w_{i}}$ represents all the measured closure phases of the $q$th triplet at channel $w_{i}$, $\overline{\Psi}$ is the mean, and $\sigma$ is the standard deviation. \cite{zimmerman2012} showed large correlations between spectral channels across the band for P1640~\citep{oppenheimer2012} NRM IFS images. Some correlation is expected due to interpolation along the wavelength axis. The simulated dataset, generated from internal source data, does not suffer from atmospheric fluctuations. In this case we see a small amount of correlation between channels, except for the nearest neighboring 2-3 channels (Figure \ref{fig:simcorr}); this is likely dominated by the interpolation. The internal source data provide an estimate of the limiting performance of the instrument. For on-sky data, depending on observing conditions, we find higher levels of correlation between spectral channels, beyond the effect of interpolating the wavelength solution. In Figure \ref{fig:simcorr} we also compare the spectral channel correlations of the two on-sky datasets. In poor conditions (which also correspond to worse contrast sensitivity) we see a high amount of correlation across almost all spectral channels. This is likely the result of smearing of fringes due to vibration and/or non-static phase errors. We further discuss the differences between these data in Section \ref{sec:spect}.
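As a concrete illustration, the average correlation of Eqs. (\ref{eqn;corr})-(\ref{eqn:avgcorr}) can be computed directly from a cube of closure phases. The array shapes and synthetic data below are illustrative assumptions, not the GPI pipeline itself:

```python
import numpy as np

def avg_channel_correlation(psi, w1, w2):
    """Average correlation of closure phases between channels w1 and w2.

    psi : (n_frames, n_triplets, n_channels) closure phases in degrees.
    Computes the Pearson correlation per triplet (over frames), then
    averages over all closure triplets.
    """
    a = psi[:, :, w1] - psi[:, :, w1].mean(axis=0)
    b = psi[:, :, w2] - psi[:, :, w2].mean(axis=0)
    corr_q = (a * b).mean(axis=0) / (a.std(axis=0) * b.std(axis=0))
    return corr_q.mean()

# Toy cube: a term shared across channels (mimicking, e.g., interpolation
# residuals) plus independent per-channel noise gives partial correlation.
rng = np.random.default_rng(0)
shared = rng.normal(size=(50, 120, 1))
psi = 0.7 * shared + 0.3 * rng.normal(size=(50, 120, 37))
print(avg_channel_correlation(psi, 6, 7))
```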
\subsection{GPI+NRM single source contrast performance \label{sec:contrast}} \begin{figure*} \centering \includegraphics[scale=0.4]{fig6.pdf} \caption{A summary of the median binary contrast sensitivity between $50-300$ mas for 3 datasets: on-sky observations of HD 63852 during two different observing runs, and internal source exposures. The March observations showed significantly lower residual AO wavefront error, windspeed, and DIMM seeing. In better conditions we see both smaller phase errors and a sharper image, shown in the right inset plots. Table \ref{tab:obs} shows a more complete list of environmental measurements. Plotted are the contrast sensitivities obtained with 6, 7, 8, 9, and 10 frames for the on-sky datasets, and 6, 10, 15, and 27 frames for the internal source dataset. The contrast values are plotted against the total cumulative photon count in the corresponding frames based on a gain factor of 3.04. Dashed lines represent a $1/\sqrt{t}$ trend to compare with the measured contrasts. } \label{fig:mastercontrast} \end{figure*} In this section we discuss contrast sensitivity with respect to photon noise and varying conditions, and provide expected performance for future observations. In the best case, images taken with the internal source do not suffer from atmospheric aberration and represent a baseline for performance. We expect these data to be primarily limited by photon and detector noise. On-sky observations suffer from additional aberrations and smearing of the image depending on weather conditions. Observations of an unresolved single star at two different times with different seeing and wind conditions provide an example of how performance can vary with conditions. We observed the single star HD 63852 on two different nights in H band. As before, to obtain a proxy for calibrated contrast, we split each sequence of exposures in half and calibrate the first half against the second half.
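The split-half calibration proxy described above can be sketched as follows. The data shapes and the \texttt{split\_half\_calibrate} helper are hypothetical stand-ins for the actual pipeline, not its implementation:

```python
import numpy as np

def split_half_calibrate(cp):
    """Split a closure-phase sequence in half and calibrate the first half
    against the mean of the second half (hypothetical stand-in for a real
    target/calibrator pair).

    cp : (n_frames, n_triplets) closure phases in degrees.
    Returns the calibrated closure phases and their RMS scatter.
    """
    half = cp.shape[0] // 2
    target, cal = cp[:half], cp[half:2 * half]
    calibrated = target - cal.mean(axis=0)  # remove shared systematic phases
    return calibrated, np.sqrt((calibrated ** 2).mean())

# Toy data: a 2-degree quasi-static systematic shared by all frames plus
# 0.3-degree per-frame noise; the calibration removes the shared term.
rng = np.random.default_rng(1)
systematic = rng.normal(scale=2.0, size=120)
frames = systematic + rng.normal(scale=0.3, size=(10, 120))
calibrated, rms = split_half_calibrate(frames)
print(rms)
```

Because both halves share the quasi-static phase errors, the residual scatter reflects only the frame-to-frame noise, which is why this proxy represents an ideal calibration case.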
This likely overestimates the contrast sensitivity because it assumes no phase error differences between the target and calibrator. However, this exercise demonstrates trends in contrast performance with various environmental conditions and represents an ideal case. In a full science sequence one or more different unresolved sources will be used to calibrate the science target. Calibrators lie in different parts of the sky and the observations are separated in time by slewing and acquisition. This leads to imperfect correction of closure phase errors. In practice NRM contrast will be limited by a range of factors other than photon noise. Uncharacterized detector noise, vibrations, and imperfect AO correction that lead to smearing of fringes during an exposure can contribute to reduced contrast. To characterize the performance, for each set of observations we measure closure phases and their scatter with increasing photon count by analyzing partial datasets, adding consecutive exposures to increase the total counts. In Fig. \ref{fig:mastercontrast} we display the measured binary detection sensitivity against photons collected (detector counts divided by the recorded gain factor). We compare the measured contrast with a $1/\sqrt{t}$ trend and see some deviations that indicate other systematic errors in closure phase. All dataset contrasts improve with increased exposure time, but the on-sky observations are not photon noise limited. The dominant error source in this case is likely time-varying aberrations and vibrations that reduce fringe visibility (smear out the PSF), resulting from a range of weather conditions that control the atmospheric turbulence timescale. Systematic phase errors are known to limit performance in aperture masking data \citep{lacour2011}. The first flux ratio minimum (H band) is at $40~\mathrm{mas}$. To compare, we report the average contrast measured between 100 and 300 mas for each dataset.
For images taken with the internal source, contrast improves with increased exposure time following the photon noise limit $\sim\sqrt{N_{phot}}$. In a range of sky conditions, we see that other effects limit contrast. In very good conditions we find contrast sensitivity at SNR=5 close to $\Delta \mathrm{mag}$=7.5 at separations greater than $40~\mathrm{mas}$. We find that in conditions with higher wind and low-level turbulence we measure contrast sensitivity reduced by an order of magnitude for the same bright source. These conditions generally correspond to Gemini Observatory \textit{IQANY} conditions with high wind. \begin{figure} \centering \includegraphics[width=3.4in]{fig7.pdf} \caption{Phase errors compared to residual AO wavefront error (nm) for the three sets of observations compared in Figure \ref{fig:mastercontrast}. The left panel shows the error in each closure phase for every wavelength channel, while the right panel shows the RMS wavefront error measured from the wavefront sensor compared to the average closure phase error. Errorbars show the full range of closure phase errors measured for the dataset. } \label{fig:aowfe} \end{figure} With few datapoints it is challenging to conclusively identify the dominant effect reducing fringe contrast, but there are a few obvious correlations. We note that residual AO wavefront error is a good predictor of point source contrast. In Figure \ref{fig:aowfe} we show closure phase error as a function of the wavefront error value reported in the data headers. The cyclical nature of the phase errors follows a rough scaling with wavelength \citep[also shown in ][]{greenbaum2014spie}. We also see a correlation with wind speed; however, the wind speed recorded in the header refers to surface-layer wind and does not provide any information on wind speed at other levels of the atmosphere.
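A quick way to check whether a contrast curve is photon-noise limited is to fit the log-log slope of contrast versus cumulative photon count and compare it with the expected $-1/2$. The counts and contrast values below are illustrative, not measured GPI numbers:

```python
import numpy as np

# Toy contrast curve: if performance is photon-noise limited, the 5-sigma
# contrast should scale as 1/sqrt(cumulative photon count).
photons = np.array([1e7, 2e7, 4e7, 8e7])
contrast = np.array([4.0e-3, 2.9e-3, 2.0e-3, 1.45e-3])

# Power-law fit in log space; a slope near -0.5 indicates photon-limited
# performance, while a shallower slope points to a systematic floor.
slope, log_amp = np.polyfit(np.log(photons), np.log(contrast), 1)
print(slope)
```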
\cite{madurowicz2018} show that the wind butterfly aberration seen by GPI's coronagraph~\citep{poyneer2016} most strongly correlates with wind at high altitudes. It is possible the higher altitude wind was also present during these observations, or that the ground-layer wind correlates with short characteristic timescales of atmospheric seeing, also shown to have a strong effect on GPI performance \citep{bailey2016}. On-sky observations of fainter targets not only collect fewer photons, but also contain more PSF jitter due to uncorrected wavefront errors and/or uncorrected tip/tilt. This has the additional effect of blurring the image and reducing fringe contrast, and is strongest in poor conditions and especially high winds. \subsection{Resolving close binary HR 2690} \label{sec:hr2690} For basic validation of using the NRM to resolve point sources and obtain precise astrometry, we observed the known binary HR 2690 during early commissioning of GPI. The primary HR 2690A is classified as a B3 star~\citep{buscombe1969}. The contrast ratio of the companion has typically been measured at $\Delta\mathrm{mag}\sim2$ at 0.543~$\mu$m~\citep{mason1997}. We observed the binary in the sequence Target-Calibrator-Calibrator. We measure a contrast sensitivity of $\sim 5 \times 10^{-3}$ by calibrating our two single stars against each other. We easily recover the binary in H band and measure a primary to secondary flux ratio of $5.7 \pm 0.05$ ($\Delta \mathrm{mag}\sim1.89$), a separation of $89.15 \pm 0.12$ mas, and a position angle of $192.29 \pm 0.14^\circ$, after adding GPI plate scale and PA errors in quadrature. We find a slight spread in results depending on whether one or both calibrators are used, though within the errors. HR 2690 B was first resolved by \cite{mason1997} with speckle imaging.
These observations were followed up several times over the next 19 years~\citep{hartkopf2012, tokovinin2014, tokovinin2015, tokovinin2016}, all using speckle interferometry. We show the current astrometric positions relative to the primary, including the GPI epoch, in Figure \ref{fig:hr2690astrometry}. The GPI astrometry appears to be consistent with previous measurements. Small discrepancies in astrometry could point to a mismatch in absolute calibration. Following the procedure outlined in \S\ref{sec:spect_procedure}, we fit astrometry and contrast in each wavelength channel. We find a fairly flat contrast spectrum over H band at $\Delta mag\sim1.89$, which matches the reported $\Delta mag$ (Str\"{o}mgren y filter at 0.543$\mu$m) from most of the previous studies~\citep{mason1997, tokovinin2014, tokovinin2015, tokovinin2016}. \cite{hartkopf2012} report $\Delta mag_y$=3.2, which is inconsistent with all other measurements. The similar flux ratios seen at visible and near-IR wavelengths indicate that the companion is also a hot star, probably late-B type given these contrast ratios. \begin{figure} \centering \includegraphics[width=3.3in]{fig8.pdf} \caption{HR 2690 B recovery. \textbf{Top}: Spectrum (contrast) of the HR 2690 companion measured as the ratio of the secondary to the primary. \textbf{Bottom}: Astrometry of HR 2690 including our GPI epoch. The yellow star marks the position of HR 2690 A.} \label{fig:hr2690astrometry} \end{figure} As an independent check on our errorbars, we recover simulated signals using the two calibration sources. We inject a signal into one of the calibration sources, HR 2839, and use the other, HR 2716, as the sole PSF calibrator. We simulate 10 datasets at different position angles near the recovered separation, with the contrast ratio spectrum extracted from the HR 2690 binary. We follow the complete extraction procedure for each simulated dataset and compute the average and standard deviation.
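The injection-recovery bookkeeping can be sketched as below. Here \texttt{extract\_binary} is a hypothetical stand-in for the real closure-phase extraction, with noise and bias levels chosen only to mimic the behavior described in this section:

```python
import numpy as np

rng = np.random.default_rng(2)

def extract_binary(sep_in, pa_in, contrast_in):
    """Stand-in extraction: true parameters plus calibration noise and a
    small multiplicative bias in contrast (mimicking a ~2-3% bias).
    Not the real pipeline fit."""
    sep = sep_in + rng.normal(scale=0.4)             # mas
    pa = pa_in + rng.normal(scale=0.4)               # deg
    contrast = 0.975 * contrast_in * (1 + rng.normal(scale=0.02))
    return sep, pa, contrast

# Inject at 10 position angles near the recovered separation, then take
# the mean (bias) and scatter (error estimate) of the recovered values.
pas = np.linspace(0.0, 324.0, 10)
results = np.array([extract_binary(89.15, pa, 0.175) for pa in pas])
sep_bias = results[:, 0].mean() - 89.15
sep_err = results[:, 0].std()
contrast_bias = results[:, 2].mean() / 0.175 - 1.0   # fractional bias
print(sep_bias, sep_err, contrast_bias)
```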
The errors computed by this approach, shown in Figure \ref{fig:hr2690injection} (top), are consistent with the $1\sigma$ errorbars computed in the original extraction. In this case there is also a slight bias in the recovered spectra to lower flux ratio, a factor of $\sim2-3\%$. For the position we compute slightly higher errors of $0.4~\mathrm{mas}$ and $0.4^\circ$ for separation and PA, respectively. The PA shows no strong bias, but the average recovered separation deviates from the input separation by approximately $0.4$~mas. \begin{figure} \centering \includegraphics[width=3.3in]{fig9.pdf} \caption{Injection recovery results when \textbf{Top}: simulated signals are injected into one calibration source and the data are calibrated using the second PSF calibrator, and \textbf{Bottom}: simulated signals are injected into one calibration source, and the same original source is used as the calibrator in the analysis. This simulates the case when only one usable calibration source is available for injection recovery analysis. In the first case the error in the recovered spectrum is consistent with $1\sigma$ errors computed from the MCMC analysis for the spectrum, but there is a slight bias in the recovered contrast to smaller flux ratio. In the second case the errors are underestimated; the bias is consistent in both cases.} \label{fig:hr2690injection} \end{figure} In some cases, only one PSF calibrator may be available, so it is not possible to simulate a dataset that accounts for phase errors between sources. To highlight the difference, we repeat the injection recovery simulation by calibrating the simulated binary from HR 2839 data with the original HR 2839 data. As expected, the recovery errors are underestimated. The contrast ratio spectrum recovered in this simulation is shown in Figure \ref{fig:hr2690injection}. Interestingly, both the 2-calibrator simulation and this 1-calibrator simulation show the same ``bias'' in the recovered spectrum (shifted by $2-3\%$).
In the case that only one calibration source is available, injection recovery can be used to measure a systematic offset in the parameters. Using errors computed through injection recovery and multiplying by the computed bias term, we show the final spectrum and astrometry of our GPI epoch of observation in Figure \ref{fig:hr2690astrometry}. The flat spectrum over this short range is consistent with a late B-type companion. GPI NRM relative astrometry measurements are consistent with other high resolution observations and can reach precision of $\sim0.5~\mathrm{mas}$ in separation and $\sim0.5^\circ$ in PA. \subsection{Resolving the M dwarf companion inside the transitional disk of HD 142527} \label{sec:142527} \begin{figure*}[t] \centering \includegraphics[width=5in]{fig10.pdf} \caption{Simultaneous recovery of relative position and contrast ratio spectrum for three cases that use all 9 frames of data. The blue curves denote results obtained taking the average observables over all 9 frames, considering the average baseline (average parallactic angle) over the observing sequence. The orange and green curves represent results when splitting the data into two and three parts, respectively, and combining/stacking those datasets, thus accounting for sky rotation over the observing sequence. The results were computed by adding $0.5^\circ$ additional phase error in quadrature to the closure phase observables. \textbf{Top:} Posteriors for position parameters in each case. Errorbars reported are $1\sigma$ (not including GPI astrometry errors). All approaches, using the same updated calibration, favor a smaller separation and a discrepant PA relative to the initial data reduction. The split cases, while producing tighter errorbars, are not consistent with each other, and lead to a poorer fit of the data. \textbf{Bottom}: The resulting contrast spectrum is consistent for each approach. Individual data points are slightly offset to display relative errorbars.
Comparison of reduced $\chi^2$ shows that using the average of all the data provides the best fit in this case.} \label{fig:hd142527} \end{figure*} To demonstrate GPI NRM performance for detection and characterization of faint companions at small angular separations, we observed the transitional disk-hosting close binary system HD 142527. These data were first presented in \cite{lacour2016}. We present a new analysis here with more detail and compare the new spectrum in J band to photometry and spectroscopy from other instruments. Since the second calibration source was determined to be a close binary~\citep{lebouquin2014}, we only have one calibration source available for this analysis and the complete injection recovery approach to estimating errorbars is not possible. We perform the injection recovery to reveal any extraction biases, relying on $1\sigma$ errorbars computed by the MCMC reduction algorithm, which we have shown to be consistent with injection recovery errors in the previous example. Extraction biases are $\sim5\%$, which we apply to the resulting spectrum in \S\ref{sec:hd142spectrum}. \subsubsection{Recovering parameters \label{sec:hd142params}} We first determine the position and average contrast ratio. We follow a similar procedure to that reported in \cite{lacour2016}, using all frames except two where the AO system lost lock on the star and the images are noticeably blurred. We use an average sky rotation and consider the whole dataset at this common parallactic angle. This position is in agreement with previously measured astrometry~\citep{biller2012, close2014, lacour2016}. We measure an average contrast ratio of $\sim70$ between the primary and the secondary, which is consistent with measurements in J and H bands~\citep{lacour2016} with NACO sparse aperture masking.
Next we compute the full set of parameters, the contrast ratio for each wavelength channel and the position, as outlined in \S\ref{sec:spect_procedure}, adding $0.5^\circ$ additional closure phase error in quadrature. We obtain a projected separation of $75.76 \pm0.54~$mas and PA of $116.43 \pm0.44^\circ$ ($\chi^2=1.07$). \begin{figure*} \centering \includegraphics[width=5in]{fig11.pdf} \caption{HD 142527 B spectrum converted to flux based on the host star photometry (blue diamonds) from \cite{lacour2016}. Blue lines represent model spectra described in \cite{lacour2016} and \cite{christiaens2018} with $T=3500\mathrm{K}$, $log(g)=4.5$, alone (solid) and with a 1700K environment (dashed), assuming a distance of 140pc. Our spectrum of HD 142527 B is consistent with this previously measured photometry, which is discrepant from new VLT/SINFONI H+K spectroscopy~\citep{christiaens2018}, displayed as black dots.} \label{fig:hd142527_context} \end{figure*} Since there is a significant amount of sky rotation (11.4$^\circ$) over the course of the HD 142527 integrations, we explored the effect of splitting the dataset into two groups of four and five exposures, and into three groups of three exposures, combining the rotated baselines in the analysis. We refer to this as the ``split and combine'' method. In this case the contrast ratio between HD 142527 A and B is slightly higher. The parameter errorbars are also slightly smaller due to the larger number of observables. We recover slightly smaller separations of $74.83 \pm0.35~$mas and $74.87\pm0.37~$mas and discrepant PAs of $115.8 \pm0.29^\circ$ and $116.8 \pm0.26^\circ$ for the split-in-two ($\chi^2=1.25$) and split-in-three ($\chi^2=1.19$) cases. Next we use the three-parameter analysis results as a starting guess to simultaneously fit for the position and a contrast ratio in each spectral channel for each of the three reduced datasets: the average of all frames, and the data split and combined by two and by three.
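A minimal sketch of this simultaneous fit follows: a shared separation and PA plus one contrast per channel enter a single log-probability against the closure phases, which in the real analysis would be sampled with \pkg{emcee}. The binary closure-phase model, baselines, and wavelength grid below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: shared (separation, PA) plus a contrast per channel,
# scored against closure phases. In the real analysis this log-probability
# would be passed to emcee.EnsembleSampler; data here are synthetic.
MAS = np.pi / 180.0 / 3600.0 / 1000.0  # milliarcseconds -> radians

def binary_closure_phases(uv, wavelengths, sep, pa, contrasts):
    """Model closure phases (deg) of a binary for each closure triangle.

    uv : (n_cp, 3, 2) baseline coordinates in meters, the three baselines
         of each triangle summing to zero.
    """
    dalpha = sep * MAS * np.sin(np.radians(pa))
    dbeta = sep * MAS * np.cos(np.radians(pa))
    u = uv[..., 0][..., None] / wavelengths          # (n_cp, 3, n_wav)
    v = uv[..., 1][..., None] / wavelengths
    vis = 1 + contrasts * np.exp(-2j * np.pi * (u * dalpha + v * dbeta))
    return np.degrees(np.angle(vis)).sum(axis=1)     # close each triangle

def log_prob(theta, uv, wavelengths, data, err):
    """Gaussian log-likelihood with flat priors on physical parameters."""
    sep, pa, contrasts = theta[0], theta[1], theta[2:]
    if sep <= 0 or np.any(contrasts < 0) or np.any(contrasts > 1):
        return -np.inf
    model = binary_closure_phases(uv, wavelengths, sep, pa, contrasts)
    return -0.5 * np.sum(((data - model) / err) ** 2)

# Toy setup: 20 closing triangles, 37 J-band channels, a contrast of 1/70
rng = np.random.default_rng(3)
b1 = rng.uniform(-5, 5, size=(20, 2))
b2 = rng.uniform(-5, 5, size=(20, 2))
uv = np.stack([b1, b2, -(b1 + b2)], axis=1)
wavelengths = np.linspace(1.1e-6, 1.35e-6, 37)
truth = np.concatenate([[75.76, 116.43], np.full(37, 1 / 70.0)])
data = binary_closure_phases(uv, wavelengths, truth[0], truth[1], truth[2:])
print(log_prob(truth, uv, wavelengths, data, 0.5))
```

In the real fit this log-probability would be handed to 150 walkers with a long burn-in, as described for the simulated binary.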
The comparison is shown in Figure \ref{fig:hd142527}. The two split datasets still produce a smaller separation and discrepant PAs. However, all reduced datasets produce a consistent spectrum. A known degeneracy between separation and contrast could be the cause of a smaller recovered separation, but the discrepancy in PA is likely due to poor data quality, since the results depend on how the data are combined. The small number of total frames makes this approach challenging. While there is some variation in the position parameters, there is not a large difference in the spectrum of each reduction within the errorbars. We adopt the solution with the lowest error between the data and model (lowest $\chi^2$). Obtaining reliable astrometry may require more integrations in order to average out poor quality data and get a cleaner picture of the true astrophysical structure. \subsubsection{The HD 142527 B spectrum \label{sec:hd142spectrum}} H$\alpha$ was previously detected at visible wavelengths~\citep{close2014}; however, given our low resolution spectrum, our errors are too large to see the Pa$\beta$ signal expected from the accretion luminosity reported in either \cite{close2014} (1.3\%~$L_\odot$) or \cite{christiaens2018} (2.6\%~$L_\odot$). The expected line luminosity is approximately an order of magnitude smaller than our errorbars, according to the relation, \begin{eqnarray} \log{(L_{acc})} &=& B + A \times \log{(L_{line})} \\ A_{Pa\beta} &=& 1.36,\quad B_{Pa\beta}=4.00 \end{eqnarray} as described in \cite{natta2004, rigliaco2012}. We correct our recovered spectrum with the $\sim5\%$ extraction bias factors computed from injection recovery in the calibration source dataset. The recovered spectrum is consistent with the broadband photometry previously measured for HD 142527 B~\citep{biller2012,lacour2016,close2014}. Figure \ref{fig:hd142527_context} shows our J band spectrum next to published photometry (blue diamonds).
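For reference, inverting the log-log form of this relation for the two reported accretion luminosities gives the implied Pa$\beta$ line luminosities (a simple numerical check, with all luminosities in solar units):

```python
import math

# Invert log(L_acc) = B + A * log(L_line) for Pa-beta,
# with A = 1.36, B = 4.00 and luminosities in units of L_sun.
A, B = 1.36, 4.00

def pabeta_line_luminosity(l_acc):
    """L_line (L_sun) implied by an accretion luminosity L_acc (L_sun)."""
    return 10 ** ((math.log10(l_acc) - B) / A)

l_line_close = pabeta_line_luminosity(0.013)  # 1.3% L_sun accretion case
l_line_chris = pabeta_line_luminosity(0.026)  # 2.6% L_sun accretion case
print(l_line_close, l_line_chris)
```

Both implied line luminosities are of order a few $\times10^{-5}~L_\odot$, well below the sensitivity of the low resolution spectrum.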
We overplot a $T_{eff}=3500\mathrm{K}$ model alone and one with a 1700K environment (similar to the models described in \cite{lacour2016} and \cite{christiaens2018}), assuming a distance of 140~pc to be consistent with \cite{lacour2016}. Our results are consistent with the aperture masking detections. We also plot the higher resolution VLT/SINFONI H+K spectra from \cite{christiaens2018} (black dots), and note the flux discrepancy. The discrepancy with \cite{christiaens2018} is most likely a systematic error in one or both of the analyses. The presence of bright extended structures could bias the recovery of the secondary point source position and flux, but a point source was also detected in direct imaging~\citep{close2014}. Our results, taken independently, support previous aperture masking measurements, and we have demonstrated that our analysis procedure yields a reliable measurement of the spectrum in simulations. Alternatively, it is possible that inaccurate calibration of the SINFONI data in post-processing could yield this discrepancy. The stellar spectrum models described in the two studies assumed different distances for HD 142527 B, $140\pm20~$pc~\citep{lacour2016} and $156\pm6~$pc~\citep{christiaens2018}, the latter resulting from the parallax measured with Gaia \citep{gaiacollab2016}. We note the coincidence that the flux discrepancy is close to the scaling factor between these distances ($156^2/140^2$). If the deeper contrast measured from this and other aperture masking observations is correct, this may imply a lower effective temperature or a different circumbinary environment. The small separation of HD 142527 B makes non-coronagraphic, full pupil images challenging to reduce. \section{Polarimetric mode \& visibility precision} \label{sec:pol} Reliable visibility amplitudes are challenging to measure from the ground, even behind an extreme-AO system.
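This scaling factor is simple to verify, since the flux of a model at fixed luminosity falls off as the inverse square of distance:

```python
# Rescaling the same model spectrum from 140 pc to 156 pc changes the
# predicted flux by the inverse-square distance ratio:
scale = 156.0 ** 2 / 140.0 ** 2
print(round(scale, 3))
```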
Small temporal phase changes and vibrations smear fringes over individual integrations and artificially reduce amplitudes. Differential polarimetry enables self-calibrated amplitudes under the assumption that orthogonal polarization channels and rotated half-waveplate angles suffer the same systematic errors. These systematics can therefore be calibrated out to reveal polarized structure. In this section we follow the polarimetry+NRM procedure outlined in Sec \ref{sec:observables} and report the performance of the NRM in polarimetric mode. In commissioning the polarimetric mode we focused on single, unresolved calibration stars. The differential visibility signal is expected to be unpolarized and should show constant $\mathcal{V}_{diff}=1$ and $\phi_{CP} = 0$ at all orientations. The deviation from the expected signal and the scatter provide an estimate of both instrumental systematics and the stability of the measurements. During commissioning observations in May 2015, when we experienced large vibrations, differential visibilities had very large errorbars and residual systematic scatter around $\mathcal{V}_{diff}=1$. Vibrations were exacerbated by high winds during the May 2015 NRM commissioning. In the May 2016 commissioning, after a major source of vibration was fixed, we found that differential visibilities calibrated to within $1\%$ of $\mathcal{V}=1$, with $\sigma\sim0.4\%$ in the best case shown here. Closure phases calibrated to within $\sim1-2^\circ$ for bright sources, with $\sigma\sim0.4^\circ$ in the best case. For example, Figure \ref{fig:polperformance} shows the measured differential visibilities for the single source HIP 74604 from data taken in GPI's K1 band. This represents the best performance we achieved during commissioning, which is similar to the $\sigma\sim0.4\%$ performance achieved with the VAMPIRES polarimetry mode at visible wavelengths \citep{norris2015}.
\begin{figure} \centering \includegraphics[width=3.3in]{fig12a.pdf} \includegraphics[width=3.3in]{fig12b.pdf} \caption{Differential visibilities (top) and differential closure phases (bottom) for a representative polarimetric dataset of a single, unpolarized star. Marker size scales with baseline length. We plot the differential visibility against baseline azimuthal orientation.} \label{fig:polperformance} \end{figure} We explored the expected differential polarimetry signal of a protoplanetary disk by simulating the instrument response for a synthetic disk produced with MCFOST \citep{pinte2006,pinte2009} and reducing the simulated data through our pipeline. Within a limited set of tests attempting to simulate a relatively large signal, we were not able to simulate a detectable disk at the level of noise we measure from our best on-sky data without artificially dialing down the flux from the star by a factor of a few. As an example (described in detail in Appendix \ref{sec:polsim}), we simulated data based on a model of HD 97048~\citep{lagage2006, doucet2007}, a young Herbig Ae star with a strong IR excess, $L_{IR}\sim0.4L_\odot$ \citep{vankerckhoven2002}. We physically scale the disk image so that the inner edge of the disk is located $<100~$mas from the central star. In this example, the flux integrated into one GPI pixel ($14.1~$mas) of the brightest part of the disk inner edge is still $\sim7~$mag fainter than the host star (see Figure \ref{fig:mcfost} in Appendix \ref{sec:polsim}). \section{Discussion} \label{sec:discussion} GPI's non-redundant mask mode shows performance generally comparable to prior aperture masking~\citep[e.g.][]{lacour2011} and earlier IFS aperture masking~\citep{zimmerman2012} experiments, and very good performance in good conditions that correspond to low residual WFE measured by the AO system.
As \cite{zimmerman2012} showed for the P1640 instrument, the IFS spectral axis provides improved overall contrast compared to broadband aperture masking and also smooths out baselines with lower sensitivity. This allows GPI NRM to reach contrasts close to $10^{-3}$ on bright targets and better than $10^{-2}$ on long individual integrations ($\sim$20-60 seconds). GPI's NRM achieves similar performance at $\sim\lambda/D$ in J and H bands as NACO SAM L$'$ imaging of similar total integration time, which achieved contrast limits of $2.5\times10^{-3}$ \citep{lacour2011}. Deeper NACO L$'$ imaging \citep{gauchet2016} exceeds this sensitivity, especially for bright sources. GPI's 10-hole mask, while reducing throughput compared to other masks with fewer holes, provides fairly even coverage of spatial frequencies. We find that, in addition to helping constrain the average contrast measurement, we can reliably fit a spectrum to moderate-contrast sources, with improved overall contrast. We have presented new spectra of HR 2690 B in H band and HD 142527 B in J band that are consistent with previous photometry for both sources. A flux discrepancy remains between aperture masking observations of HD 142527 B and VLT/SINFONI spectra in H and K bands \citep{christiaens2018}. Future observations may help resolve this discrepancy. GPI's IFS mode combined with the NRM is particularly powerful for obtaining precise ($\sim \mathrm{few~mas}$) astrometry of companions around bright host stars that are separated by $< 100$ mas, where methods like Angular Differential Imaging~\citep{marois2006} suffer. The ability to resolve the relative astrometry and spectro-photometry of close binaries is a valuable tool for studying stellar multiplicity and calibrating evolutionary models as a function of mass and age.
In addition, determining the mass and SED of both components of binary members of a moving group can constrain the age of the group as a whole, especially if a pre-main-sequence star is moving along the Henyey track (e.g. Nielsen et al. 2017). The best targets for this technique have short orbital periods to allow for quick characterization and large radial velocity signals, which in nearby moving groups means projected separations of $\sim$40 mas. Typical contrasts can reach masses of $\sim50M_{jup}$ for a very young bright target, as we have shown in Figure 2, considering the AMES-Cond models (Baraffe et al. 2003) for a 1~Myr, 6.5~mag primary at 140~pc. In a single observing sequence, we obtained target integrations with minimal sky rotation, where possible, to provide multiple independent measurements along the same sky-projected baselines. In cases with a larger amount of sky rotation, we explored splitting up datasets to account for the rotation and take advantage of the increased Fourier coverage. While this can reduce the error on the fit, for the very small number of frames we obtained it produced discrepant results depending on how the data were split, sensitive to variations between frames. Ultimately, averaging observables over all frames produced a better-fitting model for the HD 142527 dataset. All approaches yielded consistent spectra, but showed some variation in the relative position. With observations covering even greater sky rotation, a split and combine approach will likely be necessary and should be robust if the uncertainties on the observables can be estimated (e.g., by collecting a sufficient number of frames for each sky position). Polarization mode observations rely on measuring stable amplitudes, which are degraded by vibrations and poor wavefront correction. We saw improvement in precision after major sources of vibration were corrected: a faulty M2 mirror actuator was fixed and active dampers were installed.
In the most recent observations, we measured precision of $\sigma\sim0.4\%$ in differential visibilities and $\sim0.4^\circ$ in differential closure phases in the best case. However, our limited observations make it difficult to characterize the typical polarimetric mode performance with the NRM on GPI. Initial attempts to simulate NRM images of a model protoplanetary disk did not yield a detectable signal; a disk will need to be relatively bright to be detected. Compared with the VAMPIRES instrument \citep{norris2015}, we reach similar performance in our best dataset, taken in K1 band. Typical VAMPIRES performance is likely better, and their three-tier calibration (compared to GPI's two-tier calibration described in \S\ref{sec:observables}) makes that system more robust to systematic errors. At this level of precision, young circumstellar disks may be a significant challenge to detect with differential polarimetry on GPI, compared to sources previously detected by this method with larger polarization signals~\citep[e.g.,][]{norris2012}. In the case of a resolved signal with NRM+polarimetry, modeling is an essential component for recovering and interpreting the disk structure. Studying suspected polarized extended structures with NRM should be limited to the best conditions (low residual wavefront error, low wind). Future upgrades or instruments that mitigate vibrations and tip/tilt errors in non-coronagraphic modes could make better use of polarimetry with NRM for studying circumstellar disks. \section{Summary and Conclusions} \label{sec:summary} We have outlined the overall performance of the GPI NRM in IFS and polarimetric modes with a few example datasets. We have also described open source software to reduce NRM fringes from GPI and other instruments and demonstrated results on various datasets. Future observations with the NRM on IFS instruments like GPI can use this study as a guideline for observing in these modes.
We also provide the following major takeaways for planning observations with GPI's NRM: \begin{itemize} \item AO residual wavefront error correlates with NRM contrast performance (Fig. \ref{fig:mastercontrast}). The AOWFE header keyword is a good metric of conditions for NRM performance, given the ``long" integration times. \item GPI NRM is suitable for moderate contrasts between $10^2 - 10^3$ down to separations of $\sim30\,\mathrm{mas}$, with degraded performance closer in. \item Ten holes provide good uv coverage, minimizing gaps in sampling sensitivity, but at the cost of lower throughput. \item Polarization observations should be taken in conditions that minimize AO residual wavefront error and when vibrations can be minimized. Polarization observations should target objects with differential polarimetry signals $\gtrsim 1\%$. \end{itemize} This study can provide a comparison with other instruments using single-pupil interferometric methods (i.e., NRM, kernel phase~\citep{martinache2010}). Further improvements to the analyses presented in this work could be made by analyzing statistically independent kernel phases \citep[i.e.,][]{ireland2013} or with more sophisticated modeling and treatment of errors. The richness of the IFS datasets allows for varied approaches to treating and analyzing the data. Ground-based NRM on instruments like GPI complements the capabilities of upcoming NIRISS aperture masking on JWST. The obvious advantage of NRM on ground-based facilities like GPI is the larger telescope size, which enables sensitivity at higher resolution, down to tens of milliarcseconds. On the other hand, interferometric observations on a stable space telescope like JWST will carve out a different discovery space. The data will likely be photon-noise limited for bright sources, allowing at least an order of magnitude improved contrast compared to the ground.
Space-based interferometric observations will also be able to complement ground-based AO observations by observing sources too faint for visible wavefront sensors. Together with other high contrast and high resolution instruments, IFS aperture masking observations help to expand the detection landscape for direct imaging. \acknowledgments The authors thank Valentin Christiaens for sharing their VLT/SINFONI data and Neil Zimmerman for useful discussions. We thank the anonymous reviewer for helpful comments that improved the clarity of this paper. This research has made use of the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/) supported from the Spanish MINECO through grant AyA2014-55216. This work is based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil). Work from A.Z.G. was supported in part by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1232825. A.Z.G. and A.S. acknowledge support from NASA grant APRA08-0117 and the STScI Director's Discretionary Research Fund. The research was supported by NSF grant AST-1411868 and NASA grant NNX14AJ80G (J.-B.R.). P.K., J.R.G., R.J.D., and J.W. acknowledge support from NSF AST-1518332, NASA NNX15AC89G and NNX15AD95G/NEXSS. This work benefited from NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate. KMM's work is supported by the NASA Exoplanets Research Program (XRP) by cooperative agreement NNX16AD44G. Portions of this work were performed under the auspices of the U.S.
Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. \software{Astropy \citep{astropy2013}, Numpy \citep{Numpy2011}, Scipy \citep{scipy2001}, oifits \footnote{https://github.com/pboley/oifits}, pysynphot \citep{pysynphot2013}, emcee \citep{dfm2013soft, dfm2013}} \facility{Gemini South}.
\subsection{#1}\setcounter{theorem}{0} \setcounter{equation}{0} \par\noindent} \renewcommand{\theequation}{\arabic{subsection}.\arabic{equation}} \renewcommand{\thesubsection}{\arabic{subsection}} \newtheorem{theorem}{Theorem} \renewcommand{\thetheorem}{\arabic{subsection}.\arabic{theorem}} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corr}[theorem]{Corollary} \newtheorem{prop}[theorem]{Prop} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{deff}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newcommand{\begin{theorem}}{\begin{theorem}} \newcommand{\begin{lemma}}{\begin{lemma}} \newcommand{\begin{corr}}{\begin{corr}} \newcommand{{L^2({\Bbb R}^3)}}{{L^2({\Bbb R}^3)}} \newcommand{{L^2({\mathbb{R}}^3\backslash\mathcal{K})}}{{L^2({\mathbb{R}}^3\backslash\mathcal{K})}} \newcommand{\begin{deff}}{\begin{deff}} \newcommand{{L^\infty({\Bbb R}^3)}}{{L^\infty({\Bbb R}^3)}} \newcommand{{L^{\infty}({\mathbb{R}}^3 \backslash \mathcal{K})}}{{L^{\infty}({\mathbb{R}}^3 \backslash \mathcal{K})}} \newcommand{\begin{proposition}}{\begin{proposition}} \newcommand{\end{lemma}}{\end{lemma}} \newcommand{\end{corr}}{\end{corr}} \newcommand{\end{deff}}{\end{deff}} \newcommand{i}{i} \newcommand{\end{proposition}}{\end{proposition}} \newcommand{{R_{\lambda}^{\nu}}}{{R_{\lambda}^{\nu}}} \newcommand{{\tilde{R}_{\lambda}^{\nu}}}{{\tilde{R}_{\lambda}^{\nu}}} \newcommand{{T_{\lambda}^{\nu}}}{{T_{\lambda}^{\nu}}} \newcommand{{\tilde{T}_{\lambda}^{\nu}}}{{\tilde{T}_{\lambda}^{\nu}}} \newcommand{{S_{\lambda}^{\nu}}}{{S_{\lambda}^{\nu}}} \newcommand{{}^t\!\slnu}{{}^t\!{S_{\lambda}^{\nu}}} \newcommand{{m_{\lambda}^{\nu}}}{{m_{\lambda}^{\nu}}} \newcommand{\psi_{\lambda}^{\nu}}{\psi_{\lambda}^{\nu}} \newcommand{\xi_{\lambda}^{\nu}}{\xi_{\lambda}^{\nu}} \newcommand{N_{\lambda}^{\nu}}{N_{\lambda}^{\nu}} \newcommand{N_{\lambda}}{N_{\lambda}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{{\mathbb R}^n}{{\mathbb R}^n} \newcommand{{}}{{}} \newcommand{{\Bbb R}^{1+3}_+}{{\Bbb 
R}^{1+3}_+} \newcommand{\varepsilon}{\varepsilon} \newcommand{\varepsilon}{\varepsilon} \renewcommand{\l}{\lambda} \newcommand{{\text{\rm loc}}}{{\text{\rm loc}}} \newcommand{{\text{\rm comp}}}{{\text{\rm comp}}} \newcommand{C^\infty_0}{C^\infty_0} \newcommand{\text{supp}\ }{\text{supp}\ } \renewcommand{\Pi}{\varPi} \renewcommand{\Re}{\rm{Re} \,} \renewcommand{\Im}{\rm{Im} \,} \renewcommand{\epsilon}{\varepsilon} \newcommand{{\text {sgn}}}{{\text {sgn}}} \newcommand{\Gamma_{\text{mid}}}{\Gamma_{\text{mid}}} \newcommand{{\Bbb R}^3}{{\Bbb R}^3} \newcommand{{{\cal M}}^\alpha}{{{\mathcal M}}^\alpha} \newcommand{{{\rm dist}}}{{{\rm dist}}} \newcommand{{{\cal A}}_\delta}{{{\mathcal A}}_\delta} \newcommand{{\cal K}}{{\mathcal K}} \newcommand{\overline{\Bbb E}^{1+3}}{\overline{\Bbb E}^{1+3}} \newcommand{\overline{\Bbb E}^{1+3}_+}{\overline{\Bbb E}^{1+3}_+} \newcommand{{\Bbb E}^{1+3}}{{\Bbb E}^{1+3}} \newcommand{{\Bbb E}^{1+3}_+}{{\Bbb E}^{1+3}_+} \newcommand{{\cal P}}{{\mathcal P}} \newcommand{{\Bbb R}_+}{{\Bbb R}_+} \newcommand{\partial}{\partial} \newcommand{\tilde}{\tilde} \newcommand{\text{grad}\,}{\text{grad}\,} \newcommand{{\mathcal K}}{{\mathcal K}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{\mathrm g}}{{\mathrm g}} \newcommand{{\langle}}{{\langle}} \newcommand{{\rangle}}{{\rangle}} \newcommand{{\,\cdot\,}}{{\,\cdot\,}} \newcommand{{\partial\K}}{{\partial{\mathcal K}}} \newcommand{{\R^3\backslash\K}}{{{\mathbb R}^3\backslash{\mathcal K}}} \renewcommand{\S}{{\mathbb{S}}} \newcommand{\subheading}[1]{{\bf #1}} \begin{document} \title[Long-time existence of quasilinear wave equations] { The lifespan for 3-dimensional quasilinear wave equations in exterior domains } \author{John Helms} \author{Jason Metcalfe} \address{Department of Mathematics, University of North Carolina, Chapel Hill, NC 27599-3250} \email{johelms@email.unc.edu, metcalfe@email.unc.edu} \thanks{The authors were supported in part by NSF grant DMS-1054289. 
The second author was additionally supported by NSF grant DMS-0800678} \begin{abstract} This article focuses on long-time existence for quasilinear wave equations with small initial data in exterior domains. The nonlinearity is permitted to fully depend on the solution at the quadratic level, rather than just the first and second derivatives of the solution. The corresponding lifespan bound in the boundaryless case is due to Lindblad, and Du and Zhou first proved such long-time existence exterior to star-shaped obstacles. Here we relax the hypothesis on the geometry and only require that there is a sufficiently rapid decay of local energy for the linear homogeneous wave equation, which permits some domains that contain trapped rays. The key step is to prove useful energy estimates involving the scaling vector field for which the approach of the second author and Sogge provides guidance. \end{abstract} \maketitle \newsection{Introduction} In this article, a lower bound of $c/\varepsilon^2$, where $\varepsilon$ denotes the size of the Cauchy data, is established for the lifespan of solutions to quasilinear wave equations in exterior domains with Dirichlet boundary conditions. Here we examine nonlinearities which vanish to second order and may depend on the solution $u$, rather than just its derivatives, at the quadratic level. The lifespan bound that is established is an analog of that which was obtained in \cite{Lindblad} in the absence of a boundary. Similar lifespan bounds appeared in \cite{DZ} (and could also be obtained via the methods of \cite{DMSZ}) exterior to star-shaped obstacles. We permit much more general geometries and only require that there is a sufficiently rapid decay of local energy, with a possible loss of regularity. Due, e.g., to the seminal results \cite{Ikawa1, Ikawa2}, this includes some domains which contain trapped rays. Let us more specifically introduce the problem. 
Let ${\mathcal K}\subset {\mathbb R}^3$ be a bounded domain with smooth boundary. Note that we shall not assume that ${\mathcal K}$ is connected. We then examine the following quasilinear wave equation exterior to ${\mathcal K}$ \begin{equation} \label{main} \begin{cases} \Box u(t,x)=Q(u,u',u''),\quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\K},\\ u(t,{\,\cdot\,})|_{{\partial\K}}=0,\\ u(0,{\,\cdot\,})=f,\quad \partial_t u(0,{\,\cdot\,})=g. \end{cases} \end{equation} Here $\Box=\partial_t^2-\Delta$ is the d'Alembertian, and $u'=\partial u = (\partial_t u, \nabla_x u)$ denotes the space-time gradient. The nonlinear term vanishes quadratically at the origin and is affine linear in $u''$, thus yielding a quasilinear equation. Without loss of generality, we may take $0\in{\mathcal K}\subset\{|x|<1\}$, and we shall do this throughout. While we shall state the lifespan bound for the scalar equation \eqref{main}, the methods fully permit systems, including multiple speed systems. The nonlinearity $Q$ can be expanded as \[Q(u,u',u'')=A(u,u') + B^{\alpha\beta}(u,u')\partial_\alpha\partial_\beta u\] where $A(u,u')$ vanishes to second order at the origin and $B^{\alpha\beta}$ are functions which are symmetric in $\alpha,\beta$ and vanish to first order at $(0,0)$. Here we are using the summation convention where repeated indices are implicitly summed from $0$ to $3$, where $x_0=t$ and $\partial_0=\partial_t$. For simplicity of exposition, we shall truncate at the quadratic level. As we are examining a small amplitude problem, it is clear that this shall not affect the long-time behavior. Upon doing so, we may write \[Q(u,u',u'') = A(u,u') + b^{\alpha\beta} u \partial_\alpha\partial_\beta u + b^{\alpha\beta}_{\gamma} \partial_\gamma u \partial_\alpha\partial_\beta u\] where $b^{\alpha\beta}$ and $b^{\alpha\beta}_\gamma$ are constants which are symmetric in $\alpha, \beta$ and $A(u,u')$ is a quadratic form.
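To fix ideas, a simple example of an admissible nonlinearity, chosen here purely for illustration, is \[Q(u,u',u'') = u^2 + (\partial_t u)^2 + u\,\partial_t^2 u,\] which corresponds to taking $A(u,u') = u^2 + (\partial_t u)^2$, $b^{00}=1$, and all other coefficients zero. It is the quadratic dependence on $u$ itself, through the terms $u^2$ and $u\,\partial_t^2 u$, that distinguishes this example from nonlinearities depending only on $u'$ and $u''$.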
In order to solve \eqref{main}, the Cauchy data are required to satisfy certain compatibility conditions. Letting $J_ku=\{\partial_x^\alpha u\,:\,0\le |\alpha|\le k\}$, for a formal $H^m$ solution $u$ of \eqref{main}, we can write $\partial_t^k u(0,{\,\cdot\,})=\psi_k(J_kf, J_{k-1}g)$, $0\le k\le m$. The functions $\psi_k$, which depend on $Q$, are called compatibility functions. The compatibility condition for $(f,g)\in H^m\times H^{m-1}$ simply requires that $\psi_k$ vanishes on ${\partial\K}$ for all $0\le k\le m-1$. For smooth data, the compatibility conditions are said to hold to infinite order if this condition holds for all $m$. A more detailed exposition on compatibility can be found in, e.g., \cite{KSS2}. Our only assumption on the geometry of ${\mathcal K}$ is that the associated local energy for solutions to the linear homogeneous wave equation decays sufficiently rapidly. For a clearer exposition, we shall assume that there is an exponential decay of local energy with a loss of a single degree of regularity, though it will be clear from the proof that a sufficiently rapid polynomial decay with a fixed finite loss of regularity will suffice.\footnote{To date, the authors are not aware of examples of three dimensional domains for which there is polynomial decay but not exponential decay, though the recent article \cite{CW} provides the most compelling evidence to date that such domains can be constructed.} More specifically, we shall assume that if $\Box u=0$ and if $\text{supp}\ u(0,{\,\cdot\,}), \partial_t u(0,{\,\cdot\,})\subset \{|x|<10\}$, then \begin{equation}\label{decay} \|u'(t,{\,\cdot\,})\|_{L^2({\R^3\backslash\K}\cap \{|x|<10\})}\lesssim e^{-a t} \sum_{|\alpha|\le 1} \|\partial^\alpha_x u'(0,{\,\cdot\,})\|_2 \end{equation} for some $a>0$. The notation $A\lesssim B$ indicates that there is a positive unspecified constant $C$ (which may change from line to line) so that $A\le CB$. 
Moreover, this $C$ will implicitly be independent of any parameters in our problem. Local energy decay estimates such as \eqref{decay} have a long history, which we shall only tersely describe. For nontrapping domains, one need not have the loss of regularity which appears in the right. See \cite{LMP} for star-shaped obstacles and \cite{MRS} for nontrapping domains. When there are trapped rays, it is known \cite{Ralston} that an estimate such as \eqref{decay} cannot hold unless such a loss of regularity is permitted. Results in the positive direction in the presence of trapped rays begin with \cite{Ikawa1, Ikawa2} where such estimates are proved when ${\mathcal K}$ consists of multiple convex obstacles subject to certain size/spacing conditions. More recent results include \cite{BZ}, \cite{C}, \cite{CdVP}, \cite{NZ}, \cite{WZ}. We may now state our main theorem, which shows that for Cauchy data of size $\varepsilon$ solutions to \eqref{main} must exist up to a lifespan of $T_\varepsilon = c/\varepsilon^2$ for some small constant $c$. \begin{theorem}\label{thm1} Let ${\mathcal K}$ be a smooth, bounded set for which the exponential decay of local energy \eqref{decay} holds, and let $Q$ be as above. Suppose that the Cauchy data $f,g\in C^\infty({\R^3\backslash\K})$ are compactly supported and satisfy the compatibility conditions to infinite order. Then there exist constants $N$ and $c$ so that if $\varepsilon$ is sufficiently small and \begin{equation}\label{data} \sum_{|\mu|\le N} \|\partial_x^\mu f\|_{2} + \sum_{|\mu|\le N-1}\|\partial_x^\mu g\|_2 \le \varepsilon, \end{equation} then \eqref{main} has a unique solution $u\in C^\infty([0,T_\varepsilon]\times {\R^3\backslash\K})$ where \begin{equation} \label{lifespan} T_\varepsilon = c/\varepsilon^2. \end{equation} \end{theorem} For simplicity of exposition, we are assuming here that the Cauchy data are compactly supported. 
It is likely that it would suffice to take the data to be small in certain weighted Sobolev norms. On ${\mathbb R}_+\times{\mathbb R}^3$, this lifespan was first proved in \cite{Lindblad}. The dependence on the solution $u$ rather than just its first and second derivatives inhibits the energy methods which are typically employed to show such long time existence. Indeed, when the nonlinearity does not depend on $u$ at the quadratic level, solutions are known to exist almost globally, i.e. with a lifespan of $T_\varepsilon\approx \exp(c/\varepsilon)$. See \cite{JK}. The previous results \cite{KSS4, KSS, KSS2, KSS3}, \cite{MS3, MS, MS4}, \cite{MNS}, \cite{KK} have focused on proving long-time existence for three dimensional wave equations in exterior domains when there is no dependence on $u$, and \cite{MNS2} examines the case that there is dependence at the cubic level and beyond. Of particular note is the paper \cite{MS3} where long-time existence results were first established only assuming \eqref{decay}, and in particular, in domains which have trapped rays. The current direction of research was initiated with the paper \cite{DZ} where the same lower bound on the lifespan was shown exterior to star-shaped obstacles. A similar four dimensional problem was addressed in \cite{DMSZ}, and the techniques therein could also be applied to the exterior of three dimensional star-shaped obstacles. The current article extends these results to much more general geometries. The method of proof shall utilize Klainerman's method of invariant vector fields \cite{Klainerman}. This has been adapted to the exterior domain setting with particularly notable contributions coming in \cite{KSS, KSS3} and \cite{MS3}. To this end, we set \[Z=\{\partial_\alpha, \Omega_{ij}=x_i\partial_j-x_j\partial_i\,:\, 0\le \alpha\le 3,\, 1\le i<j\le 3\}\] to be the generators of space-time translations and spatial rotations. 
We shall also denote $L=t\partial_t+r\partial_r$, which is the scaling vector field. A key fact is that \[[\Box,Z]=0,\quad [\Box, L] = 2\Box.\] These vector fields thus preserve $\Box u = 0$, in the sense that if $u$ solves such an equation then $\Box Zu=0$ and $\Box Lu=0$. The Lorentz boosts $\Omega_{0k}=t\partial_k + x_k\partial_t$ have not yet been mentioned despite playing a key role when the method is applied on ${\mathbb R}_+\times {\mathbb R}^3$. Though all of these vector fields have nice commutation properties with $\Box$, only $\partial_t$ is guaranteed to preserve the Dirichlet boundary conditions. While the other members of $Z$ do not preserve the boundary conditions, their coefficients are bounded on the compact obstacle, and thus they approximately preserve them. Indeed, these can be handled using elliptic regularity arguments and localized energy estimates as was initiated in \cite{KSS}. On the other hand, the Lorentz boosts have unbounded normal component on ${\partial\K}$ and seem inadmissible for such boundary value problems. It is also worth noting that the Lorentz boosts have an associated speed, and they only commute with the d'Alembertian of the same speed, which also renders them difficult to use for multiple speed systems. While the scaling vector field has a bounded normal component on ${\partial\K}$, for long-time problems, its coefficients can be large in any neighborhood of the boundary. For this reason, we shall be required to use relatively few occurrences of $L$ compared to the vector fields $Z$. This is an idea which originated in \cite{KSS3} and is further displayed in \cite{MS3}, \cite{MNS, MNS2}. The star-shaped assumption on the geometry of ${\mathcal K}$ arises in \cite{DZ} in order to prove energy estimates involving the scaling vector field $L$.
Indeed, using ideas akin to those from \cite{KSS3}, which are in turn reminiscent of \cite{Morawetz}, it is shown that the worst boundary term (in terms of $t$ dependence) which arises when proving an energy inequality for $Lu$ has a beneficial sign. Developing an alternative method for handling these boundary terms was one of the major innovations of \cite{MS3}, and this article largely represents a combination of ideas from \cite{DZ} and \cite{MS3}. The star-shaped assumption arises in \cite{DMSZ} in a related, though different, way. Long-time existence is shown there using only the vector fields $Z$. This is accomplished by employing a class of localized energy estimates which are known to hold for small perturbations of the d'Alembertian exterior to star-shaped obstacles \cite{MS}, \cite{MT2} and iterating in a fashion which is akin to \cite{KSS} and \cite{MS}. See also \cite{MS4} for a further example of how a star-shaped assumption and the broader class of available localized energy estimates can simplify arguments. The remainder of the article is organized as follows. In Section 2, we gather our main energy and localized energy estimates. These largely represent a combination of the main estimates used in \cite{DZ} as well as the techniques developed in \cite{MS3} to permit the use of the scaling vector field when the obstacle is not star-shaped. In Section 3, we establish the main decay estimates which we shall utilize. The principal piece here is an $L^1$-$L^\infty$ estimate which is akin to those of H\"ormander \cite{HormanderL1Linfty} as was adapted to remove the dependence on the Lorentz boosts by \cite{KSS3}. In Section 4, we prove the long-time existence given by Theorem \ref{thm1}. \newsection{Energy estimates} In this section, we shall gather the main $L^2$ estimates which we shall require. These will primarily be energy estimates as well as weighted $L^2_tL^2_x$ localized energy estimates for the solution and vector fields applied to the solution.
That is, these will be variants of the energy estimate and the localized energy estimate \begin{multline}\label{le}\sup_{t\in [0,T]} \|u'(t,{\,\cdot\,})\|_2 + \sup_{R\ge 1} R^{-1/2} \|u'\|_{L^2_{t,x}([0,T]\times \{|x|<R\})}\lesssim \|u'(0,{\,\cdot\,})\|_2 \\+ \inf_{\Box u = f+g}\Bigl( \int_0^T \|f(s,{\,\cdot\,})\|_2\,ds + \sum_{j\ge 0} \|{\langle} x{\rangle}^{1/2} g\|_{L^2_{t,x}([0,T]\times \{{\langle} x{\rangle} \approx 2^j\})}\Bigr)\end{multline} which are known to hold on ${\mathbb R}_+\times {\mathbb R}^3$. Estimates of this latter form originated in \cite{Morawetz2} and have subsequently appeared in, e.g., \cite{HY}, \cite{KSS, KSS3}, \cite{KPV}, \cite{SS}, \cite{Sterb}, \cite{Strauss}. Their particular utility for proving long-time existence in exterior domains was first recognized in \cite{KSS}, and they have played a primary role in nearly every such proof since. Also, see, e.g., \cite{MS, MS4} and \cite{MT1, MT2}. \subsubsection{Estimates for $\|u\|_{L^2_x}$ on ${\mathbb R}_+\times{\mathbb R}^3$}\label{bdylessSection} Here we shall gather the boundaryless $L^2$ estimates on $u$, rather than on $u'$, which we shall utilize in the sequel. We shall only require these in the boundaryless case as the Dirichlet boundary conditions permit the control \begin{equation}\label{locControl}\|u\|_{L^2_x(\{x\in {\R^3\backslash\K}\,:\,|x|<2\})} \lesssim \|u'\|_{L^2_x(\{x\in {\R^3\backslash\K}\,:\,|x|<2\})}\end{equation} and when a cutoff which vanishes on $\{|x|<1\}$ is applied to $u$, then it suffices to examine a boundaryless equation. The majority of these estimates were also utilized in \cite{DZ}. In the sequel, we shall abbreviate $L^2_x(\{x\in {\R^3\backslash\K}\,:\,|x|<2\})$ as $L^2_x(|x|<2)$. We first state the variant of the localized energy estimates which we shall employ. \begin{lemma} Let $u$ be a smooth function which vanishes for large $|x|$ at each time $t$. 
Then for $T\ge 1$ \begin{multline}\label{DZle} \|{\langle} x{\rangle}^{-3/4} u'\|_{L^2_{t,x}([0,T]\times{\mathbb R}^3)}+ T^{-1/4} \|{\langle} x{\rangle} ^{-1/4} u'\|_{L^2_{t,x}([0,T]\times {\mathbb R}^3)} \\\lesssim \|u'(0,{\,\cdot\,})\|_2 + \inf_{\Box u = f+g}\Bigl( \int_0^T \|f(s,{\,\cdot\,})\|_2\,ds + \sum_{j\ge 0} \|{\langle} x{\rangle}^{1/2} g\|_{L^2_{t,x}([0,T]\times \{{\langle} x{\rangle} \approx 2^j\})}\Bigr). \end{multline} \end{lemma} To obtain \eqref{DZle}, we first note that over $|x|>T$ it follows trivially from the energy inequality. To finish the proof, one need only dyadically decompose $|x|<T$ and apply \eqref{le}. The interested reader can find an alternate proof in \cite{DZ}. We shall then use the following weighted Sobolev-type estimate from \cite{DZ}, \cite{DMSZ}. \begin{lemma} For $n\ge 3$ and $h\in C_0^\infty({\mathbb R}^n)$, \[\|h\|_{\dot{H}^{-1}({\mathbb R}^n)}\lesssim \|h\|_{L^{\frac{2n}{n+2}}(|x|<1)} + \||x|^{-\frac{n-2}{2}} h\|_{L^1_rL^2_\omega(|x|>1)}.\] \end{lemma} Here and throughout, the mixed norm represents \[\|f\|_{L^p_r L^q_\omega({\mathbb R}^n)} = \Bigl(\int_0^\infty \Bigr[\int_{\S^{n-1}} |f(r\omega)|^q\,d\omega\Bigr]^{p/q}\, r^{n-1}\,dr\Bigr)^{1/p}.\] By applying the energy inequality and \eqref{DZle} to the Riesz transforms of the solution and subsequently applying the preceding lemma, we obtain: \begin{proposition} Let $u$ be a smooth function which vanishes for large $|x|$ at each time $t$. Then for $T\ge 1$ \begin{multline}\label{noD} \|u\|_{L^\infty_tL^2_x([0,T]\times {\mathbb R}^3)} + T^{-1/4} \|{\langle} x{\rangle}^{-1/4} u\|_{L^2_{t,x}([0,T]\times {\mathbb R}^3)} \lesssim \|u(0,{\,\cdot\,})\|_2 + \|\partial_tu(0,{\,\cdot\,})\|_{\dot{H}^{-1}} \\ + \int_0^T \||x|^{-1/2} \Box u(s,{\,\cdot\,})\|_{L^1_rL^2_\omega(|x|>1)}\,ds + \int_0^T \|\Box u(s,{\,\cdot\,})\|_{L^{6/5}(|x|<1)}\,ds. \end{multline} \end{proposition} Further details of the proof can again be found in \cite{DZ} and \cite{DMSZ}. 
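For the reader's convenience, we briefly indicate the dyadic argument behind \eqref{DZle}; what follows is only a sketch, and complete details may be found in \cite{DZ}. On a dyadic region $\{\langle x\rangle \approx 2^j\}$ with $2^j\lesssim T$, we have $\langle x\rangle^{-3/4}\approx 2^{-3j/4}$, and so \eqref{le} with $R\approx 2^j$ yields \[\|\langle x\rangle^{-3/4} u'\|_{L^2_{t,x}([0,T]\times\{\langle x\rangle\approx 2^j\})} \lesssim 2^{-3j/4}\cdot 2^{j/2}\,M = 2^{-j/4}\,M,\] where $M$ denotes the right side of \eqref{le}. Squaring and summing the resulting geometric series over $j$ controls the first term on the left of \eqref{DZle}. For the second term, the analogous bound with the weight $T^{-1/4}\langle x\rangle^{-1/4}$ produces a factor $T^{-1/4}2^{j/4}M$ on each dyadic piece, and summing the squares over $2^j\lesssim T$ again gives $O(M^2)$. Finally, on the remaining region $|x|\gtrsim T$, both weights are $\lesssim T^{-1/2}$ since $T\ge 1$, and the contribution there is bounded by $T^{-1/2}\cdot T^{1/2}\sup_{0\le t\le T}\|u'(t,{\,\cdot\,})\|_2\lesssim M$ using the energy inequality.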
The following proposition will be used to control commutator terms when the solution in the exterior domain is cut off away from the obstacle. It appears implicitly in \cite[Section 4]{DZ} and utilizes arguments akin to those which appeared in \cite{SS}, \cite{KSS} which rely on Huygens' principle. In higher dimensions, an alternate proof which does not rely on Huygens' principle appeared in \cite{DMSZ}. \begin{proposition} Let $u$ be a smooth solution to $\Box u = G$, $u=0$ for $t\le 0$. Suppose further that $G(s,x)=0$ unless $|x|<3$. Then, \begin{equation} \label{noDcomm} \|u\|_{L^\infty_tL^2_x([0,T]\times {\mathbb R}^3)} + T^{-1/4} \|{\langle} x{\rangle}^{-1/4} u\|_{L^2_{t,x}([0,T]\times {\mathbb R}^3)} \lesssim \|G\|_{L^2_{t,x}([0,T]\times {\mathbb R}^3)}. \end{equation} \end{proposition} In one case, we shall need an improvement on the estimate \eqref{noD} when the forcing term is in divergence form. This follows easily from ideas of \cite{Hormander}, \cite{Lindblad}. See also \cite{MS5} for an application in a context similar to the current study. \begin{proposition}\label{propDivForm} Let $v$ be a smooth solution to \[\begin{cases} \Box v = \sum_0^3 a_j\partial_j G,\quad (t,x)\in {\mathbb R}_+\times {\mathbb R}^3,\\ v(0,{\,\cdot\,})=\partial_t v(0,{\,\cdot\,})=0. \end{cases}\] Then, \begin{equation} \label{divForm} \sup_{t\in [0,T]}\|v(t,{\,\cdot\,})\|_2 + {\langle} T{\rangle}^{-1/4} \|{\langle} x{\rangle}^{-1/4} v\|_{L^2_{t,x}([0,T]\times {\mathbb R}^3)} \lesssim \|G(0,{\,\cdot\,})\|_{\dot{H}^{-1}} + \int_0^T \|G(s,{\,\cdot\,})\|_2\,ds. \end{equation} \end{proposition} \subsubsection{Energy estimates on ${\mathbb R}_+\times{\R^3\backslash\K}$} In this section, we examine the fixed time, $L^2$ energy estimates which will be used in the sequel. As we are proving long-time existence for quasilinear equations, we shall require such estimates for small perturbations of the d'Alembertian.
Such an estimate is well-known for solutions satisfying Dirichlet boundary conditions. When vector fields are, however, applied to the solution and these boundary terms no longer vanish, some extra care is required, particularly when the time dependent vector field $L$ occurs. In the remainder of this section, we are merely gathering results from \cite[Section 2]{MS3}, and the interested reader is referred therein for detailed proofs. In particular, we shall be studying smooth solutions $u$ to \begin{equation}\label{pert} \begin{cases} \Box_\gamma u = F,\quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\K},\\ u|_{{\partial\K}}(t,{\,\cdot\,})=0,\\ u(0,{\,\cdot\,})=f,\quad \partial_t u(0,{\,\cdot\,})=g \end{cases} \end{equation} where \[\Box_\gamma u = (\partial_t^2 -\Delta)u + \gamma^{\alpha\beta}(t,x)\partial_\alpha\partial_\beta u.\] The perturbation is taken to satisfy $\gamma^{\alpha\beta}=\gamma^{\beta\alpha}$ as well as \begin{equation}\label{gammaDecay1} \|\gamma^{\alpha\beta}(t,{\,\cdot\,})\|_\infty \le \frac{\delta}{1+t},\quad 0<\delta\ll 1. \end{equation} We set $e_0(u)$ to be the energy form \[ e_0(u)=|u'|^2 + 2\gamma^{0\alpha} \partial_t u \partial_\alpha u - \gamma^{\alpha\beta}\partial_\alpha u\partial_\beta u. \] Our first estimate concerns \[E_M(t)=E_M(u)(t)=\int_{{\R^3\backslash\K}} \sum_{j=0}^M e_0(\partial_t^j u)(t,x)\,dx.\] As $\partial_t^j$ preserves the Dirichlet boundary conditions, standard energy methods yield \begin{lemma}\label{lemma2.1} Fix $M=0,1,2,\dots$ and assume that the $\gamma^{\alpha\beta}$ are as above. Suppose that $u\in C^\infty$ solves \eqref{pert} and vanishes for large $|x|$ for every $t$. Then \begin{equation} \label{dtEnergy} \partial_t E_M^{1/2}(t)\lesssim \sum_{j=0}^M \|\Box_\gamma \partial_t^j u(t,{\,\cdot\,})\|_2 + \|\gamma'(t,{\,\cdot\,})\|_\infty E_M^{1/2}(t). 
\end{equation} \end{lemma} Here, we have used the notation \[ \|\gamma'(t,{\,\cdot\,})\|_\infty = \sum_{\alpha,\beta,\mu=0}^3 \|\partial_\mu \gamma^{\alpha\beta}(t,{\,\cdot\,})\|_\infty.\] In the sequel, we shall frequently use the fact that \[\int_{{\R^3\backslash\K}} e_0(v)(t,x)\,dx \approx \|v'(t,{\,\cdot\,})\|^2_2\] if \eqref{gammaDecay1} holds with $\delta$ sufficiently small. From these $L^2$ estimates for $\partial_t$ applied to the solution $u$, estimates for $\partial_x^\mu u$ shall be obtained via elliptic regularity. The key lemma is \begin{lemma}\label{lemma2.3} For $j, N=0,1,2,\dots$ fixed and for $u\in C^\infty({\mathbb R}_+\times{\R^3\backslash\K})$ solving \eqref{pert} and vanishing for large $|x|$ for each $t$, we have \begin{equation}\label{ellReg} \sum_{|\mu|\le N} \|L^j \partial^\mu u'(t,{\,\cdot\,})\|_2 \lesssim \sum_{\substack{k+l\le j+N\\ l\le j}} \|L^l\partial_t^k u'(t,{\,\cdot\,})\|_2 + \sum_{\substack{|\mu|+l\le N+j-1\\l\le j}} \|L^l\partial^\mu \Box u(t,{\,\cdot\,})\|_2. \end{equation} \end{lemma} In order to prove estimates involving $L$, we set $\tilde{L}=\eta(x)r\partial_r + t\partial_t$ where $\eta\in C^\infty({\mathbb R}^3)$ with $\eta(x)=0$ for $x\in {\mathcal K}$ and $\eta(x)=1$ for $|x|>1$. We note that $\tilde{L}$ now preserves the Dirichlet boundary conditions. It, however, no longer commutes with $\Box$. The commutators will be controlled using a combination of \eqref{decay} and Huygens' principle for the associated boundaryless d'Alembertian, which will be stated later. We set \[X_{k,j} = \int e_0(\tilde{L}^k \partial_t^j u)(t,x)\,dx.\] Associated to this energy, we have the following estimate. \begin{proposition} Let $u\in C^\infty$ solve \eqref{pert} with $\gamma^{\alpha\beta}$ as above and vanish for large $|x|$ for each $t$. 
Then, \begin{multline} \label{X} \partial_t X^{1/2}_{k,j} \lesssim \|\gamma'(t,{\,\cdot\,})\|_\infty X^{1/2}_{k,j} +\|\tilde{L}^k \partial_t^j \Box_\gamma u(t,{\,\cdot\,})\|_2 + \|[\tilde{L}^k\partial_t^j, \gamma^{\alpha\beta}\partial_\alpha\partial_\beta]u(t,{\,\cdot\,})\|_2 \\+ \sum_{l\le k-1} \|L^l\partial_t^j \Box u(t,{\,\cdot\,})\|_2 + \sum_{\substack{l+|\mu|\le j+k\\l\le k-1}} \|L^l \partial^\mu u'(t,{\,\cdot\,})\|_{L^2(|x|<1)}. \end{multline} \end{proposition} In the sequel, we shall be choosing $\gamma$ so that \begin{equation}\label{gammaDecay2}\|\gamma'(t,{\,\cdot\,})\|_\infty \le \frac{\delta}{1+t}.\end{equation} By doing so, it will be easy to bootstrap the term involving $\|\gamma'\|_\infty$ upon integration over $[0,T]$ when $T$ is appropriately bounded in terms of $\delta$. We finally state an energy estimate involving the full set of vector fields. Here, the associated boundary terms involve a loss of regularity, but they no longer involve the rotations. These boundary terms will be controlled using localized energy estimates which follow. \begin{proposition} For fixed $N_0$ and $m_0$, set \[Y_{N_0,m_0}(t) = \sum_{\substack{|\mu|+k\le N_0+m_0\\k\le m_0\\|\nu|=1}} \int e_0(L^k Z^\mu \partial^\nu u)(t,x)\,dx.\] Suppose that \eqref{gammaDecay1} and \eqref{gammaDecay2} hold for $\delta$ sufficiently small. Then \begin{multline} \label{LZenergy} \partial_t Y_{N_0,m_0}\lesssim Y_{N_0,m_0}^{1/2}\sum_{\substack{|\mu|+k\le N_0+m_0\\k\le m_0\\|\nu|=1}}\|\Box_\gamma L^k Z^\mu \partial^\nu u(t,{\,\cdot\,})\|_2 \\+ \|\gamma'(t,{\,\cdot\,})\|_\infty Y_{N_0,m_0} + \sum_{\substack{|\mu|+k\le N_0+m_0+2\\k\le m_0}} \|L^k \partial^\mu u'(t,{\,\cdot\,})\|^2_{L^2(|x|<1)}. \end{multline} \end{proposition} The above proposition contains a slight modification of what appeared previously in \cite{MS3}. There one did not need to distinguish between the vector fields $Z$ and the one derivative $\partial$ in the definition of $Y_{N_0,m_0}$. 
Here we need this slight bit of additional precision. The proof follows the same argument. One need only apply standard energy methods to the principal terms and a trace theorem to the boundary terms which result upon doing such integrations by parts. \subsubsection{Localized energy estimates and boundary term estimates on ${\mathbb R}_+\times{\R^3\backslash\K}$} In this section, we collect two additional results from \cite{MS3}. The reader is referred there for proofs. Both estimates will concern solutions to the Dirichlet-wave equation \begin{equation}\label{inhomwave} \begin{cases} \Box w = G(t,x),\quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\K},\\ w|_{{\partial\K}}(t,{\,\cdot\,})=0,\\ w(t,x)=0,\quad t\le 0. \end{cases} \end{equation} When the estimates of Section \ref{bdylessSection} are applied away from the obstacle, the following, which strongly depends on \eqref{decay}, is used to handle the behavior near the boundary. This is also used to control the boundary term that appears in \eqref{LZenergy}. For notational convenience, we set $S_T=[0,T]\times {\R^3\backslash\K}$. \begin{proposition} Suppose that ${\mathcal K}\subset \{|x|<1\}$ satisfies \eqref{decay}, and suppose that $w\in C^\infty$ solves \eqref{inhomwave}. Then, for fixed $N_0$ and $m_0$, if $w$ vanishes for large $|x|$ for every fixed $t$, \begin{multline} \label{locEnergyExt} \sum_{\substack{|\mu|+j\le N_0+m_0\\j\le m_0}} \|L^j \partial^\mu w'\|_{L^2_{t,x}(S_T\cap \{|x|<10\})} \lesssim \int_0^T \sum_{\substack{|\mu|+j\le N_0+m_0+1\\j\le m_0}} \|\Box L^j \partial^\mu w(s,{\,\cdot\,})\|_2\,ds \\+ \sum_{\substack{|\mu|+j\le N_0+m_0-1\\j\le m_0}} \|\Box L^j \partial^\mu w\|_{L^2_{t,x}(S_T)}. \end{multline} \end{proposition} The second estimate shall be used to control the boundary term which arises in \eqref{X}.
The contributions from behavior near the obstacle are controlled using \eqref{decay}, and those away from the boundary follow from sharp Huygens' principle after passing to a properly related boundaryless equation. See \cite{MS3}. \begin{proposition} Let ${\mathcal K}\subset \{|x|<1\}$ satisfy \eqref{decay}, and suppose that $w\in C^\infty$ solves \eqref{inhomwave}. Then for fixed $N_0$ and $m_0$ and $t>2$, \begin{multline} \label{ms_bdy} \sum_{\substack{|\mu|+j\le N_0+m_0\\ j\le m_0}} \int_0^t \|L^j \partial^\mu w'(s,{\,\cdot\,})\|_{L^2(|x|<2)}\,ds \\\lesssim \sum_{\substack{|\mu|+j\le N_0+m_0+1\\j\le m_0}} \int_0^t \Bigl(\int_0^s \|L^j \partial^\mu \Box w(\tau,{\,\cdot\,})\|_{L^2(||x|-(s-\tau)|<10)}\,d\tau\Bigr)\,ds. \end{multline} \end{proposition} \newsection{Pointwise decay estimates} In this section, we gather the main decay estimates which will permit the necessary integrability to gain long-time existence. The first is a variant on standard weighted $L^1$-$L^\infty$ estimates (see, e.g., \cite{Htext}, \cite{So2}). \begin{proposition}\label{l1linfProp} Suppose that $w$ is a solution to the scalar inhomogeneous wave equation \eqref{inhomwave}, and suppose that ${\mathcal K}\subset \{|x|<1\}$ is so that the decay of local energy \eqref{decay} holds. Then \begin{multline}\label{l1linf} (1+t+|x|) |Z^\mu w(t,x)|\lesssim \int_0^t\int_{\R^3\backslash\K} \sum_{\substack{|\nu|+k\le |\mu|+7\\k\le 1}} |L^k Z^\nu G(s,y)|\,\frac{dy\,ds}{|y|} \\+\int_0^t \sum_{\substack{|\nu|+k\le |\mu|+4\\k\le 1}} \|L^k \partial^\nu G(s,{\,\cdot\,})\|_{L^2(|x|<2)}\,ds. \end{multline} \end{proposition} This is essentially \cite[Theorem 4.1]{KSS3}, though there solutions are studied exterior to star-shaped obstacles. For such domains, the associated version of \eqref{decay} does not require a loss of regularity.
In the current setting, as we allow for the possibility of trapped rays in our exterior domain and as such \eqref{decay} has an associated loss of regularity, the right hand side of \eqref{l1linf} reflects an additional vector field. See, also, \cite[Theorem 3.1]{MS3}. The second decay estimate is a weighted Sobolev lemma. See \cite{Klainerman2}. \begin{lemma}\label{wtdSoblemma} Suppose that $h\in C^\infty({\mathbb R}^3)$. Then for $R\ge 1$, \begin{equation} \label{wtdSob} \|h\|_{L^\infty(R/2<|x|<R)}\lesssim R^{-1}\sum_{|\alpha|\le 2} \|Z^\alpha h\|_{L^2(R/4<|x|<2R)}, \end{equation} and \begin{equation} \label{wtdSob2} \|h\|_{L^\infty(R<|x|<R+1)}\lesssim R^{-1}\sum_{|\alpha|\le 2} \|Z^\alpha h\|_{L^2(R-1<|x|<R+2)}. \end{equation} \end{lemma} After localizing to the annulus, these estimates follow simply by applying Sobolev embedding on ${\mathbb R}\times \S^2$ and then adjusting the volume element to match that of ${\mathbb R}^3$. \newsection{Proof of Theorem \ref{thm1}} Here we prove Theorem~\ref{thm1} via iteration. We shall first use local existence theory to reduce to the case of vanishing initial data. Indeed, by invoking, e.g., the local existence theory established in \cite{KSS2}, if $\varepsilon$ in \eqref{data} is sufficiently small and $N$ is sufficiently large, then the existence of a smooth solution for $t\in [0,2]$ satisfying \begin{equation} \label{local} \sup_{0\le t\le 2} \sum_{|\mu|\le 102} \|\partial^\mu u(t,{\,\cdot\,})\|_{2}\le C\varepsilon \end{equation} is guaranteed. We now use this local solution to carry out this reduction. To this end, let $\eta\in C^\infty({\mathbb R})$ with $\eta(t)\equiv 1$ for $t\le 1/2$ and $\eta(t)\equiv 0$ for $t>1$.
Then $u_0=\eta u$ solves \[\Box u_0 = \eta Q(u,u',u'') + [\Box,\eta]u.\] And solving \eqref{main} is then equivalent to showing that $w=u-u_0$ solves \begin{equation}\label{reduced} \begin{cases} \Box w = (1-\eta)Q(u_0+w,(u_0+w)',(u_0+w)'') - [\Box,\eta](u_0+w),\\ w|_{\partial\K} = 0,\\ w(0,x)=\partial_tw(0,x)=0. \end{cases} \end{equation} We solve \eqref{reduced} via iteration. In particular, we let $w_0\equiv 0$, and recursively define $w_m$ to solve \[ \begin{cases} \Box w_m = (1-\eta) Q(u_0+w_{m-1},(u_0+w_{m-1})', (u_0+w_m)'') - [\Box,\eta]u,\\ w_m|_{{\partial\K}}=0,\\ w_m(0,x)= \partial_t w_m(0,x)=0. \end{cases} \] Our first goal is to show a form of boundedness. We set \begin{multline}\label{M} M_m(T)=\sup_{t\in [0,T]}\sum_{|\mu|\le 100}\|\partial^\mu w_m'(t,{\,\cdot\,})\|_2 + \sum_{|\mu|\le 95} \|{\langle} x {\rangle}^{-3/4}\partial^\mu w_m'\|_{L^2_{t,x}(S_T)}\\ +\sup_{t\in [0,T]} \sum_{\substack{|\mu|\le 90\\|\nu|\le 2}} \|Z^\mu\partial^\nu w_m(t,{\,\cdot\,})\|_2 + \sum_{\substack{|\mu|\le 90\\|\nu|\le 1}} {\langle} T{\rangle}^{-1/4} \|{\langle} x{\rangle}^{-1/4} Z^\mu\partial^\nu w_m\|_{L^2_{t,x}(S_T)}\\ +\sup_{0\le t\le T} \sum_{|\mu|\le 80} \|L\partial^\mu w'_m(t,{\,\cdot\,})\|_2 + \sum_{|\mu|\le 75} \|{\langle} x{\rangle}^{-3/4} L\partial^\mu w_m'\|_{L^2_{t,x}(S_T)} \\+ \sup_{0\le t\le T} \sum_{\substack{|\mu|\le 70\\|\nu|\le 2}} \|LZ^\mu \partial^\nu w_m(t,{\,\cdot\,})\|_2 + {\langle} T{\rangle}^{-1/4} \sum_{\substack{|\mu|\le 70\\|\nu|\le 1}} \|{\langle} x{\rangle}^{-1/4} LZ^\mu \partial^\nu w_m\|_{L^2_{t,x}(S_T)} \\+ \sup_{0\le t\le T} (1+t) \sum_{|\mu|\le 60} \|Z^\mu w_m(t,{\,\cdot\,})\|_\infty. 
\end{multline} If $M_0(T)$ denotes the above quantity with $w_m$ replaced by $u_0$, then \eqref{local} and finite propagation speed guarantee the existence of a constant $C_0$ so that \begin{equation}\label{basecase}M_0(T)\le C_0\varepsilon.\end{equation} We wish to inductively show that there is a uniform constant $C_1$ so that \begin{equation}\label{bddness}M_m(T)\le C_1\varepsilon.\end{equation} We label the terms of $M_m(T)$ by $I, II, \dots, IX$. {\bf \em Bound for $I$:} Here we shall apply \eqref{dtEnergy} and \eqref{ellReg} ($j=0$) with \begin{equation}\label{gamma}\gamma^{\alpha\beta} = - (1-\eta)\Bigl[b^{\alpha\beta} (u_0+w_{m-1}) + b^{\alpha\beta}_\sigma \partial_\sigma(u_0+w_{m-1})\Bigr].\end{equation} By the inductive hypothesis as well as \eqref{local}, we have \eqref{gammaDecay1} and \eqref{gammaDecay2} with $\delta = C_1\varepsilon$. Upon integrating \eqref{dtEnergy}, applying \eqref{lifespan}, bootstrapping, and utilizing \eqref{ellReg}, it suffices to bound \[\sum_{j\le 100} \int_0^T \|\Box_\gamma \partial_t^j w_m(t,{\,\cdot\,})\|_2\,dt + \sup_{t\in [0,T]}\sum_{|\mu|\le 99} \|\partial^\mu \Box w_m(t,{\,\cdot\,})\|_2.\] Here we note that \begin{multline*} \sum_{j\le 100} |\Box_\gamma \partial^j_t w_m| \lesssim \sum_{|\nu|\le 50} |\partial^\nu (u_0+w_{m-1})| \Bigl[ \sum_{|\mu|\le 100} |\partial^\mu (u_0+w_m)'| + \sum_{|\mu|\le 102} |\partial^\mu u_0|\Bigr] \\ + \sum_{|\nu|\le 51} |\partial^\nu (u_0+w_{m})'| \sum_{|\mu|\le 100} |\partial^\mu (u_0+w_{m-1})'| + \sum_{|\nu|\le 51} |\partial^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 100} |\partial^\mu (u_0+w_{m-1})'| \\+ |u_0+w_{m-1}|^2 + \sum_{|\mu|\le 100} |\partial^\mu [\Box,\eta]u|. \end{multline*} By using terms $I$, $III$, and $IX$ of \eqref{M} as well as \eqref{local}, we have \begin{multline*} \sum_{j\le 100} \int_0^T \|\Box_\gamma \partial^j_t w_m(t,{\,\cdot\,})\|_2\, dt\lesssim (M_0(T)+M_{m-1}(T))(M_0(T)+M_m(T)) \int_0^T (1+t)^{-1}\,dt \\ + (M_0(T)+M_{m-1}(T))^2 \int_0^T (1+t)^{-1}\,dt +\varepsilon.
\end{multline*} We can argue similarly to control \[\sup_{t\in [0,T]}\sum_{|\mu|\le 99} \|\partial^\mu \Box w_m(t,{\,\cdot\,})\|_2.\] By doing so, it follows that \[I\le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T))\log(2+T) +C_2\varepsilon\] provided that $C_2$ is a constant which is chosen sufficiently large relative to the constant in \eqref{local}. At this point, we shall permit $C_2$ to change from line to line but note that $C_2$ is completely independent of important parameters such as $m$, $C_1$, $\varepsilon$, and $T$. {\bf\em Bound for $II$:} Here we fix a smooth cutoff $\beta$ which is identically $1$ on $|x|<2$ and vanishes for $|x|>3$. For the multi-index $\mu$ fixed, we first examine $(1-\beta)\partial^\mu w_m$, which solves the boundaryless wave equation \[\Box (1-\beta)\partial^\mu w_m = (1-\beta)\partial^\mu \Box w_m - [\Box,\beta] \partial^\mu w_m\] with vanishing initial data. To this, we apply \eqref{DZle}. Thus, in order to control $II$, we see that it suffices to bound \[\sum_{|\mu|\le 95} \int_0^T \|\partial^\mu \Box w_m(s,{\,\cdot\,})\|_2\,ds + \sum_{|\mu|\le 95} \|\partial^\mu w_m'\|_{L^2_{t,x}(S_T\cap \{|x|<3\})}.\] Here we have applied \eqref{locControl} to the lower order piece of the commutator. 
To control the latter term, we utilize \eqref{locEnergyExt}, which reduces the bound for $II$ to controlling \begin{equation}\label{II_RHS}\sum_{|\mu|\le 96} \int_0^T \|\partial^\mu \Box w_m(s,{\,\cdot\,})\|_2\,ds + \sum_{|\mu|\le 94} \|\partial^\mu \Box w_m\|_{L^2_{t,x}(S_T)}.\end{equation} As \begin{multline} \label{productrule} \sum_{|\mu|\le 96} |\partial^\mu \Box w_m| \lesssim \sum_{|\nu|\le 49} |\partial^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 97} |\partial^\mu (u_0+w_m)'|\\ + \sum_{|\nu|\le 49} |\partial^\nu (u_0+w_{m})'| \sum_{|\mu|\le 96} |\partial^\mu (u_0+w_{m-1})'| + \sum_{|\nu|\le 49} |\partial^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 96} |\partial^\mu (u_0+w_{m-1})'| \\+ |u_0+w_{m-1}|^2 + \sum_{|\mu|\le 96} |\partial^\mu [\Box,\eta]u|, \end{multline} it follows from \eqref{M} ($I$, $III$, and $IX$) and \eqref{local} that \begin{multline*} \sum_{|\mu|\le 97} \int_0^T \|\partial^\mu \Box w_m(s,{\,\cdot\,})\|_2\,ds \lesssim (M_0(T)+M_{m-1}(T))(M_0(T)+M_m(T)) \int_0^T (1+s)^{-1}\,ds \\+ (M_0(T)+M_{m-1}(T))^2\int_0^T (1+s)^{-1}\,ds + \varepsilon. \end{multline*} The second term in \eqref{II_RHS} is simpler and can be controlled similarly. It follows that \[II\le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T))\log(2+T) +C_2\varepsilon.\] {\bf \em Bounds for $III$ and $IV$, $|\nu|=0$:} The primary estimate which shall be utilized is \eqref{noD}. This meshes well with every nonlinear term with the exception of those involving second derivatives, which are more difficult as we cannot properly control the second derivatives in the weighted $L^2_{t,x}$ spaces. To get around this, we write the worst terms in divergence form and utilize \eqref{divForm}. To do so, we fix $\beta$ as above. Over $|x|<3$, due to the Dirichlet boundary conditions, we have that $\|w_m(s,{\,\cdot\,})\|_{L^2(|x|<3)} \lesssim \|w_m'(s,{\,\cdot\,})\|_{L^2(|x|<3)}$.
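This is a Poincar\'e--Friedrichs inequality for functions vanishing on a portion of the boundary; a minimal sketch by a standard compactness argument follows (the domain label $\Omega$ is ours, introduced only for this sketch, and we use only the spatial gradient, which is dominated by $w_m'$):

```latex
Set $\Omega=\{|x|<3\}\backslash{\mathcal K}$, and suppose the inequality failed.
Then there would exist $v_k\in H^1(\Omega)$ with $v_k|_{\partial{\mathcal K}}=0$ and
\[ \|v_k\|_{L^2(\Omega)}=1, \qquad \|\nabla_x v_k\|_{L^2(\Omega)}\to 0. \]
By Rellich--Kondrachov compactness, a subsequence converges in $L^2(\Omega)$ to
some $v$ with $\|v\|_{L^2(\Omega)}=1$ and $\nabla_x v=0$, so $v$ is a nonzero
constant on the connected set $\Omega$. But the trace of $v$ on
$\partial{\mathcal K}$ vanishes, forcing $v\equiv 0$, a contradiction.
```

Since the resulting constant depends only on $\Omega$, the bound holds uniformly in $s$.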
Moreover, on such a compact set, the coefficients of $Z$ are bounded, and $\|Zw_m(s,{\,\cdot\,})\|_{L^2(|x|<3)}\lesssim \|w_m'(s,{\,\cdot\,})\|_{L^2(|x|<3)}$. Such terms are subject to the bound established for $II$ above. Thus, it will suffice to control $(1-\beta)Z^\mu w_m$ in the appropriate norms. We note that for $\mu$ fixed, $(1-\beta)Z^\mu w_m$ solves the boundaryless equation \[\Box (1-\beta)Z^\mu w_m = (1-\beta)Z^\mu \Box w_m -[\Box,\beta]Z^\mu w_m\] and that the latter term is supported in $\{|x|<3\}$. We must further decompose the first term in the right side. We write \begin{multline*} (1-\beta)Z^\mu \Box w_m = \partial_\alpha\Bigl[(1-\eta)(1-\beta) \Bigl(b^{\alpha\beta} (u_0+w_{m-1}) \partial_\beta Z^\mu (u_0+w_m) \\+ b^{\alpha\beta}_\sigma \partial_\sigma (u_0+w_{m-1}) \partial_\beta Z^\mu (u_0+w_m)\Bigr)\Bigr] +G_m(t,x). \end{multline*} The key here is that the former term falls into the context of Proposition \ref{propDivForm} while the latter term does not contain the case where the full number of vector fields lands on the term containing second derivatives. By applying \eqref{divForm}, \eqref{noD} and \eqref{noDcomm}, as well as the comments in the preceding paragraph, we obtain \begin{multline}\label{zeroCase} \sup_{t\in [0,T]} \sum_{|\mu|\le 90} \|(1-\beta) Z^\mu w_m(t,{\,\cdot\,})\|_2 + \sum_{|\mu|\le 90} {\langle} T{\rangle}^{-1/4} \|{\langle} x{\rangle}^{-1/4} (1-\beta)Z^\mu w_m\|_{L^2_{t,x}(S_T)} \\\lesssim \int_0^T \|{\langle} x{\rangle}^{-1/2}G_m(s,{\,\cdot\,})\|_{L^1_rL^2_\omega}\,ds +\sum_{|\mu|\le 90}\sum_{|\theta|\le 1}\int_0^T \| |\partial^\theta (u_0+w_{m-1})| |Z^\mu (u_0+w_m)'|\|_2\,ds \\+ \sum_{|\mu|\le 90} \|\partial^\mu w_m'\|_{L^2_{t,x}([0,T]\times\{|x|<3\})}. \end{multline} The last term is controlled by $II$, which was appropriately bounded in the previous section. We first examine the first term in the right side.
Analogous to \eqref{productrule}, we have \begin{multline} \label{productrule2} |G_m| \lesssim \sum_{|\nu|\le 46} |Z^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 90} |Z^\mu (u_0+w_m)'|\\ + \sum_{|\nu|\le 46} |Z^\nu (u_0+w_{m})'| \sum_{\substack{|\mu|\le 90\\|\theta|\le 1}} |Z^\mu \partial^\theta (u_0+w_{m-1})| \\+ \sum_{|\nu|\le 46} |Z^\nu (u_0+w_{m-1})| \sum_{\substack{|\mu|\le 90\\|\theta|\le 1}} |Z^\mu \partial^\theta (u_0+w_{m-1})| + \sum_{|\mu|\le 90} |Z^\mu [\Box,\eta]u|. \end{multline} For the first term in the right side of \eqref{zeroCase}, we can apply Sobolev embedding on $\S^2$ and the Schwarz inequality to obtain \begin{multline*} \sum_{|\mu|\le 90} \int_0^T \|{\langle} x{\rangle}^{-1/2} G_m\|_{L^1_rL^2_\omega}\,ds \lesssim \varepsilon \\+ \sum_{|\nu|\le 48} \|{\langle} x{\rangle}^{-1/4} Z^\nu (u_0+ w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{|\mu|\le 90} \|{\langle} x{\rangle}^{-1/4} Z^\mu (u_0+ w_m)'\|_{L^2_{t,x}(S_T)} \\+ \sum_{|\nu|\le 48} \|{\langle} x{\rangle}^{-1/4} Z^\nu (u_0+w_m)'\|_{L^2_{t,x}(S_T)} \sum_{\substack{|\mu|\le 90\\|\theta|\le 1}} \|{\langle} x{\rangle}^{-1/4} Z^\mu \partial^\theta (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \\+ \sum_{|\nu|\le 48} \|{\langle} x{\rangle}^{-1/4} Z^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{\substack{|\mu|\le 90\\|\theta|\le 1}} \|{\langle} x{\rangle}^{-1/4} Z^\mu \partial^\theta (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)}. \end{multline*} Here we have also applied \eqref{local}. The above is, in turn, \[\lesssim {\langle} T{\rangle}^{1/2}(M_0(T)+ M_{m-1}(T)) (M_0(T)+M_{m-1}(T)+M_m(T)) + C_2\varepsilon.\] The second term in the right side of \eqref{zeroCase} is better behaved.
Indeed, using $IX$, it follows immediately that this is \[\lesssim (\log(2+T)) (M_0(T)+M_{m-1}(T))(M_0(T)+M_m(T)).\] Combining the above, we have established \[III\Bigr|_{|\nu|=0} + IV\Bigr|_{|\nu|=0}\le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)){\langle} T{\rangle}^{1/2} +C_2\varepsilon.\] {\bf \em Bounds for $III$ and $IV$, $|\nu|=1$:} We, again, fix $\beta$ as above and seek to control the contribution away from the boundary as the coefficients of $Z$ are $O(1)$ on the support of $\beta$ and the corresponding terms near the boundary are controlled by $II$. Indeed, we apply \eqref{le} and \eqref{DZle} to $(1-\beta)Z^\mu w_m$, which solves the boundaryless equation shown in the previous subsection. This yields \begin{multline}\label{III_IV_RHS} \sup_{t\in [0,T]} \sum_{|\mu|\le 90} \|Z^\mu w_m'(t,{\,\cdot\,})\|_2 + \sum_{|\mu|\le 90} {\langle} T{\rangle}^{-1/4} \|{\langle} x{\rangle}^{-1/4} Z^\mu w_m'\|_{L^2_{t,x}(S_T)} \\\lesssim \sum_{|\mu|\le 90} \int_0^T \|Z^\mu \Box w_m(s,{\,\cdot\,})\|_2\,ds + \sum_{|\mu|\le 91} \|\partial^\mu w_m\|_{L^2_{t,x}([0,T]\times \{|x|<3\})}. \end{multline} The last term in the above equation is subject to the bounds previously established for $II$. Mimicking the arguments above, we obtain the following bound for the first term in the right side of \eqref{III_IV_RHS}: \begin{multline*} \int_0^T \sum_{|\nu|\le 46} \|Z^\nu (u_0+w_{m-1})(s,{\,\cdot\,})\|_\infty \sum_{|\mu|\le 90} \|Z^\mu \partial^2 (u_0+w_m)(s,{\,\cdot\,})\|_2\,ds \\+ \int_0^T \sum_{|\nu|\le 46} \|Z^\nu \partial (u_0+w_m)(s,{\,\cdot\,})\|_\infty \sum_{\substack{|\mu|\le 90\\|\nu|\le 1}} \|Z^\mu \partial^\nu (u_0+w_{m-1})(s,{\,\cdot\,})\|_2\,ds \\+ \int_0^T \sum_{|\nu|\le 46} \|Z^\nu (u_0+w_{m-1})(s,{\,\cdot\,})\|_\infty \sum_{\substack{|\mu|\le 90\\|\nu|\le 1}} \|Z^\mu \partial^\nu (u_0+w_{m-1})(s,{\,\cdot\,})\|_2\,ds \\+ \int_0^T \sum_{|\mu|\le 90} \|Z^\mu [\Box,\eta]u(s,{\,\cdot\,})\|_2\,ds.
\end{multline*} The key thing to note here is that as we are not utilizing estimates for time-dependent perturbations of $\Box$, we must face a term of the form $\sum_{|\mu|\le 90} \|Z^\mu w_m''\|_2$, which shall be bounded in the following section. Utilizing terms $III$ and $IX$ of \eqref{M} as well as \eqref{local}, it follows that this piece satisfies \[III\Bigr|_{|\nu|=1} + IV\Bigr|_{|\nu|=1} \le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T))\log(2+T) + C_2\varepsilon.\] {\bf \em Bounds for $III$, $|\nu|=2$:} With $\gamma$ chosen as when bounding $I$, we have \eqref{gammaDecay1} and \eqref{gammaDecay2} with $\delta=C_1 \varepsilon$. We may, thus, apply \eqref{LZenergy}. Upon integrating \eqref{LZenergy} in $t$, applying \eqref{gammaDecay2} and \eqref{lifespan}, and bootstrapping, we need to control \begin{equation}\label{III_RHS}\int_0^T \sum_{\substack{|\mu|\le 90\\|\nu|=1}} \|\Box_\gamma Z^\mu \partial^\nu w_m(t,{\,\cdot\,})\|_2\,dt + \sum_{|\mu|\le 92} \|\partial^\mu w_m'\|_{L^2_{t,x}([0,T]\times\{|x|<1\})}.\end{equation} As above, the bound established for $II$ applies to the latter term, and we need only control the former term. Using the product rule, it follows that \begin{multline*} \sum_{\substack{|\mu|\le 90\\|\nu|=1}} |\Box_\gamma Z^\mu \partial^\nu w_m| \lesssim \sum_{|\nu|\le 47} |Z^\nu (u_0+w_{m-1})| \Bigl[ \sum_{\substack{|\mu|\le 90\\|\nu|\le 1}} |Z^\mu \partial^\nu (u_0+w_m)'| + \sum_{|\mu|\le 93} |Z^\mu u_0|\Bigr] \\ + \sum_{|\nu|\le 47} |Z^\nu (u_0+w_{m})'| \sum_{\substack{|\mu|\le 90\\|\nu|\le 2}} |Z^\mu \partial^\nu (u_0+w_{m-1})| \\+ \sum_{|\nu|\le 47} |Z^\nu (u_0+w_{m-1})| \sum_{\substack{|\mu|\le 90\\|\nu|\le 2}} |Z^\mu \partial^\nu (u_0+w_{m-1})| + \sum_{|\mu|\le 91} |Z^\mu [\Box,\eta]u|.
\end{multline*} And hence, using \eqref{local}, \begin{multline*} \int_0^T \sum_{\substack{|\mu|\le 90\\|\nu|=1}} \|\Box_\gamma Z^\mu \partial^\nu w_m(t,{\,\cdot\,})\|_2\,dt \lesssim\varepsilon \\+ \int_0^T \sum_{|\nu|\le 47} \|Z^\nu (u_0+w_{m-1})(t,{\,\cdot\,})\|_\infty \Bigl[ \sum_{\substack{|\mu|\le 90\\|\nu|\le 1}} \|Z^\mu \partial^\nu (u_0+w_m)'(t,{\,\cdot\,})\|_2 + \sum_{|\mu|\le 93} \|Z^\mu u_0(t,{\,\cdot\,})\|_2\Bigr]\,dt \\ + \int_0^T \sum_{|\nu|\le 47} \|Z^\nu (u_0+w_{m})'(t,{\,\cdot\,})\|_\infty \sum_{\substack{|\mu|\le 90\\|\nu|\le 2}} \|Z^\mu \partial^\nu (u_0+w_{m-1})(t,{\,\cdot\,})\|_2\,dt \\+ \int_0^T \sum_{|\nu|\le 47} \|Z^\nu (u_0+w_{m-1})(t,{\,\cdot\,})\|_\infty \sum_{\substack{|\mu|\le 90\\|\nu|\le 2}} \|Z^\mu \partial^\nu (u_0+w_{m-1})(t,{\,\cdot\,})\|_2\,dt. \end{multline*} We now use terms $III$ and $IX$ of \eqref{M}. This immediately yields \[III\Bigr|_{|\nu|=2} \le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T))\log(2+T) +C_2\varepsilon.\] {\bf \em Bound for $V$:} It is here where our approach most differs from that of \cite{DZ}. When proving energy estimates involving $L$, \cite{DZ} utilized the star-shapedness, as in \cite{KSS3}, to show that the worst contribution on the boundary had a favorable sign. Our approach instead follows that of \cite{MS3}, which relies on \eqref{ms_bdy}.
We first note that by \eqref{ellReg}, it suffices to estimate \[\sum_{\substack{j+k\le 81\\k\le 1}} \|L^k \partial_t^j w_m'(t,{\,\cdot\,})\|_2 + \sum_{\substack{|\mu|+k\le 80\\k\le 1}} \|L^k \partial^\mu \Box w_m(t,{\,\cdot\,})\|_2.\] We further note that \[\sum_{\substack{j+k\le 81\\k\le 1}} \|L^k \partial_t^j w_m'(t,{\,\cdot\,})\|_2 \le \sum_{j\le 80} \|\tilde{L} \partial_t^j w_m'(t,{\,\cdot\,})\|_2 + \sum_{|\mu|\le 81} \|\partial^\mu w_m'(t,{\,\cdot\,})\|_2.\] As the latter term is subject to the previously established bounds for $I$, it suffices to control \[\sum_{j\le 80} \|\tilde{L} \partial_t^j w_m'(t,{\,\cdot\,})\|_2 + \sum_{\substack{|\mu|+k\le 80\\k\le 1}} \|L^k \partial^\mu \Box w_m(t,{\,\cdot\,})\|_2.\] For the former term, we shall employ \eqref{X} with $\gamma$ as in the argument for term $I$. After integrating \eqref{X}, applying \eqref{gammaDecay2} and \eqref{lifespan}, and bootstrapping, it remains to bound \begin{multline}\label{VneedToBound} \int_0^T \sum_{\substack{j+k\le 81\\k\le 1}} \Bigl(\|\tilde{L}^k \partial_t^j \Box_\gamma w_m(t,{\,\cdot\,})\|_2 + \|[\tilde{L}^k\partial_t^j, \gamma^{\alpha\beta}\partial_\alpha\partial_\beta]w_m(t,{\,\cdot\,})\|_2\Bigr)\,dt \\+\int_0^T \sum_{j\le 80} \|\partial_t^j \Box w_m(t,{\,\cdot\,})\|_2 \,dt + \sum_{|\mu|\le 80} \int_0^T \|\partial^\mu w_m'(t,{\,\cdot\,})\|_{L^2(|x|<1)}\, dt \\+ \sup_{t\in [0,T]} \sum_{\substack{|\mu|+k\le 80\\k\le 1}} \|L^k \partial^\mu \Box w_m(t,{\,\cdot\,})\|_2.
\end{multline} We note that \begin{multline}\label{LproductRule} \sum_{\substack{j+k\le 81\\k\le 1}} \Bigl(|\tilde{L}^k \partial_t^j \Box_\gamma w_m| + |[\tilde{L}^k\partial_t^j, \Box-\Box_\gamma] w_m|\Bigr) \lesssim \sum_{|\nu|\le 41} |\partial^\nu (u_0+w_{m-1})| \sum_{|\mu| \le 80} |L\partial^\mu w_m'| \\+ \Bigl(\sum_{|\mu|\le 41} (|\partial^\mu (w_{m-1}+u_0)| + |\partial^\mu (w_m+u_0)'|)\Bigr) \sum_{|\mu|\le 80} |L \partial^\mu (w_{m-1}+u_0)'| \\+\sum_{\substack{j+|\mu|\le 41\\j\le 1}} |L^j \partial^\mu (w_{m-1}+u_0)| \Bigl(\sum_{|\nu|\le 81} (|\partial^\nu (w_m+u_0)'| + |\partial^\nu(w_{m-1}+u_0)'| + |\partial^\nu u_0|)\Bigr) \\+\sum_{\substack{j+|\mu|\le 41\\j\le 1}} |L^j \partial^\mu (w_m+u_0)'| \sum_{|\nu|\le 81} |\partial^\nu (w_{m-1}+u_0)'| \\+ |w_{m-1}+u_0|^2 + \sum_{|\mu|\le 81} |\partial^\mu [\Box,\eta] u|. \end{multline} For the last term, we are using the assumption that the Cauchy data are compactly supported and finite propagation speed in order to guarantee that the coefficients of $L$ are $O(1)$ on the support of $[\Box,\eta]u$. For the terms on the third and fourth lines, we shall apply \eqref{wtdSob} on dyadic intervals and then sum over those dyadic intervals. See \cite{KSS} for a more detailed computation. Upon doing so, we see that the $L^2$ norm of the terms on the third and fourth lines above is bounded by \begin{multline*} \sum_{\substack{j+|\mu|\le 43\\j\le 1}} \|{\langle} x{\rangle}^{-1/4} L^j Z^\mu (w_{m-1}+u_0)\|_2\\\times \Bigl(\sum_{|\nu|\le 81} (\|{\langle} x{\rangle}^{-3/4}\partial^\nu (w_m+u_0)'\|_2 + \|{\langle} x{\rangle}^{-3/4} \partial^\nu(w_{m-1}+u_0)'\|_2 + \|{\langle} x{\rangle}^{-3/4} \partial^\nu u_0\|_2)\Bigr) \\+\sum_{\substack{j+|\mu|\le 41\\j\le 1}} \|{\langle} x{\rangle}^{-1/4}L^j Z^\mu (w_m+u_0)'\|_2 \sum_{|\nu|\le 81} \|{\langle} x{\rangle}^{-3/4}\partial^\nu (w_{m-1}+u_0)'\|_2. 
\end{multline*} When integrated in $t$, we apply the Schwarz inequality and utilize terms $II$, $IV$, and $VIII$ of \eqref{M} to establish control. For the terms in the right side of \eqref{LproductRule} which are on the first and second lines, we apply $IX$ and $V$ of \eqref{M}. And control for the second to last term in \eqref{LproductRule} follows from terms $IX$ and $III$ of \eqref{M}. Arguing as such yields the bound \begin{multline*} \int_0^T \sum_{\substack{j+k\le 81\\k\le 1}} \Bigl(\|\tilde{L}^k \partial_t^j \Box_\gamma w_m(t,{\,\cdot\,})\|_2 + \|[\tilde{L}^k\partial_t^j, \gamma^{\alpha\beta}\partial_\alpha\partial_\beta]w_m(t,{\,\cdot\,})\|_2\Bigr)\,dt \\\le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)){\langle} T{\rangle}^{1/4} + C_2\varepsilon. \end{multline*} The integrand of the last term in \eqref{VneedToBound} is also controlled by the right side of \eqref{LproductRule}. As there is no time integral, it can be bounded much more easily just using Sobolev embedding and terms $I$ and $V$ of \eqref{M}. No loss of ${\langle} T{\rangle}^{1/4}$ is necessitated here. The third term in \eqref{VneedToBound} was previously controlled while establishing the bound for, say, $II$. For this term, as we saw previously, logarithmic losses would suffice.
It only remains to establish control for \[\sum_{|\mu|\le 80} \int_0^T \|\partial^\mu w_m'(t,{\,\cdot\,})\|_{L^2(|x|<1)}\,dt.\] We apply \eqref{ms_bdy} and see that it suffices to bound \[\sum_{|\mu|\le 81} \int_0^T \int_0^s \|\partial^\mu \Box w_m(\tau,{\,\cdot\,})\|_{L^2(||x|-(s-\tau)|<10)}\,d\tau\,ds.\] Similar to \eqref{productrule}, we have \begin{multline*} \sum_{|\mu|\le 81} |\partial^\mu \Box w_m| \lesssim \sum_{|\nu|\le 41} |\partial^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 82} |\partial^\mu (u_0+w_m)'|\\ + \sum_{|\nu|\le 41} |\partial^\nu (u_0+w_{m})'| \sum_{|\mu|\le 81} |\partial^\mu (u_0+w_{m-1})'| + \sum_{|\nu|\le 41} |\partial^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 81} |\partial^\mu (u_0+w_{m-1})'| \\+ |u_0+w_{m-1}|^2 + \sum_{|\mu|\le 81} |\partial^\mu [\Box,\eta]u|. \end{multline*} With the exception of the last term above, for which we instead use \eqref{local}, we apply \eqref{wtdSob2} to see that \begin{multline*} \sum_{|\mu|\le 81} \|\partial^\mu \Box w_m\|_{L^2(||x|-(s-\tau)|<10)} \\\lesssim \sum_{|\nu|\le 43} \|{\langle} x{\rangle}^{-1/4} Z^\nu (u_0+w_{m-1})\|_{L^2(||x|-(s-\tau)|<20)} \sum_{|\mu|\le 82} \|{\langle} x{\rangle}^{-3/4} \partial^\mu (u_0+w_m)'\|_{L^2(||x|-(s-\tau)|<20)}\\ + \sum_{|\nu|\le 43} \|{\langle} x{\rangle}^{-1/4}Z^\nu (u_0+w_{m})'\|_{L^2(||x|-(s-\tau)|<20)} \sum_{|\mu|\le 81} \|{\langle} x{\rangle}^{-3/4} \partial^\mu (u_0+w_{m-1})'\|_{L^2(||x|-(s-\tau)|<20)} \\+ \sum_{|\nu|\le 43} \|{\langle} x{\rangle}^{-1/4} Z^\nu (u_0+w_{m-1})\|_{L^2(||x|-(s-\tau)|<20)} \sum_{|\mu|\le 81} \|{\langle} x{\rangle}^{-3/4}\partial^\mu (u_0+w_{m-1})'\|_{L^2(||x|-(s-\tau)|<20)} \\+ \sum_{|\nu|\le 2} \|{\langle} x{\rangle}^{-1/4} Z^\nu(u_0+w_{m-1})\|^2_{L^2(||x|-(s-\tau)|<20)} + \sum_{|\mu|\le 81} \|\partial^\mu [\Box,\eta]u\|_{L^2(||x|-(s-\tau)|<10)}. 
\end{multline*} Since the sets $\{||x|-(j-\tau)|<20\}$ have finite overlap as $j$ ranges over the nonnegative integers, it follows, upon integrating in $\tau$ and $s$, that \begin{multline*} \sum_{|\mu|\le 81} \int_0^T \int_0^s \|\partial^\mu \Box w_m\|_{L^2(||x|-(s-\tau)|<10)} \,d\tau\,ds\\\lesssim \sum_{|\nu|\le 43} \|{\langle} x{\rangle}^{-1/4} Z^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{|\mu|\le 82} \|{\langle} x{\rangle}^{-3/4} \partial^\mu (u_0+w_m)'\|_{L^2_{t,x}(S_T)}\\ + \sum_{|\nu|\le 43} \|{\langle} x{\rangle}^{-1/4}Z^\nu (u_0+w_{m})'\|_{L^2_{t,x}(S_T)} \sum_{|\mu|\le 81} \|{\langle} x{\rangle}^{-3/4} \partial^\mu (u_0+w_{m-1})'\|_{L^2_{t,x}(S_T)} \\+ \sum_{|\nu|\le 43} \|{\langle} x{\rangle}^{-1/4} Z^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{|\mu|\le 81} \|{\langle} x{\rangle}^{-3/4}\partial^\mu (u_0+w_{m-1})'\|_{L^2_{t,x}(S_T)} \\+ \sum_{|\nu|\le 2} \|{\langle} x{\rangle}^{-1/4} Z^\nu(u_0+w_{m-1})\|^2_{L^2_{t,x}(S_T)} +\varepsilon. \end{multline*} Here we have also employed \eqref{local}. And thus, we see that this boundary term is \[\le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_{m}(T)){\langle} T{\rangle}^{1/2} + C_2\varepsilon,\] and hence, when combined with the above, \[V\le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_{m}(T)){\langle} T{\rangle}^{1/2} + C_2\varepsilon.\] {\bf \em Bound for $VI$:} The arguments to bound terms $VI$, $VII$, and $VIII$ follow those of the corresponding terms with no $L$ quite closely. Indeed, for $VI$, we, as in the bound for $II$, apply \eqref{DZle} to $L \partial^\mu w_m$ cut off away from the boundary and \eqref{locEnergyExt} to both the remaining portion as well as the compactly supported commutator which results from cutting off above.
It remains then to control \begin{equation}\label{VI_RHS}\sum_{\substack{j+|\mu|\le 77\\j\le 1}} \int_0^T \|L^j\partial^\mu \Box w_m(s,{\,\cdot\,})\|_2\,ds + \sum_{\substack{j+|\mu|\le 75\\j\le 1}} \|L^j \partial^\mu \Box w_m\|_{L^2_{t,x}(S_T)}.\end{equation} By Sobolev embeddings, it suffices to control the first term. We must now take a little care with the location of the scaling vector field. Playing the role of \eqref{productrule}, we have \begin{multline*} \sum_{\substack{j+|\mu|\le 77\\j\le 1}} |L^j \partial^\mu \Box w_m| \lesssim \sum_{|\nu|\le 40} |\partial^\nu (u_0+w_{m-1})| \sum_{\substack{j+|\mu|\le 78\\j\le 1}} |L^j\partial^\mu (u_0+w_m)'|\\ + \sum_{\substack{j+|\nu|\le 40\\j\le 1}} |L^j \partial^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 78} |\partial^\mu (u_0+w_m)'|\\ + \sum_{|\nu|\le 41} |\partial^\nu (u_0+w_{m})'| \sum_{\substack{j+|\mu|\le 77\\j\le 1}} |L^j \partial^\mu (u_0+w_{m-1})'| \\+ \sum_{\substack{j+|\nu|\le 41\\j\le 1}} |L^j \partial^\nu (u_0+w_{m})'| \sum_{|\mu|\le 77} |\partial^\mu (u_0+w_{m-1})'| \\+ \sum_{|\nu|\le 40} |\partial^\nu (u_0+w_{m-1})| \sum_{\substack{j+|\mu|\le 77\\j\le 1}} |L^j \partial^\mu (u_0+w_{m-1})'| \\+ \sum_{\substack{j+|\nu|\le 40\\j\le 1}} |L^j \partial^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 77} |\partial^\mu (u_0+w_{m-1})'| \\+ |u_0+w_{m-1}| \sum_{j\le 1} |L^j (u_0+w_{m-1})| + \sum_{\substack{j+|\mu|\le 78\\j\le 1}} |L^j \partial^\mu [\Box,\eta]u|. \end{multline*} When the scaling vector field is on the higher order term, we shall generally utilize terms $V$ and $IX$ of \eqref{M}. Alternatively, when the scaling vector field is on the lower order term, we utilize \eqref{wtdSob}. More specifically, we decompose dyadically in $x$, apply \eqref{wtdSob}, and apply the Schwarz inequality in both the dyadic summation variable and in $t$ as above. Upon doing so, we can bound utilizing $II$, $IV$ and $VII$ instead. For the second to last term, $VII$ and $IX$ are employed.
And finally, \eqref{local} provides the bound for the final term. We illustrate arguing in this fashion by examining the $L^1([0,T];L^2({\R^3\backslash\K}))$-norm of the first two terms in the right side. Indeed, the norm of these terms is bounded by \begin{multline*} \int_0^T \sum_{|\nu|\le 40} \|\partial^\nu (u_0+w_{m-1})(s,{\,\cdot\,})\|_\infty \sum_{\substack{j+|\mu|\le 78\\j\le 1}} \|L^j\partial^\mu (u_0+w_m)'(s,{\,\cdot\,})\|_2\,ds\\ + \sum_{\substack{j+|\nu|\le 42\\j\le 1}} \|{\langle} x{\rangle}^{-1/4} L^j Z^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{|\mu|\le 78} \|{\langle} x{\rangle}^{-3/4} \partial^\mu (u_0+w_m)'\|_{L^2_{t,x}(S_T)}. \end{multline*} And using $IX$, $V$, $VIII$, and $II$, this is \[\lesssim (M_0(T)+M_{m-1}(T))(M_0(T)+M_{m}(T)){\langle} T{\rangle}^{1/4}.\] The remaining terms above are handled in a directly analogous way, which yields \[ VI\le C (M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)){\langle} T{\rangle}^{1/4} + C_2\varepsilon. \] {\bf \em Bound for $VII$ and $VIII$ with $|\nu|=0$:} We fix $\beta$ as above. It suffices to bound $(1-\beta)L Z^\mu w_m$ as the Dirichlet boundary conditions and the boundedness of the coefficients of $Z$ on the support of $\beta$ allow us to control $\beta L Z^\mu w_m$ using term $VI$. For $\mu$ fixed, we decompose \begin{multline*} (1-\beta)L Z^\mu \Box w_m \\= \partial_\alpha[(1-\eta)(1-\beta) (b^{\alpha\beta} (u_0+w_{m-1}) L Z^\mu \partial_\beta (u_0+w_m) + b^{\alpha\beta}_\sigma \partial_\sigma (u_0+w_{m-1}) L Z^\mu \partial_\beta (u_0+w_m))] \\+\tilde{G}_m(t,x). \end{multline*} We apply \eqref{divForm}, \eqref{noD}, and \eqref{noDcomm}. The latter is used for $[\Box,\beta] LZ^\mu w_m$.
This yields \begin{multline}\label{LzeroCase} \sup_{t\in [0,T]} \sum_{|\mu|\le 70} \|(1-\beta) L Z^\mu w_m(t,{\,\cdot\,})\|_2 + \sum_{|\mu|\le 70} {\langle} T{\rangle}^{-1/4} \|{\langle} x{\rangle}^{-1/4} (1-\beta)L Z^\mu w_m\|_{L^2_{t,x}(S_T)} \\\lesssim \int_0^T \|{\langle} x{\rangle}^{-1/2}\tilde{G}_m(s,{\,\cdot\,})\|_{L^1_rL^2_\omega}\,ds +\sum_{|\mu|\le 70}\sum_{|\theta|\le 1}\int_0^T \| |\partial^\theta (u_0+w_{m-1})| |L Z^\mu (u_0+w_m)'|\|_2\,ds \\+ \sum_{|\mu|\le 70} \|L \partial^\mu w_m'\|_{L^2_{t,x}([0,T]\times\{|x|<3\})}. \end{multline} The last term is controlled by $VI$, and the bound previously established for $VI$ shall simply be cited to control this piece. To control the first term in the right of \eqref{LzeroCase}, we need not be as precise with the location of the scaling vector fields, and indeed, we can crudely utilize the obvious analog of \eqref{productrule2} where every term on the right is permitted at most one occurrence of $L$. Upon doing so and using Sobolev embedding on $\S^2$ as well as \eqref{local}, we obtain \begin{multline*} \sum_{|\mu|\le 70} \int_0^T \|{\langle} x{\rangle}^{-1/2} \tilde{G}_m\|_{L^1_rL^2_\omega}\,ds \lesssim \varepsilon \\+ \sum_{\substack{|\nu|\le 38\\ j\le 1}} \|{\langle} x{\rangle}^{-1/4} L^j Z^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{\substack{|\mu|\le 70\\k\le 1}} \|{\langle} x{\rangle}^{-1/4} L^k Z^\mu (u_0+w_m)'\|_{L^2_{t,x}(S_T)} \\+ \sum_{\substack{|\nu|\le 38\\j\le 1}} \|{\langle} x{\rangle}^{-1/4} L^j Z^\nu (u_0+w_m)'\|_{L^2_{t,x}(S_T)} \sum_{\substack{|\mu|\le 70\\k\le 1\\|\theta|\le 1}} \|{\langle} x{\rangle}^{-1/4} L^k Z^\mu \partial^\theta (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \\+ \sum_{\substack{|\nu|\le 38\\j\le 1}} \|{\langle} x{\rangle}^{-1/4} L^j Z^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{\substack{|\mu|\le 70\\k\le 1\\|\theta|\le 1}} \|{\langle} x{\rangle}^{-1/4}L^k Z^\mu \partial^\theta (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)}. 
\end{multline*} Each of these individual factors is controlled either by ${\langle} T{\rangle}^{1/4} IV$ or ${\langle} T{\rangle}^{1/4} VIII$. Thus, the first term in the right of \eqref{LzeroCase} is \[\le C{\langle} T{\rangle}^{1/2} (M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)) + C_2\varepsilon.\] For the second term in the right side of \eqref{LzeroCase}, we may simply use $IX$ and $VII$ to immediately see that it is \[\le C (M_0(T)+M_{m-1}(T))(M_0(T)+M_m(T))\log(2+T).\] And thus, by combining the above bounds, we see that \[VII\Bigr|_{|\nu|=0} + VIII\Bigr|_{|\nu|=0} \le C{\langle} T{\rangle}^{1/2} (M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)) + C_2\varepsilon.\] {\em \bf Bound for $VII$ and $VIII$ with $|\nu|=1$:} Here, again, it suffices to control the given norm when $w_m$ is replaced by $(1-\beta) w_m$. As the coefficients of $Z$ are $O(1)$ on the support of $\beta$, the corresponding $\beta w_m$ terms are subject to the bounds previously established for $V$ and $VI$. Applying \eqref{le} and \eqref{DZle} to $(1-\beta) L Z^\mu w_m$, one obtains \begin{multline*} \sup_{t\in [0,T]} \sum_{|\mu|\le 70} \|L Z^\mu w_m'(t,{\,\cdot\,})\|_2 + \sum_{|\mu|\le 70} {\langle} T{\rangle}^{-1/4} \|{\langle} x{\rangle}^{-1/4} L Z^\mu w_m'\|_{L^2_{t,x}(S_T)} \\\lesssim \sum_{\substack{|\mu|\le 70\\j\le 1}} \int_0^T \|L^j Z^\mu \Box w_m(s,{\,\cdot\,})\|_2\,ds + \sum_{|\mu|\le 71} \|L \partial^\mu w_m\|_{L^2_{t,x}([0,T]\times \{|x|<3\})}. \end{multline*} The bound for term $VI$ applies to the latter term on the right side. Here we need to pay attention to the location of $L$ for terms involving $\partial^2 w_m$, but for the other terms we may be more crude and simply permit up to one occurrence of $L$ on each factor. For everything except for the case that all of the vector fields land on $\partial^2 w_m$, we dyadically decompose, apply \eqref{wtdSob}, and use the Cauchy-Schwarz inequality in the dyadic variable as well as $t$. 
Upon doing so, we obtain that the first term in the right side of the preceding equation is controlled by \begin{multline*} \int_0^T \sum_{|\nu|\le 36} \|Z^\nu (u_0+w_{m-1})(s,{\,\cdot\,})\|_\infty \sum_{|\mu|\le 70} \|L Z^\mu \partial^2 (u_0+w_m)(s,{\,\cdot\,})\|_2\,ds \\ + \sum_{\substack{|\nu|\le 38\\j\le 1}} \|{\langle} x{\rangle}^{-1/2} L^j Z^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{|\mu|\le 71} \|{\langle} x{\rangle}^{-1/2} Z^\mu \partial (u_0+w_m)\|_{L^2_{t,x}(S_T)} \\+ \sum_{\substack{|\nu|\le 38\\j\le 1}} \|{\langle} x{\rangle}^{-1/2} L^j Z^\nu \partial (u_0+w_m)\|_{L^2_{t,x}(S_T)} \sum_{\substack{|\mu|\le 70\\k\le 1\\|\nu|\le 1}} \|{\langle} x{\rangle}^{-1/2} L^k Z^\mu \partial^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \\+ \sum_{\substack{|\nu|\le 38\\j\le 1}} \|{\langle} x{\rangle}^{-1/2} L^j Z^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \sum_{\substack{|\mu|\le 70\\k\le 1\\|\nu|\le 1}} \|{\langle} x{\rangle}^{-1/2} L^k Z^\mu \partial^\nu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \\+ \int_0^T \sum_{\substack{|\mu|\le 70\\j\le 1}} \|L^j Z^\mu [\Box,\eta]u(s,{\,\cdot\,})\|_2\,ds. \end{multline*} The first term is bounded using $VII$ and $IX$, while for the next three, term $VIII$ is primarily used. The only exception is the second factor in the second term where $IV$ is applied. The estimate \eqref{local} is used to control the last term. This shows that the above yields \[VII\Bigr|_{|\nu|=1} + VIII\Bigr|_{|\nu|=1} \le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)){\langle} T{\rangle}^{1/2} + C_2\varepsilon.\] Because the weights that appear in the $L^2_{t,x}$ norms are ${\langle} x{\rangle}^{-1/2}$ rather than ${\langle} x{\rangle}^{-1/4}$, it would be rather easy to replace ${\langle} T{\rangle}^{1/2}$ in this bound by $\log(2+T)$. As this will not improve the final lifespan, we choose to not lengthen the argument in order to show this.
{\em\bf Bound for $VII$ with $|\nu|=2$:} With $\gamma$ chosen as in \eqref{gamma}, which satisfies \eqref{gammaDecay1} and \eqref{gammaDecay2} with $\delta=C_1\varepsilon$, we employ \eqref{LZenergy}. After integrating, applying \eqref{gammaDecay2}, using that $T\le T_\varepsilon$ and bootstrapping, it suffices to bound \[\int_0^T \sum_{\substack{|\mu|\le 70\\|\nu|=1}} \|\Box_\gamma L Z^\mu \partial^\nu w_m(t,{\,\cdot\,})\|_2\,dt + \sum_{\substack{|\mu|\le 72\\k\le 1}} \|L^k \partial^\mu w_m'\|_{L^2_{t,x}([0,T]\times \{|x|<1\})}.\] The latter term is controlled by $VI$ of \eqref{M}, and the bound proved previously for that term suffices. Here, we have \begin{multline*} \sum_{\substack{|\mu|\le 70\\|\nu|=1}} |\Box_\gamma L Z^\mu \partial^\nu w_m|\lesssim \sum_{|\nu|\le 37} |Z^\nu (u_0+w_{m-1})| \Bigl[\sum_{\substack{|\mu|\le 70\\|\nu|\le 1\\j\le 1}} |L^j Z^\mu \partial^\nu (u_0+w_m)'| + \sum_{|\mu|\le 73} |\partial^\mu u_0|\Bigr] \\+ \sum_{\substack{|\nu|\le 37\\j\le 1}} |L^j Z^\nu (u_0+w_{m-1})| \Bigl[\sum_{|\mu|\le 71} |Z^\mu (u_0+w_m)'| + \sum_{|\mu|\le 73} |\partial^\mu u_0|\Bigr] \\+ \sum_{|\nu|\le 37} |Z^\nu (u_0+w_m)'| \sum_{\substack{|\mu|\le 70\\|\nu|\le 2\\j\le 1}} |L^j Z^\mu \partial^\nu (u_0+w_{m-1})| \\+ \sum_{\substack{|\nu|\le 37\\j\le 1}} |L^j Z^\nu (u_0+w_m)'| \sum_{|\mu|\le 72} |Z^\mu (u_0+w_{m-1})| \\+ \sum_{|\nu|\le 37} |Z^\nu (u_0+w_{m-1})| \sum_{\substack{|\mu|\le 70\\|\nu|\le 2\\j\le 1}} |L^j Z^\mu \partial^\nu (u_0+w_{m-1})| \\+ \sum_{\substack{|\nu|\le 37\\j\le 1}} |L^j Z^\nu (u_0+w_{m-1})| \sum_{|\mu|\le 72} |Z^\mu (u_0+w_{m-1})| +\sum_{|\mu|\le 72} |\partial^\mu [\Box,\eta]u|. \end{multline*} Here we have used the assumption that the Cauchy data are compactly supported and that finite propagation speed guarantees that the coefficients of $L$ and $Z$ are $O(1)$ on the supports of $u_0$ and $[\Box,\eta]u$. The method of bounding the above terms in $L^1_t([0,T]; L^2_x)$ depends on the location of the scaling vector field.
We will illustrate the method on the terms in the third and fourth lines above. The remaining pieces are controlled in a directly analogous manner. When the scaling vector field is on the higher order factor, we shall use $IX$ and $VII$ (and $III$ in the case that no $L$ appears). When $L$ lands on the lower order piece, we shall instead apply \eqref{wtdSob} as above. Doing so gives the following upper bound on the $L^1_t([0,T];L^2_x)$ norm of the terms in the third and fourth lines above: \begin{multline*} \int_0^T \sum_{|\nu|\le 37} \|Z^\nu (u_0+w_m)'(t,{\,\cdot\,})\|_\infty \sum_{\substack{|\mu|\le 70\\|\nu|\le 2\\j\le 1}} \|L^j Z^\mu \partial^\nu (u_0+w_{m-1})(t,{\,\cdot\,})\|_2\,dt \\+ \sum_{\substack{|\nu|\le 39\\j\le 1}} \|{\langle} x{\rangle}^{-1/2} L^j Z^\nu (u_0+w_m)'\|_{L^2_{t,x}(S_T)} \sum_{|\mu|\le 72} \|{\langle} x{\rangle}^{-1/2} Z^\mu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)}. \end{multline*} By now citing $IX$, $VII$, and $VIII$ of \eqref{M} and arguing similarly for the remaining nonlinear terms, we see that \[VII\Bigr|_{|\nu|=2} \le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)) {\langle} T{\rangle}^{1/2} + C_2\varepsilon.\] {\em \bf Bound for $IX$:} Here, we apply \eqref{l1linf}, and we are left with bounding \begin{equation}\label{RHSl1linf}\int_0^T \int \sum_{\substack{|\mu|+j\le 67\\j\le 1}} |L^j Z^\mu \Box w_m(s,y)|\,\frac{dy\,ds}{|y|} + \int_0^T \sum_{\substack{|\mu|+j\le 64\\j\le 1}} \|L^j \partial^\mu \Box w_m(s,{\,\cdot\,})\|_{L^2(|x|<2)}\,ds.\end{equation} For the first term, we use an analog of \eqref{productrule} for the given vector fields and apply the Schwarz inequality.
Upon doing so, we find that it is bounded by \begin{multline*} \sum_{\substack{|\mu|+j\le 69\\j\le 1}} \|{\langle} x{\rangle}^{-1/2} L^j Z^\mu (u_0+w_m)\|_{L^2_{t,x}(S_T)} \sum_{\substack{|\mu|+j\le 68\\j\le 1}} \|{\langle} x{\rangle}^{-1/2} L^j Z^\mu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)} \\+ \Bigl(\sum_{\substack{|\mu|+j\le 68\\j\le 1}} \|{\langle} x{\rangle}^{-1/2} L^j Z^\mu (u_0+w_{m-1})\|_{L^2_{t,x}(S_T)}\Bigr)^2 + C_2\varepsilon. \end{multline*} Control for this term follows from $IV$ and $VIII$ of \eqref{M}. And the second term of \eqref{RHSl1linf} was previously controlled in the process of bounding term $VI$ of \eqref{M}. It follows that \[IX \le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)){\langle} T{\rangle}^{1/2} + C_2\varepsilon.\] {\em \bf Boundedness of $M_m(T)$:} If we combine the estimates for terms $I$, \dots, $IX$ just established, it follows that \[M_m(T)\le C(M_0(T)+M_{m-1}(T))(M_0(T)+M_{m-1}(T)+M_m(T)){\langle} T{\rangle}^{1/2} + C_2\varepsilon\] where $C_2$ is now a fixed constant which is independent of $m$, $\varepsilon$, and $T$. If $C_1$ is chosen so that $C_1>2C_2$ and if we apply \eqref{basecase} as well as the inductive hypothesis, it follows that \[M_m(T)\le C\varepsilon (\varepsilon +M_m(T)){\langle} T{\rangle}^{1/2} + \frac{C_1}{2}\varepsilon.\] Using \eqref{lifespan} with $c$ sufficiently small, we may then bootstrap to obtain \eqref{bddness} for $\varepsilon$ small enough, as desired. {\em \bf Convergence of the sequence $\{w_m\}$:} We now complete the proof of Theorem \ref{thm1} by showing that the sequence $\{w_m\}$ is Cauchy. Standard results show that the limiting function solves \eqref{reduced}, which is equivalent to the existence promised in Theorem \ref{thm1}. Indeed, if we set \[A_m(T) = \sup_{t\in [0,T]} \sum_{|\mu|\le 90} \|\partial^\mu (w_m-w_{m-1})(t,{\,\cdot\,})\|_2,\] we may argue quite similarly to the above.
Upon doing so and using \eqref{bddness}, it can be shown that \[A_m(T)\le \frac{1}{2}A_{m-1}(T)\] for $T\le T_\varepsilon$, which completes the proof.
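Spelling out the final step, which is routine: iterating the contraction yields
\[
A_m(T)\le \Bigl(\frac{1}{2}\Bigr)^{m-1} A_1(T), \qquad\text{whence}\qquad \sum_{m\ge 1} A_m(T)\le 2A_1(T)<\infty,
\]
so $\{w_m\}$ is Cauchy with respect to the norm defining $A_m(T)$, and the limit solves \eqref{reduced} as claimed.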
\section{Introduction\label{sec:intro}} Quasars are one of the most powerful astrophysical sources in the universe, with a bolometric luminosity of $L_{\rm Bol} \gtrsim 10^{45}$ erg s$^{-1}$ (e.g., \citealp{Hickox+2018}). Owing to their high luminosities, we can find them even in the distant universe ($z>7$; \citealp{Mortlock+2011, Banados+2018, Matsuoka+2019, Yang+2020a, Wang+2021}). The discovery of quasars in the early universe enables us to investigate how the intergalactic medium (IGM) was ionized and how supermassive black holes (SMBHs) grew (e.g., \citealt{Fan+2006, Trakhtenbrot+2011, KimYJ+2018, Onoue+2019, Yang+2020b}). By constructing the quasar luminosity function (LF) at $z\gtrsim4$ with several reasonable assumptions, the contribution of quasars to keeping the IGM ionized can be estimated \citep{Jiang+2016, Akiyama+2018, Matsuoka+2018b, McGreer+2018, Kim+2020, Shin+2020}. Given the short time available for SMBHs to grow between the birth of seed BHs and the cosmic age at which the highest-redshift known quasars reside, examining the properties of early-universe quasars can give us insight into how these SMBHs came to exist (e.g., \citealt{Banados+2018, Yang+2020a}). \indent Although the expected contribution to the UV background at $z\sim5$ of quasars with the rest-frame absolute magnitude at 1450 \r{A} ($M_{1450}$) between $-25$ and $-22$ mag is comparable to or greater than that of brighter quasars \citep{Kim+2020}, the number of currently identified faint quasars with $-25 < M_{1450} < -22$ is still much smaller than that of brighter quasars. This is because previous quasar searches have mostly relied on wide-field surveys with shallow depths. The situation is improving with deeper data becoming available for high-redshift quasar searches: the Canada–France–Hawaii Telescope Legacy Survey (CFHTLS; \citealt{Gwyn2012}); the Dark Energy Survey (DES; Dark Energy Survey Collaboration et al.
2016); the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP; \citealt{Aihara+2019}); the Infrared Medium-deep Survey (IMS; M. Im et al. 2022, in preparation). \defcitealias{Niida+2020}{N20} \indent However, finding faint quasars is more challenging than finding bright ones, even with deeper images. High-redshift quasars can be identified by the strong break in their spectral energy distributions (SEDs) caused by the redshifted Lyman break \citep{Jiang+2016, Jeon+2017, McGreer+2018, KimYJ+2019, Shin+2020}. However, as we go fainter in magnitude, there is a rapid increase in the number of other types of objects, primarily late-type stars and high-redshift galaxies, that mimic this break. Conventional selection based on only a few broadband colors is not enough to reject the many contaminants among faint quasar candidates (\citealt{Matsuoka+2018a, Niida+2020}, hereafter, \citetalias{Niida+2020}). SED model fitting is a physically meaningful approach for selecting quasars \citep{Reed+2017}. However, it would require a considerable amount of computing resources if one hopes to find faint quasars among the many contaminants. \indent In our previous work (\citealp{Shin+2022}, accepted), we developed a novel method for selecting quasar candidates that combines deep learning and the Bayesian information criterion (BIC). We applied this method to the HSC-SSP data, which reach a 5-$\sigma$ depth of $i\sim26$ mag for point-source detection \citep{Aihara+2019}, corresponding to $M_{1450} \sim -20$ mag for $z\sim5$ quasars. We identified 35 faint quasar candidates, five being previously known quasars. \indent In this paper, we report our spectroscopic observations of four quasar candidates with $i < 23$ mag in an attempt to further confirm the effectiveness of our new quasar selection method. In Section~\ref{sec:selection}, we briefly describe the photometric selection process for quasars at $z\sim5$.
The specifications of the spectroscopic observations are given in Section~\ref{sec:specinfo}. Section~\ref{sec:spectralfitting} includes the spectral fitting procedure and the black hole (BH) mass measurement from the C \Romannum{4} emission line. The efficiency of our quasar selection is addressed in Section~\ref{sec:efficiency}. Section~\ref{sec:sum} summarizes our findings. We assume cosmological parameters $\Omega_{M} = 0.3$, $\Omega_{\Lambda} = 0.7$ and $H_{0} = 70$ km~s$^{-1}$ Mpc$^{-1}$ throughout the paper. We adopted the AB magnitude system for representing a flux measured in each filter \citep{Oke+1983} and used the dust map of \citet{Schlegel+1998} to correct fluxes for the Galactic extinction. \begin{table*} \caption{General information of the targets and spectroscopic observations\label{tab:info}} \centering \begin{tabular}{llrrrrrrrll} \toprule R.A. & Decl. & $g$ & $r$ & $i$ & $NB816$ & $z$ & $NB921$ & $y$ & Date & ExpTime \\ (J2000) & (J2000) & [mag] & [mag] & [mag] & [mag] & [mag] & [mag] & [mag] & & [sec] \\ \midrule 02:17:33.44 & -4:44:44.32 & 25.7 & 23.7 & 22.2 & 22.4 & 22.1 & 22.1 & 22.3 & 2020 Nov 19 & 1500 \\ 16:18:27.28 & 55:17:48.51 & 25.3 & 22.7 & 21.1 & 21.1 & 20.9 & 20.9 & 20.8 & 2021 Jul 13 & 600 \\ 23:27:13.22 & 0:05:47.92 & $>$ 27.3 & 25.1 & 22.3 & 21.3 & 22.0 & 21.7 & 21.8 & 2020 Nov 19 & 1800 \\ 23:31:07.00 & -0:10:14.52 & $>$ 27.3 & 24.4 & 22.6 & 23.2 & 22.6 & 21.8 & 22.7 & 2020 Nov 19 & 1800 \\ \bottomrule \end{tabular} \tabnote{The magnitude errors are mostly less than 0.03 mag.} \end{table*} \section{Quasar selection \label{sec:selection}} Faint quasar candidates at $z\sim5$ were selected in our previous work (\citealp{Shin+2022}, accepted). Here, we briefly explain how the quasar candidates were selected. \subsection{Photometric data} The HSC-SSP has Wide, Deep, and UltraDeep layers. Taking advantage of its deep images ($i \sim 26$ mag) and moderate survey area (27 deg$^{2}$), we chose the Deep layer among them for our quasar search.
The Deep layer consists of four fields covered by five broadbands ($g, r, i, z, y$) and two narrow-bands ($NB816, NB921$). The 5-$\sigma$ image depths of the broadbands ($g, r, i, z, y$) are 27.3, 26.9, 26.7, 26.3, and 25.3 mag, respectively. For $NB816$ and $NB921$, the depths are 26.1 and 25.9 mag, respectively. \indent We retrieved a source catalog of the second public data release (PDR2) of the HSC-SSP. To exclude objects with unreliable photometry, we adopted flags evaluating influences from bad pixels, cosmic rays, saturation at the center of an object, abnormal background levels, and the location of an object on an image. Also, we required that a source be a primary object with a detection in the $i$-band. After excluding the flagged objects, the resulting effective survey area of the Deep layer for our quasar search became $\sim 15.5$ deg$^{2}$ based on a random source catalog in the HSC-SSP data archive system \citep{Coupon+2018}, somewhat smaller than the nominal HSC-SSP Deep survey area of 27 deg$^2$. The number of sources in the catalog is 3.5 million. In this study, we use two magnitude systems: the point-spread function (PSF) magnitude and the CModel magnitude (CModel). \subsection{SED models} \label{sec:SEDmodels} Due to the small number of spectroscopically confirmed quasars at $z\sim5$ in the Deep layer of the HSC-SSP ($<10$), quasar SED models were used for training the deep learning models and calculating the BIC. Stellar SED models are also necessary for the SED fitting. Since these SED models were introduced in detail in \citealt{Shin+2022}, accepted, we describe them only briefly here. Our quasar SED models have four free parameters: redshift ($z_{\rm SED}$), continuum slope ($\alpha_{\lambda}$), equivalent width of Ly$\alpha$ and N \Romannum{5} $\lambda 1240$ (EW), and $M_{1450}$. We created a composite SED by adopting the SED at $\lambda < 1450$ \r{A} from \citet{Lusso+2015} and the redder part from \citet{Selsing+2016}.
Then, the Ly$\alpha$+N \Romannum{5} equivalent width and the continuum slopes were adjusted. The IGM attenuation model of \citet{Inoue+2014} was adopted. For the stellar SED models, we used the BT-Settl models \citep{Allard+2013}, which have five free parameters: effective temperature ($T_{\text{eff}}$), surface gravity (log($g$)), metallicity ([M/H]), alpha-element enrichment ([$\alpha$/M]), and a normalization factor (f$_{N}$). \subsection{Quasar selection process} We applied multiple criteria sequentially. Figure~\ref{fig:flowchart} shows the sequence of our selection. \begin{figure}[t] \centering \includegraphics[width=85mm]{Figure1.pdf} \caption{Flow chart for the entire quasar selection process. \label{fig:flowchart}} \end{figure} \indent First, we selected objects with $19 < i_{\rm PSF} < 25$ and magnitude error $<0.2$ mag (process (1)). Then, we eliminated extended sources by applying the extendedness cut of $i_{\rm PSF} - i_{\rm CModel} < 0.2$ (process (2)). With this procedure, we could retain about 98.4 $\%$ of the point sources identified in an $I$-band catalog of the Hubble Space Telescope (HST) Advanced Camera for Surveys \citep{Leauthaud+2007}. The number of candidates satisfying processes (1) and (2) was 333,780. Among them, there were six spectroscopically confirmed quasars at $4.5 < z < 5.5$ \citep{McGreer+2013, Paris+2018, Shin+2020}. \indent Since faint quasars at $z\sim5$ suffer strong IGM absorption blueward of the Ly$\alpha$ emission line, they tend to show a low signal-to-noise ratio (SNR $< 3$) in the $g$-band or a large $g-r$ color. To focus on the quasars at $4.5 < z < 5.5$, we required $g-r > 0.987$, the minimum $g-r$ value for quasar models in that redshift range. The number of candidates was then 125,644. \indent After choosing the red objects, we performed classification using deep learning. We trained 100 models to predict the class of an object based on its HSC-SSP photometry.
Our trained models assume that the red objects belong to one of two classes: quasars at $z\sim5$ (‘qso’) or non-quasar sources (‘nqso’). The training set for the ‘qso’ class comprised 100,000 randomly sampled quasar models at $4.5 < z < 5.5$. For the ‘nqso’ class, 100,000 randomly sampled point sources with $19 < i < 25$ and $\sigma_{i} < 0.2$ were used. Combining the results of the 100 deep learning models, we achieved an average accuracy larger than 99 $\%$ for the ‘qso’ class, and 1,599 candidates were selected by this ensemble learning. \indent To compensate for the simple approximations used in the deep learning and to deal with possibly misclassified ‘nqso’ objects, we carried out SED fitting of the 1,599 deep-learning-selected candidates using quasar and stellar SED models and compared the best-fit models by adopting the BIC. For each best-fit model, the BIC value can be calculated as \begin{equation} \label{equ:BIC} \rm{BIC} = \chi^2 + k \times \ln n, \end{equation} \noindent where $k$ is the number of free parameters in the model, $n$ is the number of data points, and $\chi^2$ is the chi-square value for the model. The BIC imposes a penalty on models with a large $k$, making a fair comparison between the given models possible. The difference between the BIC values, $\Delta\rm{BIC}$, is defined as \begin{equation} \Delta\rm{BIC} = \rm{BIC}_{\rm{star}} - \rm{BIC}_{\rm{quasar}}. \end{equation} The larger $\Delta\rm{BIC}$ is, the more likely the object is a quasar. We adopted a minimum $\Delta\rm{BIC}$ of 10 \citep{Liddle+2007}. The number of the BIC-selected candidates was 78. \indent We visually inspected the multi-band HSC-SSP images to remove candidates whose photometry could be affected by satellite tracks, nearby bright stars, optical ghosts, and scattered light. Among the 78 BIC-selected candidates, 53 passed the visual inspection.
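As a concrete illustration, the $\Delta\rm{BIC}$ comparison described above can be sketched as follows (a minimal sketch: the photometric-point count $n=7$ and the parameter counts $k=4$ for the quasar models and $k=5$ for the stellar models follow the SED model descriptions above, while the function names and the numerical $\chi^2$ values are purely illustrative):

```python
import math

# Sketch of the BIC-based model comparison (illustrative values only).
# BIC = chi^2 + k * ln(n); Delta BIC = BIC_star - BIC_quasar; a source
# is kept as a quasar candidate when Delta BIC >= 10.

def bic(chi2, k, n):
    """Bayesian information criterion for a best-fit model with
    chi-square chi2, k free parameters, and n photometric points."""
    return chi2 + k * math.log(n)

def passes_bic_cut(chi2_star, chi2_qso, n=7, k_star=5, k_qso=4,
                   threshold=10.0):
    """Compare the best-fit stellar model (5 free parameters) with the
    best-fit quasar model (4 free parameters) over n = 7 HSC bands."""
    delta_bic = bic(chi2_star, k_star, n) - bic(chi2_qso, k_qso, n)
    return delta_bic >= threshold
```

Note that the penalty term $k\ln n$ only shifts $\Delta\rm{BIC}$ by a constant here, since $n$ is the same for both fits; the comparison is then driven by the $\chi^2$ difference.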
\indent Finally, we restricted the candidates to $M_{1450}$ brighter than $-22.0$ mag, fainter than which the number density of quasar candidates can dramatically increase due to galaxy contamination (\citetalias{Niida+2020}, \citealt{Shin+2022}, accepted). The number of the final quasar candidates is 35. Five out of the 35 candidates are known quasars at $z\sim5$ \citep{McGreer+2013, Shin+2020}. Another four candidates had also been identified as promising candidates in \citet{McGreer+2018}, \citet{Chaves-Montero+2017} and \citet{Shin+2020}. Among these four candidates, a spectrum of one \citep{Shin+2020} was obtained in this study. \begin{table*} \caption{Best-fit spectral parameters\label{tab:specfit}} \centering \begin{tabular}{lrrrrr} \toprule id & $z_{\mathrm{Ly}\alpha}$ & $z_{\mathrm{spec}}$ & $\alpha_{\lambda}$ & $\log (EW)$ & $M_{1450}$ [mag] \\ \midrule HSC J021733-044444 & 4.807 & $4.783_{-0.003}^{+0.003}$ & $-1.58_{-0.37}^{+0.35}$ & $2.38_{-0.05}^{+0.05}$ & $-23.88_{-0.06}^{+0.06}$ \\ HSC J161827+551748 & 4.754 & $4.726_{-0.008}^{+0.005}$ & $-1.94_{-0.56}^{+0.53}$ & $1.71_{-0.17}^{+0.12}$ & $-25.13_{-0.06}^{+0.06}$ \\ HSC J232713+000547 & 5.591 & $5.561_{-0.006}^{+0.003}$ & $-1.40_{-0.70}^{+0.74}$ & $2.51_{-0.06}^{+0.06}$ & $-24.44_{-0.09}^{+0.10}$ \\ HSC J233107-001014 & 4.974 & $4.950_{-0.004}^{+0.002}$ & $-0.58_{-0.56}^{+0.52}$ & $2.62_{-0.10}^{+0.10}$ & $-23.03_{-0.16}^{+0.19}$ \\ \bottomrule \end{tabular} \end{table*} \section{Spectroscopy \label{sec:specinfo}} We conducted spectroscopic observations of four candidates with $i_{\rm PSF} < 23$ mag between 2020 November and 2021 July. Note that one of the targets, IMS J161827+551748, was also selected using a different selection method that included medium-band data \citep{Shin+2020}. We utilized the Double Spectrograph (DBSP) on the 200-inch Hale Telescope at the Palomar Observatory (PID: CTAP2020-B0043 and CTAP2021-A0032, PI: Y.Kim).
The DBSP observes the red and blue channels simultaneously, split by a dichroic filter; we used the D55 dichroic. We used a long-slit mode with a slit 1.5 arcsec wide and 128 arcsec long. The 316 (600) lines$/$mm grating blazed at 7500 (4000) \r{A} was chosen for the red (blue) channel. The exposure times were 600 to 1800 seconds. The typical seeing during the observing period was $\sim$ 1.1 to 1.5 arcsec. \indent After acquiring the spectroscopic data, we pre-processed the data with the Python package {\tt \string PypeIt} \citep{Prochaska+2020a, Prochaska+2020b}. Owing to the expected IGM absorption in the blue channel, we only considered the red channel in this study. {\tt \string PypeIt} automatically subtracts the bias, performs flat-fielding, derives a wavelength solution, and models the sky background. For the wavelength calibration, we used a HeNeAr lamp. Since our targets are too faint to be detected with the default algorithm, we manually extracted the fluxes of a standard star and the targets. We adjusted the spatial and spectral locations of the extraction window and its size to maximize the signal-to-noise ratio of each spectrum. \indent After reducing the data, we calculated a sensitivity function by comparing the observed fluxes of a standard star, Feige110, with its tabulated fluxes. After the sensitivity correction, we rescaled the spectrum using the HSC-SSP $i$-band magnitude to correct for a possible flux loss due to the finite slit width. \indent From the full width at half maximum (FWHM) of sky emission lines, we estimated the spectral resolution, $R = 800 - 1300$, which varied with the observed wavelength and the observation date. Table~\ref{tab:info} lists the coordinates, HSC-SSP photometry, observation date, and exposure time for the candidates.
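The instrumental velocity width implied by this resolution, which is later subtracted in quadrature from measured line widths, follows from $\mathrm{FWHM_{int}} = c/R$; a minimal sketch (the function names are ours):

```python
# Estimate the spectral resolution from a sky emission line and convert
# it to an instrumental FWHM in velocity units (FWHM_int = c / R).
C_KMS = 299792.458  # speed of light in km/s

def resolution_from_sky_line(lam_center, fwhm_lam):
    """R = lambda / Delta-lambda, with the sky-line center and its
    measured FWHM in the same wavelength units (e.g., Angstroms)."""
    return lam_center / fwhm_lam

def instrumental_fwhm_kms(R):
    """Instrumental broadening in velocity space for resolution R."""
    return C_KMS / R
```

At the high-resolution end quoted above ($R\sim1300$), this gives an instrumental FWHM of $\sim230$ km~s$^{-1}$, the value used for the C \Romannum{4} line-width correction later in the paper.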
\begin{figure}[t] \centering \includegraphics[width=85mm]{04_QsoSpec_rev.pdf} \caption{The 1-d and 2-d spectra of the four observed quasar candidates in sequence. The sky-blue bins and gray lines correspond to the 1-d binned spectra and their 1-$\sigma$ flux errors, respectively. The thick blue lines show the best-fit quasar SED models. Several dominant lines in a typical quasar spectrum are marked as green (orange) vertical lines whose locations are determined by $z_{\mathrm{spec}}$ ($z_{\mathrm{Ly}\alpha}$). The 2-d spectra are shown in an inverted gray scale. \label{fig:qsospec}} \end{figure} \section{Result \label{sec:spectralfitting}} \subsection{Spectral fitting} To increase the SNR of each spectrum, the spectrum was binned at intervals of $6\sim9$ \r{A} ($5\sim6$ pixels), corresponding to the FWHM implied by its $R$. The fluxes in each bin were weighted by the inverse of their squared uncertainties, and the weighted mean flux was taken as the representative value of the bin. After binning, the SNRs of the spectra reach 5 to 15 at their emission lines, and 2 to 3 in their continua. Figure~\ref{fig:qsospec} shows the binned spectra of the four objects. All four objects show prominent, broad Ly$\alpha$ emission lines with sufficient SNR ($\sim10$) and sharp breaks blueward of the line due to the IGM absorption, confirming their nature as high-redshift quasars. \indent To estimate the spectroscopic redshift ($z_{\mathrm{spec}}$), we fitted quasar SED models to the binned spectra (see Section~\ref{sec:SEDmodels}). The best-fit model was obtained using the Markov Chain Monte Carlo (MCMC) method \citep{emcee+2013}. From the sampled posterior distribution of each parameter, we obtained the best-fit value and its error as the median and the 68$\%$ equal-tailed interval of the distribution (Table~\ref{tab:specfit}).
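The inverse-variance weighted binning described above can be sketched as follows (a minimal sketch: the 5-pixel bin size matches the text, while the function name is ours):

```python
import math

def bin_spectrum(flux, err, bin_size=5):
    """Inverse-variance weighted binning: within each bin every flux
    is weighted by 1/err^2, the weighted mean is taken as the
    representative flux of the bin, and the 1-sigma error of the
    weighted mean is propagated as 1/sqrt(sum of weights)."""
    binned_flux, binned_err = [], []
    for i in range(0, len(flux) - bin_size + 1, bin_size):
        weights = [1.0 / err[j] ** 2 for j in range(i, i + bin_size)]
        wsum = sum(weights)
        mean = sum(w * f for w, f in
                   zip(weights, flux[i:i + bin_size])) / wsum
        binned_flux.append(mean)
        binned_err.append(1.0 / math.sqrt(wsum))
    return binned_flux, binned_err
```

Weighting by inverse variance maximizes the SNR of each bin, since noisier pixels contribute proportionally less to the mean.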
Due to the low SNR of the continuum, the uncertainties of the parameters are somewhat large except for $z_{\mathrm{spec}}$, reflecting the role of the clear Ly$\alpha$ emission line in constraining the redshift. These quasars have $M_{1450}=-23.0$ to $-25.1$ mag at $z_{\mathrm{spec}}=4.7$ to $5.6$. \indent In addition to the best-fit $z_{\mathrm{spec}}$, we calculated $z_{\mathrm{Ly}\alpha}$ by comparing the location of the peak of the Ly$\alpha$ emission line to the rest wavelength of the line (1216 \r{A}). Although the blue side of the Ly$\alpha$ emission line is attenuated by the neutral hydrogen in the IGM, $z_{\mathrm{Ly}\alpha}$ is still consistent with the locations of the strong emission lines such as Ly$\alpha$ and C \Romannum{4}, especially for HSC J233107-001014. The $z_{\mathrm{Ly}\alpha}$ values are also provided in Table~\ref{tab:specfit}. \begin{figure}[t] \centering \includegraphics[width=80mm]{Mbh_R=7.11.pdf} \caption{Spectral modeling of the C\Romannum{4} emission line. The gray bars show the binned spectrum of HSC J233107-001014. The thick line indicates the best-fit model, the sum of the power-law continuum and the C\Romannum{4} emission line modeled as a single Gaussian profile; these two components are plotted as the dot-dashed and dashed lines, respectively. \label{fig:C4line}} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=80mm]{MBH.pdf} \caption{$M_{\mathrm{BH}}$-$L_{\mathrm{Bol}}$ distributions of quasars, from $M_{\mathrm{BH}}$ measurements based on the C\Romannum{4} emission line. The contours show the locations of quasars at $z\sim2$ on the $M_{\mathrm{BH}}$-$L_{\mathrm{Bol}}$ plane \citep{Shen+2011}. The blue squares represent quasars at $z\sim5$ \citep{Jun+2015, Ikeda+2017}, while the gold diamonds indicate quasars at $z\sim6$ \citep{Jiang+2007, KimYJ+2018, Shen+2019}. Our faint quasar at $z\sim5.0$ is marked with the pink square.
The solid, dashed, and dotted lines correspond to $\lambda_{\mathrm{Edd}}=1, 0.1$, and $0.01$, respectively.\label{fig:MBHvsLBol}} \end{figure} \subsection{BH mass and Eddington ratio} \indent One interesting question is whether high-redshift quasars are growing more vigorously than lower-redshift quasars \citep{Willott+2010, KimYJ+2018, Onoue+2019, Shen+2019}. Recent studies of quasars at $z\gtrsim6$ have found that SMBHs in the bright quasars are vigorously growing, with Eddington ratios ($\lambda_{\mathrm{Edd}}$) of $\lambda_{\mathrm{Edd}} \sim 1$ \citep{Mortlock+2011, Banados+2018, Yang+2020a}. On the other hand, other studies find that fainter quasars at $z>6$ have $\lambda_{\mathrm{Edd}} \sim 0.1$, on par with quasars at $z=2$ to 3 \citep{Shen+2011, Mazzucchelli+2017, KimYJ+2018, Onoue+2019, Shen+2019}. Here, we investigate the supermassive black hole mass ($M_{\mathrm{BH}}$) and $\lambda_{\mathrm{Edd}}$ of one object in our sample, HSC J233107-001014, for which C \Romannum{4} is detected. \indent First, we converted the spectrum to the rest frame using $z_{\rm spec}$. Then, we modeled the continuum using a power-law function ($f_{\lambda} = \zeta \lambda^{\alpha_{\lambda}}$). Although the continuum consists of not only the power-law component but also the Fe \Romannum{2} complex and the Balmer continuum, we considered only the former, given the low sensitivity of the overall spectrum and the insignificant influence of the Fe \Romannum{2} emission on the C \Romannum{4} line properties (e.g., \citealt{Shen+2011}). We fitted the power-law continuum to the fluxes in wavelength ranges away from the broad emission lines (e.g., Ly$\alpha$, C \Romannum{4}). To avoid a large uncertainty in $\alpha_{\lambda}$ caused by the poor SNR ($\sim 1-3$) of the binned continuum, we had to use wide portions of the spectrum, corresponding to $\lambda = 1260 - 1510$ and $1580 - 1660$ \r{A}, and fix $\zeta = 1$.
The fitted power-law continuum was subtracted from the spectrum. \indent We then modeled the continuum-subtracted C~\Romannum{4} emission line with a single Gaussian profile. Even though the C~\Romannum{4} line is frequently modeled as the sum of multiple Gaussian profiles (e.g., \citealp{Jun+2015, Zuo+2020}), we did not include additional Gaussian components because the SNR of the spectrum is too low to discern multiple components. To estimate the FWHM of the C~\Romannum{4} emission line ($\mathrm{FWHM}_{\mathrm{C\Romannum{4}}}$) and its error, we generated 10,000 mock spectra by assuming that the probability density function of the flux follows a Gaussian distribution. We calculated an $\mathrm{FWHM_{C\Romannum{4}, init}}$ for each spectrum, and then the instrumental $\mathrm{FWHM}$ ($\mathrm{FWHM_{int}} \sim 230$ km~s$^{-1}$) was subtracted in quadrature from the estimated FWHMs, i.e., $\mathrm{FWHM_{C\Romannum{4}}} = \sqrt{\mathrm{FWHM_{C\Romannum{4}, init}^{2}} - \mathrm{FWHM_{int}^{2}}}$. Adopting the 16th and 84th percentiles of the $\mathrm{FWHM_{C\Romannum{4}}}$ histogram as the 1-$\sigma$ uncertainties, the $\mathrm{FWHM}_{\mathrm{C\Romannum{4}}}$ is $2230_{-640}^{+1560}$ km~s$^{-1}$. The spectral fitting result for the C~\Romannum{4} line is shown in Figure~\ref{fig:C4line}. \indent The mass scaling relation based on the C~\Romannum{4} line \citep{Vestergaard+2006} is expressed as \begin{equation} M_{\mathrm{BH}} = {\bigg[\frac{\mathrm{FWHM}_{\mathrm{C\Romannum{4}}}}{1000~\mathrm{km\,s^{-1}}}\bigg]}^{2} {\bigg[\frac{\lambda L_{\lambda}(1350~\mathrm{\r{A}})}{10^{44}~\mathrm{erg\,s^{-1}}}\bigg]}^{0.53} \times 10^{6.6}\,M_{\odot}. \end{equation} $L_{\lambda}$(1350~\r{A}) is calculated from the best-fit SED model for the spectrum of HSC J233107-001014. The resulting $\log(M_{\mathrm{BH}}/M_{\odot})$ is $7.92_{-0.27}^{+0.62}$ for a virial factor of 5.1 \citep{Woo+2013}.
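A minimal numerical sketch of the two steps above (the quadrature correction for the instrumental FWHM and the C IV single-epoch mass relation), written in Python. The continuum luminosity $\lambda L_{\lambda}(1350~\mathrm{\r{A}}) \sim 10^{45.2}$~erg~s$^{-1}$ used here is an assumed illustrative value, not a number quoted in the text.

```python
import math

def fwhm_intrinsic(fwhm_obs, fwhm_inst=230.0):
    """Remove the instrumental resolution in quadrature (km/s)."""
    return math.sqrt(fwhm_obs ** 2 - fwhm_inst ** 2)

def log_mbh_civ(fwhm_civ, log_lam_l1350):
    """log10(M_BH / M_sun) from the C IV single-epoch scaling relation."""
    return (6.6 + 2.0 * math.log10(fwhm_civ / 1000.0)
                + 0.53 * (log_lam_l1350 - 44.0))

# FWHM_CIV = 2230 km/s is the measured value quoted in the text; the
# luminosity log10(lambda L_1350 / erg s^-1) = 45.2 is an assumption.
print(round(log_mbh_civ(2230.0, 45.2), 2))  # → 7.93
```

With the assumed luminosity, the sketch lands within the quoted $\log(M_{\mathrm{BH}}/M_{\odot}) = 7.92_{-0.27}^{+0.62}$ interval.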
\indent We caution that the C~\Romannum{4}-based $M_{\mathrm{BH}}$ could be systematically biased relative to the Balmer line-based $M_{\mathrm{BH}}$, since the C~\Romannum{4} line profile can be seriously affected by non-virial motion (e.g., \citealt{Sulentic+2017}). To correct for this systematic bias, some studies have considered the relation between physical parameters (e.g., C~\Romannum{4} blueshift, Eddington ratio, C~\Romannum{4} line asymmetry, and so on) and the difference between C~\Romannum{4}-based and Balmer-based BH masses \citep{Coatman+2017, Marziani+2019, Zuo+2020}. Although a corrected $\mathrm{FWHM}_{\mathrm{C\Romannum{4}}}$ based on these relations could improve the $M_{\mathrm{BH}}$ estimates, we did not apply the correction due to large uncertainties in the physical parameters caused by the low SNR of the spectra. \indent Using the bolometric correction derived by \citet{Runnoe+2012}, which is \begin{equation} \log (L_{\mathrm{Bol}}) = 4.745 + 0.910\,\log(\,1450~\mathrm{\r{A}} \,L_{\lambda}(1450~\mathrm{\r{A}})\,), \end{equation} \noindent we calculated the bolometric luminosity of the quasar. The Eddington luminosity $L_{\mathrm{Edd}}$ follows from \begin{equation} L_{\mathrm{Edd}} = 1.3 \times 10^{38} \times (M_{\mathrm{BH}}/M_{\odot})\,\, \mathrm{erg\,s^{-1}}. \end{equation} The Eddington ratio is $L_{\mathrm{Bol}}/L_{\mathrm{Edd}} = 0.64_{-0.41}^{+0.93}$. Table~\ref{tab:Mbhinfo} provides the properties of HSC J233107-001014 derived from the line fitting. Figure~\ref{fig:MBHvsLBol} compares Eddington ratios of quasars at different redshifts in the $M_{\mathrm{BH}}$ versus $L_{\mathrm{Bol}}$ plane. Compared to quasars at similar luminosities ($L_{\mathrm{Bol}} \sim 10^{46}$ erg s$^{-1}$; \citealp{Shen+2011, Trakhtenbrot+2011, Ikeda+2017}), this quasar has a somewhat large, if not exceptional, $\lambda_{\mathrm{Edd}}$.
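The bolometric-correction and Eddington-ratio arithmetic above can be checked with a short sketch (Python). The monochromatic luminosity $\lambda L_{\lambda}(1450~\mathrm{\r{A}}) \sim 10^{45.17}$~erg~s$^{-1}$ is an assumed value chosen to reproduce the quoted $\log L_{\mathrm{Bol}} = 45.85$; $\log M_{\mathrm{BH}} = 7.92$ is taken from the text.

```python
import math

def log_lbol_runnoe(log_lam_l1450):
    """Bolometric correction of Runnoe et al. (2012) at 1450 A."""
    return 4.745 + 0.910 * log_lam_l1450

def eddington_ratio(log_lbol, log_mbh):
    """lambda_Edd = L_Bol / L_Edd, with L_Edd = 1.3e38 (M_BH/M_sun) erg/s."""
    log_ledd = math.log10(1.3e38) + log_mbh
    return 10.0 ** (log_lbol - log_ledd)

# log10(lambda L_1450) = 45.17 is an assumed illustrative value.
print(round(log_lbol_runnoe(45.17), 2))        # → 45.85
print(round(eddington_ratio(45.85, 7.92), 2))  # → 0.65
```

The small offset from the quoted 0.64 comes from using the rounded $\log M_{\mathrm{BH}}$.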
On the other hand, this $\lambda_{\mathrm{Edd}}$ is on par with those of brighter quasars at $z\sim5$ (e.g., \citealp{Trakhtenbrot+2011, Ikeda+2017, Jeon+2017}). This implies that not every faint quasar has a smaller $\lambda_{\mathrm{Edd}}$ than the bright ones. However, more faint quasars with well-measured BH properties are required to better understand the accretion activities of $z\sim5$ quasars. \begin{figure}[t!] \centering \includegraphics[width=80mm]{zhist.pdf} \caption{Redshift histograms of the final quasar candidates (blue) and the final quasar candidates satisfying the traditional color selection (\citetalias{Niida+2020}, orange). \label{fig:zhist}} \end{figure} \begin{table} \caption{Properties of HSC J233107-001014 derived from the line fitting.\label{tab:Mbhinfo}} \centering \begin{tabular}{lr} \toprule HSC J233107-001014 & \\ \midrule $\mathrm{FWHM_{C\Romannum{4}}}$ [km~s$^{-1}$] & $2230_{-640}^{+1560}$ \\ $\log(L_{\mathrm{Bol}}\,[\mathrm{erg\,s^{-1}}])$ & $45.85_{-0.05}^{+0.06}$\\ $\log(M_{\mathrm{BH}}\,[M_{\odot}])$ & $7.92_{-0.27}^{+0.62}$ \\ $\lambda_{\mathrm{Edd}}$ & $0.64_{-0.41}^{+0.93}$\\ \bottomrule \end{tabular} \end{table} \section{Implications on quasar selection \label{sec:efficiency}} The confirmation of 4 new quasars brings the total number of spectroscopically confirmed quasars to 9 among the 35 candidates. None of the candidates has been shown to be a non-quasar so far, suggesting a very high confirmation rate. In addition, there is one quasar candidate with medium-band data, which is almost certainly a quasar \citep{Shin+2020}. \indent To quantitatively evaluate the effectiveness of our selection, we compare it to the traditional color selection of \citetalias{Niida+2020}. We checked whether the 35 final candidates can be selected by the \citetalias{Niida+2020} selection, which misses 16/35 ($\sim 46 \%$) of the final candidates. Figure~\ref{fig:zhist} shows redshift histograms of the final candidates and the final candidates that meet the \citetalias{Niida+2020} selection.
Since the \citetalias{Niida+2020} selection has high completeness only when searching for quasars at $4.7 < z < 5.1$, it has difficulty selecting quasars at $z < 4.7$ or $z > 5.1$. Our selection method, however, can find quasar candidates over a broader redshift range than the \citetalias{Niida+2020} selection does, allowing us to enlarge the quasar sample at $z\sim5$. \indent We also calculate the recovery rate of known quasars at $4.5 < z < 5.5$. Our selection process recovers 5/6 known quasars and 4/4 newly discovered quasars (= 9/10, $\sim 90 \%$). We find that the one missed quasar lies in a region of parameter space where the completeness is low (see \citealp{Shin+2022}, accepted), suggesting that our quasar recovery rate is as high as expected. To firmly confirm the effectiveness of our selection, spectroscopic observations of the 26 remaining candidates are required. \section{Summary\label{sec:sum}} We performed spectroscopic observations of four candidates with $M_{1450} \gtrsim -25.0$ mag at $z\sim5$, utilizing the DBSP on the 200-inch Hale Telescope at the Palomar Observatory. The candidates were selected by deep learning and the Bayesian information criterion (\citealp{Shin+2022}, accepted). Each candidate has a strong Ly$\alpha$ emission line and a clear break near the line in its spectrum, suggesting that all the candidates are quasars at $z\sim5$. The 4$/$4 spectroscopic confirmation rate demonstrates the validity of our novel selection approach. Our selection method provides a way to efficiently select high-redshift quasars over unbiased redshift ranges in future surveys. \indent Since HSC J233107-001014 also shows a strong C~\Romannum{4} emission line, we estimated the BH mass of this quasar, obtaining $\sim 10^{8}\,M_{\odot}$.
The $\lambda_{\mathrm{Edd}}$ of the quasar is $\sim 0.6$, although most quasars with luminosities similar to this quasar ($L_{\mathrm{Bol}} \sim 10^{46}$ erg~s$^{-1}$) have lower Eddington ratios. To better understand the early growth of SMBHs, more faint quasars with $L_{\mathrm{Bol}} \lesssim 10^{46}$ erg~s$^{-1}$ should be investigated. \acknowledgments This research was supported by the National Research Foundation of Korea (NRF) grants No. 2020R1A2C3011091 and No. 2021M3F7A1084525, funded by the Ministry of Science and ICT (MSIT). S. S. acknowledges the support from the Basic Science Research Program through the NRF funded by the Ministry of Education (No. 2020R1A6A3A13069198). Y. K. was supported by the NRF grant funded by the MSIT (No. 2021R1C1C2091550). He acknowledges the support from the China Postdoc Science General (2020M670022) and Special (2020T130018) Grants funded by the China Postdoctoral Science Foundation. This research uses data obtained through the Telescope Access Program (TAP) (PID: CTAP2020-B0043 and CTAP2021-A0032), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences, and the Special Fund for Astronomy from the Ministry of Finance. Observations obtained with the Hale Telescope at Palomar Observatory were obtained as part of an agreement between the National Astronomical Observatories, Chinese Academy of Sciences, and the California Institute of Technology. \indent The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University.
Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), the Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. \indent This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST Project for making their code available as free software at http://dm.lsst.org. \indent This paper is based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by the Subaru Telescope and the Astronomy Data Center (ADC) at the National Astronomical Observatory of Japan. Data analysis was in part carried out with the cooperation of the Center for Computational Astrophysics (CfCA), National Astronomical Observatory of Japan. The Subaru Telescope is honored and grateful for the opportunity of observing the Universe from Maunakea, which has cultural, historical, and natural significance in Hawaii.
\section{Introduction} Extragalactic high energy gamma-ray astronomy has developed from an emerging discipline into a fully fledged research field over the past several decades. Following its initial success in the 1990s with the first AGN discoveries of Mrk~421 and Mrk~501 \cite{Punch:1992xw,Quinn:1996dj}, the field now boasts the detection of more than 60 AGN at very high energies (VHE) by ground-based gamma-ray instruments~\footnote{See http://tevcat.uchicago.edu for an up-to-date list.}. A consequence of these observational achievements has been a broadening in the array of different AGN subclasses, believed to represent various manifestations of a single (few) AGN type(s) \cite{Urry:1995mg}, at these energies. These range from the bright beamed blazars of both BL Lac and flat spectrum radio quasar (FSRQ) type, the most numerously observed AGN subclass at VHE, to their dimmer weakly beamed/unbeamed counterparts, radio galaxies. The jet-beamed blazar family members are observed as point-like sources. For these, information about the spatial extent of the emission site may be encoded in the temporal structure of the flux that they emit. Indeed, the most challenging and enlightening results from observations of such temporal structure come from the most intense outbursts (such as that of PKS~2155-304 \cite{Aharonian:2007ig} in 2006). Such extremely bright episodic emission has led to tight constraints being placed upon the size of the emission region and the jet Doppler factor. At present, H.E.S.S., based in Namibia, is one of the principal stereoscopic Cherenkov telescope instruments currently in operation, providing a unique VHE perspective on the southern hemisphere sky. The achievements of this instrument have played a key part in bringing about the present flourishing status of the field.
Furthermore, an upgrade of this experiment, through the completed installation of a massive 28~m telescope at the centre of the original array in 2012, marked the onset of the \hess~II\xspace era. This upgrade resulted in significant improvements in the instrument's low energy sensitivity, significantly reducing its threshold energy \cite{Holler:2015tca}. Recent years have also seen the arrival of new monitoring instruments, with FACT \cite{Dorner:2015jka} and the now completed HAWC-300 \cite{Pretz:2015zja} collectively able to provide wide field-of-view, sensitive, and effective AGN monitoring. The complementarity provided by the monitoring and follow-ups, through both the broad sky coverage and the in-depth low energy threshold targeted observations, makes the prospects for further growth in the coming years promising. Such collaborative efforts allow what may be obtained from the present generation of instruments to be maximised before the arrival of the next generation CTA north and south instruments \cite{Consortium:2010bc}. In the following, several of the key recent H.E.S.S. observational developments in AGN gamma-ray astrophysics will be covered. In section~\ref{HESSII_Era}, the dawn of the new \hess~II\xspace era will be discussed. Starting in subsection \ref{Rise_of_FSRQs}, the first \hess~II\xspace era results will be presented, noting in particular the rise of detections of FSRQ-type blazars. In subsection~\ref{AGN_ToOs}, current efforts utilising wide field-of-view VHE AGN monitoring instruments as efficient trigger alerts will be discussed. Following this, subsection~\ref{new_sources} considers potential new sources which may be detectable within the new \hess~II\xspace era. Lastly, in section~\ref{AGN_Lessons}, a summary of the lessons learnt about the intrinsic spectra of AGN (primarily HBL) detected by \hess~I\xspace will be presented. The conclusions to this discourse will be provided in section~\ref{Conclusion}.
\section{HESS II Era} \label{HESSII_Era} Since October 2012, a fifth telescope at the centre of the original H.E.S.S. array has been operational, taking data in coordination with the other \hess~I\xspace telescopes. This five telescope set-up is referred to as \hess~II\xspace. The analysis of the data taken in this new setup may be made either utilising the information from all of the telescopes (hybrid), or utilising only the information from the fifth telescope (mono) \cite{Holler:2015uca}, each providing niche advantages depending on the source type. One of the recent \hess~II\xspace highlights which utilised the hybrid analysis was the detection of a new extreme HBL candidate, 1RXS~J023832.6-311658. Little is presently understood about this class of object, which demonstrates a continuation of its hard high energy (HE) spectral index into the VHE domain, without evidence for a cutoff. Furthermore, this blazar class appears to show little evidence of variability in its lightcurves, and curious evidence of correlations has suggested that preferred directions of these sources exist along ``voidy'' lines of sight \cite{Furniss:2014bna}. Such limitations in our understanding of this class are due, in part, to the small number of such objects so far discovered, highlighting the need to search for more such objects. The first \hess~II\xspace AGN results which utilised the mono analysis came from observational campaigns taken with this new setup in 2013 and 2014. These observations were of the HBL blazars PKS~2155-304 and PG~1553+113, which were both found to be in quiescent states during the observation periods \cite{Aharonian:2016ria}. Despite these lowered activity levels, significant achievements were made in lowering the threshold energy utilising the new fifth telescope (mono) data, with the analysis of the PKS~2155-304 and PG~1553+113 observations reaching down to new threshold energies of 80~GeV and 110~GeV, respectively.
\subsection{The Rise of the FSRQs} \label{Rise_of_FSRQs} Building on the successful achievement of a lower energy threshold analysis utilising \hess~II\xspace, a wider spread of blazar classes (eg. HBLs, LBLs, FSRQs) and redshifts has become accessible to the instrument. Indeed, further proof of the successful lowering of the analysis threshold energy utilising \hess~II\xspace mono data comes from the observations of 3C~279, which underwent a giant outburst in July 2015. \hess~II\xspace mono analysis of the observations of this flare achieved a record low threshold energy of 66~GeV for AGN results with this instrument. Fig.~\ref{3C279_spectra} shows the H.E.S.S. and Fermi components of its SED during the second night of the flaring outburst. Along this same vein, the detection of a second FSRQ, PKS~0736+017, which underwent an outburst in February 2015, was also achieved utilising the analysis of \hess~II\xspace data. Again, \hess~II\xspace mono analysis of these observations achieved an energy threshold of 80~GeV. Lastly, observations by both H.E.S.S. and MAGIC of a giant flare from another FSRQ, namely PKS~1510-089, which underwent a massive outburst in May 2016, collectively provided exceptional temporal coverage of the flaring event at VHE. For all three of these FSRQ flares, the detection of VHE emission during their outbursting episodes presents surprising and unexpected new challenges for their modeling. Specifically, the presence of the broad line region (BLR), an intense radiation field in the vicinity of the AGN, presents a barrier for the escape of VHE emission from within it. In turn, the detection of VHE emission from these sources can be used to place strong constraints on the position of the emission site with respect to the BLR location \cite{Tavecchio:2012cs}.
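As a rough numerical illustration of this constraint, the sketch below (Python) compares the commonly used BLR radius scaling, $r_{\mathrm{BLR}} \approx 10^{17}\,(L_{\mathrm{disk}}/10^{45}~\mathrm{erg\,s^{-1}})^{1/2}$~cm, with the causality limit on the emission-region size implied by a variability time-scale. The Doppler factor and redshift used here are illustrative assumptions, not values taken from the text.

```python
def r_blr_cm(l_disk_erg_s):
    """BLR radius scaling, r_BLR ~ 1e17 cm * (L_disk / 1e45 erg/s)^0.5."""
    return 1e17 * (l_disk_erg_s / 1e45) ** 0.5

def emission_size_cm(t_var_s, doppler=10.0, z=0.536):
    """Causality limit on the emission-region size,
    R <~ c * delta * t_var / (1 + z); delta and z are illustrative."""
    c_cm_s = 3e10
    return c_cm_s * doppler * t_var_s / (1.0 + z)

# A one-minute variability time-scale implies an emission region far
# smaller than the BLR radius of a 1e45 erg/s disk:
print(emission_size_cm(60.0) < r_blr_cm(1e45))  # → True
```

The orders-of-magnitude gap between the two scales is the core of the tension discussed for these flaring FSRQs.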
With gamma-ray emission beyond 200~GeV detected from each of these systems during their outbursts, the emission site is found to be constrained to sit at a distance beyond $r_{\rm BLR}\approx 10^{17}~{\rm cm}\left(L_{\rm disk}/10^{45}~{\rm erg\,s}^{-1}\right)^{1/2}$ \cite{Ghisellini:2009wa}, where $L_{\rm disk}$ is the thermal luminosity of the accretion disk. Information about the spatial size of the emission site, from where the outburst originates, is also provided by the minimum temporal variability time-scale in the observed lightcurve. During these recent outbursts, unexpectedly short time-scale variability has been revealed. Specifically, for 3C~279, $\sim$minute time-scale structure in the $>$100~MeV HE lightcurves was discovered \cite{TheFermi-LAT:2016dss}. Likewise, at VHE, $\sim$tens of minutes time-scale structure was found for PKS~1510-089. Together, both the lack of internal absorption features in the flaring FSRQ spectra and the short variability time-scales observed during the flares make for rather challenging constraints. Reconciliation of these two differing results appears possible only in a few scenarios. The first is if the emission site sits sufficiently far out that absorption by the BLR is avoided, potentially allowing the intrinsic spectrum to continue as an extrapolation of that in the \textit{Fermi}-LAT\xspace domain. The second is if the emission originates from only a small subsection of the jet, at distance scales beyond the BLR. \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{final_fermi_hess_deab_sed_proc.pdf} \caption{Energy flux data points of 3C~279 during its giant outburst in July 2015.} \label{3C279_spectra} \end{center} \end{figure} \subsection{An AGN Transient Machine} \label{AGN_ToOs} Fully exploiting the lowered threshold energy to observe the larger assortment of VHE AGN in the high-redshift Universe demands efficient wide-field coverage of the transient VHE sky.
To this end, full advantage is being taken of broad multi-wavelength transient event monitoring. In particular, the H.E.S.S. collaboration is provided with alerts at optical, X-ray, HE gamma-ray, and VHE gamma-ray energies. Indeed, for the three AGN discussed in the previous section, 3C~279, PKS~0736+017, and PKS~1510-089, \textit{Fermi}-LAT\xspace triggers and a H.E.S.S. trigger during a monitoring campaign instigated the subsequent in-depth observations during their heightened activity states. A further example of such effective transient observations is provided by the H.E.S.S. observations of Mrk~501 in June 2014. These observations were triggered by FACT, an imaging air Cherenkov telescope (IACT) which regularly monitors the activity of known VHE AGN, providing alerts for follow-up observation to the VHE gamma-ray community. During a giant outburst in 2014, the flux level of Mrk~501 observed by H.E.S.S. matched the record level observed by HEGRA back in 1997 \cite{DjannatiAtai:1999af}. The spectra obtained both during this flare and in the quiescent state are shown in fig.~\ref{Mrk501_spectra}. Of particular note from these results is that the spectrum observed during this flare, once extragalactic background light (EBL) absorption had been accounted for, showed no signs of a cutoff, continuing as a hard spectrum up to the highest energy data point ($\sim 20~$TeV). \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{Mrk501_2014flare.pdf} \caption{The observed spectral data points of Mrk~501 above 300~GeV, from observations taken both before and during its extremely bright outburst in June 2014 \cite{Cologna:2015mia}.} \label{Mrk501_spectra} \end{center} \end{figure} \subsection{New VHE Transient Sources} \label{new_sources} Alongside the broadened AGN discovery potential which the onset of the \hess~II\xspace era has opened up, the possibility of catching new VHE phenomena has also increased.
Indeed, a consideration of the competing sensitivities of \textit{Fermi}-LAT\xspace and next generation IACTs \cite{Funk:2012ca} highlights that, within the overlapping energy region between such instruments, it is in the temporal domain of the next generation instruments that the discovery frontier lies. To pursue an exploitation of \hess~II\xspace in this direction, for the catching of new transients, a rapid repointing system for the telescopes has been implemented. This system has been designed to respond as fast as possible, without human intervention, to targets of opportunity (ToOs). With an expected fall-off in the intensity of the gamma-ray flux from GRBs observed following the prompt phase emission, motivation exists for rapid response follow-up observations of such bursts. A minimisation of the repointing time for these observations over the last few years has succeeded in reducing the average overall response time to a timescale of $\sim$few~minutes. Upper limits for such a follow-up observation of GRB~140818B are shown in fig.~\ref{GRB140818B_upperlimits}. Beyond the successes of this \hess~II\xspace rapid response GRB ToO activity, efforts are also underway within the collaboration to utilise \hess~II\xspace to catch other extragalactic VHE gamma-ray emitting phenomena. Specifically, attention is here drawn to the VHE neutrino and FRB follow-up observations carried out by the collaboration. With the origins of both these phenomena remaining unclear, though believed in both cases to be (at least in part) extragalactic in origin, great potential exists for fresh insight about the emission mechanism to be provided by such follow-ups. In particular, for the case of VHE neutrino follow-up observations, a strong potential link exists between high-energy neutrinos and gamma rays through the possibility that both particles are secondary products of high energy cosmic rays interacting within or around their acceleration site.
Provided that both particle types are able to escape from the source region and arrive at Earth, and the transient event overlaps sufficiently with the observation window, a detectable flux level is expected within the H.E.S.S. energy range. An example of the recent improvement in response time to neutrino alerts is the follow-up observations of the ANTARES neutrino event on the 30th January 2017. These observations took place only 32 seconds after the reconstructed neutrino event occurred. Although searches for a gamma-ray counterpart within this data set are still ongoing, the success of the automatic response system has already proved itself through this considerable reduction in the follow-up delay time to neutrino event alerts. \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{GRB140818B_ul_plot.pdf} \caption{Upper limit results obtained from the H.E.S.S. observations for GRB140818B (shown in red). The best fit spectral model for the Fermi-GBM detection is shown in blue.} \label{GRB140818B_upperlimits} \end{center} \end{figure} \section{Intrinsic AGN Spectra} \label{AGN_Lessons} In parallel with efforts to broaden and deepen the range of sources detected in the \hess~II\xspace era, investigations are also being made to consolidate the lessons learnt from the detection of blazars in the \hess~I\xspace era. Specifically, this has been focused on their intrinsic spectral properties in this energy range. The ability to accurately reconstruct the intrinsic spectra of blazars has two limiting factors. The first of these is the instrument sensitivity, which dictates the photon statistics obtained during observational campaigns of an object. The second of the constraints comes from the limit in our understanding of the EBL, which at present is predominantly inferred from a mixture of modeling \cite{Franceschini:2008tp} and VHE blazar observations \cite{Abramowski:2012ry}.
Following the adoption of a recent EBL model and its associated uncertainties, archival \hess~I\xspace data have recently been reviewed. An investigation was carried out to collectively infer what the full \hess~I\xspace data set revealed about the intrinsic AGN spectra. In particular, following the assumption that the intrinsic spectra are describable by power-law or log-parabola functions, the constraints on the free parameters of these functions were obtained. Among other things, the results demonstrated that only the brightest AGN flares, for which the highest statistics were obtained, showed a preference for log-parabola type spectra. The majority of AGN spectra, however, showed no statistical preference for such a spectrum, favouring instead the simpler power-law model. \section{Conclusion} \label{Conclusion} The present epoch of H.E.S.S. extragalactic observations is one of a maturing and broadening frontier. Although the number of BL Lac blazar sources detected continues to increase, there has also been an evolution of focus. This evolution is primarily thanks to recent improvements in the instrumentation which have lowered the energy threshold. Hand in hand with these instrumental improvements is the implementation of wide field-of-view monitoring and trigger follow-up schemes, allowing efficient capture of bright VHE emission in follow-up observations. AGN blazar variability is a well known and familiar phenomenon. Despite this, however, the short-time variability results observed from recent FSRQ outbursts at VHE are challenging. A case in point is the blazar 3C~279, which, during its giant outburst in 2015, demonstrated minute-scale variability at GeV energies. Such short-time variability at gamma-ray energies approaches the shortest level caught from the BL Lac PKS~2155-304.
Since considerable internal absorption is expected for FSRQs should the emission zone be located within the BLR, both the compactness of the emission zone suggested by the short time-scale structure and the large distance from the BLR region are collectively rather challenging to reconcile. Beyond new and deeper AGN observations, the new \hess~II\xspace era offers the promise of the discovery of new extragalactic VHE sources. Of particular relevance in this domain is the need for rapid response to ToOs. The successful implementation of an automatic rapid response system is here demonstrated through some of the first results obtained with this system. Hand in hand with efforts looking to the future to explore the new range of phenomena open to \hess~II\xspace, the lessons learnt from \hess~I\xspace are considered. Looking back at the archival set of \hess~I\xspace AGN data, a clear message about what can be learned of the intrinsic spectra is found. The key message here is that only in the brightest of AGN observations could the higher moments of the intrinsic spectra be probed. In summary, exciting and increasingly challenging new results have arisen in the maturing discipline of H.E.S.S. AGN observations. The achievement of these results has come about both through the lowering of the instrument threshold, allowing the detection of bright FSRQ flares, and the utilisation of wide field-of-view VHE AGN monitoring systems, ensuring efficient follow-up observations of bright flares. These developments collectively ensure that this observational frontier continues to both broaden and deepen our understanding of extragalactic sources, fundamentally providing key insights into how these effective particle accelerators operate. \section*{Acknowledgements}The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S.
is gratefully acknowledged, as is the support by the German Ministry for Education and Research (BMBF), the Max Planck Society, the German Research Foundation (DFG), the French Ministry for Research, the CNRS-IN2P3 and the Astroparticle Interdisciplinary Programme of the CNRS, the U.K. Science and Technology Facilities Council (STFC), the IPNP of the Charles University, the Czech Science Foundation, the Polish Ministry of Science and Higher Education, the South African Department of Science and Technology and National Research Foundation, the University of Namibia, the Innsbruck University, the Austrian Science Fund (FWF), and the Austrian Federal Ministry for Science, Research and Economy, and by the University of Adelaide and the Australian Research Council. We appreciate the excellent work of the technical support staff in Berlin, Durham, Hamburg, Heidelberg, Palaiseau, Paris, Saclay, and in Namibia in the construction and operation of the equipment. This work benefited from services provided by the H.E.S.S. Virtual Organisation, supported by the national resource providers of the EGI Federation.
\section{INTRODUCTION} \label{sect:Introduction} \ Two-dimensional magnetic systems have been a subject of intensive investigation for almost half a century now. Both ferro- and anti-ferromagnetic systems have been studied extensively, experimentally as well as theoretically, revealing a myriad of interesting properties, including the discovery of high-temperature superconductivity in doped two-dimensional cuprates\cite{bednorz1986}. One of the important aspects of these systems which has continued to attract interest is the magnetic excitations, which are of fundamental relevance for understanding the spin dynamics. Knowledge of the collective spin wave excitations can provide valuable insight into their dynamical response as well as their thermodynamic behavior. The advancement of experimental techniques, such as ferromagnetic resonance spectroscopy and inelastic neutron scattering, has been of immense help in exploring this field meticulously\cite{coldea2001,lumsden2009}. Inelastic neutron scattering is one of the most powerful and versatile tools, as the long wavelength spin excitations, better known as magnons, can be probed directly and accurately. One of the most thoroughly studied systems, in this context, is the two-dimensional Heisenberg anti-ferromagnet Rb$_2$Mn$_{1-x}$Mg$_x$F$_4$\cite{cowley1977,birgeneau1977,birgeneau1980}, which was investigated by means of neutron scattering techniques to study the low-temperature magnetic excitations, viz.\ the magnon dispersion, linewidths, and lineshapes, as well as the critical exponents near the transition temperature. This led to similar studies on K$_2$Cu$_{1-x}$Zn$_x$F$_4$\cite{wagner1978}, which is a quasi-two-dimensional ferromagnet. In the aforementioned studies, good agreement with the numerical calculations available at that time was also reported.
However, despite the existence of innumerable studies, one important feature which has eluded understanding, over the decades, is the wave-vector dependence of the magnon lifetime (inversely proportional to the linewidth), especially in the long wavelength limit ($q\rightarrow 0$). \ Spin waves in Heisenberg ferromagnets, in the low-energy limit, were studied theoretically as early as the sixties by Murray\cite{murray1966}. The author calculated the spin wave energies and the scattering cross section, within the Born approximation, and reported a $q^5$ scaling of the magnon lifetime. The exchange interactions, in this case, were restricted to nearest neighbors only. Similar $q^5$ dependence was also found in amorphous Heisenberg ferromagnets, in the low-temperature and long wavelength limit, by using an effective medium approximation\cite{singh1978}. In this case, however, spatially dependent extended couplings were assumed between the magnetic sites. Based on Green's functions calculations, Mano\cite{mano1982} also predicted an identical behavior of the lifetime in the long wavelength limit. The finite linewidth of the excitations, which increased rapidly with decreasing wavelength, was attributed to the randomness in the magnitude of the spins. Also the discrepancy between the observed magnetization behavior and that predicted by elementary spin wave theory was believed to originate from this finite linewidth of the spin waves. On the contrary, similar spin wave studies in amorphous ferromagnets by Kaneyoshi\cite{kaneyoshi1978} led to a $q^7$ dependence of the linewidth. This was an outcome of using a quasi-crystalline approximation, which is essentially a virtual-crystal-like approach. Within this approximation, the magnon dispersion reduces to that of an ideal crystal, wherein the disorder effects are completely neglected. 
In yet another study, based on the two-magnon interaction theory of Heisenberg ferromagnets, a leading order $q^2$ scaling of the magnon lifetime was suggested by Ishikawa \textit{et al.}\cite{ishikawa1981}. However, in most of the aforesaid studies, the systems under consideration were three-dimensional Heisenberg ferromagnets and there was no clear mention of the dimensional dependence. It was only later that Christou and Stinchcombe\cite{christou1986} investigated the low-temperature spin excitations in bond-diluted Heisenberg ferromagnets from a more generalized perspective. Using a diagrammatic perturbation theory, the authors obtained a $q^{d+2}$ (where $d>1$ is the dimensionality) scaling of the magnon linewidth. Although the discussion was extended to the more relevant site-diluted systems, the exchange interactions were again restricted to nearest neighbors only. \ Thus, the lack of a general accord on the issue of linewidth scaling becomes apparent from the widely varying predictions available in the literature. Moreover, considerable attention and interest have also been devoted to the case of anti-ferromagnets, even in recent years\cite{bayrakci2013}. In a very recent study\cite{akash2015}, on three-dimensional disordered ferromagnets, a $q^5$ scaling of the magnon linewidth, in the long wavelength limit, was reported using numerical approaches similar to those implemented here. This served as a further motivation behind the current study of the magnetic excitations in two-dimensional ferromagnetic systems, with a view to identifying the dimensional dependence of the scaling of the magnon lifetimes. Furthermore, a proper knowledge of the lifetimes is not only of fundamental interest but can also be of practical importance, as we shall discuss later. In this article, we provide a comprehensive and detailed analysis of the low-temperature spin wave excitations in two-dimensional site-diluted ferromagnets, in the presence of extended exchange interactions. 
The calculations have been performed on sufficiently large system sizes and a proper statistical sampling over disorder is also taken into account. We lay special emphasis on the correct evaluation of the magnon linewidths in the long wavelength limit. In the process, we demonstrate that determining the correct wave-vector dependence of the lifetimes constitutes a non-trivial task. In addition, we also discuss the nature of the magnon density of states, the spectral functions, as well as the magnon dispersion over a relatively broad concentration range. \section{HEISENBERG MODEL AND EXCHANGE COUPLINGS} \label{sect:Model} \ We start with the Hamiltonian describing $N_{imp}$ interacting spins (${\bf S}_{i}$) randomly distributed on a square lattice of $N$ sites, given by the dilute Heisenberg model \begin{align} H=-\sum_{i,j} J_{ij}p_{i}p_{j} {\bf S}_{i}\cdot{\bf S}_{j} \label{Hamiltonian} \end{align} where the sum over $i,j$ runs over all sites and the random variable $p_i$=1 if the site is occupied by an impurity and zero otherwise. We consider classical spins ($\mid$${\bf S}_{i}$$\mid=S$) on a square lattice, with lattice spacing $a$, and with periodic boundary conditions. The distribution of the spins, in this case, is completely random and uncorrelated; in other words, the probability of a spin being placed at site $i$ is independent of the neighboring sites. This is in contrast to a previous study\cite{akash2014} on the magnetic excitations in inhomogeneous diluted systems, where well-defined spherical clusters of spins were considered. Spin-orbit coupling is neglected as this would lead to anisotropy in the system, which is not the primary focus here. The effects of spin-orbit coupling on the magnon lifetimes in two-dimensional systems were studied in Ref.~\onlinecite{zakeri2012}. All calculations, in the present work, are performed at $T$=0 K. The concentration of magnetic impurities in the system is denoted by $x$ ($=N_{imp}/N$). 
The Hamiltonian, Eq.~(\ref{Hamiltonian}), is treated within the self-consistent local random phase approximation (SC-LRPA), which is essentially a semi-analytical approach based on (finite temperature) Green's functions. Within this approach, the retarded Green's functions are defined as \begin{align} G^c_{ij}(\omega)=\int_{-\infty}^{\infty}G^c_{ij}(t)e^{i\omega t}dt \label{retarded_GF} \end{align} where $G^c_{ij}(t)=-i\theta(t)\langle[{\bf S}_i^+(t),{\bf S}_j^-(0)]\rangle$ describes the transverse spin fluctuations, $\langle\ldots\rangle$ denotes the expectation value, and `$c$' is the disorder configuration index. After performing the Tyablikov decoupling\cite{tahirkheli1962,tyablikov1967,nolting2009} (assuming magnetization along the $z$-axis) of the higher-order Green's functions which appear in the equation of motion of $G^c_{ij}(\omega)$, we obtain \begin{align} (\omega{\bf I}-{\bf H}_{eff}^c){\bf G}^{c}={\bf D} \label{eff_matrix} \end{align} where ${\bf H}_{eff}^c$, ${\bf G}^{c}$, and $\bf D$ are $N_{imp}\times N_{imp}$ matrices. The effective Hamiltonian matrix elements are \begin{align} ({\bf H}_{eff}^c)_{ij}= -\langle{S_i^z}\rangle J_{ij}+\delta_{ij}\sum_{l}\langle{S_l^z}\rangle J_{lj} \end{align} and the diagonal matrix is \begin{align} D_{ij}= 2\langle S_i^z\rangle \delta_{ij}. \end{align} For a given temperature and disorder configuration, the local magnetizations $\langle{S_i^z}\rangle$ ($i=1,2,\ldots,N_{imp}$) have to be calculated self-consistently. However, since we are interested in the case $T$=0 K, where the ground state is assumed to be fully polarized, all $\langle{S_i^z}\rangle$ are equal to $S$. We shall not go into further details of the method here, as the accuracy and reliability of the SC-LRPA in handling disorder (dilution) effects in different contexts have been discussed and established on numerous previous occasions (for details see Refs.~\onlinecite{georges2005,georgesprb2005,sato2010}). 
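At $T$=0 the SC-LRPA equations above reduce to diagonalizing a dense $N_{imp}\times N_{imp}$ matrix for each disorder configuration. The following Python sketch (our illustration, not the authors' code; the lattice size, seed, and parameter values are assumptions chosen for a minimal example, and the couplings anticipate the $J_{ij}=J_0 r_{ij}^{-3}$ form introduced below) builds ${\bf H}_{eff}$ for one configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
L, x, S, J0, alpha = 20, 0.1, 1.0, 1.0, 3.0   # small lattice, illustrative values

# Random site dilution: p_i = 1 with probability x
sites = np.array([(i, j) for i in range(L) for j in range(L)], dtype=float)
pos = sites[rng.random(len(sites)) < x]
N_imp = len(pos)

# Couplings J_ij = J0 / r_ij^alpha with periodic boundary conditions (a = 1)
d = np.abs(pos[:, None, :] - pos[None, :, :])
d = np.minimum(d, L - d)                       # minimum-image convention
r = np.hypot(d[..., 0], d[..., 1])
J = np.where(r > 0, J0 / np.maximum(r, 1e-12) ** alpha, 0.0)

# T = 0 (fully polarized): (H_eff)_ij = -S J_ij + delta_ij * S * sum_l J_lj
H_eff = -S * J + np.diag(S * J.sum(axis=0))

# All couplings ferromagnetic: the spectrum is non-negative, with one
# zero (Goldstone) mode since every pair of spins is coupled here
eps = np.linalg.eigvalsh(H_eff)
```

Because all couplings are positive, $H_{eff}$ has the structure of a weighted graph Laplacian (times $S$), which is why no negative magnon energies appear.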
The virtual crystal approximation, as a possible alternative approach, fails in these systems, as will be discussed in Sec.~\ref{sect:Lifetime}. \ The exchange interactions are assumed to be of the form $J_{ij}$=$J_{0} r_{ij}^{-\alpha}$, where $r_{ij}=|{\bf r}_i-{\bf r}_j|$. In most of the previous studies the exchange couplings were restricted to nearest neighbors only, but in realistic systems these interactions extend well beyond the nearest neighbors. Moreover, the choice of the couplings is motivated by the theoretical proposition put forward in Ref.~\onlinecite{bruno2001}, wherein the author extends the Mermin-Wagner theorem\cite{mermin1966} to Heisenberg and $XY$ systems with long-range interactions. It is stated that a $d$-dimensional system ($d$=1 or 2) with monotonically decaying interactions $|J_{\bf r}| \propto r^{-\alpha}$ cannot have ferro- or anti-ferromagnetic long-range order at $T> 0$, if $\alpha \ge 2d$. For RKKY-like interactions (of long-range oscillatory nature) magnetic order could be strictly ruled out for one-dimensional systems, but only for certain cases in two-dimensional ones. It was later proved by Loss, Pedrocchi, and Leggett\cite{loss2011}, again as an extension of the Mermin-Wagner theorem, that no long-range magnetic order is possible in one- or two-dimensional systems at finite temperature in the presence of RKKY interactions. The choice of exponentially decaying couplings can also be ruled out in this case, as they satisfy the Mermin-Wagner theorem trivially. Hence, this led us to the choice of the exponent $\alpha$=3 for the couplings, which implies that in our two-dimensional systems long-range (ferro-)magnetic order at a finite temperature is not excluded by the Mermin-Wagner theorem. Also, since the couplings are isotropic and all ferromagnetic ($J_{ij}>0$), no frustration is expected and hence the collinear state can be safely assumed to be the ground state. 
This in turn leads to only positive eigenvalues in the magnon spectrum, as will become clear in the following calculations of the magnon DOS. \section{MAGNON DENSITY OF STATES AND SPECTRAL FUNCTION} \label{sect:DOS} \begin{figure}[htbp]\centerline {\includegraphics[width=\columnwidth,angle=0]{Fig1.pdf}} \caption{(Color online) Average magnon DOS $\rho_{\text{avg}}$ as a function of energy $\omega$ plotted for different concentrations $x$.} \label{fig1} \end{figure} \ From the retarded Green's functions defined above one can calculate the average magnon density of states (DOS), which is given by $\rho_{\text{avg}}(\omega)=(1/N_{\text{imp}})\sum_i\rho_i(\omega)$, where $\rho_i(\omega)=-1/(2\pi S)\Im[ G_{ii}(\omega)]$ is the local magnon DOS\@. Fig.~\ref{fig1} shows the average magnon DOS as a function of the energy for different impurity concentrations. The DOS has been averaged over a hundred disorder configurations, although it was found that typically 25 configurations were sufficient for each impurity concentration. We observe irregular features in the DOS which become more pronounced with increasing dilution. On decreasing the concentration from $x=0.1$ to $x=0.02$, a significant increase in weight around the low energy end of the spectrum is observed. This increase in weight is attributed to the increase in the fraction of impurities which are weakly connected to the rest. These isolated impurity regions have their own zero-energy modes which in turn contribute to the DOS at low energies. In order to gain a better insight into this behavior we look at the distribution of the local DOS shown in Figs.~\ref{fig2a} and \ref{fig2b}, at two different energies, 2.2 $J_0S^2$ and 3.2 $J_0S^2$, respectively, for $x=0.1$. Here we can clearly identify certain impurity regions, of typically two or three impurities, which are weakly coupled to the surrounding impurities. These can be seen to make a higher contribution to the DOS. 
(For more details see Fig.~\ref{fig7} in the Appendix.) Note that the distribution shown corresponds only to a part of the lattice from a $200a \times 200a$ system. With increasing dilution the average separation between the spins increases and hence the effective coupling decreases. This accounts for the increase in the irregular features observed in the DOS for $x=0.02$. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=0.45\columnwidth,angle=0]{Fig2a.pdf} \label{fig2a} } \subfigure[]{ \includegraphics[width=0.45\columnwidth,angle=0]{Fig2b.pdf} \label{fig2b} } \subfigure{ \includegraphics[width=0.5\columnwidth,angle=0]{Fig2c.pdf} \label{fig2c} } \caption[Optional caption for list of figures]{(Color online) Distribution of the local magnon DOS at energies (a) $\omega/(J_0S^2)=2.2$, and (b) $\omega/(J_0S^2)=3.2$, for an impurity concentration of $x=0.1$. Shown is a part of the lattice of size $L$=$200a$ in coordinate space. The dots indicate the positions of the spins ${\bf S}_i$.} \label{fig2} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=0.75\textwidth,angle=0]{Fig3.pdf} \caption{(Color online) Average spectral function $A({\bf q},\omega)$ as a function of the energy with ${\bf q}$=$n(2\pi/La)\{1,0\}^\top$ (where $n\in\mathbb{N}$), corresponding to four different concentrations $x$. The system size is $L^{2}=300a\times300a$.} \label{fig3} \end{figure*} \ The dynamical spectral function, also known as the dynamical structure factor, provides valuable insight into the underlying spin dynamics of a system. Experimentally it can be probed with good accuracy by inelastic neutron scattering and ferromagnetic resonance. 
The averaged spectral function is defined by \begin{align} {A}({\bf q},\omega) :={}& -\left\langle \frac{1}{2\pi S} \Im[G^{c}({\bf q},\omega)]\right\rangle_c, \end{align} where $G^{c}({\bf q},\omega)$ is the Fourier transform of the retarded Green's function $G^{c}_{ij}(t)$, and $\langle\dots\rangle_{c}$ denotes the configuration average. Fig.~\ref{fig3} shows the averaged spectral functions as a function of energy for four different concentrations. The $A({\bf q},\omega)$'s are averaged over a few hundred disorder configurations, and the results are plotted only in the [1 0] direction of the Brillouin zone, for progressively increasing momentum $\bf q$, since the focus is on the long wavelength regime here. It should be noted that the [0 1] direction is equivalent to the [1 0] direction in this case, due to the lattice symmetry. Also note that for $q\gg 2\pi/(La)$ the deviation from rotational invariance is not negligible. Well-defined excitations are found to exist only for small values of $\bf q$, in each case. For increasing $\bf q$, the excitation peaks become broader and develop a tail extending toward the higher energies. On decreasing the concentration from 0.08 to 0.02, the zone of stability of the well-defined magnon modes is found to decrease by almost one order of magnitude. Also, the excitations become increasingly asymmetric with increasing momentum. This increase in asymmetry is associated with the crossover from propagating low-energy spin waves to localized or quasi-localized excitations (fractons)\cite{orbach1987} at higher energies. Here, the term \textit{localized} implies that the excitations are quite broad in energy at fixed wave-vectors; quantitatively, the linewidth (i.e.\ the full-width at half-maximum) becomes comparable to or even larger than the excitation energy. 
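For a single disorder configuration, the equations of the previous section at $T$=0 give $G(\omega)=2S\,(\omega+i\eta-{\bf H}_{eff})^{-1}$, and $A({\bf q},\omega)$ then follows from the lattice Fourier transform. A minimal Python sketch (system size, dilution, seed, and the artificial broadening $\eta$ are illustrative assumptions, not the parameters of the actual calculations):

```python
import numpy as np

rng = np.random.default_rng(1)
L, x, S, J0, alpha, eta = 24, 0.1, 1.0, 1.0, 3.0, 0.05

# One diluted configuration and its effective Hamiltonian (a = 1)
sites = np.array([(i, j) for i in range(L) for j in range(L)], dtype=float)
pos = sites[rng.random(len(sites)) < x]
N = len(pos)
d = np.abs(pos[:, None, :] - pos[None, :, :])
d = np.minimum(d, L - d)                      # periodic minimum image
r = np.hypot(d[..., 0], d[..., 1])
J = np.where(r > 0, J0 / np.maximum(r, 1e-12) ** alpha, 0.0)
H = -S * J + np.diag(S * J.sum(axis=0))
eps, U = np.linalg.eigh(H)                    # H = U diag(eps) U^T

def spectral(q, omega):
    """A(q, w) = -(1/(2 pi S)) Im G(q, w + i eta) for one configuration."""
    a = U.T @ np.exp(-1j * pos @ q) / np.sqrt(N)   # plane-wave overlaps
    G = 2 * S * np.sum(np.abs(a) ** 2 / (omega + 1j * eta - eps))
    return -G.imag / (2 * np.pi * S)

q = np.array([2 * np.pi / L, 0.0])            # smallest q in the [1 0] direction
w = np.linspace(0.0, eps.max() + 1.0, 600)
A = np.array([spectral(q, wi) for wi in w])   # non-negative lineshape
```

Averaging `A` over many configurations (and a range of `q`) reproduces the kind of data shown in Fig.~3; the positive-frequency weight approximately satisfies the one-magnon sum rule.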
\ The nature of the spectral functions is similar to what was observed by neutron scattering experiments in Mn$_x$Zn$_{1-x}$F$_2$\cite{uemura1986,uemura1987}, which is a three-dimensional randomly diluted anti-ferromagnet. The authors measured sharp spin waves near the zone center which broadened progressively as the wave vector approached the zone boundary. These findings were attributed to a crossover from low-energy extended spin waves (magnons) to localized high-energy excitations (fractons), which was further consistent with the theoretical conjecture of fractons in disordered percolating networks\cite{orbach1987,aharony1985}. A recent numerical study\cite{mucciolo2004} on site-diluted two-dimensional anti-ferromagnets also reveals the existence of localized excitations at high energies. The authors evaluated the inverse participation ratio (IPR), for different dilutions and different system sizes, in order to establish the nature (extended or localized) of the states, although the largest system size studied was only $32a \times 32a$. These studies provide additional motivation to study two-dimensional ferromagnets from this perspective. The proper and accurate evaluation of the spectral functions, as we shall see in what follows, constitutes a vital task since the magnon dispersion as well as the lifetime can be directly extracted from them. \section{MOMENTS ANALYSIS} \label{sect:Moments} \begin{figure}[htbp]\centerline {\includegraphics[width=\columnwidth,angle=0]{Fig4.pdf}} \caption{(Color online) Spin stiffness $D$ and effective spin stiffness $D_{0}$ as a function of $x$. ($m_{1}(q)\approx D_{0}q^2$, where $m_1$ is the first moment associated with the spectral density). 
The inset shows a comparison of the excitation energies extracted from $A({\bf q},\omega)$ and the first moments $m_1$, respectively, for the case of $x=0.03$.} \label{fig4} \end{figure} \ Before going into further details of the long wavelength magnon properties, we define the moments associated with the spectral density. The $n$-th moment is defined by \begin{align} m_n({\bf q})=\int_{0}^{\infty} \omega^{n} A({\bf q},\omega) d\omega. \label{moments} \end{align} In the limit $q\rightarrow 0$, it can be shown that $m_{1}({\bf q}) \approx D_{0}q^{2}$\cite{georges2007}, where we call $D_{0}$ the effective spin wave stiffness. It is also well known that in the long wavelength limit the dispersion in ferromagnetic systems is quadratic in $q$, $\omega({\bf q}) \approx Dq^{2}$, where $D$ denotes the spin stiffness coefficient. The moments, as sometimes found in the literature\cite{motome2002,motome2005}, are used in spectral function analyses as a good approximation to estimate the excitation energy and linewidth, especially in the presence of disorder. Nonetheless, the accuracy and the viability of this assumption are subject to further examination. In order to address this, as a first step, we numerically calculated the dispersion from the first moment and then compared it to the \textit{real} excitation energy $\omega({\bf q})$ extracted from the $A({\bf q},\omega)$ peaks shown in Fig.~\ref{fig3}. The results for the particular case of $x=0.03$ are plotted in the inset of Fig.~\ref{fig4}. As can be seen, in the small $q$ limit both $m_{1}(q)$ and $\omega({\bf q})$ are linear in $q^2$, but the first moment clearly overestimates the real magnon energies. This is better reflected when we extract the respective spin stiffness coefficients, $D_{0}$ from $m_{1}(q)$ and $D$ from $\omega({\bf q})$, and plot them against the concentration, as shown in Fig.~\ref{fig4}. 
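The sensitivity of the moments to the lineshape can be reproduced with a toy example: for an asymmetric distribution the first moment lies above the peak position, and the width $\sqrt{m_2-m_1^2}$ exceeds the true half-width. A sketch with purely synthetic, illustrative numbers (not the computed $A({\bf q},\omega)$):

```python
import numpy as np

# Toy asymmetric lineshape: narrow Lorentzian peak at w0 plus a weak
# high-energy tail, mimicking a disorder-broadened A(q, w)
w = np.linspace(0.0, 10.0, 4001)
dw = w[1] - w[0]
w0, hwhm = 1.0, 0.1
A = (hwhm / np.pi) / ((w - w0) ** 2 + hwhm ** 2)
A = A + 0.2 * np.exp(-(w - w0) / 2.0) * (w > w0)
A = A / (A * dw).sum()                   # normalize so that m0 = 1

m1 = (w * A * dw).sum()                  # first moment
m2 = (w ** 2 * A * dw).sum()
gamma_eff = np.sqrt(m2 - m1 ** 2)        # second-moment width
w_peak = w[np.argmax(A)]                 # "real" excitation energy (peak)

# m1 lands well above w_peak, and gamma_eff exceeds the true FWHM
# (2 * hwhm): the moments overestimate both energy and linewidth
```

The same mechanism, with the tail generated by disorder instead of being put in by hand, underlies the overestimation of $D_0$ relative to $D$ discussed in the text.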
For all considered $x$, the effective spin stiffness is larger than the actual spin stiffness, overestimating it by 15-20\% in each case. This clearly demonstrates that the first moment is not a reliable quantity to evaluate the spin stiffness in these diluted systems, as it fails to reproduce the magnon energies precisely. \begin{figure}[htbp]\centerline {\includegraphics[width=\columnwidth,angle=0]{Fig5.pdf}} \caption{(Color online) Effective linewidth $\gamma_{0}$ (in units of $J_{0}S^{2}$) as a function of $q$ (in the [1 0] direction), for different concentrations $x$, Eq.~(\ref{eff_linewidth}). The dashed lines indicate the linear fits.} \label{fig5} \end{figure} \ The other relevant quantity of interest is the intrinsic linewidth of the magnetic excitations. The linewidth gives a measure of the excitations' broadening due to disorder, which may be magnetic or structural, or due to magnon-magnon interactions. One can obtain the linewidth from the first two moments via the relation\cite{motome2002} \begin{align} \gamma_{0}({\bf q})= \sqrt{m_{2}({\bf q})-m_{1}^{2}({\bf q})} \label{eff_linewidth} \end{align} where $\gamma_{0}({\bf q})$ is the effective linewidth. In Fig.~\ref{fig5} we have plotted this effective linewidth as a function of ${\bf q}$ for four different impurity concentrations. We find that in the small-$q$ limit the linewidth is linear in $q$ for all considered $x$. The same holds true for all other intermediate concentrations, which are not shown here. Consequently, we end up with $\omega \propto q^2$ and $\gamma_{0} \propto q$ in the limit $q\rightarrow 0$. This indicates that the magnetic excitations are incoherent or localized around the $\Gamma$-point, since $\gamma_{0} > \omega$. However, this is somewhat contrary to what we have observed in the spectral functions shown in Fig.~\ref{fig3}, where the excitations are well defined for small $q$ values. 
Hence, we can safely assume that the effective linewidth obtained from the moments does not correspond to the real linewidth of the excitations. The same discrepancy was also demonstrated for the case of Ga$_{1-x}$Mn$_x$As\cite{georges2007}, a well-known III-V diluted ferromagnetic semiconductor, where the lattice has an fcc structure. Note that a similar linear $q$-dependence, obtained from the moments analysis, was reported by the authors of Ref.~\onlinecite{motome2005} for disordered double-exchange systems. Determining the correct $q$-dependence of the intrinsic linewidth, in the long wavelength limit, requires further detailed analysis, which is elucidated in the following. \section{SCALING OF MAGNON LIFETIME} \label{sect:Lifetime} \ We extract the linewidth, which is the full-width at half-maximum, directly from the magnon spectral functions (Fig.~\ref{fig3}) corresponding to the first non-zero $q$ values from different system sizes. The extracted linewidths are plotted as a function of the wave-vector in Figs.~\ref{fig6a} and \ref{fig6b}, for $x=0.03$ and 0.05, respectively. In order to have sufficiently small $q$ values, and also to check for possible finite-size effects, we have performed the calculations on system sizes ranging from $200a\times200a$ up to $500a\times500a$. The linewidth data are averaged over one hundred disorder configurations and the error bars corresponding to the standard deviation are contained within the symbols. Now, since we are interested in the $q\rightarrow 0$ regime, we focus on a restricted region of the $q$ values (highlighted by the shaded regions in the plots), in order to give more weight to the smallest available $q$'s. We remark that the boundary of the shaded regions serves only as an approximate value and not as a sharp demarcation of the $q$ regime defining the long wavelength limit. Note that the value of $\ln(qa)\approx -4$ corresponds to a value $qa\approx 0.02$. 
To determine the $q$-dependence we use a linear fit of the form $n\ln(qa)+C$ (with $n=3$, 4, and 5) for the data within these shaded regions. As can be clearly seen for both cases, $x=0.03$ and 0.05, it is the $n=4$ fit (denoted by the solid line) which best describes the linewidth behavior in this region. Beyond this region, the linewidth begins to deviate from this behavior, although the deviations are smaller for $x=0.05$ than for $x=0.03$. Also note that the same $q$-scaling was observed for the other concentrations as well. This clearly shows that in the long wavelength limit and at low temperatures the intrinsic linewidth actually scales as $q^4$ in these two-dimensional systems. Our findings are, interestingly, in good agreement with the prediction of a $q^{d+2}$ dependence reported in Ref.~\onlinecite{christou1986}. This agreement is not obvious, since in the latter work a different analytical approach, based on diagrammatic perturbation theory, was used and the couplings were restricted to nearest neighbors only. Our study, by contrast, is more general in the sense that the couplings are extended and the linewidth is extracted directly from the magnon spectral functions. In this context, it is worth mentioning that studies based on virtual-crystal-like approaches often lead to an infinite lifetime, since the spin fluctuations are unaccounted for and the disorder effects are neglected, implying no mechanism for magnon decay. However, disorder plays an essential role, as shown here, in leading to a finite lifetime in these systems. From this $q^4$ scaling we infer that, in the long wavelength limit, the linewidth is actually smaller than the excitation energy, which was qualitatively clear from the well-defined peaks observed around the $\Gamma$-point in the spectral functions. 
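The exponent extraction described above amounts to a linear fit in log-log coordinates. A self-contained sketch with synthetic $\gamma\propto q^4$ data (noise level and prefactor are illustrative assumptions, not the computed linewidths) shows how comparing fixed-slope fits singles out $n=4$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic linewidths gamma = c * q^4 with mild multiplicative noise,
# standing in for the values extracted from A(q, w)
q = np.geomspace(5e-3, 5e-2, 12)
gamma = 0.8 * q ** 4 * rng.lognormal(0.0, 0.05, q.size)

# Free fit of ln(gamma) = n ln(q) + C
n_fit, C = np.polyfit(np.log(q), np.log(gamma), 1)

# Fixed-slope fits n = 3, 4, 5 as in the text: compare residuals
sse = {}
for m in (3, 4, 5):
    Cm = np.mean(np.log(gamma) - m * np.log(q))     # best offset at fixed slope
    sse[m] = np.sum((np.log(gamma) - m * np.log(q) - Cm) ** 2)
best = min(sse, key=sse.get)                        # -> 4
```

On real data the fit window matters: including $q$ values outside the asymptotic regime biases the recovered slope, which is exactly the difficulty discussed in the text.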
Nevertheless, as we have seen, it is difficult to identify precisely the values of $q$ below which this behavior holds, and these values, in turn, should also depend on the concentration $x$. Similar difficulties were also demonstrated in the case of three-dimensional systems\cite{akash2015}, where a $q^3$ behavior, instead of $q^5$, was observed if the considered $q$ values were not sufficiently small. In the present case we do not observe any clear crossover from the $q^4$ scaling to any other form within the considered range of wave-vectors. We also conclude that the scaling of the linewidths does in fact depend on the dimensionality. \begin{figure}[htbp] \centering \subfigure{ \includegraphics[width=\columnwidth]{Fig6a.pdf} \label{fig6a} } \subfigure{ \includegraphics[width=\columnwidth]{Fig6b.pdf} \label{fig6b} } \caption[Optional caption for list of figures]{(Color online) Logarithm of the magnon linewidth $\gamma$ (in units of $J_{0}S^{2}$) as a function of $\ln(qa)$, for (a) $x=0.03$, and (b) $x=0.05$. Dashed (red), solid (green), and dot-dashed (blue) lines indicate linear fits of the form $n\ln(qa)+C$ ($n=3$, 4, and 5) for the linewidth data within the shaded region.} \label{fig6} \end{figure} \ As already mentioned, the energy and the linewidth calculated from the moments do not coincide with the real ones extracted from the spectral function. The reason is that the moments can reproduce the characteristic features of a distribution only when it is symmetric, such as a Gaussian or a Lorentzian. In the present case, the spectral functions are actually asymmetric, and hence the moments prove inappropriate to estimate the real line shape and the peak positions. In the clean case (absence of any disorder) one is likely to get reliable results from the moments analysis of the spectral functions, as the excitations can be completely symmetric. 
However, disorder leads to a strong asymmetry of the excitations, as we have seen in the present case. There is a considerable broadening in the spectrum observed especially close to the zone boundary. Thermal fluctuations also play an important role in these systems, but since we focus only on the low-temperature excitations we can neglect the thermal effects here. Further experimental studies to quantitatively examine the linewidth in these compounds could prove to be very useful. \section{CONCLUSION} \label{sect:Conclusions} \ We have addressed the low temperature spin excitations in two-dimensional diluted Heisenberg systems, with a particular focus on the long wavelength limit. A self-consistent Green's functions based approach is used to evaluate the magnon DOS and the dynamical spectral functions. Well-defined excitations are observed only in a restricted region of the Brillouin zone, around the $\Gamma$-point. It is demonstrated that determining the correct wave-vector dependence of the magnon linewidth in diluted systems is not an ordinary task. Contrary to some previous studies, we have shown that the moments associated with the spectral function are inappropriate to determine the linewidth or the excitation energies. The moments overestimate the real spin stiffness as well as provide a linear $q$-dependence of the linewidth, implying incoherent excitations in the limit $q\rightarrow 0$. However, this is found to be inconsistent with the stiffness and the linewidth extracted from the calculated spectral functions. In the long wavelength limit, the linewidth in fact scales as $q^4$ in two-dimensional systems, for a wide range of impurity concentrations. The discrepancy arises due to the inability of the moments to reproduce the asymmetry in the excitation peaks. The origin of this asymmetry, and thus a finite lifetime, is ascribed to the disorder induced broadening of the spin waves. 
Hence, this underlines the importance of the disorder effects in these systems, and we emphasize that a failure to properly account for them will result in an incorrect wave vector dependence of the linewidth. \ Most data storage devices in present-day spintronics rely on manipulating the dynamical motion of spins. From this perspective, a precise knowledge of the excitations' lifetime (inversely proportional to the linewidth) could be of practical relevance. For example, a short lifetime is important for memory devices to leave a bit in a steady state after a read-in or read-out operation. On the contrary, a longer lifetime is advantageous for the unhampered transmission of signals in inter-chip communications. It would be equally interesting to look into the temperature effects on the spin dynamics, in particular the linewidth, where in addition to disorder the thermal effects also play a vital role. However, this is beyond the scope of the current work. The present findings provide qualitative insights into the low temperature excitations and the magnon lifetimes in two-dimensional ferromagnets, and could serve as a firm basis for future research on complex disordered magnets. More experimental studies oriented in this direction are also highly desirable to resolve the controversy arising from the numerous theoretical proposals. \acknowledgments \ We acknowledge financial support by DFG within the collaborative research center SFB 689. AC would like to thank Georges Bouzerar for insightful comments and discussions.
\section{Introduction} The Heisenberg commutation relations are \begin{equation} \left[ {\widehat{P}}_{i},{\widehat{Q}}_{j}\right] =i \hbar \delta _{i,j}\text{\boldmath $1$} \label{MG: Heisenberg commutation relations} \end{equation} \noindent where $i,j,...=1,...,n$.\ \ The hermitian operators ${\widehat{P}}_{i}$ and ${\widehat{Q}}_{j}$ represent quantum mechanical momentum and position observables acting on states $|\psi \rangle $ that are elements of a Hilbert space $\text{\boldmath $\mathrm{H}$}$ for which $\text{\boldmath $1$}$ is the unit operator.\ \ (We will use natural units in which $\hbar =1$ throughout the paper.) These relations are fundamental to quantum mechanics in its original formulation.\ \ Weyl \cite{weyl} established that these relations are the Hermitian representation of the algebra of a Lie group $\mathcal{H}( n) $ that we now call the Weyl-Heisenberg group. The Weyl-Heisenberg Lie group is a semidirect product \cite{folland} of two abelian groups\footnote{In our notation for a semidirect product $\mathcal{G}\simeq \mathcal{K}\otimes _{s}\mathcal{N}$, $\mathcal{N}$ is the normal subgroup (see Definition 1 in Appendix A).\ \ Also, $\mathcal{A}\simeq \mathcal{B}$ is the notation for a group isomorphism.} \begin{equation} \mathcal{H}( n) \simeq \mathcal{A}( n) \otimes _{s}\mathcal{A}( n+1) \label{MG: WH definition} \end{equation} \noindent where $\mathcal{A}( m) $ is the abelian Lie group isomorphic to the reals under addition, $\mathcal{A}( m) \simeq (\mathbb{R}^{m},+)$.\ \ Therefore, it has an underlying manifold diffeomorphic to $\mathbb{R}^{2n+1}$ and is simply connected.\ \ In a global coordinate system $p,q\in \mathbb{R}^{n}$, $\iota \in \mathbb{R}$, the group product and inverse of the Weyl-Heisenberg group may be written \begin{gather} \Upsilon ( p^{\prime },q^{\prime },\iota ^{\prime }) \Upsilon ( p,q,\iota ) =\Upsilon ( p^{\prime }+p,q^{\prime }+q,\iota +\iota ^{\prime }+\frac{1}{2} \left( p^{\prime }\cdot q-q^{\prime }\cdot p\right) 
) \label{MG: Heisenberg group product} \\{\Upsilon ( p,q,\iota ) }^{-1}=\Upsilon ( -p,-q,-\iota ) \label{MG: Heisenberg group inverse} \end{gather} \noindent The identity element is $e=\Upsilon ( 0,0,0) $.\ \ Its Lie algebra is given by \begin{equation} \left[ P_{i},Q_{j}\right] = \delta _{i,j}I,\ \ \left[ P_{i},I\right] =0,\ \ \left[ Q_{i},I\right] =0 \label{MG: WH algebra} \end{equation} The faithful unitary irreducible representations $\xi $ of the Weyl-Heisenberg group may be written as \begin{equation} \psi ^{\prime }( x) =\left( \xi ( \Upsilon ( p,q,\iota ) ) \psi \right) \left( x\right) =e^{i \lambda ( \iota +x\cdot p-\frac{1}{2}p\cdot q) }\psi ( x-q) \label{MG: Unitary representaton of WH} \end{equation} \noindent where $p,q,x\in \mathbb{R}^{n}$, $\iota \in \mathbb{R}$, and $\lambda \in \mathbb{R}\backslash \{0\}$ labels the irreducible representations. Here $\psi ( x) =\langle x|\psi \rangle \in {\text{\boldmath $\mathrm{H}$}}^{\xi }\simeq {\text{\boldmath $L$}}^{2}( \mathbb{R}^{n},\mathbb{C}) $.\ \ We label the Hilbert space with the unitary representation $\xi $ as this Hilbert space, on which the unitary representation $\xi $ acts, is determined by the unitary irreducible representation and is not given {\itshape a priori}. The Stone-von Neumann theorem \cite{Stone}, \cite{vonNeumann} establishes that (6) defines the complete set of faithful irreducible representations of the Weyl-Heisenberg group. This theorem is not constructive; it does not give a prescription to obtain these representations but only establishes that they form a complete set of faithful irreducible representations. However, as the Weyl-Heisenberg group has the form of the semidirect product given in (2), the unitary irreducible representations (6) can also be calculated directly using the Mackey theorems, as these theorems are constructive. This is reviewed in Section 3.1. 
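The group law (3)-(4) can be checked numerically: associativity holds because the cocycle $\frac{1}{2}( p^{\prime }\cdot q-q^{\prime }\cdot p) $ is bilinear. A small Python sketch (our illustration, taking $n=2$ and random group elements):

```python
import numpy as np

# Group elements of H(n) as tuples (p, q, iota); product and inverse
# as in Eqs. (3)-(4) of the text
def prod(g1, g2):
    p1, q1, i1 = g1
    p2, q2, i2 = g2
    return (p1 + p2, q1 + q2,
            i1 + i2 + 0.5 * (p1 @ q2 - q1 @ p2))

def inv(g):
    p, q, i = g
    return (-p, -q, -i)

rng = np.random.default_rng(3)
g = [(rng.normal(size=2), rng.normal(size=2), rng.normal()) for _ in range(3)]

# Associativity: (g1 g2) g3 == g1 (g2 g3)
a = prod(prod(g[0], g[1]), g[2])
b = prod(g[0], prod(g[1], g[2]))
assert all(np.allclose(xa, xb) for xa, xb in zip(a, b))

# Inverse: g g^{-1} = e = (0, 0, 0)
e = prod(g[0], inv(g[0]))
assert all(np.allclose(xe, 0.0) for xe in e)
```

The inverse check works because the antisymmetric cocycle vanishes when evaluated on $(p,q)$ against $(-p,-q)$, which is exactly why $\Upsilon(p,q,\iota)^{-1}=\Upsilon(-p,-q,-\iota)$ needs no extra central shift.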
The position and momentum operators in (1) are given by the faithful\footnote{There are also degenerate representations corresponding to the homomorphism $\pi :\mathcal{H}( n) \rightarrow \mathcal{A}( 2n) $ for which $\lambda =0$ (See Appendix A, Theorem 4). These representations of the abelian group are not discussed further here.} hermitian representation $\xi ^{\prime }$ of the Weyl-Heisenberg algebra. (The prime designates the lift of the unitary representation $\xi $ of the group to the algebra, $\xi ^{\prime }=T_{e}\xi $.) \begin{equation} {\widehat{P}}_{i}=\xi ^{\prime }( P_{i}) , {\widehat{P}}_{i}=\xi ^{\prime }( P_{i}) , \widehat{I}=\xi ^{\prime }( I) . \end{equation} These operators also act on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }\simeq {\text{\boldmath $L$}}^{2}( \mathbb{R}^{n},\mathbb{C}) $.\ \ As the representation is a homomorphism, its lift preserves the Lie bracket, \begin{equation} \left. \upsilon ^{\prime }( \left[ P_{i},Q_{i}\right] ) =\left[ \upsilon ^{\prime }( P_{i}) ,\upsilon ^{\prime }( Q_{i}) \right] \right) =\ \ i\ \ \delta _{i,j}\upsilon ^{\prime }( I) =i\ \ \lambda \delta _{i,j}\text{\boldmath $1$} \label{MG: irrep Weyl-Heisenberg algebra} \end{equation} \noindent The $i$ appears simply because we are using hermitian rather than anti-hermitian operators\footnote{In some neighborhood, the group element $g$ is given in terms of an element\ \ $X$ of the algebra by $g=e^{X}$. 
Then for a unitary representation $\upsilon $ of the group, the representation $\upsilon ^{\prime }$of the algebra is Hermitian (rather than non-Hermitian only if we insert an $i$, $\upsilon ( g) =e^{i \upsilon ^{\prime }( X) }$.\ \ This follows as\ \ ${\upsilon ( g) }^{-1}={\upsilon ( g) }^{\dagger }$ implies $-i \upsilon ^{\prime }( X) ={(i \upsilon ^{\prime }( X) )}^{\dagger }$ and hence $\upsilon ^{\prime }( X) ={\upsilon ^{\prime }( X) }^{\dagger }$.}.\ \ Schur's lemma states that the representation of the central generators are a multiple of the identity for irreducible representations and so $\widehat{I}=\lambda \text{\boldmath $1$}$ where $\lambda \in \mathbb{R}\backslash \{0\}$.\ \ With $\lambda =1$,\ \ these are the Heisenberg commutation relations given in (1).\ \ \subsection{Symmetry of Physical States} A basic assumption of quantum mechanics is that the Heisenberg commutation relations (1) are valid when acting on any physical state. Physically observable probabilities are given by the square of the modulus of the states. Therefore, physical states in quantum mechanics are rays $\Psi $ that are equivalence classes of states $|\psi \rangle $ in the Hilbert space that are equal up to a phase \cite{wigner}, \cite{Weinberg1}, \begin{equation} \Psi =\left[ \left| \psi \right\rangle \right] , \left| \widetilde{\psi }\right\rangle \simeq \left| \psi \right\rangle \ \ \ \ \left| \widetilde{\psi }\right\rangle =e^{i \theta } \left| \psi \right\rangle . \end{equation} \noindent The square of the modulus is the same for any representative state in the ray, \begin{equation} P( \alpha \rightarrow \beta ) =|\left( \Psi _{\beta },\Psi _{\alpha }\right) |^{2}=|\left\langle {\widetilde{\psi }}_{\beta }|{\widetilde{\psi }}_{\alpha }\right\rangle |^{2}=|\left\langle \psi _{\beta }|\psi _{\alpha }\right\rangle |^{2}. \end{equation} Symmetry transformations between physical states (i.e. 
rays $\Psi$) are given by operators $U$ that leave invariant the square of modulus, \begin{equation} |\left( U \Psi _{\beta },U \Psi _{\alpha }\right) |^{2}=|\left( \Psi _{\beta },\Psi _{\alpha }\right) |^{2}. \end{equation} \noindent These transformations $U$ are the representation of a group in the space $U( \mathrm{H}) $ of linear or anti-linear operators on $\text{\boldmath $\mathrm{H}$}$ \begin{equation} \varrho :\mathcal{G}\rightarrow U( \mathrm{H}) : g\mapsto U=\varrho ( g) . \end{equation} \noindent This\ \ operator also acts on any representative in the equivalence class of states that defines the ray, \begin{equation} \Psi ^{\prime }=U \Psi ,\ \ \left| \psi ^{\prime }\right\rangle =U\left| \psi \right\rangle . \end{equation} \noindent Theorem 2 in Appendix A states that any representation of a Lie group \cite{bargmann}, \cite{mackey2} that leaves invariant the square of the modulus is always equivalent to a linear unitary or anti-linear, anti-unitary operator mapping the Hilbert space $\text{\boldmath $\mathrm{H}$}$ into itself. Furthermore, if the Lie group is connected\footnote{In this paper, a connected group is abbreviation for a group for which every element is connected by a continuous path to the identity element. }, it is always equivalent to a linear unitary operator.\ \ The representations $\varrho $ are referred to as projective representations.\ \ If $\mathcal{G}$ is a connected Lie group, the fundamental Theorem 3 states that these projective representations are equivalent to the ordinary unitary representations $\upsilon $ of the central extension $\widecheck{\mathcal{G}}$ of $\mathcal{G}$. 
We seek the maximal group with projective representations that preserve the Heisenberg commutation relations.\ \ As the Heisenberg commutation relations are a faithful unitary representation of the Lie algebra of the Weyl-Heisenberg group, the group we seek must be a subgroup of the automorphism group of the Weyl-Heisenberg algebra.\ \ As the Weyl-Heisenberg group is simply connected, the automorphism group of the algebra is equivalent to the automorphism group ${\mathcal{A}ut}_{\mathcal{H}( n) }$ of the Weyl-Heisenberg group itself. Under the action of elements $g\in {\mathcal{A}ut}_{\mathcal{H}( n) }$, the elements of the algebra transform to a new basis \begin{equation} {P^{\prime }}_{i}=g P_{i} g^{-1}, {Q^{\prime }}_{i}=g Q_{i} g^{-1}, I^{\prime }=g I g^{-1}. \end{equation} \noindent such that the form of the Lie algebra is preserved, \begin{equation} \left[ {P^{\prime }}_{i},{Q^{\prime }}_{i}\right] =\delta _{i,j}I^{\prime } \label{MG: prined WH algebra} \end{equation} \noindent The element $I^{\prime }$ is central and as $I$ spans the center of the algebra, we must have $I^{\prime }=d I$ with $d\in \mathbb{R}\backslash \{0\}$. Furthermore, the elements of the automorphism group that preserves the center of the algebra, \begin{equation} I^{\prime }=g I g^{-1}=I, \end{equation} \noindent defines a subgroup. The group inner automorphisms of a group\ \ is isomorphic to the group itself.\ \ The full group of automorphisms always contains the group of inner automorphisms as a normal subgroup. 
For the case of the Weyl-Heisenberg group, this means that the Weyl-Heisenberg group is a normal subgroup of its automorphism group, $\mathcal{H}( n) \subset {\mathcal{A}ut}_{\mathcal{H}( n) }$.\ \ \ The projective representations of ${\mathcal{A}ut}_{\mathcal{H}( n) }$ are equivalent to the unitary representations $\upsilon $ of its central extension ${\widecheck{\mathcal{A}ut}}_{\mathcal{H}( n) }$\ \ acting on a Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\upsilon }$.\ \ If we restrict $\upsilon $ to the normal subgroup $\mathcal{H}( n) $ of inner automorphisms, these are the unitary representations of the Weyl-Heisenberg group, $\upsilon |_{\mathcal{H}( n) }=\xi $.\ \ Therefore, the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }$ is an invariant subspace of ${\text{\boldmath $\mathrm{H}$}}^{\upsilon }$.\ \ The generators of the Weyl-Heisenberg group transform under the action of elements $U=\upsilon ( g) $, $g\in {\mathcal{A}ut}_{\mathcal{H}( n) }$ as \begin{equation} \begin{array}{l} \begin{array}{l} {{\widehat{P}}^{\prime }}_{i}=\upsilon ^{\prime }( {P^{\prime }}_{i}) =\upsilon ^{\prime }( g P_{i}g^{-1}) =\upsilon ( g) \xi ^{\prime }( P_{i}) {\upsilon ( g) }^{-1}=U {\widehat{P}}_{i}U^{-1}, \\ {{\widehat{Q}}^{\prime }}_{i}=\upsilon ^{\prime }( {Q^{\prime }}_{i}) =\upsilon ^{\prime }( g Q_{i}g^{-1}) =\upsilon ( g) \xi ^{\prime }( Q_{i}) {\upsilon ( g) }^{-1}=U {\widehat{Q}}_{i}U^{-1}, \\ {\widehat{I}}^{\prime }=\upsilon ^{\prime }( I^{\prime }) =\upsilon ^{\prime }( g I g^{-1}) =\upsilon ( g) \xi ^{\prime }( I) {\upsilon ( g) }^{-1}=U \widehat{I}U^{-1}, \end{array} \end{array} \end{equation} For the faithful representation $\upsilon $, the commutation relations for the transformed generators are, using (15), \begin{equation} \left[ {{\widehat{P}}^{\prime }}_{i},{{\widehat{Q}}^{\prime }}_{i}\right] =i\ \ \delta _{i,j}{\widehat{I}}^{\prime }=i \lambda ^{\prime } \delta _{i,j}\text{\boldmath $1$} \end{equation} \noindent where ${\widehat{I}}^{\prime 
}=d\widehat{I}$ and so $\lambda ^{\prime }=d \lambda $.\ \ Now, as we have noted, the $\lambda $ label the faithful irreducible representations of the Weyl-Heisenberg group. Furthermore, the physical cases corresponds to the choice $\lambda =1$.\ \ This must also be true for the transformed operators and therefore $\lambda ^{\prime }=1$ and so ${\widehat{I}}^{\prime }=\widehat{I}$\ \ with $d=1$.\ \ That is, the projective representation of the symmetry group of the Heisenberg commutation relations leaves the representation of the center of the Weyl-Heisenberg group invariant.\ \ As the representation is faithful, the symmetry group also must leave the central generator of the Weyl-Heisenberg algebra invariant, $I^{\prime }=I$. Therefore, the maximal group of symmetries of the Heisenberg commutation relations are the projective representation of the subgroup of the automorphism group of the Weyl-Heisenberg group that leaves the central generator $I$ invariant. The problem that this paper addresses is to determine the explicitly this symmetry group and its projective representations. 
We will show that the automorphism group of the Weyl-Heisenberg group is \cite{folland}\ \ \ \begin{equation} {\mathcal{A}ut}_{\mathcal{H}( n) }\simeq \mathcal{D}\otimes _{s}\mathcal{H}\overline{\mathcal{S}p}( 2n) \label{MG: Original Aut group def} \end{equation} \noindent where \begin{equation} \mathcal{H}\overline{\mathcal{S}p}( 2n) \simeq \overline{\mathcal{S}p}( 2n) \otimes _{s}\mathcal{H}( n) ,\ \ \ \ \ \mathcal{D}\simeq \left( \mathbb{R}\backslash \left\{ 0\right\} ,\times \right) , \end{equation} \noindent where $\mathcal{D}$ is the reals excluding $\{0\}$ viewed as a group under multiplication, $\mathcal{D}\simeq (\mathbb{R}\backslash \{0\},\times )$.\ \ We will show that the subgroup of the automorphism group that leaves the central generator $I$ invariant is\ \ \ \begin{equation} \mathcal{H}\overline{\mathcal{S}p}( 2n) \label{MG: ce of symmmetry group} \end{equation} \noindent The group $\mathcal{H}\overline{\mathcal{S}p}( 2n) $ is connected and is the central extension of the Inhomogeneous group, $\mathcal{H}\overline{\mathcal{S}p}( 2n) \simeq \mathcal{I}\widecheck{\mathcal{S}p}( 2n) $ that is defined by the short exact sequence \begin{equation} e\rightarrow \mathbb{Z}\otimes \mathcal{A}( 1) \rightarrow \mathcal{H}\overline{\mathcal{S}p}( 2n) \rightarrow \mathcal{I}\mathcal{S}p( 2n) \rightarrow e \label{MG: short exact sequence for hsp} \end{equation} \noindent $\mathbb{Z}$ is the center of $\overline{\mathcal{S}p}( 2n) $ and $\mathcal{A}( 1) $ is the center of $\mathcal{H}( n) $.\ \ $\mathcal{I}\mathcal{S}p( 2n) $ is the inhomogeneous symplectic group familiar from classical Hamiltonian mechanics, \begin{equation} \mathcal{I}\mathcal{S}p( 2n) \equiv \mathcal{S}p( 2n) \otimes _{s}\mathcal{A}( 2n) \label{MG: ISp} \end{equation} To establish the above results we start by reviewing the Weyl-Heisenberg group. We then derive its automorphism group and the subgroup that leaves the center of the Weyl-Heisenberg group invariant. 
This is the maximal symmetry group. The projective representations of this symmetry group are equivalent to the unitary representations of its central extension.\ \ We use the Mackey theorems to compute the unitary irreducible representations of the symmetry group from first principles. (As the symmetry group contains the Weyl-Heisenberg group as normal subgroup, this first requires the computation of the faithful unitary irreducible representations of the Weyl-Heisenberg group itself using the Mackey theorems.)\ \ We will enumerate and comment on the degenerate cases.\ \ \ \ \section{The symmetry group} In this section, we review basic properties of the Weyl-Heisenberg group and determine its automorphism group.\ \ We then determine the subgroup leaving the center of the Weyl-Heisenberg group invariant and study certain of its properties. \subsection{The Weyl-Heisenberg group} The Weyl-Heisenberg Lie group is defined to be the semi-direct product of two abelian groups of the form given in (2). 
We first verify that these group product (3) and inverse (4) relations result in the semidirect product of this form.\ \ First, the group product and inverse (3-4) enable us to identify the abelian subgroups \begin{equation} \Upsilon ( 0,q,\iota ) \in \mathcal{A}( n+1) , \Upsilon ( p,0,0) \in \mathcal{A}( n) \label{MG: WH normal q i} \end{equation} \noindent where again $p,q\in \mathbb{R}^{n}$ and $\iota \in \mathbb{R}$.\ \ These subgroups satisfy the group product and inverse relations \begin{gather} \Upsilon ( 0,q^{\prime },\iota ^{\prime }) \Upsilon ( 0,q,\iota ) =\Upsilon ( 0,q^{\prime }+q,\iota +\iota ^{\prime }) ,\ \ {\Upsilon ( 0,q,\iota ) }^{-1}=\Upsilon ( 0,-q,-\iota ) \\\Upsilon ( p^{\prime },0,0) \Upsilon ( p,0,0) =\Upsilon ( p^{\prime }+p,0,0) , {\Upsilon ( p,0,0) }^{-1}=\Upsilon ( -p,0,0) \label{MG: Heisenberg group product} \end{gather} \noindent Additional abelian subgroups are likewise given by \begin{equation} \Upsilon ( p,0,\iota ) \in \mathcal{A}( n+1) ,\ \ \Upsilon ( 0,q,0) \in \mathcal{A}( n+1) \label{MG: WH normal p i} \end{equation} We calculate the inner automorphisms of the group using (3-4) to be\footnote{We always use $\varsigma $ to define the similarity map $\varsigma _{g}h\equiv g h g^{-1}$ in what follows.} \begin{equation} \begin{array}{ll} \varsigma _{\Upsilon ( p^{\prime },q^{\prime },\iota ^{\prime }) }\Upsilon ( p,q,\iota ) & =\Upsilon ( p^{\prime },q^{\prime },\iota ^{\prime }) \Upsilon ( p,q,\iota ) {\Upsilon ( p^{\prime },q^{\prime },\iota ^{\prime }) }^{-1} \\ & =\Upsilon ( p,q,\iota +p^{\prime }q-q^{\prime }\cdot p) . 
\end{array \label{MG: WH inner aut unpolarized} \end{equation} \noindent In particular, note that for each of the choices of the subgroups \begin{gather} \varsigma _{\Upsilon ( p^{\prime },q^{\prime },\iota ^{\prime }) }\Upsilon ( 0,q,\iota ) =\Upsilon ( 0,q,\iota +p^{\prime }q) \label{MG: WH inner aut q t} \\\varsigma _{\Upsilon ( p^{\prime },q^{\prime },\iota ^{\prime }) }\Upsilon ( p,0,\iota ) =\Upsilon ( p,0,\iota -q^{\prime }\cdot p) \label{MG: WH inner aut p i} \end{gather} \noindent This means that both of the $\mathcal{A}( n+1) $ subgroups given in (24), (27) are normal subgroups.\ \ Another special case of (3) is \begin{equation} \varsigma _{\Upsilon ( 0,0,\iota ^{\prime }) }\Upsilon ( p,q,\iota ) =\Upsilon ( p,q,\iota ) \label{MG: center of H} \end{equation} \noindent and therefore the elements $\Upsilon ( 0,0,\iota ^{\prime }) $ commute with all elements of the group. Furthermore, these are the only elements that commute with all other elements of the group.\ \ Therefore the $\mathcal{A}( 1) $ group that is defined by the elements $\Upsilon ( 0,0,\iota ) $ is the center of the group, $\mathcal{Z}\simeq \mathcal{A}( 1) $. 
The final step to verify that the group relations defined by (3-4) results in the Weyl-Heisenberg group having the structure of a semidirect product given in (2).\ \ We have already established that there are two choices for the $\mathcal{A}( n) $ subgroup and $\mathcal{A}( n+1) $ normal subgroup.\ \ It is clear in both cases that \begin{equation} \mathcal{A}( n) \cap \mathcal{A}( n+1) =\text{\boldmath $e$}\text{\boldmath $,$} \end{equation} \noindent as the identity $\Upsilon ( 0,0,0) $ is the only element in both groups for both cases.\ \ It remains to show that $\mathcal{A}( n+1) \mathcal{A}( n) \simeq \mathcal{H}( n) $.\ \ Using the group product (3),\ \ for each of the cases (24), (27),\ \ this is \begin{gather} \Upsilon ( 0,q,\iota ) \Upsilon ( p,0,0) =\Upsilon ( p,q,\iota -\frac{1}{2}q\cdot p) , \\\Upsilon ( p,0,\iota ) \Upsilon ( 0,q,0) =\Upsilon ( p,q,\iota +\frac{1}{2}p\cdot q) . \end{gather} The map \begin{equation} \varphi ^{\pm }:\mathcal{H}( n) \rightarrow \mathcal{H}( n) : \Upsilon ( p,q,\iota ) \mapsto \Upsilon ^{\pm }( p,q,\iota ^{\pm }) =\Upsilon ( p,q,\iota \mp \frac{1}{2}p\cdot q) \label{MG: WH Isomorphism} \end{equation} \noindent is a homomorphism that is onto and the kernel is trivial. Therefore, the map $\varphi ^{\pm }$ is an isomorphism and the Weyl-Heisenberg group has the semidirect product structure given in (2) for either of the choices of abelian subgroup given by (24), (27). The Weyl-Heisenberg\ \ Lie group is a matrix group and may be realized by the $2n+2$ dimensional square matrices\ \ \begin{equation} \Upsilon ( p,q,\iota ) =\left( \begin{array}{llll} 1_{n} & 0 & 0 & p \\ 0 & 1_{n} & 1 & q \\ q^{\mathrm{t}} & -p^{\mathrm{t}} & 1 & 2\iota \\ 0 & 0 & 0 & 1 \end{array}\right) . 
\end{equation} \noindent $1_{m}$ denotes the unit matrix in $m$ dimensions and the t superscript denotes the transpose.\ \ The group multiplication and inverse (3-4) are realized by matrix multiplication and inverse.\ \ The Lie algebra of the Weyl-Heisenberg group may be computed from this matrix realization.\ \ The coordinates are nonsingular at the origin and therefore, choosing the unpolarized form, the generators are given by \begin{equation} Q_{i}=\frac{\partial }{\partial p^{i}} \Upsilon ( p,q,\iota ) |_{e} , P_{i}=\frac{\partial }{\partial q^{i}} \Upsilon ( p,q,\iota ) |_{e}, I=\frac{\partial }{\partial \iota } \Upsilon ( p,q,\iota ) |_{e}. \end{equation} \noindent A general element of the algebra is then \begin{equation} W=p^{i}Q_{i}+q^{i}P_{i}+\iota I. \end{equation} \noindent The nonzero commutation relations are,\ \ as expected, \begin{equation} \left[ P_{i},Q_{i}\right] =\delta _{i,j}I \end{equation} \noindent where $I$ is a central generator. It is convenient to also introduce the notation that combines the $p,q\in \mathbb{R}^{n}$ into a single $2n$ tuple $z=(p,q)$, $z\in \mathbb{R}^{2n}$.\ \ Then the group product and inverse are \begin{equation} \Upsilon ( z^{\prime },\iota ^{\prime }) \Upsilon ( z,\iota ) =\Upsilon ( z^{\prime }+z,\iota +\iota -\frac{1}{2} {z^{\prime }}^{\mathrm{t}}\zeta z) ,\ \ {\Upsilon ( z,\iota ) }^{-1}=\Upsilon ( -z,-\iota ) \label{MG: z Heisenberg group product} \end{equation} \noindent and the unpolarized matrix realization is \begin{equation} \Upsilon ( z,\iota ) =\left( -\begin{array}{lll} 1_{2n} & 0 & z \\ z^{\mathrm{t}}\zeta & 1 & 2\iota \\ 0 & 0 & 1 \end{array}\right) ,\ \ \zeta =\left( \begin{array}{ll} 0 & 1_{n} \\ -1_{n} & 0 \end{array}\right) \label{MG: H element z notation} \end{equation} \noindent The Lie algebra has general element \begin{equation} W( z,\iota ) =z^{\alpha }Z_{\alpha }+\iota I \label{MG: H alg basis} \end{equation} \noindent $\alpha , \beta ,...=1,...2n\text{}$ where the matrix form of the algebra 
is \begin{equation} W( z,\iota ) =\left( -\begin{array}{lll} 0 & 0 & z \\ z^{\mathrm{t}}\zeta & 0 & 2\iota \\ 0 & 0 & 0 \end{array}\right) , \label{MG: H algebra element z notation} \end{equation} \noindent The generators satisfy the nonzero commutation relations \begin{equation} \left[ Z_{\alpha },Z_{\beta }\right] =\zeta _{\alpha ,\beta }I. \end{equation} \subsection{The automorphism group of the Weyl-Heisenberg group} The automorphism group of a group $\mathcal{G}$ is the maximal group for which $\mathcal{G}$ is a normal subgroup. We have established in the previous section that the Weyl-Heisenberg group is a simply connected matrix group and this enables us to prove the following theorem.\ \ \ \ \ \begin{theorem} The automorphism group of the Weyl-Heisenberg group $\mathcal{H}( n) $ is\label{PH: theorem: WH Automorphism theorem} \end{theorem} \begin{equation} {\mathcal{A}ut}_{\mathcal{H}( n) }\simeq \mathcal{D}\otimes _{s}\overline{\mathcal{S}p}( 2n) \otimes _{s}\mathcal{H}( n) \label{MG: aut D semi HSp} \end{equation} \noindent $\mathcal{H}( n) $ is the Weyl-Heisenberg group, $\overline{\mathcal{S}p}( 2n) $ is the cover of the real symplectic group that leaves invariant a real skew symmetric form and $\mathcal{D}$ is the reals excluding zero viewed as a group under multiplication $\mathcal{D}\simeq (\mathbb{R}\backslash \{0\},\times )$. As the Weyl-Heisenberg group is simply connected, Theorem 7\ \ states that\ \ the automorphism group of its algebra and group are equivalent.\ \ We can therefore establish the result by determining the maximal group for which its elements\ \ $\Omega $ satisfy \begin{equation} \varsigma _{\Omega }W=\Omega W \Omega ^{-1}=W^{\prime } \label{MG: aut of alg} \end{equation} \noindent $W,W^{\prime }$ are general elements of the algebra of the Weyl-Heisenberg group (42). 
The most general transformation between a primed and unprimed basis is \begin{equation} {Z^{\prime }}_{\alpha }= a_{\alpha }^{\beta }Z_{\beta }+x_{\alpha } I,\ \ I^{\prime }=c^{\alpha }Z_{\alpha }+d I \label{MG: General aut on alg gen} \end{equation} \noindent The commutator [${Z^{\prime }}_{\alpha },I^{\prime }]=0$ requires $c^{\alpha }=0$ so that $I^{\prime }=d I$ with $d\in \mathbb{R}\backslash \{0\}$.\ \ Next, \begin{equation} \begin{array}{rl} \zeta _{\alpha ,\beta }I^{\prime }=\left[ {Z^{\prime }}_{\alpha },{Z^{\prime }}_{\beta }\right] & =\left[ a_{\alpha }^{\kappa }Z_{\kappa }+x_{\alpha },a_{\beta }^{\kappa }Z_{\kappa }+x_{\beta }\right] \\ & =\frac{1}{d}a_{\alpha }^{\delta }a_{\beta }^{\gamma }\zeta _{\delta ,\gamma }I^{\prime }. \end{array} \end{equation} \noindent This has the solution $a_{\alpha }^{\beta }=\delta \Sigma _{\alpha }^{\beta }$ and\ \ $d=\delta ^{2}$.\ \ Therefore, for $W( z,\iota ) =z^{\kappa }Z_{\kappa }+\iota I$ we have \begin{equation} W( z^{\prime },\iota ^{\prime }) =\varsigma _{\Omega }W( z,\iota ) ={z^{\prime }}^{\kappa }Z_{\kappa }+\iota ^{\prime } I \end{equation} \noindent with\cite{folland} \begin{equation} z^{\prime }=\delta \Sigma z, \iota ^{\prime }=\iota \delta ^{2}+z\cdot x \label{MG: general form of W prime} \end{equation} To determine the group with elements $\Omega $ that satisfies (46, 50), we can use the matrix realization of the algebra given in (43).\ \ As $\Omega $ is nonsingular, (46) is equivalent to\ \ \begin{equation} \Omega W( z,\iota ) =W( z^{\prime },\iota ^{\prime }) \Omega \label{MG: aut of alg W} \end{equation} \noindent where $\Omega $ is a $2n+2$ dimensional square matrix. 
We can write $\Omega $ in terms of the submatrices \begin{equation} \Omega =\left( \begin{array}{lll} a & c & z \\ f & d & j \\ g & h & \epsilon \end{array}\right) \end{equation} \noindent where $j,d,r,h,e\in \mathbb{R}$, $c,w,f,g\in \mathbb{R}^{2n}$ and $a$ is a $2n$ dimensional square submatrices and then solve (51) to obtain \begin{equation} \Omega ( \delta ,\Sigma ,z,\iota ) =\left( \begin{array}{lll} \delta \Sigma & 0 & z \\ - \delta z^{\mathrm{t}}\zeta \Sigma & \delta ^{2} & 2 \iota \\ 0 & 0 & 1 \end{array}\right) \end{equation} \noindent where $z\in \mathbb{R}^{2n}$, $\delta \in \mathcal{D}\equiv \mathbb{R}\backslash \{0\}$,\ \ $\iota \in \mathbb{R}$ and $\Sigma ^{\mathrm{t}}\zeta \Sigma =\zeta $ and so $\Sigma \in \mathcal{S}p( 2n) $.\ \ Direct matrix multiplication shows that the elements $\Omega ( \delta ,\Sigma ,w,r) $ define a group that we call ${\mathrm{aut}}_{\mathcal{H}( n) }$ with product and inverse \begin{gather} \begin{array}{rl} \Omega ( \delta ^{{\prime\prime}},\Sigma ^{{\prime\prime}},z^{{\prime\prime}},\iota ^{{\prime\prime}}) & =\Omega ( \delta ^{\prime },\Sigma ^{\prime },z^{\prime },\iota ^{\prime }) \Omega ( \delta ,\Sigma ,z,\iota ) \\ & =\Omega ( \delta ^{\prime }\delta ,\Sigma ^{\prime }\Sigma ,z^{\prime }+\delta ^{\prime }\Sigma ^{\prime }z, \iota ^{\prime }+ {\delta ^{\prime }}^{2}\iota -\frac{1}{2}\delta ^{\prime } z^{\prime t}\zeta \Sigma ^{\prime }z) , \end{array \label{MG: aut group product} \\{\Omega ( \delta ,\Sigma ,z,\iota ) }^{-1}=\Omega ( \delta ^{-1},\Sigma ^{-1},- \delta ^{-1}\Sigma ^{-1}z,-\delta ^{-2}\iota ) \label{MG: aut group inverse} \end{gather} \noindent where the identity element is $e=\{1,1_{2n},0,0\}$.\ \ From these relations, we can explicitly compute that automorphisms of the algebra given in (46) \begin{equation} W( z^{\prime },\iota ^{\prime }) =\varsigma _{\Omega ( \delta ,\Sigma ,z^{{\prime\prime}},\iota ^{{\prime\prime}}) }W( z,\iota ) \end{equation} \noindent where \begin{equation} z^{\prime }=\delta 
\Sigma z, \iota ^{\prime }=\iota \delta ^{2}-\delta {z^{{\prime\prime}}}^{\mathrm{t}}\zeta \Sigma \cdot z \label{MG: general aut of WH algebra} \end{equation} \noindent Comparing with the general expression given in (50), they are equivalent where we identify\ \ $x=\delta {z^{{\prime\prime}}}^{\mathrm{t}}\zeta \Sigma $.\ \ As $\det \Sigma \neq 0 \mathrm{and} \delta \neq 0$, there is a bijection between values of $x$ and $z^{{\prime\prime}}$. Using these relations, the next step is to show that the group ${\mathrm{aut}}_{\mathcal{H}( n) }$ has the form of a semidirect product\footnote{The definition of a semidirect product is reviewed in Definition 1 in Appendix A.} \begin{equation} {\mathrm{aut}}_{\mathcal{H}( n) }\simeq \left( \mathcal{D}\otimes \mathcal{S}p( 2n) \right) \otimes _{s}\mathcal{H}( n) \label{MG: aut D direct Sp semi H} \end{equation} First, using the group product and inverse (54-55), we can establish that $\mathcal{D}$, $\mathcal{S}p( 2n) $ and $\mathcal{H}( n) $ are subgroups of ${\mathrm{aut}}_{\mathcal{H}( n) }$ with elements \begin{equation} \ \ \begin{array}{l} \Omega ( \delta ,1_{2n},0,0) \in \mathcal{D}, \\ \Omega ( 1,\Sigma ,0,0) \simeq \Sigma \in \mathcal{S}p( 2n) \\ \Omega ( 1,1_{2n},z,\iota ) =\Upsilon ( z,\iota ) \in \mathcal{H}( n) \end{array} \end{equation} The direct product $\mathcal{D}\otimes \mathcal{S}p( 2n) $ is immediately established from the special case of the group multiplication (54) \begin{equation} \begin{array}{rl} \Omega ( \delta ,\Sigma ,0,0) & =\Omega ( \delta ,1_{2n},0,0) \Omega ( 1,\Sigma ,0,0) \\ & =\Omega ( 1,\Sigma ,0,0) \Omega ( \delta ,1_{2n},0,0) , \end{array \label{MG: aut group product} \end{equation} \noindent The semidirect product in (58) is established by first noting that\ \ \begin{equation} \left( \mathcal{D}\otimes \mathcal{S}p( 2n) \right) \cap \mathcal{H}( n) \simeq \left\{ \Omega ( \delta ,\Sigma ,0,0) \right\} \cap \left\{ \Omega ( 1,1_{2n},z,\iota ) \right\} \simeq e, \end{equation} \noindent 
Then, using the group product (54), \begin{equation} \Omega ( 1,1_{2n},z,\iota ) \Omega ( \delta ,\Sigma ,0,0) =\Omega ( \delta ,\Sigma ,z,\iota ) . \end{equation} \noindent Direct computation using (54-55) shows that the Weyl-Heisenberg subgroup $\mathcal{H}( n) $ is a normal subgroup with the automorphisms given by \begin{equation} \varsigma _{\Omega ( \delta ^{\prime },\Sigma ^{\prime },z^{\prime },\iota ^{\prime }) }\Upsilon ( z,\iota ) =\Upsilon ( \delta ^{\prime } \Sigma ^{\prime }z,{\delta ^{\prime }}^{2} \iota -\delta ^{\prime }{z^{\prime }}^{\mathrm{t}}\zeta \Sigma ^{\prime }z) \label{MG: automorphisms of aut} \end{equation} \noindent This establishes that ${\mathrm{aut}}_{\mathcal{H}( n) }$ has the semidirect product form given in (58).\ \ The right associative property of the semidirect product allows\ \ this to be written as \begin{equation} \begin{array}{rl} {\mathrm{aut}}_{\mathcal{H}( n) } & \simeq \left( \mathcal{D}\otimes \mathcal{S}p( 2n) \right) \otimes _{s}\mathcal{H}( n) \\ & \simeq \mathcal{D}\otimes _{s}\mathcal{H}\mathcal{S}p( 2n) \end{array \label{MG: aut D semi HSp} \end{equation} \noindent where $\mathcal{H}\mathcal{S}p( 2n) $ is a semidirect product of the form \begin{equation} \mathcal{H}\mathcal{S}p( 2n) \simeq \mathcal{S}p( 2n) \otimes _{s}\mathcal{H}( n) \label{MG: HSp is Sp semi H} \end{equation} This the local characterization of the automorphism group.\ \ It remains to consider any global topological properties that could result in a larger group that behaves the same locally.\ \ The group $\mathcal{D}$ may be written as the direct product $\mathcal{D}\simeq \mathbb{Z}_{2}\otimes \mathcal{D}^{+}$ where $\mathcal{D}^{+}\simeq (\mathbb{R}^{+},\times )$ is the positive reals considered as a group under multiplication.\ \ $\mathbb{Z}_{2}$ is the discrete group with two elements $\{\pm 1\}$. $\mathcal{D}^{+}$ is simply connected but $\mathcal{D}$ has two components,\ \ $\mathcal{D}/\mathcal{D}^{+}\simeq \mathbb{Z}_{2}$. 
Therefore, the connected component of the group is \begin{equation} {\mathrm{aut}}_{\mathcal{H}( n) }^{c}\simeq \mathcal{D}^{+}\otimes _{s}\mathcal{S}p( 2n) \otimes _{s}\mathcal{H}( n) \label{MG: connected aut D semi HSp} \end{equation} \noindent $\mathcal{H}( n) $ and $\mathcal{D}^{+}$ are simply connected and $\mathcal{S}p( 2n) $ is connected with fundamental group $\mathbb{Z}$.\ \ Its simply connected universal cover is denoted $\overline{\mathcal{S}p}( 2n) $ with \begin{equation} \pi :\overline{\mathcal{S}p}( 2n) \rightarrow \mathcal{S}p( 2n) :\overline{\Sigma }\rightarrow \Sigma =\pi ( \overline{\Sigma }) , \ker \pi \simeq \mathbb{Z}. \end{equation} \noindent Therefore, by the universal covering theorem, \begin{equation} {\mathcal{A}ut}_{\mathcal{H}( n) }^{c}\simeq {\overline{\mathrm{aut}}}_{\mathcal{H}( n) }^{c}\simeq \mathcal{D}^{+}\otimes _{s}\overline{\mathcal{S}p}( 2n) \otimes _{s}\mathcal{H}( n) \label{MG: connected aut D semi HSp} \end{equation} \noindent is well defined and unique with the following group product and inverse \begin{gather} \begin{array}{rl} \Omega ( \delta ^{{\prime\prime}},\Sigma ^{{\prime\prime}},z^{{\prime\prime}},\iota ^{{\prime\prime}}) & =\Omega ( \delta ^{\prime },{\overline{\Sigma }}^{\prime },z^{\prime },\iota ^{\prime }) \Omega ( \delta ,\overline{\Sigma },z,\iota ) \\ & =\Omega ( \delta ^{\prime }\delta ,{\overline{\Sigma }}^{\prime }\overline{\Sigma },z^{\prime }+\delta ^{\prime }\Sigma ^{\prime }z, \iota ^{\prime }+ {\delta ^{\prime }}^{2}\iota -\frac{1}{2}\delta ^{\prime } z^{\prime t}\zeta \Sigma ^{\prime }z) , \end{array \label{MG: cover group product} \\{\Omega ( \delta ,\Sigma ,z,\iota ) }^{-1}=\Omega ( \delta ^{-1},{\overline{\Sigma }}^{-1},- \delta ^{-1}\Sigma ^{-1}z,-\delta ^{-2}\iota ) \label{MG: cover group inverse} \end{gather} \noindent where $z\in \mathbb{R}^{2n}$, $\delta \in \mathcal{D}^{+}$,\ \ $\iota \in \mathbb{R}$ and\ \ $\overline{\Sigma }\in \overline{\mathcal{S}p}( 2n) $.\ \ Note that in these 
expressions $\Sigma =\pi ( \overline{\Sigma }) $$\text{}$. The expression for automorphisms of the Weyl-Heisenberg subgroup remains the same as given in (63). The cover of a disconnected group may be defined to be the central extension of the group with a discrete central group. The problem is, that in general, this does not give a unique cover and so this must be checked on a case by case basis. This is discussed in Appendix C where we show that \begin{equation} {\mathcal{A}ut}_{\mathcal{H}( n) }\simeq {\mathrm{aut}}_{\mathcal{H}( n) }\simeq \mathcal{D}\otimes _{s}\overline{\mathcal{S}p}( 2n) \otimes _{s}\mathcal{H}( n) \label{MG: connected aut D semi HSp} \end{equation} \noindent is unique and well defined. It has the group product and inverse given in (69-70) where now $\delta \in \mathcal{D}$.\ \ Again, the automorphisms of the Weyl-Heisenberg subgroup remains the same as given in (63). The group ${\mathcal{A}ut}_{\mathcal{H}( n) }$ is the largest group that the topological properties admit that is homomorphic to ${\mathrm{aut}}_{\mathcal{H}( n) }$ and therefore we completed the proof of Theorem 1.\ \ \subsection{Subgroup of automorphism group with invariant center} The action of the automorphism group on the algebra is given in\ \ (57).\ \ Invariance of the central element requires $\delta =1$ which is the unit element for $\mathcal{D}^{+}$.\ \ Thus the maximal symmetry group that leaves the center of the Weyl-Heisenberg algebra invariant is $\mathcal{H}\overline{\mathcal{S}p}( 2n) $. As given in (22), the central extension of $\mathcal{H}\mathcal{S}p( 2n) $ is equivalent to the central extension of the inhomogeneous symplectic group familiar from classical mechanics.\ \ \ \begin{equation} \mathcal{H}\widecheck{\mathcal{S}p}( 2n) \simeq \mathcal{H}\overline{\mathcal{S}p}( 2n) \simeq \mathcal{I}\widecheck{\mathcal{S}p}( 2n) \end{equation} This is a very remarkable fact.\ \ The central extension of the $\mathcal{A}( 2n) $ is generally $n( 2n-1) $ dimensional. 
However, because it is a subgroup of $\mathcal{I}\widecheck{\mathcal{S}p}( 2n) $, the Lie algebra relations with the symplectic group constrain the central extension of the abelian normal subgroup to be precisely the one-dimensional extension that is the Weyl-Heisenberg group.\ \ The group product and inverse are given by (69-70) with $\delta =1$. \subsubsection{Symplectic group factorization } The defining condition for the real symplectic group $\mathcal{S}p( 2n) $ is \begin{equation} \Sigma ^{\mathrm{t}}\zeta \Sigma =\zeta \label{MG: symplectic condition} \end{equation} \noindent where $\zeta $ is the symplectic matrix defined in (41). Matrix realizations of elements of the real symplectic group may be written as \begin{equation} \Sigma =\left( \begin{array}{ll} \Sigma _{1} & \Sigma _{2} \\ \Sigma _{3} & \Sigma _{4} \end{array}\right) \end{equation} \noindent where $\Sigma _{a}$ , $a=1,..,4$ are $n\times n$ submatrices.\ \ The symplectic condition (73) immediately results in the relations \begin{equation} \begin{array}{l} \Sigma _{1}^{\mathrm{t}}\Sigma _{4}-\Sigma _{3}^{\mathrm{t}}\Sigma _{2}=1_{n}, \\ \Sigma _{1}^{\mathrm{t}} \Sigma _{3}={\left( \Sigma _{1}^{\mathrm{t}} \Sigma _{3}\right) }^{\mathrm{t}}, \\ \Sigma _{2}^{\mathrm{t}}\Sigma _{4}={\left( \Sigma _{2}^{\mathrm{t}} \Sigma _{4}\right) }^{\mathrm{t}}. \end{array} \label{MG: symplectic block matrix conditions} \end{equation} A matrix realization of a Lie group is a coordinate system. As $\operatorname{Det}( \Sigma ) =1$, it follows that the determinant of at least one of the $\Sigma _{a}$, $a=1,...,4$,\ \ must be nonzero.\ \ These correspond to different coordinate patches for the manifold underlying the symplectic group.
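Before examining the coordinate patches, the defining condition (73) and the block identities (75) can be checked numerically on a sample matrix. The following pure-Python sketch is not from the paper; the sample symplectic matrix (a symplectic rotation times an upper shear with symmetric $\beta $) is an illustrative choice for $n=2$.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def msub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def close(A, B, tol=1e-12):
    return all(abs(x - y) < tol for ra, rb in zip(A, B) for x, y in zip(ra, rb))

n = 2
I_n = [[1.0, 0.0], [0.0, 1.0]]
O_n = [[0.0, 0.0], [0.0, 0.0]]

def block(A, B, C, D):
    """Assemble the 2n x 2n matrix [[A, B], [C, D]] from n x n blocks."""
    return [A[i] + B[i] for i in range(n)] + [C[i] + D[i] for i in range(n)]

zeta = block(O_n, I_n, [[-1.0, 0.0], [0.0, -1.0]], O_n)

# Sample symplectic matrix: a symplectic rotation times an upper shear.
c, s = math.cos(0.7), math.sin(0.7)
R = block([[c, 0.0], [0.0, c]], [[s, 0.0], [0.0, s]],
          [[-s, 0.0], [0.0, -s]], [[c, 0.0], [0.0, c]])
S = block(I_n, [[1.0, 0.5], [0.5, 2.0]], O_n, I_n)   # beta symmetric
Sigma = matmul(R, S)

# Defining condition (73): Sigma^t zeta Sigma = zeta.
assert close(matmul(transpose(Sigma), matmul(zeta, Sigma)), zeta)

# Block identities (75).
S1 = [row[:n] for row in Sigma[:n]]; S2 = [row[n:] for row in Sigma[:n]]
S3 = [row[:n] for row in Sigma[n:]]; S4 = [row[n:] for row in Sigma[n:]]
assert close(msub(matmul(transpose(S1), S4), matmul(transpose(S3), S2)), I_n)
assert close(matmul(transpose(S1), S3), transpose(matmul(transpose(S1), S3)))
assert close(matmul(transpose(S2), S4), transpose(matmul(transpose(S2), S4)))
print("conditions (73) and (75) verified")
```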
Assume $\operatorname{Det}( \Sigma _{1}) \neq 0$.\ \ Then \cite{degosson}, \begin{equation} \Sigma ( \alpha ,\beta ,\gamma ) =\left( \begin{array}{ll} 1_{n} & 0 \\ \gamma & 1_{n} \end{array}\right) \left( \begin{array}{ll} \alpha ^{-1} & 0 \\ 0 & \alpha ^{\mathrm{t}} \end{array}\right) \left( \begin{array}{ll} 1_{n} & \beta \\ 0 & 1_{n} \end{array}\right) \label{MG: Sigma matrix factors f} \end{equation} \noindent where we define \begin{equation} \alpha ={\left( \Sigma _{1}\right) }^{-1}, \beta ={\left( \Sigma _{1}\right) }^{-1}\Sigma _{2}, \gamma ={\Sigma _{3}( \Sigma _{1}) }^{-1}. \end{equation} \noindent It follows from (75) that $\beta =\beta ^{\mathrm{t}}$ and $\gamma =\gamma ^{\mathrm{t}}$. The matrix realizations of elements of the symplectic group factor as \begin{equation} \Sigma ( \alpha ,\beta ,\gamma ) =\Sigma ^{-}( \gamma ) \Sigma \mbox{}^{\circ}( \alpha ) \Sigma ^{+}( \beta ) \label{MG: Sigma matrix factors} \end{equation} \noindent where \begin{equation} \begin{array}{l} \Sigma \mbox{}^{\circ}( \alpha ) \equiv \Sigma ( \alpha ,0,0) \in \mathcal{U}( n) , \\ \Sigma ^{+}( \beta ) \equiv \Sigma ( 1_{n},\beta ,0) \in \mathcal{A}( m) , \\ \Sigma ^{-}( \gamma ) \equiv \Sigma ( 1_{n},0,\gamma ) \in \mathcal{A}( m) . \end{array} \end{equation} \noindent and $m=\frac{n( n+1) }{2}$.\ \ Furthermore, note that \begin{equation} \zeta \Sigma ^{-}( \gamma ) \zeta ^{-1}= \Sigma ^{+}( -\gamma ) . \label{MG: sigma gamma zeta tx} \end{equation} A similar argument applies if we instead assume $\operatorname{Det}( \Sigma _{4}) \neq 0$. Both of these coordinate patches contain the identity, $1_{2n}$, but neither contains the element $\zeta $.\ \ This requires us to consider the cases in which either $\Sigma _{2}$ or $\Sigma _{3}$ is nonsingular.\ \ In this case, define \begin{equation} \widetilde{\Sigma }= \Sigma \zeta ^{-1}=\left( \begin{array}{ll} \Sigma _{2} & -\Sigma _{1} \\ \Sigma _{4} & -\Sigma _{3} \end{array}\right) .
\end{equation} \noindent The $\widetilde{\Sigma }$ also satisfy the symplectic condition as\ \ \begin{equation} \zeta =\Sigma ^{\mathrm{t}}\zeta \Sigma =\zeta ^{\mathrm{t}}{\widetilde{\Sigma }}^{\mathrm{t}}\zeta \widetilde{\Sigma } \zeta \Rightarrow {\widetilde{\Sigma }}^{\mathrm{t}}\zeta \widetilde{\Sigma }=\zeta \label{MG: zeta factorization} \end{equation} \noindent This symplectic condition results in the identities \begin{equation} \begin{array}{l} \Sigma _{4}^{\mathrm{t}}\Sigma _{1}-\Sigma _{2}^{\mathrm{t}}\Sigma _{3}=1_{n}, \\ \Sigma _{2}^{\mathrm{t}} \Sigma _{4}={\left( \Sigma _{2}^{\mathrm{t}} \Sigma _{4}\right) }^{\mathrm{t}}, \\ \Sigma _{1}^{\mathrm{t}}\Sigma _{3}={\left( \Sigma _{1}^{\mathrm{t}} \Sigma _{3}\right) }^{\mathrm{t}}. \end{array} \end{equation} We can now assume $\operatorname{Det}( \Sigma _{2}) \neq 0$ and the analysis proceeds as before with \begin{equation} \alpha ={\left( \Sigma _{2}\right) }^{-1}, \beta =-{\left( \Sigma _{2}\right) }^{-1} \Sigma _{1}, \gamma =\Sigma _{4} {\left( \Sigma _{2}\right) }^{-1}. \end{equation} \noindent In this case the factorization must include the symplectic matrix $\zeta $ from (82) \begin{equation} \Sigma ( \alpha ,\beta ,\gamma ) =\Sigma ^{-}( \gamma ) \Sigma \mbox{}^{\circ}( \alpha ) \Sigma ^{+}( \beta ) \zeta \label{MG: Z symplectic factorization} \end{equation} \noindent Finally, a similar argument applies for the coordinate patch\ \ $\operatorname{Det}( \Sigma _{3}) \neq 0$. Both of these coordinate patches contain the element $\zeta $ but do not contain the identity. The expressions (78) and (85)\ \ can be combined into a single expression \begin{equation} \Sigma ^{\epsilon }( \alpha ,\beta ,\gamma ) =\Sigma ^{-}( \gamma ) \Sigma \mbox{}^{\circ}( \alpha ) \Sigma ^{+}( \beta ) \zeta ^{ \epsilon } \label{MG: symplectic factorization} \end{equation} \noindent where $\epsilon \in \{0,1\}$.
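The combined factorization can be checked numerically. The following sketch (not from the paper; the sample $\alpha $, $\beta $, $\gamma $ are illustrative choices for $n=2$, with $\alpha $ invertible and $\beta ,\gamma $ symmetric) verifies that both patches of (86) yield symplectic matrices and that the conjugation identity (80) holds.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def close(A, B, tol=1e-12):
    return all(abs(x - y) < tol for ra, rb in zip(A, B) for x, y in zip(ra, rb))

n = 2
I_n = [[1.0, 0.0], [0.0, 1.0]]
O_n = [[0.0, 0.0], [0.0, 0.0]]

def block(A, B, C, D):
    return [A[i] + B[i] for i in range(n)] + [C[i] + D[i] for i in range(n)]

zeta = block(O_n, I_n, [[-1.0, 0.0], [0.0, -1.0]], O_n)
zeta_inv = [[-v for v in row] for row in zeta]        # zeta^{-1} = -zeta

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def Sigma_plus(beta):     # upper shear, beta symmetric
    return block(I_n, beta, O_n, I_n)

def Sigma_minus(gamma):   # lower shear, gamma symmetric
    return block(I_n, O_n, gamma, I_n)

def Sigma_o(alpha):       # block diag(alpha^{-1}, alpha^t), alpha invertible
    return block(inv2(alpha), O_n, O_n, transpose(alpha))

alpha = [[2.0, 1.0], [0.0, 1.0]]
beta  = [[1.0, 0.5], [0.5, -1.0]]
gamma = [[0.3, 0.2], [0.2, 0.7]]

for eps in (0, 1):
    Sigma = matmul(Sigma_minus(gamma), matmul(Sigma_o(alpha), Sigma_plus(beta)))
    if eps == 1:
        Sigma = matmul(Sigma, zeta)
    # defining condition (73) holds for both patches of (86)
    assert close(matmul(transpose(Sigma), matmul(zeta, Sigma)), zeta)

# conjugation identity (80): zeta Sigma^-(gamma) zeta^{-1} = Sigma^+(-gamma)
lhs = matmul(zeta, matmul(Sigma_minus(gamma), zeta_inv))
rhs = Sigma_plus([[-v for v in row] for row in gamma])
assert close(lhs, rhs)
print("factorization (86) and identity (80) verified")
```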
\subsubsection{Lie Algebra} The Lie algebra of the symmetry group $\mathcal{H}\overline{\mathcal{S}p}( 2n) $ is the same as the Lie algebra of $\mathcal{H}\mathcal{S}p( 2n) $.\ \ It may be directly computed from its matrix realization. It is convenient to use a basis for the algebra of the symplectic group corresponding to the factorized form (78). Let $A_{i,j}$ be the generators of the unitary subgroup with elements $\Sigma ( \alpha ) \in \mathcal{U}( n) $, $B_{i,j}$ the generators of the abelian subgroup with elements\ \ $\Sigma ( \beta ) \in \mathcal{A}( m) $ and $C_{i,j}$ the generators of the abelian subgroup with elements $\Sigma ( \gamma ) \in \mathcal{A}( m) $.\ \ \ The abelian generators are symmetric, $B_{i,j}=B_{j,i}$ and $C_{i,j}=C_{j,i}$.\ \ A general element is written as \begin{equation} Z=\alpha ^{i,j}A_{i,j}+\beta ^{i,j}B_{i,j}+\gamma ^{i,j}C_{i,j}+p^{i}Q_{i}+q^{i}P_{i}+\iota I. \end{equation} Straightforward computation shows that these generators of $\mathcal{S}p( 2n) $ satisfy the Lie algebra \begin{equation} \begin{array}{l} \left[ A_{i,j},A_{k,l}\right] =\delta _{i,l}A_{j,k}-\delta _{j,k}A_{i,l}, \\ \left[ A_{i,j},B_{k,l}\right] =\delta _{j,k}B_{i,l}+\delta _{j,l}B_{i,k}, \\ \left[ A_{i,j},C_{k,l}\right] =-\delta _{i,k}C_{j,l}-\delta _{i,l}C_{k,j}, \\ \left[ B_{i,j},C_{k,l}\right] =\delta _{i,k}A_{j,l} +\delta _{i,l}A_{j,k} +\delta _{j,k}A_{i,l} +\delta _{j,l}A_{i,k}. \end{array} \end{equation} The nonzero commutators of the algebra of $\mathcal{H}\mathcal{S}p( 2n) $ are the above relations for the symplectic generators together with the following relations involving the Weyl-Heisenberg generators \begin{equation} \begin{array}{ll} \left[ A_{i,j},Q_{k}\right] = \delta _{j,k}Q_{i}, & \left[ C_{i,j},Q_{k}\right] =\delta _{j,k}P_{i}+ \delta _{i,k}P_{j}, \\ \left[ A_{i,j},P_{k}\right] = -\delta _{i,k}P_{j}, & \left[ B_{i,j},P_{k}\right] = \delta _{j,k}Q_{i}+\delta _{i,k}Q_{j}, \\ \left[ P_{i},Q_{j}\right] =\delta _{i,j}I.
& \end{array} \end{equation} The symplectic generators may be realized in the enveloping algebra up to a central element \cite{Low13}.\ \ This will be important when we discuss the representations in Section 3.2. \begin{equation} {\widetilde{A}}_{i,j}= Q_{i}P_{j},\ \ {\widetilde{B}}_{i,j}=Q_{i}Q_{j}, {\widetilde{C}}_{i,j}= P_{i}P_{j}. \end{equation} Clearly ${\widetilde{B}}_{i,j}={\widetilde{B}}_{j,i}$ and ${\widetilde{C}}_{i,j}={\widetilde{C}}_{j,i}$.\ \ Then, using the Weyl-Heisenberg commutation relations (5), this defines the commutation relations, up to the central element, $I$, \begin{equation} \begin{array}{l} \left[ {\widetilde{A}}_{i,j},{\widetilde{A}}_{k,l}\right] =I( \delta _{i,l}{\widetilde{A}}_{j,k}-\delta _{j,k}{\widetilde{A}}_{i,l}) , \\ \left[ {\widetilde{A}}_{i,j},{\widetilde{B}}_{k,l}\right] =I( \delta _{j,k}{\widetilde{B}}_{i,l}+\delta _{j,l}{\widetilde{B}}_{i,k}) , \\ \left[ {\widetilde{A}}_{i,j},{\widetilde{C}}_{k,l}\right] =-I( \delta _{i,k}{\widetilde{C}}_{j,l}+\delta _{i,l}{\widetilde{C}}_{k,j}) , \\ \left[ {\widetilde{B}}_{i,j},{\widetilde{C}}_{k,l}\right] =I( \delta _{i,k}{\widetilde{A}}_{j,l} +\delta _{i,l}{\widetilde{A}}_{j,k} +\delta _{j,k}{\widetilde{A}}_{i,l} +\delta _{j,l}{\widetilde{A}}_{i,k} ) . \end{array} \label{MG: symplectic in enveloping algebra of H} \end{equation} \section{Quantum symmetry: Projective representations} The projective representations of the maximal symmetry group $\mathcal{I}\mathcal{S}p( 2n) $ are equivalent to the ordinary unitary representations of its central extension $\mathcal{H}\overline{\mathcal{S}p}( 2n) $. These unitary irreducible representations may be determined using the Mackey theorems for semidirect product groups. The first step in applying the Mackey theorem for semidirect products is to determine the unitary irreducible representations of the Weyl-Heisenberg normal subgroup. While these are well known, the method of constructing them using the Mackey theorems applied to the semidirect product of two abelian groups (2) does not appear to be as well known \cite{Major}.
We review this briefly in order to introduce the Mackey theorems and also to show how the unitary irreducible representations of the symmetry group can be constructed completely from first principles. \subsection{Unitary irreducible representations of the Weyl-Heisenberg group}\label{MG: Section: WH UIR} The Mackey theorem for semidirect products with an abelian normal subgroup is given in Theorem 10 in\ \ Appendix A \cite{mackey}. We choose the normal subgroup (27) with elements $\Upsilon ( p,0,\iota ) \in \mathcal{A}( n+1) $. The unitary irreducible representations $\xi $ of the abelian normal subgroup are the phases acting on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }=\mathbb{C}$ \begin{equation} \xi ( \Upsilon ( p,0,\iota ) ) \left| \phi \right\rangle =e^{i( \iota \widehat{I} +p^{i}{\widehat{Q}}_{i}) }\left| \phi \right\rangle =e^{i( \iota \lambda +p\cdot \alpha ) }\left| \phi \right\rangle ,\ \ \left| \phi \right\rangle \in \mathbb{C} \label{MG: abelian uir} \end{equation} \noindent The hermitian representation of the algebra has eigenvalues given by \begin{equation} {\widehat{Q}}_{i}\left| \phi \right\rangle =\xi ^{\prime }( Q_{i}) \left| \phi \right\rangle =\alpha _{i}\left| \phi \right\rangle ,\ \ \ \widehat{I}\left| \phi \right\rangle =\xi ^{\prime }( I) \left| \phi \right\rangle =\lambda \left| \phi \right\rangle ,\ \ \end{equation} \noindent where $\alpha \in \mathbb{R}^{n}$ and $\lambda \in \mathbb{R}$.\ \ The characters\ \ $\xi _{\alpha ,\lambda }$ are parameterized by the eigenvalues $\alpha ,\lambda $ and the equivalence classes are elements of the unitary dual, $[\xi _{\alpha ,\lambda }]\in {\text{\boldmath $U$}}_{\mathcal{A}( n+1) }\simeq \mathbb{R}^{n+1}$.\ \ Each equivalence class has the single element $[\xi _{\alpha ,\lambda }]=\xi _{\alpha ,\lambda }$.
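These characters can be checked numerically. In the sketch below (not from the paper) the Weyl-Heisenberg group product is assumed in the standard symmetric form $\Upsilon ( p^{\prime },q^{\prime },\iota ^{\prime }) \Upsilon ( p,q,\iota ) =\Upsilon ( p+p^{\prime },q+q^{\prime },\iota +\iota ^{\prime }+\frac{1}{2}( p^{\prime }\cdot q-q^{\prime }\cdot p) ) $, which is consistent with the decomposition (99); conjugating a normal-subgroup element by a homogeneous element $\Upsilon ( 0,q,0) $ then shifts the character label $\alpha $ to $\alpha -\lambda q$.

```python
import cmath
import random

def product(g1, g2):
    """Assumed Weyl-Heisenberg group product (consistent with (99))."""
    (p1, q1, i1), (p2, q2, i2) = g1, g2
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return ([a + b for a, b in zip(p1, p2)],
            [a + b for a, b in zip(q1, q2)],
            i1 + i2 + 0.5 * (dot(p1, q2) - dot(q1, p2)))

def inverse(g):
    p, q, i = g
    return ([-a for a in p], [-a for a in q], -i)

def xi(alpha, lam, g):
    """Character (92) on the abelian normal subgroup, g = (p, 0, iota)."""
    p, _, iota = g
    return cmath.exp(1j * (iota * lam + sum(a * b for a, b in zip(p, alpha))))

random.seed(1)
n, lam = 2, 1.7
alpha = [random.uniform(-1, 1) for _ in range(n)]
q = [random.uniform(-1, 1) for _ in range(n)]
g = ([0.4, -0.9], [0.0, 0.0], 0.3)        # element of the normal subgroup
h = ([0.0, 0.0], q, 0.0)                  # homogeneous element Upsilon(0,q,0)

conj = product(h, product(g, inverse(h)))  # varsigma_h(g) = h g h^{-1}
lhs = xi(alpha, lam, conj)
rhs = xi([a - lam * b for a, b in zip(alpha, q)], lam, g)
assert abs(lhs - rhs) < 1e-12
print("dual action alpha -> alpha - lambda*q verified")
```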
The action of the elements $\Upsilon ( 0,q,0) \in \mathcal{A}( n) $ of the\ \ homogeneous group on these representations is given by the dual automorphisms \begin{equation} \begin{array}{l} \left( {\widehat{\varsigma }}_{\Upsilon ( 0,q,0) }\xi _{\alpha ,\lambda }\right) \left( \Upsilon ( p,0,\iota ) \right) \left| \phi \right\rangle =\xi _{\alpha ,\lambda }( \varsigma _{\Upsilon ( 0,q,0) }\Upsilon ( p,0,\iota ) ) \left| \phi \right\rangle \\ =\xi _{\alpha -\lambda q,\lambda }( \Upsilon ( p,0,\iota ) ) \left| \phi \right\rangle . \end{array} \end{equation} \noindent In simplifying this expression, we have used (30) and (92). The little group is the set of $\Upsilon ( 0,q,0) \in \mathcal{K}\mbox{}^{\circ}$ that satisfy the fixed point equation (134), \begin{equation} {\widehat{\varsigma }}_{\Upsilon ( 0,q,0) }\xi _{ \alpha ,\lambda }=\xi _{\alpha -\lambda q,\lambda } =\xi _{\alpha ,\lambda }. \end{equation} \noindent The solution of the fixed point condition requires that $\alpha -\lambda q\equiv \alpha $. The $\lambda =0$ solution for which the little group is $\mathcal{A}( n) $ is the degenerate case corresponding to the homomorphism $\mathcal{H}( n) \rightarrow \mathcal{A}( 2n) $ with kernel $\mathcal{A}( 1) $.\ \ This is just the abelian group that is not considered further here.\ \ \ The faithful representation with $\lambda \neq 0$ requires $q=0$, and therefore has the trivial little group $\mathcal{K}\mbox{}^{\circ}\simeq \text{\boldmath $e$}\simeq \{\Upsilon ( 0,0,0) \}$.\ \ The stabilizer is $\mathcal{G}\mbox{}^{\circ}\simeq \mathcal{A}( n+1) $.\ \ The orbits are \begin{equation} \mathbb{O}_{\lambda }=\left\{ {\widehat{\varsigma }}_{\Upsilon ( 0,q,0) }[ \xi _{\alpha ,\lambda }] |q\in \mathbb{R}^{n}\right\} =\left\{ \xi _{q,\lambda }|q\in \mathbb{R}^{n}\right\} ,\ \ \lambda \in \mathbb{R}\backslash \left\{ 0\right\} .
\end{equation} All representations in the orbit are equivalent for the determination of the semidirect product unitary irreducible representations. A convenient representative of the equivalence class is $\xi _{0,\lambda }$.\ \ The unitary representations $\sigma $ of the trivial little group are trivial and therefore the representations of the stabilizer are just $\varrho \mbox{}^{\circ}=\xi _{0,\lambda }$.\ \ The Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\sigma }$ is\ \ also trivial and therefore the Hilbert space of the stabilizer is ${\text{\boldmath $\mathrm{H}$}}^{\varrho \mbox{}^{\circ}}={\text{\boldmath $\mathrm{H}$}}^{\sigma }\otimes {\text{\boldmath $\mathrm{H}$}}^{\xi }\simeq \mathbb{C}.$ \subsubsection{Mackey induction} \noindent The final step is to apply the Mackey induction theorem to determine the faithful unitary irreducible representations of the full $\mathcal{H}( n) $ group. The induction requires the definition of the symmetric space \begin{equation} \mathbb{K}=\mathcal{G}/\mathcal{G}\mbox{}^{\circ}=\mathcal{H}( n) /\mathcal{A}( n+1) \simeq \mathcal{A}( n) \simeq \mathbb{R}^{n}, \end{equation} \noindent with the natural projection $\pi $ and a section $\Theta $\ \ \ \begin{equation} \begin{array}{l} \pi :\mathcal{H}( n) \rightarrow \mathbb{K}:\Upsilon ( p,q,\iota ) \mapsto {\mathrm{k}}_{q}, \\ \Theta :\mathbb{K}\rightarrow \mathcal{H}( n) :{\mathrm{k}}_{q}\mapsto \Theta ( {\mathrm{k}}_{q}) =\Upsilon ( 0,q,0) . \end{array} \end{equation} \noindent These satisfy $\pi ( \Theta ( {\mathrm{k}}_{q}) ) ={\mathrm{k}}_{q}$ and so $\pi \circ \Theta ={\mathrm{Id}}_{\mathbb{K}}$ as required. Using (2), an element of the Weyl-Heisenberg group\ \ $\mathcal{H}( n) $ can be written as, \begin{equation} \Upsilon ( p,q,\iota ) =\Upsilon ( 0,q,0) \Upsilon ( p,0,\iota +\frac{1}{2}p\cdot q) .
\end{equation} \noindent The cosets are therefore defined by \begin{equation} \begin{array}{ll} {\mathrm{k}}_{q} & =\left\{ \Upsilon ( 0,q,0) \Upsilon ( p,0,\iota +\frac{1}{2}p\cdot q) |p\in \mathbb{R}^{n},\iota \in \mathbb{R}\right\} \\ & =\left\{ \Upsilon ( 0, q, 0) \mathcal{A}( n + 1) \right\} . \end{array} \end{equation} \noindent Note that \begin{equation} \ \ \Upsilon ( p,q,\iota ) {\mathrm{k}}_{x} ={\mathrm{k}}_{x+q},\ \ \ \ x\in \mathbb{R}^{n} \label{PH: action of group on coset} \end{equation} \noindent The Mackey induced representation theorem can now be applied straightforwardly.\ \ \ First, the Hilbert space is \begin{equation} {\text{\boldmath $\mathrm{H}$}}^{\varrho }={\text{\boldmath $L$}}^{2}( \mathbb{K},{\text{\boldmath $\mathrm{H}$}}^{\varrho \mbox{}^{\circ}}) \simeq {\text{\boldmath $L$}}^{2}( \mathbb{R}^{n},\mathbb{C}) . \end{equation} \noindent Next the Mackey induction Theorem 8 yields \begin{equation} \psi ^{\prime }( {\mathrm{k}}_{x}) =\left( \varrho ( \Upsilon ( p,q,\iota ) ) \psi \right) \left( {\Upsilon ( p,q,\iota ) }^{-1}{\mathrm{k}}_{x}\right) =\varrho \mbox{}^{\circ}( \Upsilon ( p \mbox{}^{\circ},0,\iota \mbox{}^{\circ}) ) \psi ( {\mathrm{k}}_{x-q}) \end{equation} \noindent Using the Weyl-Heisenberg group product (2), \begin{equation} \begin{array}{ll} \Upsilon ( \mathit{p\mbox{}^{\circ}},\mathit{q\mbox{}^{\circ}},\iota \mbox{}^{\circ}) & ={\Theta ( {\mathrm{k}}_{x}) }^{-1}\Upsilon ( p,q,\iota ) \Theta ( {\Upsilon ( p,q,\iota ) }^{-1}{\mathrm{k}}_{x}) \\ & =\Upsilon ( 0, -x, 0) \Upsilon ( p, q, \iota ) \Upsilon ( 0, x - q, 0) \\ & =\Upsilon ( p,0,\iota +p\cdot \left( x-\frac{1}{2}q\right) ) .
\end{array} \end{equation} \noindent We lighten notation using the isomorphism ${\mathrm{k}}_{x}\mapsto x$.\ \ The induced representation theorem then yields \begin{equation} \begin{array}{ll} \psi ^{\prime }( x) & =\xi _{0,\lambda }( \Upsilon ( p,0,\iota +x\cdot p-\frac{1}{2}p\cdot q) ) \psi ( x-q) \\ & =e^{i \lambda ( \iota +x\cdot p-\frac{1}{2}p\cdot q) }\psi ( x-q) . \end{array} \label{PH: WH UIR} \end{equation} \noindent Using a Taylor expansion, we can write \begin{equation} \psi ( x-q) = e^{-q^{i} \frac{\partial }{\partial {x}^{i}}}\psi ( x) . \end{equation} \noindent The Baker-Campbell-Hausdorff formula [20] enables us to combine the exponentials \begin{equation} \psi ^{\prime }( x) =e^{i \left( \lambda \iota +\lambda p^{i} x_{i}+ q^{i}i\frac{\partial }{\partial x^{i}}\right) }\psi ( x) =e^{i \left( \iota \widehat{I}+p^{i}{\widehat{Q}}_{i} +q^{i}{\widehat{P}}_{i} \right) }\psi ( x) \label{PH: WH general nondegenerate representations} \end{equation} The representation of the algebra is therefore \begin{equation} \widehat{I}\psi ( x) =\lambda \psi ( x) ,\ \ \ {\widehat{Q}}_{i}\psi ( x) =\lambda x_{i}\psi ( x) ,\ \ \ {\widehat{P}}_{i}\psi ( x) =i \frac{\partial }{\partial x^{i}}\psi ( x) \label{PH: WH general algebra} \end{equation} \noindent that satisfies the Heisenberg commutation relations (1). This analysis can also be carried out choosing $\Upsilon ( 0,q,\iota ) \in \mathcal{A}( n+1) $ to be the elements of the normal subgroup and this yields the representation with ${\widehat{P}}_{i}$ diagonal.\ \ \subsection{Unitary irreducible representations of $\mathcal{H}\overline{\mathcal{S}p}( 2n) $}\label{MG: Section uir hsp} We consider next the unitary irreducible representations of the $\mathcal{H}\overline{\mathcal{S}p}( 2n) $ group\ \ \ \ \begin{equation} \mathcal{H}\overline{\mathcal{S}p}( 2n) \simeq \overline{\mathcal{S}p}( 2n) \otimes _{s}\mathcal{H}( n) .
\end{equation} As $\mathcal{H}\overline{\mathcal{S}p}( 2n) $\ \ is the central extension of $\mathcal{I}\mathcal{S}p( 2n) $, the projective representations of $\mathcal{I}\mathcal{S}p( 2n) $ are equivalent to the ordinary unitary representations of $\mathcal{H}\overline{\mathcal{S}p}( 2n) $. The unitary irreducible representations of $\mathcal{H}\overline{\mathcal{S}p}( 2n) $ may be determined using Mackey Theorem 9 for the nonabelian normal subgroup case.\ \ \ The faithful unitary representations of the Weyl-Heisenberg group are given in the previous section (105). The next step in applying the Mackey theorem is to determine the $\rho $ representation of the stabilizer $\mathcal{G}\mbox{}^{\circ}\subset \mathcal{H}\overline{\mathcal{S}p}( 2n) $. \subsubsection{Stabilizer and $\rho $ representation} The representation $\rho $ of the stabilizer $\mathcal{G}\mbox{}^{\circ}$ acts on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }$ and therefore the hermitian representations $\rho ^{\prime }$ of the algebra of the stabilizer must be realized in the enveloping algebra of the Weyl-Heisenberg group. The $\rho $ representation restricted to the Weyl-Heisenberg group is given by $\rho |_{\mathcal{H}( n) }=\xi $ where $\xi $ are the unitary irreducible representations of the Weyl-Heisenberg group.\ \ The faithful representations $\xi $ are given in (105). The unitary representation $\rho $ acts on ${\text{\boldmath $\mathrm{H}$}}^{\xi }\simeq {\text{\boldmath $L$}}^{2}( \mathbb{R}^{n},\mathbb{C}) $ such that \begin{equation} \rho ( \Omega \mbox{}^{\circ}) \xi ( \Upsilon ( z,\iota ) ) {\rho ( \Omega \mbox{}^{\circ}) }^{-1}= \xi ( \varsigma _{\Omega \mbox{}^{\circ}}\Upsilon ( z,\iota ) ) ,\ \ \Omega \mbox{}^{\circ}\in \mathcal{G}\mbox{}^{\circ}.
\end{equation} \noindent The representation $\rho$ factors into\ \ \begin{equation} \rho ( \Omega \mbox{}^{\circ}( \delta ,\Sigma ,w,r) ) = \xi ( \Upsilon ( w,r) ) \rho ( \Sigma ) , \end{equation} \noindent where again for notational brevity\ \ $\Sigma \equiv \Omega ( 1,\Sigma ,0,0) $. The corresponding automorphisms factor as \begin{equation} \begin{array}{l} \xi ( \Upsilon ( w,r) ) \xi ( \Upsilon ( z,\iota ) ) {\xi ( \Upsilon ( w,r) ) }^{-1}= \xi ( \varsigma _{\Upsilon ( w,r) }\Upsilon ( z,\iota ) ) , \\ \rho ( \Sigma ) \xi ( \Upsilon ( z,\iota ) ) {\rho ( \Sigma ) }^{-1}= \xi ( \varsigma _{\Omega ( \Sigma ) }\Upsilon ( z,\iota ) ) =\xi ( \Upsilon ( \pi ( \Sigma ) z,\iota ) ) . \end{array} \end{equation} \noindent where $\Sigma \in \overline{\mathcal{S}p}( 2n) $ and $\pi :\overline{\mathcal{S}p}( 2n) \rightarrow \mathcal{S}p( 2n) $. The inner automorphisms are already characterized as we know the unitary irreducible representations $\xi $. Consider next the representation $\rho ( \Sigma ) $ of the symplectic group $\overline{\mathcal{S}p}( 2n) $.\ \ The hermitian representation of the symplectic generators is \begin{equation} \begin{array}{l} {\widehat{A}}_{i,j}=\rho ^{\prime }( A_{i,j} ) =\lambda {\widehat{Q}}_{i}{\widehat{P}}_{j}, \\ {\widehat{B}}_{i,j}=\rho ^{\prime }( B_{i,j}) =\lambda {\widehat{Q}}_{i}{\widehat{Q}}_{j}, \\ {\widehat{C}}_{i,j}=\rho ^{\prime }( C_{i,j}) =\lambda {\widehat{P}}_{i}{\widehat{P}}_{j}. \end{array} \end{equation} Clearly ${\widehat{B}}_{i,j}={\widehat{B}}_{j,i}$ and ${\widehat{C}}_{i,j}={\widehat{C}}_{j,i}$.\ \ Then, using the Heisenberg commutation relations (1), this defines a hermitian realization of the Lie algebra of the automorphism group acting on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }\simeq {\text{\boldmath $L$}}^{2}( \mathbb{R}^{n},\mathbb{C}) $.
\begin{gather} \begin{array}{l} \left[ {\widehat{A}}_{i,j},{\widehat{A}}_{k,l}\right] =i( \delta _{i,l}{\widehat{A}}_{j,k}-\delta _{j,k}{\widehat{A}}_{i,l}) , \\ \left[ {\widehat{A}}_{i,j},{\widehat{B}}_{k,l}\right] =i( \delta _{j,k}{\widehat{B}}_{i,l}+\delta _{j,l}{\widehat{B}}_{i,k}) , \\ \left[ {\widehat{A}}_{i,j},{\widehat{C}}_{k,l}\right] =-i( \delta _{i,k}{\widehat{C}}_{j,l}+\delta _{i,l}{\widehat{C}}_{k,j}) , \\ \left[ {\widehat{B}}_{i,j},{\widehat{C}}_{k,l}\right] =i( \delta _{i,k}{\widehat{A}}_{j,l} +\delta _{i,l}{\widehat{A}}_{j,k} +\delta _{j,k}{\widehat{A}}_{i,l} +\delta _{j,l}{\widehat{A}}_{i,k} ) , \end{array} \\\begin{array}{ll} \left[ {\widehat{A}}_{i,j},{\widehat{Q}}_{k}\right] = i \delta _{j,k}{\widehat{Q}}_{i}, & \left[ {\widehat{C}}_{i,j},{\widehat{Q}}_{k}\right] =i( \delta _{j,k}{\widehat{P}}_{i}+ \delta _{i,k}{\widehat{P}}_{j}) , \\ \left[ {\widehat{A}}_{i,j},{\widehat{P}}_{k}\right] = -i \delta _{i,k}{\widehat{P}}_{j}, & \left[ {\widehat{B}}_{i,j},{\widehat{P}}_{k}\right] =i( \delta _{j,k}{\widehat{Q}}_{i}+\delta _{i,k}{\widehat{Q}}_{j}) , \\ \left[ {\widehat{P}}_{i},{\widehat{Q}}_{j}\right] =i \delta _{i,j}\widehat{I}. & \end{array} \end{gather} Therefore, there exists a $\rho ^{\prime }$ representation for the entire algebra of $\mathcal{H}\overline{\mathcal{S}p}( 2n) $ and so the stabilizer is the group itself, $\mathcal{G}\mbox{}^{\circ}\simeq \mathcal{H}\overline{\mathcal{S}p}( 2n) $. This explicit construction of the algebra shows that the representation $\rho ( \Sigma ) $ exists.\ \ Consequently, the Mackey induction theorem is not required. The $\rho ( \Sigma ) $ representation is precisely (up to an overall phase) the metaplectic representation originally studied by Weil \cite{weil}, \cite{folland}.
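As a consistency check on the building block that $\rho ( \Sigma ) $ acts alongside, the induced Weyl-Heisenberg representation (105) can be verified numerically to satisfy the group law. In the sketch below (not from the paper) the group product is assumed in the standard form $\Upsilon ( p^{\prime },q^{\prime },\iota ^{\prime }) \Upsilon ( p,q,\iota ) =\Upsilon ( p+p^{\prime },q+q^{\prime },\iota +\iota ^{\prime }+\frac{1}{2}( p^{\prime }\cdot q-q^{\prime }\cdot p) ) $, consistent with the decomposition (99); the sample group elements and Gaussian state are illustrative choices.

```python
import cmath
import math

lam = 2.0   # the central eigenvalue lambda of (105)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def U(g, psi):
    """Action of Upsilon(p, q, iota) on a wave function per (105)."""
    p, q, iota = g
    def new_psi(x):
        phase = cmath.exp(1j * lam * (iota + dot(x, p) - 0.5 * dot(p, q)))
        return phase * psi([a - b for a, b in zip(x, q)])
    return new_psi

def mul(g1, g2):
    """Assumed Weyl-Heisenberg group product, consistent with (99)."""
    (p1, q1, i1), (p2, q2, i2) = g1, g2
    return ([a + b for a, b in zip(p1, p2)],
            [a + b for a, b in zip(q1, q2)],
            i1 + i2 + 0.5 * (dot(p1, q2) - dot(q1, p2)))

psi = lambda x: math.exp(-dot(x, x))      # sample Gaussian state

g1 = ([0.3, -0.2], [1.1, 0.4], 0.25)
g2 = ([-0.7, 0.5], [0.2, -0.9], -0.6)
for x in ([0.0, 0.0], [0.5, -1.2], [2.0, 0.3]):
    lhs = U(g1, U(g2, psi))(x)            # U(g1) U(g2) psi
    rhs = U(mul(g1, g2), psi)(x)          # U(g1 g2) psi
    assert abs(lhs - rhs) < 1e-12
print("group law of the induced representation (105) verified")
```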
We can construct this explicitly using the factorization of the symplectic group (86).\ \ We can consider each of the factors separately as \begin{equation} \rho ( \Sigma ^{\epsilon }( \alpha ,\beta ,\gamma ) ) =\rho ( \Sigma ^{-}( \gamma ) ) \rho ( \Sigma \mbox{}^{\circ}( \alpha ) ) \rho ( \Sigma ^{+}( \beta ) ) \rho ( \zeta ^{ \epsilon }) , \end{equation} \noindent and each of these factors can be applied separately to determine the $\rho $ representation.\ \ The unitary representations of $\Sigma ^{+}( \beta ) \in \mathcal{A}( m) $, $m=\frac{n( n+1) }{2}$ in a basis with ${\widehat{Q}}_{i}$ diagonal are \begin{equation} \rho ( \Sigma ^{+}( \beta ) ) \left| \psi _{\lambda }( x) \right\rangle =e^{i \beta ^{i,j}{\widehat{B}}_{i,j}} \left| \psi _{\lambda }( x) \right\rangle =e^{\frac{i}{\lambda }\beta ^{i,j}x_{i}x_{j}} \left| \psi _{\lambda }( x) \right\rangle \label{MG: metaplectic beta} \end{equation} The representations of the elements of the unitary group $\Sigma \mbox{}^{\circ}( \alpha ) \in \mathcal{U}( n) $ are \begin{equation} \rho ( \Sigma \mbox{}^{\circ}( \alpha ) ) \left| \psi _{\lambda }( x) \right\rangle ={\left| \det \alpha \right| }^{-\frac{1}{2}} \left| \psi _{\lambda }( \alpha ^{-1}x) \right\rangle . \end{equation} \noindent The symplectic matrix exchanges the $p$ and $q$ degrees of freedom, $\varsigma _{\zeta }\Upsilon ( p,q,\iota ) =\Upsilon ( q,-p,\iota ) $. As is well known, the unitary representation of this is the Fourier transform, $\rho ( \zeta ) =\text{\boldmath $f$}$ where \begin{equation} \left. \left.
\rho ( \Upsilon ( p,q,\iota ) ) \text{\boldmath $f$} |\psi _{\lambda }( x) \right\rangle =\text{\boldmath $f$} \rho ( \Upsilon ( q,-p,\iota ) ) |\psi _{\lambda }( x) \right\rangle , \end{equation} \noindent where the Fourier transform is defined as usual by \begin{equation} \widetilde{\psi }( y) =\text{\boldmath $f$}\psi ( x) = {\left( 2 \pi i\right) }^{-\frac{n}{2}}\int e^{-i x \cdot y}\psi ( x) d^{n}x, \end{equation} \noindent and where \begin{equation} {\widehat{Q}}_{i}\left| \psi _{\lambda }( x) \right\rangle =\lambda x_{i}\left| \psi _{\lambda }( x) \right\rangle ,\ \ {\widehat{P}}_{i}\left| {\widetilde{\psi }}_{\lambda }( y) \right\rangle =y_{i}\left| {\widetilde{\psi }}_{\lambda }( y) \right\rangle . \end{equation} Finally, the $\rho ( \Sigma ^{-}( \gamma ) ) $ representation can be computed using (80) in a basis with ${\widehat{Q}}_{i}$ diagonal giving \begin{equation} \rho ( \Sigma ^{-}( \gamma ) ) \left| \psi _{\lambda }( x) \right\rangle =\text{\boldmath $f$} \rho ( \Sigma ^{+}( -\gamma ) ) {\text{\boldmath $f$}}^{-1}\left| \psi _{\lambda }( x) \right\rangle , \end{equation} \noindent and the $\rho ( \Sigma ^{+}( -\gamma ) ) $ is given by (116).\ \ Putting all of these together gives the representation $\rho ( \Sigma ) $ up to a phase.\ \ While one might expect the phase to depend on $m\in \mathbb{Z}$, it is actually only two-valued, $\pm 1\in \mathbb{Z}_{2}$.\ \ The unitary representations of the double cover metaplectic group $\mathcal{M}p( 2n) $ are also a representation of $\overline{\mathcal{S}p}( 2n) $ due to the homomorphism (141).\ \ Of course, all of these calculations could also be done in a basis with ${\widehat{P}}_{i}$ diagonal. As the stabilizer is the full group, Mackey induction is not required and the unitary irreducible representations $\upsilon $ of $\mathcal{H}\overline{\mathcal{S}p}( 2n) $ are given by \begin{equation} \left. \left.
\upsilon ( \Omega ( 1, \Sigma ,z,\iota ) ) |\psi ( x) \right\rangle =\sigma ( \Sigma ) \otimes \xi ( \Upsilon ( z,\iota ) ) \rho ( \Sigma ) |\psi ( x) \right\rangle \label{MG: HSp unitary irreps} \end{equation} \noindent where $\sigma $ are ordinary unitary irreducible representations of $\overline{\mathcal{S}p}( 2n) $, $\rho $ is the metaplectic representation of $\overline{\mathcal{S}p}( 2n) $ given above and $\xi $ are the unitary irreducible representations of $\mathcal{H}( n) $ given in Section 3.1. The ordinary unitary representations of the symplectic group have been partially characterized [8-9]. A complete set of unitary irreducible representations of the covering group $\overline{\mathcal{S}p}( 2n) $ appears to be an open problem. \section{Summary} We have determined the projective representations of the inhomogeneous symplectic group. This is the maximal symmetry whose projective representations transform physical states such that the Heisenberg commutation relations are valid in all of the transformed states. The inhomogeneous symplectic symmetry is well known from classical mechanics.\ \ It acts on classical phase space with position and momentum degrees of freedom. The projective representations that define the quantum symmetry require its central extension which introduces the non-abelian structure of the Weyl-Heisenberg group, $\mathcal{I}\widecheck{\mathcal{S}p}( 2n) \simeq \overline{\mathcal{S}p}( 2n) \otimes _{s}\mathcal{H}( n) $.\ \ The non-abelian structure is a direct result of the fact that transition probabilities are the square of the norm of physical states. Consequently, the physical states are defined up to a phase and the action of a symmetry group is given by the projective representations.\ \ This is the underlying reason for the non-abelian structure, or {\itshape quantization}.\ \ Any symmetry of quantum mechanics that preserves the position, momentum Heisenberg commutation relations must be a subgroup of this maximal symmetry.
On the other hand, we now understand special relativistic quantum mechanics as the projective representations of the inhomogeneous Lorentz group \cite{wigner},\cite{Weinberg1}.\ \ The central extension of this group does not admit an algebraic extension. For the connected component, the central extension is therefore the cover that we call the Poincar\'e group which for $n=3$ is $\mathcal{P}=\mathcal{S}\mathcal{L}( 2,\mathbb{C}) \otimes _{s}\mathcal{A}( 4) $.\footnote{The full inhomogeneous group is given in terms of the orthogonal group $\mathcal{O}( 1,n) $ that has 4 disconnected components. The discrete $\mathbb{Z}_{2,2}$ symmetry is P, T and PT symmetry.\ \ Its central extension is not unique and it gives rise to the $\mathcal{P}in$ group ambiguity.\ \ On the other hand the $\mathcal{S}\mathcal{O}( 1,n) $ group has 2 components but does have a unique central extension that is the $\mathcal{S}pin$ group.\ \ The discrete $\mathbb{Z}_{2}$ symmetry is the PT symmetry. }\ \ Special relativistic quantum mechanics is formulated in terms of the unitary representations of the Poincar\'e group.\ \ There is however, no mention of the Weyl-Heisenberg group which plays a fundamental role in the original formulation of quantum mechanics. Symmetry is one of the most fundamental concepts of physics. Here we have a quantum symmetry, for the Weyl-Heisenberg group of quantum mechanics, that is given by the projective representations of a classical symmetry on phase space. On the other hand, the quantum symmetry for the Minkowski metric of special relativity is given in terms of a classical symmetry on position-time space, that is, spacetime.\ \ Quantum mechanics and special relativity have, at best, an uneasy marriage.\ \ Perhaps it is due to this underlying disparity in the most basic symmetries of these theories.
The standard approach is to ignore the quantum symmetry described in this paper and formulate special relativistic quantum mechanics as the projective representations of the inhomogeneous Lorentz group. If we truly are to bring together quantum mechanics and special relativity, we must first reconcile these basic symmetries and find a symmetry that encompasses both.\ \ This can be done in a remarkably straightforward manner and yields a theory that, in a physical limit, reduces to the usual formulation of special relativistic quantum mechanics.\ \ But, before the limit is taken, it points to a theory incorporating both symmetries that may give further understanding of the unification of quantum mechanics and relativity \cite{Low5},\cite{Low6},\cite{Low14}.\ \ In this theory, physics takes place in extended phase space and there is no invariant global projection that gives physics in position-time space (i.e. space-time).\ \ \ Generally, local observers with general non-inertial trajectories construct different space-times as subspaces of extended phase space. The usual Lorentz symmetry continues to hold exactly for inertial trajectories but is generalized in a remarkable manner for non-inertial trajectories.\ \ \ \ \section{Appendix A: Key Theorems} In this appendix we review a set of definitions and theorems that are fundamental for the application of symmetry groups in quantum mechanics.\ \ We state the theorems only and refer the reader to the cited literature for full proofs.
\begin{definition} A group $\mathcal{G}$ is a semidirect product if it has a subgroup $\mathcal{K}$ (referred to as the homogeneous subgroup) and a normal subgroup $\mathcal{N}$ such that $\mathcal{K}\cap \mathcal{N}=\text{\boldmath $e$}$ and $\mathcal{G}\simeq \mathcal{N} \mathcal{K}$. Our notation for a semidirect product is $\mathcal{G}\simeq \mathcal{K}\otimes _{s}\mathcal{N}$\footnote{Our notation follows [17]. Another notation commonly used is $\mathcal{N}\rtimes \mathcal{K}$. It is just notation; the definition remains the same for both notations.}\cite{sternberg}. \label{PH: def: semidirect product} \end{definition}

It follows directly that a semidirect product is right associative in the sense that $\mathcal{D}\simeq (\mathcal{A}\otimes _{s}\mathcal{B})\otimes _{s}\mathcal{C}$ implies that $\mathcal{D}\simeq \mathcal{A}\otimes _{s}(\mathcal{B}\otimes _{s}\mathcal{C})$, and so brackets can be removed. However, $\mathcal{D}\simeq \mathcal{A}\otimes _{s}(\mathcal{B}\otimes _{s}\mathcal{C})$ does not necessarily imply $\mathcal{D}\simeq (\mathcal{A}\otimes _{s}\mathcal{B})\otimes _{s}\mathcal{C}$, as $\mathcal{B}$ is not necessarily a normal subgroup.

\begin{definition} An algebraic central extension of a Lie algebra $g$ is the Lie algebra $\widecheck{g}$ that satisfies the following short exact sequence, where $z$ is the maximal abelian algebra that is central in $\widecheck{g}$,\label{PH: def: alg central extension} \end{definition}
\begin{equation}
\text{\boldmath $0$}\rightarrow z\rightarrow \widecheck{g}\rightarrow g\rightarrow \text{\boldmath $0$},
\end{equation}
\noindent where $\text{\boldmath $0$}$ is the trivial algebra.

Suppose $\{X_{a}\}$ is a basis of the Lie algebra $g$ with commutation relations $[X_{a},X_{b}]=c_{a,b}^{c}X_{c}$, $a,b=1,\ldots ,r$. Then an algebraic central extension is a maximal set of central abelian generators $\{A_{\alpha }\}$, with $\alpha ,\beta ,\ldots =1,\ldots ,m$, such that
\begin{equation}
\left[ A_{\alpha },A_{\beta }\right] =0,\ \ \ \ \left[ X_{a},A_{\alpha }\right] =0,\ \ \ \ \left[ X_{a},X_{b}\right] =c_{a,b}^{c}X_{c}+c_{a,b}^{\alpha }A_{\alpha }.
\end{equation}
\noindent The basis $\{X_{a},A_{\alpha }\}$ of the centrally extended Lie algebra must also satisfy the Jacobi identities, which constrain the admissible central extensions of the algebra. The choice $X_{a}\mapsto X_{a}+A_{a}$ will always satisfy these relations, and this trivial case is excluded. The algebra $\widecheck{g}$ constructed in this manner is equivalent to the central extension of $g$ given in Definition \ref{PH: def: alg central extension}.

\begin{definition} The central extension of a connected Lie group $\mathcal{G}$ is the Lie group $\widecheck{\mathcal{G}}$ that satisfies the following short exact sequence, where $\mathcal{Z}$ is a maximal abelian group that is central in $\widecheck{\mathcal{G}}$, \end{definition}
\begin{equation}
\text{\boldmath $e$}\rightarrow \mathcal{Z}\rightarrow \widecheck{\mathcal{G}}\overset{\pi }{\rightarrow }\mathcal{G}\rightarrow \text{\boldmath $e$}.
\end{equation}
The abelian group $\mathcal{Z}$ may always be written as the direct product $\mathcal{Z}\simeq \mathcal{A}( m) \otimes \mathbb{A}$ of a connected continuous abelian Lie group $\mathcal{A}( m) \simeq (\mathbb{R}^{m},+)$ and a discrete abelian group $\mathbb{A}$ that may be finite or countable [10].
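As a concrete illustration of the algebraic central extension (this example is ours, not the paper's), take the abelian algebra spanned by $X_{1},X_{2}$ and extend it by a single central generator $A$ with $[X_{1},X_{2}]=A$; this is the $n=1$ Heisenberg algebra. The sketch below checks the extension relations and a Jacobi identity numerically in the standard strictly upper triangular $3\times 3$ matrix realization:

```python
import numpy as np

def comm(a, b):
    """Matrix commutator [a, b]."""
    return a @ b - b @ a

# n = 1 Heisenberg algebra in its strictly upper triangular 3x3 realization:
# X1, X2 span the abelian algebra being extended; A is the central generator.
X1 = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
X2 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
A  = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=float)

zero = np.zeros((3, 3))

# Central extension relations: [X1, X2] = A, with A central.
assert np.array_equal(comm(X1, X2), A)
assert np.array_equal(comm(X1, A), zero)
assert np.array_equal(comm(X2, A), zero)

# One of the Jacobi identities that constrain admissible extensions.
jacobi = comm(X1, comm(X2, A)) + comm(X2, comm(A, X1)) + comm(A, comm(X1, X2))
assert np.array_equal(jacobi, zero)
print("central extension relations and Jacobi identity hold")
```

The trivial redefinition $X_{a}\mapsto X_{a}+A_{a}$ excluded in the text corresponds here to adding a multiple of $A$ to $X_{1}$ or $X_{2}$, which changes no commutator since $A$ is central.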
The exact sequence may be decomposed into an exact sequence for the {\itshape topological} central extension and the {\itshape algebraic} central extension,
\begin{equation}
\text{\boldmath $e$}\rightarrow \mathbb{A}\rightarrow \overline{\mathcal{G}}\overset{\pi \mbox{}^{\circ}}{\rightarrow }\mathcal{G}\rightarrow \text{\boldmath $e$},\ \ \ \ \text{\boldmath $e$}\rightarrow \mathcal{A}( m) \rightarrow \widecheck{\mathcal{G}}\overset{\widetilde{\pi }}{\rightarrow }\overline{\mathcal{G}}\rightarrow \text{\boldmath $e$},
\end{equation}
\noindent where $\pi =\pi \mbox{}^{\circ}\circ \widetilde{\pi }$. The first exact sequence defines the universal cover, where $\mathbb{A}\simeq \ker \pi \mbox{}^{\circ}$ is the fundamental homotopy group. All of the groups in the second sequence are simply connected and therefore may be defined by the exponential map of the central extension of the algebra given by Definition \ref{PH: def: alg central extension}. In other words, the full central extension may be computed by determining the universal covering group of the algebraic central extension.

\begin{definition} A ray $\Psi $ is the equivalence class of states $|\psi _{\gamma }\rangle $ that are elements of a Hilbert space $\text{\boldmath $\mathrm{H}$}$ up to a phase, \end{definition}
\begin{equation}
\Psi =\left\{ e^{i \omega }\left| \psi \right\rangle \,|\,\omega \in \mathbb{R}\right\} ,\ \ \ \left| \psi \right\rangle \in \text{\boldmath $\mathrm{H}$}.
\end{equation}
\noindent Note that the physical probabilities, which are the squares of the moduli, depend only on the rays,
\[
|\left( \Psi _{\beta },\Psi _{\alpha }\right) |^{2}={\left| \left\langle \psi _{\beta }|\psi _{\alpha }\right\rangle \right| }^{2}
\]
\noindent for all $|\psi _{\gamma }\rangle \in \Psi _{\gamma }$.
For this reason, physical states in quantum mechanics are defined to be rays rather than states in the Hilbert space.

\begin{definition} A projective representation $\varrho $ of a symmetry group $\mathcal{G}$ is the maximal representation such that, for $|{\widetilde{\psi }}_{\gamma }\rangle =\varrho ( g) |\psi _{\gamma }\rangle $, the modulus is invariant, ${|\langle {\widetilde{\psi }}_{\beta }|{\widetilde{\psi }}_{\alpha }\rangle |}^{2}={|\langle \psi _{\beta }|\psi _{\alpha }\rangle |}^{2}$ for all $|\psi _{\gamma }\rangle , |{\widetilde{\psi }}_{\gamma }\rangle \in \Psi $.\label{PH: def: proj representation} \end{definition}

\begin{theorem} {\bfseries (Wigner, Weinberg):} Any projective representation of a Lie symmetry group $\mathcal{G}$ on a separable Hilbert space is equivalent to a representation that is either linear and unitary or anti-linear and anti-unitary. Furthermore, if $\mathcal{G}$ is connected, the projective representations are equivalent to a representation that is linear and unitary [1],[11].\label{PH: theorem: Wigner unitary projective} \end{theorem}

This is the generalization of the well-known theorem that any ordinary representation of a compact group is equivalent to a representation that is unitary. For a projective representation, the phase degrees of freedom of the central extension enable the equivalent linear unitary or anti-linear anti-unitary representation to be constructed for this much more general class of Lie groups that admit representations on separable Hilbert spaces. (A proof of the theorem is given in Appendix A of Chapter 2 of \cite{Weinberg1}.) The set of groups to which this theorem applies includes all of the groups that are studied in this paper.
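The content of Definition \ref{PH: def: proj representation} and Theorem \ref{PH: theorem: Wigner unitary projective} can be checked numerically in a small example (ours, for illustration only): the transition probability depends only on the rays, and any unitary preserves it.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_state(d):
    """A random normalized state in C^d."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi_a, psi_b = rand_state(4), rand_state(4)

# The transition probability |<psi_b|psi_a>|^2 (np.vdot conjugates its
# first argument) ...
prob = abs(np.vdot(psi_b, psi_a)) ** 2

# ... is unchanged by arbitrary phases, so it is a function of the rays only ...
assert np.isclose(
    abs(np.vdot(np.exp(2.1j) * psi_b, np.exp(0.7j) * psi_a)) ** 2, prob)

# ... and is preserved by any unitary U (here a random unitary obtained
# from the QR decomposition of a complex matrix).
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)
assert np.isclose(abs(np.vdot(U @ psi_b, U @ psi_a)) ** 2, prob)
print("modulus invariant under phases and unitaries")
```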
\begin{theorem} {\bfseries (Bargmann, Mackey)} The projective representations of a connected Lie group $\mathcal{G}$ are equivalent to the ordinary unitary representations of its central extension $\widecheck{\mathcal{G}}$ \cite{bargmann},\cite{mackey2}.\label{PH: theorem: proj rep is unitary CE} \end{theorem}

Theorem \ref{PH: theorem: Wigner unitary projective} states that all projective representations of a connected Lie group are equivalent to a projective representation that is unitary. A phase is the unitary representation of a central abelian subgroup. Therefore, the maximal representation is given in terms of the central extension of the group.

\begin{theorem} Let $\mathcal{G}$, $\mathcal{H}$ be Lie groups and $\pi :\mathcal{G}\rightarrow \mathcal{H}$ be a homomorphism. Then, for every unitary representation $\widetilde{\varrho }$ of $\mathcal{H}$ there exists a degenerate unitary representation $\varrho $ of $\mathcal{G}$ defined by $\varrho =\widetilde{\varrho }\circ \pi $. Conversely, for every degenerate unitary representation $\varrho $ of a Lie group $\mathcal{G}$ there exists a Lie group $\mathcal{H}$ and a homomorphism $\pi :\mathcal{G}\rightarrow \mathcal{H}$ with $\ker ( \pi ) \neq \text{\boldmath $e$}$ such that $\varrho =\widetilde{\varrho }\circ \pi $, where $\widetilde{\varrho }$ is a unitary representation of $\mathcal{H}$.\label{PH: theorem: degenerate reps} \end{theorem}

Noting that a representation is a homomorphism, this theorem follows straightforwardly from the properties of homomorphisms. As a consequence, the set of degenerate representations of a group is characterized by its set of normal subgroups. A {\itshape faithful} representation is one for which the representation is an isomorphism.
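A minimal finite example of Theorem \ref{PH: theorem: degenerate reps} (ours, for illustration): the sign representation of $\mathbb{Z}_{4}$ is unitary but degenerate, and it factors through the quotient homomorphism onto $\mathbb{Z}_{2}$, where it becomes faithful.

```python
# Z_4 = {0, 1, 2, 3} under addition mod 4; the "sign" representation
# rho(g) = (-1)^g is unitary (values on the unit circle) but not faithful.
rho = lambda g: (-1) ** (g % 4)

# rho is a homomorphism: rho(g + h) = rho(g) rho(h).
assert all(rho((g + h) % 4) == rho(g) * rho(h)
           for g in range(4) for h in range(4))

# Its kernel {0, 2} is a nontrivial normal subgroup isomorphic to Z_2 ...
kernel = [g for g in range(4) if rho(g) == 1]
assert kernel == [0, 2]

# ... so rho factors as rho = rho_tilde o pi through pi: Z_4 -> Z_4/{0,2} = Z_2,
# and rho_tilde(k) = (-1)^k is a faithful representation of Z_2.
pi = lambda g: g % 2
rho_tilde = lambda k: (-1) ** k
assert all(rho(g) == rho_tilde(pi(g)) for g in range(4))
print("the degenerate representation factors through the quotient Z_2")
```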
\begin{theorem} {\bfseries (Levi)} Any simply connected Lie group is equivalent to the semidirect product of a semisimple group and a maximal solvable normal subgroup \cite{barut}.\label{PH: theorem: Levi} \end{theorem}

As the central extension of any connected group is simply connected, the problem of computing the projective representations of a group can always be reduced to computing the unitary irreducible representations of a semidirect product group with a semisimple homogeneous group and a solvable normal subgroup. The unitary irreducible representations of the semisimple groups are known, and the solvable groups that we are interested in turn out to be semidirect products of abelian groups.

\begin{theorem} Any semidirect product group $\mathcal{G}\simeq \mathcal{K}\otimes _{s}\mathcal{N}$ is a subgroup of a group homomorphic to the group of automorphisms of $\mathcal{N}$ [13].\label{PH: theorem: automorphisms semid-direct} \end{theorem}

The proof follows directly from the definitions of the semidirect product and of the automorphism group.

\begin{theorem} The automorphism group of a simply connected group is isomorphic to the automorphism group of its Lie algebra \cite{barut}.\label{PH: theorem: automorphisms simply connected} \end{theorem}

\subsection{Mackey theorems for the representations of semidirect product groups}

The Mackey theorems are valid for a general class of topological groups, but we will only require the more restricted case $\mathcal{G}\simeq \mathcal{K}\otimes _{s}\mathcal{N}$ where the group $\mathcal{G}$ and the subgroups $\mathcal{K},\mathcal{N}$ are smooth Lie groups. The central extension of any connected Lie group is simply connected and therefore generally has the form of a semidirect product due to Theorem \ref{PH: theorem: Levi} (Levi). Theorem \ref{PH: theorem: automorphisms semid-direct} further constrains the possible homogeneous groups $\mathcal{K}$ of the semidirect product given the normal subgroup $\mathcal{N}$.
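Before turning to the Mackey theorems, the semidirect structure itself can be made concrete with a toy example (ours, not from the paper): the affine ``$ax+b$'' group $\mathcal{K}\otimes _{s}\mathcal{N}$ with $\mathcal{K}\simeq (\mathbb{R}_{>0},\times )$ acting on the normal subgroup $\mathcal{N}\simeq (\mathbb{R},+)$. Conjugation by $\mathcal{K}$ realizes an automorphism of $\mathcal{N}$, as in the theorems above.

```python
import numpy as np

def aff(a, b):
    """Element (a, b) of the 'ax + b' group as a 2x2 matrix, a > 0."""
    return np.array([[a, b], [0.0, 1.0]])

a1, b1, a2, b2 = 2.0, 3.0, 0.5, -1.0

# Semidirect product law: (a', b')(a, b) = (a' a, a' b + b').
assert np.allclose(aff(a1, b1) @ aff(a2, b2), aff(a1 * a2, a1 * b2 + b1))

# N is normal: conjugating a pure translation n by a pure dilation k gives
# another pure translation, k n k^{-1} = (1, a * b) -- this is the
# automorphism of N induced by k.
k, n = aff(a1, 0.0), aff(1.0, b2)
assert np.allclose(k @ n @ np.linalg.inv(k), aff(1.0, a1 * b2))

# K ∩ N = {e}: the identity is the only element that is simultaneously a
# pure dilation and a pure translation.
print("semidirect structure of the affine group verified")
```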
The first Mackey theorem is the induced representation theorem, which gives a method of constructing a unitary representation of a group (not necessarily a semidirect product group) from a unitary representation of a closed subgroup. The second theorem gives a construction of certain representations of a certain subgroup of a semidirect product group from which the complete set of unitary irreducible representations of the group can be induced. This theorem is valid for the general case where the normal subgroup $\mathcal{N}$ is a nonabelian group. In the special case where the normal subgroup $\mathcal{N}$ is abelian, the theorem may be stated in a simpler form.

\begin{theorem} {\bfseries {\upshape (Mackey).}} {\upshape Induced representation theorem.} Suppose that $\mathcal{G}$ is a Lie group and $\mathcal{H}$ is a Lie subgroup, $\mathcal{H}\subset \mathcal{G}$, such that $\mathbb{K}\simeq \mathcal{G}/\mathcal{H}$ is a homogeneous space with a natural projection $\pi :\mathcal{G}\rightarrow \mathbb{K}$, an invariant measure and a canonical section $\Theta :\mathbb{K}\rightarrow \mathcal{G}:\mathrm{k}\mapsto g$ such that $\pi \circ \Theta ={\mathrm{Id}}_{\mathbb{K}}$, where ${\mathrm{Id}}_{\mathbb{K}}$ is the identity map on $\mathbb{K}$. Let $\rho $ be a unitary representation of $\mathcal{H}$ on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\rho }$:\label{PH: theorem: Mackey induction theorem} \end{theorem}
\[
\rho ( h) :{\text{\boldmath $\mathrm{H}$}}^{\rho }\rightarrow {\text{\boldmath $\mathrm{H}$}}^{\rho }:\left| \varphi \right\rangle \mapsto \left| \widetilde{\varphi }\right\rangle =\rho ( h) \left| \varphi \right\rangle ,\ \ h\in \mathcal{H}.
\]
\noindent Then a unitary representation $\varrho $ of the Lie group $\mathcal{G}$ on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\varrho }$,
\[
\varrho ( g) :{\text{\boldmath $\mathrm{H}$}}^{\varrho }\rightarrow {\text{\boldmath $\mathrm{H}$}}^{\varrho }:\left| \psi \right\rangle \mapsto \left| \widetilde{\psi }\right\rangle =\varrho ( g) \left| \psi \right\rangle ,\ \ g\in \mathcal{G},
\]
\noindent may be induced from the representation $\rho $ of $\mathcal{H}$ by defining
\begin{equation}
\widetilde{\psi }( \mathrm{k}) =\left( \varrho ( g) \psi \right) ( \mathrm{k}) =\rho ( g\mbox{}^{\circ}) \psi ( g^{-1}\mathrm{k}) ,\ \ \ g\mbox{}^{\circ}={\Theta ( \mathrm{k}) }^{-1} g \Theta ( g^{-1}\mathrm{k}) ,
\end{equation}
\noindent where the Hilbert space on which the induced representation $\varrho $ acts is given by ${\text{\boldmath $\mathrm{H}$}}^{\varrho }\simeq {\text{\boldmath $\mathrm{L}$}}^{2}( \mathbb{K},{\text{\boldmath $\mathrm{H}$}}^{\rho }) $ [14],[13]. Given that the section $\Theta $ exists, the proof is straightforward: one first shows that $g\mbox{}^{\circ}\in \ker ( \pi ) \simeq \mathcal{H}$, so that $\rho ( g\mbox{}^{\circ}) $ is well defined.

\begin{definition} ({\itshape Little groups}): Let $\mathcal{G}=\mathcal{K}\otimes _{s}\mathcal{N}$ be a semidirect product. Let $[\xi ]\in {\text{\boldmath $U$}}_{\mathcal{N}}$, where ${\text{\boldmath $U$}}_{\mathcal{N}}$ denotes the unitary dual whose elements are equivalence classes of unitary representations of $\mathcal{N}$ on a Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }$. Let $\rho $ be a unitary representation of a subgroup $\mathcal{G}\mbox{}^{\circ}=\mathcal{K}\mbox{}^{\circ}\otimes _{s}\mathcal{N}$ on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }$ such that $\rho _{|\mathcal{N}}=\xi $.
The {\itshape little groups} are the set of maximal subgroups $\mathcal{K}^{\circ }$ such that $\rho $ exists on the corresponding {\itshape stabilizer} $\mathcal{G}^{\circ }\simeq \mathcal{K}\mbox{}^{\circ}\otimes _{s}\mathcal{N}$ and satisfies the fixed point equation\label{PH: defn: little group} \end{definition}
\begin{equation}
{\widehat{\varsigma }}_{\rho ( k) }[ \xi ] =\left[ \xi \right] ,\ \ k\in \mathcal{K}\mbox{}^{\circ}. \label{PH: Little group general equation}
\end{equation}
\noindent In this definition the dual automorphism is defined by
\begin{equation}
\left( {\widehat{\varsigma }}_{\rho ( g) }\xi \right) ( h) =\rho ( g) \rho ( h) {\rho ( g) }^{-1}=\rho ( g h g^{-1}) =\xi ( \varsigma _{g}h) \label{PH: automorphisms of little group}
\end{equation}
\noindent for all $g\in \mathcal{G}\mbox{}^{\circ}$ and $h\in \mathcal{N}$. The equivalence classes of the unitary representations of $\mathcal{N}$ are defined by
\begin{equation}
\left[ \xi \right] =\left\{ {\widehat{\varsigma }}_{\xi ( h) }\xi \,|\,h\in \mathcal{N}\right\} . \label{PH: Abelian xi equivalence classess}
\end{equation}
\noindent A group $\mathcal{G}$ may have multiple little groups ${\mathcal{K}\mbox{}^{\circ}}_{\alpha }$ whose intersection is the identity element only. We will generally leave the label $\alpha $ implicit.

\begin{theorem} {\bfseries {\upshape (Mackey).}} {\upshape Unitary irreducible representations of semidirect products.} Suppose that we have a semidirect product Lie group $\mathcal{G}\simeq \mathcal{K}\otimes _{s}\mathcal{N}$, where $\mathcal{K},\mathcal{N}$ are Lie subgroups.
Let $\xi $ be a unitary irreducible representation of $\mathcal{N}$ on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }$. Let $\mathcal{G}\mbox{}^{\circ}\simeq \mathcal{K}\mbox{}^{\circ}\otimes _{s}\mathcal{N}$ be a maximal stabilizer on which there exists a representation $\rho $ on ${\text{\boldmath $\mathrm{H}$}}^{\xi }$ such that $\rho |_{\mathcal{N}}=\xi $. Let $\sigma $ be a unitary irreducible representation of $\mathcal{K}\mbox{}^{\circ}$ on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\sigma }$. Define the representation $\varrho \mbox{}^{\circ}=\sigma \otimes \rho $ that acts on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\varrho \mbox{}^{\circ}}\simeq {\text{\boldmath $\mathrm{H}$}}^{\sigma }\otimes {\text{\boldmath $\mathrm{H}$}}^{\xi }$.\label{PH: theorem: Mackey semidirect product theorem} Determine the complete set of stabilizers, representations $\rho $ and little groups that satisfy these properties, labeled by $\alpha $: $\{{(\mathcal{G}\mbox{}^{\circ},\varrho \mbox{}^{\circ},{\text{\boldmath $\mathrm{H}$}}^{\varrho \mbox{}^{\circ}})}_{\alpha }\}$. If for some member of this set $\mathcal{G}\mbox{}^{\circ}\simeq \mathcal{G}$, then for this case the representations are $(\mathcal{G},\varrho ,{\text{\boldmath $\mathrm{H}$}}^{\varrho })\simeq (\mathcal{G}\mbox{}^{\circ},\varrho \mbox{}^{\circ},{\text{\boldmath $\mathrm{H}$}}^{\varrho \mbox{}^{\circ}})$. For the cases where the stabilizer $\mathcal{G}\mbox{}^{\circ}$ is a proper subgroup of $\mathcal{G}$, the unitary irreducible representations $(\mathcal{G},\varrho ,{\text{\boldmath $\mathrm{H}$}}^{\varrho })$ are the representations induced (using Theorem \ref{PH: theorem: Mackey induction theorem}) by the representations $(\mathcal{G}\mbox{}^{\circ},\varrho \mbox{}^{\circ},{\text{\boldmath $\mathrm{H}$}}^{\varrho \mbox{}^{\circ}})$ of the stabilizer
subgroup. The complete set of unitary irreducible representations is the union $\cup _{\alpha }\{{(\mathcal{G},\varrho ,{\text{\boldmath $\mathrm{H}$}}^{\varrho })}_{\alpha }\}$ over the set of all the stabilizers and corresponding little groups. \end{theorem}

This major result and its proof are due to Mackey [14]. Our focus in this paper is on applying this theorem.

\subsubsection{Abelian normal subgroup}

The theorem simplifies in the special case where the normal subgroup $\mathcal{N}$ is an abelian group, $\mathcal{N}\simeq \mathcal{A}( n) $. An abelian group has the property that its unitary irreducible representations $\xi $ are the characters acting on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\xi }\simeq \mathbb{C}$,
\begin{equation}
\xi ( a) \left| \phi \right\rangle =e^{i a\cdot \nu }\left| \phi \right\rangle ,\ \ \nu \in \mathbb{R}^{n}.
\end{equation}
\noindent The unitary irreducible representations are labeled by the $\nu _{i}$ that are the eigenvalues of the hermitian representation of the basis $\{A_{i}\}$ of the abelian Lie algebra,
\begin{equation}
{\widehat{A}}_{i}\left| \phi \right\rangle =\xi ^{\prime }( A_{i}) \left| \phi \right\rangle =\nu _{i}\left| \phi \right\rangle .
\end{equation}
\noindent The equivalence classes $[\xi ]\in {\text{\boldmath $U$}}_{\mathcal{A}( n) }$ each have a single element, $[\xi ]\simeq \xi $, as, for an abelian group, the expression (\ref{PH: Abelian xi equivalence classess}) is trivial. The representations $\rho $ act on ${\text{\boldmath $\mathrm{H}$}}^{\xi }\simeq \mathbb{C}$ and are one dimensional, and therefore must commute with $\xi $.
Therefore, in equation (\ref{PH: automorphisms of little group}), $\rho ( g) \xi ( h) {\rho ( g) }^{-1}=\xi ( h) $, and (\ref{PH: Little group general equation}) simplifies to
\begin{equation}
\xi ( a) =\xi ( \varsigma _{k}a) =\xi ( k a k^{-1}) ,\ \ a\in \mathcal{A}( n) ,\ \ k\in \mathcal{K}\mbox{}^{\circ}. \label{PH: Little group abelian equation}
\end{equation}

\begin{theorem} {\bfseries {\upshape (Mackey).}} {\upshape Unitary irreducible representations of a semidirect product with an abelian normal subgroup.} Suppose that we have a semidirect product group $\mathcal{G}\simeq \mathcal{K}\otimes _{s}\mathcal{A}$, where $\mathcal{A}$ is abelian. Let $\xi $ be the unitary irreducible representations (the characters) of $\mathcal{A}$ on ${\text{\boldmath $\mathrm{H}$}}^{\xi }\simeq \mathbb{C}$. Let $\mathcal{K}\mbox{}^{\circ}\subseteq \mathcal{K}$ be a little group defined by (\ref{PH: Little group abelian equation}) with the corresponding stabilizer $\mathcal{G}\mbox{}^{\circ}\simeq \mathcal{K}\mbox{}^{\circ}\otimes _{s}\mathcal{A}$. Let $\sigma $ be a unitary irreducible representation of $\mathcal{K}\mbox{}^{\circ}$ on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\sigma }$. Define the representation $\varrho \mbox{}^{\circ}=\sigma \otimes \xi $ of the stabilizer that acts on the Hilbert space ${\text{\boldmath $\mathrm{H}$}}^{\varrho \mbox{}^{\circ}}\simeq {\text{\boldmath $\mathrm{H}$}}^{\sigma }\otimes \mathbb{C}$. The theorem then proceeds as in the general case of Theorem \ref{PH: theorem: Mackey semidirect product theorem}.\label{PH: theorem: Mackey abelian case} \end{theorem}

\section{Appendix B: Polarized Realization of the Weyl-Heisenberg group}

The maps $\varphi ^{\pm }$ defined in (35) are isomorphisms. Therefore the $\Upsilon ^{\pm }( p,q,\iota ^{\pm }) $ are elements of the Weyl-Heisenberg group realized in another matrix coordinate system. These realizations are referred to as the polarized realizations \cite{folland}.
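A numerical aside (ours): with the $n=1$ polarized matrix realization given at the end of this appendix, the central coordinate $\iota $ can be seen to arise from the group commutator of a pure $p$-translation and a pure $q$-translation; the sign of the cocycle depends on the polarization convention.

```python
import numpy as np

def Y(p, q, iota):
    """n = 1 polarized (upper unitriangular 3x3) Weyl-Heisenberg element."""
    return np.array([[1.0, q, iota],
                     [0.0, 1.0, p],
                     [0.0, 0.0, 1.0]])

p, q = 2.0, 3.0
A, B = Y(p, 0.0, 0.0), Y(0.0, q, 0.0)

# Group commutator A B A^{-1} B^{-1}: in this realization it is the pure
# central element Y(0, 0, -q*p), so p- and q-translations commute only
# up to the center A(1).
C = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)
assert np.allclose(C, Y(0.0, 0.0, -q * p))

# Central elements commute with every group element.
Z, G = Y(0.0, 0.0, 1.0), Y(0.3, -1.2, 0.5)
assert np.allclose(Z @ G, G @ Z)
print("the p-q commutator is a pure central element")
```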
The group products in these coordinates are computed directly from (3-4) to be
\begin{equation}
\begin{array}{l}
\Upsilon ^{+}( p^{\prime },q^{\prime },\iota ^{\prime }) \Upsilon ^{+}( p,q,\iota ) =\Upsilon ^{+}( p^{\prime }+p,q^{\prime }+q,\iota ^{\prime }+\iota +p^{\prime }\cdot q) , \\
\Upsilon ^{-}( p^{\prime },q^{\prime },\iota ^{\prime }) \Upsilon ^{-}( p,q,\iota ) =\Upsilon ^{-}( p^{\prime }+p,q^{\prime }+q,\iota ^{\prime }+\iota -q^{\prime }\cdot p) , \\
{\Upsilon ^{\pm }( p,q,\iota ) }^{-1}=\Upsilon ^{\pm }( -p,-q,-\iota \pm p\cdot q) .
\end{array}
\end{equation}
\noindent Note that the polarized realizations factor directly,
\begin{gather}
\Upsilon ^{+}( 0,q,\iota ) \Upsilon ^{+}( p,0,0) =\Upsilon ^{+}( p,q,\iota ) , \\
\Upsilon ^{-}( p,0,\iota ) \Upsilon ^{-}( 0,q,0) =\Upsilon ^{-}( p,q,\iota ) .
\end{gather}
The existence of the isomorphisms $\varphi ^{\pm }$ and of the two different normal subgroups $\mathcal{A}( n+1) $, with elements $\Upsilon ( p,0,\iota ) $ and $\Upsilon ( 0,q,\iota ) $, whose intersection is the center $\mathcal{Z}\simeq \mathcal{A}( 1) $, is responsible for many of the remarkable properties of the Weyl-Heisenberg group. In fact, we shall see shortly that the choice of normal subgroup made when applying the Mackey theorems results in unitary representations with either $p$ or $q$ diagonal.

\noindent The matrix realization corresponds to a coordinate system on the Lie group and is therefore not unique. The polarized matrix realizations are given by the $n+2$ dimensional square matrices
\begin{equation}
\Upsilon ^{+}( p,q,\iota ) =\left( \begin{array}{lll} 1 & q^{\mathrm{t}} & \iota \\ 0 & 1_{n} & p \\ 0 & 0 & 1 \end{array}\right) ,\ \ \Upsilon ^{-}( p,q,\iota ) =\left( \begin{array}{lll} 1 & p^{\mathrm{t}} & \iota \\ 0 & 1_{n} & q \\ 0 & 0 & 1 \end{array}\right) .
\end{equation}

\section{Appendix C: Extended Central Extension}

The central extension of a group that is not connected is not necessarily unique. It may be defined by requiring exact sequences both for the cover of the group and for the homomorphisms onto the discrete group of components. For $\mathbb{Z}_{2}\otimes _{s}\mathcal{H}\overline{\mathcal{S}p}( 2n) $, these sequences are \cite{Azcarraga}
\begin{equation}
\begin{array}{lllllllll}
 & & e & & e & & e & & \\
 & & \downarrow & & \downarrow & & \downarrow & & \\
e & \rightarrow & \mathbb{Z}\otimes \mathcal{A}( 1) & \rightarrow & \mathcal{H}\overline{\mathcal{S}p}( 2n) & \rightarrow & \mathcal{I}\mathcal{S}p( 2n) & \rightarrow & e \\
 & & \downarrow & & \downarrow & & \downarrow & & \\
e & \rightarrow & \mathbb{D} & \rightarrow & \mathbb{Z}_{2}\otimes _{s}\mathcal{H}\overline{\mathcal{S}p}( 2n) & \rightarrow & \mathbb{Z}_{2}\otimes _{s}\mathcal{I}\mathcal{S}p( 2n) & \rightarrow & e \\
 & & \downarrow & & \downarrow & & \downarrow & & \\
 & & \mathbb{Z}_{2} & & \mathbb{Z}_{2} & & \mathbb{Z}_{2} & & \\
 & & \downarrow & & \downarrow & & \downarrow & & \\
 & & e & & e & & e & &
\end{array}
\end{equation}
\noindent The solution is $\mathbb{D}\simeq \mathbb{Z}_{2}\otimes \mathbb{Z}\otimes \mathcal{A}( 1) $. Therefore the central extension of $\mathbb{Z}_{2}\otimes _{s}\mathcal{I}\mathcal{S}p( 2n) $ is unique and is given by $\mathbb{Z}_{2}\otimes _{s}\mathcal{H}\overline{\mathcal{S}p}( 2n) $.

\section{Appendix D: Homomorphisms}

Representations are homomorphisms of a group $\mathcal{G}$. If the homomorphism is an isomorphism, then the representation is said to be faithful; otherwise it is degenerate. Theorem \ref{PH: theorem: degenerate reps} establishes that degenerate representations are faithful representations of groups homomorphic to $\mathcal{G}$.
The homomorphisms can be characterized by the normal subgroups that are the kernels of the homomorphisms. First we consider the subgroup $\mathcal{H}\overline{\mathcal{S}p}( 2n) $, which we have noted in (22) is the central extension of $\mathcal{I}\mathcal{S}p( 2n) $, with center
\begin{equation}
\mathcal{Z}=\mathbb{Z}\otimes \mathcal{A}( 1) , \label{MG: Center of aut}
\end{equation}
\noindent where $\mathbb{Z}$ is the center of $\overline{\mathcal{S}p}( 2n) $ and $\mathcal{A}( 1) $ is the center of $\mathcal{H}( n) $ (31). The double cover of $\mathcal{S}p( 2n) $ is the metaplectic group $\mathcal{M}p( 2n) $. As $\mathbb{Z}_{2}$ is a normal subgroup of $\mathbb{Z}$, there is also a homomorphism from the cover of the symplectic group to the metaplectic group,
\begin{equation}
\pi :\overline{\mathcal{S}p}( 2n) \rightarrow \mathcal{M}p( 2n) ,\ \ \ \ \ker ( \pi ) \simeq \mathbb{Z}/\mathbb{Z}_{2}. \label{MG: symplectic to metaplectic homomorphism}
\end{equation}
\noindent This gives the sequence of homomorphic groups where the homomorphisms have kernels that are subgroups of the center $\mathcal{Z}$,
\begin{equation}
\begin{array}{lllllll}
\mathcal{H}\overline{\mathcal{S}p}( 2n) & \rightarrow & \mathcal{H}\mathcal{M}p( 2n) & \rightarrow & \mathcal{H}\mathcal{S}p( 2n) & & \\
 & \searrow & & \searrow & & \searrow & \\
 & & \mathcal{I}\overline{\mathcal{S}p}( 2n) & \rightarrow & \mathcal{I}\mathcal{M}p( 2n) & \rightarrow & \mathcal{I}\mathcal{S}p( 2n) .
\end{array} \label{MG: homomorphic central sequence for HSp}
\end{equation}
The group $\mathcal{I}\mathcal{S}p( 2n) $, which has a trivial center, terminates the sequence. It is the maximal {\itshape classical} symmetry group. The projective representations of any of the groups in this sequence are equivalent to the unitary representations of $\mathcal{H}\overline{\mathcal{S}p}( 2n) $. The above expressions also apply to the full group $\mathbb{Z}_{2}\otimes _{s}\mathcal{H}\overline{\mathcal{S}p}( 2n) $ by prefixing ``$\mathbb{Z}_{2}\otimes _{s}$'' onto each of the groups that appear in (\ref{MG: homomorphic central sequence for HSp}). In addition to the above homomorphisms that have abelian kernels, we have the additional homomorphisms
\begin{equation}
\pi :\mathbb{Z}_{2}\otimes _{s}\mathcal{H}\overline{\mathcal{S}p}( 2n) \rightarrow \mathcal{K},\ \ \ker ( \pi ) =\mathcal{N}, \label{MG: aut c to G}
\end{equation}
\noindent with
\begin{equation}
\begin{array}{ll}
\mathcal{N} & \mathcal{K} \\
\mathcal{H}( n) & \mathbb{Z}_{2}\otimes \overline{\mathcal{S}p}( 2n) \\
\mathbb{Z}/\mathbb{Z}_{2}\otimes \mathcal{H}( n) & \mathbb{Z}_{2}\otimes \mathcal{M}p( 2n) \\
\mathbb{Z}\otimes \mathcal{H}( n) & \mathbb{Z}_{2}\otimes \mathcal{S}p( 2n) \\
\mathcal{H}\mathcal{S}p( 2n) & \mathbb{Z}\otimes \mathbb{Z}_{2} \\
\mathcal{H}\mathcal{M}p( 2n) & \mathbb{Z}_{2}\otimes \mathbb{Z}_{2} \\
\mathcal{H}\overline{\mathcal{S}p}( 2n) & \mathbb{Z}_{2}
\end{array} \label{MG: aut c homomorphisms}
\end{equation}\label{sd}\label{degH}\label{hermalg}\label{con}\label{sm}\label{sdirect}\label{pin}\label{sedirectd}
\newcommand{{\tilde{\eta}}}{{\tilde{\eta}}} \newcommand{\eand}{{~~~\mbox{and}~~~}} \newcommand{{~~~\mbox{with}~~~}}{{~~~\mbox{with}~~~}} \newcommand{{~~~\mbox{for}~~~}}{{~~~\mbox{for}~~~}} \newcommand{{\mathrm{ker}}}{{\mathrm{ker}}} \newcommand{{\,\cdot\,}}{{\,\cdot\,}} \newcommand{\der}[1]{\frac{\dpar}{\dpar #1}} \newcommand{\dder}[1]{\frac{\dd}{\dd #1}} \newcommand{\derr}[2]{\frac{\dpar #1}{\dpar #2}} \newcommand{\ddpart}[1]{\dd #1 \der{#1}} \newcommand{\dderr}[2]{\frac{\dd #1}{\dd #2}} \newcommand{\Der}[1]{\frac{\delta}{\delta #1}} \newcommand{\ci}[1]{\overset{\circ}{#1}{}} \newcommand{\tr}{\,\mathrm{tr}\,} \newcommand{\tra}[1]{\,\mathrm{tr}_{#1}\,} \newcommand{\str}{{\,\mathrm{str}\,}} \newcommand{\ad}{\mathrm{ad}} \newcommand{\Ad}{\mathrm{Ad}} \newcommand{^{\mathrm{T}}}{^{\mathrm{T}}} \newcommand{\dual}{^\vee} \newcommand{\agl}{\mathfrak{gl}} \newcommand{\mathfrak{s}}{\mathfrak{s}} \newcommand{\mathfrak{sl}}{\mathfrak{sl}} \newcommand{\mathfrak{u}}{\mathfrak{u}} \newcommand{\mathfrak{o}}{\mathfrak{o}} \newcommand{\mathfrak{su}}{\mathfrak{su}} \newcommand{\mathfrak{so}}{\mathfrak{so}} \newcommand{\mathfrak{spin}}{\mathfrak{spin}} \newcommand{\mathfrak{u}}{\mathfrak{u}} \newcommand{\sU}{\mathsf{U}} \newcommand{\mathsf{V}}{\mathsf{V}} \newcommand{\mathsf{SU}}{\mathsf{SU}} \newcommand{\mathsf{SL}}{\mathsf{SL}} \newcommand{\mathsf{GL}}{\mathsf{GL}} \newcommand{\mathsf{Mat}}{\mathsf{Mat}} \newcommand{\mathsf{O}}{\mathsf{O}} \newcommand{\mathsf{SO}}{\mathsf{SO}} \newcommand{\mathsf{S}}{\mathsf{S}} \newcommand{\mathsf{Spin}}{\mathsf{Spin}} \newcommand{\mathsf{End}\,}{\mathsf{End}\,} \newcommand{\mathsf{SpecM}\,}{\mathsf{SpecM}\,} \newcommand{\mathsf{P}}{\mathsf{P}} \newcommand{|0\rangle}{|0\rangle} \newcommand{\langle 0|}{\langle 0|} \newcommand{\spn}{\mathrm{span}} \newcommand{\acton}{\vartriangleright} \newcommand{\remark}[1]{} \newcommand{\cmpl}{\mbox{[[to be completed]]}} \newcommand{\z}[1]{{\stackrel{\circ}{#1}}{}} 
\newcommand{\contra}{{\diagup\hspace{-0.27cm}\bullet}} \newcommand{\co}{{\diagup}} \newcommand{\inner}{\mathrm{int}} \def\tyng(#1){\hbox{\tiny$\yng(#1)$}} \def\tyoung(#1){\hbox{\tiny$\young(#1)$}} \def\cpv{\setbox0=\hbox{$\int$}\int\hskip-\wd0{}\hskip-\wd0{~-~}} \newcommand{\mathrm{cr}}{\mathrm{cr}} \newcommand{\gamma_\mathrm{str}}{\gamma_\mathrm{str}} \begin{document} \begin{titlepage} \setcounter{page}{0} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \begin{flushright} NIKHEF/2009--008\\ TCDMATH 09--15\\ DAMTP 2009--47\\[.5cm] \end{flushright} \vspace*{1cm} \begin{center} {\LARGE\textbf{\mathversion{bold}Marginal Deformations and 3-Algebra Structures}\par} \vspace*{1cm} {\large Nikolas Akerblom$^{a}$, Christian S\"amann$^{b}$ and Martin Wolf$^{c,}$\footnote{Also at the Wolfson College, Barton Road, Cambridge CB3 9BB, United Kingdom.}}\footnote{{\it E-mail addresses:\/} \href{mailto:nikolasa@nikhef.nl}{\ttfamily nikolasa@nikhef.nl},~\href{mailto:saemann@maths.tcd.ie}{\ttfamily saemann@maths.tcd.ie},~\href{mailto:m.wolf@damtp.cam.ac.uk}{\ttfamily m.wolf@damtp.cam.ac.uk}} \vspace*{1cm} {\it $^{a}$ NIKHEF Theory Group\\ Science Park 105\\ 1098 XG Amsterdam, The Netherlands}\\[.5cm] {\it $^b$ Hamilton Mathematics Institute\\ \& \\ School of Mathematics\\ Trinity College, Dublin 2, Ireland}\\[.5cm] {\it $^{c}$ Department of Applied Mathematics and Theoretical Physics\\ University of Cambridge\\ Wilberforce Road, Cambridge CB3 0WA, United Kingdom} \vspace*{1cm} {\bf Abstract} \end{center} \vspace*{-.3cm} \begin{quote} We study marginal deformations of superconformal Chern-Simons matter theories that are based on 3-algebras. For this, we introduce the notion of an associated 3-product, which captures very general gauge invariant deformations of the superpotentials of the BLG and ABJM models. We also consider conformal multi-trace deformations preserving $\mathcal{N}=2$ supersymmetry. 
We then use $\mathcal{N}=2$ supergraph techniques to compute the two-loop beta functions of these deformations. Besides confirming conformal invariance of both the BLG and ABJM models, we also verify that the recently proposed $\beta$-deformations of the ABJM model are indeed marginal to the order we are considering. \vfill \noindent June 09, 2009 \end{quote} \setcounter{footnote}{0}\renewcommand{\thefootnote}{\arabic{thefootnote}} \end{titlepage} \tableofcontents \bigskip \bigskip \hrule \bigskip \bigskip \section{Introduction} In the work of Bagger, Lambert \cite{Bagger:2006sk,Bagger:2007jr} as well as Gustavsson \cite{Gustavsson:2007vu}, a candidate theory for multiple M2-branes was proposed, which has attracted much attention in the last year. Initially, this theory was conjectured to be an IR description of stacks of M2-branes in the same sense as maximally supersymmetric Yang-Mills theory (SYM) provides an effective description of stacks of D-branes. Soon after its discovery, however, it was realized that this Bagger-Lambert-Gustavsson (BLG) model cannot capture stacks of arbitrarily many M2-branes: Its interactions and the gauge algebraic structure are based on 3-Lie algebras\footnote{See \cite{Lazaroiu:2009wz} and references therein for a detailed discussion of algebras with $n$-ary brackets.} \cite{Filippov:1985aa}, and there is only one such 3-Lie algebra which fulfills all reasonable physical requirements \cite{Nagy:2007aa,Papadopoulos:2008sk,Gauntlett:2008uf}. One way to circumvent this problem is to generalize the concept of a 3-Lie algebra as done in \cite{Bagger:2008se} and \cite{Cherkis:2008qr}. These generalizations yield superconformal field theories which allow for more freedom but at the cost of a reduced amount of supersymmetry compared to the original BLG model. 
The generalizations discussed in \cite{Bagger:2008se}, for example, yield the so-called Aharony-Bergman-Jafferis-Maldacena (ABJM) model \cite{Aharony:2008ug} as a special case, see also \cite{VanRaamsdonk:2008ft}. This theory shares many features with $\mathcal{N}=4$ SYM theory in four dimensions such as planar integrability \cite{Minahan:2008hf,Gaiotto:2008cg,Gromov:2008qe,Bak:2008cp,Zwiebel:2009vb,Minahan:2009te,Bak:2009mq} (see \cite{Gaiotto:2007qi} for an earlier account). Therefore, it is interesting to ask what phenomena familiar from $\mathcal{N}=4$ SYM theory in four dimensions persist in these generalized BLG-type models. One such phenomenon is the existence of marginal deformations. There is a 3-parameter family of such deformations of $\mathcal{N}=4$ SYM theory, which was found by Leigh and Strassler \cite{Leigh:1995ep}. These include in particular the so-called $\beta$-deformations as a subclass. Written in terms of four-dimensional $\mathcal{N}=1$ superfields, where the field content of $\mathcal{N}=4$ SYM theory is encoded in three chiral superfields $\Phi^i$, $i=1,2,3$, and a vector superfield, these deformations are given by the superpotential terms \begin{equation}\label{eq:BetaDeformSYM} \mathcal{W}\ =\ \eps_{ijk}\tr([\Phi^i,\Phi^j]_\beta \Phi^k)~,{~~~\mbox{with}~~~} [\Phi^i,\Phi^j]_\beta\ :=\ \de^{\di\beta}\Phi^i\Phi^j-\de^{-\di\beta}\Phi^j\Phi^i~. \end{equation} The theories with such a superpotential are still finite and, as they are written in terms of superfields, they are manifestly $\mathcal{N}=1$ supersymmetric. In this paper, we make an attempt at the construction of analogous deformations for BLG-type models. For a rough guideline on what structures one expects to arise at 3-algebra level, one can look at the reduction process from M2-branes to D2-branes as described in \cite{Mukhi:2008ux}. For this reduction, one has to compactify a direction transverse to the M2-branes on a circle. 
In \cite{Mukhi:2008ux}, it was suggested that in this compactification process, the scalar describing M2-brane fluctuations in this direction would acquire the vacuum expectation value $\langle X^\circ\rangle=\frac{R}{\ell_p^{3/2}}=g_{YM}$. Here, $R$ is the radius of the circle, $\ell_p$ the Planck length, and $g_{YM}$ the Yang-Mills coupling constant. The interaction terms of the BLG model are formulated using totally antisymmetric 3-brackets of 3-Lie algebras. In the reduction process, 3-brackets of the form $[X^\circ,X^2,X^3]$ reduce to commutator terms $g_{YM}[X^2,X^3]=[X^\circ,X^2,X^3]$, and in a strong coupling expansion, only those 3-bracket expressions which reduce to a commutator survive. To obtain terms which correspond to $\beta$-deformed commutators, one evidently has to relax the total antisymmetry of the 3-bracket. One is therefore led to look for marginal deformations amongst models which are built from the 3-Lie algebras introduced in \cite{Bagger:2008se} and \cite{Cherkis:2008qr}. There are already proposals for $\beta$-deformations of both the BLG and the ABJM model in the literature \cite{Berman:2008be,Imeroni:2008cr} based on considering gravitational duals. Here, we will study such deformations in more detail from the gauge theory perspective: We will write down the most general gauge invariant deformations of BLG-type models based on 3-algebras in $\mathcal{N}=2$ superspace. Although the generalized 3-Lie algebras of \cite{Bagger:2008se} and \cite{Cherkis:2008qr} already allow for certain classes of marginal deformations, we find that we should also introduce the notion of an associated 3-product: A new triple product, which transforms covariantly under gauge transformations. 
Moreover, we include all classically conformal multi-trace terms that are compatible with $\mathcal{N}=2$ supersymmetry.\footnote{Multi-trace terms received attention in this context rather recently in \cite{Craps:2009qc}.} The Lagrangians we find are rather restrictive, but contain the deformations studied in \cite{Imeroni:2008cr}. We then evaluate the beta functions of the couplings arising from the admissible deformations using supergraph techniques up to two-loop order. We confirm the conformal invariance of the BLG and the ABJM model as well as the deformations of \cite{Imeroni:2008cr} at quantum level to this order in perturbation theory. This paper is structured as follows. In Section 2, we discuss the necessary 3-algebraic structures, the relation between 3-algebras and their associated gauge algebras and introduce associated 3-products. In Section 3, we present the Lagrangians of the BLG-type models we are interested in as well as their deformations. The results of our computation of the beta function up to two loops are then given in Section 4, and we conclude in Section 5. In the Appendices, we collect some useful formul\ae{} used throughout this work. \section{3-Algebras and associated 3-products} The need for extending the BLG model to higher numbers of M2-branes led to two generalizations of the notion of a 3-algebra: the {\em generalized 3-Lie algebras} \cite{Cherkis:2008qr}, which we will refer to as {\em real 3-algebras}, and the {\em Hermitian 3-algebras} \cite{Bagger:2008se}, see also \cite{deMedeiros:2008zh} for a summary and a re-interpretation in terms of ordinary Lie algebras. In both cases, the underlying 3-bracket is no longer required to be totally antisymmetric. In the following, we will review these structures as well as their representations using matrix algebras. 
We also introduce the notion of an associated 3-product, a generalization of a 3-bracket\footnote{A similar generalization has been employed in \cite{Berman:2008be}.}, which will allow us to discuss extended superpotential terms yielding marginal deformations of both the BLG and ABJM models. \subsection{Real 3-algebras} A {\em metric real 3-algebra} is a real vector space $\CA$ together with a trilinear bracket $[\cdot,\cdot,\cdot]\,:\,\CA\times\CA\times\CA\rightarrow \CA$ and a positive definite bilinear symmetric pairing $(\cdot,\cdot)\, :\, \CA\times\CA\rightarrow \FR$ satisfying the following properties for all $A,B,C,D,E\in\CA$: \begin{subequations} \begin{itemize} \setlength{\itemsep}{-1mm} \item[(i)] The real fundamental identity: \begin{equation}\label{Eq:FundamentalIdentity} [A,B,[C,D,E]]\ =\ [[A,B,C],D,E]+[C,[A,B,D],E]+[C,D,[A,B,E]]~, \end{equation} \item[(ii)] the real compatibility relation: \begin{equation}\label{Eq:Comp1} ([A,B,C],D)+(C,[A,B,D])\ =\ 0~, \end{equation} \item[(iii)] and the real symmetry property: \begin{equation} (D,[A,B,C])\ =\ (B,[C,D,A])~. \end{equation} \end{itemize} \end{subequations} This is a generalization of the concept of a 3-Lie algebra in the sense of Filippov \cite{Filippov:1985aa}, which amounts to the special case of a totally antisymmetric 3-bracket. Choosing a basis $\tau_a$ of $\CA$, $a=1,\ldots,\mathrm{dim}\,\CA$, we can introduce the metric $h_{ab}$ and the structure constants $f_{abcd}$ as \begin{equation}\label{eq:StructureConstantsReal} h_{ab}\ :=\ (\tau_a,\tau_b)\eand f_{abcd}\ :=\ (\tau_d,[\tau_a,\tau_b,\tau_c])~. \end{equation} Because of the properties (ii) and (iii), the structure constants obey the following symmetry relations: \begin{equation} f_{abcd}\ =\ -f_{bacd}\ =\ f_{cdab}\ =\ -f_{abdc}~. 
\end{equation} When taking the 3-bracket of $\RZ_2$-graded objects as e.g.\ bosonic or fermionic fields, we define the 3-bracket to be insensitive to the grading: \begin{equation} [A,B,C]\ :=\ A^aB^bC^c[\tau_a,\tau_b,\tau_c]~,{~~~\mbox{with}~~~} A\ =\ A^a\tau_a~~\mbox{etc.} \end{equation} Every real 3-algebra comes with an associated Lie algebra $\frg_\CA$, the Lie algebra of {\em inner derivations} on $\CA$. Choosing a basis $\tau_a$ of $\CA$, we define $\frg_\CA$ to be the image of the map $\delta\,:\,\Lambda^2\CA\rightarrow \mathrm{Der}(\CA)$ that is given by \begin{equation} \begin{aligned} &\Lambda^2\CA\ \ni\ X \ =\ X^{ab}\tau_a\wedge\tau_b\ \mapsto\ \delta_X\ \in\ \mathrm{Der}(\CA)\\ &\kern1.7cm\delta_X(A)\ :=\ X^{ab}[\tau_a,\tau_b,A] \end{aligned} \end{equation} for $A\in\CA$. Note that $X^{ab}=-X^{ba}$. Note also that $\delta$ is not an injective map in general and thus the components $X^{ab}$ in the definition of $\delta_X$ are usually not uniquely defined. The Lie bracket $[\hspace{-0.05cm}[\cdot,\cdot]\hspace{-0.05cm}]$ on $\frg_{\mathcal{A}}$ is defined by the commutator action on $\CA$, i.e.\ $[\hspace{-0.05cm}[\delta_X,\delta_Y]\hspace{-0.05cm}](A):=\delta_X(\delta_Y(A))-\delta_Y(\delta_X(A))$ for $A\in\CA$. Closure of this bracket on $\frg_\CA$ follows from the fundamental identity \eqref{Eq:FundamentalIdentity}. Additionally, we may endow the Lie algebra $\frg_\CA$ with a bilinear pairing \begin{equation}\label{eq:IPReal} (\hspace{-0.1cm}( \delta_X,\delta_Y)\hspace{-0.1cm})\ :=\ X^{ab}Y^{cd}f_{abcd}~, \end{equation} which is symmetric, non-degenerate and $ad$-invariant, i.e.\ $(\hspace{-0.1cm}([\hspace{-0.05cm}[\delta_X,\delta_Y]\hspace{-0.05cm}],\delta_Z)\hspace{-0.1cm})+ (\hspace{-0.1cm}(\delta_Y,[\hspace{-0.05cm}[\delta_X,\delta_Z]\hspace{-0.05cm}])\hspace{-0.1cm})=0$. 
The most prominent example of a 3-Lie algebra is the algebra $A_4$, which is the vector space $\FR^4$ endowed with the following 3-bracket and bilinear pairing: \begin{equation} f_{abcd}\ =\ \eps_{abcd}\eand h_{ab}\ =\ \delta_{ab}~. \end{equation} The associated Lie algebra is $\frg_{A_4}\cong\mathfrak{so}(4)\cong\mathfrak{su}(2)\oplus\mathfrak{su}(2)$, and the bilinear pairing induced by the structure constants on this Lie algebra has split signature\footnote{This property is connected to parity invariance of the Chern-Simons Lagrangian, cf.\ Section 3.2.}: On the first $\mathfrak{su}(2)$ it is positive definite, on the second one negative definite. Further classes of examples of real 3-algebras are given in the next section. \subsection{Matrix representations of real 3-algebras} By a {\em matrix representation $\rho(\CA)$ of a 3-algebra $\CA$}, we will mean a homomorphism $\rho\,:\,\CA\rightarrow \mathcal{R}:=\mathrm{Mat}(N,\FC)$, which forms a representation of the 3-algebra $\CA$ in the following way: The invariant pairing on $\CA$ is given by the natural scalar product $(A,B):=\tr(\rho(A)^\dagger \rho(B))$ for elements $A,B\in\CA$ and the 3-bracket is constructed using the natural operations on the matrix algebra: The product and the Hermitian conjugate. It should be stressed that $\rho(\CA)$ can be a proper subset of $\mathcal{R}$; however, the 3-bracket is certainly required to close on $\rho(\CA)$. In the case of real 3-algebras, the matrix algebra $\mathcal{R}$ is restricted\footnote{One could also choose Hermitian matrices; they, however, can be embedded into the real matrices, so that our restriction does not imply any loss of generality.} to $\mathrm{Mat}(N,\FR)$ and the Hermitian conjugate turns into the transpose. In the sequel, we will often not make a notational distinction between an element $A\in\CA$ and its matrix realization $\rho(A)\in\mathcal{R}$ and simply write $A$ in both cases. 
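As an illustrative aside (not part of the formalism of this paper), the real fundamental identity \eqref{Eq:FundamentalIdentity} and the split signature of the pairing \eqref{eq:IPReal} can be checked numerically for the $A_4$ example above. The following sketch assumes NumPy and uses the bracket $[x,y,z]^d=\eps_{abcd}\,x^a y^b z^c$ read off from the structure constants $f_{abcd}=\eps_{abcd}$, $h_{ab}=\delta_{ab}$:

```python
import numpy as np
from itertools import permutations

# Levi-Civita tensor in four dimensions: eps[a,b,c,d] = sign of the permutation (a,b,c,d)
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    m = np.zeros((4, 4))
    m[range(4), p] = 1.0
    eps[p] = np.linalg.det(m)  # determinant of a permutation matrix = its sign

def bracket(x, y, z):
    # A_4 bracket on R^4: [x, y, z]^d = eps_{abcd} x^a y^b z^c
    return np.einsum('abcd,a,b,c->d', eps, x, y, z)

rng = np.random.default_rng(0)
A, B, C, D, E = rng.standard_normal((5, 4))

# Real fundamental identity:
# [A,B,[C,D,E]] = [[A,B,C],D,E] + [C,[A,B,D],E] + [C,D,[A,B,E]]
lhs = bracket(A, B, bracket(C, D, E))
rhs = (bracket(bracket(A, B, C), D, E)
       + bracket(C, bracket(A, B, D), E)
       + bracket(C, D, bracket(A, B, E)))
assert np.allclose(lhs, rhs)

# Gram matrix of the pairing ((delta_X, delta_Y)) = X^{ab} Y^{cd} eps_{abcd}
# on a basis of Lambda^2 R^4 (overall normalization is irrelevant for the signature):
pairs = [(a, b) for a in range(4) for b in range(a + 1, 4)]
G = np.array([[eps[a, b, c, d] for (c, d) in pairs] for (a, b) in pairs])
sig = np.sign(np.linalg.eigvalsh(G)).astype(int)
# split signature (3,3), matching so(4) = su(2) (+) su(2)
assert (sig == 1).sum() == 3 and (sig == -1).sum() == 3
```

All assertions pass for random input vectors, as they must, since $A_4$ is a 3-Lie algebra in the sense of Filippov.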
Such representations have been classified in \cite{Cherkis:2008ha}, and for real 3-algebras, there are in fact four families: \begin{equation}\label{3brackets} \begin{aligned} \mathrm{I}^R_\alpha: &&& A,B,C\ \mapsto\ \alpha([[A^T,B],C]+[[A,B^T],C]+[[A,B],C^T] -[[A^T,B^T],C^T])~,\\ \mathrm{II}^R_{\alpha}: &&& A,B,C\ \mapsto\ \alpha([[A,B^T],C]+[[A^T,B],C])~,\\ \mathrm{III}^R_{\alpha,\beta}: &&& A,B,C\ \mapsto\ \alpha(AB^T-BA^T)C+\beta C(A^T B- B^T A)~,\\ \mathrm{IV}^R_{\alpha,\beta}: &&& A,B,C\ \mapsto\ \alpha([[A,B],C]+[[A^T,B^T],C]+ [[A^T,B],C^T]+[[A,B^T],C^T])\\ &&&~~~~~~~~~~~~~~+\beta([[A,B],C^T]+[[A^T,B],C]+[[A,B^T],C]+[[A^T,B^T],C^T])~, \end{aligned} \end{equation} where $\alpha$ and $\beta$ are arbitrary (real) parameters. Although $\alpha$ can always be removed from the bracket by a rescaling, we will find it convenient to keep it explicitly. Besides forming representations, these brackets give rise to a real 3-algebra structure on $\mathrm{Mat}(N,\FR)$, and we denote the arising real 3-algebras by $M^R_{\rm I_\alpha}(N),\ldots,M^R_{\rm IV_{\alpha,\beta}}(N)$. The case $M^R_{\rm III_{\alpha,\beta}}(N)$ is of particular importance: The real 3-algebras $\mathscr{C}^{2d}$ defined in \cite{Cherkis:2008qr} allow for representations in the class ${\rm III}^R_{\alpha,\beta}$. The 3-Lie algebra $A_4$, which is a sub-3-algebra of $\mathscr{C}^{4}$, can be identified with a real sub-3-algebra of $M^R_{\rm III_{1,-1}}(4)$. Let us therefore expose the associated Lie algebra structure of $M^R_{\rm III_{\alpha,\beta}}(N)$ in the following. A derivation $\delta_X\in\frg_\CA$ acts on an element $C\in\CA=M^R_{\rm III_{\alpha,\beta}}(N)$ according to \begin{equation} \begin{aligned} \delta_X(C)\ &=\ X^{ab}[\tau_a,\tau_b,C]\ =\ \underbrace{\alpha X^{ab}(\tau_a\tau_b^T-\tau_b\tau_a^T)}_{\ =:\ \hat X_L}C+C\underbrace{\beta X^{ab}(\tau_a^T\tau_b-\tau_b^T\tau_a)}_{\ =:\ \hat X_R}\\ &=\ \hat{X}_L C+C\hat{X}_R~. 
\end{aligned} \end{equation} Thus, $\frg_\CA$ splits into two parts: one acting on $\CA$ from the left and one acting from the right. The fact that $\frg_\CA$ forms a Lie algebra follows from the fundamental identity as mentioned above. In particular, \begin{equation} [\hspace{-0.05cm}[\delta_X,\delta_Y]\hspace{-0.05cm}](C)\ =\ [\hat{X}_L,\hat{Y}_L] C+C[\hat{Y}_R,\hat{X}_R]\ =\ \hat{Z}_L C+C\hat{Z}_R\ =\ \delta_Z(C)~. \end{equation} Note that $\hat{X}_L=-\hat{X}_L^T$ and $\hat{X}_R=-\hat{X}_R^T$, that is, both are antisymmetric matrices and they can be chosen independently. We therefore conclude that $\frg_\CA\subseteq\mathfrak{o}(N)\oplus\mathfrak{o}(N)$ and in particular, if $\rho(\CA)=\mathcal{R}$, we have $\frg_\CA\cong\mathfrak{o}(N)\oplus\mathfrak{o}(N)$. Moreover, a short calculation reveals that the pairing on $\frg_\CA$ is given by \begin{equation}\label{signature-real} (\hspace{-0.1cm}( X,Y )\hspace{-0.1cm})\ =\ X^{ab}Y^{cd}f_{abcd}\ =\ -\alpha\tr(\hat{X}^\dagger_L\hat{Y}_L)-\beta\tr(\hat{X}^\dagger_R\hat{Y}_R)~, \end{equation} and thus for $\alpha=-\beta$, the pairing has split signature. This property is required to render a Chern-Simons matter theory based on this gauge algebra parity invariant, see Section \ref{sec:deform}. \subsection{Associated 3-products of real 3-algebras}\label{sec:Assprodreal} In gauge theories, the gauge potential (and its superpartners) takes values in a Lie algebra, while the matter fields take values in a representation of this Lie algebra. If the matter fields $X,Y$ sit in the adjoint matrix representation, there is a product between these fields -- the ordinary matrix product -- which transforms covariantly under gauge transformations $\delta_\Lambda=[\Lambda,\cdot]$: \begin{equation} [\Lambda,X\cdot Y]\ =\ [\Lambda,X]\cdot Y+ X\cdot [\Lambda,Y]~. 
\end{equation} Both the matrix product and the commutator are special cases of the more general product \begin{equation} \alpha_1 XY-\alpha_2YX~,{~~~\mbox{with}~~~}\alpha_{1,2}\ \in\ \FC~, \end{equation} which also transforms covariantly. An analogous product can be introduced for representations of 3-algebras: Consider a matrix representation $\mathcal{R}$ of a real 3-algebra $\CA$. An {\em associated 3-product} of $\CA$ in $\mathcal{R}$ is a trilinear map $\langle A,B,C \rangle\,:\,\mathcal{R}\times\mathcal{R}\times\mathcal{R}\rightarrow \mathcal{R}$ satisfying the following identity: \begin{equation} [A,B,\langle C,D,E\rangle]\ =\ \langle [A,B,C],D,E\rangle+\langle C,[A,B,D],E\rangle+\langle C,D,[A,B,E]\rangle~. \end{equation} This identity corresponds to the condition that the associated 3-product transforms covariantly under gauge transformations governed by the 3-bracket. Later on, this will allow us to replace ordinary 3-brackets in the superpotential by associated 3-products preserving gauge invariance. Evidently, all matrix representations of 3-brackets satisfy this identity and thus they are just special cases of associated 3-products. The general associated 3-product, however, allows for more general deformations of the superpotential than the conventional 3-bracket does. In the Hermitian case, this includes in particular the deformations studied in \cite{Imeroni:2008cr}, as discussed later. One may now ask for the most general 3-product that can be written down using nothing but matrix products and transpositions, analogously to the matrix representations of 3-brackets \eqref{3brackets}. In the representation $\mathcal{R}$ of type ${\rm III}_{\alpha,\beta}^R$, the most general such product reads as \begin{equation} \langle A,B,C\rangle\ =\ \alpha_1 A B^T C+ \alpha_2CB^TA+\beta_1 BC^TA+\beta_2 AC^TB+\gamma_1 CA^TB+\gamma_2 BA^TC~, \end{equation} where $\alpha_{1,2}$, $\beta_{1,2}$ and $\gamma_{1,2}$ are real parameters. 
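Three statements made above are straightforward to confirm numerically: the type ${\rm III}^R_{\alpha,\beta}$ bracket equips all of $\mathrm{Mat}(N,\FR)$ with a real 3-algebra structure, its inner derivations split into left and right actions by antisymmetric matrices, and the six-parameter product transforms covariantly. A minimal sketch (assuming NumPy; random matrices and generic parameter values, purely as an illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
al, be = 0.7, -1.3               # bracket parameters alpha, beta
coeffs = rng.standard_normal(6)  # generic alpha_{1,2}, beta_{1,2}, gamma_{1,2}

def br(A, B, C):
    # Type III^R bracket: alpha (A B^T - B A^T) C + beta C (A^T B - B^T A)
    return al * (A @ B.T - B @ A.T) @ C + be * C @ (A.T @ B - B.T @ A)

def prod3(A, B, C):
    # General associated 3-product built from matrix products and transpositions
    a1, a2, b1, b2, g1, g2 = coeffs
    return (a1 * A @ B.T @ C + a2 * C @ B.T @ A + b1 * B @ C.T @ A
            + b2 * A @ C.T @ B + g1 * C @ A.T @ B + g2 * B @ A.T @ C)

A, B, C, D, E = rng.standard_normal((5, N, N))

# Real fundamental identity holds on all of Mat(N, R):
lhs_fi = br(A, B, br(C, D, E))
rhs_fi = br(br(A, B, C), D, E) + br(C, br(A, B, D), E) + br(C, D, br(A, B, E))
assert np.allclose(lhs_fi, rhs_fi)

# Inner derivations split into a left and a right action by antisymmetric matrices:
XL = al * (A @ B.T - B @ A.T)
XR = be * (A.T @ B - B.T @ A)
assert np.allclose(br(A, B, C), XL @ C + C @ XR)
assert np.allclose(XL, -XL.T) and np.allclose(XR, -XR.T)

# Covariance of the associated 3-product under delta_{A,B} = [A, B, . ]:
lhs_cov = br(A, B, prod3(C, D, E))
rhs_cov = (prod3(br(A, B, C), D, E) + prod3(C, br(A, B, D), E)
           + prod3(C, D, br(A, B, E)))
assert np.allclose(lhs_cov, rhs_cov)
```

Covariance holds monomial by monomial, which is why all six coefficients can be chosen independently.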
\subsection{Hermitian 3-algebras} A {\em metric Hermitian 3-algebra} is a complex vector space $\CA$ together with a bilinear-antilinear triple product $[\cdot,\cdot\,;\cdot]\,:\,\CA\times\CA\times\CA\rightarrow \CA$ and a positive definite Hermitian pairing\footnote{We choose the first slot to be antilinear and the second one to be linear.} $(\cdot,\cdot)\,:\,\CA\times\CA\rightarrow \FC$ satisfying the following properties for all $A,B,C,D,E\in\CA$: \begin{subequations}\label{eq:AxiomsH3A} \begin{itemize} \setlength{\itemsep}{-1mm} \item[(i)] The Hermitian fundamental identity: \begin{equation}\label{Eq:HFundamentalIdentity} [[C,D;E],A;B]\ =\ [[C,A;B],D;E]+[C,[D,A;B];E]-[C,D;[E,B;A]]~, \end{equation} \item[(ii)] the Hermitian compatibility relation: \begin{equation}\label{Eq:HComp1} (D,[A,B;C])-([D,C;B],A)\ =\ 0~, \end{equation} \item[(iii)] and the Hermitian symmetry property: \begin{equation}\label{sym} (D,[A,B;C])\ =\ -(D,[B,A;C])~. \end{equation} \end{itemize} \end{subequations} With respect to a basis $\tau_a$ of $\CA$, we introduce the metric and the structure constants \begin{equation} h_{ab}=(\tau_a,\tau_b)\eand f_{abcd}\ :=\ (\tau_d,[\tau_a,\tau_b;\tau_c])~, \end{equation} which satisfy the following symmetry relations: \begin{equation} h_{ab}=(h_{ba})^*\eand f_{abcd}\ =\ -f_{bacd}\ =\ -f_{abdc}\ =\ (f_{cdab})^*~. \end{equation} Analogously to the case of real 3-algebras, a Hermitian 3-algebra comes with an associated Lie algebra, which is naturally a complex Lie algebra $\frg_\CA^\FC$. 
Here, we will merely be interested in a real form $\frg_\CA$ of $\frg_\CA^\FC$ that is defined as follows: Consider a basis $\tau_a$ of $\CA$ together with a basis $\tau_a^*$ of the complex conjugate $\CA^*$ of $\CA$.\footnote{The precise definition of $\CA^*$ is irrelevant at this point.} An element $X=X^{ab}\tau_a\wedge\tau_b^*$ of $\mathfrak{Re}(\CA\wedge\CA^*)$ has components $X^{ab}$ satisfying $X^{ab}=-(X^{ba})^*$, and we then define $\frg_\CA$ to be the image of the map $\delta\,:\,\mathfrak{Re}(\CA\wedge\CA^*) \rightarrow \mathrm{Der}(\CA)$, with $X\mapsto \delta_X$ and \begin{equation} \delta_X(A)\ :=\ X^{ab}[A,\tau_a;\tau_b]~, \end{equation} for $A\in\CA$. The Lie bracket $[\hspace{-0.05cm}[\cdot,\cdot]\hspace{-0.05cm}]$ on $\frg_\CA$ is defined as the commutator action of two inner derivations $\delta_X,\delta_Y\in\frg_\CA$ on $A\in\CA$. As in the case of real 3-algebras, closure of this bracket on $\frg_\CA$ follows from the fundamental identity. A pairing on $\frg_\CA$ can be chosen as\footnote{Note that our definition differs from that of \cite{deMedeiros:2008zh} in that we have introduced an additional factor of $1/2$.} \cite{deMedeiros:2008zh} \begin{equation} \begin{aligned} (\hspace{-0.1cm}( \delta_X,\delta_Y)\hspace{-0.1cm})\ &:=\ X^{ab}Y^{cd}\,f_{cabd}~. \end{aligned} \end{equation} This pairing is symmetric, bilinear, non-degenerate and $ad$-invariant. Note that when $\CA$ is considered as the carrier space for a representation of $\frg_\CA$, $\CA^*$ forms the carrier space for the complex conjugate representation. 
\subsection{Matrix representations of Hermitian 3-algebras} Let us now come to matrix representations of Hermitian 3-algebras as introduced in Section \ref{sec:Assprodreal}. It was shown in \cite{Cherkis:2008ha} that there is only one such family of representations given by a homomorphism $\rho\,:\,\CA\rightarrow {\rm Mat}(N,\FC)$ and the 3-bracket \begin{equation}\label{Herm_I} \mathrm{I}^H_\alpha:~~~ A,B,C\ \mapsto\ \alpha(AC^\dagger B-BC^\dagger A)~, \end{equation} where $\alpha$ is a real parameter. Interestingly, this is also the representation used in \cite{Bagger:2008se} to recast the ABJM model in 3-algebra language. In the following, we will denote the Hermitian 3-algebra defined by the above bracket on $\mathrm{Mat}(N,\FC)$ by $M_{\mathrm{I}_\alpha}^H(N)$. Note that the 3-Lie algebra $A_4$ introduced above coincides with the Hermitian 3-algebra $M_{\mathrm{I}_\alpha}^H(2)$. The associated Lie algebra structure of this Hermitian 3-algebra is easily found to be $\frg_{\CA}\cong\mathfrak{su}(N)\oplus \mathfrak{su}(N)$, cf.\ \cite{Bagger:2008se}: Consider an element $\delta_X=X^{ab}[\cdot,\tau_a;\tau_b]\in \frg_\CA$, where $\tau_a$ and $\tau_b$ are complex $N\times N$-matrices and $(X^{ab})^*=-X^{ba}$. With the definition \eqref{Herm_I}, we obtain ($\alpha=1$) \begin{equation} \delta_X(A)\ =\ X^{ab}[A,\tau_a;\tau_b]\ =\ X^{ab}(A\tau_b^\dagger\tau_a-\tau_a\tau_b^\dagger A)~. \end{equation} Analogously to the case of $M^R_{\rm III_{\alpha,\beta}}(N)$, we can associate the following matrices with the inner derivations: \begin{equation} \hat{X}_R\ =\ X^{ab}\tau_b^\dagger \tau_a\eand\hat{X}_L\ =\ -X^{ab}\tau_a \tau_b^\dagger~, \end{equation} which are both anti-Hermitian; for example, we have $(\hat{X}_R)^\dagger=(X^{ab}\tau_b^\dagger\tau_a)^\dagger=-X^{ba}\tau_a^\dagger\tau_b=-\hat{X}_R$. Similar considerations as in the real case show that $\hat{X}_R$ and $\hat{X}_L$ can be chosen independently, exhausting the fundamental representation of $\mathfrak{su}(N)$. 
The trace part is excluded as it would have a trivial action on $\CA$. Since left- and right-actions commute, we arrive at the conclusion that $\frg_\CA\cong\mathfrak{su}(N)\oplus \mathfrak{su}(N)$. The symmetric bilinear pairing of elements $\delta_X,\delta_Y\in\frg_\CA$ is then given by \begin{equation} (\hspace{-0.1cm}( X,Y)\hspace{-0.1cm})\ =\ X^{ab}Y^{cd} f_{cabd}\ =\ \tr(\hat{X}_L^\dagger\hat{Y}_L)-\tr(\hat{X}_R^\dagger\hat{Y}_R)~, \end{equation} and this expression shows that the signature on $\frg_\CA$ is again split, with positive and negative signature on the left and right acting subalgebra of $\frg_\CA$, respectively. \subsection{Associated 3-products for Hermitian 3-algebras} Consider again a matrix representation $\mathcal{R}$ of a Hermitian 3-algebra $\CA$. By an associated 3-product of $\CA$ in $\mathcal{R}$, we mean a bilinear-antilinear map $\langle A,B;C \rangle\,:\,\mathcal{R}\times\mathcal{R}\times\mathcal{R}\rightarrow \mathcal{R}$ satisfying the following identity: \begin{equation} [\langle C,D;E\rangle,A;B]\ =\ \langle[C,A;B],D;E\rangle+\langle C,[D,A;B];E\rangle-\langle C,D;[E,B;A]\rangle~. \end{equation} We now specialize to the Hermitian 3-algebra $M_{\rm I_\alpha}^H(N)$ with basis $\tau_a$ for which $\mathcal{R}={\rm Mat}(N,\FC)$. Note that the $\tau_a$ form a basis for both $M_{\rm I_\alpha}^H(N)$ and $\mathcal{R}$. With respect to this basis, we can introduce structure constants of the associated 3-product as follows: \begin{equation} \langle \tau_a,\tau_b;\tau_c\rangle\ =\ g_{abc}{}^d \tau_d\eand g_{abcd}\ =\ g_{abc}{}^e h_{de}~. \end{equation} In the representation $\mathcal{R}$ of type $\mathrm{I}^H_\alpha$, the most general such product written in terms of matrices and Hermitian conjugation is given by the following expression: \begin{equation} \langle A,B;C\rangle\ =\ \alpha_1 A C^\dagger B-\alpha_2 B C^\dagger A~, \end{equation} where $\alpha_{1,2}$ are complex parameters. 
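As in the real case, the key algebraic statements of this subsection and the previous one lend themselves to a quick numerical sanity check: the Hermitian fundamental identity \eqref{Eq:HFundamentalIdentity} for the type $\mathrm{I}^H_\alpha$ bracket, the anti-Hermiticity of $\hat{X}_L$ and $\hat{X}_R$, and the covariance of the product $\langle A,B;C\rangle$ for arbitrary complex $\alpha_{1,2}$. A sketch assuming NumPy (random complex matrices, illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
def cplx(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

def br(A, B, C):
    # Hermitian 3-bracket of type I^H (alpha = 1): [A, B; C] = A C^dag B - B C^dag A
    return A @ C.conj().T @ B - B @ C.conj().T @ A

A, B, C, D, E = (cplx(N, N) for _ in range(5))

# Hermitian fundamental identity:
# [[C,D;E],A;B] = [[C,A;B],D;E] + [C,[D,A;B];E] - [C,D;[E,B;A]]
lhs_fi = br(br(C, D, E), A, B)
rhs_fi = br(br(C, A, B), D, E) + br(C, br(D, A, B), E) - br(C, D, br(E, B, A))
assert np.allclose(lhs_fi, rhs_fi)

# An inner derivation with (X^{ab})^* = -X^{ba}: here delta = c [.,A;B] - conj(c) [.,B;A].
# It acts as M -> X_L M + M X_R with both matrices anti-Hermitian.
c = rng.standard_normal() + 1j * rng.standard_normal()
XR = c * (B.conj().T @ A) - np.conj(c) * (A.conj().T @ B)
XL = -(c * (A @ B.conj().T) - np.conj(c) * (B @ A.conj().T))
dC = c * br(C, A, B) - np.conj(c) * br(C, B, A)
assert np.allclose(dC, XL @ C + C @ XR)
assert np.allclose(XR, -XR.conj().T) and np.allclose(XL, -XL.conj().T)

# Covariance of <A,B;C> = a1 A C^dag B - a2 B C^dag A for arbitrary complex a1, a2:
a1, a2 = cplx(2)
def prod3(A, B, C):
    return a1 * A @ C.conj().T @ B - a2 * B @ C.conj().T @ A

lhs_cov = br(prod3(C, D, E), A, B)
rhs_cov = prod3(br(C, A, B), D, E) + prod3(C, br(D, A, B), E) - prod3(C, D, br(E, B, A))
assert np.allclose(lhs_cov, rhs_cov)
```

Covariance again holds term by term, so no constraint relates $\alpha_1$ and $\alpha_2$ at this stage; restrictions such as $\alpha_2=\bar{\alpha}_1$ only arise from the symmetry properties of the pairing discussed below.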
Below, we shall solely be interested in the one-parameter family that is given by $\alpha_1=\de^{\di\beta}$ and $\alpha_2=\de^{-\di\beta}$ for $\beta\in\FR$. In analogy to the $\beta$-deformed commutator given in \eqref{eq:BetaDeformSYM}, we denote the {\it $\beta$-3-bracket} by \begin{equation}\label{eq:BetaCommutator} [\tau_a,\tau_b;\tau_c]_\beta\ :=\ \de^{\di\beta}\tau_a \tau_c^\dagger \tau_b-\de^{-\di\beta} \tau_b \tau_c^\dagger \tau_a \ =:\ \big[\cos\beta {f_{abc}}^d+\di\sin\beta\, {d_{abc}}^d\big]\tau_d~. \end{equation} The ${f_{abc}}^d$ are the structure constants of the Hermitian 3-bracket and $d_{abcd}={d_{abc}}^e h_{de}$ obeys \begin{equation} d_{abcd}\ =\ d_{bacd}\ =\ d_{abdc}\ =\ (d_{cdab})^*~. \end{equation} Therefore, \begin{equation} g_{abcd}\ =\ g_{badc}\ =\ -(g_{dcab})^*~. \end{equation} These symmetry properties of the structure constants $g_{abcd}$ can be re-phrased without referring to a particular choice of basis analogously to \eqref{Eq:HComp1} and \eqref{sym}: \begin{equation}\label{eq:BetaDeformSCS} \begin{aligned} (D,[A,B;C]_\beta)\ =\ -([D,C;A]_\beta,B)\eand (D,[A,B;C]_\beta)\ =\ (C,[B,A;D]_\beta)~. \end{aligned} \end{equation} Interestingly, \eqref{eq:BetaCommutator} will yield precisely the marginal deformations of the ABJM case recently studied in \cite{Imeroni:2008cr}. \section{Deformations of BLG-type actions preserving $\mathcal{N}=2$ supersymmetry} In the following, we present deformations of BLG-type actions which make use of either real 3-algebras or Hermitian 3-algebras as their gauge 3-algebra structures. We will refer to these two cases as the real and Hermitian cases, respectively. All deformations will be manifestly $\mathcal{N}=2$ supersymmetric and supergauge invariant. \subsection{Conventions} We shall use the usual superfield conventions of \cite{Wess:1992cp} dimensionally reduced from four to three dimensions as done in \cite{Cherkis:2008qr}. 
Our superfields will live on $\FR^{1,2|4}$ and their expansions are given by \begin{subequations}\label{eq:WZGaugeComponents} \begin{equation} \Phi^i(y)\ =\ \phi^i(y)+\sqrt{2} \theta \psi^i(y)+\theta^2 F^i(y)~, \end{equation} for the chiral superfield and \begin{equation} V(x)\ =\ - \theta^\alpha{\bar{\theta}}^\beta(\sigma^\mu_{\alpha\beta}A_\mu(x)+\di\eps_{\alpha\beta}\sigma(x))+\di\theta^2({\bar{\theta}}\bar{\lambda}(x))-\di{\bar{\theta}}^2(\theta\lambda(x))+\tfrac{1}{2}\theta^2{\bar{\theta}}^2 D(x) \end{equation} \end{subequations} for the vector superfield in Wess-Zumino (WZ) gauge.\footnote{When discussing the quantum theory, we will not fix WZ gauge; see below.} Here, $y$ are chiral coordinates, $i,j,\ldots=1,\ldots,N_f$ are flavor indices (counting complex field components) and $\alpha,\beta,\ldots=1,2$ are three-dimensional spinor indices. We will mostly be interested in $N_f=4$, but keeping $N_f$ arbitrary will prove useful as a book-keeping device. Notice that the spin group in $1+2$ dimensions is $\mathsf{SL}(2,\FR)$ and hence, we do not need to distinguish between dotted and undotted spinors. In particular, indices of barred spinors can be contracted with those of unbarred ones. Our conventions for spinor contractions are as follows: $\chi\psi:=\chi^\alpha\psi_\alpha$, $\bar\chi\bar\psi:=\bar\chi_\alpha\bar\psi^\alpha$. Furthermore, $\sigma^\mu$ are the $\sigma$-matrices in three dimensions with $\sigma^\mu_{\alpha\beta}=\sigma^\mu_{\beta\alpha}$ and $\varepsilon_{\alpha\beta}=-\varepsilon_{\beta\alpha}$ with $\varepsilon_{\alpha\gamma}\varepsilon^{\gamma\beta}=\delta_\alpha^\beta$. The superfields $\Phi^i$ take values in a 3-algebra\footnote{i.e.\ either a real or a Hermitian 3-algebra} $\CA$, while $V$ takes values in its associated Lie algebra $\frg_\mathcal{A}$. By a bar, we shall mean the appropriate complex conjugation operation (i.e.\ that of components and that of the gauge algebra representation). 
To make our notation more concise, we shall always write $X(A)$ or even $X A$ as a shorthand for the action of an element $\delta_X$ of the associated Lie algebra $\frg_\CA$ on $A\in\CA$. \subsection{Deformations of the superfield action in the real case}\label{sec:deform} We start from a Wess-Zumino model minimally coupled to a Chern-Simons theory. Correspondingly, the superfield action reads as \begin{equation}\label{eq:S0Real} S^R_0\ =\ \di\sqrt{\kappa}\int \dd^{3|4}z\int_0^1\dd t\,(\hspace{-0.1cm}( V,\bar{D}^\alpha \big(\de^{-\frac{2\di}{\sqrt{\kappa}}tV} D_\alpha \de^{\frac{2\di}{\sqrt{\kappa}}tV}\big))\hspace{-0.1cm})+ \int \dd^{3|4}z\,(\bar{\Phi}_i,\de^{-\frac{2\di}{\sqrt{\kappa}}V}\Phi^i)~, \end{equation} where $\dd^{3|4}z:=\dd^3x\,\dd^4\theta$, cf.\ \cite{Zupnik:1988en,Ivanov:1991fn,Cherkis:2008qr}. The superfields $\Phi^i$ are all in the same representation of the gauge algebra $\frg_\CA$ whose carrier space is $\CA$. The coupling constant $\kappa$ is related to the Chern-Simons level $k$ via $\kappa=k/\pi$. Notice that the vector superfield has been rescaled appropriately to ensure that the action \eqref{eq:S0Real} has a proper free-field limit, $1/\sqrt{\kappa}\to0$, needed for perturbation theory. Recall that the ordinary Chern-Simons Lagrangian containing the Killing form of the Lie algebra as bilinear pairing changes by an overall sign under parity transformations. Many real 3-algebras, however, come with an associated Lie algebra of the form $\frg_\CA\cong\frg_1\oplus\frg_2$, where $\frg_1\cong\frg_2$, and the bilinear pairing is positive definite on $\frg_1$ and negative definite on $\frg_2$. The Chern-Simons Lagrangian then splits into two pieces of Chern-Simons type with a relative sign between the two Chern-Simons levels. Parity invariance can now be restored by postulating that under this transformation, the first Chern-Simons Lagrangian transforms into the second one and vice versa.
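This mechanism can be summarized schematically (suppressing all normalizations; $A^{(1)}$ and $A^{(2)}$ denote the gauge potentials associated with $\frg_1$ and $\frg_2$, respectively):
\begin{equation*}
\mathcal{L}_{\rm CS}\ \sim\ k\,\big(\mathcal{L}_{\rm CS}[A^{(1)}]-\mathcal{L}_{\rm CS}[A^{(2)}]\big)~,
\end{equation*}
so that the sign flip of each Chern-Simons piece under parity is compensated by the simultaneous exchange $A^{(1)}\leftrightarrow A^{(2)}$.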
We also allow for superpotential terms, which we take to be of the following form:\footnote{We could also have included terms involving the associated 3-product, but in the real case the ordinary 3-bracket already allows for marginal deformations. Additionally, one could introduce mass deformations of the form $\int\dd^{3|2}z\,R_{ij}(\Phi^i,\Phi^j)+\mbox{c.c.}$, but in this work we shall only be concerned with deformations that do not break conformal invariance already at the classical level.} \begin{equation}\label{eq:S1Real} \begin{aligned} S_1^R\ &=\ \int \dd^{3|2}z\,\left[R^{(1)}_{ijkl}(\Phi^l,[\Phi^i,\Phi^j,\Phi^k]) +R^{(2)}_{ijkl}(\Phi^i,\Phi^j)(\Phi^k,\Phi^l)\right]\\ &\kern1cm+\int \dd^{3|2}\bar z\,\left[R^{ijkl}_{(1)} (\bar{\Phi}_l,[\bar{\Phi}_i,\bar{\Phi}_j,\bar{\Phi}_k])+R^{ijkl}_{(2)}({\bar{\Phi}}_i,{\bar{\Phi}}_j) ({\bar{\Phi}}_k,{\bar{\Phi}}_l)\right], \end{aligned} \end{equation} where $\dd^{3|2}z:=\dd^3x\, \dd^2\theta$ and $\dd^{3|2}\bar z:=\dd^3x\, \dd^2{\bar{\theta}}$ are the (anti)chiral superspace measures. The symmetry properties of the 3-bracket and the pairing induce the following symmetry structures on the four-index parameters: \begin{equation} R_{ijkl}^{(1)}\ =\ -R_{jikl}^{(1)}\ =\ -R_{ijlk}^{(1)}\ =\ R_{klij}^{(1)}\eand R_{ijkl}^{(2)}\ =\ R_{jikl}^{(2)}\ =\ R_{ijlk}^{(2)}\ =\ R_{klij}^{(2)}~. \end{equation} The couplings with upper indices are related to those with lower indices by complex conjugation, \begin{equation} R_{ijkl}^{(1)}\ =\ (R^{ijkl}_{(1)})^*\eand R_{ijkl}^{(2)}\ =\ (R^{ijkl}_{(2)})^*~. \end{equation} The component form of the action $S_0^R+S_1^R$ is given in Appendix \ref{app:CFofA}. Note that the double trace term in the superpotential \eqref{eq:S1Real} corresponds to a double and a triple trace deformation in the potential.
Note also that when discussing Feynman rules, the quartic terms $R^{(1)}$ and $R^{(2)}$ may be formally combined into one single vertex, cf.\ \eqref{eq:DefOfSymR} together with \eqref{eq:PPPP-Vertex} and \eqref{eq:PPPPbar-Vertex}. Furthermore, the full supergauge transformations\footnote{after performing the integral over $t$} are given by \begin{equation} \begin{aligned} \delta V\ &=\ \pounds_{-\frac{\di}{\sqrt{\kappa}}V}\big\{\Lambda-\bar\Lambda+\coth(\pounds_{-\frac{\di}{\sqrt{\kappa}}V})(\Lambda+\bar \Lambda)\big\}\ =\ \Lambda+\bar\Lambda-\tfrac{\di}{\sqrt{\kappa}}[V,\Lambda-\bar\Lambda]+\mathcal{O}(1/\kappa)~,\\ \delta\Phi^i\ &=\ \tfrac{2\di}{\sqrt{\kappa}}\Lambda(\Phi^i)~, \end{aligned} \end{equation} where $\pounds$ is the Lie derivative $\pounds_{X}(Y)=[\hspace{-0.05cm}[ X,Y]\hspace{-0.05cm}]$, $\coth(\pounds_{-\frac{i}{\sqrt{\kappa}}V})$ is defined via its series expansion and $\Lambda$ and $\bar\Lambda$ are the chiral and antichiral gauge parameters. By construction, the above model has at least $\mathcal{N}=2$ supersymmetry. Higher supersymmetry depends on the underlying 3-algebra and the choices for the coefficients in the superpotential. For instance, the original BLG model corresponds to \begin{equation}\label{eq:BLGmodel} \CA\ =\ A_4~,~~~N_f\ =\ 4~,~~~R_{ijkl}^{(1)}\ =\ \tfrac{\di}{4!\kappa}\eps_{ijkl} \eand R_{ijkl}^{(2)}\ =\ 0~, \end{equation} which yields the maximally supersymmetric theory with $\mathcal{N}=8$ supersymmetry. \subsection{Deformations of the superfield action in the Hermitian case} In the Hermitian case which is based on Hermitian 3-algebras, the $\mathsf{SU}(N_f)$ flavor multiplet will not be chiral, as discussed, e.g., in \cite{Minahan:2008hf}. Therefore, we have the set of chiral superfields $\Phi^i=(\Phi^1,\ldots,\Phi^{N_f})=(\Phi^m,\Phi^\mdt)$, but the $\mathsf{SU}(N_f)$ flavor multiplet is formed by $(\Phi^m,\bar{\Phi}_\mdt)$.
That is, we split the flavor index $i=1,\ldots,N_f$ into a pair $m,\mdt=1,\ldots,N_f/2$, where $\Phi^m$ and $\bar{\Phi}_{\mdt}$ will now be in the same representation of the gauge algebra whose carrier space is $\CA$. Accordingly, we have to adjust the model here to read as \begin{equation}\label{eq:S0Complex} \begin{aligned} S^H_0\ &=\ \di\sqrt{\kappa}\int \dd^{3|4}z\int_0^1\dd t\,(\hspace{-0.1cm}( V,\bar{D}^\alpha\big(\de^{-\frac{2\di}{\sqrt{\kappa}} tV} D_\alpha \de^{\frac{2\di}{\sqrt{\kappa}} tV}\big))\hspace{-0.1cm})\\ &\kern1cm+\int \dd^{3|4}z\,\big[(\Phi^m,\de^{-\frac{2\di}{\sqrt{\kappa}}V}\Phi^m)+ ({\bar{\Phi}}_\mdt,\de^{\frac{2\di}{\sqrt{\kappa}}V}{\bar{\Phi}}_\mdt)\big]~. \end{aligned} \end{equation} The unusual contraction of the flavor indices is due to the antilinearity of the third slot in the Hermitian 3-bracket and the first slot in the Hermitian pairing, respectively. The coupling constant $\kappa$ is again related to the Chern-Simons level $k$ via $\kappa=k/\pi$. We will allow for the following superpotential deformations, which preserve classical conformal invariance: \begin{equation}\label{eq:S1Complex} \begin{aligned} S_1^H\ &=\ \int \dd^{3|2}z\,\left[H_{mn\mdt\ndt}^{(1)}(\bar{\Phi}_\ndt,[\Phi^m,\Phi^n;\bar{\Phi}_\mdt]_\beta)+ H_{mn\mdt\ndt}^{(2)}(\bar{\Phi}_\mdt,\Phi^m)(\bar{\Phi}_\ndt,\Phi^n) \right]\\ &\kern1cm+\int \dd^{3|2}\bar z\,\left[H^{\mdt\ndt mn}_{(1)}(\Phi^n,[{\bar{\Phi}}_\mdt,{\bar{\Phi}}_\ndt;\Phi^m]_\beta)+ H^{\mdt\ndt mn}_{(2)}(\Phi^m,\bar{\Phi}_\mdt)(\Phi^n,\bar{\Phi}_\ndt) \right]~, \end{aligned} \end{equation} where $[\cdot,\cdot;\cdot]_\beta$ was defined in \eqref{eq:BetaCommutator}. 
The symmetry structure of the couplings here reads as \begin{equation} H_{mn\mdt\ndt}^{(1)}\ =\ H_{nm\ndt\mdt}^{(1)}\eand H_{mn\mdt\ndt}^{(2)}\ =\ H_{nm\ndt\mdt}^{(2)}~, \end{equation} and the relations of couplings with upper indices to the ones with lower indices are \begin{equation} H_{mn\mdt\ndt}^{(1)}\ =\ -(H^{\ndt\mdt mn}_{(1)})^*\eand H_{mn\mdt\ndt}^{(2)}\ =\ (H^{\mdt\ndt mn}_{(2)})^*~. \end{equation} For the particular choice $\beta=0$, the $\beta$-3-bracket reduces to the Hermitian 3-bracket. In this case, the coupling $H^{(1)}_{mn\mdt\ndt}$ has the additional symmetry properties $H_{mn\mdt\ndt}^{(1)}=-H_{nm\mdt\ndt}^{(1)}=-H_{mn\ndt\mdt}^{(1)}$. Thus, for $N_f=4$ it is of the form $H^{(1)}_{mn\mdt\ndt}\sim\varepsilon_{mn}\varepsilon_{\mdt\ndt}$. Moreover, supergauge transformations in this case are given by \begin{equation} \begin{aligned} \delta V\ &=\ \pounds_{-\frac{\di}{\sqrt{\kappa}}V}\big\{\Lambda-\bar\Lambda+\coth(\pounds_{-\frac{\di}{\sqrt{\kappa}}V})(\Lambda+\bar \Lambda)\big\}\ =\ \Lambda+\bar\Lambda-\tfrac{\di}{\sqrt{\kappa}}[V,\Lambda-\bar\Lambda]+\mathcal{O}(1/\kappa)~,\\ \delta\Phi^m\ &=\ \tfrac{2\di}{\sqrt{\kappa}}\Lambda(\Phi^m)\eand \delta{\bar{\Phi}}_\mdt\ =\ -\tfrac{2\di}{\sqrt{\kappa}}\bar\Lambda({\bar{\Phi}}_\mdt)~. \end{aligned} \end{equation} Note again that the representation formed by $\Phi^\mdt$ is the complex conjugate representation of $\Phi^m$. We refer to Appendix \ref{app:CFofA} for the component version of the above actions (for $\beta=0$). The ABJM model as formulated in \cite{Bagger:2008se} is obtained by choosing $\CA=M_{\rm I_\alpha}^H(N)$ together with the couplings \begin{equation}\label{eq:ABJMmodel} N_f\ =\ 4~,~~~\beta\ =\ 0~,~~~H_{mn\mdt\ndt}^{(1)}\ =\ \tfrac{1}{4\kappa}\varepsilon_{mn}\varepsilon_{\mdt\ndt} \eand H_{mn\mdt\ndt}^{(2)}\ =\ 0~, \end{equation} and putting $\alpha=1$ in \eqref{Herm_I}; this yields exactly the ABJM model as written down, e.g., in \cite{Benna:2008zy}.
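As a small numerical illustration, the invariance properties \eqref{eq:BetaDeformSCS} of the $\beta$-3-bracket can be checked directly, taking the pairing to be $(A,B)=\tr(A^\dagger B)$, antilinear in the first slot as stated above; in the Python sketch below, the matrices and the value of $\beta$ are arbitrary sample data:

```python
# Check of (D,[A,B;C]_beta) = -([D,C;A]_beta,B)
# and      (D,[A,B;C]_beta) = (C,[B,A;D]_beta)
# with pairing (A,B) = tr(A^dag B). All sample data below is arbitrary.
import cmath

def mm(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dag(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def lin(c1, A, c2, B):
    return [[c1 * x + c2 * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

beta = 0.37
e = cmath.exp(1j * beta)

def brb(A, B, C):   # beta-3-bracket: e^{i beta} A C^dag B - e^{-i beta} B C^dag A
    return lin(e, mm(A, mm(dag(C), B)), -e.conjugate(), mm(B, mm(dag(C), A)))

def pair(A, B):     # (A,B) = tr(A^dag B), antilinear in the first slot
    return tr(mm(dag(A), B))

A = [[1j, 2], [0.5, -1 + 1j]]
B = [[0.3, 1 - 2j], [4j, 0.1]]
C = [[2, -1j], [1 + 1j, 3]]
D = [[0.7j, 1], [-2, 0.4 - 1j]]

assert abs(pair(D, brb(A, B, C)) + pair(brb(D, C, A), B)) < 1e-9
assert abs(pair(D, brb(A, B, C)) - pair(C, brb(B, A, D))) < 1e-9
```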
\section{Marginal deformations of the BLG and ABJM models} All the superpotential terms introduced in the previous section are classically marginal. Recall that they were captured by parameters $R^{(\ell)}_{ijkl}$ and $H^{(\ell)}_{mn\mdt\ndt}$ for $\ell=1,2$. In the following, we will examine their behavior under quantization. While the beta function of a pure three-dimensional WZ model is zero due to an argument analogous to that of \cite{Seiberg:1993vc}, the situation is different when we couple the model to a Chern-Simons action, cf.\ \cite{Gaiotto:2007qi}: In SYM theories it is possible to argue that the couplings in the superpotential do not renormalize by promoting the gauge coupling to a chiral superfield. The Chern-Simons level, however, is not a continuous parameter, and therefore this argument does not apply here. Fortunately, it is known that the Chern-Simons level itself does not receive any quantum corrections, see e.g.\ \cite{Gates:1991qn,Kapustin:1994mt,DelCima:1997pb}, even if the model is coupled to arbitrary renormalizable matter theories.\footnote{cf.\ also the discussion in \cite{AlvarezGaume:1989wk,Chen:1992ee,Kao:1995gf} and more recently in \cite{Bedford:2008hn}.} It therefore suffices to study the beta function of the superpotential couplings. \subsection{Quantum action in the real case}\label{sec:QAandFR} To discuss the renormalization of our models, we find it convenient to perform the quantum computations directly in superspace. For textbook treatments of the supergraph formalism in the SYM case in four dimensions, which is very similar to our discussion below, we refer e.g.\ to \cite{Gates:1983nr,Buchbinder:1998qv}. Let us start from the action \eqref{eq:S0Real} in the real setting. The Hermitian case of \eqref{eq:S0Complex} is treated analogously, and we will discuss the differences in Section \ref{eq:BetaHC}. We shall suppress the superscript $R$ in the following. First, let us expand \eqref{eq:S0Real} in powers of $V$.
For our purposes, it will be enough to keep terms only up to $\mathcal{O}(V^3)$, \begin{equation} \begin{aligned} S_0\ &=\ \int \dd^{3|4}z\left[\di(\hspace{-0.1cm}( V,\bar D_\alpha D^\alpha V)\hspace{-0.1cm})+\tfrac{2}{3\sqrt{\kappa}} (\hspace{-0.1cm}( V,[\hspace{-0.05cm}[ D_\alpha V,{\bar{D}}^\alpha V]\hspace{-0.05cm}])\hspace{-0.1cm})+({\bar{\Phi}}_i,\Phi^i)+\tfrac{-2\di}{\sqrt{\kappa}}({\bar{\Phi}}_i,V\Phi^i)\right.\\ &\kern1cm\left. +\tfrac{1}{2!}\big(\tfrac{-2\di}{\sqrt{\kappa}}\big)^2({\bar{\Phi}}_i,V^2\Phi^i) +\tfrac{1}{3!}\big(\tfrac{-2\di}{\sqrt{\kappa}}\big)^3({\bar{\Phi}}_i,V^3\Phi^i)+\mathcal{O}(V^4)\right]. \end{aligned} \end{equation} Here and in the following, the bracket $[\hspace{-0.05cm}[\cdot,\cdot]\hspace{-0.05cm}]$ denotes the supercommutator, i.e.\ an anticommutator if the Gra\ss mann parity of both arguments is odd and a commutator otherwise. To quantize this action, we adopt a supersymmetric Landau gauge as done e.g.\ in \cite{Avdeev:1991za,Gates:1991qn}. The corresponding gauge fixing term reads as\footnote{Alternatively, we could have introduced the usual gauge fixing Lagrangian $\mathcal{L}_{\rm gf}\sim \frac1\xi(\hspace{-0.1cm}( V,D^2{\bar{D}}^2V+{\bar{D}}^2 D^2V)\hspace{-0.1cm})$ at the cost of having a dimensionful gauge parameter $\xi$; in fact, since $V$ is dimensionless, $\xi$ is of mass-dimension 1. As a consequence, the corresponding gluon propagator has a bad IR behavior for $\xi\neq0$. However, for $\xi=0$ the propagator is the same as the one given in \eqref{eq:GluonProp} for $\alpha\beta\to0$.} \begin{equation}\label{eq:gaugefixingaction} S_{\rm gf}\ =\ \int \dd^{3|4}z\,(\hspace{-0.1cm}( V, \{\alpha^{-1}(D^2+{\bar{D}}^2)-\di \beta^{-1}(D^2-{\bar{D}}^2)\}V )\hspace{-0.1cm})~, \end{equation} where we take the limit $\alpha\beta\to0$. Here, $\alpha$ and $\beta$ are dimensionless parameters and $D^2:=D^\alpha D_\alpha$ and ${\bar{D}}^2:={\bar{D}}_\alpha {\bar{D}}^\alpha$. 
Accordingly, the Faddeev-Popov action is \begin{equation}\label{eq:ghostaction} S_{\rm gh}\ =\ \int \dd^{3|4}z\,(\hspace{-0.1cm}( b-\bar b, \pounds_{-\frac{i}{\sqrt{\kappa}}V}\big\{c-\bar c+\coth\big(\pounds_{-\frac{i}{\sqrt{\kappa}}V}\big)(c+\bar c)\big\} )\hspace{-0.1cm}) ~, \end{equation} where the $c$ are the ghosts while the $b$ are the antighosts; these are (anti)chiral superfields. As one may check, $S_{\rm gf}+S_{\rm gh}$ is invariant under the following BRST transformation laws: \begin{equation} \begin{aligned} \delta_{BRST} V \ &=\ \tfrac{\di\sqrt{\kappa}}{2}\,\eta\, \pounds_{-\frac{i}{\sqrt{\kappa}}V}\big\{c-\bar c+\coth\big(\pounds_{-\frac{i}{\sqrt{\kappa}}V}\big)(c+\bar c)\big\}~,\\ \delta_{BRST} c\ &=\ -\eta\,c^2\eand \delta_{BRST} \bar c\ =\ \eta\,\bar c^2~,\\ \delta_{BRST} b\ &=\ -\di\sqrt{\kappa}\,\eta\,(\alpha^{-1}-\di \beta^{-1}){\bar{D}}^2V~,\\ \delta_{BRST} \bar b\ &=\ \di\sqrt{\kappa}\,\eta\,(\alpha^{-1}+\di\beta^{-1})D^2V~, \end{aligned} \end{equation} where $\eta$ is some anticommuting parameter. For our purposes, we will need $S_{\rm gh}$ only to $\mathcal{O}(V^1)$, \begin{equation} S_{\rm gh}\ =\ \int \dd^{3|4}z\,\left[-(\hspace{-0.1cm}(\bar b,c)\hspace{-0.1cm})-(\hspace{-0.1cm}(\bar c,b)\hspace{-0.1cm})-\tfrac{\di}{\sqrt{\kappa}} (\hspace{-0.1cm}( b-\bar b,[\hspace{-0.05cm}[ V,c-\bar c]\hspace{-0.05cm}])\hspace{-0.1cm})+\mathcal{O}(V^2)\right]. \end{equation} The full quantum action is then given by \begin{equation}\label{eq:QuantumAction} S_q\ =\ S_0+S_1+S_{\rm gf}+S_{\rm gh}~. \end{equation} In order to have a compact form of the Feynman rules, we use capital Roman letters $A,B,\ldots=1,\ldots,\operatorname{dim}\frg_\CA$ to denote gauge algebra indices. For this, it is important to note that there is a priori no bijection between pairs of indices $ab$ denoting elements of $\Lambda^2\CA$ and an index $A$ corresponding to an element of $\frg_\CA$. 
This is due to the fact that $\delta:\Lambda^2 \CA\rightarrow \frg_\CA$ is not injective in general (with an exception being the case of the real 3-algebra $A_4$). This point has to be carefully taken into account in all the calculations in the following. In terms of the gauge algebra indices, the invariant form $(\hspace{-0.1cm}(\cdot,\cdot)\hspace{-0.1cm})$ on $\frg_{\CA}$ is simply given by \begin{equation} (\hspace{-0.1cm}( X,Y)\hspace{-0.1cm})\ =\ X^{ab} Y^{cd} f_{abcd}\ =:\ X^A Y^B G_{AB}~,{~~~\mbox{with}~~~} G_{AB}\ =\ G_{BA}~. \end{equation} We assume that $G_{AB}$ has an inverse denoted by $G^{AB}$ with $G_{AC}G^{CB}={\delta_A}^B$. Note that the identification $G_{AB}=f_{abcd}$ holds only if $\delta$ is a bijection (as is the case for $\CA=A_4$). The structure constants of $\frg_{\CA}$ are denoted by ${F_{AB}}^C$. In interactions like the 3-gluon vertex, the quantity $F_{ABC}:= {F_{AB}}^D G_{DC}$ will appear. Due to $ad$-invariance of $(\hspace{-0.1cm}(\cdot,\cdot)\hspace{-0.1cm})$, $F_{ABC}$ is totally antisymmetric in $ABC$. Moreover, we will use multi-indices $I=ia$ combining flavor and 3-algebra indices whenever convenient. For example, vertices like \begin{subequations} \begin{equation} ({\bar{\Phi}}_i,V(\Phi^i))\ =\ V^{ab}\Phi^{ic}{\bar{\Phi}}_i^{d} f_{abcd}\ =\ V^{ab}\Phi^{jc}{\bar{\Phi}}_{id} {f_{abc}}^d\delta_i^{~j}~, {~~~\mbox{with}~~~}{\bar{\Phi}}_{ia}\ :=\ h_{ab}{\bar{\Phi}}_i^b \end{equation} that appear in the expansion of $(\bar{\Phi}_i,\de^{-\frac{2\di}{\sqrt{\kappa}}V}\Phi^i)$, will be written as \begin{equation} ({\bar{\Phi}}_i,V(\Phi^i))\ =\ \Phi^I V^A {T_{AI}}^J {\bar{\Phi}}_J~, \end{equation} where \begin{equation} [T_A,T_B]\ =\ {F_{AB}}^C T_C~. \end{equation} \end{subequations} We stress again that the identification ${T_{AI}}^J={f_{abc}}^d\delta_i^{~j}$ works only if $\delta:\Lambda^2\CA\rightarrow \frg_\CA$ is a bijection.
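For $\CA=A_4$, where $f_{abcd}=\eps_{abcd}$ and $h_{ab}=\delta_{ab}$, the bijectivity of $\delta$ can be made explicit by checking that the six basis derivations $[\,\cdot\,,\tau_a,\tau_b]$ with $a<b$ are linearly independent as $4\times4$-matrices, matching $\operatorname{dim}\mathfrak{so}(4)=6$; a minimal sketch in pure Python (the rank routine is plain Gaussian elimination):

```python
# For A_4: f_{abcd} = eps_{abcd}.  The matrix of the derivation
# delta_{(a,b)} = [., tau_a, tau_b] is (delta_{(a,b)})_c^d = f_{cab}^d = eps_{cabd}.
# We flatten the six matrices with a < b and check that they span a
# 6-dimensional space, i.e. that delta is injective on Lambda^2 A_4.

def eps(i, j, k, l):   # 4-dimensional Levi-Civita symbol
    idx = (i, j, k, l)
    if len(set(idx)) < 4:
        return 0
    sign = 1
    for p in range(4):
        for q in range(p + 1, 4):
            if idx[p] > idx[q]:
                sign = -sign
    return sign

vectors = []
for a in range(4):
    for b in range(a + 1, 4):
        M = [[eps(c, a, b, d) for d in range(4)] for c in range(4)]
        vectors.append([x for row in M for x in row])   # flatten to a 16-vector

def rank(rows):
    rows = [[float(x) for x in r] for r in rows]
    rk = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows))
                    if abs(rows[i][col]) > 1e-9), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and abs(rows[i][col]) > 1e-9:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rk

assert rank(vectors) == 6   # dim so(4) = 6: delta is a bijection for A_4
```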
\subsection{Feynman rules} We have now all the necessary ingredients to write down the momentum space Feynman rules for our theory ($\partial_\mu\mapsto-\di p_\mu$). \bigskip \noindent{\it \underline{Propagators:}} \bigskip \noindent The propagators are found to be\footnote{Here and in the sequel, we make no notational distinction between a position space field and its momentum space version (after Fourier transform).} \begin{subequations}\label{eq:propagators} \begin{eqnarray} \kern-.8cm \begin{picture}(80,20)(-5,0) \SetScale{1} \put(0,0){ \Photon(0,2)(65,2){3}{12} \Text(65,12)[l]{$B$} \Text(65,-10)[l]{$\theta_2$} \Text(2,12)[r]{$A$} \Text(2,-10)[r]{$\theta_1$} \Text(34,10)[]{$\longrightarrow$} \Text(36,15)[]{$p$} } \end{picture} \kern8pt:\kern8pt \langle V^A(-p,\theta_1) V^B(p,\theta_2)\rangle\! &=&\notag\\[12pt] &&\kern-8.5cm=\ -\frac{\di}{4p^2}G^{AB}\left[{\bar{D}}_\alpha D^\alpha-\tfrac{\di}{4}\tfrac{\alpha^2\beta^2}{\alpha^2+\beta^2} \left\{\alpha^{-1}(D^2+{\bar{D}}^2)- \di \beta^{-1}(D^2-{\bar{D}}^2)\right\}\right]\delta^{(4)}(\theta_1-\theta_2)~,\label{eq:GluonProp}\\[12pt] \kern-.8cm \begin{picture}(80,20)(-4,0) \SetScale{1} \put(0,0){ \ArrowLine(0,2)(65,2) \Text(65,12)[l]{$J$} \Text(65,-10)[l]{$\theta_2$} \Text(2,12)[r]{$I$} \Text(2,-10)[r]{$\theta_1$} \Text(34,10)[]{$\longrightarrow$} \Text(36,15)[]{$p$} } \end{picture} \kern8pt:\kern8pt \langle \Phi^{I}(-p,\theta_1){\bar{\Phi}}_J(p,\theta_2)\rangle\! &=&\! -\frac{\di}{p^2}{\delta^I}_J \delta^{(4)}(\theta_1-\theta_2)~,\\[12pt] \kern-.8cm \begin{picture}(80,20)(0,0) \SetScale{1} \put(0,0){ \DashArrowLine(0,2)(65,2){4} \Text(65,12)[l]{$B$} \Text(65,-10)[l]{$\theta_2$} \Text(2,12)[r]{$A$} \Text(2,-10)[r]{$\theta_1$} \Text(34,10)[]{$\longrightarrow$} \Text(36,15)[]{$p$} } \end{picture} \kern8pt:\kern8pt \langle c^A(-p,\theta_1)\bar b^B(p,\theta_2)\rangle\! &=&\! 
\frac{\di}{p^2}G^{AB}\delta^{(4)}(\theta_1-\theta_2)~,\\[12pt] \kern-.8cm \begin{picture}(80,20)(0,0) \SetScale{1} \put(0,0){ \DashArrowLine(0,2)(65,2){4} \Text(65,12)[l]{$B$} \Text(65,-10)[l]{$\theta_2$} \Text(2,12)[r]{$A$} \Text(2,-10)[r]{$\theta_1$} \Text(34,10)[]{$\longrightarrow$} \Text(36,15)[]{$p$} } \end{picture} \kern8pt:\kern8pt \langle b^A(-p,\theta_1)\bar c^B(p,\theta_2)\rangle\! &=&\! \frac{\di}{p^2}G^{AB}\delta^{(4)}(\theta_1-\theta_2)~, \end{eqnarray} \end{subequations} where all derivatives are understood to depend on $p$ and to act on $\theta_1$.\footnote{Note that we make no pictorial distinction between $\langle c\bar b\rangle $ and $\langle b\bar c\rangle $.} Here, we suppressed the usual $\di\varepsilon$-prescription of the poles. As already indicated, in this work we will use Landau gauge with $\alpha\beta\to0$. We shall also use the convention ${\bar{D}}_\alpha D^\alpha = {\bar{D}} D$. \bigskip \noindent{\it \underline{Vertices:}} \bigskip \noindent Vertices can be read off directly from the action \eqref{eq:QuantumAction}, and for the reader's convenience we have summarized them in Appendix \ref{app:vertices}. As in SYM theory in superspace language, there is one additional feature: each chiral or antichiral line leaving a vertex carries a factor of $-\frac14 {\bar{D}}^2$ or $-\frac14 D^2$ acting on the corresponding propagator. However, for purely chiral or antichiral vertices that come from the superpotential, we omit one factor of $-\frac14 {\bar{D}}^2$ or $-\frac14 D^2$ corresponding to one internal line, i.e.\ a vertex with $n$ internal lines attached carries $n-1$ derivative factors. \bigskip \noindent{\it \underline{Integration, symmetry factors and regularization:}} \bigskip \noindent First, there are the usual loop-momentum integrals $\int\frac{\dd^3p}{(2\pi)^3}$ for each loop and momentum-conserving delta functions. Second, we integrate over $\dd^4\theta$ at each vertex.
Finally, the usual symmetry factors associated with the diagram have to be taken into account. Our regularization prescription is as follows: We will perform all manipulations of the formul\ae{} in $D=3$, $\mathcal{N}=2$ superspace and only compute the final loop-momentum integrals in dimensional regularization. This prescription corresponds to dimensional reduction \cite{Siegel:1979wq}, a procedure which is known to be valid at least up to two-loop order \cite{Chen:1992ee}. \subsection{Powercounting} Before performing the calculation, it is useful to look at the superficial degree of divergence $\delta(\Gamma)$ of some diagram $\Gamma$. With the given Feynman rules, the gluon propagator $\langle VV\rangle$ scales as $1/p$ for large momenta while the propagators for the matter $\langle\Phi{\bar{\Phi}}\rangle$ and the ghosts $\langle c\bar b\rangle$ and $\langle b\bar c\rangle$ go like $1/p^2$. The $V^{n}$ vertex scales as $D{\bar{D}}\sim p$, each vertex of type $\Phi V^n{\bar{\Phi}}$ goes like $D^2{\bar{D}}^2\sim p^2$ and the $\Phi^4$ and ${\bar{\Phi}}^4$ vertices behave as ${\bar{D}}^6\sim p^3$ and $D^6\sim p^3$, respectively. Any ghost/gluon interaction goes like $D^2{\bar{D}}^2\sim p^2$. Then each external chiral or antichiral line (matter and ghost lines) goes like $1/{\bar{D}}^2\sim 1/p$ or $1/D^2\sim 1/p$. Finally, as in SYM theory in four dimensions \cite{Grisaru:1979wc}, for each loop one may reduce all the $\dd^4\theta$-integrals to just a single one by partially integrating the $D$- and ${\bar{D}}$-derivatives, hence leaving each loop-momentum integral to behave as $\dd^3 p/D^2{\bar{D}}^2\sim p$. Altogether, the superficial degree of divergence is thus given by \begin{equation} \delta(\Gamma)\ =\ V_g+2V_{cg}+3V_c-I_g-2I_c+L-E_c~, \end{equation} where $V_g$ is the number of purely gluonic vertices, $V_{cg}$ the number of matter/gluon and ghost/gluon interactions and $V_c$ is the number of purely chiral vertices of $\Gamma$.
Then, $I_g$ is the number of internal gluon lines, $I_c$ is the number of internal ghost and matter lines and $E_c$ is the number of external ghost and matter lines. Finally, $L$ is the number of loops. Using the formul\ae{} \begin{equation} L\ =\ I-V+1\ =\ I_g+I_c-V_g-V_{cg}-V_c+1\eand E_c+2I_c \ =\ 2V_{cg}+4V_c~, \end{equation} to eliminate $L$ and $I_c$, we eventually arrive at \begin{equation}\label{eq:SDofD} \delta(\Gamma) \ =\ \tfrac{1}{2}(2-E_c)~. \end{equation} Comparing this with the result of SYM theory in four dimensions \cite{Grisaru:1979wc}, we conclude that $\delta_{SCS}=\frac12\delta_{SYM}$. Equation \eqref{eq:SDofD} then tells us that all diagrams with more than two external chiral lines are superficially convergent. Notice that \eqref{eq:SDofD} can be refined further. When partially integrating the supercovariant derivatives, some of them will get transferred to external lines (when, e.g., computing the wave function renormalization of the vector superfield or the renormalization of the superpotential). If we let $N_D$ be the number of $D$- and ${\bar{D}}$-derivatives that are transferred to external lines, then the superficial degree of divergence is given by \begin{equation}\label{eq:SDofD2} \delta(\Gamma) \ =\ \tfrac{1}{2}(2-E_c-N_D)~. \end{equation} \subsection{Two-loop renormalization in the real case} Let us now come to the computation of the beta functions $\beta_{ijkl}^{(\ell)}$ for the couplings $R_{ijkl}^{(\ell)}$ with $\ell=1,2$.
Upon rescaling $\Phi^i_0=(Z^{1/2})_j^{~i}\Phi^j$, where the subscript `0' refers to the bare quantities, we find \begin{equation} R_{0\,ijkl}^{(\ell)}\ =\ (Z^{-1/2})_i^{~i'}\cdots(Z^{-1/2})_l^{~l'}{Z^{(\ell)}_{i'j'k'l'}}\,\!^{i''j''k''l''} R_{i''j''k''l''}^{(\ell)}~, \end{equation} and hence \begin{subequations} \begin{equation} \begin{aligned} \beta_{ijkl}^{(\ell)}\ &=\ {(Z^{-1})_{ijkl}}^{i'j'k'l'}\gamma_{i'}^{~m}{Z^{(\ell)}_{mj'k'l'}}\,\!^{i''j''k''l''}R_{i''j''k''l''}^{(\ell)}\\ &\kern1cm +\cdots+ {(Z^{-1})_{ijkl}}^{i'j'k'l'}\gamma_{l'}^{~m}{Z^{(\ell)}_{i'j'k'm}}\,\!^{i''j''k''l''} R_{i''j''k''l''}^{(\ell)}\\ &\kern2cm+{((Z^{(\ell)})^{-1})_{ijkl}}\,\!^{i'j'k'l'} \frac{\dd {Z^{(\ell)}_{i'j'k'l'}}\,\!^{i''j''k''l''}}{\dd\log \mu} R_{i''j''k''l''}^{(\ell)}~, \end{aligned} \end{equation} where \begin{equation} \gamma_i^{~j}\ =\ (Z^{-1/2})_i^{~k}\frac{\dd (Z^{1/2})_k^{~j}}{\dd\log \mu}\ =\ \frac12 \frac{\dd(\log Z)_i^{~j}}{\dd\log \mu} \end{equation} \end{subequations} denotes the anomalous dimension of the field $\Phi^i$ and ${Z^{(\ell)}_{ijkl}}\,\!^{i'j'k'l'}$ is the renormalization of the quartic vertex $\ell$. 
\begin{figure}[h] \begin{center} \begin{picture}(100,110)(60,-10) \SetScale{1} \put(0,0){ \Vertex(28,50){1} \Vertex(72,50){1} \PhotonArc(50,50)(22,0,383){3}{25.5} \ArrowLine(0,50)(28,50) \ArrowLine(28,50)(72,50) \ArrowLine(72,50)(100,50) \LongArrowArc(50,58)(22,60,120) \LongArrowArc(50,42)(22,240,300) \Text(105,50)[l]{{\footnotesize ${\bar{\Phi}}_J(p,\theta_2)$}} \Text(-47,50)[l]{{\footnotesize $\Phi^I(-p,\theta_1)$}} \Text(12,42)[l]{{\footnotesize $p$}} \Text(47,42)[l]{{\footnotesize $k$}} \Text(84,42)[l]{{\footnotesize $p$}} \Text(32,87)[l]{{\footnotesize $k+l-p$}} \Text(47,13)[l]{{\footnotesize $l$}} \Text(44,0)[l]{{\footnotesize $(a)$}} } \end{picture} \begin{picture}(100,110)(-60,-10) \SetScale{1} \put(0,0){ \Vertex(28,50){1} \Vertex(72,50){1} \ArrowArc(50,50)(22,0,180) \ArrowArcn(50,50)(22,360,180) \ArrowLine(0,50)(28,50) \ArrowLine(72,50)(28,50) \ArrowLine(72,50)(100,50) \Text(105,50)[l]{{\footnotesize ${\bar{\Phi}}_J(p,\theta_2)$}} \Text(-47,50)[l]{{\footnotesize $\Phi^I(-p,\theta_1)$}} \Text(12,42)[l]{{\footnotesize $p$}} \Text(47,42)[l]{{\footnotesize $k$}} \Text(84,42)[l]{{\footnotesize $p$}} \Text(28,80)[l]{{\footnotesize $-k-l-p$}} \Text(47,18)[l]{{\footnotesize $l$}} \Text(47,0)[l]{{\footnotesize $(b)$}} } \end{picture} \begin{picture}(100,110)(0,-10) \SetScale{1} \put(0,0){ \Vertex(50,30){1} \PhotonArc(50,55)(22,120,420){3}{18.5} \ArrowLine(0,30)(50,30) \ArrowLine(50,30)(100,30) \GOval(50,74)(11,11)(0){0.7} \LongArrowArc(50,70)(22,60,120) \Text(105,30)[l]{{\footnotesize ${\bar{\Phi}}_J(p,\theta_1)$}} \Text(-45,30)[l]{{\footnotesize $\Phi^I(-p,\theta_1)$}} \Text(22,22)[l]{{\footnotesize $p$}} \Text(72,22)[l]{{\footnotesize $p$}} \Text(48,100)[l]{{\footnotesize $k$}} \Text(44,0)[l]{{\footnotesize $(c)$}} } \end{picture} \end{center} \vspace*{-15pt} \caption{Logarithmically divergent diagrams that contribute to $Z_i^{~j}$.}\label{fig:ZDiagrams} \end{figure} To compute $\beta_{ijkl}^{(\ell)}$, we emphasize that there is no one-loop renormalization, as 
there are no Feynman diagrams which could potentially contribute. Note that Lemma 3 of \cite{Gates:1991qn} is very helpful here, as it immediately rules out contributions from large classes of diagrams. The first non-trivial result is found at two loops. From the discussion in the previous section, we conclude that all four-point functions are superficially convergent and indeed, by inspecting all two-loop four-point diagrams of types $\langle\Phi\Phi\Phi\Phi\rangle$ and $\langle{\bar{\Phi}}\Phib{\bar{\Phi}}\Phib\rangle$ explicitly, one realizes that they are all convergent: the only diagram that could potentially contribute a divergence (the two-loop gluon correction to the vertex) turns out to be finite. We are therefore left with \begin{equation}\label{eq:BetaFunction} \beta_{ijkl}^{(\ell)}\ =\ \gamma_i^{~m}R_{mjkl}^{(\ell)}+\cdots+\gamma_l^{~m}R_{ijkm}^{(\ell)}~. \end{equation} Moreover, there are only three diagrams that contribute to $\gamma_i^{~j}$ and they are displayed in Fig.\ \ref{fig:ZDiagrams}. All other diagrams either vanish by supersymmetry or by their respective color structure or they are simply finite. Furthermore, it will be helpful to introduce the following operators: \begin{equation}\label{eq:ops123} \begin{aligned} (\mathscr{O}_1)_I^{~J}\ &:=\ G^{AB}{T_{AI}}^K{T_{BK}}^L G^{CD}{T_{CL}}^M{T_{DM}}^J~,\\ (\mathscr{O}_2)_I^{~J}\ &:=\ \tfrac14 G^{AC}G^{BD} {F_{AB}}^E {F_{CD}}^F {T_{EI}}^{~K} {T_{FK}}^{~J}~,\\ (\mathscr{O}_3)_I^{~J}\ &:=\ \tfrac{1}{N_f}{T_{AK}}^L {T_{BL}}^K G^{AC} G^{BD}{T_{CI}}^M {T_{DM}}^J~, \end{aligned} \end{equation} and one can show that they commute with all $T_A$. However, the $T_A$ need not form an irreducible representation of $\frg_\CA$ in general, so Schur's lemma cannot be applied directly.
Nevertheless, it turns out that for the 3-algebras we are interested in, i.e.\ $A_4$ and the class $M_{\rm III_{\alpha,\beta}}^R(N)$ and also later for $M_{\rm I_{\alpha}}^H(N)$, the operators \eqref{eq:ops123} are indeed proportional to the identity. In these cases, we define \begin{equation} (\mathscr{O}_1)_I^{~J}\ =:\ k_1^2\delta_I^{~J}~,~~~ (\mathscr{O}_2)_I^{~J}\ =:\ k_2\delta_I^{~J}~,~~~ (\mathscr{O}_3)_I^{~J}\ =:\ k_3 \delta_I^{~J}~. \end{equation} The explicit values of $k_1$, $k_2$ and $k_3$ for the various matrix representations are listed in Appendix \ref{app:UsefulFormulae} To be concise, we will give all our formul\ae{} using these constants in the following. Let us start from diagram \ref{fig:ZDiagrams}a). Using the Feynman rules listed in the previous section and in Appendix \ref{app:vertices}, this diagram is given by the following integral: \begin{equation}\label{eq:DiagramAa} \begin{aligned} \Sigma^{(a)}\ &=\ -\frac{\di}{16\cdot2\kappa^2}\big[k_2+k_1^2\big]\int\frac{\dd^3p}{(2\pi)^3}\frac{\dd^3k}{(2\pi)^3}\frac{\dd^3l}{(2\pi)^3}\,\dd^4\theta_1\dd^4\theta_2\, \Phi^I(-p,\theta_1){\bar{\Phi}}_I(p,\theta_2)\\ &\kern4cm\times\frac{[D^2{\bar{D}}^2(k,\theta_1)\delta_{21}][{\bar{D}} D(k+l-p,\theta_2)\delta_{12}][{\bar{D}} D(l,\theta_1)\delta_{12}]}{k^2 l^2 (k+l-p)^2}~, \end{aligned} \end{equation} where $\delta_{12}:=\delta^{(4)}(\theta_1-\theta_2)$; the $1/2$ is the symmetry factor. We arrive at this expression after making use of the {\it transfer rule} \begin{equation} D(p,\theta_1)\delta_{12}\ =\ -D(-p,\theta_2)\delta_{12}~, \end{equation} where $D$ represents both, $D$ and ${\bar{D}}$. Integrating by parts and by employing the $D$-algebra $\{D,{\bar{D}}\}\sim p$, $\{D,D\}=0$ and $\{{\bar{D}},{\bar{D}}\}=0$, the integral \eqref{eq:DiagramAa} simplifies to \begin{eqnarray} \Sigma^{(a)}\! &=&\! 
-\frac{4\di}{\kappa^2}\big[k_2+k_1^2\big]\int\frac{\dd^3p}{(2\pi)^3}\,\dd^4\theta\, \Phi^I(-p,\theta){\bar{\Phi}}_I(p,\theta)\underbrace{\int\frac{\dd^3k}{(2\pi)^3}\frac{\dd^3l}{(2\pi)^3}\, \frac{1}{k^2l^2(k+l-p)^2}}_{=\ -\frac{\log\Lambda}{16\pi^2}}\notag\\ &=&\! \frac{\di}{4\pi^2\kappa^2}\big[k_2+k_1^2\big] \log\Lambda\,\int\frac{\dd^3p}{(2\pi)^3}\, \dd^4\theta\, ({\bar{\Phi}}_i(p,\theta),\Phi^i(-p,\theta))~. \end{eqnarray} Thus, the contribution of $\Sigma^{(a)}$ to $Z_i^{~j}$ is \begin{equation}\label{eq:Za} \mbox{(a) :}~~~~\delta Z_i^{~j}\ =\ -\frac{\log\Lambda}{4\pi^2\kappa^2}\big[k_2+k_1^2\big]\delta_i^{~j}~. \end{equation} In a very similar manner, we find the contribution coming from diagram \ref{fig:ZDiagrams}b) to be \begin{equation}\label{eq:Zb} \begin{aligned} \mbox{(b) :}~~~~\delta Z_i^{~j}\ &=\ -\frac{2\log\Lambda}{\pi^2} \left[R_{iklm}^{(1)}\big(-c_3R^{jklm}_{(1)}+2c_2 R^{jmlk}_{(1)}+2c_1R^{jmlk}_{(2)}\big)\right.\\ &\kern2.5cm\left.+\ R_{iklm}^{(2)}\big(d\,R^{jklm}_{(2)}+2 R^{jmlk}_{(2)}+2c_1R^{jmlk}_{(1)}\big) \right]~, \end{aligned} \end{equation} where $c_1$, $c_2$ and $c_3$ are the three ``Casimirs'' of $\CA$ that are given by \begin{equation} {f_{ac}}^{cb}\ =\ c_1{\delta_a}^b~,~~~ f_{acde}f^{bedc}\ =\ c_2{\delta_a}^b\eand f_{acde}f^{bcde}\ =\ -c_3\delta_a^{~b}~ \end{equation} and $d=\operatorname{dim}\CA$ is the dimension of the 3-algebra. These relations follow from the fundamental identity. We refer to Appendix \ref{app:UsefulFormulae}, where we list $c_1$, $c_2$ and $c_3$ for the matrix representation $M_{\rm III_{\alpha,\beta}}^R(N)$. 
\begin{figure}[h] \begin{center} \begin{picture}(100,110)(60,-10) \SetScale{1} \put(0,0){ \Vertex(28,50){1} \Vertex(72,50){1} \PhotonArc(50,50)(22,0,383){3}{25.5} \Photon(0,50)(28,50){3}{5} \Photon(72,50)(100,50){3}{5} \LongArrowArc(50,58)(22,60,120) \LongArrowArc(50,42)(22,240,300) \LongArrow(8,42)(18,42) \LongArrow(79,42)(89,42) \Text(105,50)[l]{{\footnotesize $V^B(p,\theta_2)$}} \Text(-47,50)[l]{{\footnotesize $V^A(-p,\theta_1)$}} \Text(12,36)[l]{{\footnotesize $p$}} \Text(84,36)[l]{{\footnotesize $p$}} \Text(40,87)[l]{{\footnotesize $k-p$}} \Text(47,13)[l]{{\footnotesize $k$}} \Text(44,0)[l]{{\footnotesize $(a)$}} } \end{picture} \begin{picture}(100,110)(-60,-10) \SetScale{1} \put(0,0){ \Vertex(28,50){1} \Vertex(72,50){1} \DashCArc(50,50)(22,0,180){4} \DashCArc(50,50)(22,180,360){4} \Photon(0,50)(28,50){3}{5} \Photon(72,50)(100,50){3}{5} \LongArrowArc(50,58)(22,60,120) \LongArrowArc(50,42)(22,240,300) \LongArrow(8,42)(18,42) \LongArrow(79,42)(89,42) \Text(105,50)[l]{{\footnotesize $V^B(p,\theta_2)$}} \Text(-47,50)[l]{{\footnotesize $V^A(-p,\theta_1)$}} \Text(12,36)[l]{{\footnotesize $p$}} \Text(84,36)[l]{{\footnotesize $p$}} \Text(40,87)[l]{{\footnotesize $k-p$}} \Text(47,13)[l]{{\footnotesize $k$}} \Text(44,0)[l]{{\footnotesize $(b)$}} } \end{picture} \begin{picture}(100,110)(0,-10) \SetScale{1} \put(0,0){ \Vertex(28,50){1} \Vertex(72,50){1} \ArrowArc(50,50)(22,0,180) \ArrowArc(50,50)(22,180,360) \Photon(0,50)(28,50){3}{5} \Photon(72,50)(100,50){3}{5} \LongArrow(8,42)(18,42) \LongArrow(79,42)(89,42) \Text(105,50)[l]{{\footnotesize $V^B(p,\theta_2)$}} \Text(-47,50)[l]{{\footnotesize $V^A(-p,\theta_1)$}} \Text(12,36)[l]{{\footnotesize $p$}} \Text(84,36)[l]{{\footnotesize $p$}} \Text(40,84)[l]{{\footnotesize $k-p$}} \Text(47,16)[l]{{\footnotesize $k$}} \Text(44,0)[l]{{\footnotesize $(c)$}} } \end{picture} \end{center} \vspace*{-15pt} \caption{One-loop diagrams that contribute to the gluon self-energy $\Pi$; they are all finite. 
The ghost diagram (b) represents all four ghost contributions.}\label{fig:GluonSelfEnergy} \end{figure} Finally, we need to find the contribution coming from diagram \ref{fig:ZDiagrams}c). To compute this diagram, it is useful to perform the calculation in two steps. Let us first compute the one-loop contributions to the gluon self-energy $\Pi$. For this, we introduce the usual superspin projectors $\mathscr{P}_0$ and $\mathscr{P}_{1/2}$, \begin{equation} \mathscr{P}_0\ :=\ -\frac{1}{16p^2}\left[D^2{\bar{D}}^2+{\bar{D}}^2 D^2\right]\eand \mathscr{P}_{1/2}\ :=\ \frac{1}{8p^2} D^\alpha {\bar{D}}^2 D_\alpha~, \end{equation} which obey \begin{equation} \mathscr{P}_0^2\ =\ \mathscr{P}_0~,~~~\mathscr{P}_{1/2}^2\ =\ \mathscr{P}_{1/2}\eand\mathscr{P}_0+\mathscr{P}_{1/2}\ =\ 1~. \end{equation} With these, the relevant diagrams displayed in Fig.\ \ref{fig:GluonSelfEnergy} contribute according to \begin{subequations}\label{eq:GSE} \begin{eqnarray} \Pi^{(a)}\! &=&\! -\frac{\di}{8\kappa}{F_{AC}}^D{F_{BD}}^C\int\frac{\dd^3 p}{(2\pi)^3}\, \dd^4\theta\,V^A(-p,\theta)\, p\, \mathscr{P}_0 V^B(p,\theta)~,\\ \Pi^{(b)}\! &=&\! \frac{\di}{8\kappa}{F_{AC}}^D{F_{BD}}^C\int\frac{\dd^3 p}{(2\pi)^3}\,\dd^4\theta\, V^A(-p,\theta)\, p\, (\mathscr{P}_{1/2}+\mathscr{P}_0) V^B(p,\theta)~,\\ \Pi^{(c)}\! &=&\! -\frac{\di}{4\kappa}{T_{AI}}^J{T_{BJ}}^I\int\frac{\dd^3 p}{(2\pi)^3}\,\dd^4\theta\, V^A(-p,\theta)\, p\, \mathscr{P}_{1/2}V^B(p,\theta)~, \end{eqnarray} \end{subequations} as follows by using the Feynman rules listed in Section \ref{sec:QAandFR} and in Appendix \ref{app:vertices} Summing up the terms \eqref{eq:GSE}, we find \begin{equation}\label{eq:1LoopPi} \Pi\ =\ \frac{\di}{8\kappa}\left[{F_{AC}}^D{F_{BD}}^C-2{T_{AI}}^J{T_{BJ}}^I\right]\int\frac{\dd^3 p}{(2\pi)^3}\,\dd^4\theta\, V^A(-p,\theta)\, p\, \mathscr{P}_{1/2}V^B(p,\theta)~. 
\end{equation} Note that the longitudinal part $\mathscr{P}_0$ does not appear in this expression as required by the Ward identity for the vector superfield propagator. Using the result \eqref{eq:1LoopPi}, we can now derive the contribution to the anomalous dimension of $\Phi^i$ coming from diagram \ref{fig:ZDiagrams}c). After some algebraic manipulations, we arrive at \begin{equation}\label{eq:Zc} \mbox{(c) :}~~~~\delta Z_i^{~j}\ =\ -\frac{\log\Lambda}{48\pi^2\kappa^2}\big[2k_2+N_fk_3\big]\delta_i^{~j}~. \end{equation} Collecting all the results, \eqref{eq:Za}, \eqref{eq:Zb} and \eqref{eq:Zc}, we finally obtain \begin{equation}\label{eq:2LoopGammaR} \begin{aligned} \gamma_i^{~j}\ &=\ \frac{1}{8\pi^2\kappa^2} \left\{\big[k_2+k_1^2+\tfrac{1}{12}(2k_2+N_fk_3)\big]\delta_i^{~j}\right.\\ &\kern2.5cm +~8\kappa^2\left[R_{iklm}^{(1)}\big(-c_3R^{jklm}_{(1)}+2c_2 R^{jmlk}_{(1)}+2c_1R^{jmlk}_{(2)}\big)\right.\\ &\kern5cm\left.\left.+\ R_{iklm}^{(2)}\big(d\,R^{jklm}_{(2)}+2 R^{jmlk}_{(2)}+2c_1R^{jmlk}_{(1)}\big) \right]\right\}~ \end{aligned} \end{equation} for the anomalous dimension of $\Phi^i$. Equation \eqref{eq:2LoopGammaR} may then be substituted into \eqref{eq:BetaFunction} to get the final expressions for the two-loop beta functions $\beta_{ijkl}^{(\ell)}$. As a check, let us consider $\CA=A_4$. In this case we have $d=4$ and $f_{abcd}=\varepsilon_{abcd}$. Then $k_1=0$, $k_2=-3$, $k_3=6$, $c_1=0$ and $c_2=c_3=-6$. We also take $N_f=4$ together with $R_{ijkl}^{(1)}=\lambda \varepsilon_{ijkl}$ with some constant $\lambda$ and $R_{ijkl}^{(2)}=0$. Using \eqref{eq:2LoopGammaR}, the beta functions \eqref{eq:BetaFunction} reduce to \begin{equation}\label{eq:BLGBetaFunction} \beta_{ijkl}^{(1)}\ =\ -\tfrac{3}{4\pi^2\kappa^2}\big[1-(4!\kappa)^2|\lambda|^2\big]R^{(1)}_{ijkl} \eand \beta_{ijkl}^{(2)}\ =\ 0~, \end{equation} and this expression vanishes for either $\lambda=0$ (because $R^{(1)}_{ijkl}=\lambda \varepsilon_{ijkl}$) or $|\lambda|=\frac{1}{4!\kappa}$. 
The latter value of $\lambda$ is precisely the value for the original BLG model \eqref{eq:BLGmodel}. Furthermore, one might check that the phase of $\lambda$ does not flow (see also Section \ref{sec:DiscAndRes}). To characterize the fixed points, it is therefore sufficient to consider the modulus of $\lambda$. The value $|\lambda|=0$, the minimally coupled Chern-Simons matter theory, is thus a UV stable fixed point, while $|\lambda|=\frac{1}{4!\kappa}$, the BLG model, forms an IR stable fixed point. \subsection{Two-loop renormalization in the Hermitian case}\label{eq:BetaHC} Let us now discuss the Hermitian case with the action given by \eqref{eq:S0Complex}, \eqref{eq:S1Complex}, \eqref{eq:gaugefixingaction} and \eqref{eq:ghostaction}. The calculation is essentially the same as in the real case modulo some changes in the color/flavor structure of the diagrams due to the two different types of matter that transform in opposite representations of the gauge group. We introduce again \begin{equation} (\hspace{-0.1cm}( X,Y)\hspace{-0.1cm})\ =\ X^{ab} Y^{bc} f_{cabd}\ =:\ X^A Y^B G_{AB}~,{~~~\mbox{with}~~~} G_{AB}\ =\ G_{BA}~ \end{equation} and assume that $G_{AB}$ has an inverse. Due to the $ad$-invariance of $(\hspace{-0.1cm}(\cdot,\cdot)\hspace{-0.1cm})$, the structure constants $F_{ABC}:={F_{AB}}^D G_{CD}$ are totally antisymmetric, as in the real case. Here, we have to use multi-indices of two types: $I=am$ and $\dot I=\,\!^a_\mdt$. Correspondingly, the chiral superfields read as $\Phi^I$ and $\Phi_{\dot I}$ and their conjugates are ${\bar{\Phi}}_I$ and ${\bar{\Phi}}^{\dot I}$; in writing this, we are implicitly using the metric $h_{ab}$ as we did in the real setting. With these conventions, the propagators are essentially the same as those listed in \eqref{eq:propagators}. The vertices are displayed in Appendix \ref{app:vertices} Everything else like regularization and power counting works, of course, as in the real setting. 
The beta functions for the two couplings $H_{mn\mdt\ndt}^{(\ell)}$ with $\ell=1,2$ are here given by \begin{subequations} \begin{equation} \begin{aligned} \beta_{mn\mdt\ndt}^{(\ell)}\ &=\ {(Z^{-1})_{mn\mdt\ndt}}^{m'n'\mdt'\ndt'}\gamma_{m'}^{~k}{Z^{(\ell)}_{kn'\mdt'\ndt'}}\, \!^{m''n''\mdt''\ndt''}H_{m''n''\mdt''\ndt''}^{(\ell)}\\ &\kern1cm +\cdots+ {(Z^{-1})_{mn\mdt\ndt}}^{m'n'\mdt'\ndt'}\gamma_{\ndt'}^{~\dot k} {Z^{(\ell)}_{m'n'\mdt'\dot k}}\,\!^{m''n''\mdt''\ndt''} H_{m''n''\mdt''\ndt''}^{(\ell)}\\ &\kern2cm+{((Z^{(\ell)})^{-1})_{mn\mdt\ndt}}\,\!^{m'n'\mdt'\ndt'} \frac{\dd {Z^{(\ell)}_{m'n'\mdt'\ndt'}}\,\!^{m''n''\mdt''\ndt''}}{\dd\log \mu} H_{m''n''\mdt''\ndt''}^{(\ell)}~, \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \gamma_m^{~n}\ =\ \frac12 \frac{\dd(\log Z)_m^{~n}}{\dd\log \mu}\eand \gamma_\mdt^{~\ndt}\ =\ \frac12 \frac{\dd(\log Z)_\mdt^{~\ndt}}{\dd\log \mu} \end{aligned} \end{equation} \end{subequations} denote the anomalous dimensions of the fields $\Phi^m$ and $\Phi^\mdt$ and ${Z^{(\ell)}_{mn\mdt\ndt}}\,\!^{m'n'\mdt'\ndt'}$ is the renormalization of the quartic vertex $\ell$. As in the real case, there is no renormalization of the vertices to two-loop order and we are therefore left with the wave function renormalizations \begin{equation}\label{eq:BetaFunctionH} \beta_{mn\mdt\ndt}^{(\ell)}\ =\ \gamma_m^{~k}R_{kn\mdt\ndt}^{(\ell)}+ \cdots+\gamma_\ndt^{~\dot k}R_{mn\mdt\dot k}^{(\ell)}~. \end{equation} Using the conventions introduced above, the diagrams in Fig.\ \ref{fig:ZDiagrams} yield the following contributions to the wave function renormalization: \begin{subequations} \begin{eqnarray} \kern-1cm \mbox{(a) :}~~~~\delta Z_m^{~n}\! &=&\! -\frac{\log\Lambda}{4\pi^2\kappa^2}\big[k_2+k_1^2\big]\delta_m^{~n}\eand \delta Z_\mdt^{~\ndt}\ =\ -\frac{\log\Lambda}{4\pi^2\kappa^2}\big[k_2+ k_1^2\big]\delta_\mdt^{~\ndt}~,\\ \kern-1cm \mbox{(b) :}~~~~\delta Z_m^{~n}\! &=&\! 
-\frac{\log\Lambda}{4\pi^2}\left[ \big(H_{mk\mdt\ndt}^{(1)} H^{\mdt\ndt kn}_{(1)}-H_{mk\mdt\ndt}^{(1)} H^{\ndt\mdt kn}_{(1)}\big) c_2\cos^2\beta\right.\notag\\ &&\kern1.5cm\left. +\ \big(H_{mk\mdt\ndt}^{(1)} H^{\mdt\ndt kn}_{(1)}+H_{mk\mdt\ndt}^{(1)} H^{\ndt\mdt kn}_{(1)}\big) c'_2\sin^2\beta\right.\notag\\ &&\kern1.5cm\left. +\ \big(H_{mk\mdt\ndt}^{(1)}H^{\mdt\ndt kn}_{(2)}+H_{mk\mdt\ndt}^{(2)} H^{\mdt\ndt kn}_{(1)}\big) \big(c_1\cos\beta+\di c'_1\sin\beta\big)\right.\notag\\ &&\kern1.5cm \left. -\ \big(H_{mk\mdt\ndt}^{(1)}H^{\ndt\mdt kn}_{(2)}+H_{mk\mdt\ndt}^{(2)} H^{\ndt\mdt kn}_{(1)}\big) \big(c_1\cos\beta-\di c'_1\sin\beta\big)\right.\notag\\ &&\kern1.5cm\left.+\ \big(H_{mk\mdt\ndt}^{(2)}H^{\mdt\ndt kn}_{(2)}+d\,H_{mk\mdt\ndt}^{(2)} H^{\ndt\mdt kn}_{(2)}\big)\right],\label{Zb1}\\ \delta Z_\mdt^{~\ndt}\! &=&\! -\frac{\log\Lambda}{4\pi^2}\left[ \big(H^{\ndt\dot kmn}_{(1)} H_{mn\dot k\mdt}^{(1)}-H^{\dot k\ndt mn}_{(1)} H_{mn \dot k\mdt}^{(1)}\big) c_2\cos^2\beta\right.\notag\\ &&\kern1.5cm\left. +\ \big(H^{\ndt\dot kmn}_{(1)} H_{mn\dot k\mdt}^{(1)}+H^{\dot k\ndt mn}_{(1)} H_{mn \dot k\mdt}^{(1)}\big) c'_2\sin^2\beta\right.\notag\\ &&\kern1.5cm\left. +\ \big(H^{\ndt\dot kmn}_{(1)}H_{mn\dot k\mdt}^{(2)}+H^{\ndt\dot k mn}_{(2)} H_{mn\dot k\mdt}^{(1)}\big) \big(c_1\cos\beta+\di c'_1\sin\beta\big)\right.\notag\\ &&\kern1.5cm \left. -\ \big(H^{\dot k\ndt mn}_{(1)}H_{mn\dot k\mdt}^{(2)}+H^{\dot k\ndt mn}_{(2)} H_{nm\dot k\mdt}^{(1)}\big) \big(c_1\cos\beta-\di c'_1\sin\beta\big)\right.\notag\\ &&\kern1.5cm\left.+\ \big(H^{\ndt\dot kmn}_{(2)}H_{mn\dot k\mdt}^{(1)}+d\,H^{\dot k\ndt mn}_{(2)} H_{mn\dot k\mdt}^{(2)}\big)\right],\label{Zb2}\\ \kern-1cm \mbox{(c) :}~~~~\delta Z_m^{~n}\! &=&\! 
-\frac{\log\Lambda}{48\pi^2\kappa^2} \big[2k_2+N_fk_3\big]\delta_m^{~n}\eand \delta Z_\mdt^{~\ndt}\ =\ -\frac{\log\Lambda}{48\pi^2\kappa^2} \big[2k_2+N_fk_3\big]\delta_\mdt^{~\ndt}~, \end{eqnarray} where \begin{equation} \begin{aligned} (\mathscr{O}_1)_I^{~J}\ &:=\ G^{AB}{T_{AI}}^K{T_{BK}}^L G^{CD}{T_{CL}}^M{T_{DM}}^J\ =\ k^2_1\delta_I^{~J}~,\\ (\tilde \mathscr{O}_1)_{\dot I}^{~\dot J}\ &:=\ G^{AB}{T_{A\dot I}}^{\dot K}{T_{B\dot K}}^{\dot L} G^{CD}{T_{C\dot L}}^{\dot M}{T_{D\dot M}}^{\dot J}\ =\ k^2_1\delta_{\dot I}^{~\dot J}~,\\ (\mathscr{O}_2)_I^{~J}\ &:=\ \tfrac14 G^{AC}G^{BD} {F_{AB}}^E {F_{CD}}^F {T_{EI}}^{~K} {T_{FK}}^{~J} \ =\ k_2\delta_I^{~J}~,\\ (\tilde \mathscr{O}_2)_{\dot I}^{~\dot J}\ &:=\ \tfrac14 G^{AC}G^{BD} {F_{AB}}^E {F_{CD}}^F {T_{E\dot I}}^{~\dot K} {T_{F\dot K}}^{~\dot J} \ =\ k_2\delta_{\dot I}^{~\dot J}~,\\ (\mathscr{O}_3)_I^{~J}\ &:=\ \tfrac{2}{N_f}{T_{AK}}^L {T_{BL}}^K G^{AC} G^{BD}{T_{CI}}^M {T_{DM}}^J =\ k_3 \delta_I^{~J}~,\\ (\tilde \mathscr{O}_3)_{\dot I}^{~\dot J}\ &:=\ \tfrac{2}{N_f}{T_{A\dot K}}^{\dot L} {T_{B\dot L}}^{\dot K} G^{AC} G^{BD}{T_{C\dot I}}^{\dot M} {T_{D\dot M}}^{\dot J} =\ k_3 \delta_{\dot I}^{~\dot J}~, \end{aligned} \end{equation} and \begin{equation} {f_{ac}}^{cb}\ =\ c_1\delta_a^{~b}~,~~~f_{acde}f^{edcb}\ =\ -c_2{\delta_a}^b~,~~~ {d_{ac}}^{cb}\ =\ c'_1\delta_a^{~b}~,~~~ d_{acde}d^{edcb}\ =\ -c'_2{\delta_a}^b~ \end{equation} \end{subequations} with $d=\operatorname{dim}\CA$. For the explicit values of the Casimirs $k_i$, $c_i$ and $c'_i$ in the matrix representation $M_{\mathrm{I}_\alpha}^H(N)$, we refer to Appendix \ref{app:UsefulFormulae} Altogether, we obtain the following anomalous dimensions: \begin{subequations}\label{eq:2LoopGammaH} \begin{eqnarray} \gamma_m^{~n}\! &=&\! 
\frac{1}{8\pi^2\kappa^2}\bigg\{\big[k_2+k_1^2+\tfrac{1}{12}(2k_2 +N_fk_3)\big]\delta_m^{~n}\notag\\ &&\kern1cm+~\kappa^2\left[ \big(H_{mk\mdt\ndt}^{(1)} H^{\mdt\ndt kn}_{(1)}-H_{mk\mdt\ndt}^{(1)} H^{\ndt\mdt kn}_{(1)}\big) c_2\cos^2\beta\right.\notag\\ &&\kern1.5cm\left. +\ \big(H_{mk\mdt\ndt}^{(1)} H^{\mdt\ndt kn}_{(1)}+H_{mk\mdt\ndt}^{(1)} H^{\ndt\mdt kn}_{(1)}\big) c'_2\sin^2\beta\right.\notag\\ &&\kern1.5cm\left. +\ \big(H_{mk\mdt\ndt}^{(1)}H^{\mdt\ndt kn}_{(2)}+H_{mk\mdt\ndt}^{(2)} H^{\mdt\ndt kn}_{(1)}\big) \big(c_1\cos\beta+\di c'_1\sin\beta\big)\right.\notag\\ &&\kern1.5cm \left. -\ \big(H_{mk\mdt\ndt}^{(1)}H^{\ndt\mdt kn}_{(2)}+H_{mk\mdt\ndt}^{(2)} H^{\ndt\mdt kn}_{(1)}\big) \big(c_1\cos\beta-\di c'_1\sin\beta\big)\right.\notag\\ &&\kern1.5cm\left.+\ \big(H_{mk\mdt\ndt}^{(2)}H^{\mdt\ndt kn}_{(2)}+d\,H_{mk\mdt\ndt}^{(2)} H^{\ndt\mdt kn}_{(2)}\big)\right]\bigg\}~,\\ \gamma_{\dot m}^{~\dot n}\! &=&\! \frac{1}{8\pi^2\kappa^2}\bigg\{\big[k_2+k_1^2+ \tfrac{1}{12}(2 k_2 +N_fk_3)\big]\delta_m^{~n}\notag\\ &&\kern1cm+~\kappa^2\left[ \big(H^{\ndt\dot kmn}_{(1)} H_{mn\dot k\mdt}^{(1)}-H^{\dot k\ndt mn}_{(1)} H_{mn \dot k\mdt}^{(1)}\big) c_2\cos^2\beta\right.\notag\\ &&\kern1.5cm\left. +\ \big(H^{\ndt\dot kmn}_{(1)} H_{mn\dot k\mdt}^{(1)}+H^{\dot k\ndt mn}_{(1)} H_{mn \dot k\mdt}^{(1)}\big) c'_2\sin^2\beta\right.\notag\\ &&\kern1.5cm\left. +\ \big(H^{\ndt\dot kmn}_{(1)}H_{mn\dot k\mdt}^{(2)}+H^{\ndt\dot k mn}_{(2)} H_{mn\dot k\mdt}^{(1)}\big) \big(c_1\cos\beta+\di c'_1\sin\beta\big)\right.\notag\\ &&\kern1.5cm \left. -\ \big(H^{\dot k\ndt mn}_{(1)}H_{mn\dot k\mdt}^{(2)}+H^{\dot k\ndt mn}_{(2)} H_{nm\dot k\mdt}^{(1)}\big) \big(c_1\cos\beta-\di c'_1\sin\beta\big)\right.\notag\\ &&\kern1.5cm\left.+\ \big(H^{\ndt\dot kmn}_{(2)}H_{mn\dot k\mdt}^{(1)}+d\,H^{\dot k\ndt mn}_{(2)} H_{mn\dot k\mdt}^{(2)}\big)\right]\bigg\}~. \end{eqnarray} \end{subequations} These expressions may be substituted into \eqref{eq:BetaFunctionH} to arrive at the final result for the beta functions. 
As a check, let us consider the ABJM model. In that case we have, $\beta=0$, $N_f=4$, $H_{mn\mdt\ndt}^{(1)}=\lambda\varepsilon_{mn}\varepsilon_{\mdt\ndt}$ for some constant $\lambda$ and $H^{(2)}_{mn\mdt\ndt}=0$. Furthermore, we choose $M_{\mathrm{I}_{\alpha=1}}^H(N)$ and hence $k_1=0$, $k_2=1-N^2 $, $k_3=-2+2N^2$, $c_1=0$ and $c_2=2-2N^2$. Therefore, we find \begin{equation} \begin{aligned} \gamma_m^{~n}\ &=\ \frac{1}{16\pi^2\kappa^2}(1-N^2)\big[1-(4\kappa)^2|\lambda|^2\big]\delta_m^{~n}~,\\ \gamma_\mdt^{~\ndt}\ &=\ \frac{1}{16\pi^2\kappa^2}(1-N^2)\big[1-(4\kappa)^2|\lambda|^2\big]\delta_\mdt^{~\ndt}~, \end{aligned} \end{equation} and thus, we recover precisely the value $|\lambda|=\frac{1}{4\kappa}$ for the ABJM model; see equations \eqref{eq:ABJMmodel}. For $N=2$, this of course agrees with the result \eqref{eq:BLGBetaFunction} as for this particular value of $N$, the ABJM model coincides with the BLG model. As in the real case, the phase of $\lambda$ does not flow (see also Section \ref{sec:DiscAndRes}) and so we can restrict ourselves to the modulus $|\lambda|$. Therefore, the conformal fixed point corresponding to the ABJM model forms an IR fixed point, just like in the case of the BLG model. \subsection{Discussion of the results}\label{sec:DiscAndRes} The above expressions for the anomalous dimensions and the resulting expressions for the beta functions certainly allow for many conformal fixed points depending on the particular choices of the superpotential couplings and of the underlying 3-algebra structure. For this reason, we shall merely discuss two examples. We hope to report on a more thorough analysis elsewhere. In our subsequent discussion, we assume that $N_f=4$. \bigskip \noindent{\it \underline{Real 3-algebras:}} \bigskip \noindent Let us consider $\CA=A_4$. We recall that in this case the Casimirs are given by \begin{equation} k_1\ =\ 0~,~~~k_2\ =\ -3~,~~~k_3\ =\ 6~,~~~c_1\ =\ 0~,~~~c_2\ =\ -6~,~~~c_3\ =\ -6~. 
\end{equation} Furthermore, we take \begin{equation}\label{rels} R_{ijkl}^{(1)}\ =\ \frac{\lambda_1}{\kappa}\,\varepsilon_{ijkl}\eand R_{ijkl}^{(2)}\ =\ \frac{\lambda_2}{\kappa}\, \delta_{ij}\delta_{kl}~, \end{equation} with $\lambda_\ell=r_\ell \de^{\di\varphi_\ell}$. Plugging these values into the expression \eqref{eq:2LoopGammaR} for the two-loop anomalous dimension, one finds that the corresponding beta functions are given by \begin{equation}\label{bet} \beta^{(\ell)}_{ijkl}\ =\ \frac{f(r_1,r_2)}{\kappa^2} R_{ijkl}^{(\ell)}~,{~~~\mbox{with}~~~} f(r_1,r_2)\ :=\ -\frac{3}{4\pi^{2}}\left[1-96\big(6r_1^{2} + r_2^{2}\big)\right]~. \end{equation} The zero-locus $f(r_1,r_2)=0$ defines an ellipse in $\FR^2$, \begin{equation} r_1\ =\ \frac{1}{24}\cos t\eand r_2\ =\ \frac{1}{4\sqrt{6}}\sin t {~~~\mbox{for}~~~} t\ \in\ [0,2\pi)~. \end{equation} We thus obtain a one-parameter family of marginal multi-trace deformations (i.e.\ double-trace in superfields and double- and triple-trace in components) of the BLG model, the latter corresponding to $t=0$. 
\vspace*{2cm} \begin{figure}[h] \hspace{3.5cm} \begin{picture}(240,150) \psfrag{0.00}{\kern-2pt 0.0} \psfrag{0.0}{\kern-2pt 0.0} \psfrag{0.05}{\kern-7pt 0.05} \psfrag{0.10}{\kern-3pt 0.1} \psfrag{0.1}{\kern-3pt 0.1} \psfrag{0.2}{\kern-2pt 0.2} \psfrag{0.15}{\kern-3pt 0.15} \includegraphics[width=90mm]{plotf.eps} \put(0.0,60.0){\makebox(0,0)[c]{$r_2$}} \put(-290.0,150.0){\makebox(0,0)[c]{$f(r_1,r_2)$}} \put(-160.0,15.0){\makebox(0,0)[c]{$r_1$}} \end{picture} \vspace*{-5pt} \caption{The function $f(r_1,r_2)$ capturing the beta functions of single- and multi-trace deformations.}\label{fig2} \end{figure} \vspace*{10pt} Furthermore, \eqref{bet} implies the following equations for the running couplings $\tilde\lambda_\ell=\tilde r_\ell\,\de^{\di \tilde\varphi_\ell}$: \begin{equation} \dot{\tilde{r}}_\ell\ =\ \frac{\tilde{r}_\ell }{\kappa^2}f(\tilde{r}_1,\tilde{r}_2)\eand \tilde{r}_\ell\dot{\tilde{\varphi}}_\ell\ =\ 0~,{~~~\mbox{with}~~~} \tilde{\lambda}_\ell(\mu;\lambda_\ell)\ =\ \lambda_\ell~, \end{equation} where dot means a total derivative with respect to $\log (p/\mu)$. Hence, the phases $\tilde{\varphi}_\ell$ do not flow. To get a more intuitive picture of the situation, we plotted the function $f(r_1,r_2)$ in Fig.\ \ref{fig2} for $r_1,r_2>0$. {}From the figure it is then clear that every point on the fixed point locus of the beta functions corresponds to an IR fixed point of the renormalization group, as the derivative of the function $f(r_1,r_2)$ in the direction of the outward normal of the curve is positive. Notice further that the minimally coupled Chern-Simons matter theory, $r_\ell=0$, is a UV fixed point. Thus, by turning on the above deformation at $r_\ell=0$, the theory flows to one of the points on the curve $f(r_1,r_2)=0$ in the IR. \bigskip \noindent{\it \underline{Hermitian 3-algebras:}} \bigskip \noindent Let us now perform a similar analysis in the Hermitian setting. We take $\CA=M^H_{\mathrm{I}_{\alpha=1}}(N)$. 
In this case, we know that \begin{equation} \begin{aligned} &k_1\ =\ 0~,~~~k_2\ =\ 1-N^2~,~~~k_3\ =\ -2(1-N^2)~,~~~c_1\ =\ 0~,~~~c_2\ =\ 2(1-N^2)~,\\ &\kern4.0cm c'_1\ =\ -2N ~,~~~c'_2\ =\ -2(1+N^2)~ \end{aligned} \end{equation} and $d=N^2$. Let us focus on superpotential couplings of the form \begin{equation} \begin{aligned} H_{mn\mdt\ndt}^{(1)}\ &=\ \frac{\lambda_1}{\kappa}\big[\varepsilon_{mn}\varepsilon_{\mdt\ndt} +\rho(\delta_{(mn\mdt\ndt),(1,2,2,1)}+\delta_{(mn\mdt\ndt),(2,1,1,2)})\big]~,\\ H_{mn\mdt\ndt}^{(2)}\ &=\ \frac{\lambda_2}{\kappa}\, \delta_{m\mdt}\delta_{n\ndt}~. \end{aligned} \end{equation} Note that $\lambda_2$ controls the multi-trace deformations. Substituting these expressions into \eqref{eq:2LoopGammaH} for the two-loop anomalous dimension, we find after some algebraic manipulations that the beta functions \eqref{eq:BetaFunctionH} vanish if ($\lambda_\ell=r_\ell\,\de^{\di \varphi_\ell}$ and $\rho=r_3\,\de^{\di\varphi_3}$) \begin{subequations} \begin{equation}\label{vanish1} a\,r_1^2(r_3^2-4r_3\cos\varphi_3+4)+b\,r_1^2r_3^2+c\, r_2^2+d\,r_1r_2r_3\sin(\varphi_1-\varphi_2+\varphi_3)\ =\ 1~, \end{equation} where \begin{equation} a\ :=\ 4\cos^2\beta~,~~~ b\ :=\ \frac{2N^2+2}{N^2-1}\sin^2\beta~,~~~ c\ :=\ \frac{4N^2+2}{N^2-1}~,~~~ d\ :=\ \frac{8N}{N^2-1}\sin\beta~. \end{equation} \end{subequations} For $\rho=-2$ and $\lambda_2=0$, we find the $\beta$-deformed ABJM model that was discussed in \cite{Imeroni:2008cr} by studying the gravitational dual of the theory while for $\rho=0$ (implying $\beta=0$ without loss of generality) and $\lambda_2\neq0$, we obtain a marginal multi-trace deformation of the ABJM model. \section{Conclusions and outlook}\label{sec:conclusions} In summary, we have described marginal deformations of Chern-Simons matter theories that are based on real and Hermitian 3-algebras. 
In particular, we wrote down the most general superpotentials consisting of single- and multi-trace terms that are i) conformally invariant at the classical level, ii) compatible with $\mathcal{N}=2$ supersymmetry and iii) supergauge invariant. For these superpotential terms, we computed the two-loop beta functions using $\mathcal{N}=2$ supergraph techniques. As familiar from four dimensional SYM theories, supergraphs turned out to be a powerful tool also in the case of supersymmetric Chern-Simons matter theories: The calculation of the two-loop beta functions boiled down to the computation of the three Feynman supergraphs displayed in Fig.\ \ref{fig:ZDiagrams}. We expressed our results concisely in terms of certain ``Casimir invariants'' of the underlying 3-algebra and its associated Lie algebra. Using our expressions for the beta functions, we confirmed conformality of both the BLG and ABJM models. In addition, we discussed $\beta$-deformations of the ABJM model and certain marginal multi-trace deformations of both the BLG and ABJM models, explicitly. We mostly focused on the 3-algebras $M^R_{\rm{III}_{\alpha,\beta}}(N)$ and $M^H_{\rm{I}_\alpha}(N)$, but a similar analysis can easily be carried out for other 3-algebras. Even though real and Hermitian 3-algebras already allow for classes of marginal deformations, we found that not all deformations, and in particular not the $\beta$-deformations of \cite{Imeroni:2008cr}, are captured by 3-brackets. Instead, one has to introduce an associated 3-product, i.e.\ a triple product that transforms covariantly under gauge transformations. This is in the same spirit to what happens in four-dimensional SYM theory, where one replaces the Lie bracket by some deformed bracket. To discuss $\beta$-deformations of the ABJM model, for instance, we were led to introduce the $\beta$-3-bracket \eqref{eq:BetaCommutator}, which is just a special instance of an associated 3-product. 
As far as $\beta$-deformations are concerned, we mainly focused on the Hermitian case. Here, we obtained an independent confirmation of the deformations studied in \cite{Imeroni:2008cr}. Note, however, that more general deformations than the $\beta$-deformations we focused on can in principle be discussed in both the real and Hermitian cases using associated 3-products. The most interesting open question is certainly to what extent our deformations are exactly marginal, or at least, to all orders in perturbation theory. Because of the many simplifications which arise, e.g., due to Lemma 3 of \cite{Gates:1991qn}, one might be able to make precise statements using our superfield formulation. Otherwise, it might be necessary to switch to a different description as, for example, light-cone superspace as done in \cite{Ananth:2006ac} for $\beta$-deformations of $\mathcal{N}=4$ SYM theory. Another point is certainly to study the 't Hooft limit of our deformed theories\footnote{after appropriate rescalings by factors of `$N$', see e.g.~\cite{Witten:2001ua}} and identify all geometries which form their gravitational duals, extending the work of \cite{Imeroni:2008cr}. Vice versa, one could reformulate some of the deformations considered in \cite{Imeroni:2008cr} in terms of 3-algebra language to gain more insight into the 3-algebra structures involved. Finally, it would be interesting to extend the analysis of \cite{Minahan:2008hf,Bak:2008cp} to our deformed BLG-type models and to study a possible correspondence of the dilatation operator in these models to the Hamiltonian of an integrable spin chain, using superspace and 3-algebra language. This is possible, because the operators considered in \cite{Minahan:2008hf} can easily be formulated in terms of the associated 3-products introduced in this work. 
\vspace*{.5cm} \noindent {\bf Acknowledgements.} We are very grateful to Martin Ammon, Sergey Cherkis, Stefano Kovacs and Riccardo Ricci for discussions, questions and suggestions. N.A.\ was supported by the Dutch Foundation for Fundamental Research on Matter (FOM). C.S.\ was supported by an IRCSET Postdoctoral Fellowship. M.W.\ was supported by an STFC Postdoctoral Fellowship and by a Senior Research Fellowship at the Wolfson College, Cambridge, U.K. \section*{Appendices}\setcounter{subsection}{0}\setcounter{equation}{0}\renewcommand{\thesubsection}{\Alph{subsection}. \subsection{Casimirs for matrix representations}\label{app:UsefulFormulae} In this appendix, we discuss the Casimirs $k_i$ and $c_i$ that appear throughout this work for the different matrix representations. \bigskip \noindent{\it \underline{Casimirs $c_i$ and $k_i$ for the real 3-algebra $M_{\rm III_{\alpha,\beta}}^R(N)$:}} \bigskip \noindent The underlying vector space for this real 3-algebras has dimension $N^2$ and one easily finds a basis with elements satisfying the following relations \begin{equation} \tr (\tau_a^T \tau_b)\ =\ \delta_{ab}\ =:\ h_{ab}\eand(\tau_a)_{ij}(\tau_a)_{kl}\ =\ \delta_{ik}\delta_{jl}~. \end{equation} With the above formul\ae{}, the three Casimirs $c_1,c_2$ and $c_3$ defined by \begin{equation} {f_{ac}}^{cb}\ =\ c_1{\delta_a}^b~,~~~ f_{acde}f^{bedc}\ =\ c_2{\delta_a}^b\eand f_{acde}f^{bcde}\ =\ -c_3\delta_a^{~b} \end{equation} can be computed straightforwardly and we obtain \begin{equation} \begin{aligned} &c_1\ =\ (N-1)(\alpha-\beta)~,~~~ c_2\ =\ (N-1)(\alpha^2-2(N-1)\alpha\beta+\beta^2)~,\\ &\kern3cm c_3\ =\ -2N(N-1)(\alpha^2+\beta^2)~. \end{aligned} \end{equation} The Casimirs $k_i$ can similarly be calculated by using identities for the appearing generators of $\frg_\CA\cong\mathfrak{o}(N)\oplus\mathfrak{o}(N)$ together with formula \eqref{signature-real} for the bilinear form $(\hspace{-0.1cm}(\cdot,\cdot)\hspace{-0.1cm})$ on $\frg_\CA$. 
We find here that \begin{equation} k_1\ =\ \tfrac{1}{\sqrt{2}}(\alpha^3+\beta^3)~,~~~k_2\ =\ -\tfrac{1}{4}(\alpha^3+\beta^3)~,~~~k_3\ =\ -\tfrac{1}{2}N(\alpha^6+\beta^6)~. \end{equation} Note that the algebra $A_4$ is a sub-3-algebra of the 3-algebra $M^R_{{\rm III}_{1,-1}}(4)$. In this case, one can compute the Casimirs directly from the structure constants and the fact that $\frg_\CA=\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ and we obtain \begin{equation}\label{A4Casimirs} c_1\ =\ 0~,~~~c_2\ =\ -6~,~~~c_3\ =\ -6~,~~~k_1\ =\ 0~,~~~k_2\ =\ -3~,~~~k_3\ =\ 6~. \end{equation} Analogously, one constructs the Casimirs for the other real 3-algebras $M^R_{\rm I_\alpha}(N)$, $M^R_{\rm II_\alpha}(N)$ and $M^R_{\rm IV_{\alpha,\beta}}(N)$, but we refrain from going into more detail at this point. \bigskip \noindent{\it \underline{Casimirs $c_i$ and $k_i$ for the Hermitian 3-algebra $M^H_{\rm I_\alpha}(N)$:}} \bigskip \noindent The underlying vector space here is spanned by generators of $\sU(N)$ in the fundamental representation. For simplicity, we fix $\alpha=1$, as we did throughout most of the paper. As basis $\tau_a$, we have $N^2$ anti-Hermitian $N\times N$-matrices and we choose them such that we have the following identities: \begin{equation} \tr (\tau_a^\dagger \tau_b)\ =\ \delta_{ab}\ =:\ h_{ab}~,~~~h^{ab}\ =\ \delta^{ab}\eand(\tau_a)_{ij}(\tau_a)_{kl}\ =\ -\delta_{il}\delta_{jk}~. \end{equation} {}From these, one obtains for the Casimirs $c_1,c_2$ and $c_3$, which are defined by \begin{equation} {f_{ac}}^{cb}\ =\ c_1{\delta_a}^b~,~~~ f_{acde}f^{edcb}\ =\ -c_2{\delta_a}^b\eand f_{acde}f^{bcde}\ =\ -c_3\delta_a^{~b}~, \end{equation} the following expressions: \begin{equation}\label{HCasimirs1} c_1\ =\ 0~,~~~c_2\ =\ 2(1-N^2)\eand c_3\ =\ 2(1-N^2)~. \end{equation} In addition, we have \begin{equation}\label{HCasimirs2} k_1\ =\ 0~,~~~k_2\ =\ 1-N^2\eand k_3\ =\ -2(1-N^2)~. 
\end{equation} Recall that $M^H_{\rm I_{\alpha=1}}(2)=A_4$, and the above formul\ae{} \eqref{HCasimirs1} and \eqref{HCasimirs2} reproduce indeed \eqref{A4Casimirs} for $N=2$. \bigskip \noindent{\it \underline{Remarks on the bracket $[A,B;C]_\beta$:}} \bigskip \noindent Recall the form of the $\beta$-3-bracket \begin{equation} [\tau_a,\tau_b;\tau_c]_\beta\ =\ {g_{abc}}^d\tau_d{~~~\mbox{with}~~~} {g_{abc}}^d\ =\ \cos\beta {f_{abc}}^d+\di\sin\beta\, {d_{abc}}^d~. \end{equation} Therefore, apart from the Casimirs $c_i$ we also have the $c'_i$ \begin{equation} {d_{ac}}^{cb}\ =\ c'_1{\delta_a}^b\eand d_{acde}d^{edcb}\ =\ -c'_2{\delta_a}^b~. \end{equation} Explicitly, we obtain the following values: \begin{equation} c'_1\ =\ -2 N\eand c_2'\ =\ -2(1+N^2)~. \end{equation} \subsection{Component form of the actions}\label{app:CFofA} In this appendix we give the component form of the superspace actions in WZ gauge. \bigskip \noindent{\it \underline{Component action in the real case:}} \bigskip \noindent In terms of the component fields \eqref{eq:WZGaugeComponents}, the action \eqref{eq:S0Real} reads as \begin{equation}\label{S_{0}R} \begin{aligned} S^R_0\ =\ &\int \dd^3 x\, \bigg[\eps^{\mu\nu\lambda}(\hspace{-0.1cm}( A_\mu,\dpar_\nu A_\lambda+ \tfrac{1}{3\sqrt{\kappa}}[\hspace{-0.05cm}[ A_\nu,A_\lambda]\hspace{-0.05cm}])\hspace{-0.1cm})- \di (\hspace{-0.1cm}({\bar{\lambda}}_\alpha,\lambda^\alpha)\hspace{-0.1cm})- \di (\hspace{-0.1cm}(\lambda_\alpha,{\bar{\lambda}}^\alpha)\hspace{-0.1cm})- (\hspace{-0.1cm}( D,\sigma)\hspace{-0.1cm})-(\hspace{-0.1cm}( \sigma,D)\hspace{-0.1cm})\\ &~~~~~~+(\bar{F}_i,F^i)- (\nabla_\mu \bar{\phi}_i,\nabla^\mu \phi^i)- \di(\bar{\psi}_i^\alpha,\nabla_{\alpha\beta}\psi^{i\beta})- \tfrac{\di}{\sqrt{\kappa}} (\bar{\phi}_i,D(\phi^i))+ \sqrt{\tfrac{2}{\kappa}} (\bar{\phi}_i,\lambda^\alpha(\psi^i_\alpha))\\ &\kern1cm +\sqrt{\tfrac{2}{\kappa}}({\bar{\lambda}}_\alpha(\bar{\psi}_i^\alpha),\phi^i)+ \tfrac{1}{\kappa}(\bar{\phi}_i,\sigma^2(\phi^i))+ 
\tfrac{1}{\sqrt{\kappa}}(\bar{\psi}_{i\alpha},\sigma(\psi^{i\alpha}))\bigg]~, \end{aligned} \end{equation} where $\nabla_{\alpha\beta}:=\sigma^\mu_{\alpha\beta}\nabla_\mu$. Upon performing the integrals over the fermionic directions, the component form of the superpotential term \eqref{eq:S1Real} is given by \begin{equation}\label{superterms} \begin{aligned} S_1^R\ &=\ -2\int \dd^3x~\bigg\{ R_{ijkl}^{(1)}\left[\big(\phi^l,[\psi^{i\alpha },\psi_\alpha^j,\phi^k]+2[\phi^i,\psi^{j\alpha},\psi_\alpha^k]\big)-2([\phi^i,\phi^j,\phi^k],F^l)\right]\\ &\kern2.6cm +R_{ijkl}^{(2)}\left[(\psi^{i\alpha },\psi_\alpha^j)(\phi^k,\phi^l)+ 2(\psi^{i\alpha},\phi^j)(\psi^k_\alpha,\phi^l)-2(F^i,\phi^j)(\phi^k,\phi^l)\right] \bigg\}\\ &\kern3.6cm+\mathrm{c.c.}~. \end{aligned} \end{equation} The next step is to eliminate the auxiliary fields. After varying $S^R=S_0^R+S_1^R$, we find the following (algebraic) equations of motion for $F^i$, $\bar F_i$, $D$, $\sigma$, $\lambda$ and $\bar\lambda$: \begin{equation}\label{eq:AuxEqReal} \begin{aligned} F^i\ &=\ -4R^{ijkl}_{(1)}[{\bar{\phi}}_l,{\bar{\phi}}_k,{\bar{\phi}}_j]-4R^{ijkl}_{(2)}(\bar\phi_l,\bar\phi_k)\bar\phi_j~,\\ \bar F_i\ &=\ -4R_{ijkl}^{(1)}[\phi^l,\phi^k,\phi^j]-4R_{ijkl}^{(2)}(\phi^l,\phi^k)\phi^j~,\\ D(A)\ &=\ \tfrac{1}{2\kappa}\big[[\phi^i,\sigma({\bar{\phi}}_i),A]- [\sigma(\phi^i),{\bar{\phi}}_i,A]\big] +\tfrac{1}{2\sqrt{\kappa}}[\psi^{i\alpha},{\bar{\psi}}_{i\alpha},A]~,\\ \sigma(A)\ &=\ -\tfrac{\di}{2\sqrt{\kappa}}[\phi^i,{\bar{\phi}}_i,A]~,\\ \lambda_\alpha(A)\ &=\ -\tfrac{\di}{\sqrt{2\kappa}}[{\bar{\psi}}_{i\alpha},\phi^i,A]\eand \bar\lambda_\alpha(A)\ =\ \tfrac{\di}{\sqrt{2\kappa}}[\psi^i_{\alpha},{\bar{\phi}}_i,A] ~, \end{aligned} \end{equation} and hence $S^R=\int\dd^3x\, \mathcal{L}^R $ with \begin{equation}\label{eq:S01RComp} \begin{aligned} \mathcal{L}^R\ &=\ \eps^{\mu\nu\lambda}(\hspace{-0.1cm}( A_\mu,\dpar_\nu A_\lambda+\tfrac{1}{3\sqrt{\kappa}}[\hspace{-0.05cm}[ 
A_\nu,A_\lambda]\hspace{-0.05cm}])\hspace{-0.1cm})-\big|\nabla^\mu\phi^i\big|^2- \di\big(\bar{\psi}_i^\alpha,\nabla_{\alpha\beta} \psi^{i\beta}\big)\\[3pt] &\kern.5cm+\tfrac{1}{4\kappa^2}\big([\phi^i,{\bar{\phi}}_i,{\bar{\phi}}_k],[\phi^j,{\bar{\phi}}_j,\phi^k]\big)+ \tfrac{\di}{2\kappa}\left({\bar{\psi}}_j^{\alpha},[\phi^i,{\bar{\phi}}_i,\psi^j_\alpha]\right)+\tfrac{\di}{\kappa}\left([{\bar{\psi}}^\alpha_j,\phi^j,{\bar{\phi}}_i],\psi^i_\alpha\right)\\[3pt] &\kern.5cm-2 R_{ijkl}^{(1)}\big(\phi^l,[\psi^{i\alpha},\psi_\alpha^j,\phi^k]+2[\phi^i, \psi^{j\alpha},\psi_\alpha^k]\big)\\[3pt] &\kern.5cm-2R^{ijkl}_{(1)}\big({\bar{\phi}}_l,[{\bar{\psi}}_{i\alpha},{\bar{\psi}}^\alpha_j,{\bar{\phi}}_k]+2[{\bar{\phi}}_i,{\bar{\psi}}_{j\alpha },{\bar{\psi}}^\alpha_k]\big)\\[3pt] &\kern.5cm-2R_{ijkl}^{(2)}\Big[(\psi^{i\alpha },\psi_\alpha^j)(\phi^k,\phi^l)+ 2(\psi^{i\alpha},\phi^j)(\psi^k_\alpha,\phi^l)\Big]\\[3pt] &\kern.5cm-2R^{ijkl}_{(2)}\Big[(\bar\psi_{i\alpha },\bar\psi^\alpha_j)({\bar{\phi}}_k,{\bar{\phi}}_l)+ 2(\bar\psi_{i\alpha},{\bar{\phi}}_j)(\bar\psi_k^\alpha,{\bar{\phi}}_l)\Big]\\[3pt] &\kern.5cm-16\left|R_{ijkl}^{(1)}[\phi^l,\phi^k,\phi^j]+R_{ijkl}^{(2)}(\phi^l,\phi^k)\phi^j \right|^2~, \end{aligned} \end{equation} where $|A|^2:=(\bar A,A)$ for any $A\in\CA$. 
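The $-16\left|R^{(1)}_{ijkl}[\phi^l,\phi^k,\phi^j]+R^{(2)}_{ijkl}(\phi^l,\phi^k)\phi^j\right|^2$ potential term is the standard outcome of eliminating an auxiliary field that appears at most quadratically. Schematically, writing $J$ for the cubic combination above, the relevant part of the Lagrangian is $\bar F F+4(\bar J F+\bar F J)$, which reduces on-shell to $-16\,\bar J J$. A tiny symbolic check of this bookkeeping (the scalar placeholders $F$, $J$ are ours and stand in for the $\CA$-valued fields):

```python
import sympy as sp

F, Fb, J, Jb = sp.symbols('F Fbar J Jbar')

# Schematic auxiliary-field Lagrangian: |F|^2 plus the linear couplings
# 4*(Jbar*F + Fbar*J) read off from the superpotential terms.
L = Fb * F + 4 * (Jb * F + Fb * J)

# Algebraic equations of motion: dL/dF = 0 and dL/dFbar = 0.
sol = sp.solve([sp.diff(L, F), sp.diff(L, Fb)], [F, Fb], dict=True)[0]
assert sol[F] == -4 * J and sol[Fb] == -4 * Jb   # mirrors F^i = -4(...)

# Substituting back yields the -16 |J|^2 contribution to the potential.
L_onshell = sp.expand(L.subs(sol))
print(L_onshell)                                  # -16*J*Jbar
```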
For the reader's convenience, we finally extract the multi-trace terms explicitly: \begin{equation} \begin{aligned} \mathcal{L}_{mult}^R\ &=\ -2R_{ijkl}^{(2)}\Big[(\psi^{i\alpha },\psi_\alpha^j)(\phi^k,\phi^l)+ 2(\psi^{i\alpha},\phi^j)(\psi^k_\alpha,\phi^l)\Big]\\[3pt] &\kern.5cm-2R^{ijkl}_{(2)}\Big[(\bar\psi_{i\alpha },\bar\psi^\alpha_j)({\bar{\phi}}_k,{\bar{\phi}}_l)+ 2(\bar\psi_{i\alpha},{\bar{\phi}}_j)(\bar\psi_k^\alpha,{\bar{\phi}}_l)\Big]\\[3pt] &\kern.5cm-16\left[R_{ijkl}^{(1)}R^{ij'k'l'}_{(2)}([\phi^l,\phi^k,\phi^j],\bar\phi_{j'}) (\bar\phi_{l'},\bar\phi_{k'})\right.\\[3pt] &\kern2cm + R_{ijkl}^{(2)}R^{ij'k'l'}_{(1)}([\bar\phi_{l'},\bar\phi_{k'},\bar\phi_{j'}],\phi^j) (\phi^l,\phi^k)\\[3pt] &\kern2cm\left.+\,R_{ijkl}^{(2)}R^{ij'k'l'}_{(2)}(\phi^l,\phi^k)(\bar\phi_{l'},\bar\phi_{k'}) (\phi^j,\bar\phi_{j'})\right]. \end{aligned} \end{equation} \bigskip \noindent{\it \underline{Component action in the Hermitian case:}} \bigskip \noindent Let us now discuss the Hermitian case. Here, we shall assume that $\beta=0$, i.e.\ we work with the usual Hermitian 3-bracket in the superpotential. 
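Before writing out the component action, it may be useful to record a numerical sanity check of the Hermitian 3-bracket conventions. Assuming the bracket $[A,B;C]=AC^\dagger B-BC^\dagger A$ (the $\alpha=1$ case) on anti-Hermitian generators normalized as in Appendix A, the contractions reproduce $c_1=0$ and $c_3=2(1-N^2)$ of \eqref{HCasimirs1}; the explicit bracket and index conventions spelled out in the code are our reading of the text and should be adjusted if a different convention is used:

```python
import numpy as np

N = 3
# Anti-Hermitian basis tau_a of u(N) with tr(tau_a^dag tau_b) = delta_ab:
# tau_a = i T_a for an orthonormal Hermitian basis T_a.
basis = []
for i in range(N):
    for j in range(N):
        T = np.zeros((N, N), dtype=complex)
        if i == j:
            T[i, i] = 1.0
        elif i < j:                          # symmetric off-diagonal generators
            T[i, j] = T[j, i] = 1.0 / np.sqrt(2)
        else:                                # antisymmetric off-diagonal generators
            T[j, i] = 1j / np.sqrt(2)
            T[i, j] = -1j / np.sqrt(2)
        basis.append(1j * T)
tau = np.array(basis)
n2 = N * N

def bracket(A, B, C):
    """Hermitian 3-bracket [A, B; C] = A C^dag B - B C^dag A (alpha = 1)."""
    Cd = C.conj().T
    return A @ Cd @ B - B @ Cd @ A

# Structure constants f_{abcd} = tr(tau_d^dag [tau_a, tau_b; tau_c]).
f = np.zeros((n2, n2, n2, n2), dtype=complex)
for a in range(n2):
    for b in range(n2):
        for c in range(n2):
            br = bracket(tau[a], tau[b], tau[c])
            for d in range(n2):
                f[a, b, c, d] = np.trace(tau[d].conj().T @ br)

# c_1 from f_{ac}^{cb} = c_1 delta_a^b and c_3 from f_{acde} f^{bcde} = -c_3 delta_a^b,
# raising indices with h^{ab} = delta^{ab} together with complex conjugation.
c1_mat = np.einsum('accb->ab', f)
c3_mat = -np.einsum('acde,bcde->ab', f, f.conj())

print(np.allclose(c1_mat, 0), np.allclose(c3_mat, 2 * (1 - N**2) * np.eye(n2)))
```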
In terms of the component fields \eqref{eq:WZGaugeComponents}, the action \eqref{eq:S0Complex} reads as \begin{equation}\label{eq:S0H} \begin{aligned} S^H_0\ &=\ \int \dd^3 x\, \bigg[ \eps^{\mu\nu\lambda}(\hspace{-0.1cm}( A_\mu,\dpar_\nu A_\lambda+\tfrac{1}{3\sqrt{\kappa}}[\hspace{-0.05cm}[ A_\nu,A_\lambda]\hspace{-0.05cm}])\hspace{-0.1cm}) -\di (\hspace{-0.1cm}(\lambda_\alpha,\bar\lambda^\alpha)\hspace{-0.1cm})-\di(\hspace{-0.1cm}({\bar{\lambda}}_\alpha,\lambda^\alpha)\hspace{-0.1cm}) -(\hspace{-0.1cm}( D,\sigma)\hspace{-0.1cm})-(\hspace{-0.1cm}( \sigma,D)\hspace{-0.1cm})\\ &\kern1cm +(F^m,F^m)+(\bar{F}_\mdt,\bar{F}_\mdt) -(\nabla_\mu \phi^m,\nabla^\mu \phi^m) -(\nabla_\mu \bar{\phi}_\mdt,\nabla^\mu \bar{\phi}_\mdt) -\di(\psi^{m\alpha},\nabla_{\alpha\beta}\psi^{m\beta})\\ &\kern1.5cm +\di(\bar{\psi}^{m\alpha},\nabla_{\alpha\beta}\bar{\psi}^{m\beta}) -\tfrac{\di}{\sqrt{\kappa}}(\phi^m,D(\phi^m)) +\tfrac{\di}{\sqrt{\kappa}}(\bar{\phi}_\mdt,D(\bar{\phi}_\mdt)) +\sqrt{\tfrac{2}{\kappa}}(\phi^m,\lambda^\alpha(\psi^m_\alpha))\\ &\kern2cm -\sqrt{\tfrac{2}{\kappa}}({\bar{\psi}}_\mdt^{\alpha},\lambda_\alpha({\bar{\phi}}_\mdt)) +\sqrt{\tfrac{2}{\kappa}}(\lambda^\alpha(\psi^m_\alpha),\phi^m)\\ &\kern2.5cm -\sqrt{\tfrac{2}{\kappa}}(\lambda^\alpha({\bar{\phi}}_\mdt),{\bar{\psi}}_{\mdt\alpha}) +\tfrac{1}{\kappa}(\phi^m,\sigma^2(\phi^m))\\ &\kern3cm +\tfrac{1}{\kappa}({\bar{\phi}}_\mdt,\sigma^2({\bar{\phi}}_\mdt)) +\tfrac{1}{\sqrt{\kappa}}(\psi^m_\alpha,\sigma(\psi^{m\alpha})) -\tfrac{1}{\sqrt{\kappa}}({\bar{\psi}}_{\mdt}^\alpha,\sigma({\bar{\psi}}_\mdt^{\alpha}))\bigg]~. 
\end{aligned} \end{equation} In component form, the superpotential terms \eqref{eq:S1Complex} are given by \begin{equation}\label{eq:S1H} \begin{aligned} S_1^H\ &=\ -2\int \dd^3x~\bigg\{ H_{mn\mdt\ndt}^{(1)}\bigg[ (\bar F_\mdt,[\phi^m,\phi^n;\bar\phi_\ndt])+([\bar\phi_\mdt,\bar\phi_\ndt;\phi^n],F^m)\\ &\kern3cm +(\bar\psi_{\ndt\alpha},[\psi^{m\alpha},\phi^n;\bar\phi_\mdt]) +(\bar\phi_\ndt,[\psi^m_\alpha,\phi^n;\bar\psi_\mdt^\alpha])\\ &\kern3.5cm -\tfrac12(\bar\psi_{\ndt\alpha},[\phi^m,\phi^n;\bar\psi^\alpha_\mdt]) -\tfrac12(\bar\phi_\ndt,[\psi^m_\alpha,\psi^{n\alpha};\bar\phi_\mdt])\bigg]\\ &\kern2.2cm+H_{mn\mdt\ndt}^{(2)}\bigg[ -(\bar F_\mdt,\phi^m)(\bar\phi_\ndt,\phi^n)-(\bar\phi_\mdt,F^m)(\bar\phi_\ndt,\phi^n)\\ &\kern3cm +(\bar\psi_{\mdt\alpha},\psi^{m\alpha})(\bar\phi_\ndt,\phi^n) +(\bar\psi_{\mdt\alpha},\phi^m)(\bar\phi_\ndt,\psi^{n\alpha})\\ &\kern3.5cm -\tfrac12(\bar\psi_{\mdt\alpha},\phi^m)(\bar\psi_{\ndt}^\alpha,\phi^n) -\tfrac12(\bar\phi_\mdt,\psi^m_\alpha)(\bar\phi_\ndt,\psi^{n\alpha})\bigg]\bigg\} +\mathrm{c.c.}~.
\end{aligned} \end{equation} Varying $S^H=S_0^H+S_1^H$, we find the following (algebraic) equations of motion for the auxiliary fields $F^m$, $\bar F_m$, $F^\mdt$, $\bar F_\mdt$, $D$, $\sigma$, $\lambda$ and $\bar\lambda$: \begin{equation}\label{eq:AuxEqComplex} \begin{aligned} F^m\ &=\ 2 H^{\mdt\ndt mn}_{(1)}[{\bar{\phi}}_\mdt,{\bar{\phi}}_\ndt;\phi^n] -2 H^{\mdt\ndt mn}_{(2)}{\bar{\phi}}_\mdt(\phi^n,{\bar{\phi}}_\ndt)~,\\ \bar F_m\ &=\ -2H_{mn\mdt\ndt}^{(1)}[\phi^\mdt,\phi^\ndt;{\bar{\phi}}_n] -2H_{mn\mdt\ndt}^{(2)}\phi^\mdt({\bar{\phi}}_\ndt,\phi^n)~,\\ F^\mdt\ &=\ -2 H^{\mdt\ndt mn}_{(1)}[{\bar{\phi}}_m,{\bar{\phi}}_n;\phi^\mdt] -2 H^{\mdt\ndt mn}_{(2)}{\bar{\phi}}_m(\phi^\ndt,{\bar{\phi}}_n)~,\\ \bar F_\mdt\ &=\ 2H_{mn\mdt\ndt}^{(1)}[\phi^m,\phi^n;{\bar{\phi}}_\ndt] -2H_{mn\mdt\ndt}^{(2)}\phi^m({\bar{\phi}}_n,\phi^\ndt)~,\\ D(A)\ &=\ \tfrac{1}{2\kappa}\Big([A,\sigma(\phi^m);\phi^m] +[A,\sigma({\bar{\phi}}_\mdt);{\bar{\phi}}_\mdt] -[A,\phi^m;\sigma(\phi^m)] -[A,{\bar{\phi}}_\mdt;\sigma({\bar{\phi}}_\mdt)]\Big)\\ &\kern3cm -\tfrac{1}{2\sqrt{\kappa}}\Big([A,\psi^{m\alpha};\psi^{m}_\alpha] -[A,{\bar{\psi}}_\mdt^{\alpha};{\bar{\psi}}_{\mdt\alpha}]\Big),\\ \sigma(A)\ &=\ -\tfrac{\di}{2\sqrt{\kappa}}\left([A,\phi^m;\phi^m]-[A,{\bar{\phi}}_{\mdt};{\bar{\phi}}_\mdt]\right),\\ \lambda_\alpha(A)\ &=\ \tfrac{\di(-1)^{\tilde{A}}}{\sqrt{2\kappa}} \left([A,\phi^m;\psi^{m}_\alpha]-[A,{\bar{\psi}}_{\mdt\alpha};{\bar{\phi}}_{\mdt}]\right),\\ \bar\lambda_\alpha(A)\ &=\ \tfrac{\di(-1)^{\tilde{A}}}{\sqrt{2\kappa}} \left([A,\psi^m_\alpha;\phi^m]-[A,{\bar{\phi}}_\mdt;{\bar{\psi}}_{\mdt\alpha}]\right), \end{aligned} \end{equation} where $A$ is an arbitrary field taking values in $\CA$. We may now substitute these expressions into equations \eqref{eq:S0H} and \eqref{eq:S1H} to arrive at the final expression for the component action. 
Since this is a rather lengthy expression and moreover basically of the same form as \eqref{eq:S01RComp}, we shall not display the full action here but only give the multi-trace terms: \begin{equation} \begin{aligned} \mathcal{L}^H_{mult}\ &=\ -2 H_{mn\mdt\ndt}^{(2)}\Big[ (\bar\psi_{\mdt\alpha},\psi^{m\alpha})(\bar\phi_\ndt,\phi^n) +(\bar\psi_{\mdt\alpha},\phi^m)(\bar\phi_\ndt,\psi^{n\alpha})\\ &\kern3cm -\tfrac12(\bar\psi_{\mdt\alpha},\phi^m)(\bar\psi_{\ndt}^\alpha,\phi^n) -\tfrac12(\bar\phi_\mdt,\psi^m_\alpha)(\bar\phi_\ndt,\psi^{n\alpha})\Big]\\ &\kern.5cm -2 H^{\mdt\ndt mn}_{(2)}\Big[ (\psi^{m\alpha},\bar\psi_{\mdt\alpha})(\phi^n,\bar\phi_\ndt) +(\psi^{n\alpha},\bar\phi_\ndt)(\phi^m,\bar\psi_{\mdt\alpha})\\ &\kern3cm -\tfrac12(\phi^n,\bar\psi_{\ndt}^\alpha)(\phi^m,\bar\psi_{\mdt\alpha}) -\tfrac12(\psi^{n\alpha},\bar\phi_\ndt)(\psi^m_\alpha,\bar\phi_\mdt)\Big]\\ &\kern.5cm +4H_{mn\mdt\ndt}^{(1)}H^{\mdt'\ndt'mn'}_{(2)} ([{\bar{\phi}}_\mdt,{\bar{\phi}}_\ndt;\phi^n],{\bar{\phi}}_{\mdt'})(\phi^{n'},{\bar{\phi}}_{\ndt'})\\ &\kern.5cm +4H_{mn\mdt\ndt}^{(2)}H^{\mdt'\ndt'mn'}_{(1)} ({\bar{\phi}}_\mdt,[{\bar{\phi}}_{\mdt'},{\bar{\phi}}_{\ndt'};\phi^{n'}])({\bar{\phi}}_\ndt,\phi^n)\\ &\kern.5cm -4H_{mn\mdt\ndt}^{(2)}H^{\mdt'\ndt'mn'}_{(2)} ({\bar{\phi}}_\mdt,{\bar{\phi}}_{\mdt'})({\bar{\phi}}_\ndt,\phi^n)(\phi^{n'},{\bar{\phi}}_{\ndt'})\\ &\kern.5cm +4H^{\mdt\ndt mn}_{(1)}H_{m'n'\mdt\ndt'}^{(2)} ([\phi^m,\phi^n;{\bar{\phi}}_\ndt],\phi^{m'})({\bar{\phi}}_{\ndt'},\phi^{n'})\\ &\kern.5cm +4H^{\mdt\ndt mn}_{(2)}H_{m'n'\mdt\ndt'}^{(1)} (\phi^m,[\phi^{m'},\phi^{n'};{\bar{\phi}}_{\ndt'}])(\phi^n,{\bar{\phi}}_\ndt)\\ &\kern.5cm -4H^{\mdt\ndt mn}_{(2)}H_{m'n'\mdt\ndt'}^{(2)} (\phi^m,\phi^{m'})(\phi^n,{\bar{\phi}}_\ndt)({\bar{\phi}}_{\ndt'},\phi^{n'})~. 
\end{aligned} \end{equation} \subsection{Feynman rules: Vertices}\label{app:vertices} \bigskip \noindent{\it \underline{Vertices for real 3-algebras:}} \bigskip \noindent Let us list the Feynman rules for the vertices in Landau gauge $\alpha\beta\to0$. They are: \begin{subequations}\label{eq:Vertex} \vspace*{-40pt} \begin{eqnarray} \kern-1cm\mbox{$V^3$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \Photon(25,25)(60.36,25){3}{6} \Photon(0,0)(25,25){-3}{6} \Photon(25,25)(0,50){-3}{6} \Text(63.36,25)[l]{$A_1$} \Text(0,52)[br]{$A_2$} \Text(0,-2)[tr]{$A_3$} \Text(1,50)[bl]{$\searrow k_2$} \Text(4,4)[tl]{$\nearrow k_3$} \Text(40,33)[tl]{$\longleftarrow$} \Text(25,45)[tl]{$-k_2-k_3$} \Text(25,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt \tfrac{2\di}{\sqrt{\kappa}}V_{A_1A_2A_3}~,\label{eq:VVV-Vertex} \\ \kern-1cm\mbox{$\Phi V {\bar{\Phi}}$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \Photon(25,25)(60.36,25){3}{6} \ArrowLine(0,0)(25,25) \ArrowLine(25,25)(0,50) \Text(63.36,25)[l]{$A$} \Text(0,52)[br]{$J$} \Text(0,-2)[tr]{$I$} \Text(25,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt \di \tfrac{-2\di}{\sqrt{\kappa}} {T_{AI}}^J~,\label{eq:VPP-Vertex}\\ \kern-1cm\mbox{$\Phi V^2 {\bar{\Phi}}$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \Photon(25,25)(50,0){3}{6} \Photon(25,25)(50,50){-3}{6} \ArrowLine(0,0)(25,25) \ArrowLine(25,25)(0,50) \Text(0,52)[br]{$J$} \Text(0,-2)[tr]{$I$} \Text(52,-2)[tl]{$A$} \Text(52,52)[bl]{$B$} \Text(22,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt {\di}\big(\tfrac{-2\di}{\sqrt{\kappa}}\big)^2 {T_{(AI}}^K{T_{B)K}}^J~, \label{eq:VPPPP-Vertex}\\ \kern-1cm\mbox{$\Phi^4$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \ArrowLine(50,0)(25,25) \ArrowLine(50,50)(25,25) \ArrowLine(0,0)(25,25) \ArrowLine(0,50)(25,25) \Text(25,15)[l]{$\theta$} 
\Text(0,52)[br]{$L$} \Text(0,-2)[tr]{$I$} \Text(52,-2)[tl]{$J$} \Text(52,52)[bl]{$K$} } \end{picture} \kern20pt=\kern20pt \di 4! R_{IJKL}~,\label{eq:PPPP-Vertex}\\[20pt]\notag \end{eqnarray} \vspace*{-75pt} \begin{eqnarray} \kern-1cm\mbox{${\bar{\Phi}}^4$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \ArrowLine(25,25)(50,0) \ArrowLine(25,25)(50,50) \ArrowLine(25,25)(0,0) \ArrowLine(25,25)(0,50) \Text(0,52)[br]{$L$} \Text(0,-2)[tr]{$I$} \Text(52,-2)[tl]{$J$} \Text(52,52)[bl]{$K$} \Text(25,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt \di 4! R^{IJKL}~,\label{eq:PPPPbar-Vertex}\\ \kern-1cm\mbox{ghost/gluon-vertices :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \Photon(25,25)(60.36,25){3}{6} \DashLine(0,0)(25,25){4} \DashLine(25,25)(0,50){4} \Text(63.36,25)[l]{$A$} \Text(0,52)[br]{$B$} \Text(0,-2)[tr]{$C$} \Text(25,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt \di(-1)^{\#}\tfrac{\di}{\sqrt{\kappa}}F_{ABC}~.\label{eq:VG-Vertex} \end{eqnarray} \vspace*{10pt} \end{subequations} \noindent Here, `$\#$' is the number of antichiral (anti)ghosts entering the vertex. We have not put arrows on the ghost lines, since there are vertices at which only chiral ghosts enter, vertices at which only antichiral ghosts enter, and vertices at which both types enter. Furthermore, \begin{subequations} \begin{equation} V_{A_1A_2A_3}\ =\ \sum_{r,s}F_{A_1A_rA_s}[{\bar{D}}(-k_r,\theta)\Delta_{r\theta}(k_r)][D(-k_s,\theta)\Delta_{s\theta}(k_s)] \end{equation} with \begin{equation} \Delta_{ij}(k_i)\ :=\ -\tfrac{\di}{4k_i^2}{\bar{D}} D(k_i,\theta_i)\delta_{ij}\eand \delta_{ij}\ :=\ \delta^{(4)}(\theta_i-\theta_j)~.
\end{equation} \end{subequations} The coefficients appearing in \eqref{eq:PPPP-Vertex} and \eqref{eq:PPPPbar-Vertex} are \begin{equation}\label{eq:DefOfSymR} \begin{aligned} R_{IJKL}\ &=\ \big[R_{ijkl}^{(1)}f_{abcd}+R_{ijkl}^{(2)}h_{ab}h_{cd}\big]_s\\ &=\ \tfrac{1}{3}\big(R_{ijkl}^{(1)}f_{abcd}+R_{iklj}^{(1)}f_{acdb} +R_{iljk}^{(1)}f_{adbc}+R_{ijkl}^{(2)}h_{ab}h_{cd}+R_{iklj}^{(2)}h_{ac}h_{db} +R_{iljk}^{(2)}h_{ad}h_{bc}\big)~,\\ R^{IJKL}\ &=\ \big[R^{ijkl}_{(1)}f^{abcd}+R^{ijkl}_{(2)}h^{ab}h^{cd}\big]_s\\ &=\ \tfrac{1}{3}\big(R^{ijkl}_{(1)}f^{abcd}+R^{iklj}_{(1)}f^{acdb} +R^{iljk}_{(1)}f^{adbc}+R^{ijkl}_{(2)}h^{ab}h^{cd}+R^{iklj}_{(2)}h^{ac}h^{db} +R^{iljk}_{(2)}h^{ad}h^{bc}\big)~. \end{aligned} \end{equation} The subscript `$s$' refers to total symmetrization in the multi-indices $IJKL$. \bigskip \noindent{\it \underline{Vertices for Hermitian 3-algebras:}} \bigskip \noindent In the Hermitian case, the Feynman rules for the vertices are very similar to the ones for real 3-algebras. The purely gluonic and gluon/ghost vertices are the same and we shall again adopt Landau gauge. The only difference is in the gluon/matter and pure matter vertices, since we have two different types of matter: $\Phi^I$ and $\Phi_{\dot I}$. 
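Before listing these vertices, it may help to make the symmetrization $[\,\cdot\,]_s$, used for both the real and the Hermitian couplings, concrete: it averages over the cyclic permutations of the last three (multi-)indices, so the resulting tensor is invariant under these cyclic shifts, while the remaining symmetries needed for total symmetry in $IJKL$ are inherited from those of $R^{(1)}_{ijkl}$, $f_{abcd}$, etc. The averaging itself can be phrased generically (random placeholder tensor, ours):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 5, 5, 5))   # placeholder for R^{(1)}_{ijkl} f_{abcd} etc.

# Average over the cyclic permutations of the last three slots, mirroring the
# 1/3 * (T_{ijkl} + T_{iklj} + T_{iljk}) structure of the symmetrized couplings.
T_s = (T
       + T.transpose(0, 3, 1, 2)        # contributes T[i, k, l, j]
       + T.transpose(0, 2, 3, 1)        # contributes T[i, l, j, k]
       ) / 3.0

# The cyclic average is invariant under any further cyclic shift of (j, k, l).
print(np.allclose(T_s, T_s.transpose(0, 3, 1, 2)),
      np.allclose(T_s, T_s.transpose(0, 2, 3, 1)))
```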
We have \begin{subequations}\label{eq:VertexComplex} \vspace*{-40pt} \begin{eqnarray} \kern-1cm\mbox{$\Phi V {\bar{\Phi}}$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \Photon(25,25)(60.36,25){3}{6} \ArrowLine(0,0)(25,25) \ArrowLine(25,25)(0,50) \Text(63.36,25)[l]{$A$} \Text(0,52)[br]{$J$} \Text(0,-2)[tr]{$I$} \Text(25,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt \di \tfrac{-2\di}{\sqrt{\kappa}} {T_{AI}}^J~,\\ \kern-1cm\mbox{$\Phi V {\bar{\Phi}}$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \Photon(25,25)(60.36,25){3}{6} \ArrowLine(0,0)(25,25) \ArrowLine(25,25)(0,50) \Text(63.36,25)[l]{$A$} \Text(0,52)[br]{$\dot J$} \Text(0,-2)[tr]{$\dot I$} \Text(25,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt \di \tfrac{2\di}{\sqrt{\kappa}} {T_{A\dot I}}^{\dot J}~,\\[20pt]\notag \end{eqnarray} \vspace*{20pt} \vspace*{-90pt} \begin{eqnarray} \kern-1cm\mbox{$\Phi V^2 {\bar{\Phi}}$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \Photon(25,25)(50,0){3}{6} \Photon(25,25)(50,50){-3}{6} \ArrowLine(0,0)(25,25) \ArrowLine(25,25)(0,50) \Text(0,52)[br]{$J$} \Text(0,-2)[tr]{$I$} \Text(52,-2)[tl]{$A$} \Text(52,52)[bl]{$B$} \Text(22,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt {\di}\big(\tfrac{-2\di}{\sqrt{\kappa}}\big)^2 {T_{(AI}}^K{T_{B)K}}^J~,\\ \kern-1cm\mbox{$\Phi V^2 {\bar{\Phi}}$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \Photon(25,25)(50,0){3}{6} \Photon(25,25)(50,50){-3}{6} \ArrowLine(0,0)(25,25) \ArrowLine(25,25)(0,50) \Text(0,52)[br]{$\dot J$} \Text(0,-2)[tr]{$\dot I$} \Text(52,-2)[tl]{$A$} \Text(52,52)[bl]{$B$} \Text(22,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt {\di}\big(\tfrac{2\di}{\sqrt{\kappa}}\big)^2 {T_{(A\dot I}}^{\dot K}{T_{B)\dot K}}^{\dot J}~,\\ \kern-1cm\mbox{$\Phi^4$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ 
\Vertex(25,25){1} \ArrowLine(50,0)(25,25) \ArrowLine(50,50)(25,25) \ArrowLine(0,0)(25,25) \ArrowLine(0,50)(25,25) \Text(25,15)[l]{$\theta$} \Text(0,52)[br]{$\dot L$} \Text(0,-2)[tr]{$I$} \Text(52,-2)[tl]{$J$} \Text(52,52)[bl]{$\dot K$} } \end{picture} \kern20pt=\kern20pt \di 4 {H_{IJ}}^{\dot K\dot L}~,\\ \kern-1cm\mbox{${\bar{\Phi}}^4$-vertex :}&&\kern30pt \begin{picture}(70,70)(0,20) \SetScale{1} \put(0,0){ \Vertex(25,25){1} \ArrowLine(25,25)(50,0) \ArrowLine(25,25)(50,50) \ArrowLine(25,25)(0,0) \ArrowLine(25,25)(0,50) \Text(0,52)[br]{$\dot L$} \Text(0,-2)[tr]{$I$} \Text(52,-2)[tl]{$J$} \Text(52,52)[bl]{$\dot K$} \Text(25,15)[l]{$\theta$} } \end{picture} \kern20pt=\kern20pt \di {4 H_{\dot K\dot L}}^{IJ}~, \end{eqnarray} \end{subequations} \vspace*{30pt} \noindent where \begin{equation} \begin{aligned} {H_{IJ}}^{\dot K\dot L}\ &=\ \big[H_{mn\mdt\ndt}^{(1)}{g_{ab}}^{cd}+H_{mn\mdt\ndt}^{(2)} \delta_a^{~c}\delta_b^{~d}\big]_s\\ &=\ \tfrac12\big[H_{mn\mdt\ndt}^{(1)}{g_{ab}}^{cd}+H_{mn\ndt\mdt}^{(1)}{g_{ab}}^{dc}+H_{mn\mdt\ndt}^{(2)} \delta_a^{~c}\delta_b^{~d}+H_{mn\ndt\mdt}^{(2)} \delta_a^{~d}\delta_b^{~c}\big]~,\\ {H_{\dot K\dot L}}^{IJ}\ &=\ \big[H^{\ndt\mdt mn}_{(1)}{g_{dc}}^{ab}+H^{\mdt\ndt mn}_{(2)} \delta_c^{~a}\delta_d^{~b}\big]_s\\ &=\ \tfrac12\big[H^{\ndt\mdt mn}_{(1)}{g_{dc}}^{ab}+H^{\mdt\ndt mn}_{(1)}{g_{cd}}^{ab}+ H^{\mdt\ndt mn}_{(2)} \delta_c^{~a}\delta_d^{~b}+H^{\ndt\mdt mn}_{(2)} \delta_d^{~a}\delta_c^{~b}\big]~,\\ \end{aligned} \end{equation} where `$s$' refers again to total symmetrization. \noindent \bigskip \bigskip
\subsubsection{% \@startsection{subsubsection}{3}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {-1.5ex \@plus .2ex}% {\normalfont\normalsize\bfseries}% } \makeatother \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \allowdisplaybreaks \title{Collaborative targeted minimum loss inference from continuously indexed nuisance parameter estimators } \author{Cheng Ju$^1$, Antoine Chambaz$^2$, Mark J. van der Laan$^1$\\[1em] $^1$ Division of Biostatistics, UC Berkeley\\ $^2$ MAP5 (UMR 8145), Universit\'e Paris Descartes} \begin{document} \def\spacingset#1{\renewcommand{\baselinestretch}% {#1}\small\normalsize} \spacingset{1}% \if11 {% \title{\textbf{Collaborative targeted inference from continuously indexed nuisance parameter estimators}} \author{Cheng Ju \hspace{.2cm}\\ Division of Biostatistics, UC Berkeley\\ and \\ Antoine Chambaz\\ MAP5 (UMR 8145), Universit\'e Paris Descartes\\ and \\ Mark J. van der Laan\\ Division of Biostatistics, UC Berkeley } \maketitle } \fi \if01 { \bigskip \bigskip \bigskip \begin{center} \LARGE{\textbf{Collaborative inference from continuously indexed nuisance parameter estimators}} \end{center} \medskip } \fi \bigskip \begin{abstract} Suppose that we wish to infer the value of a statistical parameter at a law from which we sample independent observations. Suppose that this parameter is smooth and that we can define two variation-independent, infinite-dimensional features of the law, its so called $Q$- and $G$-components (comp.), such that if we estimate them consistently at a fast enough product of rates, then we can build a confidence interval (CI) with a given asymptotic level based on a plain targeted minimum loss estimator (TMLE). The estimators of the $Q$- and $G$-comp.\ would typically be by-products of machine learning algorithms. We focus on the case that the machine learning algorithm for the $G$-comp.\ is fine-tuned by a real-valued parameter $h$.
Then, a plain TMLE with an $h$ chosen by cross-validation would typically not lend itself to the construction of a CI, because the selection of $h$ would trade off its empirical bias with something akin to the empirical variance of the estimator of the $G$-comp.\ as opposed to that of the TMLE. A collaborative TMLE (C-TMLE) might, however, succeed in achieving the relevant trade-off. We prove that this is indeed the case. We construct a C-TMLE and show that, under high-level empirical processes conditions, and if there exists an oracle $h$ that makes a bulky remainder term asymptotically Gaussian, then the C-TMLE is asymptotically Gaussian, hence amenable to building a CI provided that its asymptotic variance can be estimated too. The construction hinges on guaranteeing that an additional, well chosen estimating equation is solved on top of the estimating equation that a plain TMLE solves. The optimal $h$ is chosen by cross-validating an empirical criterion that guarantees the wished trade-off between empirical bias and variance. We illustrate the construction and main result with the inference of the so called average treatment effect, where the $Q$-comp.\ consists in a marginal law and a conditional expectation, and the $G$-comp.\ is a propensity score (a conditional probability). We also conduct a multi-faceted simulation study to investigate the empirical properties of the collaborative TMLE when the $G$-comp.\ is estimated by the LASSO. Here, $h$ is the bound on the $\ell^{1}$-norm of the candidate coefficients. The variety of scenarios sheds light on small and moderate sample properties, in the face of low-, moderate- or high-dimensional baseline covariates, and possibly positivity violation.
\end{abstract} \noindent% \textit{Keywords:} cross-validation, empirical process theory, semiparametric models\vfill \newpage \section{Introduction} \label{sec:intro} We wish to infer the value of a statistical parameter at a law from which we sample independent observations. The parameter is a smooth function of the data distribution. We assume that we can define two variation-independent, infinite-dimensional features of the law, its so called $Q$- and $G$-components, such that if we estimate them consistently at a fast enough joint rate, then we can build a confidence interval (CI) with a given asymptotic level based on a plain targeted minimum loss estimator (TMLE)~\citep{van2006targeted,van2011targeted}. Typically, the parameter depends on the law only through its $Q$-component, whereas its canonical gradient depends on the law through both its $Q$- and $G$-components. The estimators of the $Q$- and $G$-components would typically be by-products of machine learning algorithms. We focus on the case that the machine learning algorithm for the $G$-component is fine-tuned by a real-valued parameter $h$. Is it possible to construct an estimator that will lend itself to the construction of a CI, by fine-tuning data-adaptively and in a targeted fashion both the algorithm for the estimation of the $G$-component and the resulting estimator of the parameter of interest? \subsubsection*{Literature overview.} \label{subsec:literature} The general problem that we address is often encountered in observational studies of the effect of an exposure, for instance when one wishes to infer the average effect of a two-level exposure. It is then necessary to account for the fact that the level of exposure is not fully randomized in the observed population.
A pivotal object of interest in such studies, the so called exposure mechanism (that is, the conditional law of exposure given baseline covariates), is an example of what we generally call a $G$-component of the law of the experiment. A wide range of estimators of the average effect of a two-level exposure require the estimation of the propensity score: Horvitz-Thompson estimators \citep{horvitz1952generalization}; estimators based on propensity score matching \citep{rosenbaum1983central,ho2007matching,ho2006matchit} or stratification \citep{cochran1968effectiveness, rosenbaum1984reducing}; any estimator relying on the efficient influence curve, among which double-robust inverse probability of exposure weighted estimators \citep{Robins00a, Robins00b, Robins00c} or estimators built based on the targeted minimum loss estimation (TMLE) methodology~\citep{van2006targeted,van2011targeted}. Common methods for the estimation of the propensity score are multivariate logistic regression~\citep{kurth2006results}, high-dimensional propensity score adjustment~\citep{schneeweiss2009high,franklin2015regularized}, and a variety of machine learning algorithms~\citep{lee2010improving,gruber2015ensemble, ju2016propensity}. Except in the so called \textit{collaborative} variant of TMLE that we will discuss shortly, the estimators of the propensity score can be derived at a preliminary step, essentially regardless of why they are needed and how they are used at the subsequent step. This is problematic because optimality at the preliminary step has little, if any, relation to optimality at the subsequent step. For instance, the optimal estimator of the propensity score at the preliminary step might take values very close to zero, therefore disqualifying it as a viable estimator at the subsequent step, not to mention an optimal one.
In a less dramatic scenario, using an instrumental variable (which only influences exposure but not the outcome) to estimate the propensity score could concomitantly yield a better estimator thereof and only increase the variance of the resulting estimator of the effect of exposure~\citep{van2010collaborative, van2011targeted}. This prompted the development of the so called \textit{collaborative} version of the targeted minimum loss estimation methodology~\citep{van2010collaborative, van2011targeted}, where the estimation of the $G$-component is not separated from that of the parameter of main interest anymore. More concretely, collaborative TMLE (C-TMLE) consists in building a sequence of estimators of the $G$-component and in selecting one of them by optimizing a criterion that targets the parameter of main interest. For instance, in the above less dramatic scenario, covariates that are strongly predictive of exposure but not of the outcome would be removed, resulting in less bias for the estimator of the parameter of main interest. The C-TMLE methodology has been adapted to a wide range of fields, including genomics~\citep{gruber2010application, wang2011finding}, survival analysis \citep{stitelman2010collaborative}, and clinical studies~\citep{ju2016scalable}. Because the derivation of C-TMLE estimators is often computationally demanding, scalable versions have also been developed~\citep{ju2016scalable}. In~\citep{schnitzer2017collaborative}, the authors propose a C-TMLE algorithm that uses regression shrinkage of the exposure model for the estimation of the propensity score. It sequentially reduces the parameter that determines the amount of penalty placed on the size of the coefficient values, and selects the appropriate parameter by cross-validation. The methodology for continuously fine-tuned, collaborative targeted learning that we develop in this article encompasses the algorithm of~\citep{schnitzer2017collaborative}.
Its statistical analysis sheds light on why, and under which assumptions, it would provide valid statistical inference. The present study builds upon~\citep{chapterTLBII}. The methodology is also studied in~\citep{ju2017adaptive,ju2017collaborative}, the latter being an example of a real-life application. At this point in the introduction, we wish to formalize the problem at stake. What follows recasts the introductory paragraph in the theoretical framework that we adopt in the article. \subsubsection*{Setting the scene.} \label{subsec:scene} Let $O_{1}, \ldots, O_{n}$ be $n$ independent draws from a law $P_{0}$ on a set $\mathcal{O}$. We view $P_{0}$ as an element of the statistical model $\mathcal{M}$, a collection of plausible laws for $O_{1}, \ldots, O_{n}$. The more we know about $P_{0}$, the smaller is $\mathcal{M}$. Our primary goal is to infer the value of parameter $\Psi : \mathcal{M} \to \mathbb{R}$ at $P_{0}$, namely, $\psi_{0} \equiv \Psi(P_{0})$. Our statistical analysis is asymptotic in the number of observations. We consider the case that $\Psi$ is pathwise differentiable at every $P\in \mathcal{M}$ with respect to (w.r.t.) a tangent set $\mathcal{S}_{P} \subset L_{0}^{2} (P)$: there exists $D^{*} (P) \in L_{0}^{2} (P)$ such that, for every $s \in \mathcal{S}_{P}$, there exists a submodel $\{P_{t} : t \in \mathbb{R}, |t| < c\} \subset \mathcal{M}$ satisfying \textit{(i)} $P_{t}|_{t=0} = P$, \textit{(ii)} $P_{t} \ll P$ for all $t \in ]-c,c[$, \textit{(iii)} \begin{equation*} \left.\frac{d}{dt} \log \frac{dP_{t}}{dP} (O)\right|_{t=0} = s(O) \end{equation*} (the submodel's score function equals $s$), and \textit{(iv)} the real-valued mapping $t \mapsto \Psi(P_{t})$ is differentiable at $t=0$ with a derivative equal to $P D^{*}(P) s$, where $Pf$ is a shorthand notation for $E_{P} (f(O))$ (any measurable $f$).
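This pathwise derivative identity can be illustrated numerically. The sketch below is entirely ours: it takes a discrete law for $O=(W,A,Y)$ with all variables binary, $\Psi$ the average treatment effect (the example invoked in the abstract), $D^{*}$ its well-known efficient influence curve for the nonparametric model, and the submodel $dP_{t}=(1+ts)\,dP$; a central finite difference of $t\mapsto\Psi(P_{t})$ then matches $P D^{*}(P)s$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Toy law P for O = (W, A, Y), all binary, stored as a joint pmf table p[w, a, y].
p = rng.uniform(0.05, 1.0, size=(2, 2, 2))
p /= p.sum()

def ate(p):
    """Psi(P) = E_W[Qbar(1, W) - Qbar(0, W)], computed from the joint pmf."""
    pw = p.sum(axis=(1, 2))                # marginal law of W
    qbar = p[:, :, 1] / p.sum(axis=2)      # Qbar(a, w) = P(Y = 1 | A = a, W = w)
    return float(np.sum(pw * (qbar[:, 1] - qbar[:, 0])))

def eif(p):
    """Efficient influence curve D*(P)(w, a, y) of the ATE."""
    pw = p.sum(axis=(1, 2))
    g = p.sum(axis=2) / pw[:, None]        # propensity score g(a | w)
    qbar = p[:, :, 1] / p.sum(axis=2)
    psi = ate(p)
    D = np.zeros((2, 2, 2))
    for w, a, y in itertools.product(range(2), repeat=3):
        D[w, a, y] = ((2 * a - 1) / g[w, a] * (y - qbar[w, a])
                      + qbar[w, 1] - qbar[w, 0] - psi)
    return D

# A bounded score s with P s = 0, defining the submodel dP_t = (1 + t s) dP.
s = rng.uniform(-1.0, 1.0, size=(2, 2, 2))
s -= (s * p).sum()

t = 1e-5
numeric = (ate(p * (1 + t * s)) - ate(p * (1 - t * s))) / (2 * t)
analytic = float((p * eif(p) * s).sum())   # P D*(P) s
print(abs(numeric - analytic) < 1e-6)
```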
It is assumed moreover that every $P \in \mathcal{M}$ is associated with two possibly infinite-dimensional features $Q \in \mathcal{Q}$ and $G \in \mathcal{G}$ such that \textit{(i)} $Q$ and $G$ are unrelated (\textit{i.e.}, variation independent: knowing anything about $Q$ tells nothing about $G$ and vice versa), \textit{(ii)} $\Psi(P)$ depends on $P$ only through $Q$, \textit{(iii)} $D^{*}(P)$ depends on $P$ only through $Q$ and $G$, and \textit{(iv)} $G$ is a mapping from $\mathcal{O}$ to $\mathbb{R}$. At this early stage, we can introduce the pivotal quantity \begin{equation} \label{eq:remainder:intro} \mathrm{Rem}_{20} (Q, G) \equiv \Psi(P) - \Psi(P_{0}) + P_{0} D^{*}(P) \end{equation} for every $P\in\mathcal{M}$. The notation is justified \textit{(i)} because we wish to think of the right-hand-side expression as a remainder term, and \textit{(ii)} by the fact that $\Psi(P)$ and $D^{*}(P)$ depend on $P$ only through $Q$ and $G$. We consider the case that parameter $\Psi$ is such that, for some pseudo-distances $d_{\mathcal{Q}}$ and $d_{\mathcal{G}}$ on $\mathcal{Q}$ and $\mathcal{G}$, \begin{equation} \label{eq:remainder:bound:intro} |\mathrm{Rem}_{20} (Q, G)| \lesssim d_{\mathcal{Q}}(Q, Q_{0}) \times d_{\mathcal{G}}(G, G_{0}), \end{equation} where $a \lesssim b$ stands for ``there exists a universal constant $c>0$ such that $a \leq bc$''. A remainder term satisfying \eqref{eq:remainder:bound:intro} is said to be double-robust. Let $\hat{Q}$ be an algorithm for the estimation of $Q_{0}$, the $Q$-component of the true law $P_{0}$. Likewise, let $\hat{G}_{h}$ ($h \in \mathcal{H}$, an open interval of $\mathbb{R}_{+}^{*}$ of which the closure contains 0) be an $h$-specific algorithm for the estimation of $G_{0}$, the $G$-component of $P_{0}$.
Formally, we view $\hat{Q}$ and each $\hat{G}_{h}$ as mappings from \begin{equation*} \bigcup_{N \geq 1} \left\{N^{-1}\sum_{i=1}^{N} \delta_{o_{i}} : o_{1}, \ldots, o_{N} \in \mathcal{O}\right\} \end{equation*} to $\mathcal{Q}$ and $\mathcal{G}$, respectively, that can ``learn'' from the empirical measure $P_{n}$ some estimators $\hat{Q}(P_{n})$ and $\hat{G}_{h}(P_{n})$ of $Q_{0}$ and $G_{0}$ (here, $\delta_{o}$ denotes the Dirac measure at $o$). Set $Q_{n}^{0} \equiv \hat{Q}(P_{n})$ (the superscript 0 stands for ``initial''), $G_{n,h} \equiv \hat{G}_{h}(P_{n})$, and let $P_{n,h}^{0} \in \mathcal{M}$ be any element of the model of which the $Q$- and $G$-components equal $Q_{n}^{0}$ and $G_{n,h}$. Derived by the mere substitution of $P_{n,h}^{0}$ for $P_{0}$ in $\Psi(P_{0})$, $\Psi(P_{n,h}^{0})$ is a natural estimator of $\Psi(P_{0})$. It is \textit{not} targeted toward the inference of $\Psi(P_{0})$ in the sense that none of the known features of $P_{n,h}^{0}$ was derived specifically for the sake of ultimately estimating $\Psi(P_{0})$. It is well documented in the TMLE literature that one way to target $\Psi(P_{n,h}^{0})$ toward $\Psi(P_{0})$ is to build $P_{n,h}^{*} \in \mathcal{M}$ from $P_{n,h}^{0}$ in such a way that \begin{equation*} P_{n} D^{*} (P_{n,h}^{*}) = o_{P} (1/\sqrt{n}) \end{equation*} and to infer $\Psi(P_{0})$ with $\Psi(P_{n,h}^{*})$. This can be achieved, in such a way that $G_{n,h}$ is not modified, by ``fluctuating'' $P_{n,h}^{0}$, a procedure that we will develop in detail in the specific example studied in the article. Then, denoting by $Q_{n,h}^{*}$ the $Q$-component of $P_{n,h}^{*}$, \eqref{eq:remainder:intro} yields the asymptotic expansion \begin{equation} \label{eq:asym:exp:intro} \Psi(P_{n,h}^{*}) - \Psi(P_{0}) = (P_{n} - P_{0}) D^{*} (P_{n,h}^{*}) + \mathrm{Rem}_{20} (Q_{n,h}^{*}, G_{n,h}) + o_{P} (1/\sqrt{n}).
\end{equation} By convention, we agree that small values of $h$ correspond with less bias for $G_{n,h}$ as an estimator of $G_0$. Moreover, we assume that there exists $h_{n} \in \mathcal{H}$, $h_{n} = o(1)$, such that $d_{\mathcal{G}}(G_{n,h_{n}}, G_{0}) = o_{P} (\rho_{1,n})$ for some $\rho_{1,n} = o(1)$, \textit{i.e.}, that $G_{n,h_{n}}$ consistently estimates $G_{0}$ at rate $\rho_{1,n}$. If $Q_{n,h_{n}}^{*}$ is also such that $d_{\mathcal{Q}}(Q_{n,h_{n}}^{*}, Q_{0}) = o_{P}(\rho_{2,n})$ for some $\rho_{2,n} = o(1)$, and if $\rho_{1,n} \rho_{2,n} = o(1/\sqrt{n})$, then \eqref{eq:remainder:bound:intro} and \eqref{eq:asym:exp:intro} yield \begin{equation*} \Psi(P_{n,h_{n}}^{*}) - \Psi(P_{0}) = (P_{n} - P_{0}) D^{*} (P_{n,h_{n}}^{*}) + o_{P} (1/\sqrt{n}), \end{equation*} which may in turn imply the asymptotic linear expansion \begin{equation} \label{eq:asymp:exp:intro} \Psi(P_{n,h_{n}}^{*}) - \Psi(P_{0}) = (P_{n} - P_{0}) \mathrm{IF} + o_{P} (1/\sqrt{n}), \end{equation} with influence function $\mathrm{IF} \equiv D^{*} (P_{0})$, depending in particular on how data-adaptive the algorithms $\hat{Q}$ and $\hat{G}_{h}$ ($h \in \mathcal{H}$) are. By the central limit theorem, \eqref{eq:asymp:exp:intro} guarantees that $\sqrt{n} (\Psi(P_{n,h_{n}}^{*}) - \Psi(P_{0}))$ is asymptotically Gaussian. \textit{We focus on a more challenging situation, where $\rho_{1,n} \rho_{2,n}$ is not necessarily $o(1/\sqrt{n})$.} We anticipate that our analysis is also very relevant at small and moderate sample sizes even when $\rho_{1,n} \rho_{2,n} = o(1/\sqrt{n})$. In order to derive an asymptotic linear expansion similar to \eqref{eq:asymp:exp:intro} from \eqref{eq:asym:exp:intro} in this situation, we would have to derive an asymptotic expansion of $\mathrm{Rem}_{20} (Q_{n,h_{n}}^{*}, G_{n,h_{n}})$.
Unfortunately, we have reasons to believe that this is not possible without targeting (their presentation in an example is deferred to Section~\ref{subsec:select:uncoop}). Now, observe that the estimators $\Psi(P_{n,h}^{*})$ ($h \in \mathcal{H}$) do not cooperate in the sense that, although $Q_{n,h}^{*}$ and $Q_{n,h'}^{*}$ (for any two $h, h' \in \mathcal{H}$, $h \neq h'$) share the same initial estimator $Q_{n}^{0}$, the construction of the latter does not capitalize on that of the former. In contrast, we propose to build collaboratively a continuum of estimators of the form $\Psi(P_{n,h}^{*})$ ($h \in \mathcal{H}$) and to select data-adaptively one among them that will be asymptotically Gaussian, under conditions often encountered in empirical process theory. \subsubsection*{Organization of the article.} \label{subsec:org} In Section~\ref{sec:general}, we lay out a high-level presentation of collaborative TMLE and state a high-level result. In Sections~\ref{sec:ctmle_con}, \ref{sec:software}, \ref{sec:experiments} and \ref{sec:transfer}, we consider a specific example. In Section~\ref{sec:ctmle_con}, we particularize the theoretical construction and analysis. In Section~\ref{sec:software}, we describe two practical instantiations of the estimator developed in Section~\ref{sec:ctmle_con}. In Sections~\ref{sec:experiments} and \ref{sec:transfer}, we carry out a multi-faceted simulation study of their performance and comment upon its results. In Section~\ref{sec:discuss}, we summarize the content of the article. All the proofs are gathered in the appendix. \section{High-level presentation and result} \label{sec:general} We now state and prove a general result about continuously fine-tuned, collaborative targeted minimum loss estimation, a version of \citep[Theorem~10.1 in][]{chapterTLBII}. Its high-level assumptions are clarified in the particular example that we study in the next sections.
From now on, we slightly abuse notation and denote $D^{*}(Q, G)$ instead of $D^{*} (P)$, where $Q$ and $G$ are the $Q$- and $G$-components of $P$. Let $G_{\cdot} \equiv \{G_{t} : t \in \mathcal{T}\} \subset \mathcal{G}$ be a (one-dimensional) subset of $\mathcal{G}$ (indexed by a real parameter ranging in an open subset $\mathcal{T}$ of $\mathcal{H}$) such that $t \mapsto D^{*} (Q, G_{t})(O)$ is twice differentiable over $\mathcal{T}$ for all $Q \in \mathcal{Q}$ ($P_{0}$-almost surely). We characterize $\partial D^{*}$ and $\partial^{2} D^{*}$ by setting, for every $h \in \mathcal{T}$ and $Q \in \mathcal{Q}$, \begin{eqnarray} \label{eq:dD:general} \partial_{h} D^{*} (Q, G_{\cdot})(O) &\equiv& \left.\frac{d}{dt} D^{*} (Q, G_{t})(O)\right|_{t=h}, \\ \notag \partial_{h}^{2} D^{*} (Q, G_{\cdot})(O) &\equiv& \left.\frac{d^{2}}{dt^{2}} D^{*} (Q, G_{t})(O)\right|_{t=h}. \end{eqnarray} Consider the following inter-dependent assumptions. The first one is indexed by $(Q, h, c)\in \mathcal{Q} \times \mathcal{H} \times \mathbb{R}_{+}^{*}$. \begin{description} \item[\textbf{A1}$(Q,h,c)$] There exists an open neighborhood $\mathcal{T} \subset \mathcal{H}$ of $h \in \mathcal{H}$ for which the set $G_{n,\cdot} \equiv \{\hat{G}_{h}(P_{n}) \equiv G_{n,h} : h \in \mathcal{T}\} \subset \mathcal{G}$ is such that $t \mapsto D^{*} (Q, G_{n,t})(O)$ is twice differentiable over $\mathcal{T}$ ($P_{0}$-almost surely). Moreover, $P_{0}$-almost surely, \begin{equation*} \sup_{h \in \mathcal{T}} |\partial_{h}^{2} D^{*} (Q, G_{n,\cdot}) (O)| \leq c.
\end{equation*} \item[\textbf{A2}] For all $h \in \mathcal{H}$, we know how to build $P_{n,h}^{*} \in \mathcal{M}$, with $Q$- and $G$-components denoted by $Q_{n,h}^{*}$ and $G_{n,h}$, in such a way that $P_{n} D^{*} (Q_{n,h}^{*}, G_{n,h}) = o_{P} (1/\sqrt{n})$. Moreover, we know how to choose $h_{n} \in \mathcal{H}$ such that \begin{equation} \label{eq:hla2:one} P_{n} D^{*} (Q_{n,h_{n}}^{*}, G_{n,h_{n}}) = o_{P} (1/\sqrt{n}) \end{equation} and, for some deterministic $c_{2} > 0$, \textbf{A1}$(Q_{n,h_{n}}^{*}, h_{n}, c_{2})$ is met and \begin{equation} \label{eq:hla2:two} P_{n} \partial_{h_{n}} D^{*} (Q_{n,h_{n}}^{*}, G_{n,\cdot}) = o_{P} (1/n^{1/4}). \end{equation} \item[\textbf{A3}] It holds that $d_{\mathcal{G}}(G_{n,h_{n}}, G_{0}) = o_{P} (1)$, and there exists $Q_{1} \in \mathcal{Q}$ such that $d_{\mathcal{Q}}(Q_{n,h_{n}}^{*}, Q_{1}) = o_{P} (1)$. In addition, \begin{eqnarray} \label{eq:hla3:one} (P_{n} - P_{0}) \left(D^{*}(Q_{n,h_{n}}^{*}, G_{n,h_{n}}) - D^{*}(Q_{1}, G_{0})\right) &=& o_{P}(1/\sqrt{n}),\\ \label{eq:hla3:three} \mathrm{Rem}_{20} (Q_{n,h_{n}}^{*}, G_{n,h_{n}}) - \mathrm{Rem}_{20} (Q_{1}, G_{n,h_{n}}) &=& o_{P} (1/\sqrt{n}). \end{eqnarray} \item[\textbf{A4}] Let $\Phi_{0} : \mathcal{G} \to \mathbb{R}$ be given by $\Phi_{0} (G) \equiv P_{0} D^{*} (Q_{1}, G)$. There exist $\tilde{h}_{n} \in \mathcal{H}$ and $\Delta(P_{1}) \in L_{0}^{2} (P_{0})$ such that \begin{equation} \label{eq:hla4} \Phi_{0}(G_{n,\tilde{h}_{n}}) - \Phi_{0}(G_{0}) = (P_{n} - P_{0}) \Delta(P_{1}) + o_{P} (1/\sqrt{n}). \end{equation} \item[\textbf{A5}] It holds that $(h_{n} - \tilde{h}_{n})^{2} = o_{P} (1/\sqrt{n})$.
Moreover, there exists a deterministic $c_{5} > 0$ such that \textbf{A1}$(Q_{1}, h_{n}, c_{5})$ is met, and \begin{eqnarray} \label{eq:hla5:one} (P_{n} - P_{0}) \left(D^{*}(Q_{1}, G_{n,h_{n}}) - D^{*}(Q_{1}, G_{n,\tilde{h}_{n}})\right) &=& o_{P}(1/\sqrt{n}),\\ \label{eq:hla5:two} (h_{n} - \tilde{h}_{n}) \times P_{0} \left(\partial_{h_{n}} D^{*}(Q_{n,h_{n}}^{*}, G_{n,\cdot}) - \partial_{h_{n}} D^{*}(Q_{1}, G_{n,\cdot})\right) &=& o_{P}(1/\sqrt{n}),\\ \label{eq:hla5:three} (P_{n} - P_{0}) \left(\partial_{h_{n}} D^{*}(Q_{n,h_{n}}^{*}, G_{n,\cdot}) - \partial_{h_{n}} D^{*}(Q_{1}, G_{n,\cdot})\right) &=& o_{P}(1/\sqrt{n}). \end{eqnarray} \end{description} Now that we have introduced our high-level assumptions, we can state the corresponding high-level result that they entail. The proof is relegated to the appendix. \begin{theorem}[Asymptotics of the collaborative TMLE -- a high-level result] \label{theo:high:level} Under assumptions \textbf{A2} to \textbf{A5}, it holds that \begin{equation} \label{eq:theo:high:level} \Psi(P_{n,h_{n}}^{*}) - \Psi(P_{0}) = (P_{n} - P_{0}) \left(D^{*} (Q_{1}, G_{0}) + \Delta(P_{1})\right) + o_{P} (1/\sqrt{n}). \end{equation} \end{theorem} \subsubsection*{Commenting on the high-level assumptions.} Assumption \textbf{A1}$(Q,h,c)$ concerns both $D^{*}$ (specifically, how $D^{*}(Q, G)(O)$ depends on $G(O)$) and the algorithms $\hat{G}_{t}$, $t \in \mathcal{H}$ (specifically, how smooth $t \mapsto \hat{G}_{t}(P_{n})(O)$ is around $h$). In the particular example studied in the following sections, the counterpart \textbf{C1} of \textbf{A1}$(Q,h,c)$ concerns only the algorithms $\hat{G}_{t}$, $t \in \mathcal{H}$.
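To fix ideas, here is the form that $\partial_{h} D^{*}$ takes in the average treatment effect example studied from Section~\ref{sec:ctmle_con} onward — a sketch under the assumption that $t \mapsto G_{t}(W)$ is differentiable, anticipating the notation introduced there:

```latex
% In that example, D^*(Q, G)(O) depends on G only through the term
% (2A-1)(Y - \bar{Q}(A,W)) / \ell_{G}(A,W), with
% \ell_{G}(A,W) = A G(W) + (1-A)(1 - G(W)).  Since
% d/dt \ell_{G_t}(A,W) = (2A-1) d/dt G_t(W) and (2A-1)^2 = 1,
% the chain rule gives
\begin{equation*}
\partial_{h} D^{*} (Q, G_{\cdot})(O)
= - \frac{Y - \bar{Q}(A,W)}{\ell_{G_{h}}(A,W)^{2}}
  \times \left.\frac{d}{dt} G_{t}(W)\right|_{t=h} ,
\end{equation*}
% so the uniform bound required by A1(Q, h, c) essentially rests on the
% smoothness in t of the algorithms' output t -> G_t(W).
```

This is consistent with the remark above that, in the example, the counterpart of \textbf{A1}$(Q,h,c)$ bears only on the algorithms $\hat{G}_{t}$, $t \in \mathcal{H}$.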
In the example, we show how $P_{n,h_{n}}^{*}$ can be built collaboratively in such a way that \textbf{A2} is met, under a series of nested assumptions about the smoothness of data-dependent, real-valued functions over $\mathcal{H}$, the construction of which notably involves the algorithms $\hat{G}_{t}$, $t \in \mathcal{H}$. To understand why achieving \eqref{eq:hla2:two} is relevant, observe that the following oracle version of $P_{n} \partial_{h_{n}} D^{*} (Q_{n,h_{n}}^{*}, G_{n,\cdot})$, \begin{equation*} \lim\limits_{\substack{t\to 0\\t\neq 0}} \frac{1}{t} P_{0} \left(D^{*} (Q_{n,h_{n}}^{*}, G_{n,h_{n}+t}) - D^{*} (Q_{n,h_{n}}^{*}, G_{n,h_{n}})\right), \end{equation*} can be rewritten as \begin{equation*} \lim\limits_{\substack{t\to 0\\t\neq 0}} \frac{1}{t} \left(\mathrm{Rem}_{20} (Q_{n,h_{n}}^{*}, G_{n,h_{n}+t}) - \mathrm{Rem}_{20} (Q_{n,h_{n}}^{*}, G_{n,h_{n}})\right) \end{equation*} in view of \eqref{eq:remainder:intro} (the two $\Psi$-terms cancel out because $\Psi(P)$ depends on $P$ only through $Q$, which is kept fixed here). Thus, achieving \eqref{eq:hla2:two} relates to finding critical points of $h \mapsto \mathrm{Rem}_{20} (Q_{n,h_{n}}^{*}, G_{n,h})$. Assumption \textbf{A3} formalizes the convergence of $G_{n,h_{n}}$ to its target $G_{0}$ w.r.t. $d_{\mathcal{G}}$, and that of $Q_{n,h_{n}}^{*}$ to some limit $Q_{1} \in \mathcal{Q}$ w.r.t. $d_{\mathcal{Q}}$. It does not require that $Q_{1}$ be equal to the target $Q_{0}$ of $Q_{n,h_{n}}^{*}$, but \textbf{A4} may be impossible to meet when $Q_{1} \neq Q_{0}$ (see below). Condition~\eqref{eq:hla3:one} in \textbf{A3} is met for instance if the $L^{2}(P_{0})$-norm of $D^{*}(Q_{n,h_{n}}^{*}, G_{n,h_{n}}) - D^{*}(Q_{1}, G_{0})$ goes to zero in probability and if the difference falls in a $P_{0}$-Donsker class with probability tending to one.
As for \eqref{eq:hla3:three}, it typically holds whenever the product of the rates of convergence of $Q_{n,h_{n}}^{*}$ and $G_{n,h_{n}}$ to their limits is $o_{P}(1/\sqrt{n})$. The counterpart of \textbf{A3} in the example studied in the following sections is \textbf{C2}. With \textbf{A4}, we assume the existence of an oracle $\tilde{h}_{n}$ that undersmooths $G_{n,h}$ enough so that $\Phi_{0} (G_{n,\tilde{h}_{n}})$ is an asymptotically linear estimator of $\Phi_{0}(G_{0})$, where we note that $\Phi_{0}$ is pathwise differentiable in a similar way as $\Psi$. We say that $\tilde{h}_{n}$ is an oracle because the definition of $\Phi_{0}$ involves $P_{0}$ and $Q_{1}$. It happens that \begin{lemma} \label{lem:A4} Under \textbf{A2} and \textbf{A3}, if $Q_{1} = Q_{0}$, if $d_{\mathcal{G}}(G_{n,h_{n}}, G_{0}) \times d_{\mathcal{Q}}(Q_{n,h_{n}}^{*}, Q_{0}) = o_{P} (1/\sqrt{n})$, and if \eqref{eq:asymp:exp:intro} is met with $\mathrm{IF} = D^{*}(P_{0})$, then \textbf{A4} holds with $h_{n} = \tilde{h}_{n}$ and $\Delta(P_{1}) = 0$. \end{lemma} It is difficult to assess whether or not \textbf{A4} is a tall order when $d_{\mathcal{G}}(G_{n,h_{n}}, G_{0}) \times d_{\mathcal{Q}}(Q_{n,h_{n}}^{*}, Q_{0})$ is not necessarily $o_{P} (1/\sqrt{n})$, or when $Q_{1} \neq Q_{0}$. Finally, \textbf{A5} states that the distance between $\tilde{h}_{n}$ and $h_{n}$, the latter introduced in \textbf{A2}, is of order $o_{P} (1/n^{1/4})$ at most. Its conditions \eqref{eq:hla5:one} and \eqref{eq:hla5:three} are of a similar nature to \eqref{eq:hla3:one}. As for \eqref{eq:hla5:two}, the Cauchy-Schwarz inequality reveals that it is met if the $L^{2}(P_{0})$-norm of $\partial_{h_{n}} D^{*}(Q_{n,h_{n}}^{*}, G_{n,\cdot}) - \partial_{h_{n}} D^{*}(Q_{1}, G_{n,\cdot})$ is $o_{P}(1/n^{1/4})$.
\section{Collaborative TMLE for continuous tuning when inferring the average treatment effect: presentation and analysis} \label{sec:ctmle_con} In this section, we specialize the discussion to the inference of a specific statistical parameter, the so-called average treatment effect. Section~\ref{subsec:prelim} introduces the parameter and recalls the corresponding $D^{*}$ and $\mathrm{Rem}_{20}$ from Section~\ref{sec:intro}. Section~\ref{subsec:continuum:TMLEs} describes the \textit{un}cooperative construction of a continuum of uncooperative TMLEs. Section~\ref{subsec:select:uncoop} argues why the selection of one of the uncooperative TMLEs is unlikely to yield a well-behaved (\textit{i.e.}, asymptotically Gaussian) estimator when the product of the rates of convergence of the estimators of $Q_{0}$ and $G_{0}$ to their limits is not fast enough (\textit{i.e.}, $o(1/\sqrt{n})$). Then, Sections~\ref{subsec:continuum:CTMLEs} and \ref{subsec:select:colla} present the collaborative construction of collaborative TMLEs and how to select one among them that is well behaved, under assumptions that are spelled out in Section~\ref{subsec:asymptot}, where the high-level Theorem~\ref{theo:high:level} and its assumptions are specialized. \subsection{Preliminary} \label{subsec:prelim} We observe $n$ independent draws $O_{1} \equiv (W_{1}, A_{1}, Y_{1})$, $\ldots$, $O_{n} \equiv (W_{n}, A_{n}, Y_{n})$ from $P_{0}$, the true law of $O\equiv (W,A,Y)$. It is known that $Y$ takes its values in $[0,1]$. We consider the statistical model $\mathcal{M}$ that leaves unspecified the law $Q_{W,0}$ of $W$ and the conditional law of $Y$ given $(A,W)$, while we might know that the conditional expectation $G_0$ of $A$ given $W$ belongs to a set $\mathcal{G}$. Introduce \begin{equation*} \bar{Q}_0(A,W)\equiv E_{P_{0}}(Y|A,W), \quad G_0(W)\equiv P_{0}(A=1|W).
\end{equation*} The parameter of interest is the average treatment effect, \begin{equation*} \psi_0 \equiv E_{Q_{W,0}} \left(\bar{Q}_0(1, W) - \bar{Q}_{0} (0, W)\right). \end{equation*} We choose it because its study provides a wealth of information and paves the way for the analysis of a variety of other parameters often encountered in the statistical literature. More generally, every $P\in\mathcal{M}$ gives rise to $Q_{W}$, $\bar{Q}(A,W)$, $G(W)$ and $Q\equiv (Q_{W}, \bar{Q})$, which are respectively the marginal law of $W$ under $P$, the conditional expectation of $Y$ given $(A,W)$ under $P$, the conditional probability that $A=1$ given $W$ under $P$, and the couple consisting of $Q_{W}$ and $\bar{Q}$. For every such $P$, the average treatment effect is $\Psi(P)$, where $\Psi : \mathcal{M} \to [-1,1]$ is given by \begin{equation*} \Psi(P) \equiv E_{Q_{W}} \left(\bar{Q}(1, W) - \bar{Q} (0, W)\right). \end{equation*} For notational conciseness, we let $\ell_{G}$ be given by \begin{equation} \label{eq:ellbG} \ell_{G}(A,W) \equiv A G(W) + (1-A) (1-G(W)) \end{equation} for every $G \in \mathcal{G}$. Note that $\ell_{G}(A,W)$ is the conditional likelihood of $A$ given $W$ when $A$ given $W$ is drawn from the Bernoulli law with parameter $G(W)$, hence the ``$\ell$'' in the notation. Parameter $\Psi$, viewed as a real-valued mapping over $\mathcal{M}$, is pathwise differentiable at every $P\in\mathcal{M}$ w.r.t. the maximal tangent set $\mathcal{S}_{P} = L_{0}^{2} (P)$. The efficient influence curve $D^*(P)$ of $\Psi$ at $P\in \mathcal{M}$ is given by \begin{eqnarray} \label{eq:EIC} D^*(P)(O) &\equiv& D_{2}^{*} (\bar{Q}, G) (O) + \left(\bar{Q}(1,W) - \bar{Q}(0,W) - \Psi(P)\right) \quad \text{where}\\ \notag D_{2}^{*} (\bar{Q}, G) (O) &\equiv& \frac{2A-1}{\ell_{G}(A,W)}(Y-\bar{Q}(A,W)). \end{eqnarray} Recall definition \eqref{eq:remainder:intro}.
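As a sanity check on \eqref{eq:EIC}, the defining property $P_{0} D^{*}(P_{0}) = 0$ (the efficient influence curve is centered at the truth) can be verified exactly on a toy law with binary $W$. The following minimal sketch is ours, not part of the article's software, and all numerical values are illustrative:

```python
import itertools

# Hypothetical toy law: W ~ Bernoulli(0.4), A | W ~ Bernoulli(G0(W)),
# Y | (A, W) ~ Bernoulli(Qbar0(A, W)); all numbers are illustrative.
pW = {0: 0.6, 1: 0.4}
G0 = {0: 0.3, 1: 0.7}                      # P(A = 1 | W = w)
Qbar0 = {(a, w): 0.2 + 0.3 * a + 0.25 * w for a in (0, 1) for w in (0, 1)}

def ell(G, a, w):
    """Conditional likelihood of A = a given W = w under propensity G."""
    return a * G[w] + (1 - a) * (1 - G[w])

# True average treatment effect psi0.
psi0 = sum(pW[w] * (Qbar0[(1, w)] - Qbar0[(0, w)]) for w in (0, 1))

def eic(o, Qbar, G, psi):
    """Efficient influence curve D*(P)(O) for the average treatment effect."""
    w, a, y = o
    d2 = (2 * a - 1) / ell(G, a, w) * (y - Qbar[(a, w)])
    return d2 + Qbar[(1, w)] - Qbar[(0, w)] - psi

# E_{P0}[D*(P0)(O)] = 0: sum over the finite support, weighting by P0.
mean = 0.0
for w, a, y in itertools.product((0, 1), repeat=3):
    pO = pW[w] * ell(G0, a, w) \
         * (y * Qbar0[(a, w)] + (1 - y) * (1 - Qbar0[(a, w)]))
    mean += pO * eic((w, a, y), Qbar0, G0, psi0)
assert abs(mean) < 1e-12
```

Here the $D_{2}^{*}$-part is centered because $E_{P_{0}}(Y - \bar{Q}_{0}(A,W) \mid A, W) = 0$, and the remaining part is centered by the definition of $\psi_{0}$.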
It is easy to check that, for every $P\in\mathcal{M}$, \begin{equation} \label{eq:remainder} \mathrm{Rem}_{20} (\bar{Q}, G) = E_{P_{0}} \left[(2A-1) \left(1 - \frac{\ell_{G_{0}}(A,W)}{\ell_{G}(A,W)}\right) \left(\bar{Q}(A,W) - \bar{Q}_0(A,W)\right) \right]. \end{equation} Writing $\mathrm{Rem}_{20} (\bar{Q}, G)$ instead of $\mathrm{Rem}_{20} (Q, G)$ slightly abuses notation, but is justified because integrating out $A$ in the right-hand side of \eqref{eq:remainder} reveals that it only depends on $P_{0}$, $\bar{Q}$ and $G$. Furthermore, noting that $\ell_{G} - \ell_{G_{0}} = (2A-1)(G - G_{0})$ and $(2A-1)^{2} = 1$, so that the first two factors in \eqref{eq:remainder} reduce to $(G(W) - G_{0}(W))/\ell_{G}(A,W)$, the Cauchy-Schwarz inequality yields \begin{equation} \label{eq:bound:R20} \mathrm{Rem}_{20}(\bar{Q}, G)^{2} \leq P_{0} (\bar{Q}-\bar{Q}_0)^{2} \times P_{0} \left(\frac{G-G_0}{\ell_{G}}\right)^{2}. \end{equation} \subsection{Uncooperative construction of a continuum of uncooperative TMLEs} \label{subsec:continuum:TMLEs} \subsubsection*{Prerequisites.} Let $\bar{Q}_{n}^{0} \equiv \hat{\bar{Q}} (P_{n})$ be an initial estimator of $\bar{Q}_0$ and $\{G_{n,h}\equiv \hat{G}_h(P_n) : h \in \mathcal{H}\}$ be a continuum of candidate estimators of $G_0$ indexed by a real-valued tuning parameter $h \in \mathcal{H}$, an open interval of $\mathbb{R}_{+}^{*}$. By convention, we agree that small values of $h$ correspond with less bias for $G_{n,h}$ as an estimator of $G_0$. Specifically, denoting by $L_{1}$ the valid loss function for the estimation of $G_{0}$ given by \begin{equation} \label{eq:L1:loss} L_{1}(G)(A,W) \equiv - \log \ell_{G}(A,W) = -A \log G(W) - (1-A) \log(1-G(W)) \end{equation} for every $G \in \mathcal{G}$, where $\ell_{G}$ was defined in~\eqref{eq:ellbG}, we assume from now on that the empirical risk $h \mapsto P_n L_{1}(G_{n,h})$ increases.
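Before turning to the LASSO example, here is a minimal numerical sketch of such a family $\{\hat{G}_{h}\}$ and of the monotonicity of $h \mapsto P_{n} L_{1}(G_{n,h})$, using an $\ell^{1}$-penalized (rather than constrained) logistic regression fitted by proximal gradient descent; every numerical choice below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 5

# Simulated design and exposure (illustrative data-generating choices).
W = rng.normal(size=(n, p))
beta_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
A = rng.binomial(1, 1 / (1 + np.exp(-(W @ beta_true))))

def fit_G_h(W, A, h, n_iter=3000, lr=0.1):
    """h-specific estimator of G0: l1-penalized logistic regression,
    fitted by proximal gradient descent (soft-thresholding step).
    Larger h means a heavier penalty, hence more bias."""
    beta = np.zeros(W.shape[1])
    for _ in range(n_iter):
        G = 1 / (1 + np.exp(-(W @ beta)))
        beta -= lr * (W.T @ (G - A)) / W.shape[0]   # gradient step on the log-loss
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * h, 0.0)
    return 1 / (1 + np.exp(-(W @ beta)))

def empirical_risk(G, A, eps=1e-12):
    """P_n L1(G_{n,h}): the empirical negative log-likelihood of A given W."""
    G = np.clip(G, eps, 1 - eps)
    return float(np.mean(-A * np.log(G) - (1 - A) * np.log(1 - G)))

hs = [0.001, 0.02, 0.1, 0.5, 2.0]
risks = [empirical_risk(fit_G_h(W, A, h), A) for h in hs]
# Less penalization (smaller h) fits the data at least as well:
assert all(r1 <= r2 + 1e-6 for r1, r2 in zip(risks, risks[1:]))
```

The penalized form used here only stands in for the constrained form described next; at (approximate) optima, the training risk is nondecreasing in the penalty weight, matching the monotonicity assumed above.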
For example, $\hat{G}_h$ could correspond to fitting a logistic linear regression maximizing the log-likelihood under the constraint that the sum of the absolute values of the coefficients be smaller than or equal to $1/h$, with $h \in \mathcal{H}\equiv \mathbb{R}_{+}^{*}$. We will refer to this algorithm as the LASSO logistic regression algorithm. \subsubsection*{Uncooperative TMLEs.} Let $Q_{W,n}$ be the empirical law of $\{W_{1}, \ldots, W_{n}\}$. Set arbitrarily $h \in \mathcal{H}$ and let $P_{n,h}^{0} \in \mathcal{M}$ denote any element of $\mathcal{M}$ such that the marginal law of $W$ under $P_{n,h}^{0}$ equals $Q_{W,n}$ and the conditional expectation of $Y$ given $(A,W)$ under $P_{n,h}^{0}$ is equal to $\bar{Q}_{n}^{0}$, hence $Q_{n}^{0} = (Q_{W,n}, \bar{Q}_{n}^{0})$ on the one hand; and the conditional expectation of $A$ given $W$ under $P_{n,h}^{0}$ coincides with $G_{n,h}$ on the other hand. Evaluating $\Psi$ at $P_{n,h}^{0}$ yields an estimator of $\Psi(P_{0})$, \begin{equation*} \Psi(P_{n,h}^{0}) = \frac{1}{n} \sum_{i=1}^{n} \left(\bar{Q}_{n}^{0} (1, W_{i}) - \bar{Q}_{n}^{0} (0, W_{i}) \right), \end{equation*} which is not targeted toward the inference of $\Psi(P_{0})$ in the sense that none of the known features of $P_{n,h}^{0}$ was derived specifically for the sake of ultimately estimating $\Psi(P_{0})$. One way to target $\Psi(P_{n,h}^{0})$ toward $\Psi(P_{0})$ is to build $P_{n,h}^{*} \in \mathcal{M}$ from $P_{n,h}^{0}$ in such a way that \begin{equation*} P_{n} D^{*} (P_{n,h}^{*}) = o_{P} (1/\sqrt{n}) \end{equation*} and to infer $\Psi(P_{0})$ with $\Psi(P_{n,h}^{*})$. This can be achieved by ``fluctuating'' $P_{n,h}^{0}$ in the following sense. For every $G \in \mathcal{G}$, introduce the so-called ``clever covariate'' $\mathcal{C}(G)$ given by \begin{equation} \label{eq:clever} \mathcal{C}(G) (A,W) \equiv \frac{2A-1}{\ell_{G}(A,W)}.
\end{equation} Now, for every $\varepsilon \in \mathbb{R}$, let $\bar{Q}_{n,h,\varepsilon}^{0}$ be characterized by \begin{equation} \label{eq:fluct:ref} \logit \left(\bar{Q}_{n,h,\varepsilon}^{0} (A,W)\right) \equiv \logit \left(\bar{Q}_{n}^{0} (A,W)\right) + \varepsilon \mathcal{C}(G_{n,h}) (A,W) \end{equation} and $P_{n,h,\varepsilon}^{0} \in \mathcal{M}$ be defined like $P_{n,h}^{0}$ except that the conditional expectation of $Y$ given $(A,W)$ under $P_{n,h,\varepsilon}^{0}$ equals $\bar{Q}_{n,h,\varepsilon}^{0}$ (and not $\bar{Q}_{n}^{0}$). Clearly, $P_{n,h,\varepsilon}^{0} = P_{n,h}^{0}$ when $\varepsilon=0$. Moreover, denoting by $L_{2}$ the loss function given by \begin{equation*} L_{2} (\bar{Q}) (O) \equiv -Y \log \bar{Q}(A,W) - (1-Y) \log \left(1 - \bar{Q}(A,W)\right) \end{equation*} for every $\bar{Q}$ induced by a $P \in \mathcal{M}$, it holds that \begin{equation*} \frac{d}{d\varepsilon} L_{2} (\bar{Q}_{n,h,\varepsilon}^{0}) (O) = - D_{2}^{*} (\bar{Q}_{n,h,\varepsilon}^{0}, G_{n,h}) (O), \end{equation*} a property that prompts us to say that the one-dimensional submodel $\{P_{n,h,\varepsilon}^{0} : \varepsilon \in \mathbb{R}\} \subset \mathcal{M}$ ``fluctuates'' $P_{n,h}^{0}$ ``in the direction of'' $D_{2}^{*} (\bar{Q}_{n}^{0}, G_{n,h})$. The optimal fluctuation of $P_{n,h}^{0}$ along the above submodel is indexed by the minimizer of the empirical risk, \begin{equation} \label{eq:opt:eps:ref} \varepsilon_{n,h} \equiv \mathop{\argmin}_{\varepsilon \in \mathbb{R}} P_{n} L_{2} (\bar{Q}_{n,h,\varepsilon}^{0}), \end{equation} of which the existence is assumed (note that $\varepsilon \mapsto P_{n} L_{2} (\bar{Q}_{n,h,\varepsilon}^{0})$ is twice differentiable and strictly convex).
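The fluctuation~\eqref{eq:fluct:ref} and the computation of $\varepsilon_{n,h}$ lend themselves to a compact numerical sketch; the data-generating law, the (deliberately crude, constant) initial estimator $\bar{Q}_{n}^{0}$, and the use of the true $G_{0}$ in place of $G_{n,h}$ are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated observations O = (W, A, Y); the functional forms are illustrative.
W = rng.uniform(size=n)
G0 = 1 / (1 + np.exp(-(W - 0.5)))            # true propensity P(A = 1 | W)
A = rng.binomial(1, G0)
Y = rng.binomial(1, 0.2 + 0.5 * W * A + 0.2 * W)

expit = lambda x: 1 / (1 + np.exp(-x))
logit = lambda q: np.log(q / (1 - q))

# Crude initial estimator Qbar_n^0 (a constant) and a propensity estimator
# (taken to be the truth, for simplicity).
offset = logit(np.clip(np.full(n, Y.mean()), 0.01, 0.99))
ellG = A * G0 + (1 - A) * (1 - G0)
H = (2 * A - 1) / ellG                        # clever covariate C(G)(A_i, W_i)

def grad(eps):
    """Derivative of the empirical risk eps -> P_n L2(Qbar_{n,h,eps}^0),
    which equals -P_n D2*(Qbar_{n,h,eps}^0, G)."""
    return -np.mean(H * (Y - expit(offset + eps * H)))

# The risk is smooth and convex in eps: solve grad(eps) = 0 by bisection.
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if grad(mid) < 0 else (lo, mid)
eps_n = (lo + hi) / 2

# After targeting, the fluctuated estimator solves the score equation
# P_n D2*(Qbar*, G) = 0 up to numerical precision.
Qbar_star = expit(offset + eps_n * H)
assert abs(np.mean(H * (Y - Qbar_star))) < 1e-8
```

Here $\varepsilon_{n}$ plays the role of $\varepsilon_{n,h}$: the final assertion is the empirical counterpart of the score equation solved by the targeted estimator.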
We call $P_{n,h}^{*} \equiv P_{n,h,\varepsilon_{n,h}}^{0}$ the TMLE of $P_{0}$, and the resulting estimator \begin{equation} \label{eq:TMLE} \psi_{n,h}^{*} \equiv \Psi(P_{n,h}^{*}) = \frac{1}{n} \sum_{i=1}^{n} \left(\bar{Q}_{n,h,\varepsilon_{n,h}}^{0} (1, W_{i}) - \bar{Q}_{n,h,\varepsilon_{n,h}}^{0} (0, W_{i}) \right) \end{equation} the TMLE of $\Psi(P_{0})$. It is readily seen that \eqref{eq:TMLE} is equivalent to \begin{equation*} P_{n} \left(D^{*} (P_{n,h}^{*}) - D_{2}^{*} (\bar{Q}_{n,h}^{*}, G_{n,h}) \right) = 0, \end{equation*} where $\bar{Q}_{n,h}^{*} \equiv \bar{Q}_{n,h,\varepsilon_{n,h}}^{0}$. Since $\varepsilon_{n,h}$ minimizes the differentiable mapping $\varepsilon \mapsto P_{n} L_{2} (\bar{Q}_{n,h,\varepsilon}^{0})$, it holds moreover that \begin{equation} \label{eq:EIC:2:eqn:solved} P_{n} D_{2}^{*} (\bar{Q}_{n,h}^{*}, G_{n,h}) = 0, \end{equation} which, combined with the previous display, yields \begin{equation} \label{eq:EIC:eqn:solved} P_{n} D^{*} (P_{n,h}^{*}) = 0; \end{equation} in words, $\psi_{n,h}^{*}$ is indeed targeted toward $\Psi(P_{0})$. Furthermore, in view of \eqref{eq:remainder} and \eqref{eq:EIC:eqn:solved}, $\psi_{n,h}^{*}$ satisfies \begin{equation} \label{eq:TMLE:expansion} \psi_{n,h}^{*} - \Psi(P_{0}) = (P_{n} - P_{0}) D^{*} (P_{n,h}^{*}) + \mathrm{Rem}_{20} (\bar{Q}_{n,h}^{*}, G_{n,h}). \end{equation} Finally, the TMLEs $\psi_{n,h}^{*}$ ($h \in \mathcal{H}$) are said to be uncooperative because, although they share the same initial estimator $\bar{Q}_{n}^{0}$, for any two $h, h' \in \mathcal{H}$, $h \neq h'$, the construction of $\psi_{n,h}^{*}$ does not capitalize on that of $\psi_{n,h'}^{*}$. \subsection{Selecting one of the uncooperative TMLEs} \label{subsec:select:uncoop} At this stage of the procedure, a crucial question is to select one TMLE in the collection of uncooperative TMLEs, one that lends itself to the construction of a confidence interval (CI) for $\Psi(P_{0})$ with a given asymptotic level.
Such a TMLE is necessarily of the form $\psi_{n,h_{n}}^{*}$ for some well-chosen $h_{n}\in \mathcal{H}$, which could be either a deterministic (fixed in $n$) or a data-driven (random and $n$-dependent) element of $\mathcal{H}$. The risk $R_{1}$ generated by $L_{1}$~\eqref{eq:L1:loss} is given by \begin{equation*} R_{1} (G, G_{0}) \equiv E_{Q_{W,0}} \left[\mathrm{KL}(G_{0}(W), G(W))\right], \end{equation*} where $\mathrm{KL}(p,q)$ is the Kullback-Leibler divergence between the Bernoulli laws with parameters $p,q\in [0,1]$. By Pinsker's inequality, it holds that \begin{equation*} 0 \leq 2 P_{0} \left(G - G_{0}\right)^{2} \leq R_{1} (G, G_{0}) \end{equation*} for all $G\in \mathcal{G}$. Therefore, if $G$ is bounded away from zero and one, then \eqref{eq:bound:R20} implies \begin{equation} \label{eq:bound:R20:bis} \mathrm{Rem}_{20} (\bar{Q}, G)^{2} \lesssim P_{0} (\bar{Q} - \bar{Q}_{0})^{2} \times R_{1}(G, G_{0}). \end{equation} If the deterministic $h_{n} \in \mathcal{H}$ is such that \textit{(i)} there exist two rates $\rho_{1,n} = o(1)$ and $\rho_{2,n} = o(1)$ such that $R_{1} (G_{n,h_{n}}, G_{0}) = o_{P}(\rho_{1,n}^{2})$ and $P_{0} (\bar{Q}_{n,h_{n}}^{*} - \bar{Q}_{0})^{2} = o_{P} (\rho_{2,n}^{2})$, \textit{(ii)} $P_{0} \left(D^{*} (P_{n,h_{n}}^{*}) - D^{*} (P_{0})\right)^{2} = o_{P} (1)$, \textit{(iii)} $D^{*} (P_{n,h_{n}}^{*})$ falls in a $P_{0}$-Donsker class with $P_{0}$-probability tending to one, and \textit{(iv)} $\rho_{1,n}\rho_{2,n} = o(1/\sqrt{n})$, then \citep[][Lemma~19.24]{VdV98}, \eqref{eq:TMLE:expansion} and \eqref{eq:bound:R20:bis} guarantee that \eqref{eq:asymp:exp:intro} is met (with $\mathrm{IF} = D^{*}(P_{0})$). [This argument will be used repeatedly throughout the article.]
Thus, by the central limit theorem, $\sqrt{n} (\psi_{n,h_{n}}^{*} - \Psi(P_{0}))$ converges in law to the centered Gaussian law with variance $\mathrm{Var}_{P_{0}}(D^{*}(P_{0})(O))$. So, if the synergy between the convergences of $\bar{Q}_{n,h_{n}}^{*}$ and $G_{n,h_{n}}$ to their respective limits $\bar{Q}_{0}$ and $G_{0}$ is sufficient, then the TMLE $\psi_{n,h_{n}}^{*}$ can be used to build CIs. The argument falls apart if $\rho_{1,n}\rho_{2,n}$ is not $o(1/\sqrt{n})$ (or, worse, if the $L^{2}(P_{0})$-limit $\bar{Q}_{1}$ of $\bar{Q}_{n,h_{n}}^{*}$ is not $\bar{Q}_{0}$, because we do not expect that $R_{1} (G_{n,h_{n}}, G_{0}) = o_{P}(1/n)$). In that case, whether or not it is possible to derive a useful asymptotic linear expansion of a TMLE $\psi_{n,h_{n}}^{*}$ similar to \eqref{eq:asymp:exp:intro} will depend on whether or not we can derive an asymptotic linear expansion for $\sqrt{n}\,\mathrm{Rem}_{20} (\bar{Q}_{n,h_{n}}^*,G_{n,h_{n}})$. If $G_{n,h_{n}}$ were derived by maximizing the likelihood over a correctly specified, finite-dimensional and fine-tune-parameter-free parametric model, then $\sqrt{n}\,\mathrm{Rem}_{20} (\bar{Q}_{n,h_{n}}^*,G_{n,h_{n}})$ would be asymptotically linear. Because of how we estimate $G_{0}$, we now argue that there is little chance that we can select $h_{n} \in \mathcal{H}$ such that the remainder term $\sqrt{n}\,\mathrm{Rem}_{20} (\bar{Q}_{n,h_{n}}^*,G_{n,h_{n}})$ is asymptotically linear. A natural choice would be to use the likelihood-based cross-validation selector $h_{n,\mathrm{CV}}$. Let us recall how it is derived and explain why we do not believe it will solve our problem. Let $B_n\in \{0,1\}^n$ be a cross-validation scheme.
For instance, $B_{n}$ could be a $V$-fold cross-validation scheme, \textit{i.e.}, a random vector taking $V$ different values $b_{1}, \ldots, b_{V} \in \{0,1\}^{n}$, each with probability $1/V$, such that \textit{(i)} the proportion $n^{-1}\sum_{i=1}^{n} b_{v}(i)$ of ones among the coordinates of each $b_{v}$ is close to $1/V$, and \textit{(ii)} $\sum_{v=1}^{V} b_{v}(i) = 1$ for all $1 \leq i \leq n$. Let $P_{n,B_n}^0$ be the empirical probability law of the training subsample $\{O_i : B_n(i)=0, 1 \leq i \leq n\}$ and $P_{n,B_n}^1$ be the empirical probability law of the validation subsample $\{O_{i}: B_n(i)=1, 1 \leq i \leq n\}$. The likelihood-based cross-validation selector $h_{n,\mathrm{CV}}$ of $h\in\mathcal{H}$ is given by \begin{equation} \label{eq:CV:selector} h_{n,\mathrm{CV}} \equiv \mathop{\arg\min}_{h \in \mathcal{H}} E_{B_n} \left[P_{n,B_n}^1 L_{1}(\hat{G}_h(P_{n,B_n}^0))\right]. \end{equation} Unfortunately, we do not expect that $\sqrt{n}\,\mathrm{Rem}_{20} (\bar{Q}_{n,h_{n,\mathrm{CV}}}^*,G_{n,h_{n,\mathrm{CV}}})$ is asymptotically linear. Heuristically, $h_{n,\mathrm{CV}}$ trades off the bias and variance of $G_{n,h}$ as an estimator of $G_0$, whereas we wish to trade off this bias with the variance of $\psi_{n,h}^*$. Clearly, the variance of the estimator $\psi_{n,h}^* = \Psi(P_{n,h}^{*})$, where $\Psi$ is a smooth functional, is significantly smaller than that of the infinite-dimensional object $G_{n,h}$. \subsection{Collaborative construction of finitely many collaborative TMLEs} \label{subsec:continuum:CTMLEs} The take-home message of Sections~\ref{subsec:continuum:TMLEs} and \ref{subsec:select:uncoop} is that the \textit{uncooperative} construction of a continuum of standard TMLEs will typically fail to produce one asymptotically linear TMLE if the product of the rates of convergence of the estimators of $\bar{Q}_{0}$ and $G_{0}$ to their limits is not fast enough (\textit{i.e.}, $o(1/\sqrt{n})$).
In Sections~\ref{subsec:continuum:CTMLEs} and \ref{subsec:select:colla}, we demonstrate how a \textit{collaborative} construction of a continuum of standard TMLEs can produce one asymptotically linear TMLE in this challenging situation, under appropriate assumptions. \subsubsection*{Recursive construction.} We now present the collaborative construction of \textit{finitely many} TMLEs. In the forthcoming theoretical presentation, we make a series of assumptions on the fly. The most important ones will be emphasized. We argued that the cross-validated selector $h_{n,\mathrm{CV}}$ \eqref{eq:CV:selector} does not sufficiently undersmooth $G_{n,h}$ to make of $\sqrt{n}\,\mathrm{Rem}_{20} (\bar{Q}_{n,h}^*,G_{n,h})$ an asymptotically linear term. Since we have assumed that $h \mapsto P_{n} L_{1} (G_{n,h})$ increases, we can focus on those tuning parameters $h$ in $\mathcal{H} \cap ]0,h_{n,\mathrm{CV}}]$, a set assumed non-empty from now on (an assumption that we call \As{1}{$P_{n},1$}). The construction is recursive. It unfolds as follows. \begin{description} \item[Initialization.] We begin as in Section~\ref{subsec:continuum:TMLEs}: for every $h \in \mathcal{H} \cap ]0,h_{n,\mathrm{CV}}]$, we build $\bar{Q}_{n,h}^{(*)}$ and $P_{n,h}^{(*)}$ using $\bar{Q}_{n}^{0}$ as an initial estimator of $\bar{Q}_{0}$ and $G_{n,h}$ as the estimator of $G_{0}$. Note that placing the star symbol between parentheses indicates that $\bar{Q}_{n,h}^{(*)}$ and $P_{n,h}^{(*)}$ are the \textit{tentative} $h$-specific estimator of $\bar{Q}_{0}$ and TMLE.
Specifically, for every $h \in \mathcal{H} \cap ]0,h_{n,\mathrm{CV}}]$, we define $\bar{Q}_{n,h,\varepsilon}^{0}$ as in \eqref{eq:fluct:ref}, $\varepsilon_{n,h,1}$ as in \eqref{eq:opt:eps:ref}, assuming that it exists (an assumption that we call \As{2}{$P_{n}, 1$}), then set $\bar{Q}_{n,h}^{(*)} \equiv \bar{Q}_{n,h,\varepsilon_{n,h,1}}^{0}$ and find $P_{n,h}^{(*)} \in \mathcal{M}$ such that the marginal law of $W$ under $P_{n,h}^{(*)}$ is the empirical law $Q_{W,n}$ of $\{W_{1}, \ldots, W_{n}\}$ and the conditional expectation of $Y$ given $(A,W)$ under $P_{n,h}^{(*)}$ equals $\bar{Q}_{n,h}^{(*)}$, hence $Q_{n,h}^{(*)} = (Q_{W,n}, \bar{Q}_{n,h}^{(*)})$ on the one hand; and the conditional expectation of $A$ given $W$ under $P_{n,h}^{(*)}$ coincides with $G_{n,h}$ on the other hand. We assume that $h \mapsto P_{n} L_{2} (Q_{n,h}^{(*)})$ is minimized globally at $h_{n,1}$ in the interior of $\mathcal{H} \cap ]0,h_{n,\mathrm{CV}}]$ (an assumption that we call \As{3}{$P_{n},1$}). If there are several minimizers, then $h_{n,1}$ is the largest of them by choice. Observe that, for every $h \in \mathcal{H} \cap ]0,h_{n,\mathrm{CV}}]$, \begin{equation*} P_{n} L_{2} (\bar{Q}_{n,h_{n,1}}^{(*)}) \leq P_{n} L_{2} (\bar{Q}_{n,h}^{(*)}) \leq P_{n} L_{2} (\bar{Q}_{n,h}^{0}) \end{equation*} and, in particular, \begin{equation*} P_{n} L_{2} (\bar{Q}_{n,h_{n,1}}^{(*)}) < P_{n} L_{2} (\bar{Q}_{n,h_{n,\mathrm{CV}}}^{*}) \leq P_{n} L_{2} (\bar{Q}_{n,h_{n,\mathrm{CV}}}^{0}). \end{equation*} Let us assume now that, in addition, $h \mapsto \varepsilon_{n,h,1}$, $h \mapsto 1/G_{n,h}(W_{i})$ and $h \mapsto 1/(1-G_{n,h}(W_{i}))$ (all $1 \leq i \leq n$) are differentiable in an open neighborhood of $h_{n,1}$ (an assumption that we call \As{4}{$P_{n},1$}).
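Before pursuing the argument, the $h$-specific fluctuation step just described can be sketched numerically. The following is a self-contained illustration on simulated data (hypothetical variable names, not the authors' code), assuming the usual average-treatment-effect clever covariate $\mathcal{C}(G)(A,W) = A/G(W) - (1-A)/(1-G(W))$; the optimal $\varepsilon$ minimizing the empirical quasi-log-likelihood risk is found by Newton-Raphson:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))

rng = np.random.default_rng(0)
n = 500
W = rng.uniform(-1, 1, n)
G = np.clip(expit(0.8 * W), 0.1, 0.9)                    # stand-in for G_{n,h}(W)
A = rng.binomial(1, G)
Y = rng.binomial(1, expit(0.5 * W + 0.3 * A))
Q0 = np.clip(expit(0.2 * W + 0.1 * A), 1e-3, 1 - 1e-3)   # stand-in for \bar{Q}_n^0(A, W)

C = A / G - (1 - A) / (1 - G)                            # clever covariate

eps = 0.0
for _ in range(25):                       # Newton-Raphson on the empirical risk
    Qe = expit(logit(Q0) + eps * C)
    score = np.mean(C * (Y - Qe))         # minus the derivative of the risk
    hessian = np.mean(C ** 2 * Qe * (1 - Qe))
    eps += score / hessian
Q_star = expit(logit(Q0) + eps * C)       # the targeted \bar{Q}_{n,h}^{(*)}

# by construction, the targeted estimator solves the score equation
assert abs(np.mean(C * (Y - Q_star))) < 1e-8
```

The final assertion is the empirical analogue of $P_{n} D_{2}^{*}(\bar{Q}_{n,h}^{(*)}, G_{n,h}) = 0$, the property exploited right below.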
Consequently, \textit{(i)} $\partial_{h_{n,1}} D^{*} (\bar{Q}_{n,h_{n,1}}^{(*)}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) (O_{i})$ is well defined for each $1 \leq i \leq n$ (see \eqref{eq:dD:general}), and \textit{(ii)} $h \mapsto P_{n} L_{2} (\bar{Q}_{n,h}^{(*)})$ is differentiable in that neighborhood. Moreover, since $h_{n,1}$ minimizes the previous mapping, we have \begin{eqnarray*} 0 &=& -\left.\frac{d}{dt} P_{n} L_{2} (\bar{Q}_{n,t}^{(*)})\right|_{t=h_{n,1}} \\ &=& \left(\left.\frac{d}{dt} \varepsilon_{n,t,1} \right|_{t=h_{n,1}}\right) \times P_{n} D_{2}^{*} (\bar{Q}_{n,h_{n,1}}^{(*)}, G_{n,h_{n,1}}) + \varepsilon_{n,h_{n,1},1} \times P_{n} \partial_{h_{n,1}} D^{*} (\bar{Q}_{n,h_{n,1}}^{(*)}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) \\ &=& \varepsilon_{n,h_{n,1},1} \times P_{n} \partial_{h_{n,1}} D^{*} (\bar{Q}_{n,h_{n,1}}^{(*)}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}), \end{eqnarray*} where the third equality holds because \begin{equation*} P_{n} D_{2}^{*} (\bar{Q}_{n,h_{n,1}}^{(*)}, G_{n,h_{n,1}}) = P_{n} D_{2}^{*} (\bar{Q}_{n,h_{n,1}, \varepsilon_{n,h_{n,1},1}}^{0}, G_{n,h_{n,1}}) = 0 \end{equation*} in light of \eqref{eq:EIC:2:eqn:solved}. If $\varepsilon_{n,h_{n,1},1} \neq 0$ (an assumption that we call \As{5}{$P_{n}, 1$}), then we thus have proven that the following equation is solved \begin{equation*} P_{n} \partial_{h_{n,1}} D^{*} (\bar{Q}_{n,h_{n,1}}^{(*)}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) = 0.
\end{equation*} To complete the initialization, we define $h_{n,0} \equiv h_{n,\mathrm{CV}}$, $\bar{Q}_{n,h_{n,1}}^{*} \equiv \bar{Q}_{n,h_{n,1}}^{(*)}$, $Q_{n,h_{n,1}}^{*} \equiv Q_{n,h_{n,1}}^{(*)}$, $P_{n,h_{n,1}}^{*} \equiv P_{n,h_{n,1}}^{(*)}$, $\psi_{n,h_{n,1}}^{*} \equiv \Psi(P_{n,h_{n,1}}^{*})$, and note that they satisfy \begin{gather*} P_{n} \partial_{h_{n,1}} D^{*} (\bar{Q}_{n,h_{n,1}}^{*}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) = P_{n} D^{*} (P_{n,h_{n,1}}^{*}) = 0 \qquad \text{and}\\ P_{n} L_{2} (\bar{Q}_{n,h_{n,1}}^{*}) < P_{n} L_{2} (\bar{Q}_{n,h_{n,0}}^{*}) \end{gather*} (recall how \eqref{eq:EIC:2:eqn:solved} implied \eqref{eq:EIC:eqn:solved} earlier). \item[Recursion.] Let $k \geq 2$ be arbitrarily chosen. Suppose that, for all $1 \leq \ell < k$, we have already built the 5-tuples $(h_{n,\ell},\bar{Q}_{n,h_{n,\ell}}^{*}, Q_{n,h_{n,\ell}}^{*}, P_{n,h_{n,\ell}}^{*}, \psi_{n,h_{n,\ell}}^{*})$ under assumptions \As{1}{$P_{n},\ell$} to \As{5}{$P_{n},\ell$}, and also that $\mathcal{H} \cap ]0,h_{n,k-1}] \neq \emptyset$ (an assumption that we call \As{1}{$P_{n},k$}). Let us now present the construction of $(h_{n,k}, \bar{Q}_{n,h_{n,k}}^{*}, Q_{n,h_{n,k}}^{*}, P_{n,h_{n,k}}^{*})$ under assumptions \As{1}{$P_{n},k$} to \As{5}{$P_{n},k$}. Because the presentation is very similar to that of the initialization, it is laid out more directly. For every $h \in \mathcal{H} \cap ]0,h_{n,k-1}]$, we build \textit{again} $\bar{Q}_{n,h}^{(*)}$ and $P_{n,h}^{(*)}$ \textit{but} using $\bar{Q}_{n,h_{n,k-1}}^{*}$ as an initial estimator of $\bar{Q}_{0}$ and $G_{n,h}$ as the estimator of $G_{0}$.
Specifically, for every $h \in \mathcal{H} \cap ]0,h_{n,k-1}]$, we define $\bar{Q}_{n,h,\varepsilon}^{k-1}$ as in \eqref{eq:fluct:ref} with $\bar{Q}_{n,h_{n,k-1}}^{*}$ substituted for $\bar{Q}_{n}^{0}$, $\varepsilon_{n,h,k}$ as in \eqref{eq:opt:eps:ref} with $\bar{Q}_{n,h,\varepsilon}^{k-1}$ substituted for $\bar{Q}_{n,h,\varepsilon}^{0}$ (\As{2}{$P_{n}, k$} assumes the existence of $\varepsilon_{n,h,k}$), then set $\bar{Q}_{n,h}^{(*)} \equiv \bar{Q}_{n,h,\varepsilon_{n,h,k}}^{k-1}$ and find $P_{n,h}^{(*)} \in \mathcal{M}$ such that the marginal law of $W$ under $P_{n,h}^{(*)}$ is the empirical law $Q_{W,n}$ of $\{W_{1}, \ldots, W_{n}\}$ and the conditional expectation of $Y$ given $(A,W)$ under $P_{n,h}^{(*)}$ equals $\bar{Q}_{n,h}^{(*)}$, hence $Q_{n,h}^{(*)} = (Q_{W,n}, \bar{Q}_{n,h}^{(*)})$ on the one hand; and the conditional expectation of $A$ given $W$ under $P_{n,h}^{(*)}$ coincides with $G_{n,h}$ on the other hand. We assume that $h \mapsto P_{n} L_{2} (Q_{n,h}^{(*)})$ is minimized globally at $h_{n,k}$ in the interior of $\mathcal{H} \cap ]0,h_{n,k-1}]$ (an assumption that we call \As{3}{$P_{n},k$}). If there are several minimizers, then $h_{n,k}$ is the largest of them by choice. Moreover, we also assume that $h \mapsto \varepsilon_{n,h,k}$, $h \mapsto 1/G_{n,h}(W_{i})$ and $h \mapsto 1/(1-G_{n,h}(W_{i}))$ (all $1 \leq i \leq n$) are differentiable in an open neighborhood of $h_{n,k}$ (an assumption that we call \As{4}{$P_{n},k$}).
Consequently, $\partial_{h_{n,k}} D^{*} (\bar{Q}_{n,h_{n,k}}^{(*)}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}})(O_{i})$ is well defined for each $1 \leq i \leq n$ (see \eqref{eq:dD:general}), $h \mapsto P_{n} L_{2} (\bar{Q}_{n,h}^{(*)})$ is differentiable in that neighborhood and, since $h_{n,k}$ minimizes the previous mapping, \begin{equation*} \varepsilon_{n,h_{n,k},k} \times P_{n} \partial_{h_{n,k}} D^{*} (\bar{Q}_{n,h_{n,k}}^{(*)}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) = 0. \end{equation*} If $\varepsilon_{n,h_{n,k},k} \neq 0$ (an assumption that we call \As{5}{$P_{n}, k$}), then it holds that \begin{equation*} P_{n} \partial_{h_{n,k}} D^{*} (\bar{Q}_{n,h_{n,k}}^{(*)}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) = 0. \end{equation*} To complete the presentation and the recursion, we define $\bar{Q}_{n,h_{n,k}}^{*} \equiv \bar{Q}_{n,h_{n,k}}^{(*)}$, $Q_{n,h_{n,k}}^{*} \equiv Q_{n,h_{n,k}}^{(*)}$, $P_{n,h_{n,k}}^{*} \equiv P_{n,h_{n,k}}^{(*)}$, $\psi_{n,h_{n,k}}^{*} \equiv \Psi(P_{n,h_{n,k}}^{*})$, and note that they satisfy \begin{gather} \label{eq:two:eqns:solved:recur} P_{n} \partial_{h_{n,k}} D^{*} (\bar{Q}_{n,h_{n,k}}^{*}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) = P_{n} D^{*} (P_{n,h_{n,k}}^{*}) = 0 \qquad \text{and}\\ \notag P_{n} L_{2} (\bar{Q}_{n,h_{n,\ell}}^{*}) < P_{n} L_{2} (\bar{Q}_{n,h_{n,\ell-1}}^{*}) \end{gather} for all $1 \leq \ell \leq k$. \end{description} We discuss when to stop the loop in the next paragraph. The collection $\{P_{n,h_{n,k}}^{*}: 0 \leq k \leq K_{n}\}$ of TMLEs is arguably built collaboratively, as the derivation of every $P_{n,h_{n,\ell}}^{*}$ heavily depends on $P_{n,h_{n,\ell-1}}^{*}$. The loop is iterated until a stopping criterion is met. The instantiations of the collaborative TMLE laid out in Section~\ref{sec:software} rely on the LASSO logistic regression algorithm. It is thus possible to pre-specify an upper bound on $K_{n}$.
In general, we may decide to stop the recursive construction whenever a maximal number $K_{\max}$ of iterations has been reached, or $h_{n,k} \leq h_{\min}$, or $M$ successive TMLEs $\psi_{n,h_{n,k+m}}^{*}$ ($0 \leq m < M$) all belong to an interval of length smaller than $\eta_{n,k}$, for some user-supplied integers $K_{\max}$, $M$ and small positive numbers $h_{\min}$ and $\eta_{n,k}$, the former chosen such that $\mathcal{H} \cap ]0,h_{\min}[$ is non-empty and the latter possibly sample-size- and data-driven. The choice of $K_{\max}$ would typically be driven by considerations about the computational time. The choice of $h_{\min}$ would typically depend on the collection $\{\hat{G}_h : h \in \mathcal{H}\}$ of $h$-specific algorithms, $h\leq h_{\min}$ meaning that too much undersmoothing is certainly at play when using $\hat{G}_{h}$. We would suggest choosing $M\equiv 3$ and characterizing $\eta_{n,k}$ by $\eta_{n,k}^{2} \equiv \Upsilon_{P_{n}}(P_{n,h_{n,k}}^{*}) / 10n$ with $\Upsilon_{P_{n}} : \mathcal{M} \to \mathbb{R}_{+}^{*}$ given by \begin{equation*} \Upsilon_{P_{n}}(P) \equiv E_{P_{n}}\left[D^{*}(P)(O)^{2}\right] = \frac{1}{n} \sum_{i=1}^{n} D^{*}(P)(O_{i})^{2}. \end{equation*} The definition of $\Upsilon_{P_{n}}$ is justified by the fact that $\Upsilon_{P_{n}} (P_{n,h_{n}}^{*})$ estimates the asymptotic variance of the TMLE $\Psi(P_{n,h_{n}}^{*})$ in the context where we prove \eqref{eq:asymp:exp:intro} (with $\mathrm{IF} = D^{*} (P_{0})$) in Section~\ref{subsec:select:uncoop}. \subsection{Selecting one of the finitely many collaborative TMLEs} \label{subsec:select:colla} It remains to determine which TMLE to select among the collection of collaborative TMLEs that we constructed in Section~\ref{subsec:continuum:CTMLEs}. Again, the selection hinges on the cross-validation principle.
The recursive construction described in Section~\ref{subsec:continuum:CTMLEs} can be applied to the empirical measure $\mathbb{P}_{n}$ of any subset of the complete data set. Starting from $h_{n,\mathrm{CV}}$ (as defined in \eqref{eq:CV:selector} even when $\mathbb{P}_{n}$ differs from $P_{n}$), let the 5-tuple $(\mathbb{H}_{n,1}, \bar{\bbQ}_{n,\mathbb{H}_{n,1}}^{*}, \mathbb{Q}_{n,\mathbb{H}_{n,1}}^{*}, \mathbb{P}_{n,\mathbb{H}_{n,1}}^{*}, \Psi(\mathbb{P}_{n,\mathbb{H}_{n,1}}^{*}))$ be defined like the 5-tuple $(h_{n,1},\bar{Q}_{n,h_{n,1}}^{*}, Q_{n,h_{n,1}}^{*}, P_{n,h_{n,1}}^{*}, \psi_{n,h_{n,1}}^{*})$ with $\mathbb{P}_{n}$ substituted for $P_{n}$, under assumptions \As{1}{$\mathbb{P}_{n}, 1$} to \As{5}{$\mathbb{P}_{n}, 1$}. Then, recursively, let $(\mathbb{H}_{n,k}, \bar{\bbQ}_{n,\mathbb{H}_{n,k}}^{*}, \mathbb{Q}_{n,\mathbb{H}_{n,k}}^{*}, \mathbb{P}_{n,\mathbb{H}_{n,k}}^{*}, \Psi(\mathbb{P}_{n,\mathbb{H}_{n,k}}^{*}))$ be defined like $(h_{n,k},\bar{Q}_{n,h_{n,k}}^{*}, Q_{n,h_{n,k}}^{*}, P_{n,h_{n,k}}^{*}, \psi_{n,h_{n,k}}^{*})$ with $\mathbb{P}_{n}$ substituted for $P_{n}$, under assumptions \As{1}{$\mathbb{P}_{n}, k$} to \As{5}{$\mathbb{P}_{n}, k$}. The recursive construction is stopped when $\mathbb{K}_{n}$ 5-tuples have been derived, where $\mathbb{K}_{n}$ is defined like $K_{n}$ with $\mathbb{P}_{n}$ substituted for $P_{n}$. The collection \begin{equation} \label{eq:bbCTMLE} \left\{(\mathbb{H}_{n,k}, \bar{\bbQ}_{n,\mathbb{H}_{n,k}}^{*}, \mathbb{Q}_{n,\mathbb{H}_{n,k}}^{*}, \mathbb{P}_{n,\mathbb{H}_{n,k}}^{*}, \Psi(\mathbb{P}_{n,\mathbb{H}_{n,k}}^{*})) : 1 \leq k \leq \mathbb{K}_{n}\right\} \end{equation} of $\mathbb{K}_{n}$ collaborative TMLEs is used to define a continuum of collaborative TMLEs in the following straightforward way. The challenge is to associate a 4-tuple $(\bar{\bbQ}_{n,h}^{*}, \mathbb{Q}_{n,h}^{*}, \mathbb{P}_{n,h}^{*}, \Psi(\mathbb{P}_{n,h}^{*}))$ to any $h \in \mathcal{H} \cap ]0, h_{n,\mathrm{CV}}]$. 
To do so, we simply let $\mathbb{H}_{n}(h)$ be the element of $\{\mathbb{H}_{n,k} : 1 \leq k \leq \mathbb{K}_{n}\}$ that is closest to $h$ (with a preference for the larger of the two closest elements when $h$ is exactly halfway between them), that is, formally, we set \begin{equation} \label{eq:bbH} \mathbb{H}_{n}(h) \equiv \max \Big\{\mathbb{H}_{n,k} : |h - \mathbb{H}_{n,k}| = \min \{|h - \mathbb{H}_{n,\ell}| : 1 \leq \ell \leq \mathbb{K}_{n}\}\Big\} \end{equation} and associate to $h$ the corresponding 4-tuple \begin{equation*} (\bar{\bbQ}_{n,\mathbb{H}_{n}(h)}^{*}, \mathbb{Q}_{n,\mathbb{H}_{n}(h)}^{*}, \mathbb{P}_{n,\mathbb{H}_{n}(h)}^{*}, \Psi(\mathbb{P}_{n,\mathbb{H}_{n}(h)}^{*})). \end{equation*} Let $B_{n}$ be the cross-validation scheme introduced in Section~\ref{subsec:select:uncoop}. By convention, let the $\max$ of the empty set be 0. The collaborative TMLE that we select is \begin{equation} \label{eq:CTMLE} (\bar{Q}_{n,h_{n,\kappa_{n}}}^{*}, Q_{n,h_{n,\kappa_{n}}}^{*}, P_{n,h_{n,\kappa_{n}}}^{*}, \psi_{n,h_{n,\kappa_{n}}}^{*}) \end{equation} where $\kappa_{n}$ is given by \begin{equation} \label{eq:kappa} \kappa_{n} \equiv 1 \vee \max \left\{1 \leq k \leq K_{n} : h_{n,k} \geq \mathop{\arg\min}_{h \in \mathcal{H} \cap ]0, h_{n,\mathrm{CV}}]} E_{B_n} \left[P_{n,B_n}^1 L_{2}(\bar{\bbQ}_{n,\mathbb{H}_{n}(h)}^{*}\big|_{\mathbb{P}_{n} = P_{n,B_n}^0})\right]\right\}. \end{equation} In words, $\kappa_{n}$ is the unique element of $\{1, \ldots, K_{n}\}$ such that $h_{n,\kappa_{n}}$ is the smallest element of $\{h_{n,1}, \ldots, h_{n,K_{n}}\}$ that is larger than the minimizer of the cross-validated $L_{2}$-risk of the collaborative TMLE, if there exists such an element, and 1 otherwise. In \eqref{eq:kappa}, $\bar{\bbQ}_{n,\mathbb{H}_{n}(h)}^{*}\big|_{\mathbb{P}_{n} = P_{n,B_n}^0}$ equals $\bar{\bbQ}_{n,\mathbb{H}_{n}(h)}^{*}$ when $\mathbb{P}_{n} = P_{n,B_n}^0$. The contrast between $h_{n,\kappa_{n}}$ and $h_{n,\mathrm{CV}}$ is stark.
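The nearest-index map \eqref{eq:bbH} is elementary; a minimal sketch (hypothetical names, not the authors' code) with the tie-breaking rule towards the larger value:

```python
def nearest_h(h, h_grid):
    """The map H_n(h): the element of h_grid closest to h, with ties
    broken towards the larger of the two closest elements."""
    best = min(abs(h - g) for g in h_grid)
    return max(g for g in h_grid if abs(h - g) == best)

grid = [1.0, 3.0, 5.0]
assert nearest_h(2.0, grid) == 3.0   # equidistant: the larger element wins
assert nearest_h(1.2, grid) == 1.0
assert nearest_h(4.4, grid) == 5.0
```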
At first glance, the main difference is that the role played by cross-validated $L_{1}$-risks of algorithms to estimate $G_{0}$ in \eqref{eq:CV:selector} is played by cross-validated $L_{2}$-risks of algorithms to estimate $\bar{Q}_{0}$ in \eqref{eq:kappa}. A closer examination reveals that the difference is deeper. Replacing $L_{1} (\hat{G}_{h} (P_{n,B_{n}}^{0}))$ by $L_{2} (\bar{Q}_{n,B_{n},h}^{0*})$ (with $\bar{Q}_{n,B_{n},h}^{0*}$ defined like $\bar{Q}_{n,h}^{*}$ in Section~\ref{subsec:continuum:TMLEs} but based on $P_{n,B_{n}}^{0}$ instead of $P_{n}$) would not make of the resulting alternative cross-validated selector of $h$ a good candidate: because of the inherent lack of cooperation between the uncooperative TMLEs $\psi_{n,h}^{*}$, the resulting estimator of $G_{0}$ would not even be consistent. This fact motivates the general C-TMLE methodology, of which the present instantiation includes a twist consisting in solving two critical equations, see \eqref{eq:two:eqns:solved:recur}. \subsection{Asymptotics} \label{subsec:asymptot} The study of the asymptotic properties of the collaborative TMLE $\psi_{n,h_{n,\kappa_{n}}}^{*}$ hinges on Theorem~\ref{theo:high:level}. We first specify two pseudo-distances $d_{\mathcal{G}}$ and $d_{\mathcal{Q}}$ on $\mathcal{G}$ and $\mathcal{Q}$ in light of requirement \eqref{eq:remainder:bound:intro}. On the one hand, because we will eventually assume that $G_{0}$ is bounded away from zero and one, \eqref{eq:remainder} yields that we can choose $d_{\mathcal{G}}$ such that, for each $G_{1}, G_{2} \in \mathcal{G}$, \begin{equation*} d_{\mathcal{G}} (G_{1},G_{2})^{2} \equiv P_{0} (G_{1}-G_{2})^{2}.
\end{equation*} On the other hand, note that any data-dependent $Q_{n} \equiv (Q_{W,n}, \bar{Q}_{n}) \in \mathcal{Q}$ naturally gives rise to a substitution estimator $\psi_{n}$ of $\psi_{0}$: \begin{equation*} \psi_{n} \equiv E_{Q_{W,n}} \left(\bar{Q}_{n} (1, W) - \bar{Q}_{n} (0,W)\right) = \frac{1}{n} \sum_{i=1}^{n} \left(\bar{Q}_{n} (1, W_{i}) - \bar{Q}_{n} (0,W_{i})\right). \end{equation*} It is easy to check (see the appendix) that the following result holds. \begin{lemma} \label{lem:easy} Assume that $G_{0}$ is bounded away from zero and one and that $\bar{Q}_{n}(1,\cdot) - \bar{Q}_{n}(0,\cdot)$ falls in a $P_{0}$-Donsker class with $P_{0}$-probability tending to one. If $P_{0} (\bar{Q}_{n} - \bar{Q}_{1})^{2} = o_{P} (1)$ for some $\bar{Q}_{1}$, then $\psi_{n} = P_{0} (\bar{Q}_{1}(1,\cdot) - \bar{Q}_{1}(0,\cdot)) + o_{P} (1)$. \end{lemma} Since we always estimate the marginal law of $W$ under $P_{0}$, $Q_{W,0}$, with its empirical counterpart $Q_{W,n}$, we can thus define the pseudo-distance $d_{\mathcal{Q}}$ by setting, for each $Q_{1}, Q_{2} \in \mathcal{Q}$, \begin{equation*} d_{\mathcal{Q}} (Q_{1}, Q_{2})^{2} \equiv P_{0} (\bar{Q}_{1} - \bar{Q}_{2})^{2}, \end{equation*} an expression that does not depend on the first components of $Q_{1}$ and $Q_{2}$. Consider the following inter-dependent assumptions. The first one is related to \textbf{A1}$(Q,h,c)$ and completes \As{5}{$P_{n}, h_{n,\kappa_{n}}$}. \begin{description} \item[\textbf{C1}] There exists a universal constant $C_{1} \in ]0,1/2[$ such that $G_{0}$ and any by-product $G_{n,h}$ of algorithm $\hat{G}_{h}$ (any $h \in \mathcal{H}$) trained on the empirical measure $P_{n}$ take their values in $[C_{1}, 1-C_{1}]$.
Moreover, there exists an open neighborhood $\mathcal{T} \subset \mathcal{H}$ of $h_{n,\kappa_{n}}$ and a universal constant $C_{2} > 0$ such that $t \mapsto G_{n,t}(W)$ is twice differentiable over $\mathcal{T}$ ($P_{0}$-almost surely) and, $P_{0}$-almost surely, \begin{equation*} \sup_{h \in \mathcal{T}} \left|\frac{d}{dt} G_{n,t} (W)|_{t=h}\right| \vee \sup_{h \in \mathcal{T}} \left|\frac{d^{2}}{dt^{2}} G_{n,t} (W)|_{t=h}\right| \leq C_{2}. \end{equation*} \end{description} When \textbf{C1} is met, we denote by $G_{n,h}'(W)$ the first derivative of $t \mapsto G_{n,t}(W)$ at $h \in \mathcal{H}$. \begin{description} \item[\textbf{C2}] Both $G_{n, h_{n,\kappa_{n}}}$ and $\bar{Q}_{n,h_{n,\kappa_{n}}}^{*}$ converge in $L^{2}(P_{0})$, to $G_{0}$ and $\bar{Q}_{1}$ respectively. Moreover, it holds that $P_{0} (G_{n,h_{n,\kappa_{n}}} - G_{0})^{2} = o_{P}(1/\sqrt{n})$ and $P_{0} (G_{n,h_{n,\kappa_{n}}} - G_{0})^{2} \times P_{0} (\bar{Q}_{n,h_{n,\kappa_{n}}}^{*} - \bar{Q}_{1})^{2} = o_{P}(1/n)$. \item[\textbf{C3}] Assumption \textbf{A4} is met, $(h_{n,\kappa_{n}} - \th_{n})^{2} = o_{P}(1/\sqrt{n})$ and $(h_{n,\kappa_{n}} - \th_{n})^{2} \times P_{0} (\bar{Q}_{n,h_{n,\kappa_{n}}}^{*} - \bar{Q}_{1})^{2} = o_{P}(1/n)$. \item[\textbf{C4}] With $P_{0}$-probability tending to one, $\bar{Q}_{n,h_{n,\kappa_{n}}}^{*}$, $G_{n,h_{n,\kappa_{n}}}$, $G_{n,\th_{n}}$ and $G_{n,h_{n,\kappa_{n}}}'$ fall in $P_{0}$-Donsker classes. \end{description} We are now in a position to state the corollary of Theorem~\ref{theo:high:level} that describes the asymptotic properties of the collaborative TMLE targeting the average treatment effect.
\begin{corollary}[Asymptotics of the collaborative TMLE -- targeting the average treatment effect] \label{theo:specific} Suppose that assumptions \As{1}{$\cdot, \cdot$} to \As{5}{$\cdot, \cdot$} that we made in Sections~\ref{subsec:continuum:CTMLEs} and \ref{subsec:select:colla} when constructing the collaborative TMLE given in \eqref{eq:CTMLE} are met. In addition, suppose that \textbf{C1} to \textbf{C4} are satisfied. Then \begin{equation*} \psi_{n,h_{n,\kappa_{n}}}^{*} - \Psi(P_{0}) = (P_{n} - P_{0}) \left(D^{*} (Q_{1}, G_{0}) + \Delta(P_{1})\right) + o_{P} (1/\sqrt{n}). \end{equation*} \end{corollary} By the central limit theorem, the corollary implies that $\sqrt{n} (\psi_{n,h_{n,\kappa_{n}}}^{*} - \Psi(P_{0}))$ converges in law to the centered Gaussian law with variance $\sigma^{2} \equiv P_{0} (D^{*} (Q_{1}, G_{0}) + \Delta(P_{1}))^{2}$. Therefore, provided that we can estimate $\sigma^{2}$ consistently (or conservatively), we can build CIs for $\Psi(P_{0})$ with a given asymptotic level. Sections~\ref{sec:software}, \ref{sec:experiments} and \ref{sec:transfer} investigate the practical implementation of the collaborative TMLE and its performances in a simulation study. \section{Collaborative TMLE for continuous tuning when inferring the average treatment effect: practical implementation} \label{sec:software} In this section, we describe the practical implementation of the two instantiations of the collaborative TMLE algorithm presented and studied in Section~\ref{sec:ctmle_con}. In both of them, the collection $\{\hat{G}_{h} : h \in \mathcal{H}\}$ is embodied in \verb|R| by the \verb|glmnet| algorithm~\citep{glmnet2010}. The nature of algorithm $\hat{\bar{Q}}$ is left unspecified. As for $\bar{Q}_{n}^{0}$, it is obtained once and for all by training $\hat{\bar{Q}}$ on $P_{n}$ at the beginning of the procedure. More specifically, we never evaluate $\hat{\bar{Q}}(\mathbb{P}_{n})$ for $\mathbb{P}_{n} \neq P_{n}$.
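Before turning to the two instantiations, note that once an estimate of $\sigma^{2}$ is available, building the CI promised by the corollary is routine. A minimal sketch (simulated influence-function values and hypothetical names, not the authors' code):

```python
import numpy as np

def wald_ci(psi_hat, if_values, z=1.959963984540054):
    """Wald confidence interval psi_hat -/+ z * sigma_n / sqrt(n), where
    sigma_n^2 is the empirical variance of the (estimated) influence
    function evaluated at the observations -- the standard construction
    once an asymptotic linear expansion is granted."""
    n = len(if_values)
    sigma = np.std(if_values)          # could be replaced by a conservative estimate
    half = z * sigma / np.sqrt(n)      # z is the 97.5% standard normal quantile
    return psi_hat - half, psi_hat + half

rng = np.random.default_rng(1)
if_values = rng.normal(0.0, 2.0, size=10_000)  # stand-ins for D*(P)(O_i) values
lo, hi = wald_ci(0.08, if_values)
assert lo < 0.08 < hi
```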
\subsection{LASSO-C-TMLE} \label{sec:lasso:ctmle} We now describe our LASSO-C-TMLE algorithm. Recall that $\mathbb{P}_{n}$ denotes the empirical measure of a generic subset of the complete data set. The following algorithm implements the theoretical procedure laid out in Sections~\ref{subsec:continuum:CTMLEs} and \ref{subsec:select:colla}. \begin{enumerate} \item\label{enum:one} Build a sequence $\{G_{n, h} \equiv \hat{G}_{h} (P_{n}): h \in \mathcal{H}_{100}\}$ by computing a discretized version of the path of the LASSO logistic regression of $A$ on $W$ with a regularization parameter $h$ ranging in the set $\mathcal{H}_{100}$ provided by \verb|cv.glmnet| with options \verb|nlambda=100| (hence $\text{card}(\mathcal{H}_{100}) = 100$) and \verb|nfolds=10|. Set $h_{\min} \equiv \min \mathcal{H}_{100}$ and let $h_{n,\mathrm{CV}}$ be equal to \verb|lambda.min|. \item Build a sequence $\{\mathbb{G}_{n, h} \equiv \hat{G}_{h} (\mathbb{P}_{n}): h \in \mathcal{H}_{100} \cap [h_{\min}, h_{n,\mathrm{CV}})\}$ by computing a discretized version of the path of the LASSO logistic regression of $A$ on $W$ with a regularization parameter $h$ ranging in $\mathcal{H}_{100} \cap [h_{\min}, h_{n,\mathrm{CV}})$, using \verb|glmnet| with its \verb|lambda| argument set to the grid $\mathcal{H}_{100} \cap [h_{\min}, h_{n,\mathrm{CV}})$ from step~\ref{enum:one}. \item[] Set $k\equiv 1$ and $\mathbb{H}_{n,k-1} \equiv h_{n,\mathrm{CV}}$. \item \label{enum:three} For every $h \in \mathcal{H}_{100} \cap [h_{\min}, \mathbb{H}_{n,k-1})$, determine $\bar{\bbQ}_{n,h}^{k}$ by fluctuating $\bar{\bbQ}_{n}^{k-1}$ based on $\mathbb{G}_{n,h}$ (and $\mathbb{P}_{n}$) as in Section~\ref{subsec:continuum:TMLEs}.
\item\label{enum:four} Identify the minimizer $\mathbb{H}_{n,k}$ of $h \mapsto \mathbb{P}_{n} L_{2} (\bar{\bbQ}_{n,h}^{k})$ over $\mathcal{H}_{100} \cap [h_{\min}, \mathbb{H}_{n,k-1})$, define and store $\bar{\bbQ}_{n,h}^{*} \equiv \bar{\bbQ}_{n,h}^{k}$ for every $h \in \mathcal{H}_{100} \cap [\mathbb{H}_{n,k}, \mathbb{H}_{n,k-1})$, and finally define $\bar{\bbQ}_n^{k} \equiv \bar{\bbQ}_{n,\mathbb{H}_{n,k}}^{k}$. \item\label{enum:five} As long as $\mathbb{H}_{n,k} > h_{\min}$, set $k \leftarrow k+1$ and repeat steps~\ref{enum:three} and \ref{enum:four} recursively. \end{enumerate} The algorithm necessarily converges in a finite number of repetitions of step~\ref{enum:four}. Let $\mathbb{K}_{n}$ be the number of repetitions. For every $1 \leq k \leq \mathbb{K}_{n}$, set $\mathbb{Q}_{n,\mathbb{H}_{n,k}}^{*} \equiv (\mathbb{Q}_{W,n}, \bar{\bbQ}_{n,\mathbb{H}_{n,k}}^{*}) \in \mathcal{Q}$ (its first component is the empirical law of $W$ under $\mathbb{P}_{n}$) and let $\mathbb{P}_{n,\mathbb{H}_{n,k}}^{*} \in \mathcal{M}$ be any element of model $\mathcal{M}$ of which the $Q$-component equals $\mathbb{Q}_{n,\mathbb{H}_{n,k}}^{*}$. The collection \eqref{eq:bbCTMLE} of $\mathbb{K}_{n}$ collaborative TMLEs and the mapping $h \mapsto \mathbb{H}_{n} (h)$ over $\mathcal{H}_{100} \cap [h_{\min}, h_{n,\mathrm{CV}})$ as in \eqref{eq:bbH} are thus now well defined. Recall the definition of the cross-validation scheme $B_{n}$ introduced in Section~\ref{subsec:select:uncoop}.
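A $V$-fold scheme $B_{n}$ meeting requirements \textit{(i)} and \textit{(ii)} of Section~\ref{subsec:select:uncoop} is straightforward to generate; a minimal sketch (hypothetical names, not the authors' code):

```python
import numpy as np

def vfold_scheme(n, V, seed=0):
    """Return the V vectors b_1, ..., b_V in {0,1}^n of a V-fold
    cross-validation scheme: b_v(i) = 1 iff observation i belongs to
    the v-th validation subsample."""
    rng = np.random.default_rng(seed)
    fold = rng.permutation(np.arange(n) % V)      # balanced, shuffled fold labels
    return [(fold == v).astype(int) for v in range(V)]

n, V = 103, 10
b = vfold_scheme(n, V)
# (ii) every observation falls in exactly one validation subsample
assert (sum(b) == 1).all()
# (i) each validation proportion is close to 1/V
assert all(abs(bv.mean() - 1 / V) < 1 / n for bv in b)
```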
Set \begin{equation*} \hbar(P_{n}) \equiv \mathop{\arg\min}_{h \in \mathcal{H}_{100} \cap [h_{\min}, h_{n,\mathrm{CV}})} E_{B_{n}} \left[P_{n,B_{n}}^{1} L_{2} (\left.\bar{\bbQ}_{n,\mathbb{H}_{n}(h)}^{*}\right|_{\mathbb{P}_{n} = P_{n,B_{n}}^{0}})\right] \end{equation*} and run once steps~\ref{enum:one} to \ref{enum:five} with $\mathbb{P}_{n} \equiv P_{n}$, hence the collection \begin{equation*} \left\{(h_{n,k}, \bar{Q}_{n,h_{n,k}}^{*}, Q_{n,h_{n,k}}^{*}, P_{n,h_{n,k}}^{*}, \Psi(P_{n,h_{n,k}}^{*})) : 1 \leq k \leq K_{n}\right\} \end{equation*} of collaborative TMLEs. Finally, set \begin{equation*} \kappa_{n} \equiv 1 \vee \max \left\{1 \leq k \leq K_{n} : h_{n,k} \geq \hbar(P_{n})\right\}. \end{equation*} The collaborative TMLE that we select, our LASSO-C-TMLE estimator, is $\Psi(P_{n,h_{n,\kappa_{n}}}^{*})$, as in \eqref{eq:CTMLE}. \subsection{LASSO-PSEUDO-C-TMLE} \label{sec:lasso:pseudo:ctmle} The LASSO-C-TMLE procedure described in Section~\ref{sec:lasso:ctmle} is quite demanding computationally. It is thus tempting to try and develop an alternative algorithm that would mimic LASSO-C-TMLE but be simpler. In Section~\ref{subsec:select:colla}, we emphasized (see the comment made before the statement of the theorem) that one of the keys of LASSO-C-TMLE is to ensure the existence of $h_{n} \in \mathcal{H}$ and $\bar{Q}_{n,h_{n}}^{*}$ such that \begin{equation*} P_{n} \partial_{h_{n}} D^{*} (\bar{Q}_{n,h_{n}}^{*}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) = 0. \end{equation*} If we knew how to compute the derivative $G_{n,h}'(W)$ of $t \mapsto G_{n,t}(W)$ at $t=h$, then this could be easily achieved by enriching the fluctuation of the initial $\bar{Q}_{n}^{0} \equiv \hat{\bar{Q}} (P_{n})$.
Specifically, in light of \eqref{eq:clever} and \eqref{eq:fluct:ref}, given any $h\in \mathcal{H}$, we would define \begin{equation} \label{eq:clever:two} \mathcal{C}_{h}^{+} (G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) (A,W) \equiv \mathcal{C} (G_{n,h}) (A,W) \left(1, G_{n,h}'(W)\right), \end{equation} introduce $\bar{Q}_{n,h,\varepsilon^{+}}^{0}$ characterized for any $\varepsilon^{+} \in \mathbb{R}^{2}$ by \begin{equation*} \logit \left(\bar{Q}_{n,h,\varepsilon^{+}}^{0} (A,W)\right) \equiv \logit \left(\bar{Q}_{n}^{0} (A,W)\right) + \mathcal{C}_{h}^{+}(G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) (A,W) \varepsilon^{+} \end{equation*} and $P_{n,h,\varepsilon^{+}}^{0} \in \mathcal{M}$ defined like $P_{n,h,\varepsilon}^{0}$ except that the conditional expectation of $Y$ given $(A,W)$ under $P_{n,h,\varepsilon^{+}}^{0}$ equals $\bar{Q}_{n,h,\varepsilon^{+}}^{0}$ (and not $\bar{Q}_{n,h,\varepsilon}^{0}$). Then, the optimal fluctuation would be indexed by the minimizer of the empirical risk \begin{equation*} \varepsilon_{n,h}^{+} \equiv \mathop{\arg\min}_{\varepsilon^{+} \in \mathbb{R}^{2}} P_{n} L_{2} (\bar{Q}_{n,h,\varepsilon^{+}}^{0}). \end{equation*} It would result in $\bar{Q}_{n,h}^{*+} \equiv \bar{Q}_{n,h,\varepsilon_{n,h}^{+}}^{0}$, $P_{n,h}^{*+} \equiv P_{n,h,\varepsilon_{n,h}^{+}}^{0}$ and the TMLE \begin{equation*} \psi_{n,h}^{*+} \equiv \Psi(P_{n,h}^{*+}) = \frac{1}{n} \sum_{i=1}^{n} \left(\bar{Q}_{n,h}^{*+} (1, W_{i}) - \bar{Q}_{n,h}^{*+} (0, W_{i}) \right) \end{equation*} where, by construction, we would have \begin{equation*} P_{n} D^{*} (\bar{Q}_{n,h}^{*+}, G_{n,h}) = P_{n} \partial_{h} D^{*} (\bar{Q}_{n,h}^{*+}, G_{n,\raisebox{-0.3ex}{\scalebox{1.2}{$\cdot$}}}) = 0. \end{equation*} The LASSO-PSEUDO-C-TMLE algorithm that we now describe adapts the above procedure. It unfolds as follows.
\begin{enumerate} \item Build a sequence $\{G_{n, h} \equiv \hat{G}_{h} (P_{n}): h \in \mathcal{H}_{100}\}$ by computing a discretized version of the path of the LASSO logistic regression of $A$ on $W$ with a regularization parameter $h$ ranging in the set $\mathcal{H}_{100}$ provided by \verb|cv.glmnet| with option \verb|nlambda=100| (hence $\text{card}(\mathcal{H}_{100}) = 100$). Let $h_{n}$ be equal to \verb|lambda.min|. Our estimator of $G_{0}$ is $G_{n,h_{n}}$. \item Choose arbitrarily $h_{n}^{+} \in \mathop{\arg\min} \{|h - h_{n}| : h \in \mathcal{H}_{100}, h \neq h_{n}\}$ and, for every $1 \leq i \leq n$, define \begin{equation*} G_{n,h_{n}}^{\prime +} (W_{i}) \equiv \frac{G_{n,h_{n}^{+}} (W_{i}) - G_{n,h_{n}} (W_{i})}{h_{n}^{+} - h_{n}}, \end{equation*} a rudimentary numerical approximation of the derivative $G_{n,h_{n}}' (W_{i})$ of $t \mapsto G_{n,t} (W_{i})$ at $t = h_{n}$. \item Determine $\bar{Q}_{n,h_{n}}^{*+}$ and $P_{n,h_{n}}^{*+}$ as described above, with $h=h_{n}$ and $G_{n,h_{n}}^{\prime +}$ substituted for $G_{n,h_{n}}'$ in \eqref{eq:clever:two}. \end{enumerate} The LASSO-PSEUDO-C-TMLE estimator is $\psi_{n,h_{n}}^{*+}$. \section{Main simulation study} \label{sec:experiments} In this section, we present the results of a multi-faceted simulation study of the behaviors and performances of the two instantiations of the collaborative TMLE described in Section~\ref{sec:software}. Section~\ref{subsec:sim} specifies the synthetic data-generating distribution $P_{0}$ that we use, Section~\ref{subsec:compet} introduces the competing estimators, Section~\ref{subsec:strategy} outlines the structure of the simulation study, and Section~\ref{subsec:results} gathers its results. Written in \verb|R|~\citep{R}, our code makes extensive use of the \verb|C-TMLE| package~\citep{gruber2010ctmle}.
\subsection{Synthetic data-generating distribution} \label{subsec:sim} Our synthetic data-generating distribution $P_{0} = \Pi_{0,p,\delta}$ depends on two fine-tune parameters: the dimension~$p$ of the baseline covariate $W$ and a nonnegative constant $\delta \geq 0$. Sampling $O \equiv (W,A,Y)$ under $\Pi_{0,p,\delta}$ unfolds sequentially along the following steps. \begin{enumerate} \item Sample $\tilde{W}$ from the centered Gaussian law on $\mathbb{R}^{M}$, $M = \ceil{p/10}$, of which the covariance matrix $\Sigma$ is the block-diagonal matrix $(A_{kl})_{1 \leq k, l \leq M}$ where: $A_{11}$ is the $10\times 10$ identity matrix; each $A_{kk}$ for $1<k\leq M$ is the block-diagonal matrix $(B_{k,st})_{1 \leq s,t \leq 4}$ with \begin{equation*} B_{k,11} = \left( \begin{array}{ccc} 1&0&.25\\ 0&1&.25\\ .25&.25&1\\ \end{array} \right), \quad B_{k,22} = B_{k,33} = \left( \begin{array}{cc} 1&.5\\ .5&1\\ \end{array} \right), \quad B_{k,44} = \left( \begin{array}{ccc} 1&.5&0\\ .5&1&0\\ 0&0&1\\ \end{array} \right) \end{equation*} and $B_{k,st}$ is a zero matrix for $1 \leq s \neq t \leq 4$; each $A_{kl}$ for $1 \leq k\neq l \leq M$ is a zero matrix. If $p=10$, then $\Sigma = A_{11}$ and we set $W \equiv \tilde{W}$. If $M > 10p$, then we set $W \equiv (\tilde{W}_{1}, \ldots, \tilde{W}_{p})^{\top}$. \item Sample $A$ conditionally on $W$ from the Bernoulli law with paramater \begin{equation*} G}%{\bar{G}_{0} (W) \equiv \expit \left(\delta + \sum_{k=1}^{p} \beta_{k} W_{k}\right), \end{equation*} where $(\beta_1, \cdots, \beta_p) = (1, 1, 3/(p-2), \ldots, 3/(p-2))$. \item Sample $\tilde{Y}$ conditionally on $(A,W)$ from the Gaussian law with (conditional) variance 1/25 and expectation \begin{equation*} f_{0}(A,W) \equiv \frac{2}{5} (1 + W_{1} + W_{2} + W_{5} + W_{6} + W_{8} + A), \end{equation*} then define $Y \equiv \expit(\tilde{Y})$. \end{enumerate} The covariance matrix $\Sigma$ induces a loose dependence structure. 
The components of $\tilde{W}$ can be gathered in $1+4\times (M-1)$ independent groups, one group consisting of $10+(M-1)$ independent random variables, and the other groups consisting of either two or three mildly dependent random variables (with correlations equal to either 0.25 or 0.5). Neither \begin{equation*} \bar{Q}_{0} (A,W) \equiv \int_{[0,1]} \frac{e^{-[\logit(u) - f_{0}(A,W)]^{2}/50}}{10\sqrt{\pi} (1 - u)} du \end{equation*} nor $\Psi(\Pi_{0,p,\delta})$ has a closed form expression. Independently of $p$ and $\delta$, $\Psi(\Pi_{0,p,\delta}) \approx 0.0799$. \subsection{Competing estimators} \label{subsec:compet} Let $O_{1}, \ldots, O_{n}$ be independent draws from $P_{0}$. Recall the characterization of $\hat{G}%{\bar{G}}_{h}$ ($h \in \mathcal{H}_{100}$) and definition of $h_{n,\mathrm{CV}}$ given in Section~\ref{sec:lasso:ctmle}, step~\ref{enum:one}. Let the algorithm $\hat{\bar{Q}}$ for the estimation of $\bar{Q}_{0}$ consist in fitting the working model $\{\bar{Q}_{\theta} : \theta = (\theta_{0}, \theta_{1}), \theta_{0}, \theta_{1} \in \mathbb{R}^{8}\}$ where $\bar{Q}_{\theta}$ is given by \begin{equation*} \bar{Q}_{\theta} (A,W) \equiv \Phi\left((A \theta_{1}^{\top} + (1-A)\theta_{0}^{\top})) (W_{3}, \ldots, W_{10})^{\top}\right) \end{equation*} with $\Phi$ the distribution function of the standard normal law. Note that the working model is necessarily mis-specified, notably because of the absence of $W_{1}$ and $W_{2}$ in the above definition. Recall that $\bar{Q}_{n}^{0}$ is obtained by training $\hat{\bar{Q}}$ on the whole data set once and for all. To emphasize, $\hat{\bar{Q}}$ is never re-trained during the cross-validation procedure. This is consistent with implementation the original instantiation of the C-TMLE algorithm and of its scalable instantiations. 
We compare the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators of $\Psi(\Pi_{0,p,\delta})$ from Section~\ref{sec:software} with the following commonly used competitors: \begin{itemize} \item the unadjusted estimator: \begin{equation*} \psi_n^{\text{unadj}}\equiv \frac{\sum_{i=1}^n A_iY_i}{\sum_{i=1}^n A_i} - \frac{\sum_{i=1}^n (1-A_i) Y_i}{\sum_{i=1}^n (1-A_i)}; \end{equation*} \item the so called G-comp estimator \citep{robins1986new}: \begin{equation*} \psi_n^{\text{G-comp}} \equiv \frac{1}{n} \sum_{i=1}^n \left(\bar{Q}_n^0(1,W_i) - \bar{Q}_n^0(0,W_i)\right); \end{equation*} \item the so called IPTW estimator \citep{horvitz1952generalization,robins2000marginal}: \begin{equation*} \psi_n^{\text{IPTW}} \equiv \frac{1}{n} \sum_{i=1}^n \frac{ (2A_i - 1)Y_{i}}{G}%{\bar{G}_{n,h_{\mathrm{CV}}}(A_i,W_i)}; \end{equation*} \item the so-called A-IPTW estimator \citep{robins1995semiparametric}: \begin{equation*} \psi_n^{\text{A-IPTW}} \equiv \frac{1}{n} \sum_{i=1}^n \frac{(2A_i-1)}{G}%{\bar{G}_{n,h_{\mathrm{CV}}}(A_i,W_i)} \left(Y_i-\bar{Q}^0_n(W_i,A_i)\right) + \frac{1}{n} \sum_{i=1}^n \left(Q^0_n(1,W_i)-Q^0_n(0,W_i)\right); \end{equation*} \item and the plain TMLE estimator $\psi_{n,h_{n,\mathrm{CV}}}^{*}$, see \eqref{eq:TMLE}. \end{itemize} \subsection{Outline of the structure of the simulation study} \label{subsec:strategy} We consider six different scenarios. 
In each of them, we repeat independently $B=200$ times the following steps: for each $(n, p, \delta)$ in a collection of scenario-specific triplets, \begin{enumerate} \item simulate a data set of $n$ independent observations drawn from $\Pi_{0,p,\delta}$; \item derive the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators of Sections~\ref{sec:lasso:ctmle} and \ref{sec:lasso:pseudo:ctmle} as well as the competing estimators presented in Section~\ref{subsec:compet}; \item for the double-robust estimators only, \textit{i.e.}, $\psi_n^{\text{A-IPTW}}$, $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ and our two collaborative TMLEs, construct 95\% CIs and check whether or not each of them contains~$\Psi(\Pi_{0,p,\delta})$. \end{enumerate} \subsubsection*{Building confidence intervals based on the collaborative TMLEs.} By Corollary~\ref{theo:specific}, the asymptotic variances of our collaborative TMLEs both write as \begin{equation} \label{eq:asymp:var} \mathrm{Var}_{P_{0}} \left[\left(D^{*}(Q_{1}, G}%{\bar{G}_{0}) + \Delta(P_{1})\right)(O)\right]. \end{equation} Because $\Delta(P_{1})$ is difficult to estimate, we estimate \eqref{eq:asymp:var} with the empirical variance of $D^{*}(P_{n,h_{n,\kappa_{n}}}^{*})$, \textit{i.e.}, with \begin{equation*} P_{n} D^{*}(P_{n,h_{n,\kappa_{n}}}^{*})^{2} = \frac{1}{n} \sum_{i=1}^{n} D^{*}(P_{n,h_{n,\kappa_{n}}}^{*}) (O_{i})^{2} \end{equation*} (recall that $P_{n} D^{*}(P_{n,h_{n,\kappa_{n}}}^{*}) = 0$ by construction). Therefore, the 95\% CIs based on our collaborative TMLEs take the form \begin{equation*} \psi_{n,h_{n,\kappa_{n}}}^{*} \pm 1.96 \sqrt{P_{n} D^{*}(P_{n,h_{n,\kappa_{n}}}^{*})^{2}/n}. \end{equation*} We anticipate that the asymptotic variances are over-estimated, resulting in CIs that are too wide. 
However, we also anticipate that the omitted correction term is of second order relative to main term, or, put in other words, that the difference between \eqref{eq:asymp:var} and $\mathrm{Var}_{P_{0}} (D^{*}(Q_{1}, G}%{\bar{G}_{0})(O))$ is small. \subsubsection*{Six scenarios.} The three first scenarios investigate what happens when $\delta = 0$ and the number of covariates $p$ increases as a function of sample size $n$. In scenario~1, $p = 0.2 \times n$ and we increase $n$. In scenario~2, $p = \floor{2.83 \times \sqrt{n}}$ and we increase $n$. In scenario~3, $p = \floor{7.6 \times \log n}$ and we increase~$n$. The values of the pairs $(p,n)$ used in these scenarios are presented in Table~\ref{table:np}. The constants 0.2, 2.83 and 7.6 are chosen so that $p = 40$ at sample size $n = 200$ in the three scenarios. \setcounter{table}{-1} \begin{table}[H] \centering \caption{Values of $p$ and $n$ in scenarios 1, 2 and 3.} \label{table:np} \begin{tabular}{l|rrrrrrrrrr} \hline $n$ & 200 & 400 & 600 & 800 & 1000 & 1200 & 1400 & 1600 & 1800 & 2000 \\ \hline $p$ in scenario~1 & 40 & 80 & 120 & 160 & 200 & 240 & 280 & 320 & 360 & 400 \\ $p$ in scenario~2 & 40 & 56 & 69 & 80 & 89 & 98 & 105 & 113 & 120 & 126 \\ $p$ in scenario~3 & 40 & 45 & 48 & 50 & 52 & 53 & 55 & 56 & 56 & 57 \\ \hline \end{tabular} \end{table} In scenarios~4 and 5, we still set $\delta = 0$ and either keep $p$ fixed and increase $n$ (scenario~4) or keep $n$ fixed and increase $p$ (scenario~5). Finally, in scenario~6, we keep $n$ and $p$ fixed and challenge the positivity assumptions that $G}%{\bar{G}_{0}$ is bounded away from 0 and 1 by progressively increasing $\delta$. In each scenario and for all estimators, we report in a table the average bias (bias, multiplied by 10), standard error (SE, multiplied by 10) and mean squared error (MSE, multiplied by 100) across the $B=200$ repetitions. 
Specifically, if $\{\phi_{n}^{(b)} : 1 \leq b \leq B\}$ are the $B$ realizations of an estimator of $\psi_{0} = \Psi(\Pi_{0,p,\delta})$ based on $n$ independent draws from $\Pi_{0,p,\delta}$, then we call $\overline{\phi}_{n}^{1:B} \equiv B^{-1} \sum_{b=1}^{B} (\phi_{n}^{(b)} - \psi_{0})$ the average bias, $(B^{-1} \sum_{b=1}^{B} (\phi_{n}^{(b)} - \overline{\phi}_{n}^{1:B})^{2})^{1/2}$ the standard error, and $(\overline{\phi}_{n}^{1:B})^{2} + B^{-1} \sum_{b=1}^{B} (\phi_{n}^{(b)} - \overline{\phi}_{n}^{1:B})^{2}$ the mean squared error. We also represent in a series of figures how MSE, the empirical coverage of the 95\% CIs and their widths evolve as the sample size (scenarios~1 to 4) or number of covariates (scenario~5) or parameter $\delta$ (scenario~6) increase. To ease comparisons, all similar figures share the same $x$- and $y$-axes. \subsection{Results} \label{subsec:results} \subsubsection*{Scenarios~1, 2, and 3: increasing $\boldsymbol{n}$ and setting $\boldsymbol{p = 0.2 n, \floor{2.83 \sqrt{n}}, \floor{7.6 \log n}}$.} \label{sec:sim:scenario:one:two:three} The results of the three simulation studies under scenarios~1, 2 and 3 are best presented and commented upon altogether. Figure~\ref{fig:N} and Table~\ref{table:N} summarize the numerical findings under scenario~1; Figure~\ref{fig:Nsqrt} and Table~\ref{table:Nsqrt} summarize the numerical findings under scenario~2; Figure~\ref{fig:Nlog} and Table~\ref{table:Nlog} summarize the numerical findings under scenario~3. \begin{figure}[p] \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_N.pdf} \caption{MSE for five of the seven estimators. The MSE of the A-IPTW estimator is so large that it does not fit in the picture. 
\textit{MSE is multiplied by 100.} } \label{fig:plot_N} \end{subfigure}\\ \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_N_CI} \caption{Coverage of 95\% CIs based on the double-robust estimators.} \label{fig:plot_N_ci1} \end{subfigure} \hspace{10mm} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_N_CI_len} \caption{Relative width of 95\% CIs based on the double-robust estimators w.r.t. that of the plain TMLE, $\psi_{n,h_{n,\mathrm{CV}}}^{*}$.} \label{fig:plot_N_ci2} \end{subfigure} \caption{\textbf{Scenario~1.} We fix the ratio $p/n = 0.2$, and increase the sample size $n$ from 200 to 2000.} \label{fig:N} \end{figure} \begin{table}[p] \centering \begin{tabular}{l|l|rrrrrrr} \hline $n$ & & $\psi_{n}^{\text{unadj}}$ & $\psi_{n}^{\text{G-comp}}$ & $\psi_{n}^{\text{IPTW}}$ & $\psi_{n}^{\text{A-IPTW}}$ & $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ & \scriptsize{L-C-TMLE} & \scriptsize{LP-C-TMLE} \\\hline 200 & bias & 1.259 & 1.212 & 0.435 & 0.632 & 0.327 & -0.007 & 0.039 \\ & SE & 0.236 & 0.152 & 0.320 & 0.137 & 0.151 & 0.242 & 0.260 \\ & MSE & 1.641 & 1.491 & 0.291 & 0.418 & 0.130 & 0.059 & 0.069 \\ & ratio &&& & 1.414 & 0.882 & 0.577 & 0.466 \\ \hline 1000 & bias & 1.171 & 1.206 & 0.279 & 0.407 & 0.189 & 0.032 & 0.026 \\ & SE & 0.110 & 0.066 & 0.157 & 0.061 & 0.064 & 0.083 & 0.091 \\ & MSE & 1.382 & 1.459 & 0.103 & 0.169 & 0.040 & 0.008 & 0.009 \\ & ratio &&& & 1.765 & 0.969 & 0.882 & 0.632 \\ \hline 2000 & bias & 1.175 & 1.217 & 0.232 & 0.339 & 0.134 & 0.014 & 0.020 \\ & SE & 0.076 & 0.047 & 0.120 & 0.050 & 0.045 & 0.050 & 0.069 \\ & MSE & 1.386 & 1.483 & 0.068 & 0.118 & 0.020 & 0.003 & 0.005 \\ & ratio &&& & 1.666 & 0.959 & 1.062 & 0.626 \\ \hline \end{tabular} \caption{\textbf{Scenario~1.} The performance of each estimator at sample size $n \in \{200, 1000, 2000\}$, with ratio $p/n = 0.2$. The columns named L-C-TMLE and LP-C-TMLE correspond to the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators, respectively. 
Rows \textit{ratio} report the ratios of the average of the SE estimates across the $B$ repetitions to the empirical SE. \textit{Bias and SE are multiplied by 10, and MSE is multiplied by 100.}} \label{table:N} \end{table} \begin{figure}[p] \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_Nsqrt.pdf} \caption{MSE for five of the seven estimators.} \label{fig:plot_Nsqrt} \end{subfigure}\\ \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_Nsqrt_CI} \caption{Coverage of 95\% CIs based on the double-robust estimators. \textit{MSE is multiplied by 100.} } \label{fig:plot_Nsqrt_ci1} \end{subfigure} \hspace{10mm} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_Nsqrt_CI_len} \caption{Relative width of 95\% CIs based on the double-robust estimators w.r.t. that of the plain TMLE, $\psi_{n,h_{n,\mathrm{CV}}}^{*}$.} \label{fig:plot_Nsqrt_ci2} \end{subfigure} \caption{\textbf{Scenario~2.} We increase the sample size $n$ from 200 to 2000 and set $p = \floor{2.83\sqrt{n}}$.} \label{fig:Nsqrt} \end{figure} \begin{table}[p] \centering \begin{tabular}{l|l|rrrrrrr} \hline $n$ & & $\psi_{n}^{\text{unadj}}$ & $\psi_{n}^{\text{G-comp}}$ & $\psi_{n}^{\text{IPTW}}$ & $\psi_{n}^{\text{A-IPTW}}$ & $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ & \scriptsize{L-C-TMLE} & \scriptsize{LP-C-TMLE} \\\hline 200 & bias & 1.242 & 1.173 & 0.469 & 0.614 & 0.322 & 0.014 & 0.018 \\ & SE & 0.221 & 0.157 & 0.278 & 0.135 & 0.156 & 0.226 & 0.217 \\ & MSE & 1.592 & 1.401 & 0.297 & 0.395 & 0.128 & 0.051 & 0.048 \\ & ratio &&& & 1.377 & 0.857 & 0.630 & 0.553 \\ \hline 1000 & bias & 1.216 & 1.214 & 0.271 & 0.361 & 0.126 & 0.003 & 0.019 \\ & SE & 0.104 & 0.068 & 0.184 & 0.077 & 0.064 & 0.076 & 0.074 \\ & MSE & 1.489 & 1.479 & 0.107 & 0.136 & 0.020 & 0.006 & 0.006 \\ & ratio &&& & 1.505 & 0.965 & 0.862 & 0.790 \\ \hline 2000 & bias & 1.192 & 1.214 & 0.201 & 0.274 & 0.080 & 0.007 & 0.018 \\ & SE & 0.075 & 0.051 & 0.140 & 0.061 & 0.051 
& 0.053 & 0.049 \\ & MSE & 1.426 & 1.477 & 0.060 & 0.079 & 0.009 & 0.003 & 0.003 \\ & ratio &&& & 1.488 & 0.870 & 0.898 & 0.904 \\ \hline \end{tabular} \caption{\textbf{Scenario~2.} The performance of each estimator at sample size $n \in \{200, 1000, 2000\}$, with ratio $p = \floor{2.83\sqrt{n}}$. The columns named L-C-TMLE and LP-C-TMLE correspond to the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators, respectively. Rows \textit{ratio} report the ratios of the average of the SE estimates across the $B$ repetitions to the empirical SE. \textit{Bias and SE are multiplied by 10, and MSE is multiplied by 100.}} \label{table:Nsqrt} \end{table} \begin{figure}[p] \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_Nlog.pdf} \caption{MSE for five of the seven estimators. \textit{MSE is multiplied by 100.} } \label{fig:plot_Nlog} \end{subfigure}\\ \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_Nlog_CI} \caption{Coverage of 95\% CIs based on the double-robust estimators.} \label{fig:plot_Nlog_ci1} \end{subfigure} \hspace{10mm} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_Nlog_CI_len} \caption{Relative width of 95\% CIs based on the double-robust estimators w.r.t. 
that of the plain TMLE, $\psi_{n,h_{n,\mathrm{CV}}}^{*}$.} \label{fig:plot_Nlog_ci2} \end{subfigure} \caption{\textbf{Scenario~3.} We increase the sample size $n$ from 200 to 2000 and keep $p= \floor{7.6\log(n)}$.} \label{fig:Nlog} \end{figure} \begin{table}[p] \centering \begin{tabular}{l|l|rrrrrrr} \hline $n$ & & $\psi_{n}^{\text{unadj}}$ & $\psi_{n}^{\text{G-comp}}$ & $\psi_{n}^{\text{IPTW}}$ & $\psi_{n}^{\text{A-IPTW}}$ & $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ & \scriptsize{L-C-TMLE} & \scriptsize{LP-C-TMLE} \\\hline 200& bias & 1.244 & 1.190 & 0.427 & 0.634 & 0.314 & -0.009 & 0.014 \\ & SE & 0.228 & 0.148 & 0.316 & 0.136 & 0.170 & 0.239 & 0.241 \\ & MSE & 1.600 & 1.439 & 0.282 & 0.420 & 0.128 & 0.057 & 0.058 \\ & ratio &&& & 1.363 & 0.777 & 0.598 & 0.502 \\ \hline 1000 & bias & 1.242 & 1.208 & 0.224 & 0.286 & 0.056 & -0.008 & -0.006 \\ & SE & 0.114 & 0.077 & 0.219 & 0.086 & 0.065 & 0.071 & 0.069 \\ & MSE & 1.555 & 1.465 & 0.098 & 0.089 & 0.007 & 0.005 & 0.005 \\ & ratio &&& & 1.544 & 0.995 & 0.992 & 0.894 \\ \hline 2000 & bias & 1.227 & 1.205 & 0.159 & 0.185 & 0.019 & -0.006 & -0.003 \\ & SE & 0.074 & 0.050 & 0.157 & 0.066 & 0.046 & 0.053 & 0.050 \\ & MSE & 1.511 & 1.453 & 0.050 & 0.039 & 0.003 & 0.003 & 0.002 \\ & ratio &&& & 1.552 & 1.023 & 0.978 & 0.942 \\ \hline \end{tabular} \caption{\textbf{Scenario~3.} The performance of each estimator at sample size $n \in \{200, 1000, 2000\}$, with $p = \floor{7.6\log(n)}$. The columns named L-C-TMLE and LP-C-TMLE correspond to the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators, respectively. Rows \textit{ratio} report the ratios of the average of the SE estimates across the $B$ repetitions to the empirical SE. 
\textit{Bias and SE are multiplied by 10, and MSE is multiplied by 100.}} \label{table:Nlog} \end{table} Figures~\ref{fig:plot_N}, \ref{fig:plot_Nsqrt} and \ref{fig:plot_Nlog} reveal a general trend: MSE decreases as sample size $n$ increases, despite the fact that the number of covariates $p$ also increases (at different $n$-rates in each scenario). Overall, LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE perform similarly and better than TMLE; TMLE outperforms IPTW, and IPTW outperforms A-IPTW. Moreover, the gap between LASSO-C-TMLE, LASSO-PSEUDO-C-TMLE on the one hand and TMLE on the other hand \textit{(i)}~reduces as sample size $n$ increases (in each scenario), and \textit{(ii)}~reduces as $p$ decreases (for each sample size $n$, across scenarios). Judging by Tables~\ref{table:N}, \ref{table:Nsqrt} and \ref{table:Nlog}, the unadjusted, G-comp, IPTW and A-IPTW estimators are strongly biased. The TMLE estimator is strongly biased too, even for large sample size $n$, when the number of covariates $p$ is not sufficiently small. Note however that the bias of TMLE vanishes at sample size $n = 2000$ in scenario~2 (then, $p = 126$) and at sample size $n \in \{1000, 2000\}$ (then, $p \in \{52, 57\}$). Double-robustness is in action. In contrast, the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators are both essentially unbiased in all configurations. Tables~\ref{table:N}, \ref{table:Nsqrt} and \ref{table:Nlog} also reveal that the variance of the TMLE estimator tends to be smaller than those of the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators, those last two variances being very similar. Moreover, the gap between them tends to diminish as sample size $n$ increases, in all scenarios. We now turn to Figures~\ref{fig:plot_N_ci1}, \ref{fig:plot_Nsqrt_ci1} and \ref{fig:plot_Nlog_ci1}. The LASSO-C-TMLE estimator performs best in terms of empirical coverage, followed by the LASSO-PSEUDO-C-TMLE, TMLE and A-IPTW estimators, in that order. 
At moderate sample size, the superiority of LASSO-C-TMLE-based CIs over the others is striking. However, even they fail to provide the wished coverage except when sample size $n$ is large (say, larger than $1500$). As a side note, we recall that if $S$ is drawn from the Binomial law with parameter $(B, q) = (200, q)$, then $S \leq 185$ with probability approximately 8\% for $q = 95\%$, 22\% for $q = 94\%$ and 43\% for $q = 93\%$. In this light, an empirical coverage of 7.5\% is not that abnormal for $B = 200$ independent CIs of exact coverage $q = 93\%$, and even $q = 94\%$. Moreover, we anticipated to get conservative CIs because of how we estimate the asymptotic variance of the LASSO-C-TMLE estimator, see Section~\ref{subsec:strategy}. The fact that the ``ratio'' entries of Table~\ref{table:Nlog}, scenario~3, are that close to one for the LASSO-C-TMLE estimator at sample size $n \in \{1000, 2000\}$ (not to mention at sample size $n=2000$ in Table~\ref{table:N}, scenario~1) reveals that little over-estimation of the asymptotic variance is at play. Finally, we see in Figures~\ref{fig:plot_N_ci2}, \ref{fig:plot_Nsqrt_ci2} and \ref{fig:plot_Nlog_ci2} that the CIs based on the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators are systematically slightly wider and slightly narrower than those based on the TMLE estimator, all much narrower than those based on the A-IPTW estimator. \subsubsection*{Scenario~4: keeping $\boldsymbol{p}$ fixed and increasing $\boldsymbol{n}$.} Figure~\ref{fig:fixp} and Table~\ref{table:fixp} summarize the numerical findings under scenario~4, where the number of covariates $p$ is set to 40 and sample size $n$ goes from 200 to 2000 by steps of 200. 
We observe the same trend in Figure~\ref{fig:plot_fixp} as in Figures~\ref{fig:plot_N}, \ref{fig:plot_Nsqrt} and \ref{fig:plot_Nlog}: MSE decreases as sample size $n$ increases; overall, LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE perform similarly and better than TMLE; TMLE outperforms IPTW, and IPTW outperforms A-IPTW. Moreover, the gap between LASSO-C-TMLE, LASSO-PSEUDO-C-TMLE on the one hand and TMLE on the other hand vanishes completely as sample size increases, whereas it only got smaller in scenarios 1, 2, 3. Judging by Table~\ref{table:fixp}, the unadjusted, G-comp, IPTW and A-IPTW estimators are strongly biased whereas the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators are both essentially unbiased even at small sample size $n=200$. The TMLE estimator is strongly biased too at sample size $n=200$, but much less so as $n$ increases, with no bias at all at $n = 2000$. Again, double-robustness is in action. Moreover, there is little if any difference between the LASSO-C-TMLE, LASSO-PSEUDO-C-TMLE and TMLE estimators in terms of bias, SE and MSE for sample size $n \in \{1000, 2000\}$. Figure~\ref{fig:ci:fixp} reveals that, at sample sizes $n \geq 1000$, the empirical coverage of the CIs based on the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators is satisfactory, and that CIs based on the TMLE estimator may provide more coverage than wished. By Table~\ref{table:fixp} (\textit{ratio} rows), the estimation of the actual variance of the LASSO-C-TMLE and TMLE estimator is quite good at sample size $n \in \{1000, 2000\}$. Apparently, the variance of the LASSO-PSEUDO-C-TMLE estimator is under-estimated at sample size $n=1000$, but much better estimated at sample size $n=2000$. \begin{figure}[p] \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_fixp.pdf} \caption{MSE for five of the seven estimators. 
\textit{MSE is multiplied by 100.} } \label{fig:plot_fixp} \end{subfigure}\\ \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_fixp_CI} \caption{Coverage of 95\% CIs based on the double-robust estimators.} \label{fig:ci:fixp} \end{subfigure} \hspace{10mm} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_fixp_CI_len} \caption{Relative width of 95\% CIs based on the double-robust estimators w.r.t. that of the plain TMLE, $\psi_{n,h_{n,\mathrm{CV}}}^{*}$.} \end{subfigure} \caption{\textbf{Scenario~4.} We fix the number of covariates $p=40$, and increase the sample size $n$ from 200 to 2000.} \label{fig:fixp} \end{figure} \begin{table}[p] \centering \begin{tabular}{l|l|rrrrrrrr} \hline $n$ & & $\psi_{n}^{\text{unadj}}$ & $\psi_{n}^{\text{G-comp}}$ & $\psi_{n}^{\text{IPTW}}$ & $\psi_{n}^{\text{A-IPTW}}$ & $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ \scriptsize{L-C-TMLE} & \scriptsize{LP-C-TMLE} \\ \hline 200 & bias & 1.286 & 1.215 & 0.485 & 0.649 & 0.317 & 0.020 & 0.020 \\ & SE & 0.226 & 0.159 & 0.345 & 0.147 & 0.172 & 0.231 & 0.259 \\ & MSE & 1.705 & 1.501 & 0.354 & 0.443 & 0.130 & 0.054 & 0.068 \\ & ratio &&& & 1.283 & 0.761 & 0.594 & 0.469 \\ \hline 1000 & bias & 1.259 & 1.191 & 0.213 & 0.251 & 0.028 & -0.020 & -0.021 \\ & SE & 0.109 & 0.073 & 0.211 & 0.090 & 0.062 & 0.076 & 0.074 \\ & MSE & 1.597 & 1.425 & 0.090 & 0.071 & 0.005 & 0.006 & 0.006 \\ & ratio &&& & 1.490 & 1.062 & 0.947 & 0.866 \\ \hline 2000 & bias & 1.260 & 1.202 & 0.117 & 0.140 & -0.002 & -0.001 & -0.002 \\ & SE & 0.080 & 0.049 & 0.165 & 0.066 & 0.046 & 0.049 & 0.048 \\ & MSE & 1.595 & 1.448 & 0.041 & 0.024 & 0.002 & 0.002 & 0.002 \\ & ratio &&& & 1.673 & 1.062 & 1.014 & 0.974 \\ \end{tabular} \caption{\textbf{Scenario~4.} The performance of each estimator at sample size $n \in \{200, 1000, 2000\}$, with $p = 40$. The columns named L-C-TMLE and LP-C-TMLE correspond to the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators, respectively. 
Rows \textit{ratio} report the ratios of the average of the SE estimates across the $B$ repetitions to the empirical SE. \textit{Bias and SE are multiplied by 10, and MSE is multiplied by 100.}} \label{table:fixp} \end{table} \subsubsection*{Scenario~5: keeping $\boldsymbol{n}$ fixed and increasing $\boldsymbol{p}$.} Figure~\ref{fig:scen5} and Table~\ref{table:scen5} summarize the numerical findings under scenario~5, where the sample size $n$ is set to 1000 and the number of covariates $p$ ranges over $\{50, 75, 100, 150, 200\}$. The take home message of Figure~\ref{fig:plot_ratio} is that, in terms of MSE, the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators outperform the TMLE estimator, which outperforms the IPTW and A-IPTW estimators. Figure~\ref{fig:ci:scen5} further shows that the above message is also valid when considering the empirical coverage of the CIs based on the different estimators. As the number of covariates $p$ increases, all the empirical coverage degrade. However, the CIs based on the LASSO-C-TMLE estimator behave remarkably better than those based on the LASSO-PSEUDO-C-TMLE estimator, which are themselves superior to those based on the TMLE estimator. Examining Table~\ref{table:scen5} helps to better understand the general pattern. The unadjusted, G-comp, IPTW and A-IPTW estimators are too strongly biased to compete. The TMLE estimator performs rather well when the number of covariates $p$ equals 50, like the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators. However, when $p \in \{100, 200\}$, then the TMLE estimator is too biased to compete too -- even double-robustness does not help \textit{yet} at the moderate sample size of $n = 1000$. In contrast, the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators are essentially unbiased, and exhibit relatively small variances (compared to all the variances reported in Tables~\ref{table:N}, \ref{table:Nsqrt}, \ref{table:Nlog}, \ref{table:fixp}. 
Finally, let us note that the estimation of the variance of the LASSO-C-TMLE estimator is rather good (see the \textit{ratio} rows of Table~\ref{table:scen5}), as opposed to that of the variance of the LASSO-PSEUDO-C-TMLE estimator. \begin{figure}[p] \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_ratio.pdf} \caption{MSE for five of the seven estimators. \textit{MSE is multiplied by 100.} } \label{fig:plot_ratio} \end{subfigure}\\ \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_ratio_CI} \caption{Coverage of 95\% CIs based on the double-robust estimators.} \label{fig:ci:scen5} \end{subfigure} \hspace{10mm} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_ratio_CI_len} \caption{Relative width of 95\% CIs based on the double-robust estimators w.r.t. that of the plain TMLE, $\psi_{n,h_{n,\mathrm{CV}}}^{*}$.} \end{subfigure} \caption{\textbf{Scenario~5.} We fix the sample size $n=1000$, and increase the number of covariates $p$ from 50 to 200.} \label{fig:scen5} \end{figure} \begin{table}[p] \centering \begin{tabular}{l|l|rrrrrrr} \hline $p$ & & $\psi_{n}^{\text{unadj}}$ & $\psi_{n}^{\text{G-comp}}$ & $\psi_{n}^{\text{IPTW}}$ & $\psi_{n}^{\text{A-IPTW}}$ & $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ & \scriptsize{L-C-TMLE} & \scriptsize{LP-C-TMLE} \\ \hline 50 & bias & 1.237 & 1.205 & 0.228 & 0.285 & 0.053 & -0.017 & -0.008 \\ & SE & 0.107 & 0.072 & 0.211 & 0.090 & 0.065 & 0.071 & 0.072 \\ & MSE & 1.542 & 1.457 & 0.096 & 0.089 & 0.007 & 0.005 & 0.005 \\ & ratio &&& & 1.472 & 0.983 & 0.998 & 0.864 \\ \hline 100 & bias & 1.179 & 1.199 & 0.252 & 0.357 & 0.130 & 0.005 & 0.013 \\ & SE & 0.102 & 0.069 & 0.167 & 0.073 & 0.071 & 0.072 & 0.075 \\ & MSE & 1.402 & 1.443 & 0.091 & 0.133 & 0.022 & 0.005 & 0.006 \\ & ratio &&& & 1.561 & 0.867 & 0.896 & 0.791 \\ \hline 200 & bias & 1.190 & 1.221 & 0.297 & 0.417 & 0.179 & 0.024 & 0.024 \\ & SE & 0.107 & 0.067 & 0.159 & 0.067 & 0.060 & 0.077 & 0.089 
\\ & MSE & 1.428 & 1.494 & 0.114 & 0.178 & 0.036 & 0.006 & 0.009 \\ & ratio &&& & 1.591 & 1.009 & 0.933 & 0.645 \\ \hline \end{tabular} \caption{\textbf{Scenario~5.} The performance of each estimator at sample size $n = 1000$, with $p \in \{50, 100, 200\}$. The columns named L-C-TMLE and LP-C-TMLE correspond to the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators, respectively. Rows \textit{ratio} report the ratios of the average of the SE estimates across the $B$ repetitions to the empirical SE. \textit{Bias and SE are multiplied by 10, and MSE is multiplied by 100.} } \label{table:scen5} \end{table} \subsubsection*{Scenario 6: keeping $\boldsymbol{n}$ and $\boldsymbol{p}$ fixed and challenging the positivity assumption.} In this scenario, we study how the level of posivitity violation influences the performance of the estimators, at small sample size $n = 100$ and with $p=50$ covariates, by progressively increasing $\delta \in \{0.5 + k/10 : 0 \leq k \leq 15\}$. Figure~\ref{fig:g} illustrates how the positivity violation is challenged. We recover the fact that $\delta \mapsto \Pi_{0,50,\delta} (A = 1|W)$ is increasing. When $\delta = 2$, the law is highly skewed to 1, and the positivity assumption is practically violated. Figures~\ref{fig:plot_positivity}, \ref{fig:CI:scen6}, \ref{fig:ratio:CI:scen6} and Table~\ref{table:positivity} summarize the numerical findings under scenario~6. We see in Figure~\ref{fig:scen6} that, overall, the TMLE estimator is much more affected than the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators by the near violation of the positivity assumption at sample size $n = 500$, and that the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators behave similarly in terms of MSE and empirical coverage. Judging by Table~\ref{table:positivity}, The unadjusted, G-comp, IPTW, A-IPTW and TMLE estimators are too strongly biased to compete with the nearly unbiased LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators. 
The rather poor performance in terms of empirical coverage of the CIs based on the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators may be due to the apparent failure in estimating well their variance (see the \textit{ratio} rows of Table~\ref{table:positivity}). \begin{figure}[p] \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_hist.pdf} \caption{For every $\delta \in \{0.5, 1.2, 2\}$, we simulate $n=1000$ observations $(W_{1}, A_{1}, Y_{1})$, \ldots, $(W_{n}, A_{n}, Y_{n})$ from $\Pi_{0,50,\delta}$, compute $\{\Pi_{0,50,\delta}(A=1 | W=W_i) : 1 \leq i \leq n\}$, and finally plot the corresponding empirical cumulative distribution. } \label{fig:g} \end{subfigure} \hspace{10mm} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_positivity_tmle} \caption{MSE for three of the seven estimators. \textit{MSE is multiplied by 100.} } \label{fig:plot_positivity} \end{subfigure}\\ \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_positivity_CI} \caption{Coverage of 95\% CIs based on the double-robust estimators.} \label{fig:CI:scen6} \end{subfigure} \hspace{10mm} \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{plot_positivity_CI_len} \caption{Relative width of 95\% CIs based on the double-robust estimators w.r.t. that of the plain TMLE, $\psi_{n,h_{n,\mathrm{CV}}}^{*}$.} \label{fig:ratio:CI:scen6} \end{subfigure} \caption{\textbf{Scenario~6.} We fix $n=500, p = 50$, and vary $\delta \in \{0.5 + k/10: 0 \leq k \leq 15\}$. 
As the MSEs for IPTW and A-IPTW are too large, we only plot the MSEs of the plain TMLE $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ and the two collaborative TMLEs to ease comparisons.} \label{fig:scen6} \end{figure} \begin{table}[p] \centering \begin{tabular}{l|l|rrrrrrrr} \hline $\delta$ & & $\psi_{n}^{\text{unadj}}$ & $\psi_{n}^{\text{G-comp}}$ & $\psi_{n}^{\text{IPTW}}$ & $\psi_{n}^{\text{A-IPTW}}$ & $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ & \scriptsize{L-C-TMLE} & \scriptsize{LP-C-TMLE} \\ \hline 1.0 & bias & 1.283 & 1.252 & 0.995 & 0.484 & 0.145 & 0.006 & 0.009 \\ & SE & 0.170 & 0.105 & 0.319 & 0.117 & 0.108 & 0.120 & 0.128 \\ & MSE & 1.675 & 1.579 & 1.092 & 0.248 & 0.033 & 0.014 & 0.017 \\ & ratio &&& & 1.385 & 0.816 & 0.774 & 0.650 \\ \hline 2.0 & bias & 1.391 & 1.340 & 1.620 & 0.625 & 0.185 & 0.053 & 0.063 \\ & SE & 0.223 & 0.142 & 0.409 & 0.154 & 0.143 & 0.168 & 0.186 \\ & MSE & 1.984 & 1.817 & 2.791 & 0.414 & 0.055 & 0.031 & 0.039 \\ & ratio &&&& 1.283 & 0.650 & 0.597 & 0.474 \\ \hline \end{tabular} \caption{\textbf{Scenario~6.} The performance of each estimator at sample size $n=500$, with $p=50$ and $\delta \in \{1,2\}$. The columns named L-C-TMLE and LP-C-TMLE correspond to the LASSO-C-TMLE and LASSO-PSEUDO-C-TMLE estimators, respectively. Rows \textit{ratio} report the ratios of the average of the SE estimates across the $B$ repetitions to the empirical SE. \textit{Bias and SE are multiplied by 10, and MSE is multiplied by 100.} } \label{table:positivity} \end{table} \section{Secondary simulation study: LASSO-C-TMLE as a fine-tuning procedure} \label{sec:transfer} In this shorter section, we describe a second, less ambitious simulation study. Its aim is to evaluate the interest in using the LASSO-C-TMLE procedure as a fine-tuning procedure. 
Specifically, we wish to investigate, in the same context as in Section~\ref{sec:experiments}, how the rivals of the LASSO-C-TMLE estimator that also rely on the estimation of $\bar{G}_{0}$ (\textit{i.e.}, the IPTW, A-IPTW, TMLE and LASSO-PSEUDO-C-TMLE estimators) perform when they are provided with the estimator $\bar{G}_{n,h_{n,\kappa_{n}}}$ indexed by the data-adaptive, targeted, fine-tuning parameter $h_{n,\kappa_{n}}$. We thus choose to repeat independently $B = 200$ times the following steps: for each number of covariates $p \in \{100,200\}$, \begin{enumerate} \item simulate a data set of $n = 1000$ independent observations drawn from $\Pi_{0,p,0}$; \item derive the LASSO-C-TMLE estimator of Section~\ref{sec:lasso:ctmle}; \item derive the LASSO-PSEUDO-C-TMLE estimator of Section~\ref{sec:lasso:pseudo:ctmle} as well as the competing IPTW, A-IPTW and TMLE estimators \textit{exactly} as presented in Section~\ref{subsec:compet}, \textit{and also using $\bar{G}_{n,h_{n,\kappa_{n}}}$ in place of $\bar{G}_{n,h_{n}}$ (LASSO-PSEUDO-C-TMLE) and $\bar{G}_{n,h_{n,\mathrm{CV}}}$ (the others)}. \end{enumerate} The results are reported in Table~\ref{tab:transfer}. A clear pattern emerges from Table~\ref{tab:transfer}: the bias is systematically reduced when using $\bar{G}_{n,h_{n,\kappa_{n}}}$ in place of $\bar{G}_{n,h_{n,\mathrm{CV}}}$ or $\bar{G}_{n,h_{n}}$. Nevertheless, the MSE of the IPTW estimator increases, with a two-fold increase when the number of covariates is $p = 200$. In contrast, the A-IPTW estimator benefits more from the substitution, with a stark decrease of the MSE on top of that of the bias, the latter still being far too large. This makes it all the more remarkable that the TMLE estimator greatly benefits from the substitution on all fronts, bias and MSE. By contrast, the benefit for the LASSO-PSEUDO-C-TMLE estimator is not convincing.
In summary, $\bar{G}_{n,h_{n,\kappa_{n}}}$ is targeted even ``out of context'', \textit{i.e.}, even when it is used to build a plain TMLE estimator as opposed to the full-fledged C-TMLE estimator. \begin{table}[p] \centering \begin{tabular}{l|l|rrrrrr} \cline{1-6} $p$ & & $\psi_{n}^{\text{IPTW}}$ & $(\psi_{n}^{\text{IPTW}})'$ & $\psi_{n}^{\text{A-IPTW}}$ & $(\psi_{n}^{\text{A-IPTW}})'$ \\ \cline{1-6} 100 & bias & 0.252 & 0.078 & 0.357 & 0.151 \\ & SE & 0.167 & 0.325 & 0.073 & 0.167 \\ & MSE & 0.091 & 0.112 & 0.133 & 0.050 \\ 200 & bias & 0.297 & 0.106 & 0.417 & 0.152 \\ & SE & 0.159 & 0.480 & 0.067 & 0.150 \\ & MSE & 0.114 & 0.242 & 0.178 & 0.045 \\ \hline \hline $p$ & & $\psi_{n,h_{n,\mathrm{CV}}}^{*}$ & $\psi_{n,h_{n,\kappa_{n}}}^{*}$ & \scriptsize{L-C-TMLE} & \scriptsize{LP-C-TMLE} & \scriptsize{LP-C-TMLE}$'$ \\ \hline 100 & bias & 0.130 & 0.017 & 0.005 & 0.013 & -0.000 \\ & SE & 0.071 & 0.101 & 0.072 & 0.075 & 0.094 \\ & MSE & 0.022 & 0.011 & 0.005 & 0.006 & 0.009 \\ \hline 200 & bias & 0.179 & -0.037 & 0.024 & 0.024 & -0.072 \\ & SE & 0.060 & 0.140 & 0.077 & 0.089 & 0.148 \\ & MSE & 0.036 & 0.021 & 0.006 & 0.009 & 0.027 \\ \hline \end{tabular} \caption{\textbf{Using LASSO-C-TMLE as a fine-tuning procedure.} The performance of each estimator at sample size $n=1000$ with $p \in \{100, 200\}$. The prime symbol indicates the use of $\bar{G}_{n,h_{n,\kappa_{n}}}$ as an estimator of $\bar{G}_{0}$ in place of $\bar{G}_{n,h_{n,\mathrm{CV}}}$ or $\bar{G}_{n,h_{n}}$.
\textit{Bias and SE are multiplied by 10, and MSE is multiplied by 100.}} \label{tab:transfer} \end{table} \section{Discussion} \label{sec:discuss} We study the inference of the value of a smooth statistical parameter at a law $P_{0}$ from which we sample $n$ independent observations, in situations where \textit{(i)} we rely on a machine learning algorithm fine-tuned by a real-valued parameter $h$ to estimate the $G$-component $\bar{G}_{0}$ of $P_{0}$, possibly consistently, and \textit{(ii)} the product of the rates of convergence of the estimators of the $Q$- and $G$-components of $P_{0}$ to their targets may be slower than the convenient $o(1/\sqrt{n})$. A plain TMLE with an $h$ chosen by cross-validation would typically not lend itself to the construction of a CI, because the selection of $h$ would trade off its empirical bias with something akin to the empirical variance of the estimator of $\bar{G}_{0}$ as opposed to that of the TMLE. We develop a collaborative TMLE procedure that succeeds in achieving the relevant trade-off: under high-level empirical processes conditions, and if there exists an oracle $h$ that makes a bulky remainder term asymptotically Gaussian, then the C-TMLE is asymptotically Gaussian hence amenable to building a CI provided that its asymptotic variance can be estimated too. The construction of the C-TMLE and the main result about its empirical behavior are illustrated with the inference of the average treatment effect, both theoretically and numerically. In the simulation study, the $G$-component is estimated by the LASSO, and $h$ is the bound on the $\ell^{1}$-norm of the candidate coefficients. Overall, the resulting LASSO-C-TMLE estimator is superior to all its competitors, including a plain TMLE estimator. Evaluated in terms of empirical bias, standard error, mean squared error and coverage of CIs, the superiority is striking in small and moderate sample sizes.
It is also strong when the number of covariates increases, or when the positivity assumption is increasingly challenged, thus making the inference task progressively even more delicate. The simulation study suggests that the CIs based on the C-TMLE do not provide the desired coverage, especially in small sample sizes. Obviously, this may be explained by the need for the C-TMLE estimator to reach its asymptotic regime. More subtly, this may also be related to high-level assumption \textbf{A4}, which states the existence of an oracle $h$ making a bulky remainder term asymptotically Gaussian. The assumption may fail to hold in practice. We will devote future research to better understanding \textbf{A4} and finding strategies to avoid relying on it. In conclusion, we believe that the present study further demonstrates the high versatility and potential of the collaborative targeted minimum loss estimation methodology. For (relative) simplicity, we focused on the inference of a smooth, real-valued statistical parameter from independent and identically distributed observations, assuming that the machine learning algorithm is fine-tuned by a real-valued parameter. Our instantiation of the collaborative targeted minimum loss estimation methodology can be extended to other statistical parameters, sampling schemes, and fine-tuning of machine learning algorithms.
\section{Introduction} Measurements of the diffuse ultraviolet (UV) radiation field have to contend with a number of contaminating sources including atmospheric emission lines and the zodiacal light \citep{murthy2009}. These foreground sources are particularly important at high galactic latitudes where the Galactic contribution to the radiation field is relatively small and at longer wavelengths where the zodiacal light, which follows the solar spectrum, becomes increasingly important. It has been difficult to disentangle these components, largely because of a lack of relevant observations. Ideally, these would be spectroscopic observations with moderate resolutions over a large part of the sky with different sun angles. However, what we have is thousands of observations from the Galaxy Evolution Explorer (\textit{GALEX}) in two bands (FUV: 1531 \AA\ and NUV: 2321 \AA) with observations far from the Sun to minimize foreground emission. Despite these drawbacks, we have used the \textit{GALEX}\ data to derive empirical formulae for the foreground sources. Although our main interest is in better understanding the galactic and extragalactic diffuse radiation, we hope that our results will also prove useful in studies of the Earth's atmosphere and of the zodiacal light. They will certainly prove useful in mission planning for other space-borne instruments such as the Ultraviolet Imaging Telescope \citep{kumar2012}, which will observe the sky with large field of view instruments for which the diffuse radiation limits the observable sky. \section{Observations \& Data} \subsection{Observations} The \textit{GALEX}\ spacecraft was launched in 2003 and has since observed about 75\% of the sky in two spectral bands (FUV: 1531 \AA\ and NUV: 2321 \AA) with a spatial resolution of about 5\arcsec\ over a field of view of 0.6\degr. The primary mission was described by \citet{martin2005} and the software and data products by \citet{morrissey2007}.
Most of the \textit{GALEX}\ observations were short exposures of about 100 seconds in length (All-Sky Imaging Survey: AIS) but there were a number of longer observations of 10,000 seconds or more, either to fulfill specific mission objectives or taken as part of the Guest Investigator (GI) program. A single exposure was limited by the duration of the orbital night (about 1000 seconds) and longer observations were broken up into a series of exposures spread over a time period ranging from days to years. \textit{GALEX}\ observations were subject to severe selection effects due to brightness related constraints from the diffuse radiation integrated over the large field of view. Thus the Galactic plane and other high intensity regions such as Orion or the Magellanic Clouds could not be observed in the original mission. Recent observations have covered many of these but they will not help in refining the foreground because of the brightness of the astrophysical emission. Observations were only taken at orbital night between 20:00 and 04:00 (local time) and only in directions more than 90\degr\ from the Sun to minimize airglow and zodiacal light. Hence, we only sampled a limited part of the total phase space of observational parameters. The diffuse background is the sum of the galactic background, which depends only on the look direction, and the foreground emission --- airglow and zodiacal light --- which depends on the time and date of the observation. The standard data products include a single image of the targeted field for each of the two bands (the FUV detector failed in May, 2009 following which observations were made only with the NUV detector) and a merged catalog of point sources from both bands. Because we are trying to derive the foreground emission which is a function of time and date of the observation, we used the spacecraft housekeeping files (scst files) which included the total count rate (TEC: Total Event Count) in each of the two detectors. 
The TEC was tabulated every second and was tied to the UT (universal time) of the observation. A typical TEC is plotted with respect to UT in Figure \ref{fig:tec_counter}. \begin{figure}[t] \includegraphics[width=\columnwidth]{murthy_fig1.eps} \caption{\label{fig:tec_counter}The Total Event Counter (TEC) as a function of UT. The minimum emission was at local (spacecraft) midnight.} \end{figure} As implied by its name, the TEC includes all emission in the field of view including starlight, diffuse background, airglow, and zodiacal light and each element had to be estimated separately. However, only the airglow would be expected to vary with the time of day and, to anticipate our results, specifically with the time from local midnight. Unfortunately, this was not readily available from the \textit{GALEX}\ data products and we obtained the TLEs (two-line elements) from Space-Track.org (\verb+https://www.space-track.org/+) and then used STK (\verb+http://www.agi.com/default.aspx+) to calculate the latitude and longitude of the spacecraft ground track at a given UT. From this we calculated the local spacecraft time and cross-indexed with the TEC to obtain the total emission as a function of time. As a result of this rather painful experience, we would recommend that future missions include the local spacecraft coordinates as part of their standard data products. There were about 34,000 observations in each of the FUV and NUV bands in the \textit{GALEX}\ GR6 data release with close to 76,000 scst files and 14,000,000 individual TEC measurements in each band. Note that the longer exposures were divided into multiple exposures and hence multiple scst files. \subsection{Airglow} Naturally enough, most observations of the airglow from space have been downward looking and have found a number of different atomic and molecular lines (reviewed by \citet{meier1991}). There are many fewer observations looking up from low-Earth orbit (LEO). 
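The local-time determination described above (the ground-track longitude from the TLEs, cross-indexed with the UT of each TEC sample) reduces to a simple longitude shift of UT. A minimal sketch of the conversion follows; it is our own illustration, not the mission pipeline, and it ignores the equation of time:

```python
def local_solar_time(ut_hours, longitude_deg):
    """Approximate local mean solar time at the sub-satellite point.

    ut_hours      -- universal time in decimal hours
    longitude_deg -- ground-track longitude in degrees, east positive

    The Earth rotates 15 degrees per hour, so local time is UT
    shifted by longitude / 15, wrapped into [0, 24).
    """
    return (ut_hours + longitude_deg / 15.0) % 24.0
```

For example, a point at longitude 90\degr\ east at 12:00 UT is at 18:00 local time.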
The only emission lines observable in the night spectrum are the geocoronal O I lines at 1304 \AA\ (0.013 kR) and 1356 \AA\ (0.001 kR) in the FUV band and 2471 \AA\ ($< 0.001$ kR) in the NUV band \citep{morrison1992, feldman1992, boffi2007}. The \textit{GALEX}\ FUV band rejected the 1304 \AA\ line but with a 10\% leak \citep{morrissey2007} and these values corresponded to expected levels of about 200 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ in the FUV band and 100 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ in the NUV band. As mentioned above, we have adopted an empirical approach in studying the foreground emission. We noted that each observation could be separated into two parts: a minimum value at orbital midnight and a time-variable part which increased smoothly on either side of orbital midnight (Figure \ref{fig:tec_counter}). The only possible source for a signal that varies with the local time is airglow and we extracted this component of the foreground emission by subtracting a baseline calculated from the average of the points within 15 minutes of local midnight, assuming that the observation included this time span. This baseline comprises all other sources in the field, including any residual airglow emission at midnight. \begin{figure*}[t] \includegraphics[width=\columnwidth]{murthy_fig2a.eps} \includegraphics[width=\columnwidth]{murthy_fig2b.eps} \caption{Distribution of FUV (a) and NUV (b) baseline-subtracted airglow. Plus signs (+) show the peak of the distribution and the contour lines represent the 1$\sigma$ limits on the level of the airglow. The dark line is the parametrized fit to the peak airglow.} \label{fig:ag_density} \end{figure*} There were approximately 5.8 million independent points in the FUV channel and 6.6 million in the NUV channel with an overlap of about 5 million points. We gridded the baseline-subtracted data to form a density plot for each band and these are shown in Figure \ref{fig:ag_density}.
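The baseline-subtraction step described above can be sketched as follows. This is a simplified illustration of our own, assuming the time axis has already been converted to hours from local midnight:

```python
def subtract_baseline(times, tec, window=0.25):
    """Remove the orbital-midnight baseline from a TEC time series.

    times  -- hours from local midnight (negative before midnight)
    tec    -- total event counts, same length as times
    window -- half-width in hours of the midnight window; 0.25 h = 15 min

    Returns the baseline (mean TEC within the midnight window) and the
    baseline-subtracted series; raises ValueError if the observation
    does not span local midnight, as required by the method.
    """
    near = [c for t, c in zip(times, tec) if abs(t) <= window]
    if not near:
        raise ValueError("observation does not include local midnight")
    baseline = sum(near) / len(near)
    return baseline, [c - baseline for c in tec]
```

The residual series then contains only the time-variable airglow, while the baseline carries the starlight, the astrophysical diffuse background, the zodiacal light (NUV only) and any residual midnight airglow.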
These plots were created by gridding the data into bins of 10 minutes in time and 10 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ in flux. The plus signs show the peak airglow at a given time and the 1$\sigma$ contour is shown as the closed line; that is, 68\% of the data points fall within the two contour lines between the observation limits of 20:00 and 04:00. The dark lines are the best fit quadratics to the airglow emission in each band with the equations: $$\mathrm{FUV} = -7.97 - 1.91\,t + 6.22\,t^{2} \eqno(1)$$ $$\mathrm{NUV} = -3.94 - 2.25\,t + 6.88\,t^{2} \eqno(2)$$ where FUV and NUV are the total flux in the respective bands in units of photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ and t is the time from midnight in hours. If the observation stretches the entire 8 hours, the maximum for a \textit{GALEX}\ observation, the airglow contribution, averaged over the observation, would be 25 and 33 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ in the FUV and NUV bands, respectively, with an estimated error of about 20 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$, primarily due to the difficulty of measuring the baseline. Note that the time measured is time on the ground track where 8 hours corresponds to about 20 minutes of actual spacecraft time. There is a strong correlation between the FUV and NUV data (r = 0.745) with NUV = 0.7 FUV, consistent with the origin of both in geocoronal O I lines. \subsection{Zodiacal Light} The zodiacal light is due to sunlight scattered by interplanetary dust in the UV and visible and thermal emission from the dust in the infrared, with a black body temperature of about 60 K. It has been observed extensively in the visible from ground-based observations (reviewed by \citet{leinert1998}) and in the infrared \citep{kelsall1998} from the \textit{Infrared Astronomy Satellite} (\textit{IRAS}) mission. It is generally assumed that the spectrum of the zodiacal light follows the Solar spectrum; that is, the color of the zodiacal light is unity.
However, there have been very few observations in the ultraviolet with the most robust being those of \citet{murthy1989} who claimed that the color of the zodiacal light increased with the ecliptic angle. In any case, the zodiacal light will only contribute in the NUV band. We further restricted our dataset to those observations which included both FUV and NUV data and where the observations extended at least 15 minutes on either side of orbital midnight such that a baseline could be robustly defined. There were 9313 such observations and we extracted the baseline value for each. This baseline value at orbital midnight includes, as mentioned above, starlight, the galactic background, airglow and, in the NUV band only, the zodiacal light. The \textit{GALEX}\ pipeline produces a merged catalog for each observation in which the fluxes of the point sources in each band are tabulated from which we could estimate and subtract the contribution due to starlight in each band. These data --- the baseline value at midnight from which the starlight has been subtracted --- form the basis of our further analysis and the NUV data (including residual airglow; diffuse astrophysical background; and zodiacal light) are plotted as a function of sun angle in Figure \ref{fig:nuv_baseline}. There is, of course, considerable scatter reflecting the range in galactic latitude spanned by the observations but a lower envelope to the values can be cleanly drawn with the rise on the right due to locations at low ecliptic latitudes. \begin{figure}[t] \includegraphics[width=\columnwidth]{murthy_fig3.eps} \caption{Distribution of NUV TEC values at local midnight with the starlight subtracted. 
Airglow is responsible for the rise in the values on the left and zodiacal light for the rise on the right.} \label{fig:nuv_baseline} \end{figure} Zodiacal light does contribute significantly to the NUV channel and is strongly dependent on the ecliptic latitude (Figure \ref{fig:zod_with_ecliptic}) and the helioecliptic longitude (ecliptic longitude minus the longitude of the Sun). Because of operational constraints on \textit{GALEX}\ observations, all the data were taken at large helioecliptic longitudes (Figure \ref{fig:obs_with_earth}) and, in fact, there was little variation with longitude. \citet{leinert1998} have tabulated observations of the zodiacal light in the optical and, as a first estimate, we have assumed that the zodiacal light in the UV follows the optical distribution with a spectral correction given by the solar spectrum (e.g., \citet{colina1996}). The predicted zodiacal light tracks the observed TEC well (Figure \ref{fig:zl_obs_mod}) with the formula $\mathrm{TEC} = 690 + 0.65\,\mathrm{ZL}$, where ZL is the zodiacal light from the visible observations by \citet{leinert1998}. The baseline is the zero level representing the residual airglow and the diffuse galactic light while the slope represents the color of the zodiacal light with respect to the visible --- the scattered light in the UV is 65\% that in the optical. Unlike \citet{murthy1989}, we find no dependence of the color on the ecliptic latitude. Although a detailed study of the zodiacal dust is beyond the scope of this work, this suggests that the albedo (reflectivity) of the interplanetary dust grains in the UV is 65\% of the albedo in the visible, in rough agreement with results for interstellar dust \citep{draine2003}.
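The linear relation between the observed NUV TEC and the visible-band zodiacal light prediction is simple to apply; a direct transcription of the fit quoted above (units as in the text):

```python
def predicted_nuv_tec(zl_visible):
    """NUV TEC predicted from the visible-band zodiacal light of
    Leinert et al. (1998): TEC = 690 + 0.65 * ZL.

    The intercept (690) is the zero level, i.e. the residual airglow
    plus the diffuse galactic light at midnight; the slope (0.65) is
    the colour of the zodiacal light with respect to the visible.
    """
    return 690.0 + 0.65 * zl_visible
```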
\begin{figure}[t] \includegraphics[width=\columnwidth]{murthy_fig4.eps} \caption{\label{fig:zod_with_ecliptic}NUV TEC as a function of ecliptic latitude.} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth]{murthy_fig5.eps} \caption{\label{fig:obs_with_earth}Observations as a function of helioecliptic longitude and latitude.} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth]{murthy_fig6.eps} \caption{\label{fig:zl_obs_mod}Observed NUV TEC as a function of predicted zodiacal light.} \end{figure} \subsection{Airglow Redux} \begin{figure*}[t] \includegraphics[width=\columnwidth]{murthy_fig7a.eps} \includegraphics[width=\columnwidth]{murthy_fig7b.eps} \caption{Distribution of FUV (a) and NUV (b) TEC values at local midnight. The starlight has been subtracted in each case and, further, the zodiacal light was subtracted in the NUV case. The dark line is the parametrized lower envelope to the TEC.} \label{fig:ag_baseline} \end{figure*} We have subtracted starlight from both bands and zodiacal light from the NUV band and plotted the dependence of these on the Sun angle in Figure \ref{fig:ag_baseline}. Note that the rise in data values on the right side of Figure \ref{fig:nuv_baseline} is no longer seen, indicating that it is, indeed, due to zodiacal light. We have empirically defined a lower envelope to the FUV and NUV TEC by the equations $$\mathrm{FUV} = 2000\,e^{-SA^{2}} + 520 \eqno(3)$$ $$\mathrm{NUV} = 2000\,e^{-SA^{2}} + 650 \eqno(4)$$ where SA is the angle from the Sun in radians and FUV and NUV are the respective TEC values in units of photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$. The variable component must be due to the airglow in the FUV, where zodiacal light will not contribute, and hence is likely to be from the airglow in the NUV also, where it follows the same dependence on Sun angle. However, there is no way to independently separate the astrophysical component from the baseline airglow.
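The empirical parametrisations, equations (1)--(2) for the time-variable airglow and (3)--(4) for the Sun-angle envelopes, are straightforward to transcribe; a minimal sketch (units of photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ throughout, function names ours):

```python
import math

def fuv_airglow(t):
    """Eq. (1): time-variable FUV airglow; t in hours from local midnight."""
    return -7.97 - 1.91 * t + 6.22 * t ** 2

def nuv_airglow(t):
    """Eq. (2): time-variable NUV airglow; t in hours from local midnight."""
    return -3.94 - 2.25 * t + 6.88 * t ** 2

def fuv_envelope(sa):
    """Eq. (3): lower envelope of the FUV TEC; sa = Sun angle in radians."""
    return 2000.0 * math.exp(-sa ** 2) + 520.0

def nuv_envelope(sa):
    """Eq. (4): lower envelope of the NUV TEC; sa = Sun angle in radians."""
    return 2000.0 * math.exp(-sa ** 2) + 650.0
```

As a consistency check, the average of eq.\ (1) over a full $t \in [-4, 4]$ hour span is $-7.97 + 6.22 \times 16/3 \approx 25$, matching the 8-hour FUV contribution quoted in the text.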
If we take recourse to the canonical value of 300 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ for the diffuse background \citep{henry2010}, we find that the FUV airglow is 220 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ at midnight and the NUV airglow is 350 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$. \section{Conclusions} We have empirically derived predictions for the foreground emission in the two \textit{GALEX}\ bands. There are two components to the airglow: one which is dependent on the local time and is symmetrical around local midnight; and the other which is dependent on the angle between the target and the Sun. We find that the zodiacal light is proportional to that in the visible with a color of 0.65. We are left with an ambiguity in the baseline at the 100 - 200 photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$\ level which will require spectroscopic information to disentangle. \bibliographystyle{spr-mp-nameyear-cnd}
\section{Introduction} In this paper we examine in detail the notion of `equi-homogeneity', which is a regularity property of sets introduced by Olson, Robinson \& Sharples \cite{ORS} to study the Assouad dimension of products of `generalised Cantor sets', i.e.\ Cantor sets in which we allow the portion removed to vary at each stage of the construction. Generalised Cantor sets provide simple examples of equi-homogeneous sets $C\subset {\mathbb R}$ whose lower box-counting, upper box-counting, and Assouad dimensions can take arbitrary values satisfying \[ \dim_{\rm LB}C\leq \dim_{\rm B}C\leq \dim_{\rm A}C. \] In our previous paper \cite{ORS} we demonstrated that a large class of `homogeneous' Moran sets, which include the generalised Cantor sets, are equi-homogeneous. Roughly, the equi-homogeneity property means that at each length-scale the number of balls required in each `local cover' of the set is equal up to some constant factor that is uniform across all length-scales. For example, it can be shown that the largest local cover of a generalised Cantor set has cardinality at most 6 times that of the smallest local cover at the same length scale (see Olson, Robinson \& Sharples \cite{ORS}). Generalised Cantor sets are also examples of attractors of non-autonomous iterated function systems. In this paper we extend the regularity results of Olson, Robinson \& Sharples \cite{ORS} to a natural class of attractors of both autonomous and non-autonomous iterated function systems of contracting similarities. These regularity results are useful as pullback attractors can exhibit dimensionally different behaviour at different length scales and are therefore not, in general, Ahlfors-David regular (discussed in Section 3; see also Heinonen \cite{Heinonen} or Mackay \& Tyson \cite{MikeTyson}). We discuss how equi-homogeneity relates to other notions of regularity of sets.
In particular, we demonstrate that equi-homogeneity is weaker than Ahlfors-David regularity (recalled in Definition \ref{definition - Ahlfors-David}, below), which is the content of Theorem \ref{theorem - ahlfors implies equih}. In this theorem we also prove that the lower Assouad, Hausdorff, packing, lower box-counting, upper box-counting, and Assouad dimensions coincide for Ahlfors-David regular sets. A weaker version of this result (omitting the lower Assouad dimension equality) is regarded as mathematical ``folklore'' (this result is stated in Corollary 3.2 of Farkas and Fraser \cite{FarkasFraser14} and Proposition 2.1 of Tyson \cite{Tyson08}, and essentially follows from (6.6) of Luukkainen \cite{Luuk}). We further demonstrate that the equi-homogeneity property is distinct from any previously defined notion of dimensional equivalence: the generalised Cantor sets provide examples of equi-homogeneous sets with unequal dimensions (see Olson, Robinson and Sharples \cite{ORS}). Conversely, in Proposition \ref{proposition - equal dimension not equih} we give an example of a set that is not equi-homogeneous yet has coinciding dimensions. These results establish equi-homogeneity as a widely applicable and useful notion of regularity for fractals and particularly for attractors of iterated function systems. In contrast, other notions of regularity such as Ahlfors-David regularity or dimensional equality are too restrictive in these contexts as even simple examples of generalised Cantor sets may not have these regularity properties. This paper is organised as follows: in the remainder of the introduction we recall the definitions and properties of pullback attractors and generalised Cantor sets required for our main results, which we summarise at the end of this section. In Section 2 we recall the notions of dimension and regularity that we will use in the remainder.
In Section 3 we define equi-homogeneity and discuss the relationship between various notions of dimension, equi-homogeneity, and other regularity properties of sets. In Section 4 we establish existence and uniqueness results for pullback attractors of non-autonomous iterated function systems before proving our main results that a large class of these pullback attractors are equi-homogeneous. \subsection{Pullback attractors} Pullback attractors (see Carvalho, Langa \& Robinson~\cite{Carvalho2013}, Cheban et al. \cite{Cheban2002}, Kloeden \cite{Kloedendifference}, \cite{Kloedensemidynamical}, Kloeden \& Rasmussen \cite{KR}, Kloeden \& Stonier \cite{KloedenStonier1998}, Schmalfu\ss\ \cite{Schmalfuss1992}, for example) were introduced to characterise the possible states at time $t$ of a non-autonomous dissipative continuous dynamical system once all previous initial conditions have been forgotten infinitely far in the past. Given an initial value problem of the form \[ \frac{{\rm d} u}{{\rm d} t} = f(t,u),\qquad u(t_0)=u_0, \] define a semi-process $S(t,t_0)$ for $t\ge t_0$ that maps initial values $u_0$ to their subsequent time evolution by \[ S(t,t_0)(u_0)=u(t)\wwords{for} t\ge t_0. \] The pullback attractor is the unique collection of uniformly bounded compact sets $A^t$ for $t\in{\mathbb R}$ such that $A^t=S(t,t_0)A^{t_0}$ for all $t\ge t_0$ and \[ \rho_H(S(t,t_0)(B),A^t)\to 0\wwords{as} t_0\to-\infty \] for all bounded sets $B$. Here $\rho_H(X,Y)$ is the Hausdorff semi-distance \[ \rho_H(X,Y)=\sup\limits_{x\in X} \inf\limits_{y\in Y} |x-y|. \] It is straightforward to adapt the idea of a pullback attractor to study iterated function systems whose maps change at each step in the iteration.
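For finite sets the Hausdorff semi-distance $\rho_H$ defined above reduces to a max-min computation, which is convenient for numerical experiments; a minimal sketch of our own, in one dimension:

```python
def semi_distance(X, Y):
    """Hausdorff semi-distance rho_H(X, Y) = sup_{x in X} inf_{y in Y} |x - y|
    between finite sets of reals.  Note the asymmetry: rho_H(X, Y) is
    small whenever every point of X is close to some point of Y."""
    return max(min(abs(x - y) for y in Y) for x in X)
```

For example, `semi_distance([0], [0, 1])` is 0 while `semi_distance([0, 1], [0])` is 1, which is why attraction is stated with the attractor in the second argument.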
A further generalisation, in which the compositions of maps in an iterated function system are indexed by an infinite tree, arises, for example, from the use of the squeezing property to estimate the upper box-counting dimension of the global attractor of an autonomous dissipative continuous dynamical system (see Eden et al.\ \cite{EFNT}). In order to keep our notation simple and our presentation self-contained we do not consider iterated function systems indexed by trees here. To set notation for the rest of this paper we now describe our non-autonomous iterated function systems. For each $i\in{\mathbb N}$ let $f_i\colon{\mathbb R}^d\to{\mathbb R}^d$ be a contraction with ratio $\sigma_i\in(0,1)$. Thus \begin{equation}\label{contract} |f_i(x)-f_i(y)|\le \sigma_i |x-y| \wwords{for all} i\in{\mathbb N}. \end{equation} We say $f_i$ is a similarity when the above inequality is, in fact, an equality. For each $k\in{\mathbb N}$ let ${\cal I}_k\subset{\mathbb N}$ be an index set with ${\rm card}\left({\cal I}_{k}\right)<\infty$. Given $B\subseteq{\mathbb R}^d$ and $k<l$ define \[ {\cal S}^{k,l}(B)={\cal S}^k\circ\ldots\circ{\cal S}^{l-1}(B) \wwords{and} {\cal S}^{k,k}(B)=B \] where ${\cal S}^k(B)=\bigcup_{i\in{\cal I}_{k+1}} f_i(B).$ We note that ${\cal S}$ has the process structure \[ {\cal S}^{k,l}\circ{\cal S}^{l,m}={\cal S}^{k,m} \wwords{for} k\le l\le m. \] In particular, ${\cal S}^{k,l}$ is the discrete-time analogue of the continuous process operator $S(t,t_0)$, with the identification $t=-k$ and $t_0=-l$. Since pullback attractors are obtained in the limit $t_0\to-\infty$, the switching of signs between $l$ and $t_0$ allows us the convenience of working with positive indices throughout. This change of sign also results in a notation that is more consistent with the natural notation for autonomous iterated function systems, where the functions are the same at every step.
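The action of the operators ${\cal S}^{k,l}$ can be illustrated numerically. The sketch below (our own illustration, not from the paper) takes ${\cal I}_k=\{1,2\}$ at every step, with the two similarities $f_1(x)=c_{k+1}x$ and $f_2(x)=c_{k+1}x+(1-c_{k+1})$ for ratios $c_{k+1}\in(0,1/2)$; then ${\cal S}^{0,n}([0,1])$ is exactly the level-$n$ set $C_n$ of the generalised Cantor construction recalled in Section \ref{GCS}:

```python
def ifs_step(intervals, c):
    """Apply S(B) = f1(B) union f2(B), with f1(x) = c*x and
    f2(x) = c*x + (1 - c), c in (0, 1/2), to a finite union of
    closed intervals represented as (a, b) pairs."""
    out = []
    for a, b in intervals:
        out.append((c * a, c * b))                      # f1 image
        out.append((c * a + (1 - c), c * b + (1 - c)))  # f2 image
    return out

def pullback_level(cs):
    """Compute S^{0,n}([0,1]) = S^0 o ... o S^{n-1}([0,1]) for the
    contraction ratios cs = [c_1, ..., c_n], where S^k uses ratio
    c_{k+1}.  Since S^{n-1} acts first, the ratios are applied in
    reverse order; the result is the level-n set C_n of the
    generalised Cantor construction, returned as sorted intervals."""
    intervals = [(0.0, 1.0)]
    for c in reversed(cs):
        intervals = ifs_step(intervals, c)
    return sorted(intervals)
```

Iterating with a constant ratio $c=1/3$ recovers the usual middle-thirds Cantor construction, while varying the ratios gives the generalised Cantor sets of Section \ref{GCS}.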
\begin{definition}\label{definition - pullback attractor} A pullback attractor of a non-autonomous iterated function system is a collection of sets $\set{F^k}$ for $k\in{\mathbb N}_0={\mathbb N}\cup\{0\}$ such that \begin{enumerate} \item each $F^k$ is compact and uniformly bounded, \label{pullback attractor property 1} \item the collection $\set{F^k}$ is invariant, in the sense that $ F^k={\cal S}^k( F^{k+1})$ \label{pullback attractor property 2} holds, and \item $\rho_H({\cal S}^{k,l}(B), F^k)\to 0$ as $l\to\infty$ for every bounded set $B\subset{\mathbb R}^d$. \label{pullback attractor property 3} \end{enumerate} \end{definition} It immediately follows from the definition that a pullback attractor, if it exists, is unique (Theorem \ref{unique}). We note that the requirement that the sets $\{F^k\}$ are uniformly bounded is not normally part of the definition of the continuous-time pullback attractor (see Carvalho, Langa \& Robinson \cite{Carvalho2013}), but in our setting this restriction is both natural and convenient. Without such an assumption the pullback attractor need not attract itself, and then an additional property (such as minimality) is required to ensure uniqueness. Even within the continuous-time setting attractors that are uniformly bounded `in the past' (for all $t\le t_0$ for each $t_0\in{\mathbb R}$) are convenient to avoid various possible pathologies, and this corresponds to uniform boundedness for our iterated function systems, which are only defined for indices that correspond to $t\le0$. Reasonable pullback attractors result when we impose some separation property on the iterated function system (for the autonomous case see, for example, Falconer \cite{BkFalconer14} p.~139, or Hutchinson \cite{Hutch}). The following condition is a natural generalisation of the familiar Moran open-set condition (see Section \ref{IFS1}).
\begin{definition}\label{definition - generalised MOSC} A non-autonomous iterated function system satisfies the \emph{generalised Moran open-set condition} if there exists a uniformly bounded sequence of non-empty open sets $U^k\subset{\mathbb R}^d$ for $k\in{\mathbb N}_0$ such that \begin{enumerate} \item[(i)] ${\cal S}^k(U^{k+1})\subseteq U^k$; \item[(ii)] $f_i(U^k)\cap f_j(U^k)=\emptyset$ for $i,j\in{\cal I}_k$ such that $i\neq j$; and \item[(iii)] $\lambda(U^k)\ge \epsilon_0>0$ for all $k\in{\mathbb N}_0$, where $\lambda$ is the $d$-dimensional Lebesgue measure. \end{enumerate} \end{definition} Our main results establish conditions under which pullback attractors satisfying the generalised Moran open-set condition are equi-homogeneous. \subsection{Generalised Cantor sets}\label{GCS} The generalised Cantor sets studied in Robinson \& Sharples \cite{RobinsonSharples13RAEX} and Olson, Robinson \& Sharples \cite{ORS} are illustrative examples of pullback attractors. These sets are defined as follows. For $\lambda \in (0, 1/2)$ let the application of ${\rm gen}_\lambda$ to a finite set of disjoint compact intervals be the procedure in which the open middle $1-2\lambda$ proportion of each interval is removed. Given $c_n\in (0,1/2)$ for all $n\in{\mathbb N}$, the generalised Cantor set $C$ generated by $\set{c_{n}}$ is given by \[ C={\textstyle \bigcap_{n=1}^\infty} C_n \words{where} C_{n+1}={\rm gen}_{c_{n+1}} C_n \words{and} C_0=[0,1]. \] By repeatedly taking the left shift of the sequence $\set{c_{n}}$ we produce a countable family of generalised Cantor sets: for each $k\in\mathbb{N}_{0}$ the set $C^{k}$ is the generalised Cantor set generated by the sequence $\set{c_{n+k}}_{n\in\mathbb{N}}$. This family of Cantor sets is a straightforward example of a pullback attractor. \begin{lemma} Let $\set{c_{n}}_{n\in\mathbb{N}}$ be a sequence with $c_{n}\in\left(0,1/2\right)$.
The collection of generalised Cantor sets $\set{C^{k}}$ for $k\in\mathbb{N}_{0}$ is the pullback attractor of the non-autonomous iterated function system given by \begin{equation}\label{gencant} f_{2k-1}(x)=c_k x,\quad f_{2k}(x)=c_k x + 1-c_k \end{equation} with ${\cal I}_k=\{2k-1,2k\}$ for each $k\in{\mathbb N}$. \end{lemma} \begin{proof} Clearly each $C^k$ is compact and uniformly bounded so property \ref{pullback attractor property 1} is satisfied. Next, writing $C^{k}_{1}=\left[0,1\right]$ and $C^k_{n+1}={\rm gen}_{c_{n+k}} C^k_n$ it is not difficult to see that \begin{align}\label{prepend} C^{k}_{n}={\cal S}^{k,k+n}\left(\left[0,1\right]\right)={\cal S}^{k}\circ\ldots\circ {\cal S}^{k+n-1}\left(\left[0,1\right]\right). \end{align} Consequently, \begin{align*} {\cal S}^k C^{k+1} =\bigcap_{n=1}^\infty {\cal S}^k \circ {\cal S}^{k+1,k+n}([0,1]) =\bigcap_{n=1}^\infty {\cal S}^{k,k+n}([0,1]) =\bigcap_{n=1}^\infty C_{n+1}^{k} = C^{k} \end{align*} shows that $C^k$ is invariant, hence property \ref{pullback attractor property 2} is satisfied. Finally, let $B$ be any bounded set. An argument from Section \ref{non-auto-sec} shows that \[ \rho_H({\cal S}^{k,l}(B),{\cal S}^{k,l}([0,1]))\le 2^{k-l}\rho_H(B,[0,1]) \] (this is a consequence of \eqref{lcontract} with $\sigma^*=1/2$) which, coupled with the fact that \[ \rho_H(C^k_n,C^k)\to 0 \quad \text{as}\quad n\to\infty, \] yields \begin{align*} \rho_H\big({\cal S}^{k,l}(B),C^k\big) &\le \rho_H\big({\cal S}^{k,l}(B),{\cal S}^{k,l}([0,1])\big) +\rho_H\big({\cal S}^{k,l}([0,1]),C^k\big)\\ &\le 2^{k-l}\rho_H(B,[0,1]) +\rho_H(C^k_{l-k},C^k)\to 0 \end{align*}% as $l\to\infty$. Therefore, property \ref{pullback attractor property 3} is satisfied. Consequently $\set{C^{k}}$ is a pullback attractor of the above iterated function system, which by Theorem \ref{unique} is unique.
\end{proof} We remark that from \eqref{prepend} we see that applying the procedure ${\rm gen}_{c_{n}}$ to the intervals $C_{n}$ is equivalent to replacing the chain of maps ${\cal S}^{0,n}$ acting on $\left[0,1\right]$ by ${\cal S}^{0,n}\circ {\cal S}^{n} = {\cal S}^{0,n+1}$, i.e. subsequent levels of the Cantor set construction are obtained by prepending maps in the iterated function system. \subsection{Summary of main results: Equi-homogeneity of attractors for iterated function systems} It is well known that self-similar sets (i.e. attractors of autonomous iterated function systems) that satisfy the Moran open-set condition are Ahlfors-David regular (see, for example, Theorem 1(i) in Section 5.3 of Hutchinson \cite{Hutch}). By Theorem \ref{theorem - ahlfors implies equih} it follows that such sets are equi-homogeneous. \begin{restatable}{theorem}{mainzero} \label{theorem - self-similar} Let $F$ be the attractor of an autonomous iterated function system of similarities. If $F$ satisfies the Moran open-set condition then $F$ is equi-homogeneous. \end{restatable} In Section \ref{non-auto-sec} we extend this analysis to pullback attractors of non-autonomous iterated function systems. Unlike in the autonomous case, these attractors are typically not Ahlfors-David regular; however, they are equi-homogeneous under some mild assumptions. After proving the existence and uniqueness of pullback attractors for iterated function systems of arbitrary contractions (essentially following an argument of Hutchinson \cite{Hutch}) we restrict our attention to iterated function systems of contracting similarities. Our first result concerning such non-autonomous iterated function systems requires the contraction ratios to coincide at each stage of the iteration.
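The generalised Cantor sets of Section \ref{GCS} provide a concrete testbed for these results. The sketch below (Python with exact rational arithmetic; illustrative only, and the function names are our own) builds the pre-fractal levels $C^k_n$ by repeatedly applying ${\rm gen}_{c}$, mirroring the maps \eqref{gencant}:

```python
from fractions import Fraction

def gen(c, intervals):
    """gen_c: remove the open middle 1-2c proportion of each interval,
    keeping the left and right pieces of relative length c."""
    out = []
    for a, b in intervals:
        w = b - a
        out.append((a, a + c * w))       # left piece, as in x -> c x
        out.append((b - c * w, b))       # right piece, as in x -> c x + 1 - c
    return out

def cantor_level(c, k, n):
    """The n-th pre-fractal level C^k_n, driven by the shifted sequence
    c_{k+1}, c_{k+2}, ... (here c[j] plays the role of c_{j+1})."""
    intervals = [(Fraction(0), Fraction(1))]
    for j in range(n):
        intervals = gen(c[k + j], intervals)
    return intervals

# A sequence alternating between the ratios 1/3 and 1/4:
c = [Fraction(1, 3) if j % 2 == 0 else Fraction(1, 4) for j in range(10)]

level = cantor_level(c, 0, 2)
assert len(level) == 4                                   # 2^n intervals after n generations
assert all(b - a == Fraction(1, 12) for a, b in level)   # common length c_1 c_2
```

Passing $k>0$ produces the left-shifted sets $C^k_n$, so the family $\{C^k\}$ of the lemma can be explored directly.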
\begin{restatable}{theorem}{mainone} \label{theorem - homogeneous case} If $\set{F^{k}}$ is the pullback attractor of a non-autonomous iterated function system of similarities satisfying the Moran open-set condition, $\sigma_{i}=c_{k}$ for all $i\in{\cal I}_{k}$ and all $k\in\mathbb{N}$, and there exists an $N>0$ such that ${\rm card}({\cal I}_k) \leq N$ for $k\in{\mathbb N}$, then each set $F^{k}$ is equi-homogeneous. \end{restatable} We also note (and prove in Section \ref{section - equi-homogeneity and assouad}) that while the resulting pullback attractors are equi-homogeneous, under certain choices for the sequence $c_k$, their Assouad and upper box-counting dimensions are not equal. Next we turn to the case where the contraction ratios $\sigma_i$ may be different for indices within each index set ${\cal I}_k$. In this case we require some uniformity in the contraction ratios: \begin{restatable}{theorem}{mainthree} \label{theorem - general} Let $\set{F^{k}}$ be the pullback attractor of a non-autonomous iterated function system of similarities that satisfies the Moran open-set condition. If \[ \inf\{\,\sigma_i:i\in{\mathbb N}\,\}=\sigma_*>0 \] and there exists an $s>0$ such that \begin{equation}\label{hippo} \sum_{i\in{\cal I}_k} \sigma_i^s=1 \words{for all} k\in{\mathbb N} \end{equation} then each set $F^k$ is equi-homogeneous with $\dim_{\rm A} F^k = s$. \end{restatable} Finally, we note that the hypothesis \eqref{hippo} can be weakened a little, for which we introduce the notation \begin{align*} {\cal J}^{k,l}&={\cal I}_{k+1}\times\cdots\times{\cal I}_l &\text{for}\ k<l. \end{align*} \begin{restatable}{theorem}{mainfour}\label{theorem - general averaged} Let $\set{F^{k}}$ be the pullback attractor of a non-autonomous iterated function system of similarities that satisfies the Moran open-set condition.
If \[ \inf\{\,\sigma_i:i\in{\mathbb N}\,\}=\sigma_*>0 \] and there exist constants $s>0$, $n_0\in{\mathbb N}$, and $L>0$ such that \begin{equation}\label{ahippo} L^{-1}\le {\sum_{\alpha\in{\cal J}^{k,k+n}}} \sigma_\alpha^s\le L \words{for all} k\in\mathbb{N}\words{and} n\ge n_0 \end{equation} then each set $F^k$ is equi-homogeneous with $\dim_{\rm A} F^k=s$. \end{restatable} Intuitively \eqref{ahippo} implies that \eqref{hippo} holds when averaged over long enough sequences of iterations. Note that the existence of generalised Cantor sets with unequal upper box-counting and Assouad dimensions (see Proposition \ref{proposition - equihom distinct dimension}) indicates a problem with the proof in Li \cite{Li2013} as these sets are examples of Moran sets (see Wen \cite{Wen2001}) with contraction ratios bounded below by some positive number. The addition of the assumption \eqref{hippo} (rewritten in the framework of Moran sets) is sufficient to overcome these problems and conclude that the upper box-counting and Assouad dimensions are equal. Further, with this assumption it can be shown that the Moran sets considered in Li \cite{Li2013} are equi-homogeneous (see Olson, Robinson \& Sharples \cite{ORSMoransets}). \section{Dimension and regularity}\label{section - dimension and regularity} This section recalls the definitions and some facts about various notions of dimension and regularity of sets in an arbitrary metric space $\left(X,d_{X}\right)$. We denote by $B_{\delta}(x)=\set{y\in X : d_X(x,y) \leq \delta}$ the closed ball of radius $\delta$ with centre $x\in X$ and for brevity we refer to closed balls of radius $\delta$ as $\delta$-balls. For a set $F\subset X$ and each $\delta>0$ we denote by ${\cal N}(F,\delta)$ the minimum number of $\delta$-balls with centres in $F$ such that $F$ is contained in their union. 
A set $F\subset X$ is said to be \textit{totally bounded} if ${\cal N}(F,\delta)<\infty$ for all $\delta>0$, which is to say that $F$ can be covered by finitely many balls of any radius. We say that a metric space $X$ is \textit{locally totally bounded} if every ball in $X$ is totally bounded. \subsection{Box-counting and Assouad dimensions} We begin by recalling the definitions of the box-counting and Assouad dimensions. \begin{definition}\label{definition - box-counting dimensions} For a totally bounded set $F\subset X$ the upper and lower box-counting dimensions are defined by \begin{align} \dim_{\rm B}F&=\limsup_{\delta\to 0+} \frac{\log {\cal N}(F,\delta)}{-\log \delta}\\ \text{and}\qquad \dim_{\rm LB}F&=\liminf_{\delta\to 0+} \frac{\log {\cal N}(F,\delta)}{-\log \delta}, \end{align} respectively. \end{definition} The box-counting dimensions essentially capture the exponent $s\in{\mathbb R}^{+}$ for which ${\cal N}(F,\delta)\sim \delta^{-s}$. More precisely, it follows from Definition \ref{definition - box-counting dimensions} that for all $\varepsilon>0$ and all $\delta_{0}>0$ there exists a constant $C\geq 1$ such that \begin{align} C^{-1}\delta^{-\dim_{\rm LB}F+\varepsilon} &\leq {\cal N}(F,\delta)\leq C\delta^{-\dim_{\rm B}F-\varepsilon} & \forall\ 0<\delta\leq\delta_0.\label{box-counting growth bounds} \end{align} For some bounded sets $F$ the bounds \eqref{box-counting growth bounds} also hold for $\varepsilon=0$, giving precise control of the growth of ${\cal N}(F,\delta)$. In Olson, Robinson and Sharples \cite{ORS} we distinguish this class of sets and say that they `attain' their box-counting dimensions.
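Definition \ref{definition - box-counting dimensions} suggests a direct numerical estimate: count a minimal cover at a few scales and read off the slope of $\log {\cal N}$ against $-\log\delta$. The Python sketch below (illustrative only; a greedy cover by centred balls is optimal for finite subsets of the line) does this for a finite approximation of the middle-third Cantor set, recovering values close to $\log 2/\log 3\approx 0.6309$:

```python
import math

def cantor_endpoints(n):
    """Endpoints of the 2^n level-n intervals of the middle-third Cantor set."""
    pts = [0.0, 1.0]
    for _ in range(n):
        pts = [p / 3 for p in pts] + [p / 3 + 2 / 3 for p in pts]
    return sorted(pts)

def cover_count(points, delta):
    """N(F, delta) for a finite F on the line: greedy cover by closed
    delta-balls centred in F, always centring at the rightmost point
    within delta of the leftmost uncovered point (optimal in 1-D)."""
    count, i, n = 0, 0, len(points)
    while i < n:
        j = i
        while j + 1 < n and points[j + 1] <= points[i] + delta:
            j += 1
        centre = points[j]
        count += 1
        while i < n and points[i] <= centre + delta:
            i += 1
    return count

F = cantor_endpoints(10)            # finite approximation of the Cantor set
slopes = [math.log(cover_count(F, 3.0 ** (-n))) / (n * math.log(3.0))
          for n in range(2, 8)]
# log N(F, delta) / (-log delta) settles near log 2 / log 3:
assert all(abs(s - math.log(2) / math.log(3)) < 0.1 for s in slopes)
```

Because a finite sample only resolves the set down to its finest generation, the scales $\delta$ probed must stay well above the sample resolution, here $3^{-10}$.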
A useful quantity for proving lower bounds is ${\cal P}(F,\delta)$, the maximum number of disjoint $\delta$-balls with centres in $F$, which is related to the minimum cover by centred balls by \begin{align}\label{geometric inequalities} {\cal N}(F,2\delta)\leq {\cal P}(F,2\delta)\leq {\cal N}(F,\delta) \end{align} (see, for example, `Equivalent definitions' 2.1 in Falconer \cite{BkFalconer14} or Lemma 2.1 in Robinson \& Sharples \cite{RobinsonSharples13RAEX}). In light of the inequalities \eqref{geometric inequalities}, replacing ${\cal N}(F,\delta)$ with ${\cal P}(F,\delta)$ in Definition \ref{definition - box-counting dimensions} gives an equivalent formulation of the box-counting dimensions. The Assouad dimension is a less familiar notion of dimension, in which we are concerned with `local' coverings of a set $F$: for more details see Assouad \cite{Assouad}, Bouligand \cite{Bouligand}, Luukkainen \cite{Luuk}, Olson \cite{EJO}, or Robinson \cite{JCR}. \begin{definition}\label{definition - Assouad dimension} The Assouad dimension of a set $F\subset X$ is the infimum over all $s\in {\mathbb R}^{+}$ such that for all $\delta_{0}>0$ there exists a constant $C>0$ for which \begin{align} \sup_{x\in F} {\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right) \leq C\left(\delta/\rho\right)^{s}\quad \forall\ \delta,\rho \quad \text{with} \quad 0<\rho<\delta\leq\delta_{0}.\label{Assouad scale} \end{align} \end{definition} The lower Assouad dimension, also called the minimal dimension, complements the Assouad dimension with a lower bound on the scaling of local covers (see, for example, Larman \cite{Larman} or Fraser \cite{Fraser2013}).
\begin{definition}\label{definition - lower Assouad dimension} The lower Assouad dimension of a set $F\subset X$ is the supremum over all $s\in{\mathbb R}^{+}$ such that for all $\delta_{0}>0$ there exists a constant $C>0$ for which \begin{align*} \inf_{x\in F} {\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right) \geq C\left(\delta/\rho\right)^{s}\quad \forall\ \delta,\rho \quad \text{with} \quad 0<\rho<\delta\leq\delta_{0}. \end{align*} \end{definition} Analogous to the box-counting dimensions, the Assouad and lower Assouad dimensions essentially capture the exponent $s\in {\mathbb R}^{+}$ for which ${\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right) \sim \left(\delta/\rho\right)^{s}$. More precisely, it follows from Definitions \ref{definition - Assouad dimension} and \ref{definition - lower Assouad dimension} that for all $\varepsilon >0$ and all $\delta_{0}>0$ there exists a constant $C\geq 1$ such that \begin{equation} C^{-1}\left(\delta/\rho\right)^{\dim_{\rm LA}F -\varepsilon} \leq {\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right) \leq C\left(\delta/\rho\right)^{\dim_{\rm A}F + \varepsilon} \end{equation} for all $x\in F$ and all $\delta,\rho$ with $0<\rho<\delta\leq \delta_{0}$. For the Assouad and lower Assouad dimensions to be defined we minimally require that each intersection $B_{\delta}(x)\cap F$ is totally bounded. This trivially holds if $X$ is a locally totally bounded space, which is to say that every ball $B_{\delta}(x)\subset X$ is totally bounded (for example, in Euclidean space $X={\mathbb R}^{n}$). In this case Definition \ref{definition - Assouad dimension} is equivalent to the simpler formulation of taking the infimum over all $s\in{\mathbb R}^{+}$ for which there exist constants $\delta_{0}>0$ and $C>0$ such that \eqref{Assouad scale} holds, which is easier to check. A further useful formulation can be made if $F$ is itself totally bounded.
In this case Definition \ref{definition - Assouad dimension} is equivalent to taking the infimum over $s\in {\mathbb R}^{+}$ for which there exists a constant $C>0$ such that \eqref{Assouad scale} holds for all $\delta,\rho$ with $0<\rho<\delta<\infty$. The lower Assouad dimension has similar equivalent formulations in these cases. In this paper we study the Assouad dimensions in arbitrary metric spaces, requiring the full generality of Definition \ref{definition - Assouad dimension}, before focusing on attractors of iterated function systems in Euclidean space, which is locally totally bounded. The following technical lemma gives a relationship between the minimal size of covers of the set $B_{\delta}(x)\cap F$ for different length-scales, which we will use in many of the subsequent proofs. \begin{lemma}\label{lemma - refinement} Let $F\subset X$. For all $\delta,\rho,r>0$ and each $x\in F$ \begin{align}\label{refinement} {\cal N}(B_{\delta}(x)\cap F,\rho)&\leq {\cal N}(B_{\delta}(x)\cap F,r) \sup_{x\in F} {\cal N}(B_{r}(x)\cap F,\rho) \end{align} \end{lemma} \begin{proof} The only non-trivial case occurs when $\rho<r<\delta$. If $M:={\cal N}(B_{\delta}(x)\cap F,r)=\infty$ then there is nothing to prove. Assume that $M<\infty$ and let $x_{1},\ldots, x_{M}\in F$ be the centres of the $r$-balls $B_{r}(x_{j})$ that cover $B_{\delta}(x)\cap F$. Clearly \begin{align*} B_{\delta}(x)\cap F &\subset \bigcup_{j=1}^{M} B_{r}(x_{j})\cap F\\ \text{so}\qquad {\cal N}(B_{\delta}(x)\cap F,\rho) &\leq \sum_{j=1}^{M} {\cal N}(B_{r}(x_{j})\cap F,\rho)\\ &\leq M \sup_{x\in F} {\cal N}(B_{r}(x)\cap F,\rho) \end{align*} which is precisely \eqref{refinement}. 
\end{proof} It is known that for a totally bounded set $F\subset X$ the four notions of dimension that we have now introduced satisfy \begin{align}\label{dimension inequalities 1} \dim_{\rm LA}F\leq \dim_{\rm LB}F\leq\dim_{\rm B}F\leq\dim_{\rm A}F \end{align} (see, for example, Lemma 9.6 in Robinson \cite{JCR}, Fraser \cite{Fraser2013} or Theorem A.5 in Luukkainen \cite{Luuk}). Further, if $F$ is compact then \begin{align}\label{dimension inequalities 2} \dim_{\rm LA}F \leq \dim_{\rm H} F \leq \dim_{\rm P} F \leq \dim_{\rm B} F \leq \dim_{\rm A} F \end{align} (see Larman \cite{Larman}) where $\dim_{\rm H}$ and $\dim_{\rm P}$ are the familiar Hausdorff and packing dimensions (see, for example, Falconer \cite{BkFalconer14}). The following simple example of a compact countable subset of the real line illustrates that the inequalities involving the Assouad dimensions in \eqref{dimension inequalities 1} can be strict: \begin{proposition}\label{proposition - different scaling} For each $\alpha>0$ the set $F_{\alpha}:= \set{n^{-\alpha}}_{n\in\mathbb{N}}\cup\set{0}$ satisfies \begin{align*} \dim_{\rm LA}F_{\alpha} &=0\\ \dim_{\rm LB}F_{\alpha}=\dim_{\rm B}F_{\alpha}&= (1+\alpha)^{-1} \intertext{and} \dim_{\rm A} F_{\alpha} &= 1. \end{align*} \end{proposition} \begin{proof} See Robinson \cite{JCR} Example 13.4 or Falconer \cite{BkFalconer14} Example 2.7 for the derivation of the box-counting dimensions of $F_{\alpha}$. For the Assouad dimension consider for each $k\in\mathbb{N}$ the lengths $\delta_{k}=k^{-\alpha}$ and $\rho_{k}=\left(2k\right)^{-\alpha}-\left(2k+1\right)^{-\alpha}$. As $k^{\alpha} < \left(2k\right)^{\alpha}$ it is clear that $0<\rho_{k}<\delta_{k}$. 
Applying the mean value theorem to the function $f\left(x\right)=x^{-\alpha}$ we obtain \begin{align} \alpha \left(2k+1\right)^{-\alpha-1} \leq \rho_{k} &\leq \alpha \left(2k\right)^{-\alpha-1}\notag \intertext{from which it follows that} \frac{1}{\alpha}2^{\alpha+1}k\leq \delta_{k}/\rho_{k} \leq \frac{k^{-\alpha}}{\alpha\left(2k+1\right)^{-\alpha-1}} &= \frac{1}{\alpha}\left(\frac{k}{2k+1}\right)^{-\alpha-1}k \leq \frac{1}{\alpha} 3^{\alpha+1} k \label{example ratio bound} \end{align} as $\left(2k+1\right)/k \leq 3$ for all $k\in\mathbb{N}$. Next, observe that $B_{\delta_{k}}\left(0\right)\cap F_{\alpha} \supseteq \set{n^{-\alpha}}_{n \geq k+1}$ and that the distance from any of the $k-1$ points in $F_{\alpha,k}=\set{n^{-\alpha} \mid k+1 \leq n \leq 2k-1}$ to any other point of $F_{\alpha}$ is greater than $\rho_{k}$. Consequently, any covering of $B_{\delta_{k}}\left(0\right)\cap F_{\alpha}$ by $\rho_{k}$-balls with centres in $F_{\alpha}$ requires at least $k-1$ balls for the elements of $F_{\alpha,k}$ plus at least one ball for the remaining points, hence from \eqref{example ratio bound} \begin{align} {\cal N}\left(B_{\delta_{k}}\left(0\right) \cap F_{\alpha}, \rho_k \right) &\geq k \geq \alpha 3^{-\alpha -1} \left(\delta_{k}/\rho_{k}\right) & \forall k\in\mathbb{N}.\label{example Assouad lower bound} \end{align} Next, suppose for a contradiction that $\dim_{\rm A}F_{\alpha}<1$, so there exist positive constants $\varepsilon,\delta_{0}$ and $C$ such that \begin{align*} {\cal N}\left(B_{\delta}\left(0\right)\cap F_{\alpha},\rho\right) &\leq C \left(\delta/\rho\right)^{1-\varepsilon} & 0<\rho<\delta\leq \delta_{0}.
\end{align*} As $\delta_{k} < \delta_{0}$ for all $k\in\mathbb{N}$ sufficiently large it follows from \eqref{example Assouad lower bound} that \begin{align*} \alpha 3^{-\alpha -1} \left(\delta_{k}/\rho_{k}\right) \leq {\cal N}\left(B_{\delta_{k}}\left(0\right) \cap F_{\alpha}, \rho_k \right) \leq C \left(\delta_{k}/\rho_{k}\right)^{1-\varepsilon} \end{align*} hence $\left(\delta_{k}/\rho_{k}\right)^{\varepsilon} \leq C\alpha^{-1} 3^{\alpha+1}$, which is a contradiction as $\delta_{k}/\rho_{k}$ is unbounded from \eqref{example ratio bound}. We conclude that $\dim_{\rm A}F_{\alpha}=1$, as subsets of the real line have Assouad dimension at most $1$ (see Luukkainen \cite{Luuk}). For the lower Assouad dimension observe that $1\in F_{\alpha}$ is an isolated point so \[ \inf_{x\in F_{\alpha}} {\cal N}(B_{\delta}(x)\cap F_{\alpha},\rho) = 1 \] for all $\delta,\rho$ with $0<\rho<\delta<1-2^{-\alpha}$, as $B_{\delta}(1)\cap F_{\alpha}=\set{1}$ for such $\delta$ and this isolated point can be covered by a single ball of any radius. \end{proof} In fact, the above argument demonstrates that any set with an isolated point must have lower Assouad dimension equal to $0$. This could be viewed as an undesirable property for a dimension to have, as adding an isolated point to a set $F$ with $\dim_{\rm LA}F > 0$ has the effect of \emph{reducing} the lower Assouad dimension to zero. Equi-homogeneous sets, which we define in Section \ref{section - equi-homogeneity}, are those for which the quantity ${\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right)$ scales identically at every point $x\in F$. We will see that the equi-homogeneity property essentially removes the local dependence in the definitions of the Assouad and lower Assouad dimensions. Before we introduce equi-homogeneity we finish this section by recalling some familiar notions of regularity. \subsection{Regularity of sets} Dimensional equality is a common notion of regularity of sets, which is enjoyed by all smooth manifolds.
Indeed, Mandelbrot \cite{Mandelbrot75} first defined ``fractal'' sets as those with unequal topological and Hausdorff dimension, although this definition has fallen out of favour in recent years. Nevertheless, equality of dimensions is sufficient for some good properties of sets. For example, if the Hausdorff and upper box-counting dimensions of $F\subset {\mathbb R}^{n}$ coincide, then the product set inequality \[ \dim_{\rm H}\left(F\times E\right) \geq \dim_{\rm H} F + \dim_{\rm H} E \] is actually an equality for arbitrary $E\subset {\mathbb R}^{m}$ (see Corollary 7.4 of Falconer \cite{BkFalconer14}). Equality in the Assouad and lower Assouad dimensions is particularly powerful: it follows from the inequalities \eqref{dimension inequalities 1} and \eqref{dimension inequalities 2} that if the Assouad and lower Assouad dimensions coincide then all of these dimensions agree. We now recall the definition of Ahlfors-David regularity. \begin{definition}\label{definition - Ahlfors-David} A bounded set $F\subset X$ is \emph{Ahlfors-David $s$-regular} if there exists a constant $C>0$ such that \begin{align}\label{Ahlfors-David inequality} C^{-1} \delta^{s} \leq \mathcal{H}^{s}\left(B_{\delta}\left(x\right)\cap F\right) \leq C \delta^{s} \end{align} for all $x\in F$ and all $0<\delta <\diameter F$, where $\mathcal{H}^{s}$ is the usual $s$-dimensional Hausdorff measure. \end{definition} If $F$ is Ahlfors-David $s$-regular then by taking $\delta=\diameter F$ it immediately follows that $0<\mathcal{H}^{s}\left(F\right)<\infty$, which is precisely that $F$ is an `$s$-set' (see Falconer \cite{BkFalconer14} p.~48) so in particular $\dim_{\rm H}F=s$. Remarkably, if \eqref{Ahlfors-David inequality} holds with $\mathcal{H}^{s}$ replaced by \emph{any} Borel measure then it follows that $F$ is Ahlfors-David $s$-regular (see Chapter 8 of Heinonen \cite{Heinonen}).
Ahlfors-David regularity is a strong notion of regularity in the sense that it guarantees the equality of all the dimensions in \eqref{dimension inequalities 1}. We prove this as part of Theorem \ref{theorem - ahlfors implies equih} in the next section. \section{Equi-homogeneity}\label{section - equi-homogeneity} From Definitions \ref{definition - Assouad dimension} and \ref{definition - lower Assouad dimension} we see that the Assouad and lower Assouad dimensions respectively capture the maximum and minimum cardinalities of local covers. The distinct values for the dimensions in the example of Proposition \ref{proposition - different scaling} reflect the different cardinalities of covers at the `high detail' limit point $0\in F$ and the `low detail' isolated point $1\in F$. We introduce the equi-homogeneity property to examine sets where there is no such `local dependence' on the cardinalities of local covers. Roughly, this means that at each fixed length-scale an equi-homogeneous set exhibits identical dimensional detail near every point. \begin{definition}\label{equihom} We say that a set $F\subset X$ is \emph{equi-homogeneous} if for all $\delta_{0}>0$ there exist constants $M\geq 1$ and $c_{1},c_{2}>0$ such that \begin{align}\label{equihom inequality} \sup_{x\in F} {\cal N}(B_{\delta}(x)\cap F,\rho)&\leq M \inf_{x\in F} {\cal N}(B_{c_{1}\delta}(x)\cap F,c_{2}\rho) \end{align} for all $\delta,\rho$ with $0<\rho<\delta\leq \delta_{0}$. \end{definition} Note that as ${\cal N}(B_{\delta}(x)\cap F,\rho)$ increases with $\delta$ and decreases with $\rho$, by replacing the $c_{i}$ with $1$ if necessary we can assume without loss of generality that $c_{2}\leq 1 \leq c_{1}$ in \eqref{equihom inequality}. \subsection{Equivalent definitions} As with the definitions of the Assouad dimensions, for a large class of sets it is sufficient that \eqref{equihom inequality} holds only for \textit{some} $\delta_{0}$.
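These ideas are easy to probe numerically. The Python sketch below (illustrative only) computes ${\cal N}(B_{\delta}(x)\cap F,\rho)$ at every point of a finite truncation of the set $F_{1}=\set{n^{-1}}\cup\set{0}$ from Proposition \ref{proposition - different scaling}: the isolated point $1$ needs a single ball while the accumulation point $0$ needs many, exhibiting the disparity between $\sup$ and $\inf$ that underlies the failure of equi-homogeneity for such sets:

```python
def cover_count(points, delta):
    """Minimal N(F, delta) for a finite F on the line: greedy cover by closed
    delta-balls centred in F (greedy is optimal in one dimension)."""
    points = sorted(points)
    count, i, n = 0, 0, len(points)
    while i < n:
        j = i
        while j + 1 < n and points[j + 1] <= points[i] + delta:
            j += 1
        centre = points[j]   # rightmost admissible centre for the leftmost point
        count += 1
        while i < n and points[i] <= centre + delta:
            i += 1
    return count

def local_counts(F, delta, rho):
    """N(B_delta(x) cap F, rho) at every x in F."""
    return [cover_count([p for p in F if abs(p - x) <= delta], rho) for x in F]

# Finite truncation of F_1 = {1/n} with the accumulation point 0:
F = [0.0] + [1.0 / n for n in range(1, 200)]
counts = local_counts(F, 0.05, 0.0005)
assert min(counts) == 1    # the isolated point 1 is covered by a single ball
assert max(counts) > 10    # near the accumulation point 0 many balls are needed
```

By contrast, running the same probe on the endpoints of a self-similar Cantor set produces local counts of comparable size at every point, in line with Theorem \ref{theorem - self-similar}.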
\begin{lemma}\label{lemma - equiextension} If $F\subset X$ is totally bounded or $X$ is a locally totally bounded space then $F$ is equi-homogeneous if and only if there exist constants $M\geq 1$ and $c_{1},c_{2},\delta_{1}>0$ such that \begin{align} \sup_{x\in F} {\cal N}(B_{\delta}(x)\cap F,\rho)&\leq M \inf_{x\in F} {\cal N}(B_{c_{1}\delta}(x)\cap F,c_{2}\rho)\label{equihom equivalent} \end{align} for all $\rho,\delta$ satisfying $0<\rho<\delta\leq\delta_{1}$. \end{lemma} \begin{proof} It is sufficient to prove that for each $\delta_{0}>0$ the inequality \eqref{equihom equivalent} can be extended to hold for all $\delta,\rho$ with $0<\rho<\delta\leq \delta_{0}$ up to a change in constant $M$. Let $\delta_{0}>0$ be arbitrary. If $\delta_{0}\leq \delta_{1}$ then there is nothing to prove, so we assume that $\delta_{0}>\delta_{1}$. Suppose that $\delta,\rho$ lie in the range $0<\rho<\delta_{1}<\delta\leq \delta_{0}$ and let $x\in F$ be arbitrary. From Lemma \ref{lemma - refinement} with $r=\delta_{1}$ we obtain \begin{align} {\cal N}(B_{\delta}(x)\cap F,\rho) &\leq {\cal N}(B_{\delta}(x)\cap F,\delta_{1}) \sup_{x\in F} {\cal N}(B_{\delta_{1}} (x)\cap F,\rho)\notag\\ &\leq {\cal N}(B_{\delta}(x)\cap F,\delta_{1})M \inf_{x\in F} {\cal N} \left(B_{c_{1}\delta_{1}}\left(x\right)\cap F,c_{2}\rho\right)\notag\\ &\leq {\cal N}(B_{\delta}(x)\cap F,\delta_{1})M \inf_{x\in F} {\cal N} \left(B_{c_{1}\delta}\left(x\right)\cap F,c_{2}\rho\right)\label{extension bound small rho} \end{align} which follows from \eqref{equihom equivalent} and the fact that $\delta>\delta_{1}$. Taking the first case assume that $X$ is a locally totally bounded space. 
It follows from \eqref{extension bound small rho} that for $0<\rho<\delta_{1}<\delta\leq \delta_{0}$ \[ {\cal N}(B_{\delta}(x)\cap F,\rho)\leq {\cal N}(B_{\delta_{0}}(0),\delta_{1}) M \inf_{x\in F} {\cal N} \left(B_{c_{1}\delta}\left(x\right)\cap F,c_{2}\rho\right), \] and trivially for $\delta_{1}\leq\rho<\delta\leq\delta_{0}$ that \[ {\cal N}(B_{\delta}(x)\cap F,\rho)\leq {\cal N}(B_{\delta}(x),\rho)\leq {\cal N}(B_{\delta_{0}}(x),\delta_{1})\leq {\cal N}(B_{\delta_{0}}(0),\delta_{1}) M \inf_{x\in F} {\cal N} \left(B_{c_{1}\delta}\left(x\right)\cap F,c_{2}\rho\right) \] as $M \inf_{x\in F} {\cal N} \left(B_{c_{1}\delta}\left(x\right)\cap F,c_{2}\rho\right)\geq 1$. Consequently, with $M_{\delta_{0}}={\cal N}(B_{\delta_{0}}(0),\delta_{1})M$ we obtain \begin{align*} \sup_{x\in F}{\cal N}(B_{\delta}(x)\cap F,\rho)&\leq M_{\delta_{0}} \inf_{x\in F}{\cal N}(B_{c_{1}\delta}(x)\cap F,c_{2}\rho) \qquad &\forall\ \delta,\rho\quad\text{with}\quad0<\rho<\delta\le\delta_0, \end{align*} so the constant $M_{\delta_{0}}$ is sufficient to extend \eqref{equihom equivalent} to all $0<\rho<\delta\leq \delta_{0}$. Taking the second case assume that $F \subset X$ is totally bounded. It follows from \eqref{extension bound small rho} that for $0<\rho<\delta_{1}<\delta\leq \delta_{0}$ \begin{align*} {\cal N}(B_{\delta}(x)\cap F,\rho)&\leq {\cal N}(F,\delta_{1}) M \inf_{x\in F} {\cal N} \left(B_{c_{1}\delta}\left(x\right)\cap F,c_{2}\rho\right), \intertext{and again for $\delta_{1}\leq \rho<\delta\leq \delta_{0}$ that} {\cal N}(B_{\delta}(x)\cap F,\rho)&\leq {\cal N}(F,\delta_{1})\leq {\cal N}\left(F,\delta_{1}\right) M \inf_{x\in F} {\cal N} \left(B_{c_{1}\delta}\left(x\right)\cap F,c_{2}\rho\right). \end{align*} Consequently, the constant $M^{\prime}={\cal N}(F,\delta_{1})M$ is sufficient to extend \eqref{extension bound small rho} to all $0<\rho<\delta\leq \delta_{0}$. The converse implication follows immediately from the definition of equi-homogeneity. 
\end{proof} We remark that for totally bounded sets $F\subset X$ the constant $M^{\prime}$ does not depend upon $\delta_{0}$, so for such sets we can in fact find an $M\geq 1$ for which \eqref{equihom inequality} holds for all $\delta,\rho$ with $0<\rho<\delta$. In normed spaces that are locally totally bounded (such as Euclidean space) there is an even more elementary formulation that does not require the constants $c_{1},c_{2}$. \begin{lemma}\label{lemma - euclideanequiextension} Let $X$ be a normed space that is locally totally bounded. A set $F\subset X$ is equi-homogeneous if and only if there exist constants $M\geq 1$ and $\delta_{1}>0$ such that \begin{align} \sup_{x\in F} {\cal N}(B_{\delta}(x)\cap F,\rho)&\leq M \inf_{x\in F} {\cal N}(B_{\delta}(x)\cap F,\rho) \end{align} for all $\rho,\delta$ with $0<\rho<\delta\leq \delta_{1}$. \end{lemma} \begin{proof} The `if' direction follows immediately from Lemma \ref{lemma - equiextension}. To prove the converse fix $\delta_{0}>0$ and let $M\geq 1$ and $c_{1},c_{2}>0$ with $c_{2}\leq 1 \leq c_{1}$ be such that \begin{align*} \sup_{x\in F} {\cal N}(B_{\delta}(x)\cap F,\rho) \leq M \inf_{x\in F} {\cal N}(B_{c_{1}\delta}(x)\cap F,c_{2}\rho) \end{align*} for all $0<\rho<\delta\leq \delta_{0}$. First, observe that replacing $\delta$ by $\delta/c_1$ we can assume that \begin{equation}\label{movec1} \sup_{x\in F} {\cal N}(B_{\delta/c_1}(x)\cap F,\rho)\leq M \inf_{x\in F} {\cal N}(B_{\delta}(x)\cap F,c_{2}\rho) \end{equation} for all $\delta,\rho$ with $0<\rho<\delta/c_1$, $\delta\le c_1\delta_0$. Note that if $\rho\geq \delta/c_1$ then the above inequality holds trivially, since the left-hand side is $1$ and the right-hand side is at least $M\ge 1$; so in fact \eqref{movec1} holds for all $0<\rho<\delta\leq \delta_{1}:= c_1\delta_0$.
Now, it follows from \eqref{refinement} with $r=\delta/c_{1}$ that \begin{align} {\cal N}(B_{\delta}(x)\cap F,\rho)&\leq {\cal N}(B_{\delta}(x),\delta/c_{1}) \sup_{x\in F} {\cal N}(B_{\delta/c_{1}}(x)\cap F,\rho) \notag \intertext{for all $x\in F$, so setting $N_{1}:= {\cal N}(B_{\delta}(x),\delta/c_{1}) = {\cal N}(B_{1}(0),1/c_{1})$, which follows as $X$ is a normed space, we obtain} \sup_{x\in F} {\cal N}(B_{\delta}(x)\cap F,\rho) &\leq N_{1} \sup_{x\in F} {\cal N}(B_{\delta/c_{1}}(x)\cap F,\rho).\label{equiextension 1} \end{align} It also follows from \eqref{refinement} that for any $r>0$ \begin{align} {\cal N}(B_{\delta}(x)\cap F,c_{2}\rho) &\leq {\cal N}(B_{\delta}(x)\cap F,r) \sup_{x\in F}{\cal N}(B_{r}(x)\cap F,c_{2}\rho)\notag\\ &\leq {\cal N}(B_{\delta}(x)\cap F,r) \sup_{x\in F}{\cal N}(B_{r}(x),c_{2}\rho)\notag\\ &={\cal N}(B_{\delta}(x)\cap F,r){\cal N}(B_{r}(0),c_{2}\rho)\notag \intertext{so taking $r=\rho$, setting $N_{2}={\cal N}(B_{\rho}(0),c_{2}\rho) ={\cal N}(B_{1}(0),c_{2})$, which again follows as $X$ is a normed space, and taking the infimum over $x\in F$ we obtain} \inf_{x\in F}{\cal N}(B_{\delta}(x)\cap F,c_{2}\rho) &\leq \inf_{x\in F} {\cal N}(B_{\delta}(x)\cap F,\rho) N_{2}. \label{equiextension 2} \end{align} It follows from \eqref{movec1}, \eqref{equiextension 1} and \eqref{equiextension 2} that for all $\rho,\delta$ with $0<\rho<\delta\leq \delta_{1}$ \begin{align*} \sup_{x\in F}{\cal N}(B_{\delta}(x)\cap F,\rho) \leq MN_{1}N_{2}\inf_{x\in F} {\cal N}(B_{\delta}(x)\cap F,\rho) \end{align*} so the desired inequality holds with the constants $MN_{1}N_{2}$ and $\delta_{1}=c_{1}\delta_{0}$, which completes the proof. \end{proof} For reasonable choices of product metric the product of two equi-homogeneous sets is also equi-homogeneous (see Olson, Robinson \& Sharples \cite{ORS}). \subsection{Equi-homogeneity and the Assouad dimensions}\label{section - equi-homogeneity and assouad} Equi-homogeneous sets have identical dimensional detail near every point.
The Assouad and lower Assouad dimensions of an equi-homogeneous set can be found by examining the cardinality of the local cover at an arbitrary point. \begin{lemma}\label{lemma - Assouad def for equihom} If $F\subset X$ is equi-homogeneous then for any $x\in F$ \begin{enumerate} \item $\dim_{\rm A} F$ is the infimum over all $s\in {\mathbb R}^{+}$ such that for all $\delta_{0}>0$ there exists a constant $C>0$ for which \label{Assouad def for equihom} \begin{align}\label{Assouad scale equihom} {\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right) \leq C\left(\delta/\rho\right)^{s}\quad \forall\ \delta,\rho \quad \text{with} \quad 0<\rho<\delta\leq\delta_{0}. \end{align} and \item $\dim_{\rm LA} F$ is the supremum over all $s\in {\mathbb R}^{+}$ such that for all $\delta_{0}>0$ there exists a constant $C>0$ for which \[ {\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right) \geq C\left(\delta/\rho\right)^{s}\quad \forall\ \delta,\rho \quad \text{with} \quad 0<\rho<\delta\leq\delta_{0}. \]\label{lower Assouad def for equihom} \end{enumerate} \end{lemma} \begin{proof} Let $s^{*}$ be the infimum in part \ref{Assouad def for equihom}. Clearly \eqref{Assouad scale} implies \eqref{Assouad scale equihom} for all $x\in F$, so $\dim_{\rm A} F \geq s^{*}$. Next, let $\delta_{0}>0$ be arbitrary. As $F$ is equi-homogeneous there exist constants $c_{1},c_{2},M>0$ with $c_{2}\leq 1 \leq c_{1}$ such that for all $x\in F$ \begin{align}\label{equihom at x} \sup_{y\in F} {\cal N} \left(B_{\delta}\left(y\right)\cap F,\rho\right) \leq M {\cal N} \left(B_{c_{1}\delta}\left(x\right)\cap F,c_{2}\rho\right) \quad \forall\ \delta,\rho \quad \text{with} \quad 0<\rho<\delta\leq\delta_{0}. \end{align} If $s>s^{*}$ then there exists a constant $C>0$ such that \begin{align*} {\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right) &\leq C\left(\delta/\rho\right)^{s}\quad \forall\ \delta,\rho \quad \text{with} \quad 0<\rho<\delta\leq c_{1}\delta_{0}.
\intertext{As $0<\rho<\delta\leq\delta_{0}$ implies $0<c_{2}\rho<c_{1}\delta\leq c_{1}\delta_{0}$ it follows that} M {\cal N} \left(B_{c_{1}\delta}\left(x\right)\cap F,c_{2}\rho\right) &\leq MC\left(c_{1}\delta/c_{2}\rho\right)^{s} \quad \forall\ \delta,\rho \quad \text{with} \quad 0<\rho<\delta\leq\delta_{0} \intertext{hence from \eqref{equihom at x}} \sup_{x\in F} {\cal N} \left(B_{\delta}\left(x\right)\cap F,\rho\right) &\leq MC\left(c_{1}/c_{2}\right)^{s}\left(\delta/\rho\right)^{s} \quad \forall\ \delta,\rho \quad \text{with} \quad 0<\rho<\delta\leq\delta_{0}. \end{align*} It follows that $s\geq \dim_{\rm A} F$, but as $s>s^{*}$ was arbitrary we conclude that $s^{*}\geq \dim_{\rm A}F$, hence $s^{*}=\dim_{\rm A}F$. Part \ref{lower Assouad def for equihom} follows similarly. \end{proof} In fact, for equi-homogeneous sets whose box-counting dimensions are equal and attained (i.e. \eqref{box-counting growth bounds} holds with $\varepsilon=0$) we can find the Assouad and lower Assouad dimensions in terms of the more elementary box-counting dimensions, which do not account for variations in local detail. \begin{theorem}\label{theorem - equi-homogeneous Assouad equal to box} If a totally bounded set $F\subset X$ is equi-homogeneous, $F$ attains both its upper and lower box-counting dimensions, and $\dim_{\rm LB}F=\dim_{\rm B}F$, then \begin{align}\label{attainment theorem equality} \dim_{\rm A}F=\dim_{\rm B}F=\dim_{\rm LB}F=\dim_{\rm LA}F. \end{align} \end{theorem} \begin{proof} See Olson, Robinson \& Sharples \cite{ORS} for the first equality of \eqref{attainment theorem equality}. A simple adaptation of their argument yields the last equality. \end{proof} Consequently, equi-homogeneous sets with equal, attained box-counting dimensions satisfy the strong dimensional equality $\dim_{\rm LA}F=\dim_{\rm A} F$.
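The attainment condition can be made concrete with a quick numerical sketch (this is our own illustration, not part of the development above; we assume the middle-thirds Cantor set as the example, with the convention that the minimal cover at scale $\delta=3^{-k}$ is given by the $2^k$ level-$k$ construction intervals):

```python
import math

# Middle-thirds Cantor set: at scale delta = 3^{-k} the cover by the
# 2^k level-k construction intervals gives N(F, delta) = 2^k, so the
# bound N(F, delta) <= C (1/delta)^s is attained with C = 1 and
# s = log 2 / log 3 (no epsilon-loss in the exponent).
s = math.log(2) / math.log(3)
for k in range(1, 20):
    delta = 3.0 ** (-k)
    N = 2 ** k
    # N(F, delta) * delta^s should equal 1 for every k
    assert abs(N * delta ** s - 1.0) < 1e-9
```

The constant ratio $N\,\delta^{s}=1$ for every $k$ is exactly the $\varepsilon=0$ attainment of \eqref{box-counting growth bounds} in this example.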
In general, self-similar sets can have distinct Assouad and lower Assouad dimensions (see Fraser \cite{Fraser2014}), although these dimensions coincide for self-similar sets that satisfy the Moran open-set condition (see Corollary 2.11 of Fraser \cite{Fraser2013}). In Theorem \ref{theorem - homogeneous case} we show that sets in this large class are also equi-homogeneous; however, we will now show that equi-homogeneity is not equivalent to the condition $\dim_{\rm A}F=\dim_{\rm LA}F$. The Assouad and lower Assouad dimensions encode the scaling of the two distinct quantities $\sup_{x\in F} {\cal N}(B_\delta(x)\cap F,\rho)$, and $\inf_{x\in F} {\cal N}(B_\delta(x)\cap F,\rho)$, which can scale very differently as we have seen in the example of Proposition \ref{proposition - different scaling}. However, in light of Lemma \ref{lemma - Assouad def for equihom}, for equi-homogeneous sets the Assouad and lower Assouad dimensions encode the scaling of the single quantity ${\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right)$ in a similar way to the box-counting dimension scaling law \eqref{box-counting growth bounds}. Even though the minimal and maximal local covers scale identically for an equi-homogeneous set, the Assouad and the lower Assouad dimensions can be distinct. Rather than being the consequence of the different scaling of the minimal and maximal local cover (as in the example of Proposition \ref{proposition - different scaling}), any difference in these dimensions is due to an oscillation in the growth of the covers ${\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right)$. If this oscillation is sufficiently large then the upper and lower bounds on the growth of the covers (provided respectively by the Assouad and lower Assouad dimensions) are distinct. This is analogous to how distinct box-counting dimensions are due to a large oscillation in the growth of ${\cal N}\left(F,\delta\right)$ (see Robinson \& Sharples \cite{RobinsonSharples13RAEX}).
We now give an example of the oscillating growth of local covers, and the resulting difference in the Assouad dimensions. As shown in the introduction, the generalised Cantor sets that were constructed in Section 3 of Olson, Robinson \& Sharples \cite{ORS} may also be obtained as the pullback attractors of the non-autonomous iterated function systems given by \eqref{gencant}. These systems satisfy the hypotheses of Theorem \ref{theorem - homogeneous case} (with Moran open sets $U^k=(0,1)$ for all $k$) and are therefore equi-homogeneous (we give a direct proof that generalised Cantor sets are equi-homogeneous in Olson, Robinson \& Sharples \cite{ORS}). At the same time, these sets can have distinct Assouad dimensions. In fact, even the Assouad and the upper box-counting dimension can differ for these sets. We give a simple example here to demonstrate this. \begin{proposition}\label{proposition - equihom distinct dimension} Let $F$ be the generalised Cantor set with contraction ratios \begin{align*} c_k= \begin{cases} 1/3 & \text{for}\quad k\in[2^{2(n-1)},2^{2n-1})\\ 1/9 & \text{for}\quad k\in [2^{2n-1},2^{2n}). \end{cases} \end{align*} Then \[ \dim_{\rm B}F= \frac{3}{4} \frac{\log 2}{\log 3} < \frac{\log 2}{\log 3} \leq\dim_{\rm A}F. \] \end{proposition} \begin{proof} Let $\pi_k=c_1\cdots c_k$. By results from Olson, Robinson \& Sharples \cite{ORS}, see also Hua, Rao, Wen \& Wu~\cite{Hua2000}, the upper box-counting dimension of $F$ is given by \[ \dim_{\rm B}F=\limsup_{k\to\infty} s_k \wwords{where} s_k=\frac{k\log2}{\log(1/\pi_k)}. \] Since $s_k$ is non-decreasing for $k\in[2^{2(n-1)},2^{2n-1})$, where $c_k=1/3$ (note that $s_k\leq\log2/\log3$ for every $k$), and non-increasing for $k\in[2^{2n-1},2^{2n})$, where $c_k=1/9$, the local maxima of $s_k$ occur at the indices $k_n=2^{2n-1}-1$, so we may take the limit supremum along this subsequence. The calculation \begin{align*} \log (1/\pi_{k_n}) &=\sum_{j=1}^{n-1} \Big(2^{2(j-1)}\log 3+2^{2j-1}\log 9\Big)+2^{2(n-1)}\log 3\\ &=\frac{5}{3}(4^{n-1}-1)\log 3+4^{n-1}\log 3 =\frac{1}{3}\big(8\cdot 4^{n-1}-5\big)\log 3 \end{align*} therefore implies \[ \dim_{\rm B}F=\lim_{n\to\infty} \frac{(2\cdot 4^{n-1}-1)\log 2}{(1/3)(8\cdot 4^{n-1}-5)\log 3} =\frac{3}{4} \frac{\log 2}{\log 3}. \] On the other hand, $\dim_{\rm A}F\ge {\log 2/\log 3}$. Suppose, for contradiction, that $\dim_{\rm A}F<s<\log 2/\log 3$. Then there would exist a constant $C$ such that \[ {\cal N}(B_\delta(x)\cap F,\rho/2)\le C(\delta/\rho)^s \words{for all} 0<\rho<\delta. \] Choose $\delta_n=\pi_{2^{2(n-1)}}$ and $\rho_n=\pi_{2^{2n-1}}$. Then \[ \delta_n/\rho_n=3^{2^{2(n-1)}+1} \wwords{and} {\cal N}(B_{\delta_n}(x)\cap F,\rho_n/2) \ge 2^{2^{2(n-1)}} \] would imply that \[ 1 \le C 2^{-2^{2(n-1)}} (3^{2^{2(n-1)}+1})^s = 3^sC (3^{2^{2(n-1)}})^{s-\log2/\log3} \to 0 \words{as} n\to\infty \] which is a contradiction. \end{proof} We note that the above set $F$ satisfies the Moran structure conditions of Wen \cite{Wen2001} with $c_*=\inf\{\, c_k : k\in{\mathbb N}\,\}=1/9>0$. Therefore an additional assumption, such as~\eqref{hippo}, is needed in order to complete the proof of Li \cite{Li2013} and conclude that the Assouad and upper box-counting dimensions are equal. We now give a simple example of a set that satisfies $\dim_{\rm A}F=\dim_{\rm LA}F$ but that is not equi-homogeneous. Taken together these two examples demonstrate that the notion of equi-homogeneity is entirely distinct from the coincidence of these two dimensions. \begin{proposition}\label{proposition - equal dimension not equih} The set $F=\{0,1\}\cup \{\, 2^{-n} : n\in \mathbb{N}\,\}$ satisfies $\dim_{\rm A}F=\dim_{\rm LA}F=0$ but $F$ is not equi-homogeneous. \end{proposition} We remark that the equality $\dim_{\rm A}F=0$ is stated as Fact 4.3 in~Olson \cite{EJO} without proof.
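The bookkeeping for the products $\pi_k$ in Proposition \ref{proposition - equihom distinct dimension} can also be explored with a short numerical sketch (an informal check, not part of the proof): along the subsequence $k=4^n-1$ the ratio $s_k$ takes the constant value $\frac{3}{5}\frac{\log 2}{\log 3}$, while $s_k$ is strictly larger inside the blocks on which $c_k=1/3$, exhibiting the oscillation discussed above.

```python
import math

# Contraction ratios from the proposition: c_k = 1/3 for
# k in [4^{n-1}, 2*4^{n-1}) and c_k = 1/9 for k in [2*4^{n-1}, 4^n).
def c(k):
    n = 1
    while k >= 4 ** n:
        n += 1
    return 1 / 3 if k < 2 * 4 ** (n - 1) else 1 / 9

def s(k):
    # s_k = k log 2 / log(1/pi_k) with pi_k = c_1 * ... * c_k
    D = sum(math.log(1 / c(j)) for j in range(1, k + 1))
    return k * math.log(2) / D

# Along k = 4^n - 1 the value is exactly (3/5) log2/log3 ...
for n in range(1, 6):
    assert abs(s(4 ** n - 1) - 0.6 * math.log(2) / math.log(3)) < 1e-12
# ... while s_k is strictly larger at the end of a 1/3-block,
# e.g. s_7 > s_15, so s_k genuinely oscillates.
assert s(7) > s(4 ** 2 - 1)
```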
We also remark that the logarithmic terms that occur in the course of the proof can also be used to show that $F$ does not `attain' its box-counting dimension (i.e. that \eqref{box-counting growth bounds} does not hold with $\varepsilon=0$). \begin{proof} Let $\delta=1/4$; then \[ B_\delta(1)\cap F= \{1\} \quad\hbox{implies that}\quad \inf_{x\in F} {\cal N}(B_\delta(x)\cap F,\rho) = 1 \] for every $\rho>0$. On the other hand, for $0<\rho<1/4$, let $K$ be chosen so that \[ 2^{-K-1}\le \rho< 2^{-K}. \] Then $K\ge2$ and \[ B_\delta(0)\cap F \supseteq \{\, 2^{-n} : n=2,\ldots, K\,\}. \] Moreover $2^{-n+1}-2^{-n}=2^{-n}\ge 2^{-K}>\rho$ for $n\le K$ implies that at least one set of diameter $\rho$ is required to cover each of the $K-1$ points above. Therefore \[ \sup_{x\in F} {\cal N}(B_\delta(x)\cap F,\rho) \ge K-1 \ge \frac{\log (1/\rho)}{\log 2} -2. \] This shows that there is no value of $M$ independent of $\rho$ that could appear in Definition \ref{equihom} for this set, and so $F$ is not equi-homogeneous. Clearly $\dim_{\rm LA}F=0$. To show that $\dim_{\rm A}F=0$ let $x\in [0,1]$ and $0<\rho<\delta<1/4$. Define \[ G=\{\, 2^{-n} : \max(0,x-\delta) < 2^{-n}\le \rho\,\} \] and \[ H=\{\, 2^{-n} : \max(\rho,x-\delta) < 2^{-n}< \min(x+\delta,1)\,\}. \] Then $B_\delta(x)\cap F\subseteq \{0,1\}\cup G\cup H.$ Now depending on $\rho$, $x$, and $\delta$ it may happen that either or both of the sets $H$ and $G$ are empty. As covering an empty set is trivial, we need only consider the cases when these sets are non-empty. If $G\ne\emptyset$ then $x-\delta<\rho$, and it follows that \begin{equation}\label{gest} {\cal N}(G,\rho)\le \frac{\rho-\max(0,x-\delta)}{\rho} +1\le 2. \end{equation} Similarly if $H\ne\emptyset$ then \[ {\cal N}(H,\rho)\le \frac{1}{\log 2} \log\bigg\{\frac{\min(x+\delta,1)}{\max(\rho,x-\delta)}\bigg\} +1. \] If $x+\delta\ge 1$ then $x-\delta\ge 1-2\delta\ge 1/2.$ Thus ${\cal N}(H,\rho)\le 2$. If $x-\delta\le \rho$ then $x+\delta\le \rho+2\delta<3\delta<1$.
Thus ${\cal N}(H,\rho)\le (\log 2)^{-1}\log(3\delta/\rho)+1$. Otherwise, $\rho+\delta<x<1-\delta$. On this interval $ x\mapsto \log\big\{(x+\delta)/(x-\delta)\big\} $ is a decreasing function. Therefore, in general, \begin{equation}\label{hest} {\cal N}(H,\rho)\le 2\log(\delta/\rho)+3. \end{equation} Combining \eqref{gest} with \eqref{hest} we obtain \[ {\cal N}(B_\delta(x)\cap F,\rho)\le 2\log(\delta/\rho)+7. \] For every $s>0$ there exists $C>0$ such that \[ 2\log(\delta/\rho)+7\le C(\delta/\rho)^s \qquad\hbox{for every}\qquad 0<\rho<\delta<1/4, \] and it follows from the remarks after Definition \ref{definition - lower Assouad dimension} that $\dim_{\rm A}F=0$. \end{proof} We finish this section by showing that Ahlfors-David regularity implies equi-homogeneity. Along the way we will show that Ahlfors-David regularity implies equality of the Assouad and lower Assouad dimensions. \begin{theorem}\label{theorem - ahlfors implies equih} Let $F\subset X$ be a bounded set such that either $F$ is totally bounded or $X$ is a locally totally bounded space. If $F \subset X$ is Ahlfors-David $s$-regular then $F$ is equi-homogeneous and $\dim_{\rm LA}F=\dim_{\rm A}F=s$. \end{theorem} \begin{proof} As $F$ is Ahlfors-David $s$-regular there exists a constant $C>0$ such that \begin{align}\label{F Ahlfors-David} C^{-1} \delta^{s} \leq \mathcal{H}^{s}\left(B_{\delta}\left(x\right)\cap F\right) &\leq C \delta^{s} & 0<\delta<\diameter F. \end{align} Let $\delta_{0}=\diameter\left(F\right) / 2 $, fix $x,y\in F$ and $\delta,\rho$ with $0<\rho<\delta<\delta_{0}$. Observe that $N:={\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right)<\infty$ as either $F$ is totally bounded or $X$ is locally totally bounded so certainly $B_{\delta}\left(x\right)\cap F$ is totally bounded. Let $x_{i} \in B_{\delta}\left(x\right)\cap F$ for $i=1,\ldots, N$ be the centres of $\rho$-balls that cover $B_{\delta}\left(x\right)\cap F$.
Clearly $\bigcup_{i=1}^{N} B_{\rho}\left(x_{i}\right) \supset B_{\delta}\left(x\right) \cap F$ so \begin{align} \mathcal{H}^{s}\left(\bigcup_{i=1}^{N} B_{\rho}\left(x_{i}\right)\cap F \right) &\geq \mathcal{H}^{s}\left(B_{\delta}\left(x\right)\cap F\right) \geq C^{-1}\delta^{s}\label{Ahlfors first inequality} \intertext{from \eqref{F Ahlfors-David}, while} \mathcal{H}^{s}\left(\bigcup_{i=1}^{N} B_{\rho}\left(x_{i}\right)\cap F \right) &\leq \sum_{i=1}^{N} \mathcal{H}^{s}\left(B_{\rho}\left(x_{i}\right)\cap F\right) \leq \sum_{i=1}^{N} C\rho^{s} = NC\rho^{s}. \label{Ahlfors second inequality} \end{align} Combining \eqref{Ahlfors first inequality} and \eqref{Ahlfors second inequality} we obtain \begin{equation}\label{Ahlfors theorem lower bound} {\cal N}\left(B_{\delta}\left(x\right)\cap F,\rho\right) = N \geq C^{-2}\left(\delta/\rho\right)^{s} \end{equation} for all $\rho,\delta$ with $0<\rho<\delta\leq\delta_{0}$ and all $x\in F$. It follows that $\dim_{\rm LA} F \geq s$ (see the remarks after Definition \ref{definition - lower Assouad dimension}). Next, observe that $P:= {\cal P} \left(B_{\delta}\left(y\right)\cap F,\rho\right)<\infty$ and let $y_{i}\in B_{\delta}\left(y\right)\cap F$ for $i=1,\ldots, P$ be the centres of disjoint $\rho$-balls. As $B_{\rho}\left(y_{i}\right)\subset B_{2\delta}\left(y\right)$ for each $i$ it follows that $\bigcup_{i=1}^{P} B_{\rho}\left(y_{i}\right)\cap F \subset B_{2\delta}\left(y\right) \cap F$ so \begin{align} \mathcal{H}^{s}\left(\bigcup_{i=1}^{P} B_{\rho}\left(y_{i}\right)\cap F\right) \leq \mathcal{H}^{s}\left(B_{2\delta}\left(y\right)\cap F\right) \leq C2^{s}\delta^{s} \label{Ahlfors third inequality} \intertext{from \eqref{F Ahlfors-David}.
Further, as the $B_{\rho}\left(y_{i}\right)$ are disjoint it follows that} \mathcal{H}^{s}\left(\bigcup_{i=1}^{P} B_{\rho}\left(y_{i}\right)\cap F\right) = \sum_{i=1}^{P} \mathcal{H}^{s}\left(B_{\rho}\left(y_{i}\right)\cap F\right) \geq \sum_{i=1}^{P} C^{-1}\rho^{s} = PC^{-1}\rho^{s}.\label{Ahlfors fourth inequality} \end{align} Combining \eqref{Ahlfors third inequality} and \eqref{Ahlfors fourth inequality} with the geometric inequalities \eqref{geometric inequalities} we obtain \begin{equation}\label{Ahlfors theorem upper bound} {\cal N} \left(B_{\delta}\left(y\right)\cap F,\rho\right) \leq {\cal P} \left(B_{\delta}\left(y\right)\cap F,\rho\right) = P \leq C^{2}2^{s} \left(\delta/\rho\right)^{s} \end{equation} for all $\rho,\delta$ with $0<\rho<\delta\leq \delta_{0}$ and all $y\in F$. It follows that $\dim_{\rm A}F \leq s$ (see the remarks after Definition \ref{definition - lower Assouad dimension}). As we now have $s \leq \dim_{\rm LA}F\leq \dim_{\rm A}F \leq s$ we must have equality throughout. Finally, it follows from \eqref{Ahlfors theorem lower bound} and \eqref{Ahlfors theorem upper bound} that \begin{align*} {\cal N} \left(B_{\delta}\left(y\right)\cap F,\rho\right) \leq C^{2}2^{s}\left(\delta/\rho\right)^{s} \leq C^{4}2^{s} {\cal N} \left(B_{\delta}\left(x\right)\cap F,\rho\right) \end{align*} for all $\rho,\delta$ satisfying $0<\rho<\delta\leq \delta_{0}$ and arbitrary $x,y\in F$. Taking the supremum over $y\in F$ and the infimum over $x\in F$ we conclude \begin{align*} \sup_{x\in F}{\cal N} \left(B_{\delta}\left(x\right)\cap F,\rho\right) \leq C^{4}2^{s} \inf_{x\in F} {\cal N} \left(B_{\delta}\left(x\right)\cap F,\rho\right) \end{align*} for all $\rho,\delta$ satisfying $0<\rho<\delta\leq \delta_{0}$, so $F$ is equi-homogeneous by Lemma \ref{lemma - equiextension}.
\end{proof} \section{Equi-homogeneity and dynamical systems} We now demonstrate that the notion of equi-homogeneity is not overly restrictive: it is enjoyed by all self-similar sets that satisfy the Moran open-set condition, generalised Cantor sets, and the pullback attractors for a certain class of non-autonomous iterated function systems that satisfy the (generalised) Moran open-set condition. \subsection{Autonomous systems}\label{IFS1} We will begin with self-similar sets, which are a much studied and canonical class of fractal sets. Classical (`autonomous') iterated function systems are precisely the non-autonomous iterated function systems that satisfy ${\cal I}_k={\cal I}$ for each $k\in{\mathbb N}$. If we define the set function ${\cal T}$ and its iterates as \[ {\cal T}(B)=\bigcup_{i\in{\cal I}} f_i(B) \wwords{and} {\cal T}^{n+1}(B)={\cal T}\circ {\cal T}^{n}(B) \] where ${\cal T}^{0}(B)=B$ then it is well known (see, for example, Falconer \cite{BkFalconer14}) that if the $f_{i}$ are contractions then there exists a unique non-empty compact set $F$ such that \begin{equation} F={\mathcal T}(F) \label{attract} \end{equation} called the attractor of the iterated function system. Further, the attractor satisfies $\rho_H({\cal T}^l(B),F)\to 0$ as $l\to\infty$ for any bounded set $B\subset{\mathbb R}^d$. In this case, defining $F^k=F$ for all $k\in{\mathbb N}_0$ and noting that ${\cal T}^l={\cal S}^{0,l}$ yields a collection of compact sets satisfying Definition \ref{definition - pullback attractor}, so the collection $\set{F^{k}}$ is a pullback attractor. Conversely, as pullback attractors are unique (which we prove in Theorem \ref{unique}) it follows that for classical iterated function systems the pullback attractor can be identified with the invariant set. A set $F$ is said to be self-similar if it is the attractor of a classical system of contracting similarities.
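For a concrete illustration of this iteration (our own sketch, using the standard example of the two similarities $f_1(x)=x/3$ and $f_2(x)=x/3+2/3$, whose attractor is the middle-thirds Cantor set), the following code iterates ${\cal T}$ on $B=[0,1]$ and checks that ${\cal T}^n(B)$ consists of $2^n$ intervals of length $3^{-n}$; since $F$ meets every such interval, $\rho_H({\cal T}^n(B),F)\le 3^{-n}\to 0$.

```python
# Iterating T(B) = f1(B) ∪ f2(B) for the similarities f1(x) = x/3 and
# f2(x) = x/3 + 2/3 (middle-thirds Cantor set), starting from B = [0,1].
# A set is represented as a list of closed intervals (a, b).
def T(intervals):
    out = []
    for a, b in intervals:
        out.append((a / 3, b / 3))                  # image under f1
        out.append((a / 3 + 2 / 3, b / 3 + 2 / 3))  # image under f2
    return out

B = [(0.0, 1.0)]
for n in range(1, 10):
    B = T(B)
    # T^n([0,1]) is 2^n intervals of length 3^{-n}, each meeting the
    # attractor F, which gives the geometric convergence rate above.
    assert len(B) == 2 ** n
    assert all(abs((b - a) - 3.0 ** (-n)) < 1e-12 for a, b in B)
```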
Reasonable self-similar sets result when we impose some separation properties on the iterated function system; see, for example, Falconer \cite{BkFalconer14} p.~139 or Hutchinson \cite{Hutch}. The simplest such property is the Moran open-set condition (see Moran \cite{Moran}): there exists an open set $U$ such that $F\subset\overline U$, $f_i(U)\subseteq U$, and \[ f_i(U)\cap f_j(U)=\emptyset\qquad\mbox{when}\qquad i\neq j. \] If a classical (`autonomous') iterated function system satisfies the Moran open-set condition, then it satisfies the generalised Moran open-set condition of Definition \ref{definition - generalised MOSC} with $U^k=U$, ${\cal I}_k={\cal I}$ and $F^k=F$ for every $k\in{\mathbb N}_0$. Thus, this definition of the generalised Moran open-set condition reduces to the classical one when the functions are the same at every step of the iteration, and hence there is no ambiguity in referring to the generalised Definition \ref{definition - generalised MOSC} simply as the `Moran open-set condition'. These self-similar sets are equi-homogeneous, which is the content of Theorem \ref{theorem - self-similar} from the Introduction. \mainzero* \begin{proof} Self-similar sets that satisfy the open-set condition are Ahlfors-David regular (Theorem 1(i), Section 5.3 of Hutchinson \cite{Hutch}) so by Theorem \ref{theorem - ahlfors implies equih} they are equi-homogeneous. \end{proof} In fact, the open set condition in Theorem \ref{theorem - self-similar} may be relaxed to allow for the images of the attractor $F\subset {\mathbb R}^{d}$ to overlap in a limited number of ways. This `weak separation' condition, made precise in Fraser et al. \cite{Fraser2014}, is sufficient for $F$ to be Ahlfors-David regular provided that the attractor is not contained in a $(d-1)$-dimensional hyperplane (Theorem 2.1 of Fraser et al. \cite{Fraser2014}).
\subsection{Non-autonomous systems}\label{non-auto-sec} We now consider pullback attractors of non-autonomous iterated function systems. Using the notation given in the introduction we consider a collection of index sets ${\cal I}_k\subset{\mathbb N}$ with ${\rm card}\left({\cal I}_{k}\right)<\infty$ for all $k\in{\mathbb N}$. We also introduce some additional notation: define \[ {\cal I}^0=\bigcup_{n=1}^\infty {\cal I}_1\times\cdots\times{\cal I}_n, \] for $\alpha=(i_1,\ldots,i_n)\in{\cal I}^0$ let \[ f_\alpha=f_{i_1}\circ \cdots\circ f_{i_n} \wwords{and} \sigma_\alpha=\sigma_{i_1}\cdots \sigma_{i_n}, \] and, if $n\ge 2$, we denote the truncation $(i_1,\ldots,i_{n-1})$ by $\alpha'$. Before we give conditions ensuring the existence of a pullback attractor, we first show that the definition is sufficient to ensure uniqueness. \begin{theorem}\label{unique} The pullback attractor of a non-autonomous iterated function system, if it exists, is unique. \end{theorem} \begin{proof} Let $\set{F^{k}}$ and $\set{G^{k}}$ be pullback attractors of a non-autonomous iterated function system. Given $k\in{\mathbb N}_0$ fixed, consider any point $x_k\in G^k$. By property \ref{pullback attractor property 2} of pullback attractors there is $x_{k+1}\in G^{k+1}$ and $i_{k+1}\in{\cal I}_{k+1}$ such that $x_k=f_{i_{k+1}}(x_{k+1})$. By induction define $x_j\in G^j$ and $i_j\in{\cal I}_j$ such that $x_j=f_{i_{j+1}}(x_{j+1})$ for all $j>k$. Since $x_j\in G^j$ and the $\{G^j\}$ are uniformly bounded, the set $B=\{\, x_j : j> k\,\}$ is bounded. Moreover, $x_k\in{\cal S}^{k,l}(B)$ for every $l>k$. Consequently \[ \rho_H(\{x_k\}, F^k)\le \rho_H({\cal S}^{k,l}(B), F^k)\to 0 \words{as} l\to\infty \] by property \ref{pullback attractor property 3} of pullback attractors. Since $F^k$ is closed, it follows that $x_k\in F^k$. Therefore $G^k\subseteq F^k$. Switching the roles of $F^k$ and $G^k$ yields $F^k\subseteq G^k$, and hence $F^k=G^k$.
\end{proof} We now show under natural conditions that the pullback attractor of a non-autonomous iterated function system exists. For each $i\in{\mathbb N}$ let $b_i$ be the unique fixed point of $f_i$, so that $f_i(b_i)=b_i$. \begin{theorem}\label{exist} If \[ M=\sup\{\,|b_i|:i\in{\mathbb N}\,\}<\infty \wwords{and} \sigma^*=\sup\{\, \sigma_i:i\in{\mathbb N}\}<1, \] then the pullback attractor exists. \end{theorem} \begin{proof} We first find a compact set $K$ such that for any bounded set $B\subset{\mathbb R}^d$ and any $k\in{\mathbb N}$ there exists a corresponding value of $l(k,B)\ge k$ such that \begin{equation}\label{cab} {\cal S}^{k,l}(B)\subseteq K \wwords{for all} l\ge l(k,B). \end{equation} Let $R= 2(1+\sigma^*) M/(1-\sigma^*)$ and $K$ be the closed ball of radius $R$ centred at the origin. Consider a bounded set $B\subset{\mathbb R}^d$ with $\abs{x}\le L$ for all $x\in B$. If $\abs{x} \ge R$, then \begin{align*} \abs{f_i(x)}&\le \abs{f_i(x)-f_i(b_i)}+\abs{b_i} \le \sigma^* \abs{x-b_i}+\abs{b_i}\\ &\le \sigma^* \abs{x} + (1+\sigma^*) \abs{b_i} \le ((1+\sigma^*)/2) \abs{x}. \end{align*}% It follows that \eqref{cab} holds for $l(k,B)=k+\lceil\log(L/R)/\log(2/(1+\sigma^*))\rceil$. Next we show that ${\cal S}^{k}$ is a contraction with respect to the Hausdorff metric. Let $A,B\subset{\mathbb R}^d$ be compact. For each $a\in A$ choose $\pi(a)\in B$ such that \[ \abs{a-\pi(a)}=\min\{\, \abs{a-b} : b\in B\,\}. \] Any $x\in{\cal S}^{k}(A)$ may be written as $x=f_{i}(a)$ for some $a\in A$ and $i\in{\cal I}_k$. Consequently \[ \rho_H(\{x\},{\cal S}^k(B))\le \abs{f_{i}(a)-f_{i}(\pi(a))} \le \sigma_i \abs{a-\pi(a)} \le \sigma^* \rho_H(A,B) \] implies \begin{equation}\label{lcontract} \sup_{x\in {\cal S}^k(A)}\rho_H(\{x\},{\cal S}^k(B))\le \sigma^* \rho_H(A,B). \end{equation} Interchanging the roles of $A$ and $B$ yields \[ \rho_H({\cal S}^k(A),{\cal S}^k(B))\le \sigma^* \rho_H(A,B).
\] Given $k$ fixed, define $K_m={\cal S}^{k,k+m}(K)$ for $m\in{\mathbb N}$ to obtain a sequence of compact subsets of ${\mathbb R}^d$. If $m\le n$ then \begin{align*} \rho_H(K_m,K_n)&=\rho_H({\cal S}^{k,k+m}(K),{\cal S}^{k,k+m}\circ {\cal S}^{k+m,k+n}(K))\\ &\le (\sigma^*)^m \rho_H(K,{\cal S}^{k+m,k+n}(K)) \le (\sigma^*)^m 2R, \end{align*} which shows that $K_m$ is a Cauchy sequence. The completeness of the Hausdorff metric on the space of all compact subsets of ${\mathbb R}^d$ then yields a compact limit set, which we call $F^k$. It remains to show that $ F^k$ satisfies the properties required of the pullback attractor. Clearly $ F^k$ is compact. Moreover, since $K_m\subseteq K$ for all $m$ we have $F^k\subseteq K$, and so $ F^k$ is uniformly bounded. To show the invariance property \ref{pullback attractor property 2} note that \begin{align*} \rho_H( F^k&,{\cal S}^k( F^{k+1}))\\ &\le \rho_H(F^k,{\cal S}^{k,k+m}(K)) + \rho_H({\cal S}^k\circ{\cal S}^{k+1,k+m}(K),{\cal S}^k( F^{k+1}))\\ &\le \rho_H(F^k,{\cal S}^{k,k+m}(K)) + \sigma^* \rho_H({\cal S}^{k+1,k+m}(K), F^{k+1})\\ &\to 0 \words{as} m\to\infty. \end{align*} Given $\varepsilon>0$ choose $m$ so large that $(\sigma^*)^m 2R<\varepsilon$. Now choose $l\ge k+m$ so large that ${\cal S}^{k+m,l}(B)\subseteq K$. It follows that \[ \rho_H({\cal S}^{k,l}(B), F^k) \le \rho_H({\cal S}^{k,k+m}(K), F^k)=\rho_H(K_m, F^k) \le (\sigma^*)^m 2R <\varepsilon, \] and therefore property \ref{pullback attractor property 3} holds. \end{proof} We are now ready to formulate conditions on non-autonomous iterated function systems that guarantee that the resulting non-autonomous attractor is equi-homogeneous. We first prove that if the Moran open-set condition is satisfied then the pullback attractor is contained in the closures of the corresponding open sets, which is essentially the non-autonomous version of Falconer \cite{BkFalconer85} p.~122.
\begin{lemma}\label{finubar} If a non-autonomous iterated function system satisfies the Moran open-set condition and the hypotheses of Theorem \ref{exist}, then $F^k\subseteq\overline{U^k}$. \end{lemma} \begin{proof} Let $L$ be a uniform bound on $U^k$ and $F^k$ for all $k\in{\mathbb N}$. Since ${\cal S}^{k,l}(U^l)\subseteq U^k$ and ${\cal S}^{k,l}(F^l)=F^k$ for $k\le l$ it follows that \begin{align*} \rho_H(F^k,U^k)& \le \rho_H({\cal S}^{k,l}(F^l), {\cal S}^{k,l}(U^l)) \le (\sigma^*)^{l-k} \rho_H(F^l,U^l) \le (\sigma^*)^{l-k} 2L. \end{align*} Taking $l\to\infty$ it follows that $\rho_H(F^k,U^k)=0$ and consequently $F^k\subseteq\overline{U^k}$. \end{proof} We now consider the case where the contractions $f_i$ are similarities with contraction ratio $\sigma_{i}$. We begin by proving Theorem \ref{theorem - homogeneous case} from the Introduction, which treats the simplest situation where all the contraction ratios are the same at each level $k$ of the iteration. Note that this class of non-autonomous iterated function systems includes those whose pullback attractors are the generalised Cantor sets constructed in Section 3 of Olson, Robinson \& Sharples \cite{ORS}. Intuitively, each step of the iteration corresponds to a different scale, and since all the maps contract in the same way at that scale, it is natural that the pullback attractor is equi-homogeneous. \mainone* \begin{proof} Since all the hypotheses are uniform in $k$ it is sufficient to show that $F^0$ is equi-homogeneous. Let \[ \pi_n={\textstyle\prod_{j=1}^{n}} c_j\wwords{and} \eta=\sup\{\,{\rm diam}(\overline {U^k}): k\in{\mathbb N}\,\}. \] For $\delta\le c_1 \eta$ there exists $n\ge 2$ such that $\pi_n\eta < \delta \le \pi_{n-1}\eta.$ Define \[ {\cal J}_n= {\cal I}_1\times\cdots\times{\cal I}_{n}.
\] From the open-set condition we have \begin{equation}\label{oscorm1} f_\alpha(U^n)\cap f_\beta(U^n)=\emptyset \words{for} \alpha,\beta\in {\cal J}_n \words{with} \alpha\ne\beta \end{equation} and by property 2 for pullback attractors, we obtain \begin{equation}\label{attractcorm1} {\textstyle \bigcup_{\alpha\in {\cal J}_n}} f_\alpha(F^n) ={\cal S}^{0,n}(F^n)=F^0. \end{equation} We now use \eqref{oscorm1} and \eqref{attractcorm1} to show that $F^0$ is equi-homogeneous. Let $x\in F^0$ be arbitrary. Then $x\in f_{\alpha}(F^n)$ for some $\alpha\in {\cal J}_n$ and consequently \[ {\rm diam}(f_{\alpha}(F^n)) = \pi_n {\rm diam}(F^n) \le \pi_n\eta<\delta, \] which implies that $f_\alpha(F^n)\subseteq B_\delta(x)$. It follows that \[ B_\delta(x)\cap F^0 =B_\delta(x)\cap \bigcup_{\beta\in {\cal J}_n} f_\beta(F^n) \supseteq B_\delta(x)\cap f_{\alpha}(F^n)=f_\alpha(F^n). \] Therefore \[ {\cal N}(B_\delta(x)\cap F^0,\rho) \ge {\cal N}(f_\alpha(F^n),\rho) = {\cal N}(F^n,\rho/\pi_n) \] implies that \begin{equation}\label{rhinfm1} \inf_{x\in F^0} {\cal N}(B_\delta(x)\cap F^0,\rho) \ge {\cal N}(F^n,\rho/\pi_n). \end{equation} Let $A_n=\{\,\alpha\in {\cal J}_n : B_\delta(x)\cap f_\alpha(\overline {U^n})\ne\emptyset\,\}.$ Then $\beta\in A_{n-1}$ implies that \[ f_\beta(\overline {U^{n-1}})\subseteq B_{\delta +{\rm diam} f_\beta(\overline {U^{n-1}})}(x) \subseteq B_{2\eta\pi_{n-1}}(x). \] Therefore by \eqref{oscorm1} we obtain \begin{align*} \lambda(B_{2\eta\pi_{n-1}}(x))&\ge\lambda\Big(\bigcup_{\beta\in A_{n-1}} f_\beta(U^{n-1})\Big)=\sum_{\beta\in A_{n-1}}\lambda\big(f_\beta(U^{n-1})\big)\\ &= {\rm card}(A_{n-1}) (\pi_{n-1})^d \lambda(U^{n-1}) \ge {\rm card}(A_{n-1})(\pi_{n-1})^d\epsilon_0. \end{align*}% Consequently \[ {\rm card}(A_n)\le {\rm card}({\cal I}_n){\rm card}(A_{n-1})\le M \] where $M=(2\eta)^d N \lambda(B_1(0))/\epsilon_0$ is independent of $n$. 
Therefore \begin{align*} {\cal N}(B_\delta(x)\cap F^0,\rho)&\le\sum_{\alpha\in A_n} {\cal N}\big(f_\alpha(F^n),\rho\big) =\sum_{\alpha\in A_n} {\cal N}(F^n,\rho/\pi_n)\\ &={\rm card}(A_n) {\cal N}(F^n,\rho/\pi_n)\le M {\cal N}(F^n,\rho/\pi_n). \end{align*} Taking the supremum over $x\in F^0$ and combining this with \eqref{rhinfm1} we obtain \[ \sup_{x\in F^0} {\cal N}(B_\delta(x)\cap F^0,\rho)\le M {\cal N}(F^n,\rho/\pi_n) \le M \inf_{x\in F^0} {\cal N}(B_\delta(x)\cap F^0,\rho) \] which completes the proof of the theorem. \end{proof} \begin{corollary}\label{main2good} If $\set{F^{k}}$ is the pullback attractor of a non-autonomous iterated function system of similarities satisfying the Moran open-set condition, $\sigma_i=c_k$ for all $i\in{\cal I}_k$ and all $k\in\mathbb{N}$, and $\inf\set{\sigma_{i} : i \in\mathbb{N}}=\sigma_{*}>0$, then each set $F^{k}$ is equi-homogeneous. \end{corollary} \begin{proof} Let $B$ be a bounded set such that $U^k\subseteq B$ for every $k\in{\mathbb N}_0$. The open-set condition then implies \begin{align*} \lambda(B) &\ge \lambda(U^{k-1}) \ge \lambda({\textstyle \bigcup_{i\in{\cal I}_k}}f_i(U^k))\\ &\ge {\textstyle \sum_{i\in{\cal I}_k}}\lambda(f_i(U^k)) \ge \sigma_*^d{\textstyle \sum_{i\in{\cal I}_k}}\lambda(U^k) \ge {\rm card}({\cal I}_k)\sigma_*^d \epsilon_0. \end{align*} Therefore ${\rm card}({\cal I}_k)\le N$ where $N=\lambda(B)/(\sigma_*^d \epsilon_0)$ and the result now follows from the application of Theorem \ref{theorem - homogeneous case}. \end{proof} We now examine the case when the $\sigma_i$ need not coincide for all the functions in each step of the iteration. This situation requires more refined analysis which we make easier by introducing the following notation. Denote \[ {\cal J}^{k}={\textstyle \bigcup_{m\in{\mathbb N}}} {\cal J}^{k,k+m} \wwords{where} {\cal J}^{k,n}={\cal I}_{k+1}\times\cdots\times {\cal I}_{n}. \] Given $\alpha\in{\cal J}^{k,n}$ denote $k_\alpha=k$ and $n_\alpha=n$.
Note that for $k_\alpha$ and $n_\alpha$ to be well defined, we assume, as we may, that ${\cal I}_k\cap{\cal I}_n=\emptyset$ for $k\ne n$. Our analysis will further make use of the set \begin{equation}\label{Jdk} {\cal J}^k_\delta=\{\,\alpha\in{\cal J}^k: \sigma_\alpha\eta < \delta \le \sigma_{\alpha'}\eta\,\}. \end{equation} As in the proof of Theorem \ref{theorem - homogeneous case} we have the following facts: if $\alpha,\beta\in{\cal J}^k_\delta$ then \begin{align*} f_\alpha(U^{n_\alpha})\cap f_\beta(U^{n_\beta})&=\emptyset \wwords{when} \alpha\ne\beta \intertext{and} {\textstyle \bigcup_{\alpha\in{\cal J}^k_\delta}} f_\alpha(F^{n_\alpha})&=F^k. \end{align*} We shall need, as before, an estimate on the cardinality of \[ A^k_{\delta}=\{\,\alpha\in{\cal J}^k_\delta : f_\alpha(\overline{U^{n_\alpha}}) \cap B_{\delta}(x) \ne \emptyset \,\}, \] which is provided by the following lemma. \begin{lemma} \label{Alemma} If a non-autonomous iterated function system of similarities satisfies the Moran open-set condition and $\inf\set{\sigma_{i} : i \in\mathbb{N}}=\sigma_{*}>0$ then \[ {\rm card}(A^k_{\delta})\le \kappa_0, \] where $\kappa_0$ is independent of $k$ and $\delta$. \end{lemma} \begin{proof} For $\alpha\in A^k_{\delta}$ we have $f_\alpha(\overline{U^{n_\alpha}})\subseteq B_{2\delta}(x).$ Therefore \begin{align*} \lambda(B_{2\delta}(x)) \ge \lambda\Big({\textstyle \bigcup_{\alpha\in A^k_{\delta}}} f_\alpha(U^{n_\alpha})\Big) &={\textstyle \sum_{\alpha\in A^k_{\delta}}}\lambda\big(f_\alpha(U^{n_\alpha})\big) \\ &\geq {\textstyle \sum_{\alpha\in A^k_{\delta}}}\sigma_{\alpha}^{d} \epsilon_{0}\\ &\geq {\textstyle \sum_{\alpha\in A^k_{\delta}}}\left(\sigma_{\alpha^{\prime}}\sigma_{*}\right)^{d} \epsilon_{0} \geq {\rm card}(A^k_{\delta}) (\delta\sigma_*/\eta)^d\epsilon_0 \end{align*} implies \[ {\rm card}(A^k_{\delta}) \le \lambda(B_1(0)) (2\eta/\sigma_*)^d/\epsilon_0 =:\kappa_0.
\] \end{proof} We also need a bound on the cardinality of ${\cal J}^k_\delta$ (as defined in \eqref{Jdk}). In order to obtain this bound we make the uniformity assumption that for some $s>0$ \begin{equation}\label{hippohere} {\textstyle \sum_{i\in {\cal I}_k}} \sigma_i^s=1 \wwords{for all} k\in{\mathbb N} \end{equation} (this was \eqref{hippo} in the Introduction) and consider a sequence of probability measures $\mu^k$ with support on $F^k$ defined by the relationships \[ \mu^k(f_i(B))= \mu^{k+1}(B) \frac{{\rm diam}(f_i(B))^s}{\sum_{j\in{\cal I}_{k+1}} {\rm diam}(f_j(B))^s} = \mu^{k+1}(B) \sigma_i^s \] where $k\in{\mathbb N}_0$, $i\in{\cal I}_{k+1}$ and $B$ is any Borel set. Note for $\alpha\in{\cal J}^{k,n}$ that \[ \mu^k(f_\alpha(F^n))=\mu^n(F^n)(\sigma_\alpha)^s= (\sigma_\alpha)^s. \] We are now ready to estimate ${\rm card}({\cal J}^k_\delta)$. \begin{lemma}\label{Jlemma} If $\set{F^{k}}$ is the pullback attractor of a non-autonomous iterated function system of similarities satisfying the Moran open-set condition, $\inf\set{\sigma_{i} : i \in\mathbb{N}}=\sigma_{*}>0$ and \eqref{hippohere} then \[ \kappa_1 \delta^{-s}\le {\rm card}({\cal J}^k_{\delta})\le \kappa_2 \delta^{-s}, \] where $\kappa_1$ and $\kappa_2$ are independent of $k$ and $\delta$. \end{lemma} \begin{proof} For the lower bound we estimate \begin{align*} 1=\mu^k(F^k) &= \mu^k \Big( {\textstyle \bigcup_{\alpha\in{\cal J}^k_{\delta}}} f_\alpha(F^{n_\alpha}) \Big) \le {\textstyle \sum_{\alpha\in{\cal J}^k_{\delta}}} \mu^k\big( f_\alpha(F^{n_\alpha}) \big)\\ &= {\textstyle \sum_{\alpha\in{\cal J}^k_{\delta}}} (\sigma_\alpha)^s < {\textstyle \sum_{\alpha\in{\cal J}^k_{\delta}}} (\delta/\eta)^s = {\rm card}({\cal J}^k_\delta) (\delta/\eta)^s. \end{align*} Therefore \[ {\rm card}({\cal J}^k_\delta) >\kappa_1\delta^{-s} \] where $\kappa_1=\eta^s$.
For the upper bound we will use Lemma \ref{Alemma} to count the non-empty intersections where \[ f_\alpha(F^{n_\alpha})\cap f_\beta(F^{n_\beta})\ne \emptyset \wwords{and} \beta\ne\alpha. \] We will do this inductively. Let $J_1={\cal J}^k_\delta$ and pick $\alpha_1\in J_1$. Define \[ J_{i+1}=\{\, \beta\in J_i : f_{\alpha_i}(F^{n_{\alpha_i}}) \cap f_\beta(F^{n_\beta})= \emptyset \,\}. \] Since $f_{\alpha_i}(F^{n_{\alpha_i}})\subseteq B_\delta(x)$ for some $x$ it follows from Lemma \ref{Alemma} that \[ {\rm card}(J_{i+1})\ge {\rm card}(J_i) - {\rm card}(A^k_\delta) \ge {\rm card}(J_i)-\kappa_0 \ge {\rm card}(J_1)-i\kappa_0. \] We can continue choosing $\alpha_{i+1}\in J_{i+1}$ until $i=i_0$ where \[ (i_0-1)\kappa_0< {\rm card}(J_1)\le i_0\kappa_0. \] By construction, it follows that \[ f_{\alpha_i}(F^{n_{\alpha_i}})\cap f_{\alpha_j}(F^{n_{\alpha_j}})=\emptyset \wwords{for} i\ne j. \] Therefore \begin{align*} 1=\mu^k(F^k) &\ge \mu^k\Big( {\textstyle\bigcup_{i=1}^{i_0}} f_{\alpha_i}(F^{n_{\alpha_i}}) \Big) = {\textstyle\sum_{i=1}^{i_0}} \mu^k\big( f_{\alpha_i}(F^{n_{\alpha_i}})\big)\\ &= {\textstyle\sum_{i=1}^{i_0}} (\sigma_{\alpha_i})^s \ge {\textstyle\sum_{i=1}^{i_0}} (\sigma_* \delta/\eta)^s \ge {\rm card}({\cal J}^k_\delta) (\sigma_*\delta/\eta)^s/\kappa_0. \end{align*}% It follows that \[ {\rm card}({\cal J}^k_\delta)\le \kappa_0 (\eta/\sigma_*)^s \delta^{-s}. \] Taking $\kappa_2=\kappa_0 (\eta/\sigma_*)^s$ finishes the proof. \end{proof} We are now ready to prove sufficient conditions for the equi-homogeneity of pullback attractors in the case when the contraction ratios need not coincide within each stage of the iteration. First we prove Theorem \ref{theorem - general} from the Introduction, which has a uniformity assumption on the contraction ratios. \mainthree* \begin{proof} We first estimate \[ \inf_{x\in F^0} {\cal N}(F^0\cap B_\delta(x),\rho). \] Let $x\in F^0$ be arbitrary.
There is an $\alpha\in {\cal J}^0_\delta$ such that $x\in f_\alpha(F^{n_\alpha})$ and consequently $F^0\cap B_\delta(x)\supseteq f_\alpha(F^{n_\alpha})$. Define \[ \tilde A^k_\delta=\{\, \alpha\in {\cal J}^k_\delta : f_\alpha(\overline{U^{n_\alpha}}) \cap B_{2\delta}(x)\ne \emptyset\,\}. \] Following the same proof as in Lemma \ref{Alemma} there exists $\tilde\kappa_0$, independent of $\delta$ and $k$, such that ${\rm card} (\tilde A^k_\delta)\le\tilde \kappa_0$. Following the same proof as in Lemma \ref{Jlemma} we can find a sequence $\gamma_i\in {\cal J}^{n_\alpha}_{\rho/\sigma_\alpha}$ up to $i=\tilde \imath_0$ where \[ (\tilde \imath_0 -1)\tilde\kappa_0 <{\rm card}({\cal J}^{n_\alpha}_{\rho/\sigma_\alpha}) \le \tilde \imath_0\tilde\kappa_0 \] such that $f_{\gamma_i}(F^{n_{\gamma_i}})\subseteq B_{\rho/\sigma_\alpha}(x_i)$ for some $x_i\in f_{\gamma_i}(F^{n_{\gamma_i}})$ and \[ B_{2\rho/\sigma_\alpha}(x_i)\cap f_{\gamma_j}(F^{n_{\gamma_j}})=\emptyset \wwords{for} i\ne j. \] In particular, we have found $x_i\in F^{n_\alpha}$ such that \[ B_{\rho/\sigma_\alpha}(x_i)\cap B_{\rho/\sigma_\alpha}(x_j)=\emptyset \wwords{for} i\ne j. \] It follows that \begin{align*} {\cal P}(F^0\cap &B_\delta(x),\rho) \ge {\cal P}(f_\alpha(F^{n_\alpha}),\rho) ={\cal P}(F^{n_\alpha},\rho/\sigma_\alpha) \ge \tilde \imath_0\\ &\ge {\rm card}({\cal J}^{n_\alpha}_{\rho/\sigma_\alpha})/\tilde \kappa_0 \ge (\kappa_1/\tilde\kappa_0) (\sigma_\alpha/\rho)^s \ge (\sigma_*^s/\tilde\kappa_0) (\delta/\rho)^s. \end{align*}% Therefore \[ \inf_{x\in F^0} {\cal N}(F^0\cap B_\delta(x),\rho)\ge \kappa_3 (\delta/\rho)^s \] where $\kappa_3=\sigma_*^s/\tilde \kappa_0$. We now estimate \[ \sup_{x\in F^0} {\cal N}(F^0\cap B_\delta(x),\rho). \] Let $x\in F^0$.
Applying Lemma~\ref{Jlemma} and Lemma~\ref{Alemma} we obtain \begin{align*} {\cal N}(F^0\cap B_\delta(x),\rho) &\le {\textstyle \sum_{\beta\in A^0_\delta}} {\cal N}(f_\beta(F^{n_\beta}),\rho) ={\textstyle \sum_{\beta\in A^0_\delta}} {\cal N}(F^{n_\beta},\rho/\sigma_\beta)\\ &={\textstyle \sum_{\beta\in A^0_\delta}} {\cal N}\Big({\textstyle \bigcup_{\gamma\in{\cal J}^{n_\beta}_{\rho/\sigma_\beta}}} f_\gamma(F^{n_\gamma}),\rho/\sigma_\beta\Big)\\ &\le {\textstyle \sum_{\beta\in A^0_\delta}} {\textstyle \sum_{\gamma\in{\cal J}^{n_\beta}_{\rho/\sigma_\beta}}} {\cal N}\big( F^{n_\gamma},\rho/(\sigma_\beta\sigma_\gamma)\big)\\ &\le {\textstyle \sum_{\beta\in A^0_\delta}} {\textstyle \sum_{\gamma\in{\cal J}^{n_\beta}_{\rho/\sigma_\beta}}} {\cal N}\big( F^{n_\gamma},\eta\big)\\ &\le {\textstyle \sum_{\beta\in A^0_\delta}} {\rm card}({\cal J}^{n_\beta}_{\rho/\sigma_\beta}) \le {\textstyle \sum_{\beta\in A^0_\delta}} \kappa_2 (\sigma_\beta/\rho)^s\\ &\le \kappa_0\kappa_2\eta^{-s} (\delta/\rho)^s. \end{align*} Taking $\kappa_4=\kappa_0\kappa_2\eta^{-s}$ and $M=\kappa_4/\kappa_3$ yields \[ \sup_{x\in F^0} {\cal N}(F^0\cap B_\delta(x),\rho) \le \kappa_4(\delta/\rho)^s \le M \inf_{x\in F^0} {\cal N}(F^0\cap B_\delta(x),\rho). \] We finish by noting that the above inequality also shows $\dim_{\rm A}F^k=s$. \end{proof} It is worth remarking that the proof of Theorem \ref{theorem - general}, in addition to proving that $F^k$ is equi-homogeneous, also shows that $F^k$ attains its upper and lower box-counting dimensions. Thus, Theorem \ref{theorem - equi-homogeneous Assouad equal to box} implies that $\dim_{\rm B}F^k=\dim_{\rm A}F^{k}$ for all $k$. For the final result in this section we note that the hypothesis \eqref{hippo} on $s$ in Theorem~\ref{theorem - general} can be weakened without changing the details of the proof, which is the content of Theorem \ref{theorem - general averaged} from the Introduction.
\mainfour* Note that if $\sup\{\,\sigma_i : i\in{\mathbb N}\,\}=\sigma^{*}<1$ then an $s$ that satisfies \eqref{ahippo} is unique. Indeed, suppose $s_0$ satisfies \eqref{ahippo} and $\delta\ne 0$. If $\delta>0$ then \[ {\textstyle\sum_{\alpha\in{\cal J}^{k,k+n}}} \sigma_\alpha^{s_0+\delta} \le (\sigma^*)^{\delta n} {\textstyle\sum_{\alpha\in{\cal J}^{k,k+n}}} \sigma_\alpha^{s_0} \le (\sigma^*)^{\delta n} L\to 0 \words{as} n\to\infty \] shows that the lower bound in \eqref{ahippo} could not hold for $s=s_0+\delta$. On the other hand, if $\delta<0$ then \[ {\textstyle\sum_{\alpha\in{\cal J}^{k,k+n}}} \sigma_\alpha^{s_0+\delta} \ge (\sigma^*)^{\delta n} {\textstyle\sum_{\alpha\in{\cal J}^{k,k+n}}} \sigma_\alpha^{s_0} \ge (\sigma^*)^{\delta n} L\to \infty \words{as} n\to\infty \] shows that the upper bound could not hold. We conclude that there is at most one value for $s$ such that \eqref{ahippo} holds. The bounds \eqref{ahippo} essentially state that \eqref{hippo} holds uniformly when averaged over long enough sequences of iterations. \section{Conclusion} We have demonstrated that the equi-homogeneous sets include a large class of attractors of iterated function systems, both autonomous and non-autonomous, in addition to the generalised Cantor sets and homogeneous Moran sets considered in Olson, Robinson and Sharples \cite{ORS}. Further, as equi-homogeneous sets have identical dimensional detail at all points at each fixed length scale, we have shown that the calculation of their Assouad dimensions can be much simplified. Finally, we have demonstrated that equi-homogeneity is independent of any previously defined notion of dimensional equivalence, establishing it as a novel and useful tool in the analysis of fractal sets. \bibliographystyle{abbrv}
\section{Introduction}\label{intro} This fairly detailed introduction aims at providing a comprehensive and nontechnical overview of the paper, including its asymptotic theory aspects, and a rough description of some of the rank-based test statistics to be derived. It is expected to be accessible to a broad readership. It should be sufficiently informative for the reader not interested in the technical aspects of asymptotic theory to proceed to Sections~\ref{gausscase} (Gaussian and pseudo-Gaussian tests) and~\ref{ranktests} (rank-based tests), where the proposed testing procedures are described, and for the reader mainly interested in asymptotics to decide whether he/she is interested in the treatment of a LAN family with curved parametrization developed in Sections~\ref{LANsection} and~\ref{paramtests}. \subsection{Hypothesis testing for principal components} Principal components are probably the most popular and widely used device in the traditional multivariate analysis toolkit. Introduced by \citet{P1901}, principal component analysis (PCA) was rediscovered by \citet{H33}, and ever since has been an essential part of daily statistical practice, basically in all domains of application. The general objective of PCA is to reduce the dimension of some observed $k$-dimensional random vector $\Xb$ while preserving most of its total variability. This is achieved by considering an adequate number $q$ of linear combinations of the form $\betab_1\pr\Xb,\ldots , \betab_q\pr\Xb$, where $\betab_j$, $j=1,\ldots, k$, are the eigenvectors associated with the eigenvalues $\lambda_1,\ldots, \lambda_k$ of $\Xb$'s covariance matrix $\Sigb_{\mathrm{cov}}$, ranked in decreasing order of magnitude.
Writing $\betab$ for the orthogonal $k\times k$ matrix with columns $\betab_1 ,\ldots, \betab_k$ and $ \Lamb_{\Sigb_{\mathrm{cov}}}$ for the diagonal matrix of eigenvalues $\lambda_1,\ldots, \lambda_k$, the matrix $\Sigb_{\mathrm{cov}}$ thus factorizes into $\Sigb_{\mathrm{cov}}=\betab\Lamb_{\Sigb_{\mathrm{cov}}} \betab\pr$. The random variable $\betab_j\pr\Xb$, with variance $\lambda_j$, is known as $\Xb$'s \textit{$j$th principal component}. Chapters on inference for eigenvectors and eigenvalues can be found in most textbooks on multivariate analysis, and mainly cover Gaussian maximum likelihood estimation (MLE) and the corresponding Wald and Gaussian likelihood ratio tests (LRT). The MLEs of $\betab$ and $\Lamb_{\Sigb_{\mathrm{cov}}}$ are the eigenvectors and eigenvalues of the empirical covariance matrix \[ \mathbf{S}^{(n)}:=\frac{1}{n}\sum_{i=1}^n \bigl(\mathbf{X}_i-\bar{\mathbf X}^{(n)}\bigr)\bigl(\mathbf{X}_i-\bar{\mathbf X}^{(n)}\bigr)\pr\qquad \mbox{with } \bar{\mathbf X}^{(n)}:=\frac{1}{n}\sum_{i=1}^n \mathbf{X}_i, \] while testing problems classically include testing for sphericity (equality of eigenvalues), testing for \textit{subsphericity} (equality among some given subset of eigenva\-lues---typically, the last $k-q$ ones), testing that the $\ell$th eigenvector has some specified direction, or that the proportion of variance accounted for by the last $k-q$ principal components is larger than some fixed proportion of the total variance: see, for instance, \citet{A03} or \citet{J86}. Gaussian MLEs and the corresponding tests (Wald or likelihood ratio tests---since they are asymptotically equivalent, in the sequel we indistinctly refer to LRTs) for covariance matrices and functions thereof are notoriously sensitive to violations of Gaussian assumptions; see \citet{MW80} for a classical discussion of this fact, or \citet{YTM05} for a more recent overview. 
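As a quick numerical illustration of the factorization $\Sigb_{\mathrm{cov}}=\betab\Lamb_{\Sigb_{\mathrm{cov}}}\betab\pr$ and of the Gaussian MLEs recalled above, the following sketch computes $\mathbf{S}^{(n)}$ and its spectral decomposition (the simulated data, dimensions and covariance are arbitrary choices, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3
X = rng.multivariate_normal(np.zeros(k), np.diag([4.0, 2.0, 1.0]), size=n)

# empirical covariance matrix S^(n), with the 1/n normalization used above
Xbar = X.mean(axis=0)
S = (X - Xbar).T @ (X - Xbar) / n

# Gaussian MLEs: eigenvalues and eigenvectors of S^(n), in decreasing order
lam, beta = np.linalg.eigh(S)
order = np.argsort(lam)[::-1]
lam, beta = lam[order], beta[:, order]

# sanity check: S^(n) factorizes as beta Lambda beta'
assert np.allclose(beta @ np.diag(lam) @ beta.T, S)
```

The same decomposition of the empirical covariance matrix underlies all the classical test statistics recalled below.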
The problems just mentioned about the eigenvectors and eigenvalues of $\Sigb_{\mathrm{cov}}$ are no exception to that rule, although belonging, in Muirhead and Waternaux's terminology, to the class of ``easily robustifiable'' ones. For such problems, \textit{adjusted} LRTs remaining valid under the whole class of elliptical distributions with finite fourth-order moments can be obtained via a correction factor involving estimated kurtosis coefficients [see \citet{SB87} for a general result on the ``easy'' cases, and \citet{HP08c} for the ``harder'' ones]. Such adjusted LRTs were obtained by Tyler (\citeyear{T81}, \citeyear{T83}) for eigenvector problems and by \citet{D77} for eigenvalues. Tyler actually constructs tests for the \textit{scatter matrix} $\Sigb$ characterizing the density contours [of the form $(\mathbf{x}-\tetb)\pr \Sigb^{-1}(\mathbf{x}-\tetb)=$ constant] of an elliptical family. His tests are the Wald tests associated with any available estimator $\hat{\Sigb}$ of ${\Sigb}$ such that $n^{1/2} \vecop (\hat{\Sigb}- \Sigb)$ is asymptotically normal, with mean zero and covariance matrix ${\bolds\Psi} _{f}$, say, under ${f}\in \mathcal{F}$, where $\mathcal F$ denotes some class of elliptical densities and ${\bolds\Psi} _{f}$ either is known or (still, under ${f}\in\mathcal{F}$) can be estimated consistently. The resulting tests then are valid under the class $\mathcal F$. When the estimator $\hat{\Sigb}$ is the empirical covariance matrix $\mathbf{S}^{(n)}$, these tests under Gaussian densities are asymptotically equivalent to Gaussian LRTs. Unlike the latter, however, they remain (asymptotically) valid under the class $\mathcal{F}^4$ of all elliptical distributions with finite moments of order four, and hence qualify as \textit{pseudo-Gaussian} versions of the Gaussian LRTs. 
Due to their importance for applications, throughout this paper, we concentrate on the following two problems: (a) testing the null hypothesis ${\mathcal H}_{0}^{\betab}$ that the first principal direction $ \betab_{1}$ coincides (up to the sign) with some specified unit vector $\betab_{}^{0}$ (the choice of the \textit{first} principal direction here is completely arbitrary, and made for the simplicity of exposition only), and (b) testing the null hypothesis ${\mathcal H}_{0}^{\Lamb }$ that ${\sum_{j=q+1}^{k} \lambda_{j}}/{\sum_{j=1}^{k}\lambda_{j}}=p$ against the one-sided alternative under which ${\sum_{j=q+1}^{k} \lambda_{j}}/{\sum_{j=1}^{k}\lambda_{j}}<p$, $p \in(0,1)$ given. The Gaussian LRT for (a) was introduced in a seminal paper by \citet{A63}. Denoting by ${\lambda}_{j; \Sb}$ and $\betab_{j; \Sb}$, $j=1,\ldots,k$, respectively, the eigenvalues and eigenvectors of $\mathbf{S}^{(n)}$, this test---denote it by $\phi^{(n)}_{\betab ;\mathrm{Anderson}}$---rejects ${\mathcal H}_{0}^{\betab}$ (at asymptotic level $\alpha$) as soon as \begin{eqnarray}\label{AndTesta} Q_{\mathrm{Anderson}}^{(n)} :\!&=& n \bigl[ {\lambda}_{1; \Sb} \betab{}^{0\prime} \bigl(\mathbf{S}^{(n)}\bigr)^{-1} \betab_{}^{0} + {\lambda}_{1; \Sb}^{-1} \betab^{0\prime} \mathbf {S}^{(n)}\betab_{}^{0}-2 \bigr] \nonumber\\[-8pt]\\[-8pt] & = & \frac{n}{\lambda_{1; {\Sb}} }\sum_{j=2}^{k} \frac{(\lambda _{j; {\Sb}}- \lambda_{1; {\Sb}} )^{2}}{\lambda_{j; {\Sb}}^{3}} \bigl( \betab_{j; \Sb}\pr\mathbf{S}^{(n)}\betab^{0} \bigr)^{2}\nonumber \end{eqnarray} exceeds the $\alpha$ upper-quantile of the chi-square distribution with $(k-1)$ degrees of freedom.
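For concreteness, the equality of the two expressions for $Q^{(n)}_{\mathrm{Anderson}}$ in~(\ref{AndTesta}) is easily checked numerically; the sketch below (simulated data, with the hypothesized direction $\betab^0=\mathbf{e}_1$ an arbitrary illustrative choice) evaluates both forms:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 400, 3
X = rng.multivariate_normal(np.zeros(k), np.diag([5.0, 2.0, 1.0]), size=n)
Xbar = X.mean(axis=0)
S = (X - Xbar).T @ (X - Xbar) / n        # S^(n)

lam, beta = np.linalg.eigh(S)
order = np.argsort(lam)[::-1]
lam, beta = lam[order], beta[:, order]   # eigenvalues in decreasing order

beta0 = np.zeros(k)
beta0[0] = 1.0                           # hypothesized first principal direction

# first expression: n[ lam1 b0' S^-1 b0 + lam1^-1 b0' S b0 - 2 ]
Q1 = n * (lam[0] * (beta0 @ np.linalg.inv(S) @ beta0)
          + (beta0 @ S @ beta0) / lam[0] - 2.0)

# second (equivalent) expression, a weighted sum over j = 2, ..., k
Q2 = (n / lam[0]) * sum((lam[j] - lam[0]) ** 2 / lam[j] ** 3
                        * (beta[:, j] @ S @ beta0) ** 2 for j in range(1, k))

assert np.isclose(Q1, Q2)
```

Under the null, $Q^{(n)}_{\mathrm{Anderson}}$ is then compared with the chi-square quantile as described above.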
The behavior of this test being particularly poor under non-Gaussian densities, Tyler (\citeyear{T81}, \citeyear{T83}) proposed a pseudo-Gaussian version $\phi^{(n)}_{\betab;\mathrm {Tyler}}$, which he obtains via an empirical kurtosis correction \begin{equation}\label{TylTesta} Q^{(n)}_{\mathrm{Tyler}}:= \bigl(1+ \hat{\kappa}^{(n)}\bigr)^{-1} Q_{\mathrm {Anderson}}^{(n)} \end{equation} of~(\ref{AndTesta}) (same asymptotic distribution), where $\hat {\kappa}^{(n)}$ is some consistent estimator of the underlying kurtosis parameter $\kappa_k$; see Section~\ref{pseudovec} for a definition. A related test of \citet{S91} addresses the same problem where however $\betab_{1}$ is the first eigenvector of the \textit {correlation} matrix. The traditional Gaussian test for problem (b) was introduced in the same paper by \citet{A63}. For any $k\times k$ diagonal matrix $\Lamb$ with diagonal entries $\lambda_1,\ldots,\lambda_k$, let ${a}_{p,q}({\Lamb}):=2 ( p^{2} \sum_{j=1}^{q} {\lambda }_{j}^{2}+ (1-p)^{2} \sum_{j=q+1}^{k} {\lambda}_{j}^{2} )$. Defining\vspace*{-2pt} $\betab_\Sb:=(\betab_{1; \Sb},\ldots,\betab_{k; \Sb})$ and $\mathbf{c}_{p,q}:= ( -p \mathbf{1}_{q}\pr{\,}\vdots{\,}(1-p) \mathbf{1}_{k-q}\pr)\pr $, with\vspace*{1pt} $\mathbf{1}_\ell:= (1,\ldots, 1)\pr\in\rr^\ell$, and denoting by $\dvec(\Ab)$ the vector obtained by stacking the diagonal elements of a square matrix $\Ab$, Anderson's test, $\phi^{(n)}_{\Lamb; \mathrm{Anderson}}$, say, rejects the null hypothesis at asymptotic level $\alpha$ whenever \begin{eqnarray}\label{TAnd} T_{\mathrm{Anderson}}^{(n)} :\!&=& n^{1/2} ({a}_{p,q}({\Lamb}_{\mathbf{S}}))^{-1/2} \mathbf{c}_{p,q}\pr\dvec\bigl({\betab}_\Sb\pr{\Sb}^{(n)}{\betab}_\Sb\bigr) \nonumber\\[-8pt]\\[-8pt] &=& n^{1/2} ({a}_{p,q}({\Lamb}_{\mathbf{S}}))^{-1/2} \Biggl( (1-p) \sum_{j=q+1}^{k} {\lambda}_{j;\Sb}- p\sum_{j=1}^{q} {\lambda}_{j;\Sb} \Biggr)\nonumber \end{eqnarray} is less than the standard normal $\alpha$-quantile. 
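The two forms of $T^{(n)}_{\mathrm{Anderson}}$ in~(\ref{TAnd}) coincide because $\betab_\Sb\pr\Sb^{(n)}\betab_\Sb=\Lamb_{\Sb}$; the following numerical sketch (simulated data; the choices of $n$, $p$ and $q$ are arbitrary) checks this:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, q, p = 400, 4, 2, 0.2
X = rng.multivariate_normal(np.zeros(k), np.diag([6.0, 3.0, 1.0, 0.5]), size=n)
Xbar = X.mean(axis=0)
S = (X - Xbar).T @ (X - Xbar) / n        # S^(n)

lam, beta = np.linalg.eigh(S)
order = np.argsort(lam)[::-1]
lam, beta = lam[order], beta[:, order]

# a_{p,q}(Lambda_S) and c_{p,q} as defined above
a = 2 * (p**2 * np.sum(lam[:q]**2) + (1 - p)**2 * np.sum(lam[q:]**2))
c = np.concatenate([-p * np.ones(q), (1 - p) * np.ones(k - q)])

# the two equivalent expressions in the display
T1 = np.sqrt(n / a) * (c @ np.diag(beta.T @ S @ beta))
T2 = np.sqrt(n / a) * ((1 - p) * np.sum(lam[q:]) - p * np.sum(lam[:q]))
assert np.isclose(T1, T2)
```

The null hypothesis is rejected when this statistic falls below the standard normal $\alpha$-quantile.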
Although he does not provide any explicit form, \citet{D77} briefly explains how to derive the pseudo-Gaussian version \begin{equation}\label{TDav} T_{\mathrm{Davis}}^{(n)}:= \bigl(1+ \hat{\kappa}{}^{(n)}\bigr)^{-1/2} T_{\mathrm{Anderson}}^{(n)} \end{equation} of~(\ref{TAnd}), where $\hat{\kappa}^{(n)}$ again is any consistent estimator of the underlying kurtosis parameter $\kappa_k$. The resulting test (same asymptotic standard normal distribution) will be denoted as $\phi^{(n)}_{\Lamb; \mathrm{Davis}}$. Being based on empirical covariances, though, the pseudo-Gaussian tests based on~(\ref{TylTesta}) and~(\ref{TDav}) unfortunately remain poorly robust. They still are very sensitive to the presence of outliers---an issue which we do not touch here; see, for example, \citet{CH00}, \citet{SAW06}, and the references therein. Moreover, they do require finite moments of order four---hence lose their validity under heavy tails, and only address the traditional covariance-based concept of principal components. This limitation is quite regrettable, as principal components, irrespective of any moment conditions, clearly depend on the elliptical geometry of underlying distributions only. Recall that an elliptical density over ${\R}^{k}$ is determined by a \textit{location vector} ${\bolds\theta}\in\R^{k}$, a~\textit{scale} parameter $\sigma\in\R _{0}^{+}$ (where $\sigma^2 $ is not necessarily a variance), a~real-valued $k \times k$ symmetric and positive definite matrix ${\Vb }$ called the \textit{shape} matrix, and a \textit{standardized radial density} $f_{1}$ (whenever the elliptical density has finite second-order moments, the shape and covariance matrices $\mathbf{V}$ and $\Sigb_{\mathrm{cov}}$ are proportional, hence share the same collection of eigenvectors and, up to a positive factor, the same collection of eigenvalues). 
Although traditionally described in terms of the covariance matrix $\Sigb_{\mathrm{cov}}$, most inference problems in multivariate analysis naturally extend to arbitrary elliptical models, with the shape matrix $\Vb$ or the \textit{scatter matrix} $\Sigb:= \sigma^2\Vb$ playing the role of $\Sigb _{\mathrm{cov}}$. Principal components are no exception; in particular, problems (a) and (b) indifferently can be formulated in terms of shape or covariance eigenvectors and eigenvalues. Below, $\Lamb_{\Vb}:=\operatorname{diag}(\lambda_{1;\Vb},\ldots,\lambda_{k;\Vb })$ and $\betab:=(\betab_1,\ldots,\betab_k)$ collect the eigenvalues and eigenvectors of the shape matrix $\Vb$. Our objective in this paper is to provide a class of signed-rank tests which remain valid under arbitrary elliptical densities, in the absence of \textit{any} moment assumption, and hence are not limited to the traditional covariance-based concept of principal components. Of particular interest within that class are the \textit{van der Waerden}---that is, \textit{normal-score}---tests, which are asymptotically equivalent, under Gaussian densities, to the corresponding Gaussian LRTs (the asymptotic optimality of which we moreover establish in Section~\ref{gausscase}, along with local powers). Under non-Gaussian conditions, however, these van der Waerden tests uniformly dominate, in the Pitman sense, the pseudo-Gaussian tests based on~(\ref{TylTesta}) and~(\ref{TDav}) above, which, as a result, turn out to be nonadmissible (see Section~\ref{secare}). Our tests are based on the multivariate signs and ranks previously considered by Hallin and Paindaveine (\citeyear{HP06a}, \citeyear{HP08a}) and \citet{HOP06}. Denote by $\mathbf{X}_1, \ldots, \mathbf{X}_n$ an observed $n$-tuple of $k$-dimensional elliptical vectors with location ${\bolds\theta}$ and shape $\Vb$. 
Let $\mathbf{Z}_{i}:= \mathbf{V}^{-1/2}(\mathbf{X}_i - {\bolds\theta})$ denote the \textit{sphericized} version of $\mathbf{X}_i$ (throughout $\mathbf{A}^{1/2}$, for a symmetric and positive definite matrix $\mathbf{A}$, stands for the symmetric and positive definite root of $\mathbf A$): the corresponding multivariate signs are defined as the unit vectors $\mathbf{U}_i= \mathbf{U}_i({\bolds\theta}, \Vb):= \mathbf{Z}_i/\Vert\mathbf{Z}_i\Vert$, while the ranks $R^{(n)}_i=R^{(n)}_i({\bolds\theta}, \Vb)$ are those of the norms $\Vert\mathbf{Z}_i\Vert$, $i=1,\ldots, n$. Our rank tests are based on signed-rank covariance matrices of the form \[ {\utSb}{}_{K}^{(n)} := \frac{1}{n} \sum_{i=1}^{n} K \biggl(\frac{ R^{(n)}_{i}}{n+1} \biggr) \mathbf{U}_{i} \mathbf{U}_{i}\pr, \] where $K\dvtx(0,1)\to\R$ stands for some \textit{score function}, and $\mathbf{U}_{i}=\mathbf{U}_i(\hat{\bolds\theta}, \hats{\Vb})$ and $R^{(n)}_{i} = R^{(n)}_i(\hat{\bolds\theta}, \hats{\Vb})$ are computed\vspace*{1pt} from appropriate estimators $\hat{\bolds\theta}$ and $ \hats{\Vb}$ of ${\bolds\theta}$ and~$\Vb$. More precisely,\vspace*{-3pt} for the testing problem (a), the rank-based test ${\utphi }{}^{(n)}_{\betab; K}$ rejects the null hypothesis ${\mathcal H}_{0}^{\betab}$ (at asymptotic level $\alpha$) whenever \[ {\utQ}{}_{K}^{(n)} := \frac{nk(k+2)}{\mathcal{J}_k(K)} \sum_{j=2}^{k} \bigl(\tilde{\betab }_{j}\pr{\utSb}{}_{K}^{(n)}\betab^{0} \bigr)^{2} \] exceeds the $\alpha$ upper-quantile of the chi-square distribution with $(k-1)$ degrees of freedom; here, $\mathcal{J}_k(K)$ is a standardizing constant and $\tilde{\betab}_j$ stands for a constrained estimator of $\Vb$'s $j$th eigenvector; see (\ref {choiceprelim}) for details. 
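The computation of ${\utSb}{}_{K}^{(n)}$ is elementary once estimators of ${\bolds\theta}$ and $\Vb$ are available. The sketch below uses the van der Waerden scores $K=\Psi_k^{-1}$ and, purely for illustration, substitutes the sample mean and sample covariance for the preliminary estimators $\hat{\bolds\theta}$ and $\hats\Vb$ (which need not be the estimators adopted in the paper):

```python
import numpy as np
from scipy.stats import chi2, rankdata

rng = np.random.default_rng(3)
n, k = 300, 3
X = rng.multivariate_normal(np.zeros(k), np.diag([3.0, 2.0, 1.0]), size=n)

# placeholder estimators of location and shape (illustrative choice only)
theta_hat = X.mean(axis=0)
V_hat = np.cov(X, rowvar=False)

# symmetric square root V^{-1/2}, as in the definition of Z_i
w, P = np.linalg.eigh(V_hat)
V_inv_sqrt = P @ np.diag(w ** -0.5) @ P.T

Z = (X - theta_hat) @ V_inv_sqrt
norms = np.linalg.norm(Z, axis=1)
U = Z / norms[:, None]                 # multivariate signs U_i
R = rankdata(norms)                    # ranks R_i of the norms

# van der Waerden scores K(R_i/(n+1)) with K = Psi_k^{-1} (chi-square quantile)
K = chi2.ppf(R / (n + 1), df=k)

# signed-rank covariance matrix S_K^(n)
S_K = (U * K[:, None]).T @ U / n
```

Since each $\mathbf{U}_i\mathbf{U}_i\pr$ has unit trace, the trace of ${\utSb}{}_K^{(n)}$ equals the average score, which provides a convenient sanity check.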
As for problem (b), our rank tests ${\utphi}{}^{(n)}_{\Lamb; K}$ are based on statistics of the form \[ {\utT}{}_{K}^{(n)}:= \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} (a_{p,q}(\tilde{\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q}\pr\dvec\bigl(\tilde{\Lamb}_{\Vb}^{1/2}\hat {\betab}\pr{\utSb}{}_{K}^{(n)}\hat{\betab}\tilde{\Lamb}_{\Vb }^{1/2}\bigr), \] where $\tilde{\Lamb}_{\Vb}$ and $\hat{\betab}$ are adequate estimators of $\Lamb_{\Vb}$ and $\betab$, respectively. The null hypothesis ${\mathcal H}_{0}^{\Lamb }$ is to be rejected at asymptotic level $\alpha$ whenever ${\utT}{}_{K}^{(n)}$ is smaller than the standard normal $\alpha$-quantile. These tests are not just validity-robust, they also are efficient. For any smooth radial density $f_1$, indeed, the score function $K=K_{f_1}$ (see Section~\ref{scores}) provides a signed-rank test which is \textit {locally and asymptotically optimal} (\textit{locally and asymptotically most stringent}, in the Le Cam sense) under radial density $f_1$. In particular, when based on \textit{normal or van der Waerden} scores $K=K_{\phi_1}:=\Psi_k^{-1}$, where \label{defphik} $\Psi_k$ denotes the chi-square distribution function with $k$ degrees of freedom, our rank tests achieve the same asymptotic performances as the optimal Gaussian ones at the multinormal, while enjoying maximal validity robustness, since no assumption is required on the underlying density beyond ellipticity. Moreover, the asymptotic relative efficiencies (AREs) under non-Gaussian densities of these van der Waerden tests are uniformly larger than one with respect to their pseudo-Gaussian parametric competitors; see Section~\ref{secare}. On all counts, validity, robustness, and efficiency, our van der Waerden tests thus perform uniformly better than the daily practice Anderson tests and their pseudo-Gaussian extensions. 
\subsection{Local asymptotic normality for principal components} The methodological tool we are using throughout is Le Cam's theory of \textit{locally asymptotically normal} (LAN) \textit{experiments} [for background reading on LAN, we refer to \citet{Lcam86}, \citet{CY00} or \citet{V98}; see also \citet{S85} or \citet{R94}]. Although this powerful method has been used quite successfully in inference problems for elliptical families [Hallin and Paindaveine (\citeyear{HP02}, \citeyear{HP04}, \citeyear{HP05}, \citeyear{HP06a}), \citet{HOP06} and \citet{HP08a} for location, VARMA dependence, linear models, shape and scatter, resp.], it has not been considered so far in problems involving eigenvectors and eigenvalues, and, as a result, little is\vadjust{\goodbreak} known about optimality issues in that context. The main reason, probably, is that the eigenvectors $\betab$ and eigenvalues $\Lamb$ are complicated functions of the covariance or scatter matrix $\Sigb$, with unpleasant identification problems at possibly multiple eigenvalues. These special features of eigenvectors and eigenvalues, as we shall see, make the LAN approach more involved than in standard cases. LAN (actually, ULAN) has been established, under appropriate regularity assumptions on radial densities, in \citet{HP06a}, for elliptical families when parametrized by a location vector ${\bolds \theta}$ and a scatter matrix $\Sigb$ [more precisely, the vector $\operatorname{vech}(\Sigb)$ resulting from stacking the upper diagonal elements of $\Sigb$]. Recall, however, that LAN or ULAN are properties of the parametrization of a family of distributions, not of the family itself. Now, due to the complicated relation between $({\bolds\theta}, \operatorname{vech} \Sigb)$ and the quantities of interest $\Lamb$ and $\betab$, the $({\bolds\theta}, \operatorname{vech}(\Sigb ))$-parametrization is not convenient in the present context. 
Another parametrization, involving location, scale, and shape eigenvalues and eigenvectors, is much preferable, as the hypotheses to be tested then take simple forms. Therefore, we show (Lemma~\ref{LElemme}) how the ULAN result of \citet{HP06a} carries over to this new parametrization where, moreover, the information matrix, very conveniently, happens to be block-diagonal---a structure that greatly simplifies inference in the presence of nuisance parameters. Unfortunately, this new parametrization, where $\betab$ ranges over the set ${\mathcal SO}_k$ of $k\times k$ real orthogonal matrices with determinant one, raises problems of another nature. The subparameter $\operatorname{vec}(\betab)$ indeed ranges over $\operatorname{vec}({\mathcal SO}_k)$, a~nonlinear manifold of $\R^{k^2} $, yielding a \textit{curved} ULAN experiment. By a \textit{curved experiment}, we mean a parametric model indexed by an $\ell$-dimensional parameter ranging over some nonlinear manifold of $\R^\ell$, as in \textit{curved} exponential families, for instance. Under a $\operatorname{vec}(\betab)$-parametrization, the local experiments are not the traditional \textit{Gaussian shifts} anymore, but \textit{curved} Gaussian location ones, that is, Gaussian location models under which the mean of a multinormal observation with specified covariance structure ranges over a nonlinear manifold of $\R^\ell$, so that the simple local asymptotic optimality results associated with local Gaussian shifts no longer hold. To the best of our knowledge, such experiments never have been considered in the LAN literature. A third parametrization, however, can be constructed from the fact that $\betab$ is in ${\mathcal SO}_k$ if and only if it can be expressed as the exponential of a $k\times k$ skew-symmetric matrix~$\iotab$.
Denoting by $\operatorname{vech}^+(\iotab)$ the vector resulting from stacking the upper off-diagonal elements of $\iotab$, this yields a parametrization involving location, scale, shape eigenvalues and $\operatorname{vech}^+(\iotab)$; the latter subparameter ranges freely over $\R^{k(k-1)/2}$, yielding a well-behaved ULAN parametrization where local experiments converge to the classical Gaussian shifts, thereby allowing for the classical construction [\citet{Lcam86}, Section 11.9] of locally asymptotically optimal tests. The trouble is that translating null hypotheses (a) and (b) into the $\iotab$-space in practice seems unfeasible. Three distinct ULAN structures thus coexist on the same families of distributions: (ULAN1) proved in \citet{HP06a} for the $({\bolds \theta}, \operatorname{vech}(\Sigb))$-parametrization, serving as the mother of all subsequent ones; (ULAN2) for the location-scale-eigenvalues--eigenvectors parametrization, where the null hypotheses of interest take simple forms, but the local experiments happen to be \textit{curved} ones; (ULAN3) for the location-scale-eigenvalues--skew-symmetric matrix parametrization, where everything is fine from a decision-theoretical point of view, with, however, the major inconvenience that explicit solutions cannot be obtained in terms of original parameters. The main challenge of this paper was the delicate interplay between these three structures. Basically, we are showing (Lemma~\ref{LElemme}) how ULAN can be imported from the first parametrization, and (Section \ref{curved}) optimality results from the third parametrization, both to the second one. These results then are used in order to derive locally asymptotically optimal Gaussian, pseudo-Gaussian and rank-based tests for eigenvectors and eigenvalues of shape. This treatment we are giving of curved ULAN experiments, to the best of our knowledge, is original, and likely to apply in a variety of other contexts.
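The key fact behind the third parametrization, namely that the exponential of a $k\times k$ skew-symmetric matrix $\iotab$ belongs to ${\mathcal SO}_k$ and that $\operatorname{vech}^+(\iotab)$ has $k(k-1)/2$ free entries, is easily verified numerically (a sketch; the particular $\iotab$ below is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
k = 4
A = rng.standard_normal((k, k))
iota = (A - A.T) / 2                    # a skew-symmetric matrix

# vech^+(iota): the k(k-1)/2 free upper off-diagonal entries
vech_plus = iota[np.triu_indices(k, 1)]
assert vech_plus.size == k * (k - 1) // 2

beta = expm(iota)                       # exponential of a skew-symmetric matrix

# beta lies in SO_k: orthogonal, with determinant one
assert np.allclose(beta.T @ beta, np.eye(k))
assert np.isclose(np.linalg.det(beta), 1.0)
```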
\subsection{Outline of the paper} \label{outlinesec} Section~\ref{sec2} contains, for easy reference, some basic notation and fundamental assumptions to be used later on. The main ULAN result, of a nonstandard curved nature, is established in Section~\ref{LANsection}, and its consequences for testing developed in Section~\ref{paramtests}. As explained in the \hyperref[intro]{Introduction}, optimality is imported from an intractable parametrization involving skew-symmetric matrices. This is elaborated, in some detail, in Section~\ref{curved}, where a general result is derived, and in Sections~\ref{howeigenvectors} and~\ref{eigenvaluesprob}, where that result is applied to the particular cases of eigenvectors and eigenvalues of shape, under arbitrary radial density $f_1$. Special attention is given, in Sections~\ref{Gaussbetab} and~\ref{Gausslamb}, to the Gaussian case ($f_1=\phi_1$); in Sections~\ref{pseudovec} and~\ref{pseudoval}, those Gaussian tests are extended to a pseudo-Gaussian context with finite fourth-order moments. Then, in Section~\ref{ranktests}, rank-based procedures, which do not require any moment assumptions, are constructed: Section~\ref{rankHajek} provides a general asymptotic representation result [Proposition~\ref{Hajek}(i)] in the H\'ajek style; asymptotic normality, under the null as well as under local alternatives, follows as a corollary [Proposition~\ref{Hajek}(ii)]. Based on these results, Sections~\ref{gsjdlr} and~\ref{rankeigvalues} provide optimal rank-based tests for the eigenvector and eigenvalue problems considered throughout; Sections~\ref{secare} and \ref{secsimu} conclude with asymptotic relative efficiencies and simulations. Technical proofs are concentrated in the \hyperref[app]{Appendix}.
The reader interested in inferential results and principal components only (the form of the tests, their optimality properties and local powers) may skip Sections~\ref{LANsection} and~\ref{paramtests}, which are devoted to curved LAN experiments, and concentrate on Section~\ref{gausscase} for the ``parametric'' procedures, on Section~\ref{ranktests} for the rank-based ones, and on Sections~\ref{secare} and~\ref{secsimu} for their asymptotic and finite-sample performances. \subsection{Notation} \label{notasec} The following notation will be used throughout. For any $k \times k$ matrix $\Ab=(A_{ij})$, write $\operatorname{vec}(\Ab)$ for the $k^2$-dimensional vector obtained by stacking the columns of $\Ab$, $\operatorname{vech}(\Ab)$ for the $[k(k+1)/2]$-dimensional vector obtained by stacking the upper diagonal elements of those columns, $\operatorname{vech}^+(\Ab)$ for the $[k(k-1)/2]$-dimensional vector obtained by stacking the upper off-diagonal elements of the same, and $\dvec(\Ab)=:(A_{11}, (\dvecrond(\Ab))\pr)\pr$ for the $k$-dimensional vector obtained by stacking the diagonal elements of $\Ab$; $\dvecrond(\Ab)$ thus is $\dvec(\Ab)$ deprived of its first component. Let $\Hb_{k}$ be the $k \times k^{2}$ matrix such that $\Hb_{k} \vecop({\Ab})= \dvec(\Ab)$. Note that we then have that $\Hb_{k}\pr\dvec(\Ab)= \vecop(\Ab)$ for any $k\times k$ diagonal matrix $\Ab$, which implies that $\Hb_{k}\Hb_{k}\pr=\mathbf{I}_k$. Write $\operatorname{diag}(\mathbf{B}_{1}, \ldots, \mathbf{B}_{m})$ for the block-diagonal matrix with blocks ${\Bb}_{1}, \ldots, {\Bb}_m$ and $\mathbf{A}^{\otimes2}$ for the Kronecker product $\mathbf{A}\otimes\mathbf{A}$. Finally, denoting by $\mathbf{e}_\ell$ the $\ell$th vector in the canonical basis of $\R^k$, write $\mathbf{K}_{k}:= \sum_{i,j =1}^{k}(\mathbf{e}_i\mathbf{e}_j\pr)\otimes(\mathbf{e}_j\mathbf{e}_i\pr)$ for the $k^2\times k^2$ \textit{commutation matrix}.
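The identities just listed are easily verified numerically. The following sketch (Python with NumPy; not part of the formal development) implements $\operatorname{vec}$, $\operatorname{vech}^+$, $\dvec$, $\Hb_k$ and the commutation matrix $\Kb_k$, and checks $\Hb_k\Hb_k\pr=\mathbf{I}_k$, $\Hb_k\pr\dvec(\Ab)=\operatorname{vec}(\Ab)$ for diagonal $\Ab$, and the defining property $\Kb_k\operatorname{vec}(\Ab)=\operatorname{vec}(\Ab\pr)$; the test matrix is an arbitrary illustrative choice.

```python
import numpy as np

def vec(A):
    """Stack the columns of A into a single vector."""
    return A.flatten(order="F")

def vech_plus(A):
    """Stack the upper off-diagonal elements of A, column by column (i < j)."""
    k = A.shape[0]
    return np.array([A[i, j] for j in range(k) for i in range(j)])

def dvec(A):
    """Stack the diagonal elements of A."""
    return np.diag(A).copy()

def H_matrix(k):
    """The k x k^2 matrix H_k with H_k vec(A) = dvec(A)."""
    H = np.zeros((k, k * k))
    for i in range(k):
        H[i, i * k + i] = 1.0
    return H

def commutation(k):
    """K_k = sum_{i,j} (e_i e_j') kron (e_j e_i'); satisfies K_k vec(A) = vec(A')."""
    E = np.eye(k)
    K = np.zeros((k * k, k * k))
    for i in range(k):
        for j in range(k):
            K += np.kron(np.outer(E[:, i], E[:, j]),
                         np.outer(E[:, j], E[:, i]))
    return K

k = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((k, k))
H = H_matrix(k)

assert np.allclose(H @ H.T, np.eye(k))                 # H_k H_k' = I_k
assert np.allclose(commutation(k) @ vec(A), vec(A.T))  # K_k vec(A) = vec(A')
D = np.diag(dvec(A))                                   # a diagonal matrix
assert np.allclose(H.T @ dvec(D), vec(D))              # H_k' dvec = vec on diagonals
assert vech_plus(A).shape == (k * (k - 1) // 2,)
```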
\section{Main assumptions}\label{sec2} \subsection{Elliptical densities} \label{defelliptttt} We throughout assume that the observations are elliptically symmetric. More precisely, defining \[ \mathcal{F} := \{ h\dvtx\R^+_0\to\R^+ \dvtx\mu_{k-1;h}<\infty\} , \] where $\mu_{\ell;h}:=\int_0^{\infty} r^{\ell} h(r) \,dr$, and \[ \mathcal{F}_1 := \biggl\{ h_1 \in\mathcal{F} \dvtx(\mu_{k-1; h_1})^{-1}\int_0^{1}r^{k-1}h_1(r) \,d r =1/2 \biggr\}, \] we denote by $\Xb_{1}^{(n)}, \ldots, \Xb^{(n)}_n$ an observed $n$-tuple of mutually independent $k$-dimensional random vectors with probability density function of the form \begin{equation} \label{density} f(\mathbf{x}):=c_{k,f_1} \vert{\bolds\Sigma} \vert^{-1/2} f_{1} \bigl( \bigl( (\mathbf{x} - {\bolds\theta})\pr{\bolds\Sigma}^{-1} (\mathbf{x} - {\bolds\theta}) \bigr)^{1/2} \bigr),\qquad \mathbf{x}\in\R^{k}, \end{equation} for some $k$-dimensional vector ${\bolds\theta}$ (\textit{location}), some symmetric and positive definite $(k\times k)$ \textit{scatter} matrix $\Sigb$, and some $f_1$ in the class $\mathcal{F}_1 $ of \textit {standardized radial densities}; throughout, $\vert\Ab\vert$ stands for the determinant of the square matrix $\Ab$. Define the \textit{elliptical coordinates} of $\Xb_{i}^{(n)}$ as \begin{eqnarray} \label{Ud} \mathbf{U}_{i}^{(n)}({\bolds\theta}, {\Sigb}) &:=& \frac{\Sigb^{-1/2} (\Xb_{i}^{(n)} -{\bolds\theta})}{\| \Sigb^{-1/2} (\Xb _{i}^{(n)} -{\bolds\theta})\|}\quad\mbox{and}\nonumber\\[-8pt]\\[-8pt] d_{i}^{(n)}({\bolds\theta}, {\Sigb}) &:=& \bigl\| \Sigb^{-1/2} \bigl(\Xb_{i}^{(n)} -{\bolds\theta}\bigr)\bigr\|.\nonumber \end{eqnarray} Under the assumption of ellipticity, the \textit{multivariate signs} $\mathbf{U}_{i}^{(n)}({\bolds\theta}, {\Sigb})$,\break $ i=1,\ldots, n $, are i.i.d. uniform over the unit sphere in $\R^k$, and independent of the \textit{standardized elliptical distances} $d^{(n)}_{i} ({\bolds\theta}, {\Sigb})$. 
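The distributional properties of these elliptical coordinates can be illustrated by simulation. The sketch below (Python with NumPy; not part of the formal development) draws from~(\ref{density}) with Gaussian radial density, chosen purely for convenience; the location, scatter and sample size are arbitrary illustrative values. It checks that the signs lie on the unit sphere, are empirically centered, and are empirically uncorrelated with the elliptical distances.

```python
import numpy as np

rng = np.random.default_rng(42)
k, n = 3, 200_000
theta = np.array([1.0, -2.0, 0.5])       # location (arbitrary)
Sigma = np.array([[4.0, 1.0, 0.0],       # scatter (arbitrary, positive definite)
                  [1.0, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])

# Gaussian elliptical sample: X_i = theta + Sigma^{1/2} Z_i, Z_i standard normal
w, V = np.linalg.eigh(Sigma)
S_half = V @ np.diag(np.sqrt(w)) @ V.T        # symmetric square root of Sigma
S_inv_half = V @ np.diag(w ** -0.5) @ V.T     # its inverse
X = theta + rng.standard_normal((n, k)) @ S_half

# elliptical coordinates: signs U_i and distances d_i as in (Ud)
Z = (X - theta) @ S_inv_half                  # rows are Sigma^{-1/2}(X_i - theta)
d = np.linalg.norm(Z, axis=1)
U = Z / d[:, None]

assert np.allclose(np.linalg.norm(U, axis=1), 1.0)  # signs lie on the unit sphere
assert np.max(np.abs(U.mean(axis=0))) < 0.01        # uniformity: mean close to 0
for j in range(k):                                  # signs ~ uncorrelated with d
    assert abs(np.corrcoef(d, U[:, j])[0, 1]) < 0.01
```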
Imposing that $f_1\in\mathcal{F}_1$ implies that the $d^{(n)}_{i}({\bolds\theta}, {\Sigb})$'s, which have common density $\tilde {f}_{1k}(r) := (\mu_{k-1;f_1})^{-1} r^{k-1} f_1 ( {r} ) I_{[r>0]}$, with distribution function $\tilde{F}_{1k}$, have median one [$\tilde{F}_{1k}(1)=1/2$]---a constraint which identifies $\Sigb$ without requiring any moment assumptions [see \citet{HP06a} for a discussion]. Under finite second-order moments, the scatter matrix $\Sigb$ is proportional to the traditional covariance matrix $\Sigb_{\mathrm{cov}}$. Special instances are the $k$-variate multinormal distribution, with radial density $f_1(r)=\phi_1(r):=\exp(- a_k r^2/2)\label{ak}$, the $k$-variate Student distributions, with radial densities (for $\nu\in\R^+_0$ degrees of freedom) $f_1(r)=f_{1,\nu}^t(r):=(1 + a_{k,\nu} r^2/\nu)^{-(k+\nu)/2}$, and the $k$-variate power-exponential distributions, with radial densities of the form $f_1(r)= f_{1,\eta}^e (r):=\exp(- b_{k,\eta} r^{2\eta})$, $\eta\in\R^+_0$; the positive constants $a_k$, \label{defnak} $a_{k,\nu}$, and $b_{k,\eta}$ are such that $f_1\in\mathcal{F}_1$. The derivation of locally and asymptotically optimal tests at standardized radial density $f_1$ will be based on the \textit{uniform local and asymptotic normality} (ULAN) of the model \textit{at given $f_1$}. This ULAN property---the statement of which requires some further preparation and is delayed to Section~\ref{LANsection}---only holds under some further mild regularity conditions on $f_1$. More precisely, we require $f_1$ to belong to the collection $\mathcal{F}_a$ of all absolutely continuous densities in $\mathcal{F}_1$ for which, denoting by $\dot f_1$ the a.e.
derivative of $f_1$ and letting $\varphi_{f_1}:= -{\dot f_1}/f_1$, the integrals \begin{equation}\label{definfo}\qquad \mathcal{I}_k(f_1) := \int_{0}^{\infty} \varphi_{f_1}^2(r) \tilde {f}_{1k}(r) \,dr \quad\mbox{and}\quad \mathcal{J}_k(f_1):=\int_{0}^{\infty} r^2 \varphi_{f_1}^2(r)\tilde{f}_{1k}(r) \,dr \end{equation} are finite. The quantities $\mathcal{I}_k(f_1)$ and $\mathcal {J}_k(f_1)$ play the roles of \textit{radial Fisher information for location} and \textit{radial Fisher information for shape/scale}, respectively. Slightly less stringent assumptions, involving derivatives in the sense of distributions, can be found in \citet{HP06a}, to which we refer for details. The intersection of $\mathcal{F}_a$ and $\mathcal{F}^4_1:=\{f_1\in\mathcal{F}_1 \dvtx\int _0^\infty r^4\tilde{f}_{1k}(r) \,dr<\infty\}$ will be denoted as $\mathcal{F}_a^4$. \subsection{Score functions} \label{scores} The various \textit{score functions} $K$ appearing in the rank-based statistics to be introduced in Section~\ref{ranktests} will be assumed to satisfy a few regularity assumptions which we are listing here for convenience. \renewcommand{\theassumption}{$(\mathrm{S})$} \begin{assumption}\label{assuS} The score function $K \dvtx(0,1) \rightarrow\rr$ (S1) is continuous and square-integrable, (S2) can be expressed as the difference of two monotone increasing functions, and (S3) satisfies $\int_0^1 K(u) \,du=k$. \end{assumption} Assumption (S3) is a normalization constraint that is automatically satisfied by the score functions $K(u)=K_{f_1}(u):= \varphi_{f_1} ( {\tilde F}{}^{-1}_{1k}(u)) {\tilde F}{}^{-1}_{1k}(u)$ associated with any radial density $f_1\in\mathcal{F}_a$ (at which ULAN holds); see Section~\ref{LANsection}.
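For $f_1=\phi_1$, these quantities can be checked numerically. The sketch below (Python with SciPy; not part of the formal development) relies on the classical fact---assumed here, not proved---that under the Gaussian radial density $a_k d^2$ is chi-square with $k$ degrees of freedom, so that the constraint $\tilde F_{1k}(1)=1/2$ forces $a_k$ to be the $\chi^2_k$ median, the score $K_{\phi_1}$ reduces to the $\chi^2_k$ quantile function, and $\mathcal{J}_k(\phi_1)=\mathrm{E}[(\chi^2_k)^2]=k(k+2)$.

```python
import numpy as np
from scipy import stats, integrate

k = 4

# Under f1 = phi1, a_k d^2 is chi-square with k degrees of freedom (a classical
# fact assumed here); F~_{1k}(1) = 1/2 then forces a_k to be the chi^2_k median.
a_k = stats.chi2.ppf(0.5, k)
assert np.isclose(stats.chi2.cdf(a_k, k), 0.5)

# Gaussian score: phi_{phi1}(r) = a_k r, so K_{phi1}(u) = a_k (F~^{-1}(u))^2
# is simply the chi^2_k quantile function.
K = lambda u: stats.chi2.ppf(u, k)

# normalization (S3): int_0^1 K(u) du = E[chi^2_k] = k
val, _ = integrate.quad(K, 0.0, 1.0, limit=200)
assert abs(val - k) < 1e-3

# radial Fisher information for shape/scale: J_k(phi1) = E[(chi^2_k)^2] = k(k+2)
val2, _ = integrate.quad(lambda u: K(u) ** 2, 0.0, 1.0, limit=200)
assert abs(val2 - k * (k + 2)) < 1e-2
```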
For score functions $K,K_1,K_2$ satisfying Assumption~\ref{assuS}, let [throughout, $U$ stands for a random variable uniformly distributed over $(0,1)$] \begin{equation}\label{infoK} \mathcal{J}_k(K_1,K_2) :=\mathrm{E}[K_1(U)K_2(U)],\qquad \mathcal{J}_k(K):=\mathcal{J}_k(K,K) \end{equation} and \begin{equation}\label{infoKf} \mathcal{J}_k(K,f_1):= \mathcal{J}_k(K,K_{f_1}); \end{equation} with this notation, $\mathcal{J}_k(f_1) = \mathcal {J}_k(K_{f_1},K_{f_1})$. The \textit{power} score functions $K_a(u) := k(a+1) u^a$ ($a\geq0$), with $\mathcal{J}_k(K_a)=k^2 (a+1)^2/(2a+1)$, provide some traditional score functions satisfying Assumption~\ref{assuS}: the sign, Wilcoxon, and Spearman scores are obtained for $a=0$, $a=1$ and $a=2$, respectively. As for the score functions of the form $K_{f_1}$, an important particular case is that of van der Waerden or \textit{normal} scores, obtained for $f_1=\phi_1$. Then \begin{equation}\label{normalscores} K_{\phi_1}(u)=\Psi_k^{-1}(u) \quad\mbox{and}\quad \mathcal {J}_k(\phi_1)=k(k+2), \end{equation} where $\Psi_k$ was defined on page \pageref{defphik}. Similarly, Student densities $f_1=f^t_{1,\nu}$ (with $\nu$ degrees of freedom) yield the scores \[ K_{f_{1,\nu}^t}(u)=\frac{k(k+\nu)G_{k,\nu}^{-1}(u)}{\nu+kG_{k,\nu }^{-1}(u)} \] and \[ \mathcal{J}_k(f_{1,\nu}^t )=\frac {k(k+2)(k+\nu)}{k+\nu+2}, \] where $G_{k,\nu}$ stands for the Fisher--Snedecor distribution function with $k$ and $\nu$ degrees of freedom. \section{Uniform local asymptotic normality (ULAN) and curved Gaussian location local experiments}\label{LANsection} \subsection{Semiparametric modeling of elliptical families} Consider an i.i.d.
$n$-tuple $\mathbf{X}_1^{(n)},\ldots, \mathbf {X}_n^{(n)}$ with elliptical density~(\ref{density}) characterized by $\tetb$, $\Sigb$, and $f_1$. The pair $({\bolds\theta},\Sigb)$ or, if a vector is to be preferred, $({\bolds\theta}\pr, (\operatorname {vech} \Sigb)\pr)\pr$, provides a perfectly valid parametrization of the elliptical family with standardized radial density $f_1$. However, in the problems we are considering in this paper, it will be convenient to have eigenvalues and eigenvectors appearing explicitly in the vector of parameters. Decompose therefore the scatter matrix $\Sigb $ into $\Sigb=\sigma^2\mathbf{V}= \betab\Lamb_{\Sigb} \betab\pr=\betab \sigma^2 \Lamb_{\Vb} \betab\pr$, where $\sigma\in\rr^+_0$ is a \textit{scale parameter} (equivariant under multiplication by a positive constant), and $\mathbf V$ a \textit{shape matrix} (invariant under multiplication by a positive constant) with eigenvalues $\Lamb _{\Vb}=\operatorname{diag}(\lambda_{1;\Vb},\ldots, \lambda_{k;\Vb })=\sigma^{-2}\operatorname{diag}(\lambda_{1;\Sigb},\ldots, \lambda_{k;\Sigb}) = \sigma^{-2}\Lamb_{\Sigb}$; $\betab$ is an element of the so-called \textit{special orthogonal group} ${\mathcal SO}_{k}:= \{ \mathbf{O} | \mathbf{O}\pr\mathbf{O}= \mathbf{I}_k$, $|\mathbf{O}|=1\}$ diagonalizing both $\Sigb$ and $\Vb$, the columns $\betab_1,\ldots, \betab_k$ of which are the eigenvectors (common to $\Sigb$ and $\Vb $) we are interested in. Such a decomposition of scatter into scale and shape can be achieved in various ways. Here, we adopt the determinant-based definition of scale \[ \sigma:=\vert\Sigb\vert^{1/2k}= \prod_{j=1}^{k} \lambda_{j;\Sigb} ^{1/2k}\qquad \mbox{hence } \Vb:= \Sigb/ \sigma^2 = \Sigb/ \vert\Sigb\vert^{1/k}, \] which implies that $\vert\Vb\vert=\prod_{j=1}^{k} \lambda_{j;\Vb }=1$. As shown by \citet{P08}, this choice indeed is the only one for which the information matrix for scale and shape is block-diagonal, which greatly simplifies inference.
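Numerically, this determinant-based decomposition is straightforward. The following sketch (Python with NumPy; not part of the formal development, with an arbitrary illustrative scatter matrix) computes $\sigma^2$, $\Vb$, $\Lamb_\Vb$ and $\betab$, and checks $\vert\Vb\vert=\prod_j\lambda_{j;\Vb}=1$, the spectral reconstruction $\Vb=\betab\Lamb_\Vb\betab\pr$, and $\betab\in{\mathcal SO}_k$.

```python
import numpy as np

Sigma = np.array([[4.0, 1.0, 0.0],    # an arbitrary positive definite scatter
                  [1.0, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])
k = Sigma.shape[0]

# determinant-based scale: sigma^2 = |Sigma|^{1/k}, shape V = Sigma / sigma^2
sigma2 = np.linalg.det(Sigma) ** (1.0 / k)
V = Sigma / sigma2
assert np.isclose(np.linalg.det(V), 1.0)              # |V| = 1 by construction

# spectral decomposition V = beta Lambda_V beta', eigenvalues in decreasing order
lam, beta = np.linalg.eigh(V)                         # eigh returns increasing order
lam, beta = lam[::-1], beta[:, ::-1].copy()
if np.linalg.det(beta) < 0:                           # force beta into SO_k
    beta[:, 0] = -beta[:, 0]

assert np.all(np.diff(lam) < 0)                       # Assumption (A): distinct, ordered
assert np.isclose(np.prod(lam), 1.0)                  # shape eigenvalues multiply to one
assert np.allclose(beta @ np.diag(lam) @ beta.T, V)   # V = beta Lambda_V beta'
assert np.isclose(np.linalg.det(beta), 1.0)           # determinant one
```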
The parametric families of elliptical distributions with specified standardized radial density $f_1$ then are indexed by the $L= k(k+2)$-dimensional parameter \[ \varthetab:= ({\bolds\theta}^{ \prime}, \sigma^{2}, (\dvecrond \Lamb_{\Vb})\pr, (\vecop\betab)\pr)\pr=:(\varthetab_\I\pr ,\vartheta_\II,\varthetab_\III\pr,\varthetab_\IV\pr)\pr, \] where $\dvecrond(\Lamb_{\Vb})=(\lambda_{2;\Vb}, \ldots, \lambda_{k;\Vb})\pr$ since $\lambda_{1;\Vb} =\prod_{j=2}^{k} \lambda_{j;\Vb}^{-1}$. This $\varthetab$-parametrization, however, requires a fully identified $k$-tuple of eigenvectors, which places the following restriction on the eigenvalues $\Lamb_{\Vb}$. \renewcommand{\theassumption}{$(\mathrm{A})$} \begin{assumption}\label{assuA} The eigenvalues $\lam_{j;\Vb}$ of the shape matrix $\Vb$ are all distinct, that is, since $\Sigb$ (hence also $\Vb$) is positive definite, $\lam_{1;\Vb} > \lam_{2; \Vb} > \cdots> \lam_{k;\Vb}>0$. \end{assumption} Denote by $\mathrm{P}^{(n)}_{\varthetab; f_1}$ the joint distribution of $\mathbf{X}_1^{(n)},\ldots, \mathbf{X}_n^{(n)}$ under parameter value $\varthetab$ and standardized radial density $f_1\in\mathcal {F}_1$; the parameter space [the definition of which includes Assumption~\ref{assuA}] then is \[ \Thetab: =\rr^k\times\rr^+_0\times\mathcal{C}^{k-1}\times \operatorname{vec}({\mathcal SO}_{k}), \] where $\mathcal{C}^{k-1}$ is the open cone of $(\rr^+_0)^{k-1}$ with strictly ordered (from largest to smallest) coordinates. Since $\operatorname{vec}({\mathcal SO}_{k})$ is a nonlinear manifold of $\rr^{k^2}$, the $\operatorname{vec}(\betab)$-parametrized experiments are \textit{curved experiments}, in which the standard methods [see Section 11.9 of \citet{Lcam86}] for constructing locally asymptotically optimal tests do not apply.
It is well known, however [see, e.g., \citet{KG89}], that any element $\betab$ of ${\mathcal SO}_{k}$ can be expressed as the exponential $\exp(\iotab)$ of a $k\times k$ \textit{skew-symmetric matrix}~$\iotab$, itself characterized by the $k(k-1)/2$-vector $\operatorname{vech}^+(\iotab)$ of its upper off-diagonal elements. The differentiable mapping $\hbar\dvtx\operatorname{vech}^+(\iotab) \mapsto\hbar(\operatorname{vech}^+(\iotab)):= \operatorname {vec}(\exp(\iotab))$ from $\rr^{k(k-1)/2}$ to $\operatorname{vec}({\mathcal SO}_{k})$ is one-to-one, so that $\operatorname{vech}^+(\iotab)\in\rr ^{k(k-1)/2}$ also can be used as a parametrization instead of $\operatorname{vec}(\betab)\in\operatorname{vec}({\mathcal SO}_{k})$. Both parametrizations yield uniform local asymptotic normality (ULAN). Unlike the $\operatorname{vec}(\betab)$-parametrized one, the $\operatorname {vech}^+(\iotab)$-parametrized experiment is not curved, as $\operatorname{vech}^+(\iotab)$ freely ranges over $\rr^{k(k-1)/2}$, so that the standard methods for constructing locally asymptotically optimal tests apply---which is not the case with curved experiments. On the other hand, neither the $\operatorname{vech}^+(\iotab)$-part of the central sequence, nor the image in the $\operatorname {vech}^+(\iotab)$-space of the null hypothesis $\mathcal{H}_0^{\betab }$ yields tractable forms. Therefore, we rather state ULAN for the curved $\operatorname{vec}(\betab)$-parametrization. Then (Section~\ref{curved}), we develop a general theory of locally asymptotically optimal tests for differentiable hypotheses in curved ULAN experiments. Without Assumption~\ref{assuA}, the $\varthetab$-parametrization is not valid, and cannot enjoy LAN nor ULAN; optimality properties (of a local and asymptotic nature) then cannot be obtained. As far as validity issues (irrespective of optimality properties) are considered, however, this assumption can be weakened.
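The exponential parametrization is easy to illustrate numerically. The sketch below (Python with NumPy/SciPy; not part of the formal development, with an arbitrary point of $\rr^{k(k-1)/2}$) builds a skew-symmetric $\iotab$ from $\operatorname{vech}^+(\iotab)$, exponentiates it, and checks that the result indeed lies in ${\mathcal SO}_k$.

```python
import numpy as np
from scipy.linalg import expm

def skew_from_vech_plus(v, k):
    """Fill a k x k skew-symmetric matrix from its upper off-diagonal
    elements, stacked column by column (i < j)."""
    iota = np.zeros((k, k))
    idx = 0
    for j in range(k):
        for i in range(j):
            iota[i, j] = v[idx]
            iota[j, i] = -v[idx]
            idx += 1
    return iota

k = 3
v = np.array([0.3, -0.7, 0.2])        # arbitrary point of R^{k(k-1)/2}
iota = skew_from_vech_plus(v, k)
assert np.allclose(iota, -iota.T)     # skew-symmetry

beta = expm(iota)                     # beta = exp(iota)
assert np.allclose(beta.T @ beta, np.eye(k))     # orthogonality
assert np.isclose(np.linalg.det(beta), 1.0)      # determinant one: beta in SO_k
```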
If the null hypothesis $\mathcal{H}^\betab _0$ is to make any sense, the first eigenvector $\betab_1$ clearly should be identifiable, but not necessarily the remaining ones. The following assumption on the $\lambda_{j;\Vb}$'s, under which $\betab _2,\ldots,\betab_k$ need not be identified, is thus minimal in that case. \renewcommand{\theassumption}{$(\mathrm{A}\pr_1)$} \begin{assumption}\label{assuApr1} The eigenvalues of the shape matrix $\Vb$ are such that $\lam_{1;\Vb} > \lam_{2;\Vb} \geq\cdots\geq\lam_{k;\Vb}>0$. \end{assumption} Under Assumption~\ref{assuApr1}, $\Thetab$ is broadened into a larger parameter space $\Thetab\pr_1$, which does not provide a valid parametrization anymore, and for which the ULAN property of Proposition~\ref{LAN} below no longer holds. As we shall see, all the tests we are proposing for $\mathcal{H}^\betab_0$ nevertheless remain valid under the extended null hypothesis $ {\mathcal H}_{0;1}^{\betab\prime}$ resulting from weakening~\ref{assuA} into \ref{assuApr1}. Note that, when the null hypothesis deals with $\betab _q$ instead of $\betab_1$, the appropriate weakening of Assumption~\ref{assuA} is the following. \renewcommand{\theassumption}{$(\mathrm{A}\pr_q)$} \begin{assumption}\label{assuAprq} The eigenvalues of the shape matrix $\Vb$ are such that $\lam_{1;\Vb}\geq\cdots\geq\lambda_{q-1;\Vb}> \lam_{q;\Vb}>\lambda_{q+1;\Vb} \geq\cdots\geq\lam_{k;\Vb}>0$. \end{assumption} This yields enlarged parameter space $\Thetab\pr_q$ and null hypothesis ${\mathcal H}_{0;q}^{\betab\prime}$. Similarly, the null hypothesis $ {\mathcal H}_{0}^{\Lamb}$ requires the identifiability of the groups of $q$ largest (hence $k-q$ smallest) eigenvalues; within each group, however, eigenvalues may coincide, yielding the following assumption.
\renewcommand{\theassumption}{$(\mathrm{A}^{\prime\prime}_q)$} \begin{assumption}\label{assuAq} The eigenvalues of the shape matrix $\Vb$ are such that $\lam_{1;\Vb}\geq\cdots\geq\lambda_{q-1;\Vb}\geq \lam_{q;\Vb}>\lambda_{q+1;\Vb} \geq\cdots\geq\lam_{k;\Vb}>0$. \end{assumption} This yields enlarged parameter space $\Thetab^{\prime\prime}_q$ and null hypothesis $ {\mathcal H}_{0;q}^{\Lamb\prime\prime}$, say. As we shall see, the tests we are proposing for $ {\mathcal H}_{0}^{\Lamb}$ remain valid under $ {\mathcal H}_{0;q}^{\Lamb\prime\prime}$. \subsection{Curved ULAN experiments} \label{curvLAN} Uniform local asymptotic normality\break (ULAN) for the parametric families or \textit{experiments} $\mathcal{P}^{(n)}_{f_1}:=\{\mathrm{P}^{(n)}_{\varthetab; f_1} \dvtx \varthetab\in\Thetab \} $, with classical root-$n$ rate, is the main technical tool of this\vadjust{\goodbreak} paper. For any $\varthetab:= ({\bolds\theta}\pr, \sigma^{2}$, $(\dvecrond\Lamb_{\Vb})\pr, (\vecop\betab)\pr)\pr$ $\in \Thetab$, a~\textit{local alternative} is a sequence $\varthetab ^{(n)}\in\Thetab$ such that $(\varthetab^{(n)}- \varthetab) $ is $ O(n^{-1/2})$. For any such $\varthetab^{(n)}$, consider a further sequence $\varthetab^{(n)} +n^{-1/2} \taub^{(n)}$, with $\taub^{(n)} = ( (\taub^{\I(n)})\pr, \tau^{\II(n)} , (\taub^{\III(n)})\pr,\break (\taub^{\IV(n)})\pr)\pr$ such that $\sup_{n} \taub\npr\taub^{(n)} < \infty$ and $\varthetab^{(n)} +n^{-1/2} \taub^{(n)} \in\Thetab$ for all $n$. 
Note that such $\taub^{(n)}$ exist: $\taub^{\I(n)}$ can be any bounded sequence of $\rr^k$, $\tau ^{\II(n)}$ any bounded sequence with $\tau^{\II(n)}>-n^{1/2}\sigma ^{2(n)}$, $\taub^{\III(n)}$ any bounded sequence of real $(k-1)$-tuples $(\tau^{\III(n)}_1,\ldots, \tau^{\III(n)}_{k-1})$ such that \begin{eqnarray*} 0 &<& \lambda^{(n)}_{k;\Vb} + n^{-1/2}\tau^{\III(n)}_{k-1} < \cdots\\ &<&\lambda^{(n)}_{3;\Vb} + n^{-1/2} \tau^{\III(n)}_2 < \lambda^{(n)}_{2;\Vb} + n^{-1/2}\tau^{\III(n)}_1 \\ &<& \prod_{j=2}^k\bigl(\lambda^{(n)}_{j;\Vb} + n^{-1/2}\tau^{\III (n)}_{j-1}\bigr)^{-1} , \end{eqnarray*} which ensures that the perturbed eigenvalues $\lambda_{j;\Vb }^{(n)}+n^{-1/2}\ell^{(n)}_j$, with \begin{eqnarray}\label{ell1} \ell^{(n)}_1 :\!&=& n^{1/2} \Biggl(\prod_{j=2}^k\bigl(\lambda^{(n)}_{j;\Vb} + n^{-1/2}\tau^{\III(n)}_{j-1}\bigr)^{-1} - \lambda^{(n)}_{1;\Vb} \Biggr) \nonumber\\[-8pt]\\[-8pt] &=& -\lambda^{(n)}_{1;{\Vb}} \sum_{j=2}^k \bigl(\lambda^{(n)}_{j;\Vb }\bigr)^{-1}\tau^{\III(n)}_{j-1} + O(n^{-1/2})\nonumber \end{eqnarray} and $(\ell^{(n)}_2,\ldots,\ell^{(n)}_k):=\taub^{\III(n)}$, still satisfy Assumption~\ref{assuA} and yield determinant value one. Writing ${\bolds\ell}^{(n)}$ for the diagonal $k\times k$ matrix with diagonal elements $\ell^{(n)}_1,\ldots, \ell^{(n)}_k$, we then have \begin{eqnarray*} \operatorname{tr}\bigl(\bigl(\Lamb_{\Vb}^{(n)}\bigr)^{-1} {\bolds\ell}^{(n)}\bigr)&=& \bigl(\lambda^{(n)}_{1;{\Vb}}\bigr)^{-1} \Biggl[ -\lambda^{(n)}_{1;{\Vb}} \sum_{j=2}^k \bigl(\lambda^{(n)}_{j;{\Vb }}\bigr)^{-1}\tau^{\III(n)}_{j-1} + O(n^{-1/2}) \Biggr] \\ &&{} + \sum_{j=2}^k \bigl(\lambda^{(n)}_{j;{\Vb}}\bigr)^{-1} \tau^{\III(n)}_{j-1} \\ &=& O(n^{-1/2}). 
\end{eqnarray*} Finally, denote by $\mathbf{M}_{k}\pr(\lambda_2,\ldots,\lambda_k)= ( -\lambda_1(\lambda_2^{-1},\ldots, \lambda_k^{-1})\pr {\,}\vdots{\,}\mathbf{I}_{k-1} )\pr$ the value at $(\lambda_2,\ldots , \lambda_k)$ of the Jacobian matrix of \[ ( \lambda_2, \ldots, \lambda_k) \mapsto\Biggl(\lambda_1:=\prod_{j=2}^k\lambda_j^{-1},\lambda_2,\ldots, \lambda_k\Biggr)\pr. \] Letting $\Lamb:= \operatorname{diag}(\lambda_1,\lambda_2,\ldots, \lambda_k)$, we have $\mathbf{M}_{k} \pr(\lambda_2,\ldots,\lambda _k)\dvecrond(\mathbf{l})= \dvec(\mathbf{l})$ for any $k \times k$ real matrix $\mathbf{l}$ such that $\operatorname{tr}(\Lamb^{-1} \mathbf {l})=0$. Indeed, \begin{eqnarray*} \mathbf{M}_{k} \pr(\lambda_2,\ldots,\lambda_k) \dvecrond(\mathbf{l}) &=& \bigl( -\lambda_1(\lambda_2^{-1},\ldots, \lambda_k^{-1})\pr {\,}\vdots{\,}\mathbf{I}_{k-1} \bigr)\pr(\dvecrond\mathbf{l}) \\ &=& \bigl( - \lambda_{1 } \bigl(\operatorname{tr}(\Lamb^{-1} \mathbf{l})- (\lambda _{1})^{-1} \mathbf{l}_{11}\bigr) {\,}\vdots{\,} (\dvecrond\mathbf{l})' \bigr)'\\ &=& \dvec(\mathbf{l}), \end{eqnarray*} an identity that will be used later on for $\mathbf{M}_{k}^{{\bolds \Lambda}_\Vb}:=\mathbf{M}_{k}(\dvecrond({\bolds\Lambda}_{\Vb}))$. The problem is slightly more delicate for $\taub^{\IV(n)}$, which must be such that $\vecop({\betab}^{(n)}) + n^{-1/2} \taub^{\IV(n)}$ remains in $\vecop({\mathcal SO}_{k})$. That is, $\taub^{\IV(n)}$ must be of the form $ \taub^{\IV(n)}=\vecop(\mathbf{b}^{(n)})$, with \begin{eqnarray}\label{antisym} \mathbf{0} &=& \bigl({\betab}^{(n)}+ n^{-1/2} \mathbf{b}^{(n)} \bigr)\pr \bigl({\betab}^{(n)}+ n^{-1/2} \mathbf{b}^{(n)} \bigr)-\mathbf{I}_k \nonumber\\[-8pt]\\[-8pt] &=& n^{-1/2} \bigl( \betab^{(n)\prime} \mathbf{b}^{(n)}+ \mathbf {b}^{(n)\prime} \betab^{(n)} \bigr) + n^{-1}\mathbf{b}^{(n)\prime}\mathbf{b}^{(n)}.\nonumber \end{eqnarray} That is, $\betab^{(n)\prime} \mathbf{b}^{(n)}+ n^{-1/2}\mathbf {b}^{(n)\prime}\mathbf{b}^{(n)}/2$ should be skew-symmetric.
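Constraint~(\ref{antisym}) can be verified numerically by perturbing $\betab$ along the exponential path $\betab\exp(n^{-1/2}\mathbf{S})$, $\mathbf{S}$ skew-symmetric, which stays in ${\mathcal SO}_k$ exactly. A sketch (Python with NumPy/SciPy; not part of the formal development, all specific matrices being arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
k, n = 3, 10_000
eps = n ** -0.5                                   # the n^{-1/2} local rate

# a point beta of SO_k (exponential of a skew-symmetric matrix)
A = rng.standard_normal((k, k))
beta = expm(A - A.T)

# exact perturbation along SO_k: beta + eps*b := beta exp(eps*S)
S = 0.1 * (rng.standard_normal((k, k)))
S = S - S.T                                       # small skew-symmetric direction
beta_new = beta @ expm(eps * S)
assert np.allclose(beta_new.T @ beta_new, np.eye(k))   # still orthogonal

b = (beta_new - beta) / eps
# identity (antisym): beta'b + (eps/2) b'b must be skew-symmetric
M = beta.T @ b + 0.5 * eps * (b.T @ b)
assert np.allclose(M + M.T, np.zeros((k, k)))

# to first order, beta'b is the skew-symmetric direction itself
assert np.allclose(beta.T @ b, S, atol=1e-2)
```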
Such local perturbations admit an intuitive interpretation: we have indeed \[ {\betab}^{(n)}+ n^{-1/2} \mathbf{b}^{(n)}= \betab^{(n)}\betab ^{(n)\prime} \bigl({\betab}^{(n)}+ n^{-1/2} \mathbf{b}^{(n)}\bigr) = \betab ^{(n)}\bigl(\mathbf{I}_k + n^{-1/2} \betab^{(n)\prime} \mathbf{b}^{(n)}\bigr), \] an expression in which $\mathbf{I}_k + n^{-1/2}\betab^{(n)\prime} \mathbf{b}^{(n)}$, up to an $O(n^{-1})$ quantity, coincides with the first-order approximation of the exponential of a skew-symmetric matrix, and therefore can be interpreted as an infinitesimal rotation. Identity~(\ref{antisym}) provides a characterization of ${\mathcal SO}_k$ in the vicinity of $\betab^{(n)}$. The tangent space [in $\rr ^{k^2} $, at $\operatorname{vec}(\betab)$] to $\operatorname{vec}({\mathcal SO}_k)$ is obtained by linearizing~(\ref{antisym}). More precisely, this tangent space is of the form \begin{eqnarray} \label{tangentbetab} &&\{ \operatorname{vec}(\betab+ \mathbf{b}) \vert \operatorname{vec}(\mathbf{b})\in\rr^{k^2} \mbox{ and } \betab\pr\mathbf{b} + \mathbf{b}\pr\betab= \mathbf{0} \} \nonumber\\[-8pt]\\[-8pt] &&\qquad = \{ \operatorname{vec}(\betab+ \mathbf{b}) \vert \operatorname{vec}(\mathbf{b})\in\rr^{k^2} \mbox{ and } \betab\pr\mathbf{b} \mbox{ skew-symmetric} \}.\nonumber \end{eqnarray} We then have the following result (see the \hyperref[app]{Appendix} for the proof).
\begin{Prop}\label{LAN} The experiment $\mathcal{ P}^{(n)}_{f_1} := \{ \mathrm{P}^{(n)}_{\varthetab;f_1} \vert {\bolds\vartheta}\in\Thetab\}$ is ULAN, with central sequence $\Deltab^{(n)}_{{\bolds\vartheta};f_1}:= ( \Deltab^{\I\prime} _{{\bolds\vartheta};f_1}, \Delta^{\II} _{{\bolds\vartheta};f_1} , \Deltab^{\III\prime}_{{\bolds\vartheta};f_1}, \Deltab^{\IV\prime} _{{\bolds\vartheta};f_1} )\pr$, where [with $d_i:= d_i^{(n)}({\bolds\theta}, \Vb)$ and $\mathbf {U}_i:=\mathbf{U}^{(n)}_i({\bolds\theta}, \Vb)$ as defined in (\ref {Ud}), and letting $\mathbf{M}_{k}^{{\bolds\Lambda}_\Vb}:=\mathbf {M}_{k}(\dvecrond{\bolds\Lambda}_\Vb)$], \begin{eqnarray*} \Deltab^{\I} _{{\bolds\vartheta};f_1} &:=& \frac{1}{\sqrt{n}\sigma} \sum_{i=1}^{n} \varphi_{f_1} \biggl(\frac{d_{i}}{\sigma} \biggr) {\Vb}^{-1/2}\mathbf{U}_{i}, \\ \Delta^{\II} _{{\bolds\vartheta};f_1} &:=& \frac{1}{2\sqrt{n} \sigma^{2}} \sum_{i=1}^{n} \biggl( \varphi_{f_1} \biggl(\frac{d_{i}}{\sigma} \biggr) \frac{d_{i}}{\sigma} -k \biggr), \\ \Deltab^{\III} _{{\bolds\vartheta};f_1} &:=& \frac{1}{2\sqrt{n}} \mathbf{M}_{k}^{{\bolds\Lambda}_\Vb} \mathbf{H}_{k} ( \Lamb_{\Vb}^{-1/2}\betab\pr)^{\otimes2} \sum_{i=1}^{n} \vecop\biggl( \varphi_{f_1} \biggl(\frac{d_{i}}{\sigma} \biggr) \frac{d_{i}}{\sigma} \mathbf{U}_{i}\mathbf{U}\pr _{i} \biggr) \end{eqnarray*} and \[ \Deltab^{\IV} _{{\bolds\vartheta};f_1} := \frac{1}{2\sqrt{n}} \mathbf{G}_{k}^{\betab}\mathbf{L}^{\betab, \Lamb_{\Vb}}_{k} ( {\Vb }^{\otimes2} )^{ -1/2} \sum_{i=1}^{n} \vecop\biggl( \varphi_{f_1} \biggl(\frac{d_{i}}{\sigma} \biggr) \frac{d_{i}}{\sigma} \mathbf{U}_{i}\mathbf{U}\pr _{i} \biggr), \] with ${\Gb}_{k}^{\betab}:=({\Gb}_{k;12}^{\betab} {\Gb }_{k;13}^{\betab} \cdots {\Gb}_{k;(k-1)k}^{\betab}), {\Gb}_{k;jh}^{\betab }:=\mathbf{e}_{j} \otimes {\betab}_{h}-\mathbf{e}_{h} \otimes{\betab}_{j}$ and \[ \mathbf{L}^{\betab, \Lamb_{\Vb}}_{k}:=\bigl(\mathbf{L}^{\betab, \Lamb_{\Vb }}_{k;12} \mathbf{L}^{\betab, \Lamb_{\Vb}}_{k;13} \cdots\mathbf {L}^{\betab, 
\Lamb_{\Vb}}_{k;(k-1)k}\bigr)^{\prime},\qquad \mathbf{L}^{\betab, \Lamb_{\Vb}}_{k;jh}:=(\lambda_{h;\Vb}-\lambda _{j;\Vb})(\betab_{h} \otimes \betab_{j}), \] and with block-diagonal information matrix \begin{equation}\label{Gamb} \Gamb_{{\bolds\vartheta};f_1}= \operatorname{diag}( \Gamb^{\I}_{{\bolds\vartheta};f_1}, \Gamma^{\II}_{{\bolds \vartheta};f_1},\Gamb^{\III}_{{\bolds\vartheta};f_1},\Gamb^{\IV }_{{\bolds\vartheta};f_1}), \end{equation} where, defining $\mathbf{D}_k(\Lamb_\Vb):= \frac{1}{4} \mathbf {M}_{k}^{{\bolds\Lambda}_\Vb} \mathbf{H}_{k} [ \mathbf{I}_{k^2} + \mathbf{K}_k ] (\Lamb_{\Vb}^{-1})^{\otimes2} \mathbf{H}_{k}\pr (\mathbf{M}_{k}^{{\bolds\Lambda}_\Vb})\pr$, \[ \Gamb^{\I}_{{\bolds\vartheta};f_1}= \frac{\ikf}{k \sigma ^{2}}{\Vb}^{-1},\qquad \Gamma^{\II}_{{\bolds\vartheta};f_1}=\frac{\mathcal {J}_k(f_1)-k^{2}}{4 \sigma^{4}},\qquad \Gamb^{\III}_{{\bolds\vartheta};f_1}=\frac{\mathcal {J}_k(f_1)}{k(k+2)}\mathbf{D}_k(\Lamb_\Vb) \] and \[ \Gamb^{\IV}_{{\bolds\vartheta};f_1} := \frac{1}{4}\frac {\mathcal{J}_k(f_1)}{k(k+2)} \mathbf{G}_{k}^{\betab} \operatorname {diag}\bigl(\nu_{12}^{-1} ,\nu_{13}^{-1}, \ldots, \nu_{(k-1)k}^{-1}\bigr) (\mathbf{G}_{k}^{\betab })\pr, \] where $\nu_{jh}:= \lambda_{j;\Vb} \lambda_{h;\Vb}/(\lambda_{j;\Vb}-\lambda_{h;\Vb})^{2}$. 
More precisely, for any local alternative $\varthetab^{(n)}$ and any bounded sequence $\taub^{(n)}$ such that $\varthetab^{(n)}+ n^{-1/2}\taub^{(n)}\in\Thetab$, we have, under $\mathrm{P}^{(n)} _{ \varthetab^{(n)};f_1}$, \begin{eqnarray*} \Lambda^{(n)}_{\varthetab^{(n)}+ n^{-1/2}\taub^{(n)}/ \varthetab^{(n)};f_1} :\!&=&\log\bigl(d\mathrm{P}^{(n)}_{\varthetab^{(n)}+ n^{-1/2}\taub^{(n)};f_1 }/ d\mathrm{P}^{(n)}_{\varthetab^{(n)};f_1 } \bigr) \\ &=& \bigl(\taub^{(n)}\bigr)\pr\Deltab^{(n)}_{\varthetab^{(n)};f_1 } -\tfrac{1}{2}\bigl(\taub^{(n)}\bigr)\pr\Gamb_{\varthetab;f_1}\taub^{(n)}+ o_{\mathrm P}(1) \end{eqnarray*} and $ \Deltab_{\varthetab^{(n)};f_1 } \stackrel{\mathcal {L}}{\longrightarrow} \mathcal{N}( \mathbf{0}, \Gamb_{\varthetab;f_1} ) $, as $\ny$. \end{Prop} The block-diagonal structure of the information matrix ${\Gamb }_{\varthetab; f_{1}}$ implies that inference on $\betab$ (resp., $\Lamb_\Vb$) can be conducted under unspecified ${\bolds\theta}$, $\sigma$ and ${\bolds\Lambda_{\mathbf{V}}}$ (resp., $\betab$) as if the latter were known, at no asymptotic cost. The orthogonality between the eigenvalue and eigenvector parts of the central sequence is structural, while that between the eigenvalue and eigenvector parts on one hand and the scale parameter part on the other hand is entirely due to the determinant-based parametrization of scale [see \citet{HP06b} or \citet{P08}]. Note that ${\Gamb_{\varthetab ; f_{1}}^{\IV}}$, with rank $k(k-1)/2 <k^2$, is not invertible. \subsection{Locally asymptotically optimal tests for differentiable hypotheses in curved ULAN experiments}\label{curved} Before addressing testing problems involving eigenvalues and eigenvectors, we need a general theory for locally asymptotically optimal tests in curved ULAN experiments, which we are developing in this section. 
Consider a ULAN sequence of experiments $\{\mathrm{P}^{(n)}_{\bolds\xi} \dvtx{\bolds\xi} \in\Xib\}$, where $ \Xib$ is an open subset of $ \mathbb{R}^m$, with central sequence $\Deltab_{\bolds\xi}$ and information $\Gamb_{\bolds\xi}$. For simplicity of exposition, assume that $\Gamb_{\bolds\xi}$ has full rank $m$ for any ${\bolds\xi}$. Let $\hbar\dvtx\Xib\to\mathbb{R}^p$, $p\geq m$, be a continuously differentiable mapping such that the Jacobian matrix $D\hbar({\bolds\xi})$ has full rank $m$ for all ${\bolds\xi}$, and consider the experiments $\{\mathrm{P}^{(n)}_\varthetab\dvtx\varthetab \in \Thetab:=\hbar(\Xib)\}$, where, with a slight abuse of notation, $\mathrm{P}^{(n)}_{\varthetab}:=\mathrm{P}^{(n)}_{\bolds\xi}$ for $\varthetab=\hbar({\bolds\xi})$. This sequence also is ULAN, with central sequence $\Deltab_\varthetab$ and information matrix $\Gamb_\varthetab$ such that, at $\varthetab=\hbar({\bolds\xi})$ and up to $o^{(n)}_{\mathrm{P}_\varthetab}(1)$'s which, for simplicity, we omit here [see~(\ref{LAQvartheta2}) and the proof of Lemma~\ref{LElemme}], $\Deltab_{\bolds\xi}=D\hbar\pr({\bolds\xi})\Deltab _\varthetab$ and $\Gamb_{\bolds\xi}=D\hbar\pr({\bolds\xi })\Gamb_\varthetab D\hbar({\bolds\xi})$---throughout, we write $D\hbar\pr(\cdot)$, $D\bbar\pr(\cdot)$, etc., instead of $(D\hbar(\cdot))\pr $, $(D\bbar(\cdot))\pr$, etc. In general, $\Thetab$ is a nonlinear manifold of $\rr^p$; the experiment parametrized by $\Thetab$ then is a \textit{curved} experiment. Next, denoting by $C$ an $r$-dimensional manifold in $\mathbb{R}^p$, $r<p$, consider the null hypothesis $\mathcal{H}_0\dvtx\varthetab\in C \cap\Thetab$---in general, a~nonlinear restriction of the parameter space $\Thetab$. The same hypothesis can be expressed in the ${\bolds\xi }$-parametrization as $\mathcal{H}_0\dvtx{\bolds\xi} \in\Xib_0$, where $\Xib_0:=\hbar^{-1}(C \cap\Thetab)$ is an ($\ell $-dimensional, say) submanifold of~$\Xib$.
Fix ${\bolds\xi}_0 =\hbar ^{-1}(\varthetab_0)\in\Xib_0$, and let $\lbar\dvtx B\subset \mathbb{R}^\ell\to\Xib$ be a local (at ${\bolds\xi}_0$) chart for this manifold. Define ${\bolds\alpha}_0:=\lbar^{-1}({\bolds\xi}_0)$. At ${\bolds\xi}_0$, $\mathcal{H}_0$ is linearized into $\mathcal{H}_{{\bolds\xi}_0}\dvtx{\bolds\xi} \in{\bolds\xi}_0 + \mathcal{M}(D\lbar({\bolds\alpha}_0))$, where $D\lbar({\bolds \alpha}_0)$ is the Jacobian matrix of $\lbar$ (with rank $\ell$) computed at ${\bolds\alpha}_0$ and $\mathcal{M}(\mathbf{A})$ denotes the vector space spanned by the columns of a matrix $\mathbf A$. A locally asymptotically most stringent test statistic (at ${\bolds\xi}_0$) for $\mathcal{H}_{{\bolds\xi}_0}$ then is \begin{equation}\label{lams} Q_{{\bolds\xi}_0}:= \Deltab_{{\bolds\xi}_0}\pr \bigl( \Gamb_{{\bolds\xi}_0}^{-1} - D\lbar({\bolds\alpha}_0) ( D\lbar \pr({\bolds\alpha}_0)\Gamb_{{\bolds\xi}_0}D\lbar({\bolds\alpha }_0))^{-1} D\lbar\pr({\bolds\alpha}_0) \bigr) \Deltab_{{\bolds\xi}_0} \end{equation} [see Section 11.9 of \citet{Lcam86}]. This test statistic is nothing else but the squared Euclidean norm of the orthogonal projection of the standardized central sequence $ \Gamb _{{\bolds\xi}_0}^{-1/2}\Deltab_{{\bolds\xi}_0}$ onto the orthogonal complement of $\mathcal{M}( \Gamb_{{\bolds\xi}_0}^{1/2}D\lbar ({\bolds\alpha}_0))$. In view of ULAN, the asymptotic behavior of $\Deltab_{{\bolds\xi }_0}$ is the same under local alternatives in $ \Xib_0$ as under local alternatives in $ {\bolds\xi}_0 + \mathcal{M}(D\lbar({\bolds\alpha }_0))$, so that the same test statistic~$ Q_{{\bolds\xi}_0}$, which (at ${\bolds\xi}_0$) is locally asymptotically most stringent for $\mathcal{H}_{{\bolds\xi}_0}$, is also locally asymptotically most stringent for $\mathcal{H}_{0}$.
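The projection interpretation of~(\ref{lams}) is easily checked numerically. The sketch below (Python with NumPy; not part of the formal development, with arbitrary placeholder values for $\Gamb_{{\bolds\xi}_0}$, $D\lbar({\bolds\alpha}_0)$ and $\Deltab_{{\bolds\xi}_0}$) verifies that the quadratic form equals the squared norm of the residual of $\Gamb_{{\bolds\xi}_0}^{-1/2}\Deltab_{{\bolds\xi}_0}$ after projection onto $\mathcal{M}(\Gamb_{{\bolds\xi}_0}^{1/2}D\lbar({\bolds\alpha}_0))$.

```python
import numpy as np

rng = np.random.default_rng(7)
m, ell = 5, 2

# placeholder ingredients: information Gamma (positive definite), Jacobian
# D (full column rank ell), and central sequence Delta
A = rng.standard_normal((m, m))
Gamma = A @ A.T + m * np.eye(m)
D = rng.standard_normal((m, ell))
Delta = rng.standard_normal(m)

# the quadratic form (lams)
Q1 = Delta @ (np.linalg.inv(Gamma)
              - D @ np.linalg.inv(D.T @ Gamma @ D) @ D.T) @ Delta

# same quantity as a squared residual norm: project Gamma^{-1/2} Delta onto
# the orthogonal complement of the column space of Gamma^{1/2} D
w, V = np.linalg.eigh(Gamma)
G_half = V @ np.diag(np.sqrt(w)) @ V.T          # symmetric square root of Gamma
W = G_half @ D
P = W @ np.linalg.inv(W.T @ W) @ W.T            # projector onto M(Gamma^{1/2} D)
z = np.linalg.solve(G_half, Delta)              # Gamma^{-1/2} Delta
Q2 = np.sum(((np.eye(m) - P) @ z) ** 2)

assert np.isclose(Q1, Q2)                       # the two expressions coincide
assert Q2 >= 0.0                                # a squared norm, hence nonnegative
```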
In many cases, however, it is highly desirable to express the most stringent statistic in the curved $\Thetab$-parametrization, which, as is the case for the eigenvalues/eigenvectors problems considered in this work, is the natural parametrization. This is the objective of the following result (see the \hyperref[app]{Appendix} for the proof). \begin{Prop}\label{Optitest} With the same notation as above, a~locally asymptotically most stringent statistic (at $\varthetab_0$) for testing $\mathcal{H}_{0}\dvtx\varthetab\in C\cap\Thetab$ is \begin{equation} \label{Qxi} Q_{{\bolds\xi}_{0}} = Q_{\varthetab_{0}}:=\Deltab_{\varthetab_0}\pr \bigl( \Gamb_{\varthetab_0}^{-} - D\btilde(\etab_0) ( D\btilde\pr (\etab_0)\Gamb_{\varthetab_0}D\btilde(\etab_0))^{-} D\btilde\pr (\etab_0) \bigr) \Deltab_{\varthetab_0},\hspace*{-34pt} \end{equation} where $\btilde\dvtx A\subset\mathbb{R}^\ell\to\mathbb{R}^p$ is a local (at $\varthetab_0$) chart for the tangent (still at $\varthetab _0$) to the manifold $C\cap\Thetab$, $\etab_{0}:= \btilde ^{-1}(\varthetab_{0})$, and $\Ab^-$ denotes the Moore--Penrose inverse of~$\Ab$. \end{Prop} Hence, a~locally asymptotically most stringent (at ${\bolds\xi}_0$ or $\varthetab_0$, depending on the para\-metrization) test for $\mathcal {H}_0$ can be based on either of the two quadratic forms $Q_{{\bolds \xi}_0}$ or $Q_{\varthetab_0}$, which coincide, and are asymptotically chi-square [with $(m-\ell)$ degrees of freedom] under $\mathrm {P}_{{\bolds\xi}_0}^{(n)}= \mathrm{P}_{\varthetab_0}^{(n)}$, for $\varthetab_0 = \hbar({\bolds\xi}_0)$. For practical implementation, of course, an adequately discretized root-$n$ consistent estimator has to be substituted for the unknown $\varthetab _0 $ or ${\bolds\xi}_0$---which asymptotically does not affect the test statistic. Provided that $\Xib$ remains an open subset of $\rr^m$, the assumption of a full-rank information matrix $\Gamb_{\bolds\xi}$ is not required.
Hallin and Puri [(\citeyear{HP94}), Lemma~5.12] indeed have shown, in the case of ARMA experiments, that~(\ref{lams}) remains locally asymptotically most stringent provided that generalized inverses (not necessarily Moore--Penrose ones) are substituted for the inverses of noninvertible matrices, yielding \[ Q_{{\bolds\xi}_0}:= \Deltab_{{\bolds\xi}_0}\pr \bigl( \Gamb_{{\bolds\xi}_0}^{-} - D\lbar({\bolds\alpha}_0) ( D\lbar \pr({\bolds\alpha}_0)\Gamb_{{\bolds\xi}_0}D\lbar({\bolds\alpha }_0))^{-} D\lbar\pr({\bolds\alpha}_0) \bigr) \Deltab_{{\bolds\xi}_0}. \] The same reasoning as in the proof of Proposition~\ref{Optitest} then applies, mutatis mutandis, when ``translating'' $Q_{{\bolds\xi}_0}$ into $Q_{\varthetab_0}$ (with appropriate degrees of freedom). \section{Parametrically optimal tests for principal components} \label{paramtests} \subsection{Optimal parametric tests for eigenvectors}\label{howeigenvectors} Testing the hypothesis $\mathcal{H}^{\betab}_0$ on eigenvectors is a particular case of the problem considered in the previous section. The $\operatorname{vech}^+(\iotab)$ parametrization [$\iotab$ an arbitrary skew-symmetric $(k\times k)$ matrix] yields a standard ULAN experiment, with parameter \[ {\bolds\xi}:= ({\bolds\theta}\pr, \sigma^2, (\dvecrond(\Lamb _{\Vb}))\pr, (\operatorname{vech}^+(\iotab))\pr )\pr\in\rr^k\times\rr^+\times\mathcal{C}^{k-1}\times\rr ^{k(k-1)/2}=:\Xib, \] hence $m= k(k+3)/2$, while Proposition~\ref{LAN} provides the curved ULAN experiment, with parameter $\varthetab\in\Thetab\subset\rr ^{p}$ and $p=k(k+2)$. ULAN for the ${\bolds\xi}$-experiment readily follows from the fact that the mapping $ \operatorname{vech}^+(\iotab )\mapsto\operatorname{vec}(\betab)= \operatorname{vec}(\exp(\iotab ))$ is continuously differentiable. 
As explained before, the block-diagonal structure of the information matrix~(\ref{Gamb}) implies that locally asymptotically optimal inference about $\betab$ can be based on $\Deltab^{\IV}_{\varthetab ;f_1}$ only, as if ${\bolds\theta}$, $\sigma^2$ and $\dvecrond ({\bolds\Lambda_\Vb})$ were specified. Since this also allows for simpler exposition and lighter notation, let us assume that these parameters take on specified values ${\bolds\theta}$, $\sigma^2$ and $(\lambda_{2; \Vb},\ldots, \lambda_{k; \Vb})$, respectively. The resulting experiment then is parametrized either by $\vecop \betab\in\vecop({\mathcal SO}_k)\subset\rr^{k^2}$ (playing the role of $\varthetab\in\Thetab\subset\rr^{p}$ in the notation of Proposition~\ref{Optitest}) or by $\operatorname{vech}^+(\iotab)\in \rr^{k(k-1)/2}$ (playing the role of ${\bolds\xi}$). In this experiment, the null hypothesis $\mathcal{H}^{\betab}_0$ consists of the intersection of the linear manifold $C:= ( \betab^{0\prime},\mathbf{0}_{1\times(k-1)k} )\pr+ \mathcal{M}(\Upsi )$, where $\Upsi := ( \mathbf{0}_{k(k-1)\times k},\break \mathbf{I}_{k(k-1)} )\pr, $ with the nonlinear manifold $\operatorname{vec}({\mathcal SO}_k)$. Let $\betab _{0}:=(\betab^{0}, \betab_{2} ,\ldots, \betab_{k})$ be such that $\operatorname{vec}(\betab_0)$ belongs to that intersection. In view of Proposition~\ref{Optitest}, a~most stringent test statistic [at $\operatorname{vec}(\betab_{0})$] for $\mathcal{H}^{\betab}_0$ requires a chart for the tangent to $C\cap\vecop({\mathcal SO}_k)$ at $\operatorname{vec}(\betab_0)$. It follows from~(\ref{tangentbetab}) that this tangent space reduces to \[ \{ \vecop( \betab_{0} + \mathbf{b}) | \mathbf{b}:=(\mathbf{0} , \mathbf {b}_{2}, \ldots, \mathbf{b}_{k}) \mbox{ such that } \betab_{0}\pr \mathbf{b} + {\mathbf{b}}\pr\betab_{0}= \mathbf{0} \}.
\] Solving for $\operatorname{vec}(\mathbf{b})=(\mathbf{0}\pr, \mathbf{b}_{2}\pr, \ldots, \mathbf{b}_{k}\pr)\pr$ the system of constraints $\betab_{0}\pr\mathbf {b} + {\mathbf{b}}\pr\betab_{0}= \mathbf{0}$ yields $\operatorname{vec}(\mathbf{b}) \in \mathcal {M}(\mathbf{P}_{k}^{\betab_{0}})$, where \begin{equation}\label{defP}\quad \mathbf{P}_{k}^{\betab_{0}}:= \pmatrix{ \mathbf{0}_{k \times k(k-1)}\vspace*{2pt}\cr \mathbf{I}_{k-1}\otimes[ \mathbf{I}_k- \betab^{0}\betab^{0\prime } ] - \displaystyle\sum_{i,j=1}^{k-1} [\mathbf{e}_{i;k-1}\mathbf{e}_{j;k-1}\pr \otimes \betab_{j+1}\betab_{i+1}\pr]} \end{equation} (with $\mathbf{e}_{i;k-1}$ denoting the $i$th vector of the canonical basis of $\R^{k-1}$). A~local chart for the tangent space of interest is then simply $\btilde \dvtx{\bolds\eta}\in\rr^{(k-1)k}\mapsto\btilde({\bolds\eta}):= \vecop(\betab_0) + \mathbf{P}_{k}^{\betab_{0}}{\bolds\eta}$, with $\etab_0=\btilde^{-1}(\vecop(\betab_{0}))=\mathbf{0}_{(k-1)k}$ and $D\btilde(\etab_0)=\mathbf{P}_{k}^{\betab_{0}}$. 
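As an elementary sanity check of~(\ref{defP}) (ours, and not needed in the sequel), take $k=2$: since $(\betab^{0},\betab_{2})$ then is an orthonormal basis of $\R^{2}$, $\mathbf{I}_{2}=\betab^{0}\betab^{0\prime}+\betab_{2}\betab_{2}\pr$ and the double sum reduces to its single term, so that

```latex
% k = 2 check: the lower block of P_2 vanishes, hence the tangent
% space is trivial -- fixing the first eigenvector in SO_2 leaves
% no rotational freedom, as expected.
\mathbf{P}_{2}^{\betab_{0}}
=\pmatrix{\mathbf{0}_{2\times2}\vspace*{2pt}\cr
[\mathbf{I}_{2}-\betab^{0}\betab^{0\prime}]-\betab_{2}\betab_{2}\pr}
=\mathbf{0}_{4\times2}.
```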
Letting $\varthetab_0:=({\bolds\theta}' , \sigma^2, (\dvecrond \Lamb_{\Vb})' ,(\vecop\betab_0)')'$, the test statistic (\ref {Qxi}) takes the form \begin{eqnarray}\label{Qparam}\quad Q_{\varthetab_0 ; f_{1}}^{(n)} &=& \Deltab_{\varthetab_0;f_1}^{\IV\prime} [(\Gamb^{\IV}_{\varthetab_0 ;f_1})^{-}- \mathbf{P}_{k}^{\betab _{0}} ((\mathbf{P}_{k}^{\betab_{0}}) \pr\Gamb^{\IV }_{\varthetab_0 ;f_1} \mathbf{P}_{k}^{\betab_{0}} )^{-} (\mathbf {P}_{k}^{\betab_{0}}) \pr] \Deltab_{\varthetab_0 ;f_1}^{\IV} \nonumber\\[-8pt]\\[-8pt] &=&\frac{nk(k+2)}{ \mathcal{J}_k(f_1)} \sum_{j=2}^{k} \bigl( \betab_j\pr\mathbf{S}_{\varthetab_0;f_1}^{(n)}\betab^0\bigr)^2,\nonumber \end{eqnarray} with \begin{equation}\label{Sparam} \mathbf{S}_{\varthetab;f_1}^{(n)}:= \frac{1}{n} \sum_{i=1}^n \varphi_{f_1} \biggl(\frac{d_{i}({\bolds\theta},\Vb)}{\sigma} \biggr) \frac {d_{i}({\bolds\theta},\Vb)}{\sigma} \Ub_{i}({\bolds\theta},\Vb ) \Ub_{i}\pr({\bolds\theta},\Vb) , \end{equation} where $\Vb$ denotes the unique shape value associated with the parameter $\varthetab$. After simple algebra, we obtain \begin{eqnarray} \label{degreefreedom} &&\Gamb_{\varthetab_{0};f_1}^{\IV} [(\Gamb^{\IV}_{\varthetab_0 ;f_1})^{-}- \mathbf{P}_{k}^{\betab_{0}} ((\mathbf{P}_{k}^{\betab _{0}}) \pr\Gamb^{\IV}_{\varthetab_0 ;f_1} \mathbf{P}_{k}^{\betab _{0}} )^{-} (\mathbf{P}_{k}^{\betab_{0}}) \pr] \nonumber\\[-8pt]\\[-8pt] &&\qquad = \tfrac{1}{2}\mathbf{G}_{k}^{\betab_0} \operatorname{diag}\bigl(\mathbf{I}_{k-1},\mathbf{0}_{(k-2)(k-1)/2 \times (k-2)(k-1)/2}\bigr)(\mathbf{G}_{k}^{\betab_0})\pr, \nonumber \end{eqnarray} which is idempotent with rank $(k-1)$. 
Since, moreover, $\Deltab _{\varthetab_{0};f_1 }^{\IV}$, under $\mathrm{P}^{(n)}_{\varthetab _{0};f_{1}}$, is asymptotically $\mathcal{N}( \mathbf{0}, \Gamb _{\varthetab_{0};f_1}^{\IV})$, Theorem 9.2.1 in \citet{RM71} then shows that $ Q_{\varthetab_0 ; f_{1}}^{(n)}$, still under $\mathrm {P}^{(n)}_{\varthetab_{0};f_{1}}$, is asymptotically chi-square with $(k-1)$ degrees of freedom. The resulting test, which rejects $\mathcal{H}_0^{\betab}$ at asymptotic level $\alpha$ whenever $ Q_{\varthetab_0 ; f_{1}}^{(n)}$ exceeds the $\alpha$-upper quantile $\chi^2_{k-1,1-\alpha}$ of the $\chi^2_{k-1}$ distribution, will be denoted as $\phi^{(n)}_{\betab; f_1}$.\vspace*{1pt} It is locally asymptotically most stringent, at $\varthetab_0$ and under correctly specified standardized radial density $f_1$ (an unrealistic assumption). Of course, even if $f_{1}$ were assumed known, $Q_{\varthetab_0 ; f_{1}}^{(n)}$ still depends on the unspecified ${\bolds\theta}, \sigma^2, \Lamb_{\Vb}$ and $\betab _2,\ldots,\betab_k$. In order to obtain a genuine test statistic, providing a locally asymptotically most stringent test at \textit{any} $\varthetab_0\in\mathcal{H}_0^\betab$ (with an obvious abuse of notation), we would need to replace those nuisance parameters with adequate estimates. We will not pursue this problem any further here, as it is of little practical interest for arbitrary density $f_1$. The same problem will be considered in Section~\ref{gausscase} for the Gaussian and pseudo-Gaussian versions of~(\ref{Qparam}), then in Section~\ref{ranktests} for the rank-based ones. \subsection{Optimal parametric tests for eigenvalues}\label{eigenvaluesprob} We now turn to the problem of testing the null hypothesis ${\mathcal H}_{0}^{\Lamb}\dvtx\sum_{j=q+1}^{k} \lambda_{j; \Vb}- p \sum _{j=1}^{k}\lambda_{j; \Vb}=0$ against alternatives of the form ${\mathcal H}_{1}^{\Lamb}\dvtx\sum_{j=q+1}^{k} \lambda_{j; \Vb}- p \sum_{j=1}^{k}\lambda_{j; \Vb}<0$, for given $p \in(0,1)$.
Letting \[ h\dvtx(\lambda_{2 }, \lambda_{3 }, \ldots,\lambda_{k })\pr\in \mathcal{C}^{k-1} \mapsto \sum_{j=q+1}^{k} \lambda_{j }- p \Biggl( \prod_{j=2}^{k}\lambda_{j }^{-1}+ \sum_{j=2}^{k}\lambda_{j } \Biggr) \] and recalling that $ \prod_{j=1}^{k} \lambda_{j;\Vb}=1$, ${\mathcal H}_{0}^{\Lamb}$ can be rewritten, in terms of $\dvecrond(\Lamb_{\Vb})$, as ${\mathcal H}_{0}^{\Lamb}\dvtx h( \dvecrond(\Lamb_{\Vb})) =0$, a~highly nonlinear but smooth constraint on $\dvecrond(\Lamb_{\Vb})$. It is easy to check that, when computed at $\dvecrond(\Lamb_{\Vb})$, the gradient of $h$ is \begin{eqnarray*} && \operatorname{grad} h(\dvecrond(\Lamb_{\Vb})) \\ &&\qquad= \bigl( p ( \lambda_{1;\Vb}\lambda_{2;\Vb}^{-1}-1), \ldots, p ( \lambda_{1;\Vb}\lambda_{q;\Vb}^{-1}-1), \\ &&\hspace*{36.74pt} 1+p ( \lambda_{1;\Vb} \lambda_{q+1; \Vb}^{-1}-1 ), \ldots, 1+p ( \lambda_{1;\Vb} \lambda_{k; \Vb}^{-1}-1 ) \bigr)\pr. \end{eqnarray*} Here again, in view of the block-diagonal form of the information matrix, we may restrict our attention to the $\dvecrond(\Lamb_{\Vb })$-part $\Deltab_{\varthetab;f_1}^{\III}$ of the central sequence as if ${\bolds\theta}$, $\sigma^2$ and $\betab$ were known; the parameter space then reduces to the $(k-1)$-dimensional open cone $\mathcal{C}^{k-1}$. Testing a nonlinear constraint on a parameter ranging over an open subset of $\rr^{k-1}$ is, however, much easier than the corresponding problem involving a curved experiment, irrespective of the possible noninvertibility of the information\vadjust{\goodbreak} matrix. In the noncurved experiment, indeed, a~linearized version ${\mathcal H}_{0,\mathrm{lin}}^{\Lamb} \dvtx\dvecrond(\Lamb_{\Vb})\in \dvecrond(\Lamb_0) + {\mathcal M}\ort(\operatorname{grad} h(\dvecrond\Lamb_0)) $ of ${\mathcal H}_{0}^{\Lamb}$ in the vicinity of $\dvecrond({\bolds \Lambda}_0)$ satisfying $h( \dvecrond\Lamb_0 )=0$ makes sense [${\mathcal M}\ort(\mathbf{A})$ denotes the orthogonal complement of ${\mathcal M}(\mathbf{A})$].
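A quick check of this gradient (ours) in the simplest case $k=2$, $q=1$, where $\lambda_{1;\Vb}=\lambda_{2;\Vb}^{-1}$ and $h$ depends on $\lambda_{2}$ only:

```latex
% Worked check (k = 2, q = 1): differentiating h with respect to
% lambda_2 reproduces the (single) j = q+1 entry of grad h above.
h(\lambda_{2})=\lambda_{2}-p(\lambda_{2}^{-1}+\lambda_{2}),
\qquad
h'(\lambda_{2})=1+p(\lambda_{2}^{-2}-1)
=1+p(\lambda_{1;\Vb}\lambda_{2;\Vb}^{-1}-1).
```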
And, as mentioned in Section \ref {curved}, under ULAN, the asymptotic behavior of $\Deltab^{\III} _{{\bolds\vartheta}_0;f_1}$, with $\varthetab_0=({\bolds\theta}\pr ,\sigma^2,(\dvecrond\Lamb_0)\pr,(\vecop\betab)\pr)\pr$, is locally the same under ${\mathcal H}_{0,\mathrm{lin}}^{\Lamb}$ as under ${\mathcal H}_{0}^{\Lamb}$. As for the ``linearized alternative'' ${\mathcal H}_{1,\mathrm{lin}}^{\Lamb} $ consisting of all $\dvecrond\Lamb$ values such that $(\dvecrond\Lamb- \dvecrond \Lamb_0)\pr\operatorname{grad} h(\dvecrond\Lamb_0)<0$, it locally and asymptotically coincides with ${\mathcal H}_{1}^{\Lamb}$: indeed, although the symmetric difference ${\mathcal H}_{1}^{\Lamb} \Delta {\mathcal H}_{1,\mathrm{lin}}^{\Lamb} $, for fixed $n$, is not empty, any $\dvecrond\Lamb_0 + n^{-1/2}\taub^{\III}\in{\mathcal H}_{1,\mathrm{lin}}^{\Lamb} $ eventually belongs to ${\mathcal H}_{1}^{\Lamb} $, and conversely. Therefore, a~locally (at $\dvecrond\Lamb_0$) asymptotically optimal test for ${\mathcal H}_{0,\mathrm{lin}}^{\Lamb}$ against ${\mathcal H}_{1,\mathrm {lin}}^{\Lamb} $ is also locally asymptotically optimal for ${\mathcal H}_{0 }^{\Lamb}$ against ${\mathcal H}_{1 }^{\Lamb} $, and conversely, whatever the local asymptotic optimality concept adopted. Now, in the problem of testing ${\mathcal H}_{0,\mathrm{lin}}^{\Lamb }$ against ${\mathcal H}_{1,\mathrm{lin}}^{\Lamb} $ the null hypothesis is (locally) a hyperplane of $\rr^{k-1}$, with an alternative consisting of the halfspace lying ``below'' that hyperplane. 
For such one-sided problems (locally and asymptotically, still at $\varthetab_0$) uniformly most powerful tests exist; a \textit{most powerful} test statistic is [\citet{Lcam86}, Section 11.9] \begin{eqnarray}\label{Tparam1} T_{\varthetab_0; f_{1}}^{(n)}&:=& (\operatorname{grad}\pr h(\dvecrond\Lamb_{0}) (\Gamb^{\III}_{{\bolds\vartheta}_0;f_1})^{-1} \operatorname{grad} h(\dvecrond\Lamb_{0}) )^{-1/2} \nonumber\\[-8pt]\\[-8pt] & &{} \times\operatorname{grad}\pr h(\dvecrond\Lamb_{0}) (\Gamb ^{\III}_{{\bolds\vartheta}_0;f_1})^{-1} \Deltab^{\III} _{{\bolds \vartheta}_0;f_1},\nonumber \end{eqnarray} which, under $\mathrm{P}^{(n)}_{\varthetab_0 ; f_1}$, is asymptotically standard normal. An explicit form of $T_{\varthetab_0; f_{1}}^{(n)}$ requires a closed form expression of the inverse of $\Gamb^{\III}_{{\bolds\vartheta };f_1} = ({\mathcal{J}_k(f_1)}/\break{k(k+2)})\times\mathbf{D}_k(\Lamb_\Vb) $. The following lemma provides such an expression for the inverse of $\mathbf {D}_{k}(\Lamb _{\Vb})$ (see the \hyperref[app]{Appendix} for the proof). \begin{Lem} \label{infoinverse} Let $\mathbf{P}_{k}^{\Lamb_\Vb}:=\mathbf{I}_{k^{2}}- \frac{1}{k} \Lamb _{\Vb}^{\otimes2} \vecop(\Lamb_{\Vb}^{-1})(\vecop(\Lamb_{\Vb }^{-1}))\pr$ and $\mathbf{N}_k:=(\mathbf{0}_{(k-1)\times1},\mathbf {I}_{k-1})$. Then, $(\mathbf{D}_{k}(\Lamb_{\Vb}))^{-1}=\mathbf{N}_{k}\mathbf{H}_{k} \mathbf {P}_{k}^{\Lamb_\Vb} (\mathbf{I}_{k^{2}}+ \mathbf{K}_{k}) \Lamb_{\Vb }^{\otimes2} (\mathbf{P}_{k}^{\Lamb_\Vb} )\pr\mathbf{H}_{k}\pr \mathbf{N}_{k}\pr$. 
\end{Lem} Using this lemma, it follows after some algebra that, for any $\varthetab_0\in{\mathcal H}_{0}^{\Lamb}$, \begin{eqnarray*} && \operatorname{grad}\pr h (\dvecrond\Lamb_{0})(\mathbf{D}_{k}(\Lamb _{0}))^{-1}\operatorname{grad} h(\dvecrond\Lamb_{0})\\ &&\qquad = 2 \Biggl\{ p^{2} \sum_{j=1}^{q} \lambda_{j;0}^{2} + (1-p)^{2} \sum _{j=q+1}^{k} \lambda_{j;0}^{2} \Biggr\}=a_{p,q}(\Lamb_0) \end{eqnarray*} [where ${\bolds\Lambda}\mapsto a_{p,q}({\bolds\Lambda})$ is the mapping defined in~(\ref{TAnd})], and \[ \operatorname{grad}\pr h (\dvecrond\Lamb_{0}) (\mathbf{D}_{k}(\Lamb _{0}))^{-1} \mathbf{M}_{k}^{\Lamb_{0}}\mathbf{H}_{k}({\Lamb _{0}^{-1/2}})^{\otimes2}=\mathbf{c}_{p,q}\pr\mathbf{H}_{k}({\Lamb _{0}^{1/2}})^{\otimes2}. \] This and the definition of $\mathbf{H}_{k}$ yield \begin{equation}\label{Tparam}\qquad T_{\varthetab_0; f_{1}}^{(n)}= \biggl( \frac{nk(k+2)}{\mathcal {J}_k(f_1)} \biggr)^{1/2} (a_{p,q}(\Lamb_{0} ))^{-1/2} \mathbf{c}_{p,q}\pr\dvec\bigl(\Lamb_{0}^{1/2} \betab\pr\mathbf {S}_{\varthetab_0;f_{1}} ^{(n)}\betab\Lamb_{0}^{1/2}\bigr)\hspace*{-20pt} \end{equation} with $\mathbf{S}_{\varthetab; f_{1}}^{(n)}$ defined in~(\ref{Sparam}). The corresponding test, which rejects ${\mathcal H}_{0}^{\Lamb}$ for small values of $T_{\varthetab_0; f_{1}}^{(n)}$, will be denoted as $\phi^{(n)}_{\Lamb; f_1}$. \subsection{Estimation of nuisance parameters}\label{estimnuis} The tests $\phi^{(n)}_{\betab; f_1}$ and $\phi^{(n)}_{\Lamb; f_1}$ derived in Sections~\ref{howeigenvectors} and~\ref{eigenvaluesprob} typically are valid under standardized radial density $f_1$ only; they mainly settle the optimality bounds at given density~$f_1$, and are of little practical value. Due to its central role in multivariate analysis, the Gaussian case ($f_1=\phi_1$) is an exception.
In this subsection devoted to the treatment of nuisance parameters, we therefore concentrate on the Gaussian tests $\phi^{(n)}_{\betab; \phi _1}$ and $\phi^{(n)}_{\Lamb; \phi_1}$, to be considered in more detail in Section~\ref{gausscase}. The test statistics derived in Sections~\ref{howeigenvectors} and \ref {eigenvaluesprob} indeed still involve nuisance parameters which in practice have to be replaced with estimators. The traditional way of handling this substitution in ULAN families consists in assuming, for a null hypothesis of the form ${\varthetab} \in{\mathcal H}_{0}$, the existence of a sequence $\hat{\varthetab}^{(n)}$ of estimators of ${\varthetab}$ satisfying all or part of the following assumptions (in the notation of this paper). \renewcommand{\theassumption}{$(\mathrm{B})$} \begin{assumption}\label{assuB} We say that a sequence of estimators ($\hat{\varthetab}^{(n)}, n\in\N$) satisfies Assumption~\ref{assuB} for the null $\mathcal{H}_0$ and the density $f_1$ if $\hat{\varthetab}^{(n)}$ is: \begin{enumerate}[(B3)] \item[(B1)]\hypertarget{B1} \mbox{\textit{constrained}}: $\mathrm{P}^{(n)} _{\varthetab; f_{1}} [\hat{\varthetab}^{(n)} \in{\mathcal H}_{0} ] =1$ for all $n$ and all $\varthetab\in {\mathcal H}_{0}$; \item[(B2)]\hypertarget{B2} \mbox{\textit{$n^{1/2}$-consistent}}: for all $\varthetab\in {\mathcal H}_{0} $, $n^{1/2}(\hat{\varthetab}^{(n)}-\varthetab )=O_{\mathrm{P}}(1)$ under $\mathrm{P}^{(n)}_{\varthetab; f_1}$, as $n\rightarrow\infty$; \item[(B3)]\hypertarget{B3} \mbox{\textit{locally asymptotically discrete}}: for all $\varthetab\in {\mathcal H}_{0}$ and all $c>0$, there exists $M=M(c)>0$ such that the number of possible values of $\hat{\varthetab}^{(n)}$ in balls of the form $\{ \mathbf{t} \dvtx n^{1/2} \| (\mathbf{t}-{\varthetab}) \| \leq c\}$ is bounded by $M$, uniformly as $n\rightarrow\infty$. \end{enumerate} \end{assumption} These assumptions will be used later on.
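Assumption (B3) is hardly restrictive: any root-$n$ consistent estimator can be made locally asymptotically discrete by componentwise rounding. A standard sketch (the grid constant $c>0$ and the symbol $\hat{\varthetab}^{(n)}_{\#}$ are ours):

```latex
% Componentwise rounding onto a grid of mesh c n^{-1/2} (sign, | |
% and the ceiling understood componentwise); the rounded estimator
% keeps (B2), satisfies (B3) by construction, and may be re-projected
% onto H_0 whenever (B1) is required.
\hat{\varthetab}^{(n)}_{\#}
:=c\,n^{-1/2}\operatorname{sign}\bigl(\hat{\varthetab}^{(n)}\bigr)
\bigl\lceil n^{1/2}\bigl|\hat{\varthetab}^{(n)}\bigr|/c\bigr\rceil.
```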
In the Gaussian or pseudo-Gaussian context we are considering here, however, Assumption \hyperlink{B3}{(B3)} can be dispensed with under arbitrary densities with finite fourth-order moments. The following asymptotic linearity result characterizes the asymptotic impact,\vspace*{1pt} on $\Deltab_{\varthetab; \phi _1}^{\III}$ and $\Deltab_{\varthetab; \phi_1}^{\IV}$, under any elliptical density $g_1$ with finite fourth-order moments, of estimating $\varthetab$ (see the \hyperref[app]{Appendix} for the proof). \begin{Lem}\label{parametricasymplin} Let Assumption~\ref{assuA} hold, fix $\varthetab\in\Thetab$ and $g_{1} \in {\mathcal F}_{1}^{4}$, and write $D_k(g_1):=\mu_{k+1;g_1}/\mu _{k-1;g_1}$. Then, for any root-$n$ consistent estimator $\hat{\varthetab}:= ( \hat{\varthetab}^{\I\prime} , \hat {\vartheta}^\II, \hat{\varthetab}^{\III\prime}, \hat{\varthetab }^{\IV\prime} )\pr$ of $\varthetab$ under $\mathrm{P}^{(n)}_{\varthetab; g_{1}}$, both\vadjust{\goodbreak} $\Deltab_{\hat{\varthetab}; \phi_1}^{\III}- \Deltab_{\varthetab; \phi_1}^{\III}+ a_{k} ( D_{k}(g_{1})/k) \times\Gamb _{\varthetab; \phi_1}^{\III} n^{1/2}(\hat{\varthetab}^{\III}-{\varthetab}^{\III})$ and\vspace*{-2pt} $\Deltab_{\hat{\varthetab}; \phi_1}^{\IV}- \Deltab_{\varthetab; \phi_1}^{\IV}+ a_{k} ( D_{k}(g_{1})/k) \Gamb _{\varthetab; \phi_1}^{\IV} n^{1/2}(\hat{\varthetab}^{\IV }-{\varthetab}^{\IV}) $ are $o_\mathrm{P}(1)$ under $\mathrm{P}_{\varthetab; g_{1}}^{(n)}$, as $\ny $, where $a_k$ was defined in Section~\ref{defelliptttt}. 
\end{Lem} \section{\mbox{Optimal Gaussian and pseudo-Gaussian tests for principal components}}\label{gausscase} \subsection{Optimal Gaussian tests for eigenvectors}\label{Gaussbetab} For $f_1=\phi_1$, the test statistic~(\ref{Qparam}) takes the form \begin{equation} \label{betagauss}\quad Q_{\varthetab_0 ; \phi_1 }^{(n)}= n \sum_{j=2}^{k} \bigl({\betab }_{j}\pr\mathbf{S}^{(n)}_{\varthetab_0;\phi_1 } \betab_{}^{0} \bigr)^{2} = n \betab^{0\prime} \mathbf{S}^{(n)}_{\varthetab_0;\phi_1 } (\mathbf {I}_k-\betab_{}^{0}\betab^{0\prime}) \mathbf{S}^{(n)}_{\varthetab _0;\phi_1 } \betab_{}^{0}, \end{equation} with ${\Sb}_{\varthetab;\phi_1 }^{(n)}:= \frac{a_{k}}{n\sigma^2} \sum_{i=1}^{n} \Vb^{-1/2}(\Xb_{i}- {\bolds\theta})(\Xb_{i}- {\bolds\theta})\pr\Vb^{-1/2}$. This statistic still depends on nuisance parameters, to be replaced with estimators. Letting $\Sb^{(n)} = \frac{1}{n} \sum_{i=1}^{n} ({\Xb}_{i}-\bar{\mathbf X})({\Xb}_{i}-\bar {\mathbf{X}})\pr$, a~natural choice for such estimators would be $\hat{\bolds\theta}=\bar{\mathbf X}:=\frac{1}{n}\sum_{i=1}^n\Xb_i$ and \[ \Sb^{(n)} =: \bigl| \Sb^{(n)}\bigr|^{1/k} \hats{\Vb} = \biggl( \frac{| \Sb^{(n)}|^{1/k}}{\hat\sigma^2} \biggr) \hat \sigma^2 \hats{\Vb} =: \biggl( \frac{| \Sb^{(n)}|^{1/k}}{\hat\sigma^2} \biggr) \hat \sigma^2 \hat\betab_\mathbf{V} {\hat\Lamb}_{\Vb}\hat\betab_\mathbf{V}\pr, \] where ${\hat\Lamb}_{\Vb}$ is the diagonal matrix collecting the eigenvalues of $\hats{\Vb}$ (ranked in decreasing order), $\hat\betab _\mathbf{V} := ( \hat\betab_{1;\mathbf{V}}, \ldots, \hat\betab_{k;\mathbf{V}})$ is the corresponding matrix of eigenvectors, and $\hat\sigma^2$ stands for the empirical median of $d_{i}^{2}(\bar{\Xb}, \hats{\Vb})$, $i= 1,\ldots,n$. For $\betab$, however, we need a constrained estimator ${\tilde\betab }$ satisfying Assumption~\ref{assuB} for $\mathcal{H}^\betab_0$ ($\hat\betab_\mathbf{V}$ in general does not). 
Thus, we rather propose estimating $\varthetab$ by \begin{equation} \label{choiceprelim} \hat{\varthetab}:= (\bar{\mathbf X}\pr, \hat{\sigma}^{2}, (\dvecrond\hat\Lamb_{\Vb})\pr, (\vecop{\tilde\betab}_{0})\pr)\pr, \end{equation} where ${\tilde\betab}_{0}:=( \betab_{}^{0} , \tilde{\betab}_{2}, \ldots , \tilde{\betab}_{k})$ can be obtained from $(\hat\betab_{1;\Vb}, \ldots,\hat\betab_{k;\Vb})$ via the following Gram--Schmidt technique. Let ${\tilde\betab}_{2}:=(\mathbf{I}_k- \betab_{}^{0}\betab_{}^{0 \prime }) \hat{\betab}_{2;\Vb}/ \|(\mathbf{I}_k-\break \betab_{}^{0}\betab_{}^{0 \prime}) \hat{\betab}_{2;\Vb} \|$. By construction, ${\tilde\betab }_{2}$ is the unit-length vector proportional to the projection of the second eigenvector of $\mathbf{S}^{(n)}$ onto the space which is orthogonal to $\betab_{}^{0}$. Iterating this procedure, define \[ {\tilde\betab}_{j}=\frac{(\mathbf{I}_k- \betab_{}^{0}\betab_{}^{0 \prime}-\sum_{h= 2 }^{j-1} \tilde{\betab}_{h}\tilde{\betab }_{h}\pr) \hat{\betab}_{j;\Vb}}{\| (\mathbf{I}_k- \betab_{}^{0}\betab _{}^{0 \prime}-\sum_{h= 2}^{j-1} \tilde{\betab}_{h}\tilde{\betab }_{h}\pr) \hat{\betab}_{j;\Vb}\|},\qquad j=3, \ldots, k. \] The corresponding (constrained) estimator of the scatter $\Sigb$ is $\tilde\Sigb := \hat\sigma^2 \tilde\Vb := \hat\sigma^2 {\tilde\betab}_{0} {\hat\Lamb}_{\Vb} {\tilde \betab}{}'_{0}$. It is easy to see that ${\tilde\betab}_{0}$, under $\varthetab_0 \in {\mathcal H}_{0}^{\betab}$, inherits $\hat\betab_\mathbf{V}$'s root-$n$ consistency, which holds under any elliptical density $g_1$ with finite fourth-order moments. Lemma~\ref{parametricasymplin} thus applies. 
Combining Lemma \ref {parametricasymplin} with~(\ref{degreefreedom}) and the fact that \[ \mathbf{G}_{k}^{\betab_0} \operatorname{diag}\bigl(\mathbf{I}_{k-1},\mathbf{0}_{(k-2)(k-1)/2 \times (k-2)(k-1)/2}\bigr)(\mathbf{G}_{k}^{\betab_0})\pr\vecop(\tilde{\betab }_{0}- \betab_0)=\mathbf{0} \] (where $\betab_0$ is the matrix of eigenvectors associated with $\varthetab_0$), one easily obtains that substituting $\hat\varthetab$ for $\varthetab_0$ in (\ref{betagauss}) has no asymptotic impact on $Q_{\varthetab_0 ; \phi _1 }^{(n)}$---more precisely, $Q_{\hat\varthetab; \phi_1 }^{(n)}- Q_{\varthetab_0 ; \phi_1 }^{(n)}$ is $o_\mathrm{P}(1)$ as $\ny$ under $\mathrm {P}^{(n)}_{\varthetab _0;g_1}$, with $g_1 \in{\mathcal F}_{1}^{4} $. It follows that $Q_{\hat\varthetab; \phi_1 }^{(n)}$ shares the same asymptotic optimality properties as $ Q_{\varthetab_0 ; \phi_1 }^{(n)}$, irrespective of the value of $\varthetab_0\in\mathcal{H}^\betab_0$. Thus, a~locally and asymptotically \textit{most stringent} Gaussian test of $\mathcal{H}^\betab_0$---denote it by $\phi^{(n)}_{\betab ;\mathcal{N}}$---can be based on the asymptotic chi-square distribution [with $(k-1)$ degrees of freedom] of \begin{eqnarray} \label{QN} Q_{\hat\varthetab; \phi_1 }^{(n)} &=& \frac{n a_{k}^{2}}{\hat{\sigma}^4} \sum_{j=2}^k \bigl( \tilde\betab_j\pr\tilde\Vb^{-1/2}\mathbf{S}^{(n)}\tilde \Vb^{-1/2}\betab^0 \bigr)^2 \nonumber\\ &=& \frac{n a_{k}^{2}}{\hat{\sigma}^4 \hat\lambda_{1; \Vb}} \sum_{j=2}^k \hat\lambda_{j; \Vb}^{-1} \bigl( \tilde\betab _j\pr\mathbf{S}^{(n)}\betab^0 \bigr)^2 \\ &=& \frac{n a_{k}^{2} | \Sb^{(n)}|^{2/k}}{\hat{\sigma}^4 \lambda_{1; \Sb}} \sum_{j=2}^k \lambda_{j; \Sb}^{-1} \bigl( \tilde\betab_j\pr \mathbf{S}^{(n)}\betab^0 \bigr)^2 =:Q_{{\mathcal N}}^{(n)}.\nonumber \end{eqnarray} Since $\hat{\sigma}^{2}/| \Sb^{(n)}|^{1/k}$ converges to $a_{k}$ as $\ny$ under the null ${\mathcal H}_{0}^{\betab}$ and Gaussian densities, one can equivalently use the statistic \[ \bar{Q}_{\mathcal{N}}^{(n)}:=\frac{ 
n}{\lambda_{1;\mathbf{S}}} \sum _{j=2}^{k} { \lambda}_{j;\mathbf{S}} ^{-1} \bigl(\tilde{\betab }_{j}\pr\mathbf{S}^{(n)}\betab_{}^{0} \bigr)^{2}, \] which, of course, is still a locally and asymptotically \textit{most stringent} Gaussian test statistic. For results on local powers, we refer to Proposition~\ref{pseudogausstestbeta}. This test is valid under Gaussian densities only (more precisely, under radial densities with Gaussian kurtosis). On the other hand, it remains valid in case Assumption~\ref{assuA} is weakened [as in \citet{A63} and Tyler (\citeyear{T81}, \citeyear{T83})] into Assumption~\ref{assuApr1}. Indeed, the consistency of $ \tilde{\Sigb}$ remains unaffected under the null, and $ \betab _{}^{0}$ still is an eigenvector for both $ \tilde{\Sigb}^{-1/2}$ and $\Sigb$, so that $[\mathbf{I}_k-\betab_{}^{0}\betab^{0\prime} ] \tilde{\Sigb}^{-1/2}\Sigb\tilde{\Sigb}^{-1/2} \betab_{}^{0} =\mathbf {0}$. Hence, \begin{eqnarray*} Q_{\mathcal{N}}^{(n)} &=& n a_{k}^{2} \sum_{j=2}^k \bigl( \tilde\betab_j\pr\tilde{\Sigb}^{-1/2}\mathbf {S}^{(n)}\tilde{\Sigb}^{-1/2}\betab^0 \bigr)^2 \\ &=&n a_{k}^{2} \betab_{}^{0\prime} \tilde{\Sigb}^{-1/2}\mathbf{S}^{(n)} \tilde{\Sigb}^{-1/2} \Biggl(\sum_{j=2}^k\tilde\betab _j\tilde\betab_j\pr\Biggr) \tilde{\Sigb}^{-1/2}\mathbf{S}^{(n)} \tilde{\Sigb}^{-1/2} \betab _{}^{0} \\ &=&n a_{k}^{2} \betab_{}^{0\prime} \tilde{\Sigb}^{-1/2}\mathbf{S}^{(n)} \tilde{\Sigb}^{-1/2} [\mathbf{I}_k-\betab_{}^{0}\betab ^{0\prime} ] \tilde{\Sigb}^{-1/2}\mathbf{S}^{(n)} \tilde{\Sigb }^{-1/2} \betab_{}^{0} \\ &=&n a_{k}^{2} \betab_{}^{0\prime} \tilde{\Sigb}^{-1/2}\bigl(\mathbf {S}^{(n)}- a_{k}^{-1} \Sigb\bigr) \tilde{\Sigb}^{-1/2} \\ & &{}\times [\mathbf{I}_k-\betab_{}^{0}\betab^{0\prime} ] \tilde{\Sigb }^{-1/2}\bigl(\mathbf{S}^{(n)}- a_{k}^{-1}\Sigb\bigr) \tilde{\Sigb}^{-1/2} \betab _{}^{0} \\ &=&n a_{k}^{2} \betab_{}^{0\prime}{\Sigb}^{-1/2}\bigl(\mathbf{S}^{(n)}- a_{k}^{-1}\Sigb\bigr){\Sigb}^{-1/2} \\ & &{}\times 
[\mathbf{I}_k-\betab_{}^{0}\betab^{0\prime}]{\Sigb}^{-1/2}\bigl(\mathbf {S}^{(n)}- a_{k}^{-1} \Sigb\bigr){\Sigb}^{-1/2} \betab_{}^{0} +o_\mathrm {P}(1), \end{eqnarray*} as $n\rightarrow\infty$ under ${\mathcal H}_{0;1}^{\betab\prime}$. Since $n^{1/2} a_{k} {\Sigb}^{-1/2}(\mathbf{S}^{(n)}- a_{k}^{-1} \Sigb ){\Sigb}^{-1/2} \betab^{0}$ is asymptotically $\mathcal {N}(\mathbf{0},\mathbf{I}_{k} + \betab^{0} \betab^{0 \prime} )$ as $\ny$ under ${\mathcal H}_{0;1}^{\betab\prime}$ and Gaussian densities, this idempotent quadratic form remains asymptotically chi-square, with $(k-1)$ degrees of freedom, even when~\ref{assuA} is weakened into~\ref{assuApr1}, as was to be shown.\looseness=-1 This test is also invariant under the group of transformations $\mathcal{G}_{\mathrm{rot},\circ}$ mapping $(\Xb _1,\ldots ,\Xb_n)$ onto $(\mathbf{O}\Xb_1+\mathbf{t},\ldots,\mathbf{O}\Xb _n+\mathbf{t})$, where $\mathbf{t}$ is an arbitrary $k$-vector and $\mathbf{O} \in {\mathcal SO}_{k}^{\betab^{0}}:= \{ \mathbf{O} \in{\mathcal SO}_{k} \vert \mathbf{O} \betab^{0}= \betab^{0} \}$, provided that the estimator of $\varthetab_0$ used is equivariant under the same group---which the esti\-mator $\hat\varthetab$ proposed in~(\ref{choiceprelim}) is. 
Indeed, denoting by $Q_{\mathcal N}^{(n)}(\mathbf{O},\mathbf{t}) $, $\hat \varthetab (\mathbf{O},\mathbf{t})$, $\Lamb_{\Sb}(\mathbf{O},\mathbf{t})$, $\tilde \betab (\mathbf{O},\mathbf{t})$, $\tilde\Sigb(\mathbf{O},\mathbf{t})$ and $\Sb ^{(n)}(\mathbf{O},\mathbf{t})$ the statistics $Q_{\mathcal N}^{(n)}$, $\hat \varthetab$, $\Lamb_{\Sb}$, $\tilde\betab$, $\tilde\Sigb$ and $\Sb^{(n)}$ computed from the transformed sample $(\mathbf{O} \Xb_{1}^{(n)}+\mathbf{t}, \ldots, \mathbf{O} \Xb_{n}^{(n)}+\mathbf{t})$, one easily checks that, for any $\mathbf{O}\in{\mathcal SO}_{k}^{\betab^0}$, $\Lamb_{\Sb}(\mathbf {O},\mathbf{t})=\Lamb_{\Sb}$, $\tilde\betab(\mathbf{O},\mathbf{t}) = \mathbf{O} \tilde\betab$, $\tilde\Sigb(\mathbf{O},\mathbf{t}) = \mathbf {O}{\tilde \Sigb} \mathbf{O}\pr$ and $\Sb^{(n)}(\mathbf{O},\mathbf{t}) = \mathbf {O}\Sb ^{(n)}\mathbf{O}\pr$, so that (noting that $\mathbf{O}\pr\betab^0=\betab^0$) \[ Q_{\mathcal N}^{(n)}(\mathbf{O},\mathbf{t}) = n a_{k}^{2} \sum_{j=2}^k \bigl(\tilde\betab_j\pr\mathbf{O}\pr\mathbf{O}\tilde \Sigb^{-1/2} \mathbf{O}\pr\mathbf{O}\mathbf{S}^{(n)}\mathbf{O}\pr\mathbf {O}\tilde\Sigb^{-1/2} \mathbf{O}\pr\betab^0 \bigr)^2 =Q_{\mathcal N}^{(n)}. \] Finally, let us show that $Q_{\mathrm{Anderson}}^{(n)}$ and $Q_{\mathcal{N}}^{(n)}$ asymptotically coincide, under ${\mathcal H}_{0;1}^{\betab\prime}$ and Gaussian densities, hence also under contiguous alternatives. This asymptotic equivalence indeed is not a straightforward consequence of the definitions~(\ref{AndTesta}) and (\ref{QN}). 
Since $\sum_{j=2}^{k} {\lambda}_{j; \mathbf{S}}^{-1} (\betab_{j; \Sb}\betab_{j; \mathbf{S}}\pr- \tilde\betab_j\tilde \betab_j\pr)$ is $o_\mathrm{P}(1)$ and $n^{1/2}(\mathbf{S}^{(n)}- \tilde \betab_0{ \Lamb}_\mathbf{S}\tilde\betab_0\pr)$ is $O_\mathrm{P}(1)$ as $n\to\infty$, under ${\mathcal H}_{0;1}^{\betab\prime}$ and Gaussian densities [with ${\Lamb}_\mathbf{S}:=\operatorname {diag}({\lambda }_{1; \mathbf{S}},\ldots,{\lambda}_{k; \mathbf{S}})$], it follows from Slutsky's lemma that \begin{eqnarray*} Q_{\mathrm{Anderson}}^{(n)} &=& \frac{n}{{\lambda}_{1; \mathbf{S}}} \sum_{j=2}^{k} {\lambda}_{j; \mathbf{S}}^{-1} [ ({\lambda}_{j; \mathbf{S}}-{\lambda}_{1;\mathbf{S}}) {\betab}_{j; \mathbf{S}}\pr\betab_{}^{0} ]^{2} \\ &=& \frac{n}{{\lambda}_{1; \mathbf{S}}} \sum_{j=2}^{k} {\lambda}_{j; \mathbf{S}}^{-1} \bigl[ {\betab}_{j; \mathbf{S}}\pr\bigl(\mathbf{S}^{(n)}- \tilde\betab_0{ \Lamb}_\mathbf{S}\tilde\betab_0\pr\bigr) \betab _{}^{0} \bigr]^{2} \\ &=& \frac{n}{{\lambda}_{1; \mathbf{S}}} \sum_{j=2}^{k} {\lambda}_{j; \mathbf{S}}^{-1} \bigl[ \tilde{\betab}_{j}\pr\bigl(\mathbf{S}^{(n)}- \tilde \betab_0{ \Lamb}_\mathbf{S}\tilde\betab_0\pr\bigr) \betab_{}^{0} \bigr]^{2}+o_\mathrm{P}(1) \\ &=& \frac{n}{{\lambda}_{1; \mathbf{S}}} \sum_{j=2}^{k} {\lambda}_{j; \mathbf{S}}^{-1} \bigl( \tilde{\betab}_{j}\pr\mathbf{S}^{(n)}\betab _{}^{0} \bigr)^{2}+o_\mathrm{P}(1) \\ &=& \bar{Q}_{\mathcal{N}}^{(n)}+ o_\mathrm{P}(1) \end{eqnarray*} as $\ny$, still under ${\mathcal H}_{0;1}^{\betab\prime}$ and Gaussian densities. The equivalence between $Q_{\mathrm {Anderson}}^{(n)}$ and $Q_{\mathcal{N}}^{(n)}$ in the Gaussian case then follows since $\bar{Q}_{\mathcal{N}}^{(n)}={Q}_{\mathcal {N}}^{(n)}+ o_\mathrm{P}(1)$ as $\ny$, under ${\mathcal H}_{0;1}^{\betab \prime}$ and Gaussian densities. 
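For illustration (ours, and not part of the formal development), the statistic $\bar{Q}_{\mathcal{N}}^{(n)}$ and the Gram--Schmidt construction of this section admit a direct numerical sketch; all function and variable names below are ours:

```python
import numpy as np

def q_bar_stat(X, beta0):
    """Sketch of the Gaussian statistic bar{Q}_N^{(n)} for testing
    H0: the first eigenvector of the scatter matrix equals beta0."""
    n, k = X.shape
    S = np.cov(X, rowvar=False, bias=True)      # S^{(n)} (1/n normalization)
    lam, B = np.linalg.eigh(S)
    order = np.argsort(lam)[::-1]               # eigenvalues, decreasing
    lam, B = lam[order], B[:, order]
    beta0 = beta0 / np.linalg.norm(beta0)
    # Gram-Schmidt step: project each sample eigenvector onto the
    # orthocomplement of beta0 and of the previously built vectors
    tilde = [beta0]
    P = np.eye(k) - np.outer(beta0, beta0)
    for j in range(1, k):
        b = P @ B[:, j]
        for t in tilde[1:]:
            b = b - (t @ b) * t
        tilde.append(b / np.linalg.norm(b))
    # bar{Q}_N = (n / lam_1) * sum_{j>=2} lam_j^{-1} (tilde_b_j' S beta0)^2
    return (n / lam[0]) * sum(
        (tilde[j] @ S @ beta0) ** 2 / lam[j] for j in range(1, k)
    )
```

Under $\mathcal{H}_0^{\betab}$, this quantity is asymptotically chi-square with $(k-1)$ degrees of freedom, so one rejects at asymptotic level $\alpha$ when it exceeds $\chi^2_{k-1,1-\alpha}$.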
\subsection{Optimal Gaussian tests for eigenvalues}\label{Gausslamb} Turning to ${\mathcal H}_{0}^{\Lamb}$, we now consider the Gaussian version of the test statistic $T_{\varthetab_0 ; f_{1}}^{(n)}$ obtained in Section~\ref{eigenvaluesprob}. In view of~(\ref{Tparam}), we have \begin{equation}\label{Tgauss} T_{\varthetab_0; \phi_1 }^{(n)} = n^{1/2}( a_{p,q}(\Lamb_{0} ))^{-1/2} \mathbf{c}_{p,q}\pr\dvec \bigl(\Lamb_{0}^{1/2}\betab\pr\Sb_{\varthetab_0;\phi_1 }^{(n)}\betab \Lamb_{0}^{1/2}\bigr) \end{equation} [recall that $\mathcal{J}_k(\phi_1 ) =k(k+2)$; see (\ref {normalscores})]. Here also we have to estimate $\varthetab_0$ in order to obtain a genuine test statistic. By using the fact that $\betab\Lamb_{0}\betab\pr=\Vb_0$ (where all parameter values refer to those in $\varthetab_0$), we obtain that, in~(\ref{Tgauss}), \begin{eqnarray} && n^{1/2}\mathbf{c}_{p,q}\pr\dvec\bigl(\Lamb_{0}^{1/2}\betab\pr\Sb _{\varthetab_0;\phi_1 }^{(n)}\betab\Lamb_{0}^{1/2}\bigr)\nonumber\\[-8pt]\\[-8pt] &&\qquad= \frac{n^{1/2} a_{k}}{\sigma^{2}} \mathbf{c}_{p,q}\pr\dvec \Biggl(\betab\pr\frac{1}{n}\sum_{i=1}^n(\Xb_i-{\bolds\theta})(\Xb _i-{\bolds\theta})\pr \betab\Biggr),\nonumber \end{eqnarray} an $O_\mathrm{P}(1)$ expression which does not depend on $\Lamb_{0}$. In view of Lemma~\ref{parametricasymplin} and the block-diagonal form of the information matrix, estimation of ${\bolds\theta}$, $\sigma^2$ and $\betab$ has no asymptotic impact on the eigenvalue part $\Deltab _{\varthetab; \phi_1 }^{\III}$ of the central sequence, hence on~$ T_{\varthetab_{0}; \phi_1 }^{(n)}$.\vspace*{2pt} As for $a_{p,q}(\Lamb_0 )$, it is a continuous function of $\Lamb_0$, so that, in view of Slutsky's lemma, plain consistency of the estimator of $\Lamb_0$ is sufficient.
Consequently, we safely can use here the unconstrained estimator \begin{equation}\label{thetachapeaulamb} \hat{\varthetab}:= (\bar{\mathbf X}\pr, \hat{\sigma}^{2}, (\dvecrond\hat\Lamb_{\Vb})', (\vecop{\hat\betab_{\Vb}})\pr)\pr; \end{equation} see the beginning of Section~\ref{Gaussbetab}. Using again the fact that, under Gaussian densities, $\hat{\sigma}^{2}/| \Sb ^{(n)}|^{1/k}$ converges to $a_{k}$ as $\ny$, a~locally and asymptotically \textit{most powerful} Gaussian test statistic therefore is given by \begin{eqnarray}\label{commeAnderson} T_{\mathcal{N}}^{(n)} :\!&=& \frac{n^{1/2} a_{k}}{\hat{\sigma}^{2}} ({a}_{p,q}(\hat{\Lamb }_{\mathbf{V}}))^{-1/2} \mathbf{c}_{p,q}\pr\dvec\bigl(\hat\betab_{\Vb }\pr{\Sb}^{(n)}\hat\betab_{\Vb}\bigr)\nonumber\\ &=& \frac{n^{1/2} a_{k}|\Sb^{(n)}|^{1/k}}{\hat{\sigma}^{2}} ({a}_{p,q}({\Lamb}_{\mathbf{S}}))^{-1/2} \mathbf{c}_{p,q}\pr\dvec \bigl(\hat\betab_{\Vb}\pr{\Sb}^{(n)}\hat\betab_{\Vb}\bigr)\nonumber\\[-8pt]\\[-8pt] &=&n^{1/2} (a_{p,q}({\Lamb}_\mathbf{S}))^{-1/2} \Biggl( (1-p) \sum_{j=q+1}^k \lambda_{j;\Sb}-p \sum_{j=1}^q \lambda _{j;\Sb} \Biggr)\nonumber\\ &&{} +o_\mathrm{P}(1),\nonumber \end{eqnarray} under Gaussian densities as $\ny$. The corresponding test, $\phi^{(n)}_{\Lamb; \mathcal{N}}$ say, rejects $\mathcal{H}^\Lamb_0$ whenever $T_{\mathcal{N}}^{(n)}$ is smaller than the standard normal $\alpha$-quantile; (\ref {commeAnderson}) shows that $T_{\mathcal{N}}^{(n)}$ coincides [up to $o_\mathrm{P}(1)$] with $T_{\mathrm{Anderson}}^{(n)}$ given in (\ref {TAnd}), which entails that (i) $\phi^{(n)}_{\Lamb; \mathrm {Anderson}}$ is also locally and asymptotically most powerful under Gaussian densities, and that (ii) the validity of $\phi^{(n)}_{\Lamb; \mathcal{N}}$ extends to $\mathcal{H}^{\Lamb\prime\prime} _{0;q}$ (since the validity of $\phi^{(n)}_{\Lamb; \mathrm{Anderson}}$ does). 
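The leading term in~(\ref{commeAnderson}) is straightforward to evaluate from the ordered eigenvalues of $\Sb^{(n)}$. The following sketch (Python/NumPy; an illustration outside the formal development) does so; the normalizing constant $a_{p,q}(\cdot)$, whose closed form is given earlier in the paper and is not reproduced here, is passed in as a user-supplied function `a_pq`, and the helper name `T_gauss_eigenvalues` is ours.

```python
import numpy as np

def T_gauss_eigenvalues(X, p, q, a_pq):
    """Sketch of the leading term of the Gaussian eigenvalue statistic:
    n^{1/2} (a_{p,q}(Lambda_S))^{-1/2} ((1-p) sum_{j>q} lambda_{j;S}
                                        - p sum_{j<=q} lambda_{j;S}),
    with lambda_{1;S} >= ... >= lambda_{k;S} the eigenvalues of the
    sample covariance S^(n).  `a_pq` stands in for the normalizing
    constant a_{p,q}(.) defined in the paper (assumption of this sketch)."""
    n = X.shape[0]
    S = np.cov(X, rowvar=False, bias=True)       # S^(n), 1/n normalization
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # decreasing eigenvalues
    core = (1 - p) * lam[q:].sum() - p * lam[:q].sum()
    return np.sqrt(n) * a_pq(lam) ** (-0.5) * core
```

Under $\mathcal H^\Lamb_0$, the contrast $(1-p)\sum_{j=q+1}^k\lambda_{j;\Sb}-p\sum_{j=1}^q\lambda_{j;\Sb}$ fluctuates around zero, which is what the one-sided rejection rule exploits.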
\subsection{Optimal pseudo-Gaussian tests for eigenvectors} \label{pseudovec} The Gaussian tests $\phi^{(n)}_{\betab; \mathcal{N}}$ and $\phi ^{(n)}_{\Lamb; \mathcal{N}}$ of Sections~\ref{Gaussbetab} and \ref{Gausslamb} unfortunately are valid under multinormal densities only (more precisely, as we shall see, under densities with Gaussian kurtosis). It is not difficult, however, to extend their validity to the whole class of elliptical populations with finite fourth-order moments, while maintaining their optimality properties at the multinormal. Let us first introduce the following notation. For any $g_{1}\in \mathcal{F}_1^{4}$, let (as in Lem\-ma~\ref{parametricasymplin}) $D_k(g_{1}):=\mu _{k+1;g_1}/\mu_{k-1;g_1} =\sigma^{-2} \mathrm{E}_{\varthetab;g_{1}} [d^2_{i}(\tetb,\Vb)]=\int_0^1({\tilde G}{}^{-1}_{1k}(u))^2 \,du$ and $E_k(g_{1}) := \sigma^{-4} \mathrm{E}_{\varthetab;g_{1}} [d^4_{i}(\tetb,\Vb)]=\int_0^1({\tilde G}{}^{-1}_{1k}(u))^4 \,du$, where ${\tilde G}_{1k}(r):= \break(\mu_{k-1;g_1})^{-1} \int_0^r s^{k-1} g_1(s) \,ds$; see Section~\ref{defelliptttt}. Then \[ \kappa_k(g_{1}):= \frac{k}{k+2} \frac{E_k(g_{1})}{D_k^{2}(g_{1})}-1 \] is the \textit{kurtosis} of the elliptic population with radial density $g_{1}$ [see, e.g., page 54 of \citet{A03}]. For Gaussian densities, $E_k(\phi_1 ) = k(k+2)/a_{k}^{2}$, $D_k(\phi_{1})=k/a_{k}$ and $\kappa_k(\phi_1 )=0$. 
Since the asymptotic covariance matrix of $\Deltab_{{\varthetab} ; \phi_1 }^{\IV}$ under $\mathrm{P}^{(n)}_{\varthetab;g_{1}}$ (with $\varthetab\in\mathcal{H}^\betab_0$ and $g_{1} \in\mathcal {F}_{1}^{4}$) is $ (a_{k}^{2}E_{k}(g_{1})/k(k+2)) \Gamb^{\IV }_{{\varthetab} ;\phi_1}$, it is natural to base our pseudo-Gaussian tests on statistics of the form [compare with the $f_1=\phi_1$ version of~(\ref{betagauss})] \begin{eqnarray*} Q_{\varthetab_0, \mathcal{N}*}^{(n)} :\!&=& \frac{k(k+2)}{a_{k}^{2}E_{k}(g_{1})} \Deltab_{{\varthetab_0} ; \phi_1 }^{\IV\prime} [ (\Gamb^{\IV}_{\varthetab_0 ;\phi_1})^{-} - \mathbf{P}_{k}^{\betab_{0}} ((\mathbf{P}_{k}^{\betab_{0}}) \pr \Gamb^{\IV}_{\varthetab_0 ; \phi_1} \mathbf{P}_{k}^{\betab_{0}} )^{-} (\mathbf{P}_{k}^{\betab_{0}}) \pr] \Deltab_{{\varthetab_0} ; \phi_1 }^{\IV} \\ & = & \bigl(1+ \kappa_{k}(g_{1})\bigr)^{-1} \frac{k^{2}}{D_{k}^{2}(g_{1}) a_{k}^{2}} Q_{\varthetab_0, \phi_{1}}^{(n)} =:\bigl(1+ \kappa_{k}(g_{1})\bigr)^{-1} Q_{\varthetab_{0}, \mathcal{N}}(g_{1}). \end{eqnarray*} As in the Gaussian case, and with the same $\hat\varthetab$ as in~(\ref{choiceprelim}), Lemma~\ref{parametricasymplin} entails that $ Q_{\hat\varthetab, \phi_{1}}^{(n)}= Q_{\varthetab_0, \phi _{1}}^{(n)}+ o_\mathrm{P}(1), $ as $\ny$ under $\mathrm{P}^{(n)}_{\varthetab_{0}; g_{1}}$, with $\varthetab_{0} \in{\mathcal H}_{0}^{\betab}$ and $g_{1} \in {\mathcal F}_{1}^{4}$. 
Since $\hat{\sigma}^{2}/|\mathbf{S}^{(n)}|^{1/k}$ consistently estimates $k/D_k(g_1)$ under $\mathrm{P}^{(n)}_{\varthetab_0; g_{1}}$, with $\varthetab_0 \in{\mathcal H}_{0}^{\betab}$ and $g_{1} \in {\mathcal F}_{1}^{4}$, it follows from Slutsky's lemma that \[ \hat{Q}_{\hat{\varthetab}, \mathcal{N}} := \frac{\hat{\sigma}^{2}}{|\mathbf{S}^{(n)}|^{1/k} a_{k}^{2}} Q_{\hat \varthetab, \phi_{1}}^{(n)} \] satisfies $\bar{Q}_{\mathcal{N}}^{(n)}= \hat{Q}_{\hat{\varthetab}, \mathcal{N}}= Q_{\varthetab_0, \mathcal{N}}(g_{1})+o_\mathrm{P}(1)$ as $\ny$, still under $ {\mathcal H}_{0}^{\betab}$, $g_{1} \in{\mathcal F}_{1}^{4}$. The pseudo-Gaussian test $\phi^{(n)}_{\betab;\mathcal {N}*}$ we propose is based on \begin{equation}\label{QN*} Q_{\mathcal{N}*}^{(n)} := (1+\hat\kappa_{k})^{-1} \bar{Q}_{\mathcal {N}}^{(n)} , \end{equation} where $\hat{\kappa}_k := (kn^{-1}\sum_{i=1}^n \hat{d}_{i}^4)/((k+2)(n^{-1}\sum_{i=1}^n \hat {d}_{i}^2)^2)-1 $, with $\hat{d}_{i}:={d}_{i}(\bar{\Xb}, \mathbf{S}^{(n)})$. The statistic $Q_{\mathcal{N}*}^{(n)}$ indeed remains asymptotically chi-square [$(k-1)$ degrees of freedom] under $\mathcal{H}^{\betab \prime} _{0;1}$ for any $g_{1} \in\mathcal{F}_{1}^{4}$. Note that\vspace*{1pt} $\phi^{(n)}_{\betab;\mathcal{N}*}$ is obtained from $\phi ^{(n)}_{\betab;\mathcal{N}}$ by means of the standard kurtosis correction of \citet{SB87}, and asymptotically coincides with $\phi^{(n)}_{\betab;\mathrm{Tyler}}$; see~(\ref{TylTesta}). Local powers for $\phi^{(n)}_{\betab;\mathcal{N}*}$ classically follow from applying Le Cam's third lemma. 
Let $\taub^{(n)}:=((\taub^{\I(n)})\pr, \tau^{\II(n)}, (\taub ^{\III(n)})\pr, (\taub^{\IV(n)})\pr)\pr, $ with $\taub^{(n)\prime}\taub^{(n)}$ uniformly bounded, where $ \taub^{\IV(n)} =\vecop(\mathbf{b}^{(n)})$ is a perturbation of $\operatorname{vec}(\betab_0)=\vecop (\betab^{0},\betab_2,\ldots, \betab_k)$ such that $\betab_0\pr \mathbf{b}$, with $\mathbf{b}= ( \mathbf{b}_{1}^{\prime}, \ldots, \mathbf{b}_{k}^{\prime} )\pr:=\lim_{\ny}\mathbf{b}^{(n)}$, is skew-symmetric; see~(\ref{antisym}) and~(\ref{tangentbetab}). Assume furthermore that the corresponding perturbed value of $\varthetab_0\in \mathcal{H}^\betab_0$ does not belong to $\mathcal{H}^\betab_0$, that is, $\mathbf{b}_1\neq\mathbf{0}$, and define \begin{eqnarray}\label{rbeta} r_{\varthetab_0 ; \taub}^{\betab} :\!&=& \lim_{\ny} \bigl(\operatorname{vec} \mathbf{b}^{(n)}\bigr) ^\prime\mathbf{G}_{k}^{\betab_{0}} \operatorname{diag}\bigl(\nu_{12}^{-1}, \ldots , \nu{}^{-1}_{1k}, \mathbf{0}_{1\times{(k-2)(k-1)}/{2} }\bigr)\nonumber\\ &&\hspace*{20.4pt}{}\times (\mathbf{G}_{k}^{\betab_{0}})\pr\bigl(\operatorname{vec} \mathbf {b}^{(n)}\bigr)\\ &=& 4 \sum_{j=2}^{k} \nu_{1j}^{-1} (\betab\pr_{j} \mathbf{b}_1)^{2}.\nonumber \end{eqnarray} The following result summarizes the asymptotic properties of the pseudo-Gaussian tests $\phi^{(n)}_{\betab; \mathcal{N}*}$. Note that optimality issues involve $\mathcal{H}^\betab_0$ [hence require Assumption~\ref{assuA}], while validity extends to $\mathcal {H}^{\betab\prime} _{0;1}$ [which only requires Assumption~\ref{assuApr1}]. 
\begin{Prop} \label{pseudogausstestbeta} \textup{(i)} $Q^{(n)}_{\mathcal{N}*}$ is asymptotically chi-square with $(k-1)$ degrees of freedom under $\bigcup_{\varthetab\in{\mathcal H}_{0;1}^{\betab\prime}} \bigcup_{g_{1}\in\mathcal{F}_{1}^{4} } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$, and asymptotically noncentral chi-square, still with $(k-1)$ degrees of freedom, but with noncentrality parameter $r_{\varthetab; \taub}^{\betab}/4(1+\kappa_k(g_1)) $ under $\mathrm {P}^{(n)}_{\varthetab+n^{-1/2}\taub^{(n)};g_{1}}$, with $\varthetab \in\mathcal{H}^\betab_0$, $g_{1}\in\mathcal{F}_{a}^{4}$, and $\taub^{(n)}$ as described above;\vadjust{\goodbreak} {\smallskipamount=0pt \begin{longlist}[(iii)] \item[(ii)] the sequence of tests $\phi^{(n)}_{\betab; \mathcal{N}*}$ rejecting the null whenever $Q^{(n)}_{\mathcal{N}*}$ exceeds the $\alpha$ upper-quantile $\chi^2_{k-1;1-\alpha}$ of the chi-square distribution with $(k-1)$ degrees of freedom has asymptotic size $\alpha$ under $\bigcup_{\varthetab\in {\mathcal H}_{0;1}^{\betab\prime} }\bigcup_{g_{1}\in\mathcal {F}_{1}^{4} } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$; \item[(iii)] the pseudo-Gaussian tests $\phi^{(n)}_{\betab; \mathcal{N}*}$ are asymptotically equivalent, under $\bigcup_{\varthetab\in{\mathcal H}_{0;1}^{\betab\prime} }\{\mathrm{P}^{(n)}_{\varthetab; \phi_1 }\}$ and contiguous alternatives, to the optimal parametric Gaussian tests $\phi^{(n)}_{\betab; \mathcal{N}}$; hence, the sequence $\phi ^{(n)}_{\betab; \mathcal{N}*}$ is locally and asymptotically most stringent, still at asymptotic level $\alpha$, for $\bigcup_{\varthetab\in {\mathcal H}_{0;1}^{\betab\prime}}\bigcup_{g_{1}\in\mathcal {F}_{1}^{4} } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$ against alternatives of the form $\bigcup_{\varthetab\notin{\mathcal H}_{0}^{\betab}} \{ \mathrm {P}^{(n)}_{\varthetab;\phi_1 }\}$. 
\end{longlist}} \end{Prop} Of course, since $\hat\kappa_{k}$ is invariant under $\mathcal {G}_{\mathrm{rot},\circ}$, the pseudo-Gaussian test inherits the $\mathcal {G}_{\mathrm{rot},\circ}$-invariance features of the Gaussian one. \subsection{Optimal pseudo-Gaussian tests for eigenvalues}\label{pseudoval} As in the previous section, the asymptotic null distribution of the Gaussian test statistic $T_{\mathcal{N}}^{(n)}$ is not standard normal anymore under radial density $g_1$ as soon as $\kappa_k(g_1)\neq \kappa_k(\phi_1)$. The Gaussian test $\phi^{(n)}_{\Lamb; \mathcal {N}}$ thus is\vspace*{1pt} not valid (does not have asymptotic level $\alpha$) under such densities. The same reasoning as before leads to a similar kurtosis correction, yielding a pseudo-Gaussian test statistic \[ T_{\mathcal{N}{*}}^{(n)}:=(1+\hat\kappa_k)^{-1/2} \tilde {T}_{\mathcal{N}}^{(n)}, \] where $\tilde{T}_{\mathcal{N}}^{(n)}:=n^{1/2}(a_{p,q}({\Lamb}_\mathbf {S}))^{-1/2} ( (1-p) \sum_{j=q+1}^k \lambda_{j;\Sb}-p\sum_{j=1}^q \lambda_{j;\Sb })$ and $\hat{\kappa}_k$ is as in Section~\ref{pseudovec}. This statistic coincides with $T_{\mathrm{Davis}}^{(n)}$ given in~(\ref{TDav}). Here also,\vspace*{1pt} local powers are readily obtained via Le Cam's third lemma. Let $\taub^{(n)}:=((\taub^{\I(n)})^\prime, \tau^{\II(n)}, (\taub ^{\III(n)})^\prime, (\taub^{\IV(n)})^\prime)\pr$, with $\taub ^{(n)\prime}\taub^{(n)}$ uniformly bounded, where $\taub^{\III(n)}:= \dvecrond(\mathbf{l}^{(n)})$ is such that $\mathbf{l}:=\lim_{\ny}\mathbf{l}^{(n)}:=\lim_{\ny} \operatorname{diag}(\ell^{(n)}_{1}, \break \ldots, \ell^{(n)}_{k})$ satisfies $\operatorname{tr}(\Lamb_{\Vb}^{-1} \mathbf{l})=0$ [see~(\ref{ell1}) and the comments thereafter], and define \begin{equation} \label{rLamb}\qquad r_{\varthetab; \taub}^{\Lamb_{\Vb}} := \lim_{\ny}\operatorname {grad} h(\dvecrond(\Lamb_{\Vb}))\pr\taub^{\III(n)} = (1-p) \sum_{j=q+1}^k \mathbf{l}_{j} -p\sum_{j=1}^q \mathbf{l}_{j}. 
\end{equation} The following proposition summarizes the asymptotic properties of the resulting pseudo-Gaussian tests $\phi^{(n)}_{\Lamb_\Vb;\mathcal{N}*}$. \begin{Prop} \label{pseudogausstestlambda} \textup{(i)} $T^{(n)}_{\mathcal{N}*}$ is asymptotically normal, with mean zero under $\bigcup_{\varthetab\in{\mathcal H}_{0;q}^{\Lamb\prime \prime} } \bigcup_{g_{1}\in\mathcal{F}_{1}^{4} } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$, mean $ \bigl(4a_{p,q}(\Lamb_\Vb) \bigl(1+ \kappa_{k}(g_{1})\bigr)\bigr)^{-1/2} r_{\varthetab; \taub}^{\Lamb_{\Vb}} $ under $\mathrm{P}^{(n)}_{\varthetab+n^{-1/2}\taub^{(n)};g_{1}}$, $\varthetab\in {\mathcal H}_{0}^{\Lamb}$, $g_{1}\in\mathcal{F}_a^4$ and $ \taub^{(n)}$ as described above, and variance one under both;\vadjust{\goodbreak} {\smallskipamount=0pt \begin{longlist}[(iii)] \item[(ii)] the sequence of tests $\phi^{(n)}_{\Lamb; \mathcal{N}*}$ rejecting the null whenever $T^{(n)}_{\mathcal{N}*}$ is less than the standard normal $\alpha$-quantile $z_\alpha$ has asymptotic size $\alpha$ under\break $\bigcup_{\varthetab\in {\mathcal H}_{0;q}^{\Lamb\prime\prime}}\bigcup_{g_{1}\in \mathcal{F}_{1}^{4} } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$; \item[(iii)] the pseudo-Gaussian tests $\phi^{(n)}_{\Lamb; \mathcal{N}*}$ are asymptotically equivalent, under $\bigcup_{\varthetab\in {\mathcal H}_{0;q}^{\Lamb\prime\prime} }\{\mathrm{P}^{(n)}_{\varthetab ; \phi_1 }\}$ and contiguous alternatives, to the optimal parametric Gaussian tests $\phi^{(n)}_{\Lamb; \mathcal{N}}$; hence, the sequence $\phi^{(n)}_{\Lamb; \mathcal{N}*}$ is locally and asymptotically most powerful, still at asymptotic level $\alpha$, for $\bigcup_{\varthetab\in {\mathcal H}_{0;q}^{\Lamb\prime\prime} }\bigcup_{g_{1}\in\mathcal {F}_{1}^{4} } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$ against alternatives of the form $\bigcup_{\varthetab\notin{\mathcal H}_{0}^{\Lamb}} \{ \mathrm {P}^{(n)}_{\varthetab;\phi_1 }\}$.
\end{longlist}} \end{Prop} \section{Rank-based tests for principal components} \label{ranktests} \subsection{Rank-based statistics: Asymptotic representation and asymptotic normality}\label{rankHajek} The parametric tests proposed in Section~\ref{paramtests} are valid under specified radial densities $f_1$ only, and therefore are of limited practical value. The importance of the Gaussian tests of Sections~\ref{Gaussbetab} and \ref{Gausslamb} essentially follows from the fact that they belong to usual practice, but Gaussian assumptions are quite unrealistic in most applications. The pseudo-Gaussian procedures of Sections~\ref{pseudovec} and~\ref{pseudoval} are more appealing, as they only require finite fourth-order moments. Still, moments of order four may be infinite and, being based on empirical covariances, pseudo-Gaussian procedures remain poorly robust. A~straightforward idea would consist in robustifying them by substituting some robust estimate of scatter for empirical covariance matrices. This may take care of validity-robustness issues, but has a negative impact on powers, and would not achieve efficiency-robustness. The picture is quite different with the rank-based procedures we are proposing in this section. While remaining valid under completely arbitrary radial densities, these methods indeed also are efficiency-robust; when based on Gaussian scores, they even uniformly outperform, in the Pitman sense, their pseudo-Gaussian counterparts (see Section~\ref{secare}). Rank-based inference, thus, in this problem as in many others, has much to offer, and enjoys an extremely attractive combination of robustness and efficiency properties. 
The natural framework for principal component analysis actually is the semiparametric context of elliptical families in which ${\bolds\theta }$, $\dvecrond(\Lamb_\Vb)$, and $\betab$ (not $\sigma^2$) are the parameters of interest, while the radial density $f$ [equivalently, the couple $(\sigma^2, f_1)$] plays the role of an infinite-dimensional nuisance. This semiparametric model enjoys the double structure considered in \citet{HW03}, which allows for efficient rank-based inference: the fixed-$f_1$ subexperiments, as shown in Proposition~\ref{LAN} are ULAN, while the fixed-$({\bolds\theta}$, $\dvecrond(\Lamb_\Vb)$, $\betab)$ subexperiments [equivalently, the fixed-$({\bolds\theta}, \Vb)$ subexperiments] are generated by groups of transformations acting on the observation space. Those groups here are of the form\vadjust{\goodbreak} $\mathcal{G}_{{\bolds\theta}, \Vb}^{(n)}, \sirc$ and consist of the \textit{continuous monotone radial transformations} ${\mathcal{G}} ^{(n)}_h$ \begin{eqnarray*} {\mathcal{G}}^{(n)}_h(\Xb_{1}, \ldots, \Xb _{n})&=&{\mathcal{G}}^{(n)}_h \bigl( {\bolds\theta}+ d_{1}({\bolds\theta}, \Vb) \Vb^{1/2} \mathbf {U}_{1}({\bolds\theta}, \Vb), \ldots, \\ & &\hspace*{43pt} {\bolds\theta}+ d_{n}({\bolds\theta}, \Vb) \Vb^{1/2} \mathbf {U}_{n}({\bolds\theta}, \Vb) \bigr) \\ :\!&=& \bigl({\bolds\theta}+ h(d_{1}({\bolds\theta}, \Vb)) \Vb^{1/2} \mathbf{U}_{1}({\bolds\theta }, \Vb), \ldots, \\ && \hspace*{25.4pt}{\bolds\theta}+ h(d_{n}({\bolds\theta}, \Vb)) \Vb^{1/2} \mathbf {U}_{n}({\bolds\theta}, \Vb)\bigr), \end{eqnarray*} where $h\dvtx\R^{+} \rightarrow\R^{+}$ is continuous, monotone increasing, and satisfies\break $\lim_{r \rightarrow\infty}h(r)= \infty$ and $h(0)=0$. 
The group $\mathcal{G}_{{\bolds\theta}, \Vb}^{(n)}, \sirc$ generates the fixed-$({\bolds\theta}, \Vb)$ family of distributions $ \bigcup_{\sigma^{2}} \bigcup_{f_{1}}\{\mathrm {P}_{{\bolds\theta}, \sigma^{2}, \dvecronds(\Lamb_\Vb),\vecop (\betab) ; f_{1}}^{(n)}\}$.\vspace*{1pt} The general results of \citet{HW03} thus indicate that efficient inference can be based on the corresponding maximal invariants, namely the vectors \[ \bigl(R^{(n)}_{1}({\bolds\theta}, \Vb), \ldots,R^{(n)}_{n}({\bolds \theta}, \Vb), \Ub_{1}({\bolds\theta}, \Vb), \ldots, \Ub _{n}({\bolds\theta}, \Vb) \bigr) \] of ranks and multivariate signs, where $R^{(n)}_{i}({\bolds\theta}, \Vb)$ denotes the rank of $d_{i}({\bolds\theta}, \Vb)$ among $d_{1}({\bolds\theta }, \Vb), \ldots, d_{n}({\bolds\theta}, \Vb)$. Test statistics based on such invariants automatically are distribution-free under $\bigcup_{\sigma^{2}} \bigcup_{f_{1}}\{\mathrm{P}_{{\bolds\theta }, \sigma^{2}, \dvecronds(\Lamb_\Vb),\vecop(\betab) ; f_{1}}^{(n)}\}$. Letting $R_{i}:=R_{i}({\bolds\theta}, \Vb)$ and $\Ub_{i}:=\Ub _{i}({\bolds\theta}, \Vb)$, define \[ {\utDelta}{}_{ \varthetab; K}^{\III}:= \frac{1}{2\sqrt{n}} {\Mb}_{k}^{\Lamb_{\Vb}} \mathbf{H}_{k} ( {\Lamb_{\Vb}^{-1/2}\betab\pr} )^{\otimes2} \sum_{i=1}^{n} K \biggl(\frac{R^{(n)}_{i}}{n+1} \biggr)\vecop( \mathbf{U}_{i}\mathbf{U}_{i}\pr) \] and \[ {\utDelta}{}_{ \varthetab; K}^{\IV}:= \frac{1}{2\sqrt{n}} \mathbf{G}_{k}^{\betab} \mathbf{L}_{k}^{\betab,\Lamb_{\Vb}} (\mathbf{V}^{\otimes2})^{-1/2} \sum_{i=1}^{n} K \biggl( \frac {R^{(n)}_{i}}{n+1} \biggr) \vecop(\Ub_{i}\Ub_{i}\pr). 
\] Associated with ${\utDelta}{}_{ \varthetab; K}^{\III}$ and ${\utDelta}{}_{ \varthetab; K}^{\IV}$, let \[ \ubDelta_{\varthetab; K, g_{1}}^{\III}:= \frac{1}{2\sqrt{n}} {\Mb}_{k}^{\Lamb_{\Vb}} \mathbf{H}_{k} ( {\Lamb_{\Vb}^{-1/2}\betab\pr} )^{\otimes2} \sum_{i=1}^{n} K \biggl(\tilde{G}_{1k} \biggl(\frac {d_{i}({\bolds\theta}, \Vb)}{\sigma} \biggr) \biggr)\vecop (\mathbf{U}_{i}\mathbf{U}_{i}\pr) \] and \[ \ubDelta_{\varthetab; K, g_{1}}^{\IV}:= \frac{1}{2\sqrt{n}} \mathbf{G}_{k}^{\betab} \mathbf{L}_{k}^{\betab, \Lamb_{\Vb}} (\mathbf {V}^{\otimes2})^{-1/2} \sum_{i=1}^{n}K \biggl( \tilde{G}_{1k} \biggl(\frac{d_{i}({\bolds\theta}, \Vb)}{\sigma} \biggr) \biggr) \vecop (\Ub_{i}\Ub_{i}\pr), \] where $\tilde{G}_{1k}$ is as in Section~\ref{pseudovec}. The following proposition provides an asymptotic representation and asymptotic normality result for ${\utDelta}{}_{ \varthetab; K}^{\III }$ and ${\utDelta}{}_{ \varthetab; K}^{\IV}$. \begin{Prop} \label{Hajek} Let Assumption~\ref{assuS} hold for the score function $K$. Then: \begin{longlist} \item (\textit{asymptotic representation})\vspace*{-2pt} $({\utDelta}{}_{ \varthetab; K}^{\III\prime},{\utDelta}{}_{ \varthetab; K}^{\IV\prime})\pr=(\ubDelta_{ \varthetab; K, g_{1}}^{\III\prime}, \ubDelta_{ \varthetab; K, g_{1}}^{\IV\prime })\pr+ o_{L^{2}}(1)$ as $\ny$, under $\mathrm{P}^{(n)}_{\varthetab;g_1}$, for any $\varthetab \in\Thetab$ and $g_1\in\mathcal{F}_1$;\vspace*{2pt} \item (\textit{asymptotic normality}) let Assumption~\ref{assuA} hold and consider a bound\-ed~sequence $\taub^{(n)}:=((\taub^{\I(n)})^{\prime}, \tau ^{\II(n)} , (\taub^{\III(n)})^{\prime} , (\taub^{\IV(n)})^{\prime } )\pr$ such that both $\taub^\III:=\lim_{\ny}\taub^{\III(n)}$ and $\taub^\IV:=\lim_{\ny}\taub^{\IV(n)}$ exist. 
Then $(\ubDelta _{ \varthetab; K, g_{1}}^{\III\prime}, \ubDelta_{ \varthetab; K, g_{1}}^{\IV\prime} )\pr$ is asymptotically normal, with mean zero and mean \[ \frac{\mathcal{J}_k(K,g_{1})}{k(k+2)} \pmatrix{ \mathbf{D}_{k}(\Lamb_{\Vb}) \taub^{\III} \cr \frac{1}{4} \mathbf{G}_{k}^{\betab} \operatorname{diag}\bigl(\nu_{12}^{-1}, \ldots, \nu_{(k-1)k}^{-1}\bigr) (\mathbf{G}_{k}^{\betab})\pr\taub^{\IV}}\vspace*{-2pt} \] [where $\mathcal{J}_k(K,g_{1})$ was defined in~(\ref{infoKf})], under $\mathrm{P}^{(n)}_{\varthetab; g_{1}}$ (any $\varthetab\in\Thetab$ and $g_1\in\mathcal{F}_1$) and $\mathrm{P}^{(n)}_{\varthetab+ n^{-1/2} \taub^{(n)}; g_{1}}$ (any $\varthetab\in\Thetab$ and $g_1\in \mathcal{F}_a$), respectively, and block-diagonal covariance matrix $\operatorname{diag} (\Gamb_{\varthetab; K}^{\III},\Gamb_{\varthetab; K}^{\IV} )$ under both, with \[ \Gamb_{\varthetab; K}^{\III}:=\frac{\mathcal{J}_k(K)}{k(k+2)} \mathbf{D}_{k}(\Lamb_{\Vb})\vspace*{-2pt} \] and \begin{equation}\label{GambK} \Gamb_{\varthetab; K}^{\IV}:=\frac{\mathcal{J}_k(K)}{4k(k+2)} \mathbf{G}_{k}^{\betab} \operatorname{diag}\bigl(\nu_{12}^{-1}, \ldots, \nu _{(k-1)k}^{-1}\bigr)(\mathbf{G}_{k}^{\betab})\pr.\vspace*{-2pt} \end{equation} \end{longlist} \end{Prop} The proofs of parts (i) and (ii) of this proposition are entirely similar to those of Lemma 4.1 and Proposition 4.1, respectively, in \citet{HP06a}, and therefore are omitted. In case $K=K_{f_1}$ is the score function associated with $f_1\in \mathcal{F}_a$, and provided that Assumption~\ref{assuA} holds (in order for the central sequence $\Deltab_{ \varthetab; f_{1}}$ of Pro\-position~\ref{LAN} to make sense), $\ubDelta_{ \varthetab; K_{f_1}, f_{1}}^{\III}$ and $\ubDelta_{ \varthetab; K_{f_1}, f_{1}}^{\IV}$, under $\mathrm{P}^{(n)}_{\varthetab; f_{1}}$ clearly coincide with $\Deltab_{ \varthetab; f_{1}}^{\III}$ and $\Deltab_{ \varthetab; f_{1}}^{\IV}$. 
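The invariance mechanism behind this distribution-freeness is elementary and can be checked directly: a continuous monotone radial transformation $h$ changes the distances $d_i$ but leaves both the ranks $R^{(n)}_i$ and the signs $\Ub_i$ untouched. A minimal numerical sketch (Python/NumPy; taking ${\bolds\theta}=\mathbf 0$ and $\Vb=\mathbf I_k$ for simplicity, with `h` an arbitrary admissible choice):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 3, 100
X = rng.standard_normal((n, k))

d = np.linalg.norm(X, axis=1)        # d_i(theta, V) with theta = 0, V = I_k
U = X / d[:, None]                   # multivariate signs U_i(theta, V)
R = np.argsort(np.argsort(d)) + 1    # ranks R_i of d_i among d_1, ..., d_n

h = lambda r: r ** 3 + np.log1p(r)   # continuous, increasing, h(0) = 0
Xh = h(d)[:, None] * U               # transformed sample G_h(X_1, ..., X_n)

dh = np.linalg.norm(Xh, axis=1)
assert np.array_equal(R, np.argsort(np.argsort(dh)) + 1)  # ranks invariant
assert np.allclose(U, Xh / dh[:, None])                   # signs invariant
```

Any statistic measurable with respect to $(R_1,\ldots,R_n,\Ub_1,\ldots,\Ub_n)$ is therefore strictly invariant under the group ${\mathcal G}^{(n)}_h$, hence distribution-free over the fixed-$({\bolds\theta},\Vb)$ family.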
Therefore, ${\utDelta}{}_{ \varthetab; K_{f_1}}^{\III }$ and ${\utDelta}{}_{ \varthetab; K_{f_1}}^{\IV}$ constitute rank-based, hence distribution-free, versions of those central sequence components. Exploiting this, we now construct signed-rank tests for the two problems we are interested in.\vspace*{-2pt} \subsection{Optimal rank-based tests for eigenvectors} \label{gsjdlr} Proposition~\ref{Hajek} provides the theoretical tools for constructing rank-based tests for ${\mathcal H}_{0}^{\betab}$ and computing their local powers. Letting again $\varthetab_0:=({\bolds \theta}\pr, \sigma^2, (\dvecrond\Lamb_\Vb)\pr,(\vecop\betab _0)\pr)\pr$, with $\betab_0=(\betab^0,\betab_2,\ldots, \betab_k )$, define the rank-based analog of~(\ref{Qparam}) \begin{eqnarray}\label{Qrank}\quad {\utQ}{}_{\varthetab_0 ;K}^{(n)} :\!&=& {\utDelta}{}_{\varthetab_0 ; K}^{\IV\prime} [(\Gamb^{\IV}_{\varthetab_0 ; K})^{-}- \mathbf{P}_{k}^{\betab _{0}^{}} ( (\mathbf{P}_{k}^{\betab_{0}^{}})\pr\Gamb^{\IV }_{\varthetab_0 ; K} \mathbf{P}_{k}^{\betab_{0}^{}} )^{-}(\mathbf {P}{}^{\betab_{0}^{}}_{k})\pr ] {\utDelta}{}_{\varthetab_0 ; K}^{\IV} \nonumber\\[-9pt]\\[-9pt] &=&\frac{nk(k+2)}{\mathcal{J}_k(K)} \sum_{j=2}^{k} \bigl( \betab_j\pr{\utSb}{}^{(n)}_{\varthetab_0 ;K} \betab^0\bigr)^2,\nonumber\vspace*{-2pt} \end{eqnarray} where $ {\utSb}{}^{(n)}_{\varthetab;K}:=\frac{1}{n} \sum_{i=1}^{n} K (\frac{R^{(n)}_{i}({\bolds\theta}, {\Vb})}{n+1} ) \Ub _{i}({\bolds\theta}, {\Vb}) \Ub\pr_{i}({\bolds\theta}, {\Vb} ). $\vadjust{\goodbreak} In order\vspace*{-4pt} to turn ${\utQ}{}_{\varthetab_0 ;K}^{(n)}$ into a genuine test statistic, as in the parametric case, we still have to replace $\varthetab_0$ with some adequate estimator $\hat{\varthetab}$ satisfying, under as large as possible a class of densities, Assumption \ref{assuB} for $\mathcal{H}_0^\betab$. In particular, root-$n$ consistency should hold without any moment assumptions.
Denote by $\hat{{\bolds \theta}}_{\mathrm{HR}}$ the \citet{HR02} affine-equivariant median, and by $\hats{\Vb}_{\mathrm{Tyler}}$ the shape estimator of \citet{T87}, normalized so that it has determinant one: both are root-$n$ consistent under any radial density $g_1$. Factorize $\hats{\Vb}_{\mathrm{Tyler}}$ into $\hat\betab_{\mathrm {Tyler}}\hat\Lamb_{\mathrm{Tyler}}\hat\betab_{\mathrm{Tyler}}\pr $. The estimator we are proposing (among many possible ones) is $ \hat\varthetab=(\hat{{\bolds\theta}}_{\mathrm{HR}}\pr, {\sigma }^{2}, (\dvecrond\hat{\Lamb}_{\mathrm{Tyler}})\pr,\break (\vecop \tilde{\betab}_{0})\pr)\pr$, where the constrained estimator $\tilde{\betab}_{0} := (\betab ^{0}, \tilde{\betab}_{2}, \ldots, \tilde{\betab}_{k})$ is constructed from $\hat\betab_{\mathrm{Tyler}}$ via the same Gram--Schmidt procedure as was applied in Section~\ref{Gaussbetab} to the eigenvectors $\hat\betab_{\Vb}$ of $\hats{\Vb}:=\Sb^{(n)}/|\Sb ^{(n)}|^{1/k}$; note that $\sigma^2$ does not even appear in ${\utQ}{} _{\varthetab_0 ;K}^{(n)}$, hence need not be estimated. In view of~(\ref{Qrank}), \begin{eqnarray} \label{Qchapeau} {\utQ}{}_{K}^{(n)}&=& {\utQ}{}^{(n)}_{\hat\varthetab;K}\nonumber\\ &=& \frac{nk(k+2)}{\mathcal{J}_k(K)} \betab^{0\prime} {\utSb }{}^{(n)}_{\hat\varthetab;K} (\mathbf{I}_k-\betab^{0}\betab^{0\prime }) {\utSb}{}_{\hat\varthetab;K}^{(n)}\betab_{}^{0 } \\ & =& \frac{nk(k+2)}{\mathcal{J}_k(K)} \bigl\| [\betab_{}^{0\prime} \otimes(\mathbf{I}_k-\betab _{}^{0}\betab^{0\prime}) ] \bigl(\vecop\utSb{}^{(n)}_{\hat \varthetab;K} \bigr) \bigr\|^2,\nonumber \end{eqnarray} where the\vspace*{-2pt} ranks and signs in $\utSb{}_{\hat\varthetab;K}$ are computed at $\hat\varthetab$, that is, $R^{(n)}_{i}:=R^{(n)}_{i}(\hat {{\bolds\theta}}_{\mathrm{HR}}, \tilde{\betab}_{0} \hat\Lamb _{\mathrm{Tyler}}\tilde{\betab}_{0} \pr)$ and ${\Ub}_{i}={\Ub }_{i}(\hat{{\bolds\theta}}_{\mathrm{HR}}, \tilde{\betab}_{0} \hat \Lamb_{\mathrm{Tyler}}\tilde{\betab}_{0} \pr)$.
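For illustration, the sign-and-rank matrix ${\utSb}{}^{(n)}_{\varthetab;K}$ and the statistic~(\ref{Qrank}) can be sketched as follows (Python/NumPy; a simplified sketch in which location and shape are treated as given, whereas the paper plugs in $\hat{{\bolds\theta}}_{\mathrm{HR}}$ and the Tyler-based constrained shape; the score function `K` and the constant `J_K` $=\mathcal J_k(K)$ are supplied by the user, and the function names are ours):

```python
import numpy as np

def S_rank(X, theta, V, K):
    """Sign-and-rank matrix (1/n) sum_i K(R_i/(n+1)) U_i U_i',
    computed at a given (theta, V); K is a score function on (0, 1)."""
    n, k = X.shape
    evals, evecs = np.linalg.eigh(V)
    V_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T  # symmetric V^{-1/2}
    Z = (X - theta) @ V_inv_sqrt
    d = np.linalg.norm(Z, axis=1)        # distances d_i(theta, V)
    U = Z / d[:, None]                   # signs U_i(theta, V)
    R = np.argsort(np.argsort(d)) + 1    # ranks R_i
    w = K(R / (n + 1.0))
    return (w[:, None, None] * U[:, :, None] * U[:, None, :]).mean(axis=0)

def Q_rank(X, theta, V, beta, K, J_K):
    """Rank-based eigenvector statistic (Qrank):
    n k(k+2)/J_k(K) * sum_{j>=2} (beta_j' S_K beta^0)^2,
    with beta = (beta^0, beta_2, ..., beta_k) orthonormal (columns)."""
    n, k = X.shape
    SK = S_rank(X, theta, V, K)
    b0 = beta[:, 0]
    return n * k * (k + 2) / J_K * sum(
        float(beta[:, j] @ SK @ b0) ** 2 for j in range(1, k))
```

Since ${\utSb}{}^{(n)}_{\varthetab;K}$ depends on the observations only through the ranks and signs, the resulting statistic is distribution-free over the radial densities, as announced.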
Let us show that substituting $\hat{\varthetab}$ for $\varthetab_{0}$ in (\ref {Qrank}) has no asymptotic impact on ${\utQ}{}_{\varthetab_0 ;K}^{(n)}$---that is, ${\utQ}{}_{\varthetab_0 ;K}^{(n)}-{\utQ }{}_{K}^{(n)}=o_\mathrm{P}(1)$ as $\ny$ under\vspace*{-4pt} $\mathrm {P}^{(n)}_{\varthetab _0; g_1}$, with $g_{1}\in\mathcal{F}_a$, $\varthetab_0 \in{\mathcal H}_{0;1}^{\betab\prime}$. The proof, as usual, relies on an asymptotic linearity property which, in turn, requires ULAN. The ULAN property of Proposition~\ref{LAN}, which was motivated by optimality issues in tests involving $\betab$ and $\Lamb_\Vb$, here cannot help us, as it does not hold under Assumption~\ref{assuApr1}. Another ULAN property, however, where Assumption~\ref{assuA} is not required, has been obtained by \citet{HP06a} for another parametrization---based on $({\bolds\theta},\sigma^2,\Vb)$---of the same families of distributions, and perfectly fits our needs here. Defining $\mathbf{J}_{k}:= (\vecop\mathbf{I}_{k})(\vecop\mathbf {I}_{k})\pr $ and $\mathbf{J}_{k}^{\perp}:= \mathbf{I}_{k^{2}}- \frac{1}{k} \mathbf {J}_{k}$, it follows from Proposition A.1 in \citet{HOP06} and Lemma 4.4 in \citet{K87} that, for any locally asymptotically discrete [Assumption \hyperlink{B3}{(B3)}] and root-$n$ consistent [Assumption \hyperlink{B2}{(B2)}] sequence $( \hat{\bolds\theta}^{(n)}, \hats{\Vb}^{(n)})$ of estimators of location and shape, one has \begin{eqnarray} \label{HPres} && \mathbf{J}_{k}^{\perp} \sqrt{n} \vecop({\utSb}{}_{\hat\varthetab ;K}- {\utSb}{}_{\varthetab;K}) \nonumber\\ &&\quad{} + \frac{{\mathcal J}_k(K,g_{1})}{4k(k+2)} \biggl[\mathbf{I}_{k^{2}}+\mathbf{K}_{k}-\frac{2}{k}\mathbf{J}_{k} \biggr]({\Vb }^{-1/2})^{\otimes2} n^{1/2}\vecop\bigl(\hats{\Vb}^{(n)}- \Vb\bigr)\\ &&\qquad= o_\mathrm{P}(1)\nonumber \end{eqnarray} as $\ny$ under $\mathrm{P}^{(n)}_{\varthetab_0 ;g_{1}}$, with $\varthetab_0 \in\Thetab$ and $g_{1} \in\mathcal{F}_a$. 
This result readily applies to any adequately discretized version of $(\hat {{\bolds\theta}}_{\mathrm{HR}}, \tilde{\betab}_{0} \hat\Lamb _{\mathrm{Tyler}}\tilde{\betab}_{0} \pr)$ at $\varthetab_0 \in {\mathcal H}_{0;1}^{\betab\prime}$. It is well known, however, that discretization, although necessary for asymptotic statements, is not required in practice [see pages 125 or 188 of \citet{CY00} for a discussion on this point]; we therefore do not emphasize discretization any further in the notation, and henceforth assume that $\hat\varthetab$, whenever needed, has been adequately discretized. Using~(\ref{HPres}) and the fact that $ [\betab^{0\prime} \otimes(\mathbf{I}_k-\betab_{}^{0}\betab ^{0\prime}) ]\mathbf{J}_{k}=\mathbf{0}$, we obtain,\break under $\mathrm{P}^{(n)}_{\varthetab_0 ;g_{1}}$ with ${{\varthetab}_0} \in{\mathcal H}_{0;1}^{\betab\prime}$ and $g_{1} \in\mathcal{F}_a$, since $\mathbf{K}_{k}(\vecop\mathbf {A})=\vecop(\mathbf{A}\pr)$ for any $k\times k$ matrix $\mathbf{A}$ and since $\betab^0$ under ${{\varthetab}_0} \in{\mathcal H}_{0;1}^{\betab\prime}$ is an eigenvector of ${\Vb}^{-1/2}\tilde {\betab}_{0} \hat\Lamb_{\mathrm{Tyler}}\tilde{\betab}_{0} \pr {\Vb}^{-1/2}$, \begin{eqnarray*} &&\sqrt{n} [\betab^{0\prime} \otimes(\mathbf{I}_k-\betab _{}^{0}\betab^{0\prime}) ] \vecop\bigl(\utSb{}^{(n)}_{\hat \varthetab;K}-\utSb{}^{(n)}_{\varthetab;K}\bigr) \\ &&\qquad = -\frac{{\mathcal J}_k(K,g_{1})}{2k(k+2)} n^{1/2} [\betab ^{0\prime} \otimes(\mathbf{I}_k-\betab_{}^{0}\betab^{0\prime}) ] ({\Vb}^{-1/2})^{\otimes2} \vecop(\tilde{\betab}_{0} \hat\Lamb _{\mathrm{Tyler}}\tilde{\betab}_{0} \pr- \Vb)\\ &&\qquad\quad{} +o_\mathrm{P}(1) \\ &&\qquad = -\frac{{\mathcal J}_k(K,g_{1})}{2k(k+2)}n^{1/2} \vecop\bigl((\mathbf{I}_k-\betab_{}^{0}\betab^{0\prime}) [{\Vb }^{-1/2}\tilde{\betab}_{0} \hat\Lamb_{\mathrm{Tyler}}\tilde {\betab}_{0} \pr{\Vb}^{-1/2}-\mathbf{I}_k]\betab^0\bigr)\\ &&\qquad\quad{} +o_\mathrm{P}(1) \\ &&\qquad =o_\mathrm{P}(1), \end{eqnarray*} as $\ny$.
In\vspace*{-6pt} view of~(\ref{Qchapeau}), we therefore conclude that $ {\utQ}{}_{K}^{(n)}-{\utQ}{}_{\varthetab_0 ; K}^{(n)} = o_\mathrm{P}(1)$ as $\ny$, still under ${{\varthetab}_0} \in{\mathcal H}_{0;1}^{\betab\prime}$, as was to be shown. The following result summarizes the results of this section. \begin{Prop} \label{ranktestbeta}Let Assumption~\ref{assuS} hold for the score function $K$. Then: \begin{longlist} \item ${\utQ}{}_{K}^{(n)}$ is asymptotically\vspace*{-2pt} chi-square with $(k-1)$ degrees of freedom under $\bigcup_{\varthetab\in{\mathcal H}_{0;1}^{\betab\prime}} \bigcup_{ g_{1}\in\mathcal{F}_a } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$, and asymptotically noncentral chi-square, still with $(k-1)$ degrees of freedom, and noncentrality parameter \[ \frac{{\mathcal J}^{2}_k(K, g_{1} ) }{4k(k+2)\mathcal{J}_k(K)} r_{\varthetab; \taub}^{\betab} \] under $\mathrm{P}^{(n)}_{\varthetab+n^{-1/2}\taub^{(n)};g_{1}}$, for $\varthetab\in {\mathcal H}_{0}^{\betab}$ and $g_{1}\in\mathcal{F}_a$, with $\taub ^{(n)}$ as in Proposition~\ref{pseudogausstestbeta} and $ r_{\varthetab; \taub}^{\betab} $ defined in~(\ref{rbeta}); \item the sequence\vspace*{-1pt} of tests ${\utphi}{}^{(n)}_{\betab; K}$ rejecting the null when ${\utQ}{}_{K}^{(n)}$ exceeds the $\alpha$ upper-quantile of the chi-square distribution with $(k-1)$ degrees of freedom has asymptotic size $\alpha$ under $\bigcup_{\varthetab\in{\mathcal H}_{0}^{\betab \prime}}\bigcup_{ g_{1}\in\mathcal{F}_a} \{ \mathrm{P}^{(n)}_{\varthetab ;g_{1}}\}$; \item for scores\vspace*{-2pt} $K=K_{f_1}$, with $f_{1}\in\mathcal{F}_a$, ${\utphi }{}^{(n)}_{\betab; K}$ is locally asymptotically most stringent, at asymptotic level $\alpha$, for $\bigcup_{\varthetab\in{\mathcal H}_{0}^{\betab\prime}}\bigcup_{ g_{1}\in\mathcal{F}_a} \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$ against alternatives of the form $\bigcup_{\varthetab\notin{\mathcal H}_{0}^{\betab}} \{ \mathrm {P}^{(n)}_{\varthetab;f_{1}}\}$.\vspace*{-1pt} \end{longlist}
\end{Prop}
Being measurable with respect to\vspace*{-2pt} signed-ranks, ${\utQ}{}_{K}^{(n)}$ is asymptotically invariant under continuous monotone radial transformations, in the sense that it is asymptotically equivalent (in probability) to a random variable that is strictly invariant under such transformations. Furthermore, it is easy to show that it enjoys the same $\mathcal{G}_{\mathrm{rot},\circ}$-invariance features as the parametric, Gaussian, or pseudo-Gaussian test statistics.

\subsection{Optimal rank-based tests for eigenvalues}\label{rankeigvalues}

Finally, still from the results of Proposition~\ref{Hajek}, we construct signed-rank tests for the null hypothesis ${\mathcal H}_{0}^{\Lamb}$. A~rank-based counterpart of~(\ref{Tparam1}) and~(\ref{Tparam}) [at $\varthetab_0=({\bolds\theta}\pr, \sigma^2, (\dvecrond\Lamb _0)\pr,\break (\vecop\betab)\pr)\pr\in\mathcal{H}^\Lamb_0$] is, writing $\Vb_0$ for $\betab\Lamb_0\betab\pr$,
\begin{eqnarray}\label{Trank}\qquad
{\utT}{}_{\varthetab_0; K}^{(n)} & = & (\operatorname{grad}\pr h(\dvecrond\Lamb_0) (\Gamb^{\III}_{{\bolds\vartheta}_{0}; K})^{-1}\operatorname{grad} h(\dvecrond\Lamb_0) )^{-1/2} \nonumber\\
& &{} \times\operatorname{grad}\pr h(\dvecrond\Lamb_{0}) (\Gamb ^{\III}_{{\bolds\vartheta}_{0}; K})^{-1} {\utDelta}{}^{\III} _{{\bolds\vartheta}_{0} ; K} \\
& = & \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q}(\Lamb _0 ))^{-1/2} \mathbf{c}_{p,q}\pr\dvec\bigl( \Lamb_0^{1/2}\betab\pr{\utSb }{}^{(n)}_{\varthetab_{0};K} \betab\Lamb_0^{1/2}\bigr). \nonumber
\end{eqnarray}
Here again, we have to estimate $\varthetab_0$.
Note that, unlike the quantity\break $\operatorname{grad}\pr h(\dvecrond\Lamb_{0} ) (\Gamb^{\III }_{{\bolds\vartheta}_{0}; \phi_1})^{-1} \Deltab^{\III} _{{\bolds \vartheta}_{0}; \phi_1}$ appearing in the Gaussian or pseudo-Gaussian~cases, $\operatorname{grad}\pr h(\dvecrond\Lamb_{0} ) (\Gamb^{\III }_{{\bolds\vartheta}_{0}; K})^{-1} {\utDelta}{}^{\III} _{{\bolds \vartheta}_{0}; K}$ does depend\vspace*{-2pt} on $\Lamb_{0}$ [see the comments below~(\ref{Tgauss})]. Consequently, we have to carefully select an estimator $\hat\varthetab$ whose substitution for $\varthetab_0$ has no influence on the asymptotic behavior of ${\utT}{}_{\varthetab_0; K}^{(n)}$ under~${\mathcal H}_{0;q}^{\Lamb\prime\prime}$. To this end, consider Tyler's estimator of shape $\hats{\mathbf {V}}_{\mathrm {Tyler}}(=:\hat{\betab}_{\mathrm{Tyler}}\hat{\Lamb}_{\mathrm {Tyler}}\hat{\betab}_{\mathrm{Tyler}}\pr$, with obvious notation) and define
\[
\dvec(\tilde{\Lamb}_{\mathrm{Tyler}}):= \bigl(\mathbf{I}_k- \mathbf {c}_{p,q}(\mathbf{c}_{p,q}\pr\mathbf{c}_{p,q})^{-1}\mathbf{c}_{p,q}\pr\bigr) (\dvec\hat{\Lamb}_{\mathrm{Tyler}}).
\]
Then the estimator of shape $ \tilde{\Lamb}_{\Vb}:= \tilde{\Lamb}_{\mathrm{Tyler}}/ |\tilde {\Lamb}_{\mathrm{Tyler}}|^{1/k} $ is clearly constrained: $\mathbf{c}_{p,q}\pr(\dvec\tilde{\Lamb}_{\Vb })=0$ and $|\tilde{\Lamb}_{\Vb}|=1$. The resulting preliminary estimator $\hat{\varthetab}$ is
\begin{equation}
\label{initialresti}
\hat{\varthetab}:= (\hat{{\bolds\theta}}_{\mathrm{HR}}\pr, {\sigma}^{2}, (\dvecrond\tilde{\Lamb}_{\Vb})\pr, (\vecop\hat{\betab }_{\mathrm{Tyler}})\pr)\pr,
\end{equation}
where $\hat{{\bolds\theta}}_{\mathrm{HR}}$ still denotes the \citet{HR02} affine-equivariant median.
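Purely as an illustration (the sketch below is ours and is not part of the formal development), the computation behind the preliminary estimator~(\ref{initialresti}) can be outlined in a few lines of Python/numpy; the observations are assumed to be already centered at $\hat{{\bolds\theta}}_{\mathrm{HR}}$, and the constraint vector $\mathbf{c}_{p,q}$ is taken as given.

```python
import numpy as np

def tyler_shape(X, n_iter=500, tol=1e-10):
    """Tyler's M-estimator of shape (det = 1) by fixed-point iteration.
    X: (n, k) array of observations, assumed already centered."""
    n, k = X.shape
    V = np.eye(k)
    for _ in range(n_iter):
        # squared Mahalanobis distances x_i' V^{-1} x_i
        d2 = np.einsum('ij,jl,il->i', X, np.linalg.inv(V), X)
        V_new = (k / n) * (X.T * (1.0 / d2)) @ X
        V_new /= np.linalg.det(V_new) ** (1.0 / k)  # det-1 normalization
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

def constrained_shape_eigenvalues(lam_hat, c):
    """Project the estimated eigenvalues onto {x : c'x = 0}, then rescale
    so that their product (the determinant) equals one."""
    lam = lam_hat - c * (c @ lam_hat) / (c @ c)
    return lam / np.prod(lam) ** (1.0 / lam.size)
```

The projection step mirrors the definition of $\dvec(\tilde{\Lamb}_{\mathrm{Tyler}})$ above, and the final rescaling enforces $|\tilde{\Lamb}_{\Vb}|=1$; being a scalar multiplication, it preserves the linear constraint $\mathbf{c}_{p,q}\pr(\dvec\tilde{\Lamb}_{\Vb})=0$.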
The test statistic we propose is then \begin{eqnarray}\label{717} {\utT}{}_{K}^{(n)} :\!& = & {\utT}{}_{\hat\varthetab; K}^{(n)} \nonumber\\ &=& ( \operatorname{grad}\pr h(\dvecrond\tilde{\Lamb}_{\Vb} ) (\Gamb^{\III}_{\hat{\bolds\vartheta}; K})^{-1} \operatorname{grad} h( \dvecrond\tilde{\Lamb}_{\Vb} ) )^{-1/2} \nonumber\\ & &{} \times \operatorname{grad}\pr h(\dvecrond\tilde{\Lamb}_{\Vb}) (\Gamb^{\III}_{\hat{\bolds\vartheta}; K})^{-1} {\utDelta}{}^{\III} _{\hat{\bolds\vartheta}; K} \\ & = & \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} (\tilde{\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\nonumber\\ &&{}\times\dvec\bigl(\tilde{\Lamb}_{\Vb}^{1/2}\hat{\betab }_{\mathrm{Tyler}} \pr{\utSb}{}_{\hat{\varthetab};K}^{(n)}\hat {\betab}_{\mathrm{Tyler}} \tilde{\Lamb}_{\Vb}^{1/2}\bigr), \nonumber \end{eqnarray} where ${\utSb}{}_{\hat{\varthetab};K}^{(n)}:=\frac{1}{n} \sum_{i=1}^{n} K (\frac{\hat R^{(n)}_{i}}{n+1} ) {\hat\Ub}_{i}{\hat\Ub }_{i}\pr$, with $ \hat R^{(n)}_{i} := R^{(n)}_{i}(\hat{{\bolds\theta}}_{\mathrm {HR}}, \hat{\betab}_{\mathrm{Tyler}} \tilde{\Lamb}_{\Vb} \hat{\betab}_{\mathrm{Tyler}} ^{ \prime}) $ and $ \hat \Ub_i:=\Ub_i(\hat{{\bolds\theta}}_{\mathrm{HR}} , \hat{\betab}_{\mathrm{Tyler}} \tilde{\Lamb}_{\Vb} \hat{\betab}_{\mathrm{Tyler}} ^{ \prime})$. The following lemma shows that the substitution of $\hat\varthetab$ for ${\varthetab}$ in~(\ref{initialresti}) has no asymptotic effect on ${\utT }{}_{\varthetab ; K}^{(n)}$ (see the \hyperref[app]{Appendix} for a proof). \begin{Lem} \label{alihoprimeprime} Fix $\varthetab\in{\mathcal H}_{0;q}^{\Lamb\prime\prime}$ and $g_{1} \in\mathcal{F}_{a}$, and let $\hat{\varthetab}$ be the estimator in~(\ref{initialresti}). Then ${\utT}{}_{K}^{(n)}-{\utT }{}_{\varthetab; K}^{(n)}$ is $o_\mathrm{P}(1)$ as $\ny$, under $\mathrm {P}^{(n)}_{\varthetab; g_{1}}$. \end{Lem} The following result summarizes the results of this section. 
\begin{Prop} \label{ranktestlambda} Let Assumption~\ref{assuS} hold for the score function $K$. Then: \begin{longlist} \item ${\utT}{}^{(n)}_{K}$ is asymptotically standard normal under $\bigcup_{\varthetab\in{\mathcal H}_{0;q}^{\Lamb\prime\prime}} \bigcup_{g_{1} \in{\mathcal F}_a} \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$, and asymptotically normal with mean \[ \frac{\mathcal{J}_k(K,g_{1})}{\sqrt{4k(k+2) a_{p,q}(\Lamb_\Vb )\mathcal{J}_k(K)}} r_{\varthetab; \taub}^{\Lamb_{\Vb}} \] and variance 1 under $\mathrm{P}^{(n)}_{\varthetab+n^{-1/2}\taub^{(n)};g_{1}}$, with $\varthetab\in{\mathcal H}_{0}^{\Lamb_{\Vb}}$, $g_{1} \in{\mathcal F}_a$, $ \taub^{(n)}$ as in Proposition~\ref{pseudogausstestlambda}, and $r_{\varthetab; \taub}^{\Lamb_{\Vb}}$ defined in~(\ref{rLamb}); \item the sequence of tests ${\utphi}{}^{(n)}_{\Lamb; K}$ rejecting the null whenever ${\utT}{}^{(n)}_{K}$ is less than the standard normal $\alpha$-quantile $z_\alpha$ has asymptotic size $\alpha$ under\break $\bigcup_{\varthetab\in {\mathcal H}_{0;q}^{\Lamb\prime\prime}}\bigcup_{g_{1}\in\mathcal {F}_a } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$; \item for scores\vspace*{-2pt} $K=K_{f_1}$ with $f_{1} \in{\mathcal F}_a$, the sequence of tests ${\utphi}{}^{(n)}_{\Lamb; K}$ is locally and asymptotically most powerful, still at asymptotic level $\alpha$, for $\bigcup_{\varthetab\in {\mathcal H}_{0;q}^{\Lamb\prime\prime}}\bigcup_{g_{1}\in\mathcal {F}_a } \{ \mathrm{P}^{(n)}_{\varthetab;g_{1}}\}$ against alternatives of the form $\bigcup_{\varthetab\notin{\mathcal H}_{0}^{\Lamb}} \{ \mathrm {P}^{(n)}_{\varthetab;f_1 }\}$. 
\end{longlist}
\end{Prop}

\section{Asymptotic relative efficiencies}\label{secare}

The asymptotic relative efficiencies (AREs) of the rank-based tests of Section~\ref{ranktests} with respect to their Gaussian and pseudo-Gaussian competitors of Section~\ref{gausscase} are readily obtained as ratios of noncentrality parameters under local alternatives (squared ratios of standardized asymptotic shifts for the one-sided problems on eigenvalues). Denoting by $\mathrm{ARE}^{\varthetab,\taub}_{k,g_1}(\phi^{(n)}_1/\phi ^{(n)}_2)$ the ARE, under local\vspace*{-2pt} alternatives of the form $\mathrm {P}^{(n)}_{\varthetab+n^{-1/2}\taub;g_1}$, of a sequence of tests $\phi^{(n)}_1$ with respect to the sequence~$\phi^{(n)}_2$, we thus have the following result.
\begin{Prop} \label{6are} Let Assumptions~\ref{assuS} and~\ref{assuB} hold for the score function $K$ and (with the appropriate null hypotheses and densities) for the estimators $\hat \varthetab$ described in the previous sections. Then, for any $g_1 \in\mathcal{F}_a^{4}$,
\[
\mathrm{ARE}^{\varthetab,\taub}_{k,g_1}\bigl({\utphi}{}^{(n)}_{\betab; K}/\phi ^{(n)}_{\betab; \mathcal{N}*}\bigr)=\mathrm{ARE}^{\varthetab,\taub }_{k,g_1}\bigl({\utphi}{}^{(n)}_{\Lamb; K}/\phi ^{(n)}_{\Lamb; \mathcal{N}*}\bigr) := \frac{(1+\kappa_{k}(g_1))\mathcal{J}^2_k(K,g_1)}{k(k+2)\mathcal{J}_k(K)}.
\] \end{Prop} Table~\ref{AREtable} provides numerical values of these AREs for various values of the space \begin{table} \caption{AREs of the van\vspace*{-3pt} der Waerden (vdW), Wilcoxon (W) and Spearman (SP) rank-based tests~${\utphi}{}^{(n)}_{\betab; K}$ and $ {\utphi}{}^{(n)}_{\Lamb; K}$ with respect to their pseudo-Gaussian counterparts, under $k$-dimensional Student (with $5$, $8$ and $12$ degrees of freedom), Gaussian, and power-exponential densities (with parameter $\eta=2,3,5$), for $k=2$, $3$, $4$, $6$, $10$, and $k \rightarrow\infty$} \label{AREtable} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcccccccc@{}} \hline & & \multicolumn{7}{c@{}}{\textbf{Underlying density}}\\[-4pt] & & \multicolumn{7}{c@{}}{\hrulefill}\\ $\bolds K$& $\bolds k$ & $\bolds{t_{5}}$ & $\bolds{t_{8}}$ & $\bolds{t_{12}}$ & $\bolds{\mathcal{N}}$ & $\bolds{e_2}$ & $\bolds{e_3}$ & $\bolds{e_5}$ \\ \hline vdW & \phantom{0}2 & 2.204 & 1.215 & 1.078 & 1.000 & 1.129 & 1.308 & 1.637 \\ & \phantom{0}3 & 2.270 & 1.233 & 1.086 & 1.000 & 1.108 & 1.259 & 1.536 \\ & \phantom{0}4 & 2.326 & 1.249 & 1.093 & 1.000 & 1.093 & 1.223 & 1.462 \\ & \phantom{0}6 & 2.413 & 1.275 & 1.106 & 1.000 & 1.072 & 1.174 & 1.363 \\ & 10 & 2.531 & 1.312 & 1.126 & 1.000 & 1.050 & 1.121 & 1.254 \\ & $\infty$ & 3.000 & 1.500 & 1.250 & 1.000 & 1.000 & 1.000 & 1.000 \\ [3pt] W & \phantom{0}2 & 2.258 & 1.174 & 1.001 & 0.844 & 0.789 & 0.804 & 0.842 \\ & \phantom{0}3 & 2.386 & 1.246 & 1.068 & 0.913 & 0.897 & 0.933 & 1.001 \\ & \phantom{0}4 & 2.432 & 1.273 & 1.094 & 0.945 & 0.955 & 1.006 & 1.095 \\ & \phantom{0}6 & 2.451 & 1.283 & 1.105 & 0.969 & 1.008 & 1.075 & 1.188 \\ & 10 & 2.426 & 1.264 & 1.088 & 0.970 & 1.032 & 1.106 & 1.233 \\ & $\infty$ & 2.250 & 1.125 & 0.938 & 0.750 & 0.750 & 0.750 & 0.750 \\ [3pt] SP & \phantom{0}2 & 2.301 & 1.230 & 1.067 & 0.934 & 0.965 & 1.042 & 1.168 \\ & \phantom{0}3 & 2.277 & 1.225 & 1.070 & 0.957 & 1.033 & 1.141 & 1.317 \\ & \phantom{0}4 & 2.225 & 1.200 & 1.051 & 0.956 & 1.057 & 1.179 & 
1.383 \\ & \phantom{0}6 & 2.128 & 1.146 & 1.007 & 0.936 & 1.057 & 1.189 & 1.414 \\ & 10 & 2.001 & 1.068 & 0.936 & 0.891 & 1.017 & 1.144 & 1.365 \\ & $\infty$ & 1.667 & 0.833 & 0.694 & 0.556 & 0.556 & 0.556 & 0.556 \\ \hline \end{tabular*} \end{table} dimension $k$ and selected radial densities $g_1$ (Student, Gaussian and power-expo\-nential), and for the van der Waerden tests ${\utphi}{}^{(n)}_{ \betab;{\mathrm{vdW}}}$ and ${\utphi }{}^{(n)}_{ \Lamb;{\mathrm{vdW}}}$, the\vspace*{-2pt} Wilcoxon tests ${\utphi}{}^{(n)}_{ \betab; K_1}$ and ${\utphi}{}^{(n)}_{ \Lamb ; K_1}$, and the Spearman tests ${\utphi}{}^{(n)}_{ \betab; K_2}$ and ${\utphi}{}^{(n)}_{ \Lamb; K_2}$ (the score functions $K_a$, $a>0$ were defined in Section~\ref{scores}). These values coincide with the ``AREs for shape'' obtained in \citet{HP06a}, which implies [\citet{P06}] that the AREs of van der Waerden tests with respect to their pseudo-Gaussian counterparts are uniformly larger than or equal to one (an extension of the classical Chernoff--Savage property): \[ \inf_{g_1} \operatorname{ARE}^{\varthetab,\taub}_{k,g_1}\bigl({\utphi }{}^{(n)}_{\betab; \mathrm{vdW}}/\phi^{(n)}_{\betab; \mathcal{N}*}\bigr)=\inf_{g_1} \operatorname{ARE}^{\varthetab,\taub}_{k,g_1}\bigl({\utphi}{}^{(n)}_{\Lamb; \mathrm{vdW}}/\phi^{(n)}_{\Lamb; \mathcal{N}*}\bigr)=1. 
\] \section{Simulations} \label{secsimu} In this section, we investigate via simulations the finite-sample performances of the following tests: (i) the Anderson test $\phi_{ \betab;\mathrm{Anderson}}^{(n)}$, the optimal Gaussian test $\phi_{ \betab; {\mathcal N}}^{(n)}$, the pseudo-Gaussian test $\phi_{ \betab; {\mathcal N}{*}}^{(n)}$, the robust test $\phi^{(n)}_{\betab;\mathrm{Tyler}}$ based on $Q_{\mathrm{Tyler}}^{(n)}$, and various rank-based tests $\phi_{ \betab; K}^{(n)}$ (with van der Waerden, Wilcoxon, Spearman and sign scores, but also with scores achieving optimality at $t_1$, $t_3$ and $t_5$ densities), all for the null hypothesis ${\mathcal H}_{0}^{\betab}$ on eigenvectors; (ii) the optimal Anderson test $\phi_{ \Lamb;\mathrm {Anderson}}^{(n)}=\phi_{ \Lamb; {\mathcal N}}^{(n)}$, the pseudo-Gaussian test $\phi_{ \Lamb; {\mathcal N}{*}}^{(n)} =\phi^{(n)}_{ \Lamb; \mathrm{Davis}}$ based on $T_{\mathrm {Davis}}^{(n)}$, and various rank-based tests $\phi_{ \Lamb; K}^{(n)}$ (still with van der Waerden, Wilcoxon, Spearman, sign, $t_1$, $t_3$ and $t_5$ scores), for the null hypothesis ${\mathcal H}_{0}^{ \Lamb} $ on eigenvalues. Simulations were conducted as follows. We generated $N = 2500$ mutually independent samples of i.i.d. trivariate ($k=3$) random vectors $ {\bolds\varepsilon}_{\ell;j}$, $\ell=1, 2, 3, 4, j=1,\ldots, n=100, $ with spherical Gaussian (${\bolds\varepsilon}_{1;j}$), $t_{5}$ (${\bolds\varepsilon}_{2;j}$), $t_3$ (${\bolds\varepsilon}_{3;j}$) and $t_{1}$ (${\bolds\varepsilon}_{4;j}$) densities, respectively. 
Letting \[ \Lamb:= \pmatrix{ 10 & 0 & 0 \cr0 & 4 & 0 \cr0 & 0 & 1},\qquad \mathbf{B}_{\xi}:= \pmatrix{ \cos(\pi\xi/12) & - \sin(\pi\xi/12) & 0 \cr\sin(\pi\xi/12) & \cos(\pi\xi/12) & 0 \cr0& 0& 1} \] and \[ \mathbf{L}_{\xi}:= \pmatrix{ 3 \xi& 0 & 0 \cr0 & 0 & 0 \cr0& 0& 0}, \] each ${\bolds\varepsilon}_{\ell;j}$ was successively transformed into \begin{equation}\label{sample1}\quad \Xb_{\ell;j ;\xi}= \mathbf{B}_{\xi} \Lamb^{1/2}{\bolds\varepsilon}_{\ell ;j},\qquad \ell=1, 2, 3, 4, j=1,\ldots,n, \xi=0,\ldots,3 , \end{equation} and \begin{equation}\label{sample2}\quad\qquad \Yb_{\ell;j;\xi}= (\Lamb+\mathbf{L}_{\xi})^{1/2}{\bolds\varepsilon }_{\ell;j},\qquad \ell=1, 2, 3, 4, j=1,\ldots,n, \xi=0,\ldots, 3 . \end{equation} The value $ \xi=0$ corresponds to the null hypothesis ${\mathcal H}_{0}^{\betab}\dvtx\betab_{1}=(1, 0, 0)\pr$ for the $\Xb_{\ell;j;\xi }$'s and the null hypothesis ${\mathcal H}_{0}^{\Lamb}\dvtx{\sum _{j=q+1}^{k} \lambda_{j; \Vb}}/{\sum_{j=1}^{k} \lambda_{j; \Vb }}=1/3$ (with $q=1$ and $k=3$) for the $\Yb_{\ell;j;\xi}$'s; $\xi =1,2,3$ characterizes increasingly distant alternatives. We then performed the tests listed under (i) and (ii) above in $N=2500$ independent replications of such samples. Rejection frequencies are reported in Table~\ref{simuresu2} for ${\mathcal H}_{0}^{\betab}$ and in Table~\ref{simuresu3} for ${\mathcal H}_{0}^{\Lamb}$. 
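The transformations~(\ref{sample1}) and~(\ref{sample2}) amount to the following numpy sketch (the function name is ours; \texttt{eps} stands for one of the spherical samples):

```python
import numpy as np

def transform_samples(eps, xi):
    """Map a spherical (n, 3) sample eps to X (eigenvector alternatives,
    null at xi = 0) and Y (eigenvalue alternatives), as in the text."""
    Lam = np.diag([10.0, 4.0, 1.0])
    a = np.pi * xi / 12
    B = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    L = np.diag([3.0 * xi, 0.0, 0.0])
    X = eps @ (B @ np.sqrt(Lam)).T   # X_j = B_xi Lam^{1/2} eps_j
    Y = eps @ np.sqrt(Lam + L).T     # Y_j = (Lam + L_xi)^{1/2} eps_j
    return X, Y
```

At $\xi=0$, $\mathbf{B}_0=\mathbf{I}_3$ and $\mathbf{L}_0=\mathbf{0}$, so both transformations reduce to $\Lamb^{1/2}$ and the corresponding null hypotheses hold; $\xi=1,2,3$ rotate the first eigenvector and inflate the first eigenvalue, respectively.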
\begin{table} \caption{Rejection frequencies (out of $N=2500$ replications), under the null ${\mathcal H}_{0}^{\betab}$ and increasingly distant alternatives (see Section \protect\ref{secsimu} for details), of the Anderson test $\phi^{(n)}_{\betab; \mathrm {Anderson}}$, the Tyler test $\phi^{(n)}_{\betab; \mathrm{Tyler}}$, the parametric Gaussian test $\phi^{(n)}_{\betab; \mathcal{N}}$, its pseudo-Gaussian version $\phi^{(n)}_{\betab; \mathcal{N}*}$, and the signed-rank tests with van der Waerden, $t_\nu$ ($\nu=1$, $3$, $5$), sign, Wilcoxon, and Spearman scores, ${\utphi}{}^{(n)}_{\betab; \mathrm{vdW}}$, ${\utphi}{}^{(n)}_{\betab; t_{1, \nu}}$, ${\utphi}{}^{(n)}_{\betab; \mathrm{S}}$, ${\utphi}{}^{(n)}_{\betab; \mathrm{W}}$, and ${\utphi}{}^{(n)}_{\betab; \mathrm{SP}}$, respectively. Sample size is $n=100$. All tests were based on asymptotic 5\% critical values} \label{simuresu2} {\fontsize{8.9}{13}\selectfont{ \begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}lcccccccc@{}} \hline & \multicolumn{8}{c@{}}{$\bolds\xi$}\\[-4pt] & \multicolumn{8}{c@{}}{\hrulefill}\\ \textbf{Test} & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} \\ \hline & \multicolumn{4}{c}{$\mathcal{N}$} & \multicolumn{4}{c@{}}{$t_{5}$}\\[-4pt] & \multicolumn{4}{c}{\hrulefill} & \multicolumn{4}{c@{}}{\hrulefill}\\ $\phi^{(n)}_{\betab; \mathrm{Anderson}}$ & 0.0572 & 0.3964 & 0.8804 & 0.9852 & 0.2408 & 0.4940 & 0.8388 & 0.9604 \\[2pt] $\phi^{(n)}_{\betab; \mathcal{N}}$ & 0.0528 & 0.3724 & 0.8568 & 0.9752 & 0.2284 & 0.4716 & 0.8168 & 0.9380 \\[2pt] $\phi^{(n)}_{\betab; \mathrm{Tyler}}$ & 0.0572 & 0.3908 & 0.8740 & 0.9856 & 0.0612 & 0.2520 & 0.6748 & 0.8876 \\[2pt] $\phi^{(n)}_{\betab; \mathcal{N}*}$ & 0.0524 & 0.3648 & 0.8512 & 0.9740 & 0.0544 & 0.2188 & 0.6056 & 0.8156 \\[2pt] ${\utphi}{}^{(n)}_{\betab; \mathrm{vdW}}$ & 0.0368 & 0.2960 & 0.8032 & 0.9608 & 0.0420 & 0.2328 & 0.6908 & 0.9056 \\ ${\utphi}{}^{(n)}_{\betab; t_{1,5}}$ & 0.0452 & 
0.3204 & 0.8096 & 0.9596 & 0.0476 & 0.2728 & 0.7440 & 0.9284 \\ ${\utphi}{}^{(n)}_{\betab; t_{1,3}}$ & 0.0476 & 0.3104 & 0.7988 & 0.9532 & 0.0496 & 0.2760 & 0.7476 & 0.9280\\ ${\utphi}{}^{(n)}_{\betab; t_{1,1}}$ & 0.0488 & 0.2764 & 0.7460 & 0.9220 & 0.0552 & 0.2652 & 0.7184 & 0.9024 \\ ${\utphi}{}^{(n)}_{\betab; \mathrm{S}}$ & 0.0448 & 0.2268 & 0.6204 & 0.8392 & 0.0496 & 0.2164 & 0.6236 & 0.8324 \\ ${\utphi}{}^{(n)}_{\betab; \mathrm{W}}$ & 0.0456 & 0.3144 & 0.8012 & 0.9556 & 0.0484 & 0.2808 & 0.7464 & 0.9320 \\ ${\utphi}{}^{(n)}_{\betab; \mathrm{SP}}$ & 0.0444 & 0.3096 & 0.8160 & 0.9576 & 0.0464 & 0.2548 & 0.7068 & 0.9152 \\ [2pt] & \multicolumn{4}{c}{$t_{3}$} & \multicolumn{4}{c@{}}{$t_{1}$}\\[-4pt] & \multicolumn{4}{c}{\hrulefill} & \multicolumn{4}{c@{}}{\hrulefill}\\ $\phi^{(n)}_{\betab; \mathrm{Anderson}}$ & 0.4772 & 0.6300 & 0.8532 & 0.9452 & 0.9540 & 0.9580 & 0.9700 & 0.9740 \\[2pt] $\phi^{(n)}_{\betab; \mathcal{N}}$ & 0.4628 & 0.6040 & 0.8304 & 0.9168 & 0.9320 & 0.9384 & 0.9472 & 0.9480 \\[2pt] $\phi^{(n)}_{\betab; \mathrm{Tyler}}$ & 0.0892 & 0.2248 & 0.5364 & 0.7508 & 0.5704 & 0.5980 & 0.6584 & 0.7444 \\[2pt] $\phi^{(n)}_{\betab; \mathcal{N}*}$ & 0.0616 & 0.1788 & 0.4392 & 0.6092 & 0.4516 & 0.4740 & 0.5160 & 0.5624 \\[2pt] ${\utphi}{}^{(n)}_{\betab; \mathrm{vdW}}$ & 0.0444 & 0.2172 & 0.6464 & 0.8676 & 0.0472 & 0.1656 & 0.5104 & 0.7720 \\ ${\utphi}{}^{(n)}_{\betab; t_{1,5}}$ & 0.0488 & 0.2628 & 0.7120 & 0.9076 & 0.0560 & 0.2100 & 0.6068 & 0.8508 \\ ${\utphi}{}^{(n)}_{\betab; t_{1,3}}$ & 0.0500 & 0.2728 & 0.7156 & 0.9116 & 0.0576 & 0.2156 & 0.6292 & 0.8672 \\ ${\utphi}{}^{(n)}_{\betab; t_{1,1}}$ & 0.0476 & 0.2688 & 0.7100 & 0.9084 & 0.0548 & 0.2256 & 0.6600 & 0.8856 \\ ${\utphi}{}^{(n)}_{\betab; \mathrm{S}}$ & 0.0492 & 0.2202 & 0.6188 & 0.8352 & 0.0512 & 0.2116 & 0.6172 & 0.8448 \\ ${\utphi}{}^{(n)}_{\betab; \mathrm{W}}$ & 0.0520 & 0.2708 & 0.7136 & 0.9120 & 0.0552 & 0.2148 & 0.6148 & 0.8604 \\ ${\utphi}{}^{(n)}_{\betab; \mathrm{SP}}$ & 0.0544 & 0.2436 & 
0.6648 & 0.8776 & 0.0580 & 0.1824 & 0.5200 & 0.7740 \\ \hline \end{tabular*}}} \end{table} \begin{table} \caption{Rejection frequencies (out of $N=2500$ replications), under the null ${\mathcal H}_{0}^{\Lamb}$ and increasingly distant alternatives (see Section \protect\ref{secsimu}), of the optimal Gaussian test $\phi^{(n)}_{\Lamb; \mathcal{N}} = \phi^{(n)}_{\Lamb; \mathrm{Anderson}}$, its pseudo-Gaussian version $\phi^{(n)}_{\Lamb; \mathcal{N}*} = \phi^{(n)}_{ \Lamb; \mathrm{Davis}}$, and the signed-rank tests with van der Waerden, $t_\nu$ ($\nu= 1$, $3$, $5$), sign, Wilcoxon, and Spearman sco\-res ${\utphi }{}^{(n)}_{\Lamb; \mathrm{vdW}}$, ${\utphi}{}^{(n)}_{\Lamb; t_{1, \nu }}$, ${\utphi}{}^{(n)}_{\Lamb; \mathrm{S}}$, ${\utphi}{}^{(n)}_{\Lamb; \mathrm{W}}$, ${\utphi}{}^{(n)}_{\Lamb; \mathrm{SP}}$. Sample size is $n = 100$. All tests were based on asymptotic 5\% critical values and (in parentheses) simulated ones} \label{simuresu3} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lllll@{}} \hline & \multicolumn{4}{c@{}}{$\bolds\xi$}\\[-4pt] & \multicolumn{4}{c@{}}{\hrulefill}\\ \textbf{Test} & \multicolumn{1}{c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{1}} & \multicolumn{1}{c}{\textbf{2}} & \multicolumn{1}{c@{}}{\textbf{3}} \\ \hline & \multicolumn{4}{c@{}}{$\mathcal{N}$}\\[-4pt] & \multicolumn{4}{c@{}}{\hrulefill}\\ $\phi^{(n)}_{\Lamb; \mathcal{N}}=\phi^{(n)}_{\Lamb; \mathrm {Anderson}}$ & 0.0460 & 0.4076 & 0.8308 & 0.9604 \\[2pt] $\phi^{(n)}_{\Lamb; \mathcal{N}*}=\phi^{(n)}_{ \Lamb; \mathrm{ Davis}}$ & 0.0432 & 0.3976 & 0.8220 & 0.9572 \\[2pt] ${\utphi}{}^{(n)}_{\Lamb; \mathrm{vdW}}$ & 0.0608 (0.0480) & 0.4604 (0.4116) & 0.8576 (0.8280) & 0.9668 (0.9596) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,5}}$ & 0.0728 (0.0480) & 0.4804 (0.3972) & 0.8572 (0.8116) & 0.9644 (0.9504) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,3}}$ & 0.0748 (0.0496) & 0.4804 (0.3884) & 0.8524 (0.7964) & 0.9612 (0.9432) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,1}}$ & 0.0780 (0.0504) & 0.4532 (0.3572) & 0.8160 
(0.7320) & 0.9448 (0.9112) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{S}}$ & 0.0864 (0.0508) & 0.3980 (0.3088) & 0.7384 (0.6408) & 0.9028 (0.8552) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{W}}$ & 0.0744 (0.0480) & 0.4816 (0.3908) & 0.8544 (0.8012) & 0.9640 (0.9464) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{SP}}$ & 0.0636 (0.0460) & 0.4664 (0.4096) & 0.8564 (0.8200) & 0.9668 (0.9584) \\ & \multicolumn{4}{c@{}}{$t_{5}$}\\[-4pt] & \multicolumn{4}{c@{}}{\hrulefill}\\ $\phi^{(n)}_{\Lamb; \mathcal{N}}=\phi^{(n)}_{\Lamb; {\mathrm {Anderson}}}$ & 0.1432 & 0.4624 & 0.7604 & 0.9180 \\[2pt] $\phi^{(n)}_{\Lamb; \mathcal{N}*}=\phi^{(n)}_{ \Lamb; \mathrm{ Davis}}$ & 0.0504 & 0.2768 & 0.5732 & 0.7988 \\[2pt] ${\utphi}{}^{(n)}_{\Lamb; \mathrm{vdW}}$ & 0.0692 (0.0548) & 0.4256 (0.3772) & 0.7720 (0.7404) & 0.9444 (0.9324) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,5}}$ & 0.0736 (0.0492) & 0.4544 (0.3772) & 0.7980 (0.7372) & 0.9524 (0.9332) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,3}}$ & 0.0732 (0.0452) & 0.4576 (0.3748) & 0.7968 (0.7320) & 0.9524 (0.9288) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,1}}$ & 0.0776 (0.0416) & 0.4448 (0.3484) & 0.7832 (0.6952) & 0.9436 (0.9116) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{S}}$ & 0.0768 (0.0436) & 0.4060 (0.3172) & 0.7180 (0.6360) & 0.9100 (0.8592) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{W}}$ & 0.0732 (0.0456) & 0.4512 (0.3756) & 0.7972 (0.7364) & 0.9524 (0.9308) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{SP}}$ & 0.0764 (0.0544) & 0.4360 (0.3736) & 0.7776 (0.7304) & 0.9480 (0.9300) \\ \hline \end{tabular*} \end{table} \setcounter{table}{2} \begin{table} \caption{(Continued.)} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lllll@{}} \hline & \multicolumn{4}{c@{}}{$\bolds\xi$}\\[-4pt] & \multicolumn{4}{c@{}}{\hrulefill}\\ \textbf{Test} & \multicolumn{1}{c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{1}} & \multicolumn{1}{c}{\textbf{2}} & \multicolumn{1}{c@{}}{\textbf{3}} \\ \hline & \multicolumn{4}{c@{}}{$t_{3}$}\\[-4pt] & \multicolumn{4}{c@{}}{\hrulefill}\\ $\phi^{(n)}_{\Lamb; 
\mathcal{N}}=\phi^{(n)}_{\Lamb; \mathrm {Anderson}}$ & 0.2572 & 0.5308 & 0.7200 & 0.8596 \\[2pt] $\phi^{(n)}_{\Lamb; \mathcal{N}*}=\phi^{(n)}_{ \Lamb; \mathrm{ Davis}}$ & 0.0368 & 0.1788 & 0.3704 & 0.5436 \\[2pt] ${\utphi}{}^{(n)}_{\Lamb; \mathrm{vdW}}$ & 0.0708 (0.0560) & 0.4088 (0.3260) & 0.7540 (0.7040) & 0.9304 (0.9140) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,5}}$ & 0.0812 (0.0544) & 0.4472 (0.3524) & 0.7936 (0.7240) & 0.9416 (0.9208) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,3}}$ & 0.0832 (0.0560) & 0.4556 (0.3568) & 0.7944 (0.7256) & 0.9452 (0.9192) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,1}}$ & 0.0924 (0.0548) & 0.4464 (0.3400) & 0.7812 (0.7024) & 0.9364 (0.8996) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{S}}$ & 0.0936 (0.0604) & 0.4104 (0.2928) & 0.7320 (0.6404) & 0.9012 (0.8528) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{W}}$ & 0.0832 (0.0572) & 0.4488 (0.3580) & 0.7956 (0.7272) & 0.9448 (0.9180) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{SP}}$ & 0.0796 (0.0576) & 0.4212 (0.3412) & 0.7572 (0.7020) & 0.9276 (0.9044) \\ & \multicolumn{4}{c@{}}{$t_{1}$}\\[-4pt] & \multicolumn{4}{c@{}}{\hrulefill}\\ $\phi^{(n)}_{\Lamb; \mathcal{N}}=\phi^{(n)}_{\Lamb; \mathrm {Anderson}}$ & 0.7488 & 0.8000 & 0.8288 & 0.8528 \\[2pt] $\phi^{(n)}_{\Lamb; \mathcal{N}*}=\phi^{(n)}_{ \Lamb; \mathrm{ Davis}}$ & 0.0072 & 0.0080 & 0.0172 & 0.0296 \\[2pt] ${\utphi}{}^{(n)}_{\Lamb; \mathrm{vdW}}$ & 0.0724 (0.0596) & 0.3500 (0.3032) & 0.6604 (0.6176) & 0.8600 (0.8332) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,5}}$ & 0.0824 (0.0512) & 0.3836 (0.3120) & 0.7312 (0.6492) & 0.9036 (0.8664) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,3}}$ & 0.0828 (0.0532)& 0.3936 (0.3108) & 0.7488 (0.6644) & 0.9168 (0.8776) \\ ${\utphi}{}^{(n)}_{\Lamb; t_{1,1}}$ & 0.0864 (0.0532) & 0.4088 (0.3104) & 0.7612 (0.6720) & 0.9264 (0.8832) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{S}}$ & 0.0920 (0.0556) & 0.3896 (0.3028) & 0.7336 (0.6488) & 0.9092 (0.8564) \\ ${\utphi}{}^{(n)}_{\Lamb; \mathrm{W}}$ & 0.0824 (0.0524) & 0.3872 (0.3072) & 0.7376 (0.6552) & 0.9108 (0.8728) \\ 
${\utphi}{}^{(n)}_{\Lamb; \mathrm{SP}}$ & 0.0752 (0.0588) & 0.3536 (0.2992) & 0.6648 (0.6064) & 0.8604 (0.8220) \\
\hline
\end{tabular*}\vspace*{-3pt}
\end{table}

Inspection of Table~\ref{simuresu2} confirms our theoretical results. Anderson's $\phi_{ \betab;\mathrm{Anderson}}^{(n)}$ meets the level constraint at\vspace*{2pt} Gaussian densities only; $\phi^{(n)}_{\betab;\mathrm {Tyler}}$ (equivalently, $\phi^{(n)}_{\betab; \mathcal{N}*}$) further survives the $t_5$ but not the $t_3$ or $t_1$ densities, which have infinite fourth-order moments. In contrast, the rank-based tests for eigenvectors throughout satisfy the nominal asymptotic level condition (a 95\% confidence interval here has half-width 0.0085). Despite the relatively small sample size $n=100$, empirical power and ARE rankings almost perfectly agree.

The results for eigenvalues, shown in Table~\ref{simuresu3}, are slightly less auspicious. While the Gaussian and pseudo-Gaussian tests remain hopelessly sensitive to violations of the Gaussian and finite fourth-order moment assumptions, respectively, the rank tests, when based on asymptotic critical values, all significantly overreject, indicating that asymptotic conditions are not met for $n=100$. We therefore propose an alternative construction for critical values. Lemma \ref {alihoprimeprime} indeed implies that the asymptotic distribution of the test statistic ${\utT}{}^{(n)}_{K}$ (based\vspace*{-3pt} on the ranks and signs of estimated residuals) is the same, under $\mathrm{P}^{(n)}_{\varthetab _0; g_{1}}$, $\varthetab_0 \in{\mathcal H}_{0;q}^{\Lamb\prime\prime }$, as that of ${\utT}{}_{\varthetab_0; K}^{(n)}$ (based on the ranks and signs of \textit{exact} residuals, which are distribution-free). The latter distribution can be simulated, and its simulated quantiles provide valid approximations of the exact ones.
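The simulation of such critical values can be outlined as follows (an illustrative Python sketch, with names of our choosing, and not the exact statistic of the paper: the argument \texttt{stat} stands for any statistic that depends on the sample only through the ranks of the residual norms and the unit vectors, hence is distribution-free under the null):

```python
import numpy as np

def simulated_critical_value(stat, n, k, alpha=0.05, M=10000, seed=42):
    """Monte Carlo alpha-quantile of a distribution-free sign-and-rank
    statistic: any spherical law works, so standard normal samples are
    used; the statistic sees only ranks of norms and unit vectors."""
    rng = np.random.default_rng(seed)
    vals = np.empty(M)
    for m in range(M):
        Z = rng.standard_normal((n, k))
        r = np.linalg.norm(Z, axis=1)
        U = Z / r[:, None]                  # signs (directions)
        ranks = r.argsort().argsort() + 1   # ranks of the norms
        vals[m] = stat(ranks / (n + 1), U)
    return np.quantile(vals, alpha)
```

Because the ranks and the directions are jointly distribution-free under the null, the simulated quantile is a valid finite-$n$ critical value for whatever score function \texttt{stat} encodes.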
The following critical values were obtained from $M=100$,000 replications:\vadjust{\goodbreak} $-1.7782$ for van der Waerden, $-1.8799$ for $t_5$-scores, $-1.8976$ for $t_3$-scores, $-1.9439$ for $t_1$-scores, $-1.9320$ for sign scores, $-1.8960$ for Wilcoxon and $-1.8229$ for Spearman. Note that they all are smaller than $-1.645$, which is consistent with the overrejection phenomenon. The corresponding rejection frequencies are reported in parentheses in Table~\ref{simuresu3}. They all are quite close to the nominal probability level $\alpha= 5\%$, while empirical powers are in line with theoretical ARE values. \begin{appendix}\label{app} \section*{Appendix} We start with the proof of Proposition~\ref{LAN}. To this end, note that although generally stated as a property of a parametric sequence of families of the form $\mathcal{P}^{(n)}= \{\mathrm{P}^{(n)}_{\bolds \omega} \vert{\bolds\omega}\in{\bolds\Omega}\}$ ($n\in\N$), LAN (ULAN)\vadjust{\goodbreak} actually is a property of the parametrization ${\bolds \omega}\mapsto\mathrm{P}^{(n)}_{\bolds\omega}$, ${\bolds\omega}\in {\bolds\Omega}$ of $\mathcal{P}^{(n)}$ (i.e., of a bijective map from $\Omegab$ to $\mathcal{P}^{(n)}$). 
When parametrized with ${\bolds\omega}:=({\bolds\theta}\pr, (\operatorname{vech} \Sigb )\pr)\pr$, ${\bolds\omega}\in\Omegab:=\R^k\times\operatorname {vech}(\mathcal{S}_k)$, where $\mathcal{S}_k$ stands for the class of positive definite symmetric real $k\times k$ matrices, the elliptical families we are dealing with here have been shown to be ULAN in \citet{HP06a}, with central sequence \setcounter{equation}{0} \begin{equation} \label{HPcentral} \Deltab_{{\bolds\omega}}^{(n)}:= \pmatrix{\displaystyle n^{-1/2} \sum_{i=1}^{n} \varphi_{f_1}(d_{i}) {\Sigb}^{-1/2}\mathbf{U}_{i} \vspace*{2pt}\cr \displaystyle\frac{ 1}{2\sqrt{n}}\mathbf{P}_{k} ( {\Sigb}^{\otimes2} )^{ -1/2} \sum_{i=1}^{n} \vecop\bigl( \varphi_{f_1} (d_{i}) d_{i} \mathbf{U}_{i} \mathbf{U}_{i} \pr -\mathbf{I}_{k}\bigr)}, \end{equation} with $d_{i} =d_{i}({\bolds\theta},\Sigb)$ and $\mathbf{U}_{i}=\mathbf {U}_{i}({\bolds\theta},\Sigb)$, where $\mathbf{P}_{k}\pr$ denotes the \textit{duplication matrix} [such that $\mathbf{P}_{k}\pr\vechop(\mathbf {A})=\vecop(\mathbf{A})$ for any $k \times k$ symmetric matrix $\mathbf{A}$]. The families we are considering in this proposition are slightly different, because the $\varthetab$-parametrization requires $k$ identifiable eigenvectors. 
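As a concrete check (ours, purely illustrative), the duplication matrix can be built explicitly and the defining identity $\mathbf{P}_{k}\pr\vechop(\mathbf{A})=\vecop(\mathbf{A})$ verified numerically, with column-major $\vecop$ and $\vechop$:

```python
import numpy as np

def duplication_matrix(k):
    """D of size (k^2, k(k+1)/2) with D @ vech(A) = vec(A) for every
    symmetric k x k matrix A (column-major vec and vech)."""
    D = np.zeros((k * k, k * (k + 1) // 2))
    col = 0
    for j in range(k):               # vech stacks the lower-triangular columns
        for i in range(j, k):
            D[j * k + i, col] = 1.0  # position of A[i, j] in vec(A)
            D[i * k + j, col] = 1.0  # position of the symmetric entry A[j, i]
            col += 1
    return D

def vech(A):
    """Stack the on-and-below-diagonal entries of A, column by column."""
    return np.concatenate([A[j:, j] for j in range(A.shape[0])])
```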
However, denoting by $\Omegab^B:=\R ^k\times\operatorname{vech}(\mathcal{S}_k^B)$, where $\mathcal {S}^B_k$ is the set of all matrices in $\mathcal{S}_k$ compatible with Assumption~\ref{assuA}, the mapping \begin{eqnarray*} &&\dbar\dvtx{\bolds\omega} = ({\bolds\theta}\pr, (\operatorname{vech} \Sigb)\pr)\pr\in\Omegab^B \\ &&\quad\mapsto\quad\dbar({\bolds\omega}) := \bigl({\bolds\theta}\pr, (\operatorname{det} \Sigb)^{1/k} , (\dvecrond\Lamb_\Sigb)\pr/ (\operatorname{det} \Sigb)^{1/k}, (\vecop\betab)\pr\bigr)\pr\in \Thetab \end{eqnarray*} from the open subset $\Omegab^B$ of $\R^{k+k(k+1)/2}$ to $\Thetab$ is a differentiable mapping such that, with a small abuse of notation, $\mathrm{P}^{(n)}_{{\bolds\omega}; f_1} = \mathrm{P}^{(n)}_{\varthetab =\hspace*{1pt}\dbarr({\bolds\omega}); f_1} $ and $\mathrm{P}^{(n)}_{\varthetab; f_1} = \mathrm{P}^{(n)}_{{\bolds\omega}=\hspace*{1pt}\dbarr^{-1} (\varthetab); f_1} $ (with $\hspace*{2pt}\dbar^{-1}$ defined on $\Thetab$ only). The proof of Proposition~\ref{LAN} consists in showing how ULAN in the ${\bolds \omega}$-parametrization implies ULAN in the $\varthetab $-parametrization, and how the central sequences and information matrices are related to each other. Let us start with a lemma. \begin{Lem}\label{LElemme} Let the parametrization ${\bolds\omega}\mapsto\mathrm{P}^{(n)}_{\bolds \omega}$, ${\bolds\omega}\in{\bolds\Omega}$, where $\Omegab$ is an open subset of $\R^{k_1}$ be ULAN for $\mathcal{P}^{(n)}= \{\mathrm {P}^{(n)}_{\bolds\omega} \vert{\bolds\omega}\in{\bolds\Omega }\}$, with central sequence $\Deltab^{(n)}_{\bolds\omega}$ and information matrix $\Gamb_{\bolds\omega}$. Let $\dbar\dvtx{\bolds \omega}\mapsto\varthetab:= \dbar({\bolds\omega})$ be a continuously differentiable mapping from $\R^{k_1}$ to $\R^{k_2}$ ($k_2\geq k_1$) with full column rank Jacobian matrix $D\dbar({\bolds \omega})$ at every ${\bolds\omega}$. 
Write $\Thetab:= \dbar (\Omegab)$, and assume that $\varthetab\mapsto\mathrm {P}^{(n)}_\varthetab$, $\varthetab\in{\Thetab}$ provides another parametrization of $\mathcal{P}^{(n)}$. Then, $\varthetab\mapsto\mathrm{P}^{(n)}_\varthetab$, $\varthetab\in {\Thetab}$ is also ULAN, with [at $\varthetab=\dbar({\bolds\omega})$] central sequence $\Deltab^{(n)}_\varthetab= (D^-\dbar({\bolds\omega}) )\pr\Deltab ^{(n)}_{\bolds\omega}$ and information matrix $\Gamb_\varthetab= (D^-\dbar({\bolds\omega }))\pr\Gamb_{\bolds\omega}D^-\dbar({\bolds\omega})$, where $D^-\dbar({\bolds\omega}):=((D\dbar({\bolds\omega}) )\pr D\dbar({\bolds\omega}))^{-1}(D\dbar({\bolds\omega}))\pr$ is the Moore--Penrose inverse of $D\dbar({\bolds\omega})$. \end{Lem} \begin{pf} Throughout, let $\varthetab$ and ${\bolds\omega}$ be such that $\varthetab= \dbar({\bolds\omega})$. Consider $\varthetab\in\Thetab$ and an arbitrary sequence $\varthetab^{(n)}=\varthetab+ O(n^{-1/2})\in\Thetab$. The characterization of ULAN for the $\varthetab$-parametrization involves bounded sequence $\taub_{**}^{(n)}\in\R^{k_2}$ such that the perturbation $\varthetab^{(n)}+ n^{-1/2}\taub_{**}^{(n)}$ still belongs to $\Thetab$. In order for $\varthetab^{(n)}+ n^{-1/2}\taub_{**}^{(n)}$ to belong to $\Thetab$, it is necessary that $\taub_{**}^{(n)}$ be of the form $\taub_*^{(n)}+ o(1)$, with $\taub_*^{(n)}$ in the tangent space to $\Thetab$ at $\varthetab^{(n)}$, hence of the form $\taub^{(n)}+ o(1)$ with $\taub^{(n)}$ in the tangent space to $\Thetab$ at $\varthetab$, that is, $\taub^{(n)}=D\dbar({\bolds\omega})\wb ^{(n)}$ for some bounded sequence $\wb^{(n)}\in\R^{k_1}$. 
It follows from differentiability that, letting ${\bolds\omega}^{(n)}=\dbar ^{-1}(\varthetab^{(n)})$, \begin{eqnarray} \label{c'estlui} \varthetab^{(n)} + n^{-1/2}\taub_{**}^{(n)} & = & \varthetab^{(n)}+ n^{-1/2}D\dbar({\bolds\omega})\wb^{(n)}+ o(n^{-1/2}) \nonumber\\ & = & \dbar\bigl({\bolds\omega}^{(n)}\bigr) + n^{-1/2}D\dbar({\bolds\omega})\wb ^{(n)} + o(n^{-1/2}) \nonumber\\[-8pt]\\[-8pt] & = & \dbar\bigl({\bolds\omega}^{(n)}\bigr) + n^{-1/2}D\dbar\bigl({\bolds \omega}^{(n)}\bigr)\wb^{(n)}+ o(n^{-1/2}) \nonumber\\ & = & \dbar\bigl({\bolds\omega}^{(n)}+ n^{-1/2} \wb^{(n)}+ o(n^{-1/2})\bigr). \nonumber \end{eqnarray} Hence, turning to local log-likelihood ratios, in view of ULAN for the ${\bolds\omega}$-parametrization, \begin{eqnarray}\label{onyest} &&\log\bigl(d\mathrm{P}{}_{\varthetab^{(n)}+ n^{-1/2}\taub _{**}^{(n)}}^{(n)}/d\mathrm{P}^{(n)}_{\varthetab^{(n)}} \bigr) \nonumber\\ &&\qquad= \log\bigl(d\mathrm{P}^{(n)}_{ {\bolds\omega}^{(n)}+ n^{-1/2} \wb ^{(n)}+ o(n^{-1/2})}/d\mathrm{P}^{(n)}_{{\bolds\omega}^{(n)}} \bigr) \\ &&\qquad= \wb^{(n)\prime}\Deltab^{(n)}_{{\bolds\omega}^{(n)}} -\tfrac {1}{2} \wb^{(n)\prime}\Gamb_{\bolds\omega}\wb^{(n)}+o_\mathrm{P}(1)\nonumber \end{eqnarray} under $\mathrm{P}^{(n)}_{ {\bolds\omega}^{(n)}} = \mathrm{P}^{(n)}_{ \varthetab ^{(n)}}$-probability, as $\ny$. 
Now, the LAQ part of ULAN for the $\varthetab$-parametrization requires, for some random vector $\Deltab^{(n)}_{\varthetab^{(n)}}$ and constant matrix $\Gamb_\varthetab$, \begin{equation}\label{LAQvartheta}\qquad \log\bigl(d\mathrm{P}^{(n)}_{\varthetab^{(n)}+ n^{-1/2}\taub _{**}^{(n)}}/d\mathrm{P}^{(n)}_{\varthetab^{(n)}} \bigr) = \taub _{**}^{(n)\prime}\Deltab^{(n)}_{\varthetab^{(n)}}-\tfrac{1}{2} \taub_{**}^{(n)\prime}\Gamb_\varthetab\taub_{**}^{(n) } +o_\mathrm{P}(1) \end{equation} under the same $\mathrm{P}^{(n)}_{ {\bolds\omega}^{(n)}} = \mathrm{P}^{(n)}_{ \varthetab ^{(n)}}$ probability distributions with, in view of~(\ref{c'estlui}), $\taub_{**}^{(n)}= D\dbar({\bolds\omega})\wb^{(n)} + o(1)$. Identifying~(\ref{onyest}) and~(\ref{LAQvartheta}), we obtain that LAQ is satisfied for the $\varthetab$-parametrization, with any $\Deltab^{(n)}_\varthetab$ satisfying \begin{equation}\label{LAQvartheta2} (D\dbar({\bolds\omega}))\pr\Deltab^{(n)}_\varthetab= \Deltab ^{(n)}_{\bolds\omega}. \end{equation} Now, let $\mathbf{t}_i$ be the $i$th column of $D\dbar({\bolds\omega })$, $i=1,\ldots,k_1$, and choose $\mathbf{t}_{k_1+1},\ldots,\break \mathbf{t}_{k_2}\in\mathbb{R}^{k_2}$ in such a way that they span the orthogonal complement of $\mathcal{M}(D\dbar({\bolds\omega}))$. Then $\{\mathbf{t}_{i}$, $i=1,\ldots,k_2\}$ is a basis of $ \mathbb {R}^{k_2}$, so that there exists a unique $k_2$-tuple $(\delta _{\varthetab;1}^{(n)},\ldots, \delta^{(n)}_{\varthetab; k_2})\pr$ such that $\Deltab^{(n)}_\varthetab=\sum_{i=1}^{k_2} \delta ^{(n)}_{\varthetab;i} \mathbf{t}_i$. 
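To fix ideas (a simple illustrative special case, not part of the argument): since $(D\dbar({\bolds\omega}))\pr\mathbf{t}_i=\mathbf{0}$ for $i=k_1+1,\ldots,k_2$, equation~(\ref{LAQvartheta2}) constrains the coordinates $\delta^{(n)}_{\varthetab;i}$ with $i\leq k_1$ only. For instance, with $k_1=1$, $k_2=2$ and $D\dbar({\bolds\omega})=(1,0)\pr$,~(\ref{LAQvartheta2}) reduces to $\delta^{(n)}_{\varthetab;1}=\Deltab^{(n)}_{\bolds\omega}$, while $\delta^{(n)}_{\varthetab;2}$ remains unconstrained; the particular solution $\Deltab^{(n)}_\varthetab=(D^-\dbar({\bolds\omega}))\pr\Deltab^{(n)}_{\bolds\omega}=(\Deltab^{(n)}_{\bolds\omega},0)\pr$ sets that free coordinate to zero.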
With this notation,~(\ref{LAQvartheta2}) yields \begin{eqnarray*} \Deltab^{(n)}_{\bolds\omega} &=&(D\dbar({\bolds\omega}))\pr\Deltab^{(n)}_\varthetab\\ &=&\sum_{i=1}^{k_2} \delta^{(n)}_{\varthetab;i} (D\dbar({\bolds\omega}))\pr\mathbf{t}_i = \sum_{i=1}^{k_1} \delta^{(n)}_{\varthetab;i} (D\dbar({\bolds\omega}))\pr\mathbf{t}_i\\ &=& (D\dbar({\bolds\omega}))\pr D\dbar({\bolds\omega}) \underline\Deltab^{(n)}_{\varthetab}, \end{eqnarray*} where we let $\underline\Deltab^{(n)}_{\varthetab}:=(\delta^{(n)}_{\varthetab;1},\ldots,\delta^{(n)}_{\varthetab;k_1})\pr$. Since $D\dbar({\bolds\omega})$ has full column rank, this entails (i) $\Deltab^{(n)}_\varthetab=D\dbar({\bolds\omega}) \underline\Deltab^{(n)}_{\varthetab}$ and (ii) $\underline\Deltab^{(n)}_{\varthetab}=((D\dbar({\bolds\omega}))\pr D\dbar({\bolds\omega}))^{-1} \Deltab^{(n)}_{\bolds\omega}$, hence $\Deltab^{(n)}_\varthetab=(D^-\dbar({\bolds\omega}))\pr\Deltab^{(n)}_{\bolds\omega}$. As a linear transformation of $\Deltab^{(n)}_{\bolds\omega}$, $\Deltab^{(n)}_\varthetab$ clearly also satisfies the asymptotic normality part of ULAN, with the desired $\Gamb_\varthetab$. \end{pf} The following slight extension of Lemma~\ref{LElemme} plays a role in the proof of Proposition~\ref{LAN} below. Consider a parametrization ${\bolds\omega}=({\bolds\omega}_a\pr,{\bolds\omega}_b\pr)\pr \mapsto\mathrm{P}^{(n)}_{\bolds\omega}$, ${\bolds\omega}\in{\bolds\Omega}\times\mathcal{V}$, where $\Omegab$ is an open subset of $\R^{k_1}$ and $\mathcal{V}\subset\R^m$ is an $\ell$-dimensional manifold in $\R^m$, and assume that it is ULAN for $\mathcal{P}^{(n)}= \{\mathrm{P}^{(n)}_{\bolds\omega} \vert{\bolds\omega}\in{\bolds\Omega}\times\mathcal{V} \}$, with central sequence $\Deltab^{(n)}_{\bolds\omega}$ and information matrix $\Gamb_{\bolds\omega}$.
Let $\dbar_a$ be a continuously differentiable mapping from $\R^{k_1}$ to $\R^{k_2}$ ($k_2\geq k_1$) with full column rank Jacobian matrix $D\dbar_a ({\bolds\omega}_a)$ at every~${\bolds\omega}_a$, and assume that $\varthetab:= \dbar({\bolds \omega})\mapsto\mathrm{P}^{(n)}_\varthetab$, $\varthetab\in{\Thetab} \times\mathcal{V}$ [with $ {\Thetab} := \dbar_a(\Omegab)$], where \[ \dbar\dvtx \Omegab\times\mathcal{V} \to\Thetab\times\mathcal{V}\qquad {\bolds\omega}= ( {\bolds\omega}_a, {\bolds\omega}_b )\pr \quad\mapsto\quad\dbar({\bolds\omega})= ( \dbar_a({\bolds\omega}_a) , {\bolds\omega}_b )\pr \] provides another parametrization of $\mathcal{P}^{(n)}$. Then the proof of Lemma~\ref{LElemme} straightforwardly extends to show that $\varthetab\mapsto\mathrm{P}^{(n)}_\varthetab$, $\varthetab\in {\Thetab}\times\mathcal{V}$ is also ULAN, still with [at $\varthetab =\dbar({\bolds\omega})$] central sequence $\Deltab^{(n)}_\varthetab = (D^-\dbar({\bolds\omega}) )\pr\Deltab^{(n)}_{\bolds\omega}$ and information matrix $\Gamb_\varthetab= (D^-\dbar({\bolds\omega }))\pr\Gamb_{\bolds\omega}D^-\dbar({\bolds\omega})$. \begin{pf*}{Proof of Proposition~\ref{LAN}} Consider the differentiable mappings $\dbar_1 \dvtx\break{\bolds\omega }:=({\bolds\theta}\pr, (\vechop\Sigb)\pr)\pr\mapsto\dbar_1 (\omega)=({\bolds\theta}', (\dvec\Lamb_{\Sigb})\pr, (\vecop \betab)\pr)'$ and $\dbar_2 \dvtx\dbar_1 ({\bolds\omega})=({\bolds \theta}'$,\break $(\dvec\Lamb_{\Sigb})\pr, (\vecop\betab)\pr)' \mapsto \dbar_2 (\dbar_1 ({\bolds\omega}))=({\bolds\theta}', \sigma ^{2},(\dvecrond\Lamb_{\Vb})\pr, (\vecop\betab)\pr)' \in \Thetab$, the latter being invertible. 
Applying Lemma~\ref{LElemme} twice (the second time in its ``extended form,'' since the $\betab$-part of the parameter is invariant under $\dbar_2$) then yields \begin{eqnarray*} \Deltab_{\varthetab}^{(n)} &=& (D\dbar_2 (\dbar_1({\bolds\omega})))^{\prime-1} D \dbar_1 ({\bolds\omega})((D \dbar_1 ({\bolds\omega}))\pr D \dbar_1 ({\bolds\omega}))^{-1} \Deltab^{(n)}_{{\bolds\omega}} \\ &=& (D \dbar_2^{-1} (\dbar({\bolds\omega})))\pr D \dbar_1 ({\bolds \omega})((D \dbar_1 ({\bolds\omega}))\pr D \dbar_1 ({\bolds\omega }))^{-1} \Deltab^{(n)}_{{\bolds\omega}} . \end{eqnarray*} In view of the definition of $\mathbf{M}_{k}^{\Lamb_{\Vb}}$ (Section~\ref{curvLAN}), the Jacobian matrix, computed at~$\varthetab $, of the inverse mapping $\dbar_2 ^{-1}$ is \[ D \dbar_2 ^{-1}( \varthetab)=\pmatrix{ \mathbf{I}_{k} & \mathbf{0} & \mathbf{0} & \mathbf{0} \cr\mathbf{0} & \dvec(\Lamb_{\Vb}) & \sigma^{2} (\mathbf{M}_{k}^{\Lamb_{\Vb}}) \pr& \mathbf{0} \cr\mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I}_{k^{2}}}. 
\] An explicit expression for $D \dbar_1 ({\bolds\omega})$ was obtained by Kollo and Neudecker [(\citeyear{KN93}), page 288]: \begin{eqnarray} \label{ttttt} D \dbar_1 ({\bolds\omega}) = \pmatrix{ \mathbf{I}_{k} & \mathbf{0} \cr\mathbf{0} & \Xib_{\betab, \Lamb_{\Sigb}} \mathbf{P}_{k}\pr},\nonumber\\[-8pt]\\[-8pt] \eqntext{\mbox{with } \Xib_{\betab, \Lamb_{\Sigb}}:= \pmatrix{\displaystyle \mathbf{H}_{k}(\betab\pr)^{\otimes2} \cr \displaystyle\betab_{1}\pr\otimes[\betab(\lambda_{1;\Sigb}\mathbf{I}_{k} - \Lamb _{\Sigb})^{-}\betab\pr] \cr \vdots\cr \displaystyle\betab_{k}\pr\otimes[\betab(\lambda_{k;\Sigb}\mathbf{I}_{k} - \Lamb _{\Sigb})^{-}\betab\pr]}.} \end{eqnarray} The result then follows from a direct, though painful, computation, using the fact that \[ (\mathbf{P}_{k}\Xib_{\betab, \Lamb_{\Sigb}}\pr\Xib_{\betab, \Lamb _{\Sigb}}\mathbf{P}_{k}\pr)^{-1}= (\mathbf{P}_{k}\pr)^{-} (\betab \otimes\betab) \operatorname{diag}(l_{11 ; \Sigb}, l_{12; \Sigb}, \ldots , l_{kk; \Sigb}) (\betab\pr\otimes\betab\pr)\mathbf{P}_{k}^{-}, \] with $l_{ij ; \Sigb}= 1$ if $i=j$ and $l_{ij; \Sigb}= (\lambda_{i; \Sigb}- \lambda_{j; \Sigb})^{-2}$ if $i \neq j$; $(\mathbf{P}_{k}\pr )^{-}$ here stands for the Moore--Penrose inverse of $\mathbf{P}_{k}$ [note that $(\mathbf{P}_{k}\pr)^{-}$ is such that\break $\mathbf{P}_{k}\pr(\mathbf {P}_{k}\pr)^{-} \vecop(\mathbf{A})= \vecop(\mathbf{A})$ for any symmetric matrix $\mathbf{A}$]. \end{pf*} \begin{pf*}{Proof of Proposition~\ref{Optitest}} Proceeding as in the proof of Lemma~\ref{LElemme}, let $\mathbf{v}_i$ be the $i$th column of $D\hbar({\bolds\xi}_0)$, $i=1,\ldots,m$, and choose $\mathbf{v}_{m+1},\ldots,\mathbf{v}_p\in\mathbb{R}^p$ spanning the orthogonal complement of $\mathcal{M}(D\hbar({\bolds\xi}_0))$. 
Then there exists a unique $p$-tuple $(\delta_{\varthetab_0;1},\ldots, \delta_{\varthetab_0; p})\pr$ such that $\Deltab_{{\varthetab_0}}=\sum_{i=1}^p \delta_{\varthetab_0;i} \mathbf{v}_i$ (since $\{\mathbf{v}_i, i=1,\ldots,p\}$ spans $\mathbb{R}^p$) and \begin{eqnarray} \label{ma} \Deltab_{{\bolds\xi}_0} &=& D\hbar\pr({\bolds\xi}_0)\Deltab_{\varthetab_0} = \sum_{i=1}^p \delta_{\varthetab_0;i} D\hbar\pr({\bolds\xi}_0) \mathbf{v}_i = \sum_{i=1}^m \delta_{\varthetab_0;i} D\hbar\pr({\bolds\xi}_0) \mathbf{v}_i\nonumber\\[-8pt]\\[-8pt] &=& \mathbf{C}_{\hbar}({\bolds\xi}_0) \Deltab^m_{\varthetab_0},\nonumber \end{eqnarray} where $\mathbf{C}_{\hbar}({\bolds\xi}_0):=D\hbar\pr({\bolds\xi}_0) D\hbar({\bolds\xi}_0)$ and $\Deltab^m_{\varthetab_0}:=(\delta_{\varthetab_0;1},\ldots,\delta_{\varthetab_0;m})\pr$. Hence, we also have $\Gamb_{{\bolds\xi}_0}=\mathbf{C}_{\hbar}({\bolds\xi}_0) \Gamb^m_{\varthetab_0}\mathbf{C}_{\hbar}({\bolds\xi}_0)$, where $ \Gamb^m_{\varthetab_0}$ is the asymptotic covariance matrix of $ \Deltab^m_{\varthetab_0}$ under $\mathrm{P}^{(n)}_{\varthetab_0}$.
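Note that $\mathbf{C}_{\hbar}({\bolds\xi}_0)$ is indeed invertible: $D\hbar({\bolds\xi}_0)$ having full column rank, $\mathbf{x}\pr\mathbf{C}_{\hbar}({\bolds\xi}_0)\mathbf{x}= \Vert D\hbar({\bolds\xi}_0)\mathbf{x}\Vert^2>0$ for all $\mathbf{x}\neq\mathbf{0}$, so that $\mathbf{C}_{\hbar}({\bolds\xi}_0)$ is positive definite.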
Using the fact that $\mathbf{C}_{\hbar}({\bolds\xi}_0)$ is invertible, this yields \begin{eqnarray*} Q_{{\bolds\xi}_0}:\!&=& (\Deltab^m_{\varthetab_0})\pr\mathbf{C}_{\hbar}({\bolds\xi}_0) (\mathbf{C}_{\hbar}({\bolds\xi}_0)\Gamb^m_{\varthetab_0}\mathbf {C}_{\hbar}({\bolds\xi}_0))^{-1} \mathbf{C}_{\hbar}({\bolds\xi}_0) \Deltab^m_{\varthetab_0} \\ & &{} - (\Deltab^m_{\varthetab_0})\pr\mathbf{C}_{\hbar}({\bolds\xi}_0) D\lbar({\bolds\alpha}_0) ( D\lbar\pr({\bolds\alpha}_0) \mathbf{C}_{\hbar}({\bolds\xi}_0) \Gamb^m_{\varthetab_0} \mathbf{C}_{\hbar}({\bolds\xi}_0) D\lbar ({\bolds\alpha}_0))^{-1} \\ &&\hspace*{9.8pt}{}\times D\lbar\pr({\bolds\alpha}_0) \mathbf{C}_{\hbar}({\bolds\xi}_0) \Deltab^m_{\varthetab_0}\\ &=& (\Deltab^m_{\varthetab_0})\pr (\Gamb^m_{\varthetab_0})^{-1} \Deltab^m_{\varthetab_0}\\ &&{} - (\Deltab^m_{\varthetab_0})\pr(\Gamb^m_{\varthetab_0})^{-1/2} {\bolds\Pi}((\Gamb^m_{\varthetab_0})^{1/2} \mathbf{C}_{\hbar}({\bolds \xi}_0) D\lbar({\bolds\alpha}_0)) (\Gamb^m_{\varthetab_0})^{-1/2} \Deltab^m_{\varthetab_0}\\ &=&\!: Q_{{\bolds\xi}_0,1}-Q_{{\bolds\xi}_0,2}, \end{eqnarray*} where ${\bolds\Pi}(\mathbf{P}):=\mathbf{P}(\mathbf{P}\pr\mathbf {P})^{-1}\mathbf{P}\pr$ denotes the projection matrix on $\mathcal {M}(\mathbf{P})$.\vadjust{\goodbreak} Let $\bbar\dvtx A\subset\mathbb{R}^\ell\to\mathbb{R}^p$ be a local (at $\varthetab_0$) chart for the manifold $C\cap\Thetab$, and assume, without loss of generality, that $\etab_0=\bbar ^{-1}(\varthetab_0)$. Since $D\hbar({\bolds\xi}_0)$ has maximal rank, it follows from~(\ref{ma}) that $\Deltab_{\varthetab_0}= D\hbar({\bolds\xi }_0) \Deltab^m_{\varthetab_0}$. 
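[Throughout, recall that ${\bolds\Pi}(\mathbf{P})$ is symmetric and idempotent, with ${\bolds\Pi}(\mathbf{P})\mathbf{P}=\mathbf{P}$; indeed, ${\bolds\Pi}(\mathbf{P}){\bolds\Pi}(\mathbf{P})=\mathbf{P}(\mathbf{P}\pr\mathbf{P})^{-1}(\mathbf{P}\pr\mathbf{P})(\mathbf{P}\pr\mathbf{P})^{-1}\mathbf{P}\pr={\bolds\Pi}(\mathbf{P})$, so that ${\bolds\Pi}(\mathbf{P})$ is the matrix of the orthogonal projection onto $\mathcal{M}(\mathbf{P})$.]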
Hence, the statistic \begin{equation}\label{Qxibar}\quad \bar{Q}_{\varthetab_0} := \Deltab_{\varthetab_0}\pr \bigl( \Gamb_{\varthetab_0}^{-} - D\bbar(\etab_0) ( D\bbar\pr(\etab _0)\Gamb_{\varthetab_0}D\bbar(\etab_0))^{-} D\bbar\pr(\etab_0) \bigr) \Deltab_{\varthetab_0} \end{equation} [the squared Euclidean norm of the orthogonal projection, onto the linear space orthogonal to $ \Gamb_{\varthetab_0}^{1/2}D\bbar(\etab _0)$, of the standardized central sequence $(\Gamb_{\varthetab _0}^{1/2})^-\Deltab_{\varthetab_0}$] can be written as \begin{eqnarray*} \bar{Q}_{\varthetab_0} &=& (\Deltab^m_{\varthetab_0})\pr D\hbar\pr({\bolds\xi}_0) ( D\hbar({\bolds\xi}_0)\Gamb_{\varthetab_0}^m D\hbar\pr({\bolds \xi}_0))^{-} D\hbar({\bolds\xi}_0)\Deltab_{\varthetab_0}^m \\ & &{} - (\Deltab_{\varthetab_0}^m)\pr D\hbar\pr({\bolds\xi}_0) D\bbar(\etab_0) ( D\bbar\pr(\etab_0) D\hbar({\bolds\xi }_0)\Gamb_{\varthetab_0}^m D\hbar\pr({\bolds\xi}_0) D\bbar(\etab _0))^{-}\\ &&\hspace*{9.8pt}{}\times D\bbar\pr(\etab_0) D\hbar({\bolds\xi}_0)\Deltab_{\varthetab_0}^m\\ &=& (\Deltab^m_{\varthetab_0})\pr D\hbar\pr({\bolds\xi}_0) ( D\hbar({\bolds\xi}_0)\Gamb_{\varthetab_0}^m D\hbar\pr({\bolds \xi}_0))^{-} D\hbar({\bolds\xi}_0)\Deltab_{\varthetab_0}^m \\ & &{} - (\Deltab_{\varthetab_0}^m)\pr(\Gamb^m_{\varthetab_0})^{-1/2} {\bolds\Pi}( (\Gamb_{\varthetab_0}^m)^{1/2} D\hbar\pr({\bolds\xi }_0) D\bbar(\etab_0)) (\Gamb^m_{\varthetab_0})^{-1/2} \Deltab_{\varthetab_0}^m\\ &=&\!: \bar{Q}_{\varthetab_0,1}-\bar{Q}_{\varthetab_0,2}. \end{eqnarray*} Since $D\hbar({\bolds\xi}_0)$ has full rank, the standard properties of Moore--Penrose inverses entail $Q_{{\bolds\xi}_0,1}=\bar {Q}_{\varthetab_0,1}$. As for $Q_{{\bolds\xi}_0,2}$ and $\bar {Q}_{\varthetab_0,2}$, they are equal if \[ \mathcal{M}((\Gamb^m_{\varthetab_0})^{1/2} \mathbf{C}_{\hbar}({\bolds \xi}_0) D\lbar({\bolds\alpha}_0)) = \mathcal{M}((\Gamb_{\varthetab_0}^m)^{1/2} D\hbar\pr({\bolds\xi }_0) D\bbar(\etab_0)). 
\] Since $\Gamb_{\varthetab_0}^m$ and $\mathbf{C}_{\hbar}({\bolds\xi}_0)$ are invertible, the latter equality holds if $ \mathcal{M}( D\lbar({\bolds\alpha}_0)) = \mathcal{M}( (\mathbf{C}_{\hbar}({\bolds\xi}_0))^{-1} D\hbar\pr({\bolds\xi}_0) D\bbar(\etab_0)), $ or, since $D\hbar({\bolds\xi}_0)$ has full rank, if \begin{eqnarray*} &&\mathcal{M}( D\hbar({\bolds\xi}_0) D\lbar({\bolds\alpha}_0)) = \mathcal{M}( D\hbar({\bolds\xi}_0) (\mathbf{C}_{\hbar}({\bolds\xi}_0))^{-1} D\hbar\pr({\bolds\xi}_0) D\bbar(\etab_0)) \\ &&\hspace*{87.2pt}\bigl(= \mathcal{M}( {\bolds\Pi}(D\hbar({\bolds\xi}_0)) D\bbar(\etab_0)) \bigr), \end{eqnarray*} which trivially holds true. Hence, $Q_{{\bolds\xi}_0,2}=\bar{Q}_{\varthetab_0,2}$, so that $Q_{{\bolds\xi}_0}=\bar{Q}_{\varthetab_0}$. Finally, the linear spaces orthogonal to $\Gamb^{1/2}_{\varthetab_0}D\bbar(\etab_0 )$ and to $\Gamb^{1/2}_{\varthetab_0 }D\btilde(\etab_0 )$ do coincide, so that the statistic $Q_{\varthetab_0}$, which is obtained by substituting $\btilde$ for $\bbar$ in~(\ref{Qxibar}), is equal to $\bar{Q}_{\varthetab_0}$(\mbox{$=$}$Q_{{\bolds\xi}_0})$. This establishes the result. \end{pf*} We now turn to the proofs of Lemmas~\ref{infoinverse} and \ref{parametricasymplin}. \begin{pf*}{Proof of Lemma~\ref{infoinverse}} The proof consists in checking that postmultiplying $\mathbf{D}_{k}(\Lamb_{\Vb})$ by $\mathbf{N}_{k}\mathbf{H}_{k} \mathbf{P}_{k}^{\Lamb_\Vb} (\mathbf{I}_{k^{2}}+ \mathbf{K}_{k}) \Lamb_{\Vb}^{\otimes2} (\mathbf{P}_{k}^{\Lamb_\Vb} )\pr\mathbf{H}_{k}\pr\mathbf{N}_{k}\pr$ yields the $(k-1)$-dimensional identity matrix ($\mathbf{P}_{k}^{\Lamb_\Vb}$ and $\mathbf{N}_{k}$ are defined in the statement of the lemma).
That is, we show that \begin{eqnarray} \label{idone} && \tfrac{1}{4} {\Mb}_{k}^{\Lamb_{\Vb}} \mathbf{H}_{k} (\mathbf{I}_{k^2} + \mathbf{K}_k) (\Lamb_{\Vb}^{-1})^{\otimes2} \mathbf{H}_{k}\pr ({\Mb}_{k}^{\Lamb_{\Vb}})\pr \mathbf{N}_{k}\mathbf{H}_{k} \nonumber\\ & &\quad{} \times\mathbf{P}_{k}^{\Lamb_\Vb} (\mathbf{I}_{k^{2}}+ \mathbf{K}_{k}) \Lamb_{\Vb}^{\otimes2} (\mathbf{P}_{k}^{\Lamb_\Vb} )\pr\mathbf{H}_{k}\pr \mathbf{N}_{k}\pr\\ &&\qquad= \mathbf{I}_{k-1}.\nonumber \end{eqnarray} First of all, note that the definition of ${\Mb}_{k}^{\Lamb_{\Vb}}$ (see Section~\ref{curvLAN}) entails that, for any $k \times k$ real matrix $\mathbf{l}$ such that $\operatorname{tr}({\Lamb}_{\Vb}^{-1}\mathbf{l})=0$, $ (\mathbf{M}_{k}^{\Lamb_{\Vb}} )\pr\mathbf{N}_{k} \mathbf{H}_{k} (\vecop\mathbf{l}) = (\mathbf{M}_{k}^{\Lamb_{\Vb}} )\pr(\dvecrond\mathbf{l}) = \dvec(\mathbf{l}) = \mathbf{H}_{k} (\vecop\mathbf{l}). $ Hence, since (letting $\mathbf{E}_{ij}:=\mathbf{e}_{i}\mathbf{e}_{j}\pr + \mathbf{e}_{j}\mathbf{e}_{i}\pr$) \begin{eqnarray*} \mathbf{P}_{k}^{\Lamb_\Vb} ( \mathbf{I}_{k^{2}}+ \mathbf{K}_{k}) &=& \mathbf{I}_{k^{2}}+ \mathbf{K}_{k}- \frac{2}{k} \Lamb_{\Vb}^{\otimes2} \vecop(\Lamb_{\Vb}^{-1})(\vecop(\Lamb_{\Vb}^{-1}))\pr \\[-2pt] &=& \sum_{i,j=1}^k \vecop\biggl( \frac{1}{2} \mathbf{E}_{ij}- \frac{1}{k}{(\Lamb_{\Vb}^{-1})_{ij}} \Lamb_{\Vb} \biggr) (\vecop\mathbf{E}_{ij})\pr \\[-2pt] &=&\!: \sum_{i,j=1}^k ( \vecop\mathbf{F}_{ij}^{\Lamb_{\Vb}} ) (\vecop\mathbf{E}_{ij})\pr, \end{eqnarray*} with $ \operatorname{tr}( \Lamb_{\Vb}^{-1} \mathbf{F}_{ij}^{\Lamb_{\Vb}})=0$, for all $i,j=1,\ldots,k$, we obtain that $ ({\Mb}_{k}^{\Lamb_{\Vb}})\pr \mathbf{N}_{k}\mathbf{H}_{k} \times\mathbf{P}_{k}^{\Lamb_\Vb} (\mathbf{I}_{k^{2}}+ \mathbf{K}_{k}) = \mathbf{H}_{k} \mathbf{P}_{k}^{\Lamb_\Vb} (\mathbf{I}_{k^{2}}+ \mathbf{K}_{k}) $.
Now, using the fact that\break $\mathbf{H}_{k}\pr\mathbf{H}_{k} (\Lamb_{\Vb}^{-1})^{\otimes2}(\mathbf {I}_{k^2} + \mathbf{K}_k) \mathbf{H}_{k}\pr =(\Lamb_{\Vb}^{-1})^{\otimes2}(\mathbf{I}_{k^2} + \mathbf{K}_k) \mathbf {H}_{k}\pr$, the left-hand side\break of~(\ref{idone}) reduces to \begin{equation}\label{idtwo} \tfrac{1}{4} {\Mb}_{k}^{\Lamb_{\Vb}} \mathbf{H}_{k} (\mathbf{I}_{k^2} + \mathbf{K}_k) (\Lamb_{\Vb}^{-1})^{\otimes2} \mathbf{P}_{k}^{\Lamb_\Vb} (\mathbf{I}_{k^{2}}+ \mathbf{K}_{k}) \Lamb_{\Vb}^{\otimes2} (\mathbf{P}_{k}^{\Lamb_\Vb} )\pr \mathbf{H}_{k}\pr\mathbf{N}_{k}\pr. \end{equation} After straightforward computation, using essentially the well-known property of the Kronecker product $\vecop(\mathbf{A}\mathbf{B}\mathbf {C})=(\mathbf{C}\pr\otimes\mathbf{A}) \vecop(\mathbf{B})$ and the fact that $\mathbf{M}_{k}^{\Lamb_{\Vb}}\times\mathbf{H}_{k} (\vecop{\Lamb}_{\Vb }^{-1})=\mathbf{0}$ and $\mathbf{H}_{k} \mathbf{K}_{k}= \mathbf{H}_{k}$, (\ref{idtwo}) reduces to\break ${\Mb}_{k}^{\Lamb_{\Vb}} \mathbf{H}_{k} \mathbf{H}_{k}\pr\mathbf{N}_{k}\pr$. The result follows, since $\mathbf{H}_{k}\mathbf{H}_{k}\pr= \mathbf {I}_{k}$ and ${\Mb}_{k}^{\Lamb_{\Vb}}\mathbf{N}_{k}\pr=\mathbf{I}_{k-1}$. \end{pf*} \begin{pf*}{Proof of Lemma~\ref{parametricasymplin}} All stochastic convergences in this proof are as $\ny$ under $\mathrm {P}^{(n)}_{\varthetab; g_{1}}$, for some fixed $\varthetab\in\Thetab $ and $g_{1} \in{\mathcal F}_{1}^{4}$. 
It follows from \begin{equation} \label{lll1} \Mb_{k}^{\Lamb_{\Vb}}\mathbf{H}_{k}(\betab\pr)^{\otimes2}(\Vb^{-1})^{\otimes2}\operatorname{vec} \Vb= \Mb_{k}^{\Lamb_{\Vb}}\mathbf{H}_{k} (\operatorname{vec} \Lamb_{\Vb}^{-1})=\mathbf{0} \end{equation} and \begin{equation} \label{lll2} \mathbf{L}_{k}^{\betab,\Lamb_{\Vb}} (\Vb^{-1} )^{\otimes2} \operatorname{vec} {\Vb}= \mathbf{L}_{k}^{\betab,\Lamb_{\Vb}} (\vecop{\Vb}^{-1})=\mathbf{0}, \end{equation} that \begin{eqnarray*} \Deltab^{\III} _{\varthetab;\phi_1 } &=& \frac{a_{k}}{2\sqrt{n}} \Mb_{k}^{\Lamb_{\Vb}}\mathbf{H}_{k}(\Lamb_{\Vb}^{-1/2} \betab\pr)^{\otimes2}\\[-2pt] &&{}\times\sum_{i=1}^{n} \frac{d_{i}^2({\bolds\theta}, \Vb)}{\sigma^2} \operatorname{vec}(\Ub_{i}({\bolds\theta}, \Vb)\Ub_{i}\pr({\bolds\theta}, \Vb))\\[-2pt] &=& \frac{a_{k}}{2\sqrt{n}\sigma^2} \Mb_{k}^{\Lamb_{\Vb}}\mathbf{H}_{k}(\betab\pr)^{\otimes2}({\Vb}^{-1})^{\otimes2} \\[-2pt] & &{} \times\sum_{i=1}^{n} \operatorname{vec}\bigl((\Xb_{i}-{\bolds\theta})(\Xb_{i}-{\bolds\theta})\pr- \bigl(D_{k}(g_{1})/k\bigr) \Sigb\bigr) \end{eqnarray*} and \begin{eqnarray*} \Deltab^{\IV} _{\varthetab;\phi_1 } &=& \frac{a_{k}}{2\sqrt{n}} \mathbf{G}_{k}^{\betab} \mathbf{L}_{k}^{\betab,\Lamb_{\Vb}} (\Vb^{-1/2})^{\otimes2}\sum_{i=1}^{n} \frac{d_{i}^2({\bolds\theta}, \Vb)}{\sigma^2} \operatorname{vec}(\Ub_{i}({\bolds\theta}, \Vb)\Ub_{i}\pr({\bolds\theta},\Vb))\\ &=& \frac{a_{k}}{2\sqrt{n}\sigma^2} \mathbf{G}_{k}^{\betab} \mathbf{L}_{k}^{\betab,\Lamb_{\Vb}} (\Vb^{-1})^{\otimes2} \\ & &{} \times\sum_{i=1}^{n} \operatorname{vec}\bigl((\Xb_{i}-{\bolds\theta})(\Xb_{i}-{\bolds\theta})\pr- \bigl(D_{k}(g_{1})/k\bigr) \Sigb\bigr).
\end{eqnarray*} Hence, using a root-$n$ consistent estimator $\hat{\varthetab}:= (\hat{{\bolds\theta}}\pr, \hat{\sigma }^{2}, (\dvecrond\hat{\Lamb}_{\Vb})\pr, (\vecop\hat{\betab })\pr)\pr$ and letting $\hat{\Sigb}:= \hat{\sigma}^{2} \hat {\betab} \hat{\Lamb}_{\Vb}\hat{\betab}\pr$, Slutsky's lemma yields \begin{eqnarray*} \Deltab^{\III} _{\hat\varthetab;\phi_1 } &=& \frac{a_{k}}{2\sqrt{n}\hat\sigma^2} \Mb_{k}^{\hat{\Lamb }_{\Vb}}\mathbf{H}_{k}(\hat{\betab}\pr)^{\otimes2}({\hats{\Vb }}{}^{-1})^{\otimes2}\\ & &{} \times\sum_{i=1}^{n} \operatorname{vec}\bigl((\Xb_{i}-\hat{{\bolds\theta}} )(\Xb_{i}-\hat{{\bolds\theta}})\pr- \bigl({D}_{k}(g_{1})/k\bigr)\hat\Sigb \bigr)\nonumber\\ &=& \frac{a_{k}}{2\sqrt{n}\hat\sigma^2} \Mb_{k}^{\hat{\Lamb }_{\Vb}}\mathbf{H}_{k}(\hat{\betab}\pr)^{\otimes2}({\hats{\Vb }}{}^{-1})^{\otimes2} \\ & &{} \times\Biggl\{\sum_{i=1}^{n} \operatorname{vec}\bigl((\Xb_{i}-{\bolds\theta })(\Xb_{i}-{\bolds\theta})\pr-\bigl({D}_{k}(g_{1})/k\bigr) \Sigb \bigr) \\ &&\hspace*{17.8pt}{} - n \operatorname{vec}\bigl((\bar{\Xb} - {\bolds\theta})(\hat{{\bolds \theta}} - {\bolds\theta})\pr\bigr) - n \operatorname{vec}\bigl((\hat{{\bolds \theta}} - {\bolds\theta})(\bar{\Xb} - {\bolds\theta})\pr\bigr) \\ &&\hspace*{17.8pt}{} + n \operatorname{vec}\bigl((\hat{{\bolds\theta}} - {\bolds\theta})(\hat {{\bolds\theta}} - {\bolds\theta})\pr\bigr) - n \bigl({D}_{k}(g_{1})/k\bigr) \operatorname{vec}(\hat\Sigb- \Sigb) \Biggr\} \\ &=& \Deltab^{\III} _{\varthetab;\phi_1 } - \frac{a_{k} D_{k}(g_{1})}{2k \sigma^2} \Mb_{k}^{{\Lamb}_{\Vb}}\mathbf{H}_{k}({\betab }\pr)^{\otimes2}({{\Vb}}^{-1})^{\otimes2} n^{1/2} \operatorname{vec}(\hat\Sigb- \Sigb)\\ &&{} +o_\mathrm{P}(1), \end{eqnarray*} and, similarly, \begin{eqnarray*} \Deltab^{\IV} _{\hat\varthetab;\phi_1 } &=& \frac{a_{k}}{2\sqrt{n}\hat\sigma^2} \mathbf{G}_{k}^{\hat{\betab }} \mathbf{L}_{k}^{\hat{\betab},\hat{\Lamb}_{\Vb}}({\hats{\Vb }}{}^{-1})^{\otimes2} \\ & &{} \times\sum_{i=1}^{n} 
\operatorname{vec}\bigl((\Xb_{i}-\hat{{\bolds\theta}} )(\Xb_{i}-\hat{{\bolds\theta}})\pr-\bigl({D}_{k}(g_{1})/k\bigr)\hat\Sigb \bigr)\\ &=& \Deltab^{\IV} _{\varthetab;\phi_1 } - \frac {a_{k}D_{k}(g_{1})}{2k\sigma^2} \mathbf{G}_{k}^{{\betab}} \mathbf {L}_{k}^{{\betab},{\Lamb_{\Vb}}}({{\Vb}}^{-1})^{\otimes2} n^{1/2} \operatorname{vec}(\hat\Sigb- \Sigb)\\ &&{} +o_\mathrm{P}(1). \end{eqnarray*} Writing $\hat\Sigb- \Sigb=(\hat\sigma^2-\sigma^2)\hats{\Vb} +\sigma^2 (\hats{\Vb}-\Vb)$, applying Slutsky's lemma again, and using~(\ref{lll1}),~(\ref{lll2}) and the fact that $\mathbf{K}_{k}\vecop (\mathbf{A})= \vecop(\mathbf{A}\pr)$, we obtain \begin{eqnarray}\label{astrois}\quad \Deltab^{\III} _{\hat\varthetab;\phi_1 } &=& \Deltab^{\III} _{\varthetab;\phi_1 } - \frac{a_{k} {D}_{k}(g_{1})}{2k} \Mb_{k}^{{\Lamb}_{\Vb}}\mathbf{H}_{k}({\betab}\pr )^{\otimes2}({{\Vb}}^{-1})^{\otimes2} n^{1/2} \operatorname{vec}(\hats{\Vb}- \Vb)\nonumber\\ &&{} +o_\mathrm{P}(1) \nonumber\\[-8pt]\\[-8pt] &=& \Deltab^{\III} _{\varthetab;\phi_1 } - \frac {a_{k}{D}_{k}(g_{1})}{4k} \Mb_{k}^{{\Lamb}_{\Vb}}\mathbf{H}_{k}({\betab }\pr)^{\otimes2}({{\Vb}}^{-1})^{\otimes2} [\mathbf{I}_{k^2}+\mathbf{K}_k]\nonumber\\ & &\hspace*{37.1pt}{} \times n^{1/2} \operatorname{vec}(\hat \Vb- \Vb)+o_\mathrm{P}(1)\nonumber \end{eqnarray} and \begin{eqnarray}\label{asquatre} \Deltab^{\IV} _{\hat\varthetab;\phi_1 } &=& \Deltab^{\IV} _{\varthetab;\phi_1 } - \frac{a_{k} {D}_{k}(g_{1})}{2k} \mathbf{G}_{k}^{{\betab}} \mathbf{L}_{k}^{{\betab}, {\Lamb_{\Vb}}}({{\Vb}}^{-1})^{\otimes2} n^{1/2} \operatorname{vec}(\hats{\Vb}- \Vb) +o_\mathrm{P}(1) \nonumber\hspace*{-35pt}\\ &=& \Deltab^{\IV} _{\varthetab;\phi_1 } - \frac {a_{k}{D}_{k}(g_{1})}{4k} \mathbf{G}_{k}^{{\betab}} \mathbf {L}_{k}^{{\betab },{\Lamb_{\Vb}}}({{\Vb}}^{-1})^{\otimes2} [\mathbf{I}_{k^2}+\mathbf{K}_k] n^{1/2} \operatorname{vec}(\hat \Vb- \Vb)\hspace*{-35pt}\\ &&{}+o_\mathrm{P}(1).\nonumber\hspace*{-35pt} \end{eqnarray} Now, \citet{KN93} showed that \[ n^{1/2} 
\pmatrix{\dvec(\hat{\Lamb}_{\Vb}- \Lamb_{\Vb}) \cr \vecop(\hat{\betab}- \betab)} = n^{1/2} \Xib_{\betab, \Lamb_{\Vb}} \vecop( \hats{\Vb}- \Vb) + o_\mathrm{P}(1), \] where $\Xib_{\betab, \Lamb_{\Vb}}$ was defined in~(\ref{ttttt}). Computations similar to those in the proof of Proposition~\ref{LAN} then yield \begin{eqnarray}\label{deltmeth} && n^{1/2} \vecop(\hats{\Vb}- \Vb) \nonumber\\ &&\qquad= n^{1/2} (\Xib_{\betab, \Lamb_{\Vb}}\pr\Xib_{\betab, \Lamb_{\Vb}})^{-1} \Xib_{\betab, \Lamb_{\Vb}}\pr \pmatrix{\dvec(\hat{\Lamb}_{\Vb}- \Lamb_{\Vb}) \cr \vecop(\hat{\betab}- \betab) }+ o_\mathrm{P}(1) \nonumber\\[-8pt]\\[-8pt] &&\qquad= (\mathbf{L}_{k}^{{\betab},{\Lamb_{\Vb}}})\pr(\mathbf{G}_{k}^{\betab})\pr n^{1/2} \vecop(\hat{\betab}- \betab) \nonumber\\ & &\qquad\quad{} + \betab^{\otimes2} \mathbf{H}_{k} \pr n^{1/2} \dvec( \hat{\Lamb}_{\Vb}- \Lamb_{\Vb}) + o_\mathrm{P}(1).\nonumber \end{eqnarray} The result for $\Deltab^{\III} _{\hat\varthetab;\phi_1}$ then follows by plugging~(\ref{deltmeth}) into~(\ref{astrois}) and using the facts that $\mathbf{H}_{k}({\betab}\pr)^{\otimes2}({{\Vb}}{}^{-1})^{\otimes2} (\mathbf{L}_{k}^{{\betab},{\Lamb_{\Vb}}})\pr=\mathbf{0}$ and $n^{1/2} \dvec( \hat{\Lamb}_{\Vb}- \Lamb_{\Vb})=n^{1/2} (\Mb_{k}^{{\Lamb}_{\Vb}} )\pr\dvecrond( \hat{\Lamb}_{\Vb}- \Lamb_{\Vb})+o_\mathrm{P}(1)$ as $\ny$ (the latter is a direct consequence of the definition of $\Mb_{k}^{{\Lamb}_{\Vb}}$ and the delta method). As for the result for $\Deltab^{\IV} _{\hat\varthetab;\phi_1}$, it follows similarly by plugging (\ref{deltmeth}) into~(\ref{asquatre}) and noting that $\mathbf{G}_{k}^{{\betab}} \mathbf{L}_{k}^{{\betab},{\Lamb_{\Vb}}}({{\Vb}}^{-1})^{\otimes2} [\mathbf{I}_{k^2}+\mathbf{K}_k]\betab^{\otimes2} \mathbf{H}_{k} \pr =\mathbf{0}$.
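For completeness, the identity $\mathbf{K}_{k}\vecop(\mathbf{A})=\vecop(\mathbf{A}\pr)$, which was used repeatedly above, can be checked explicitly in the simplest case $k=2$: with the (column-major) $\vecop$ operator, $\vecop(\mathbf{A})=(a_{11},a_{21},a_{12},a_{22})\pr$ for $\mathbf{A}=(a_{ij})$, and \[ \mathbf{K}_{2}=\pmatrix{1 & 0 & 0 & 0 \cr 0 & 0 & 1 & 0 \cr 0 & 1 & 0 & 0 \cr 0 & 0 & 0 & 1}, \] so that $\mathbf{K}_{2}\vecop(\mathbf{A})=(a_{11},a_{12},a_{21},a_{22})\pr=\vecop(\mathbf{A}\pr)$, as required.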
\end{pf*} \begin{pf*}{Proof of Lemma~\ref{alihoprimeprime}} Throughout fix $\varthetab=({\bolds\theta}\pr,\sigma^2,(\dvecrond \Lamb)\pr,(\vecop\betab)\pr)\pr\in\mathcal{H}_{0;q}^{\Lamb \prime\prime}$ and $g_1\in\mathcal{F}_a$, and define $\tilde{\Vb }:= \hat{\betab}_{\mathrm{Tyler}} \tilde{\Lamb}_{\Vb} \hat {\betab}_{\mathrm{Tyler}} ^{ \prime}$. Since $\mathbf{K}_{k} \vecop (\mathbf{A})= \vecop(\mathbf{A}\pr)$ and $\mathbf{c}_{p,q} \pr \mathbf{H}_{k} \hat{\betab}_{\mathrm{Tyler}}^{\prime\otimes2} ({\tilde{\Vb}}{}^{1/2})^{\otimes2} (\vecop\mathbf{I}_{k}) =\mathbf{0}$, we obtain, from~(\ref{HPres}), \begin{eqnarray} \label{HPasymplin} {\utT}{}_{K}^{(n)} &=& \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} (\tilde{\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\mathbf{H}_{k} \hat{\betab}_{\mathrm {Tyler}}^{\prime\otimes2} ({\tilde{\Vb}}{}^{1/2})^{\otimes2} \mathbf{J}_{k}^{\perp} \vecop\bigl( {\utSb}{}_{\hat\varthetab;K}^{(n)}\bigr) \nonumber\hspace*{-30pt}\\ &=& \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} (\tilde{\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\mathbf{H}_{k} \hat{\betab}_{\mathrm {Tyler}}^{\prime\otimes2} ({\tilde{\Vb}}{}^{1/2})^{\otimes2} \mathbf{J}_{k}^{\perp} \vecop\bigl( {\utSb}{}_{\varthetab; K}^{(n)}\bigr) \nonumber\hspace*{-30pt}\\[-8pt]\\[-8pt] & &{} - \biggl(\frac{{\mathcal J}_{k}^{2}(K,g_{1})}{4 k(k+2)\mathcal {J}_k(K)} \biggr)^{1/2} ( a_{p,q} (\tilde{\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\mathbf{H}_{k} \hat{\betab}_{\mathrm {Tyler}}^{\prime\otimes2} ({\tilde{\Vb}}{}^{1/2})^{\otimes2} \nonumber\hspace*{-30pt}\\ & &\hspace*{11pt}{} \times({\Vb}^{-1/2})^{\otimes2} n^{1/2}\vecop(\tilde{\mathbf{V}}- \Vb) + o_\mathrm{P}(1)\nonumber\hspace*{-30pt} \end{eqnarray} as $\ny$, under $\mathrm{P}^{(n)}_{\varthetab; g_{1}}$. We now show that the second term in~(\ref{HPasymplin}) is $o_\mathrm {P}(1)$ as $\ny$, under $\mathrm{P}^{(n)}_{\varthetab; g_{1}}$. 
Since $n^{1/2}\vecop(\tilde{\mathbf{V}}- \Vb)$ is $O_\mathrm{P}(1)$, Slutsky's lemma yields \begin{eqnarray*} && ( a_{p,q} (\tilde{\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\mathbf{H}_{k} \hat{\betab}_{\mathrm{Tyler}}^{\prime\otimes2} ({\tilde{\Vb}}{}^{1/2})^{\otimes2} ({\Vb}^{-1/2})^{\otimes2} n^{1/2}\vecop(\tilde{\mathbf{V}}- \Vb)\\ &&\qquad = ( a_{p,q} ({\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\mathbf{H}_{k} \hat{\betab}_{\mathrm{Tyler}}^{\prime\otimes2}\hspace*{1pt} n^{1/2}\vecop(\tilde{\mathbf{V}}- \Vb) + o_\mathrm{P}(1). \end{eqnarray*} By construction of the estimator $\tilde{\Lamb}_{\Vb}$, $\mathbf{c}_{p,q} \pr\mathbf{H}_{k} \hat{\betab}_{\mathrm{Tyler}}^{\prime\otimes2} (\vecop\tilde{\mathbf{V}})=0$, so that we have to show that $n^{1/2}\mathbf{c}_{p,q} \pr\mathbf{H}_{k} \vecop(\hat{\betab}_{\mathrm{Tyler}}\pr\Vb\hat{\betab}_{\mathrm{Tyler}})$ is $o_\mathrm{P}(1)$. We only do so for $\varthetab$ values such that $ \lambda_{1; \Vb}= \cdots= \lambda_{q; \Vb}=: \lambda_{1}^{*} > \lambda_{2}^{*}:= \lambda_{q+1; \Vb}= \cdots= \lambda_{k; \Vb}, $ which is the most difficult case (extension to the general case is straightforward, although notationally more tricky). Note that $\varthetab\in\mathcal{H}_{0;q}^{\Lamb\prime\prime}$ then implies that \begin{equation}\label{ffff} -pq\lambda_{1}^{*}+(1-p)(k-q)\lambda_{2}^{*}=0. \end{equation} Partition $\mathbf{E}:= \betab\pr\hat{\betab}_{\mathrm{Tyler}}$ into \begin{equation} \label{fsdff} \mathbf{E}= \pmatrix{\mathbf{E}_{11} & \mathbf{E}_{12} \cr \mathbf{E}_{21} & \mathbf{E}_{22}}, \end{equation} where\vspace*{1pt} $\mathbf{E}_{11}$ is $q \times q$ and $\mathbf{E}_{22}$ is $(k-q) \times(k-q)$.
As shown in Anderson [(\citeyear{A63}), page 129], $n^{1/2}(\mathbf{E}_{11}\mathbf{E}_{11}\pr- \mathbf{I}_{q})= o_\mathrm{P}(1)=n^{1/2}(\mathbf{E}_{22}\mathbf{E}_{22}\pr- \mathbf{I}_{k-q})$ and $n^{1/2}\mathbf{E}_{12}=O_\mathrm{P}(1)=n^{1/2}\mathbf{E}_{21}'$ as $\ny$, under $\mathrm{P}^{(n)}_{\varthetab; g_{1}}$ [actually, \citet{A63} proves this only for $\mathbf{E}=\betab\pr\betab_{\Sb}$ and under Gaussian densities, but his proof readily extends to the present situation]. Hence, still as $\ny$, under $\mathrm{P}^{(n)}_{\varthetab; g_{1}}$, \begin{eqnarray} && n^{1/2} \mathbf{c}_{p,q} \pr\mathbf{H}_{k} \vecop(\hat{\betab}_{\mathrm{Tyler}}\pr\Vb\hat{\betab}_{\mathrm{Tyler}}) \nonumber\\ &&\qquad = -p \{ n^{1/2} \lambda_{1}^{*} \operatorname{tr}(\mathbf{E}_{11}\pr\mathbf{E}_{11}) + n^{1/2} \lambda_{2}^{*} \operatorname{tr}(\mathbf{E}_{21}\pr\mathbf{E}_{21}) \} \nonumber\\ &&\qquad\quad{} + (1-p) \{ n^{1/2} \lambda_{1}^{*} \operatorname{tr}(\mathbf{E}_{12}\pr\mathbf{E}_{12}) + n^{1/2} \lambda_{2}^{*} \operatorname{tr}(\mathbf{E}_{22}\pr\mathbf{E}_{22}) \} \\ &&\qquad = -p \{ n^{1/2} \lambda_{1}^{*} \operatorname{tr}(\mathbf{I}_{q}) \} + (1-p) \{ n^{1/2} \lambda_{2}^{*} \operatorname{tr}(\mathbf{I}_{k-q}) \} + o_\mathrm{P}(1)\nonumber\\ &&\qquad = o_\mathrm{P}(1); \nonumber \end{eqnarray} see~(\ref{ffff}).
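In the last display, we used $\operatorname{tr}(\mathbf{E}_{11}\pr\mathbf{E}_{11})=\operatorname{tr}(\mathbf{E}_{11}\mathbf{E}_{11}\pr)=\operatorname{tr}(\mathbf{I}_{q})+o_\mathrm{P}(n^{-1/2})$ and $\operatorname{tr}(\mathbf{E}_{22}\pr\mathbf{E}_{22})=\operatorname{tr}(\mathbf{I}_{k-q})+o_\mathrm{P}(n^{-1/2})$, which follow from $n^{1/2}(\mathbf{E}_{jj}\mathbf{E}_{jj}\pr-\mathbf{I})=o_\mathrm{P}(1)$, $j=1,2$, as well as $n^{1/2}\operatorname{tr}(\mathbf{E}_{21}\pr\mathbf{E}_{21})=o_\mathrm{P}(1)=n^{1/2}\operatorname{tr}(\mathbf{E}_{12}\pr\mathbf{E}_{12})$, which follows from $n^{1/2}\mathbf{E}_{12}=O_\mathrm{P}(1)=n^{1/2}\mathbf{E}_{21}\pr$.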
We conclude that the second term in~(\ref{HPasymplin}) is $o_\mathrm{P}(1)$, so that \begin{eqnarray*} {\utT}{}_{K}^{(n)} & = & \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} (\tilde{\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\\ &&{} \times\mathbf{H}_{k} \hat{\betab}_{\mathrm {Tyler}}^{\prime\otimes2} ({\tilde{\Vb}}{}^{1/2})^{\otimes2} \mathbf{J}_{k}^{\perp} \vecop\bigl( {\utSb}{}_{\varthetab; K}^{(n)}\bigr) \\ &&{} + o_\mathrm{P}(1) \\ &=& \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} (\tilde{\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\\ &&{} \times\mathbf{H}_{k} \mathbf{E}^{\prime\otimes2} (\betab \pr)^{\otimes2} ({\tilde{\Vb}}{}^{1/2})^{\otimes2} \mathbf{J}_{k}^{\perp} \vecop\bigl( {\utSb}{}_{\varthetab; K}^{(n)}\bigr)\\ &&{} + o_\mathrm{P}(1). \end{eqnarray*} Since $n^{1/2} \mathbf{J}_{k}^{\perp} \vecop( {\utSb}{}_{\varthetab; K}^{(n)})$ is $O_\mathrm{P}(1)$ under $\mathrm{P}_{\varthetab; g_{1}}^{(n)}$, Slutsky's lemma entails \begin{eqnarray}\qquad\hspace*{5pt} {\utT}{}_{K}^{(n)}& = & \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} ({\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\nonumber\\ &&{} \times\mathbf{H}_{k} (\operatorname{diag}(\mathbf {E}_{11}\pr, \mathbf{E}_{22}\pr))^{\otimes2} (\betab\pr)^{\otimes2} \nonumber\\ & &{} \times({{\Vb}}{}^{1/2})^{\otimes2} \mathbf{J}_{k}^{\perp} \vecop\bigl( {\utSb}{}_{\varthetab; K}^{(n)}\bigr)+ o_\mathrm{P}(1)\nonumber\\[-8pt]\\[-8pt] & = & \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} ({\Lamb}_\Vb))^{-1/2} \mathbf{c}_{p,q} \pr\nonumber\\ &&{} \times\mathbf{H}_{k} (\operatorname{diag}(\mathbf {E}_{11}\pr, \mathbf{E}_{22}\pr))^{\otimes2} (\betab\pr)^{\otimes2} \nonumber\\ & &{} \times({{\Vb}}{}^{1/2})^{\otimes2} \vecop\bigl( {\utSb }{}_{\varthetab; K}^{(n)}\bigr)+ o_\mathrm{P}(1),\nonumber \end{eqnarray} where we used the facts that ${\utSb}{}_{\varthetab; K}^{(n)}$ is $O_\mathrm{P}(1)$ and that \begin{eqnarray*} && n^{1/2} \mathbf{c}_{p,q} \pr\mathbf{H}_{k} 
(\operatorname{diag}(\mathbf {E}_{11}\pr , \mathbf{E}_{22}\pr))^{\otimes2} (\betab\pr)^{\otimes2}({{\Vb }}^{1/2})^{\otimes2} (\vecop\mathbf{I}_{k}) \\ &&\qquad = n^{1/2} \{ -p \lambda_{1}^{*} \operatorname{tr}(\mathbf {E}_{11}\mathbf{E}_{11}^\prime) + (1-p) \lambda_{2}^{*} \operatorname {tr}(\mathbf{E}_{22}\mathbf{E}_{22}^\prime) \} \\ &&\qquad = n^{1/2} \{ - p \lambda_{1}^{*} \operatorname{tr}(\mathbf{I}_{q}) + (1-p) \lambda_{2}^{*} \operatorname{tr}(\mathbf{I}_{k-q}) \} +o_\mathrm{P}(1)\\ &&\qquad= o_\mathrm{P}(1). \end{eqnarray*} Then, putting [with the same partitioning as in~(\ref{fsdff})] \[ \betab\pr{\Vb}^{1/2} {\utSb}{}_{\varthetab; K}^{(n)}{\Vb }^{1/2}\betab=:\mathbf{D}_{\varthetab; K}^{(n)}=: \pmatrix{\bigl(\mathbf{D}_{\varthetab; K}^{(n)}\bigr)_{11} & \bigl(\mathbf {D}_{\varthetab; K}^{(n)}\bigr)_{12} \vspace*{2pt}\cr\bigl(\mathbf{D}_{\varthetab; K}^{(n)}\bigr)_{21} & \bigl(\mathbf{D}_{\varthetab; K}^{(n)}\bigr)_{22}}, \] the asymptotic properties of ${\utSb}{}_{\varthetab; K}^{(n)}$ and $\mathbf{E}_{jj}$, $j=1,2$ imply that \begin{eqnarray*} {\utT}{}_{K}^{(n)} & =& \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} ({\Lamb}_{\Vb}))^{-1/2} \bigl\{ -p \operatorname{tr}\bigl(\mathbf {E}_{11}\pr\bigl(\mathbf{D}_{\varthetab; K}^{(n)}\bigr)_{11} \mathbf{E}_{11}\bigr) \\ & &\hspace*{145.6pt}{} + (1-p) \operatorname{tr}\bigl(\mathbf{E}_{22}\pr\bigl(\mathbf{D}_{\varthetab; K}^{(n)}\bigr)_{22} \mathbf{E}_{22}\bigr) \bigr\} +o_\mathrm{P}(1) \\[-4pt] & =& \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} ({\Lamb}_{\Vb}))^{-1/2} \bigl\{ -p \operatorname{tr}\bigl(\bigl(\mathbf{D}_{\varthetab; K}^{(n)}\bigr)_{11} \bigr) \\ & &\hspace*{145pt}{} + (1-p) \operatorname{tr}\bigl(\bigl(\mathbf{D}_{\varthetab; K}^{(n)}\bigr)_{22}\bigr) \bigr\} +o_\mathrm{P}(1) \\&=& \biggl(\frac{nk(k+2)}{\mathcal{J}_k(K)} \biggr)^{1/2} ( a_{p,q} ({\Lamb}_{\Vb}))^{-1/2} \mathbf{c}_{p,q} \pr\mathbf{H}_{k} (\betab\pr)^{\otimes2} ({{\Vb }}^{1/2})^{\otimes2} 
\vecop\bigl( {\utSb}{}_{\varthetab; K}^{(n)}\bigr) + o_\mathrm{P}(1) \\ &=& {\utT}{}_{\varthetab; K}^{(n)}+ o_\mathrm{P}(1) \end{eqnarray*} as $\ny$, under $\mathrm{P}^{(n)}_{\varthetab; g_{1}}$, which establishes the result. \end{pf*} \begin{pf*}{Proof of Proposition~\ref{ranktestbeta}} Fix $\varthetab _0\in {\mathcal H}_{0;1}^{\betab\prime}$ and $g_1\in\mathcal{F}_a$. We have already shown in Section~\ref{gsjdlr} that $ {\utQ}{}^{(n)}_{K}- {\utQ}{}_{\varthetab_0,K}^{(n)}=o_\mathrm{P}(1) $ as $\ny$ under $\mathrm{P}^{(n)}_{\varthetab_0;g_1}$. Proposition \ref {Hajek}(i) then yields \begin{eqnarray}\label{repres2}\qquad {\utQ}{}^{(n)}_{K} &=& \Deltab^{\IV\prime} _{\varthetab_0 ;K,g_1 } [(\Gamb^{\IV}_{\varthetab_0 ; K})^{-} - \mathbf{P}_{k}^{\betab_{}^{0}} ( (\mathbf{P}_{k}^{\betab_{}^{0}})\pr\Gamb ^{\IV}_{\varthetab_0 ; K} \mathbf{P}_{k}^{\betab_{}^{0}} )^{-} (\mathbf{P}_{k}^{\betab_{}^{0}})\pr ] \Deltab^{\IV} _{\varthetab_0 ;K,g_1 } \nonumber\\[-8pt]\\[-8pt] &&{} +o_\mathrm{P}(1),\nonumber \end{eqnarray} still as $\ny$ under $\mathrm{P}^{(n)}_{\varthetab_0;g_1}$. Now, since \begin{eqnarray*} && \Gamb^{\IV}_{\varthetab_0;K} [(\Gamb^{\IV}_{\varthetab_0 ; K})^{-}- \mathbf{P}_{k}^{\betab _{}^{0}} ( (\mathbf{P}_{k}^{\betab_{}^{0}})\pr\Gamb^{\IV }_{\varthetab_0 ; K} \mathbf{P}_{k}^{\betab_{}^{0}} )^{-}(\mathbf {P}_{k}^{\betab_{}^{0}})\pr ] \\ & &\qquad =\tfrac{1}{2}\mathbf{G}_{k}^{\betab_0}\operatorname{diag}\bigl(\mathbf {I}_{k-1},\mathbf{0}_{(k-2)(k-1)/2 \times(k-2)(k-1)/2}\bigr) (\mathbf{G}_{k}^{\betab_0} )\pr \end{eqnarray*} is idempotent\vspace*{-4pt} with rank $(k-1)$ [compare with~(\ref{degreefreedom})], it follows that ${\utQ}{}_{K}^{(n)}$ is asymptotically chi-square\vspace*{2pt} with $(k-1)$ degrees of freedom under $\mathrm{P}^{(n)}_{\varthetab_0;g_1}$, which establishes the null-hypothesis part of (i). For local alternatives, we restrict to those parameter values $\varthetab_0\in{\mathcal H}_{0}^{\betab}$ for which we have ULAN. 
By contiguity, (\ref{repres2}) also holds under alternatives\vadjust{\goodbreak} of the form $\mathrm{P}^{(n)}_{\varthetab _0+n^{-1/2}\taub^{(n)};g_1}$. Le Cam's third lemma then implies that ${\utQ}{}_{K}^{(n)}$, under $ \mathrm{P}^{(n)}_{\varthetab _0+n^{-1/2}\taub^{(n)};g_1}$, is asymptotically noncentral chi-square, still with $(k-1)$ degrees of freedom, but with noncentrality parameter \begin{eqnarray*} &&\lim_{\ny} \bigl\{ \bigl(\taub^{\IV(n)}\bigr)\pr\\ &&\qquad\hspace*{4pt}{}\times \bigl[ {\Gamb}^{\IV}_{\varthetab _0;K,g_{1}} [(\Gamb^{\IV}_{\varthetab_0 ; K})^{-}- \mathbf {P}_{k}^{\betab_{}^{0}} ( (\mathbf{P}_{k}^{\betab_{}^{0}})\pr\Gamb ^{\IV}_{\varthetab_0 ; K} \mathbf{P}_{k}^{\betab_{}^{0}} )^{-}(\mathbf {P}_{k}^{\betab_{}^{0}})\pr ] {\Gamb}^{\IV}_{\varthetab_0;K,g_{1}}\bigr] \taub^{\IV(n)} \bigr\}. \end{eqnarray*} Evaluation of this limit completes part (i) of the proof. As for parts\vspace*{-4pt} (ii) and (iii), the fact that ${\utphi}{}^{(n)}_{\betab; K} $ has asymptotic level $\alpha$ directly follows from the asymptotic null distribution just established and the classical Helly--Bray theorem, while asymptotic optimality under $K_{f_1}$ scores is a consequence of the asymptotic equivalence, under density $f_1$, of ${\utQ}{}^{(n)}_{K_{f_1}}$ and the optimal parametric test statistic for density $f_1$. \end{pf*} \begin{pf*}{Proof of Proposition~\ref{ranktestlambda}} Fix $\varthetab_0\in{\mathcal H}_{0;q}^{\Lamb\prime\prime}$ and $g_1\in\mathcal{F}_a$.
It directly follows from Lemma~\ref{alihoprimeprime} and Proposition~\ref{Hajek} that \begin{eqnarray*} {\utT}{}_{ K}^{(n)}&=& ( \operatorname{grad}\pr h(\dvecrond\Lamb_{\Vb}^{0}) (\Gamb ^{\III}_{{\bolds\vartheta_0}; K})^{-1} \operatorname{grad} h(\dvecrond\Lamb_{\Vb}^{0}) )^{-1/2} \\ & &{} \times\operatorname{grad} \pr h(\dvecrond\Lamb_{\Vb}^{0}) (\Gamb ^{\III}_{{\bolds\vartheta_0}; K})^{-1} \Deltab^{\III} _{{\bolds \vartheta_0}; K, g_{1}} +o_\mathrm{P}(1) \end{eqnarray*} as $\ny$, under $\mathrm{P}^{(n)}_{\varthetab_0;g_1}$, hence also---provided that $\varthetab_0\in{\mathcal H}_{0}^{\Lamb }$---under the contiguous sequences $\mathrm{P}^{(n)}_{\varthetab_0+n^{-1/2}\taub^{(n)};g_1}$. Parts (i) and (ii) result from the fact that ${\utDelta}{}_{\varthetab_0 ; K,g_{1}}^{\III}$ is asymptotically normal with mean zero under $\mathrm {P}^{(n)}_{\varthetab_0 ; g_{1}}$ and mean\looseness=-1 \[ \lim_{\ny} \bigl\{ {\mathcal{J}_k(K,g_{1})}/{\bigl(k(k+2)\bigr)}\mathbf{D}_{k}(\Lamb_{\Vb}) \taub^{\III(n)} \bigr\} \]\looseness=0 under\vspace*{1pt} $\mathrm{P}^{(n)}_{\varthetab_0 + n^{-1/2} \taub^{(n)}; g_{1}}$ (Le Cam's third lemma; again, for $\varthetab_0\in{\mathcal H}_{0}^{\Lamb}$), and with covariance matrix $ {\mathcal {J}_k(K)}/{(k(k+2))} \mathbf{D}_{k}(\Lamb_{\Vb})$ under both. Parts (iii) and (iv) follow as in the previous proof. \end{pf*} \end{appendix} \section*{Acknowledgments} The authors very gratefully acknowledge the extremely careful and insightful editorial handling of this unusually long and technical paper. The original version received very detailed and constructive comments from two anonymous referees and a (no less anonymous) Associate Editor. Their remarks greatly helped improving the exposition.
\section{Introduction} \label{sec:introduction} The refined de Sitter (dS) conjecture, the latest version of the Swampland conjecture, was recently proposed by Ooguri, Palti, Shiu and Vafa~\cite{Ooguri:2018wrx} after an earlier version proposed by Obied, Ooguri, Spodyneiko and Vafa~\cite{Obied:2018sgi}. The conjecture states that any scalar potential $V(\phi)$ for scalar fields in a low energy effective theory of a consistent quantum gravity must satisfy at least one of the following conditions: \begin{align*} &||\nabla V|| \geq c_1 \frac{V}{M_P},&\text{(Condition-1)}\\ &{\rm min}(\nabla_i \nabla_j V) \leq -c_2 \frac{V}{M_P^2},&\text{(Condition-2)} \end{align*} where $c_1$ and $c_2$ are universal, positive constants of order unity and ${\rm min}(\nabla_i \nabla_j V)$ in the second condition is the minimum eigenvalue of the Hessian $\nabla_i \nabla_j V$ in an orthonormal frame.\footnote{In the literature, $c$ and $c'$ are also widely used instead of the $c_1$ and $c_2$ of this paper. Note that $c_1$ and $c_2$ refer to the first and the second derivatives of the potential, respectively.} One of the most straightforward and intriguing implications of the conjecture is that the cosmological constant scenario, for which $||\nabla V_{c.c.}||=0$ and $V_{c.c.}>0$, is ruled out, while a quintessence field with an exponentially decaying potential $V_{Q} \left(Q\right) = \Lambda_{Q}^{4} e^{-c_{Q} Q}$ would consistently explain the late time expansion if $c_Q \geq c_1$ (see \cite{Tsujikawa:2013fta} for a recent review of quintessence models). There have been many follow-up papers considering various implications of the conjecture~\cite{Fukuda:2018haz,Wang:2018kly, Das:2018rpg, Antoniadis:2018ngr, Ashoorioon:2018sqb, Motaharfar:2018zyb, Odintsov:2018zai,Dimopoulos:2018upl,Kawasaki:2018daf,Hamaguchi:2018vtv, Lin:2018kjm, Anguelova:2018vyr, Ellis:2018xdr, Halverson:2018cio, Brandenberger:2018xnf, Bena:2018fqc, Moritz:2018ani, Visinelli:2018utg, DAmico:2018mnx, Han:2018yrk, Brandenberger:2018wbg}.
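The quintessence statement above follows from a one-line computation (our own illustrative remark, assuming the exponent is written in Planck units):

```latex
V_Q(Q) = \Lambda_Q^4\, e^{-c_Q Q/M_P}
\quad\Longrightarrow\quad
\frac{M_P\,||\nabla V_Q||}{V_Q}
  = M_P \left|\frac{V_Q'}{V_Q}\right| = c_Q ,
```

so (Condition-1) holds at every field value exactly when $c_Q \geq c_1$, with no help needed from (Condition-2).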
In contrast to (Condition-1), the newly added (Condition-2) is rather easily satisfied for a generic potential at low scales, $\Delta \phi \ll M_P$, because \begin{eqnarray} M_P^2 \frac{\nabla_i \nabla_j V}{V} \sim - \frac{M_P^2}{\Delta \phi^2} \ll -c_2 \sim -\mathcal{O}(1). \end{eqnarray} However, inflationary dynamics, which takes place at a high scale, is constrained by the conditions, as analyzed recently in Ref.~\cite{Fukuda:2018haz}. The dS conjecture indeed provides new insight into individual models, and this motivates us to examine a model that has not been examined in this light so far: the minimal gauge inflation model introduced in Ref.~\cite{Gong:2018jer}. In the next section, Sec.~\ref{sec:model}, we start from the inflaton potential of the minimal gauge inflation model and apply the dS conjecture to read off the consistency conditions in generic field space. In Sec.~\ref{sec:predictions}, taking into account the latest cosmological observations from Planck 2018 and the polarization measurements from BICEP2/{\it Keck}, we re-examine the dS conditions for a fully realistic case, which provides the most interesting understanding of the model in light of the dS conjecture. Finally, we conclude in Sec.~\ref{sec:conclusion}. \begin{figure}[t] \centering \includegraphics[width=.88\columnwidth]{fig_pot} \caption{\label{fig:potential} The potential of the minimal gauge inflation model for $\phi/f_{\rm eff}\in(0,\pi)$.} \end{figure} \section{Minimal gauge inflation and the conditions of the refined Swampland conjecture} \label{sec:model} The minimal gauge inflation model is based on a higher dimensional theory with non-Abelian ${\rm SU}(2)$ gauge symmetry on the orbifold $S^1/\mathbb{Z}_2$~\cite{Gong:2018jer}, with only a few free parameters: the compactification radius, $R$, and the gauge coupling constant, $g_5$, in five dimensions.
The model is supposed to be the simplest realization in this category~\cite{ArkaniHamed:2003wu, Kaplan:2003aj,Park:2007sp, Kubo:2001zc}. The inflaton is identified with the extra dimensional component of the gauge field, $A_5\sim \phi$, and its potential is protected by the higher dimensional gauge symmetry itself but is generated at one-loop level by gauge self interactions. As a result, the model is extremely predictive.\footnote{In Ref.~\cite{Cacciapaglia:2005da}, the electroweak symmetry breaking is realized by a fully radiatively generated potential.} It is therefore desirable to check whether this model is consistent with the refined dS conjecture. The inflaton potential of the minimal gauge inflation model is \begin{eqnarray} V(\phi) &=& V_0 \sum_{n=1}^\infty \frac{1}{n^5}\left[1-\cos \frac{n\phi}{f_{\rm eff}}\right] \\ &=&-\frac{V_0}{2} \left[{\rm Li}_5(e^{i\phi/f_{\rm eff}}) +{\rm Li}_5(e^{-i\phi/f_{\rm eff}}) -2 \zeta(5) \right],\nonumber \end{eqnarray} where two of the most important model parameters, $V_0$ and $f_{\rm eff}$, are introduced: $V_0 = \frac{9}{(2\pi)^6 R^4}$ is the scale of the potential and $f_{\rm eff} =\frac{1}{\sqrt{2 \pi R}g_5} = \frac{1}{2\pi g_4 R}$ is the `effective decay constant'. The potential is composed of infinitely many periodic terms. It has a maximum at $\phi/f_{\rm eff} =\pi$ and we only consider the physical region $\phi/f_{\rm eff} \in (0,\pi)$. Inflation starts just below $\phi/f_{\rm eff}\approx \pi$ and the field then rolls down to the true vacuum at $\phi/f_{\rm eff}=0$. The model may be regarded as a UV completion of the natural inflation models~\cite{Freese:1990rb, Adams:1992bn,Kim:2004rp}. The shape of the potential is depicted in Fig.~\ref{fig:potential}. \subsection{Condition-1} We first impose (Condition-1), i.e., $M_P ||V'||/V \geq c_1$.
A convenient function is introduced: \begin{align} C\left(\frac{\phi}{f_{\rm eff}}\right) &\equiv f_{\rm eff} \frac{||V'||}{V} \\ &=-\frac{i\left[{\rm Li}_4(e^{-i \phi/f_{\rm eff}})-{\rm Li}_4(e^{i \phi/f_{\rm eff}})\right]}{\left[{\rm Li}_5(e^{i\phi/f_{\rm eff}}) +{\rm Li}_5(e^{-i\phi/f_{\rm eff}}) -2 \zeta(5)\right]}. \end{align} The function is plotted in Fig.~\ref{fig:CD}~(upper curve). As the function is monotonically decreasing within the range of our interest, (Condition-1) sets an upper limit for $\phi$: \begin{align} &C\left(\frac{\phi}{f_{\rm eff}}\right) \geq c_1 \frac{ f_{\rm eff}}{M_P} \nonumber \\ &\Leftrightarrow ~~\phi \leq \phi_* = f_{\rm eff}~ C^{-1}\left(c_1\frac{f_{\rm eff}}{M_P}\right). \label{eq:C1} \end{align} The critical value, $\phi_*$, is determined for a given value of $c_1 f_{\rm eff}/M_P$. In principle, this parameter is constrained by the inflationary observables. We will discuss this in Sec.~\ref{sec:predictions}. \begin{figure}[t] \centering \includegraphics[width=.95\columnwidth]{fig_CD2} \caption{\label{fig:CD} $C(\phi/f_{\rm eff})$ and $D(\phi/f_{\rm eff})$ in $\phi/f_{\rm eff}=(0,\pi)$. The locations of $\phi_*$ and $\phi_\star$ are also depicted. } \end{figure} \subsection{Condition-2} The potential is convex ($V''>0$) for small field values of $\phi/f_{\rm eff}$ and becomes concave ($V''<0$) toward the plateau located at the top. The inflection point ($V''=0$) is located at $\phi/f_{\rm eff} \approx 1.45$. To analyze the second condition, we introduce a convenient function, $D$, which encodes the curvature of the potential: \begin{align} D\left(\frac{\phi}{f_{\rm eff}}\right) &\equiv f_{\rm eff}^2 \frac{V''}{V} \\ &=-\frac{{\rm Li}_3(e^{-i \phi/f_{\rm eff}})+{\rm Li}_3(e^{i \phi/f_{\rm eff}})}{\left[{\rm Li}_5(e^{i\phi/f_{\rm eff}}) +{\rm Li}_5(e^{-i\phi/f_{\rm eff}}) -2 \zeta(5)\right]}. \end{align} The shape of $D$ is depicted in Fig.~\ref{fig:CD} (lower curve).
The function is monotonically decreasing, and (Condition-2) limits the validity range of $\phi$ from below, setting the lower bound $\phi_\star$ as \begin{align} D\left(\frac{\phi}{f_{\rm eff}}\right) &\leq -c_2 \frac{f_{\rm eff}^2}{M_P^2} \nonumber \\ \Leftrightarrow ~~&\phi \geq \phi_\star = f_{\rm eff}~ D^{-1}\left(-c_2\frac{ f_{\rm eff}^2}{M_P^2}\right). \label{eq:C2} \end{align} \subsection{Condition-1 and Condition-2} In principle, (Condition-1) in Eq.~\ref{eq:C1} and (Condition-2) in Eq.~\ref{eq:C2} are independent. However, at least one of the conditions is satisfied in the whole region of $\phi$ if the lower bound of (Condition-2), $\phi_\star$, is smaller than the upper bound of (Condition-1), $\phi_*$, that is, \begin{eqnarray} \frac{\phi_\star}{f_{\rm eff}}=D^{-1}\left(-c_2 \frac{f_{\rm eff}^2}{M_P^2}\right) \leq C^{-1}\left(c_1\frac{f_{\rm eff}}{M_P}\right) =\frac{\phi_*}{f_{\rm eff}}. \end{eqnarray} This inequality is a generic condition that should be taken as the guiding principle for any model with similar features: a potential that grows monotonically and has an inflection point on the way to its top. There are many examples of this kind, including Higgs inflation~\cite{Bezrukov:2007ep, Bezrukov:2009db, Hamada:2014iga, Hamada:2014wna,Bezrukov:2014ipa,Bezrukov:2014bra}. If the above condition is not satisfied, there exists a region $\phi \in (\phi_*, \phi_\star)$ where neither condition is satisfied. Indeed, as one can notice in the figure, $C \lesssim D$ for $\phi/f_{\rm eff} \lesssim \pi/4$, so that one can actually find a set of values $(c_1,c_2)$ for a given value of $f_{\rm eff}/M_P$ satisfying the desired condition, as one can clearly see in Fig.~\ref{fig:CD}. For instance, with $(c_1,c_2) =(0.3,0.1)$ and $f_{\rm eff}/M_P\approx 1.2$, $(\phi_\star,\phi_*)\approx (1.6f_{\rm eff},2.4f_{\rm eff})$, and thus the dS criterion is satisfied.
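Both the shape of the potential and the worked example above can be checked numerically; the following sketch (ours, not part of the paper) evaluates $V$, $C$ and $D$ from their defining series and solves for the two bounds by bisection:

```python
import math

def V(x, N=2000):
    # Dimensionless potential V/V_0 as a function of x = phi/f_eff;
    # the series converges quickly since the terms fall off as n^-5.
    return sum((1.0 - math.cos(n * x)) / n**5 for n in range(1, N + 1))

def C(x, N=2000):
    # C = f_eff |V'|/V = [sum sin(nx)/n^4] / [sum (1-cos(nx))/n^5]
    return sum(math.sin(n * x) / n**4 for n in range(1, N + 1)) / V(x, N)

def D(x, N=2000):
    # D = f_eff^2 V''/V = [sum cos(nx)/n^3] / [sum (1-cos(nx))/n^5]
    return sum(math.cos(n * x) / n**3 for n in range(1, N + 1)) / V(x, N)

def bisect(g, a, b, tol=1e-8):
    # plain bisection for a root of g, assuming a sign change on [a, b]
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (m, b) if g(a) * g(m) > 0 else (a, m)
    return 0.5 * (a + b)

c1, c2, f = 0.3, 0.1, 1.2   # the example values quoted in the text
phi_star = bisect(lambda x: D(x) + c2 * f**2, 1.0, 3.0)  # lower bound
phi_ast  = bisect(lambda x: C(x) - c1 * f,    1.0, 3.0)  # upper bound
print(phi_star, phi_ast)    # ~ 1.6 and ~ 2.4, in units of f_eff
```

The run also confirms the small-field behavior $V/V_0 \to \frac{\zeta(3)}{2}(\phi/f_{\rm eff})^2$ and the inflection point near $\phi/f_{\rm eff}\approx 1.45$, where $D$ changes sign.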
In Fig.~\ref{fig:c1c2} we depict the critical lines in the plane of $(c_1,c_2)$ for various values of $f_{\rm eff}$: $f_{\rm eff}/M_P =0.9, 1.0, 1.1, 1.2$. In the regions below the lines (colored parts), the critical condition $\phi_\star \leq \phi_*$ is satisfied, so the condition of the refined dS conjecture is fulfilled and the model belongs to the ``Landscape"; in the regions above the lines, the potential may not have a consistent quantum gravitational UV completion, i.e., it belongs to the ``Swampland". So far, we have examined the implications of the refined dS conjecture in the generic field space of the minimal gauge inflation model. In this way, we found that the allowed range of $(c_1,c_2)$ depends on $f_{\rm eff}$. Conversely, if $(c_1,c_2)$ were known {\it a priori}, we could set the theoretically preferred range of the model parameters. Instead, in this paper, we take the observational data as the guideline for the theory and set bounds on $(c_1,c_2)$ in the next section. \begin{figure}[t] \centering \includegraphics[width=.95\columnwidth]{fig_c1c2} \caption{\label{fig:c1c2} The critical lines of $(c_1,c_2)$. The region above each line is excluded by the dS conjecture, i.e., belongs to the ``Swampland". } \end{figure} \section{Inflationary predictions} \label{sec:predictions} \begin{figure}[t] \centering \includegraphics[width=.98\columnwidth]{fig_exp} \caption{\label{fig:exp} The critical line of $(c_1,c_2)$ consistent with the observational data from Planck+BICEP2+{\it Keck}~\cite{Akrami:2018odb, Aghanim:2018eyx, Ade:2015xua,Ade:2018gkx}.
} \end{figure} The inflationary observables of the minimal gauge inflation model, namely the power spectrum of the curvature perturbation, ${\mathcal P}_\zeta$, the corresponding spectral index, $n_s$, and the tensor-to-scalar ratio, $r$, were obtained in Ref.~\cite{Gong:2018jer}: \begin{align} {\mathcal P}_\zeta &\approx \frac{V_0 f_{\rm eff}^2}{6\pi^2 M_P^6}e^{-N_e M_P^2/f_{\rm eff}^2}, \\ n_s &\approx 1-\frac{M_P^2}{f_{\rm eff}^2}, \\ r &\approx 8\frac{M_P^2}{f_{\rm eff}^2} e^{-N_e M_P^2/f_{\rm eff}^2}, \end{align} where $N_e$ is the number of efolds. From these, we obtain the model parameters and $N_e$ as \begin{eqnarray} V_0&\approx& \frac{48 \pi^2 {\mathcal P}_\zeta (1-n_s)^2}{r} M_P^4,\\ f_{\rm eff}&\approx& \frac{1}{\sqrt{1-n_s}} M_P,\\ N_e&\approx& \frac{1}{1-n_s} \log \frac{8(1-n_s)}{r}. \end{eqnarray} We take the reference values \begin{align} {\mathcal P}_\zeta =2.5 \times 10^{-9}, \quad n_s =0.965 \pm 0.004 \end{align} from the recent Cosmic Microwave Background (CMB) data of the Planck observatory~\cite{Akrami:2018odb, Aghanim:2018eyx, Ade:2015xua} together with the data taken by the BICEP2/{\it Keck} CMB polarization experiments~\cite{Ade:2018gkx}, and then determine the model parameters, $V_0$ and $f_{\rm eff}$, as well as the allowed window for $r$: \begin{align} \left.V_0\right|_{\rm Planck+BICEP2+{\it Keck}} &\approx (3.0-4.2)\times 10^{-8} M_P^4, \\ \left.f_{\rm eff}\right|_{\rm Planck+BICEP2+{\it Keck}} &\approx 5.3 M_P, \\ \left.r\right|_{\rm Planck+BICEP2+{\it Keck}} &\approx 0.034-0.049, \end{align} for the requested number of efolds, $N_e=50-60$. Notice that the tensor-to-scalar ratio $r$ is within the reach of future probes.
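The quoted windows follow directly from the approximate formulas above, combined with the tree-level relations $V_0=9/((2\pi)^6R^4)$ and $f_{\rm eff}=1/(2\pi g_4 R)$; a short numerical sketch (ours; $M_P=1$ units, central value $n_s=0.965$):

```python
import math

P_zeta, n_s = 2.5e-9, 0.965        # Planck reference values used above

f_eff = 1.0 / math.sqrt(1.0 - n_s)                      # ~ 5.3

def r_of(Ne):                       # tensor-to-scalar ratio
    return 8.0 * (1.0 - n_s) * math.exp(-Ne * (1.0 - n_s))

def V0_of(r):                       # potential scale, in units of M_P^4
    return 48.0 * math.pi**2 * P_zeta * (1.0 - n_s)**2 / r

def R_of(V0):                       # radius from V_0 = 9 / ((2 pi)^6 R^4)
    return (9.0 / ((2.0 * math.pi)**6 * V0))**0.25

def g4_of(V0):                      # coupling from f_eff = 1 / (2 pi g_4 R)
    return 1.0 / (2.0 * math.pi * f_eff * R_of(V0))

r_max, r_min = r_of(50), r_of(60)                  # ~ 0.049 and ~ 0.034
V0_min, V0_max = V0_of(r_max), V0_of(r_min)        # ~ 3.0e-8 and ~ 4.2e-8
print(f_eff, (r_min, r_max), (V0_min, V0_max))
print([(R_of(V0), g4_of(V0)) for V0 in (V0_min, V0_max)])
```

The output reproduces $f_{\rm eff}\approx 5.3\,M_P$, $r\approx 0.034$--$0.049$, $V_0\approx (3.0$--$4.2)\times 10^{-8}M_P^4$, as well as $RM_P\approx 7.7$--$8.4$ and $g_4\approx (3.5$--$3.9)\times 10^{-3}$ quoted below.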
The determined values of $V_0$ and $f_{\rm eff}$ give the compactification radius and the gauge coupling constant: \begin{align} RM_P &\approx 7.7-8.4, \nonumber \\ g_4 &\approx (3.5-3.9)\times 10^{-3}, \end{align} which are consistent with quantum gravity and with perturbative gauge theory, $g_4 \ll 4\pi$. Finally, having determined the input parameters of the model, we can now directly check the dS conjecture. Fig.~\ref{fig:exp} shows the parametric region of $(c_1,c_2)$ which is consistent with the dS conjecture as well as with the observational data. The allowed values of $c_1$ and $c_2$ are typically $c_1\sim 0.15$ and $c_2 \sim 0.01$ or smaller. These values are not strictly ${\cal O}(1)$ as requested by the conjecture, but are numerically close. \section{Conclusion} \label{sec:conclusion} The latest swampland conjecture provides important implications for low energy effective theory models, which may or may not be consistent with a quantum gravity theory. The conjecture suggests two related but independent conditions for any scalar potential $V(\phi)$ of a low energy effective theory of a consistent quantum gravity: \begin{align*} ||\nabla V|| \geq c_1\frac{V}{M_P},~~~ {\rm min}(\nabla_i \nabla_j V) \leq -c_2\frac{V}{M_P^2}, \end{align*} which we call (Condition-1) and (Condition-2), respectively, in this paper. The parameters $c_1$ and $c_2$ are supposed to be universal but unknown positive constants. In this paper, we closely examined the minimal gauge inflation model as a concrete example of a potentially realistic inflationary model and applied the dS conjecture to check its consistency with a quantum gravity theory. Interestingly, the potential indeed allows a parametric region with $(c_1 \lesssim 1, c_2 \lesssim 1)$.
If we apply the latest cosmological observations from Planck 2018 and also BICEP2+{\it Keck}, the allowed region shrinks but persists, as is clearly seen in Fig.~\ref{fig:exp}. Finally, we would emphasize that the method developed in this paper can be applied to {\it any theory} with a similarly behaving potential: one growing monotonically, with an inflection point on the way to the top. \acknowledgments SC is thankful to Kohei Kamada for valuable comments and to Matt Reece for discussions during the CERN-TH institute in summer 2018. This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (No.2016R1A2B2016112) and (NRF-2018R1A4A1025334).
\section{Introduction} \setcounter{equation}{0} In this paper, we study the properties of the parabolic Besov spaces ${\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n+1})$ and the parabolic Sobolev spaces ${\mathcal L}^p_\al({\bf R}^{n+1})$ for $1 \leq p \leq \infty, \,\, \al \in {\bf R}$. We also study the relation between the parabolic Besov spaces in ${\bf R}^{n}_T = \{(X,t) \,| \, X \in {\bf R}^n, \,\, 0 < t < T \}, \,\, 0 < T \leq \infty $, and the standard Besov spaces in $\R$. The parabolic Sobolev spaces were studied in \cite{FT} and \cite{R}. The authors of \cite{FT} proved a trace theorem for parabolic functions which is similar to that for the usual Sobolev spaces (see \cite{BL}). In fact, the parabolic Sobolev and the parabolic Besov spaces are particular cases of the anisotropic Sobolev spaces and anisotropic Besov spaces, respectively, with dilation matrix $\de_\ep =(\ep^2, \ep, \cdots, \ep)$ (see \cite{DT1}, \cite{DT2}, \cite{L} and \cite{N}). For the properties of the usual Besov and Sobolev spaces, we refer to \cite{BL}, \cite{ N}, \cite{P}, \cite{S}, \cite{ Tr} and the references therein. The Besov and Sobolev spaces have been used in boundary value problems for several elliptic-type partial differential equations in bounded domains in $\R$. When the boundary data are given in some Besov or Sobolev space, one can find solutions of the boundary value problems of elliptic-type partial differential equations which are contained in the spaces corresponding to the boundary data (see \cite{BS}, \cite{CC}, \cite{ FMM}, \cite{JK}). Similarly, functions in parabolic Besov and Sobolev spaces can serve as boundary data and solutions of initial-boundary value problems of parabolic-type partial differential equations in bounded cylinders (see \cite{JM} for the case of the heat equation).
In section \ref{sec1}, we introduce a parabolic Sobolev space ${\mathcal L}^p_\al ({\bf R}^{n+1})$ and a parabolic Besov space ${\mathcal B}_p^{\al, \frac12 \al} ({\bf R}^{n+1})$. The properties of the parabolic Sobolev and parabolic Besov spaces are also stated. In section \ref{sec1-2}, we show that $f \in {\mathcal L}^p_\al ({\bf R}^{n+1}), \,\, 1< p<\infty, \,\, \al \in {\bf R}$ is equivalent to $f, D_{X_k} f, D_t^{\frac12} f \in {\mathcal L}^p_{\al -1} ({\bf R}^{n+1})$ for all $1 \leq k \leq n$ (see Theorem \ref{iterates}), and that $f \in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n+1})$, $1 \leq p \leq \infty, \,\, \al \in {\bf R}$ is equivalent to $f, D_{X_k} f, D_t^{\frac12} f \in {\mathcal B}_p^{\al -1, \frac12 \al -\frac12} ({\bf R}^{n+1})$ for all $1 \leq k \leq n$ (see Theorem \ref{iterateb}). Here $D^\frac12_t $ is the fractional differential operator whose Fourier transform in the $t$ variable is defined by $\widehat{D^\frac12_t f}(X,\tau) = |\tau|^\frac12 \widehat{f}(X,\tau)$. Our results in section \ref{sec1-2} can be compared with the results of V. Gopala Rao and B. Frank Jones. In \cite{R}, V. Gopala Rao showed that $f \in {\mathcal L}_\al^p ({\bf R}^{n+1})$ is equivalent to $ f , \, \, D_t(f* h_1) \in {\mathcal L}_{\al-1}^p ({\bf R}^{n+1})$, where $*$ is the convolution in ${\bf R}^{n+1}$ and $h_1(X,t) = c_1 t^{\frac{-n-1}{2}} e^{-\frac{|X|^2}{4t}}$ if $t > 0$ and $h_1(X,t) =0$ if $t < 0$. In \cite{J}, B. Frank Jones derived several equivalent norms for the parabolic Besov spaces. He also showed that $f\in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n+1}), \,\, 1 \leq p \leq \infty, \,\, \al \in {\bf R}$ is equivalent to $f, \,\, D_{X_k}f \in {\mathcal B}^{\al-1, \frac12 \al -\frac12}_p ({\bf R}^{n+1})$ and $D_t f \in {\mathcal B}^{\al -2,\frac12 \al -1}_p ({\bf R}^{n+1})$. In section \ref{sec4}, we characterize the parabolic Besov spaces in ${\bf R}^{n}_T = \{ (X,t) \in {\bf R}^{n+1} \, | \, 0 < t < T \}, \,\, 0 < T \leq \infty$.
We show that the parabolic Besov spaces in ${\bf R}^{n}_T$ are also interpolation spaces and have the same properties as in Theorem \ref{iterateb}. In section \ref{sec5}, we study the properties of the solution $u$ of the heat equation with initial data $f\in {\mathcal B}_p^{\al -\frac2p} ({\bf R}^{n})$. We show that $u \in {\mathcal B}_p^{\al,\frac12 \al} ({\bf R}^{n}_T)$, and we establish an equivalence between the parabolic Besov norm $\| u\|_{{\mathcal B}_p^{\al,\frac12 \al} ({\bf R}^{n}_T)}$ and the usual Besov norm $ \| f\|_{{\mathcal B}_p^{\al -\frac2p} ({\bf R}^{n})}$. For $ 0 \leq \al, \,\, 1 \leq p \leq \infty$ and $ f \in {\mathcal B}^{\al -\frac2p}_p (\R) $, we define a function by \begin{align}\label{main4} u (X,t) = <f, \Ga(X- \cdot, t)>: = \left\{\begin{array}{l} <f, \Ga(X- \cdot, t)>_\al, \quad 0 \leq \al < \frac2p, \\ \int_{\R} \Ga(X-Y,t) f(Y) dY, \quad \frac2p \leq \al, \end{array} \right. \end{align} where $\Ga (X,t) = c_n t^{-\frac{n}2} e^{-\frac{|X|^2}{4t}}$ if $t > 0$ and $\Ga(X,t) =0$ if $t < 0$, and $<\cdot, \cdot>_\al$ is the duality pairing between ${\mathcal B}^{\al -\frac2p}_p (\R) $ and ${\mathcal B}^{-\al +\frac2p}_q (\R), \,\, \frac1p + \frac1q =1$. It is easy to see that $u$ is a solution to the heat equation in ${\bf R}^{n}_\infty$ with the initial value $f $. Our main result in section \ref{sec5} is stated as follows. \begin{theo}\label{mainresult} Let $f \in {\mathcal B}^{\al - \frac2p}_p (\R)$ and let $u$ be defined by (\ref{main4}). Let $1 \leq p \leq \infty$, $\al>0$ and $T < \infty$. Then $u\in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n}_T) $ with \begin{align*} \| u \|_{{\mathcal B}^{\al,\frac12 \al}_p ({\bf R}^{n}_T)} \approx \| f\|_{ {\mathcal B}^{\al - \frac2p}_p (\R)}. \end{align*} \end{theo} The notation $A \approx B$ means that there are positive constants $c$ and $C$, independent of $f$, such that $c \leq \frac{A}{B} \leq C$. Our result can be compared with a result of H. Triebel.
In section 1.8.1 of \cite{Tr3}, H. Triebel showed that for $1 < p < \infty$, $\al > \frac2p$ and $m > \frac12(\al -\frac2p)$, \begin{align*} \|f \|^p_{L^p(\R)} + \int_0^\infty \int_{\R} t^{mp-\frac12p (\al -\frac2p)} | D_t^mu(X,t)|^p dXdt\approx \| f\|^p_{{\mathcal B}^{\al - \frac2p}_p (\R)}. \end{align*} Throughout this paper, the notation $A \lesssim B$ means that $A \leq c B$ for a positive constant $c$ depending only on $n, p,$ and $T$. We denote by $\hat{\cdot}$ the Fourier transform in ${\bf R}, \,\, {\bf R}^n$ or ${\bf R}^{n+1}$. \section{Parabolic Sobolev and parabolic Besov spaces on ${\bf R}^{n+1}$ } \setcounter{equation}{0} \label{sec1} For $\al \in {\bf R}$, we consider a distribution $H_\al$ whose Fourier transform in ${\bf R}^{n+1}$ is defined by \begin{eqnarray*} \widehat{ H_{\al}} (\xi,\tau) = c_\al(1 + 4\pi^2 |\xi|^2 + i \tau)^{-\frac{\al}{2}}, \quad (\xi, \tau) \in {\bf R}^n \times {\bf R}. \end{eqnarray*} For $\al \in {\bf R}, \,\, 1\leq p \leq \infty$, we define the parabolic Sobolev space ${\mathcal L}^p_{\al} ({\bf R}^{n+1})$ by \begin{eqnarray*} {\mathcal L}^p_\al ({\bf R}^{n+1}) = \{ f \in {\mathcal S}'({\bf R}^{n+1}) \, | \, f = H_{\al} * g \quad \mbox{for some} \quad g \in L^p ({\bf R}^{n+1}) \} \end{eqnarray*} with norm \begin{align*} \|f\|_{{\mathcal L}^p_\al ({\bf R}^{n+1})} : = \| g \|_{L^p({\bf R}^{n+1})} \, ( = \| H_{-\al} * f \|_{L^p({\bf R}^{n+1})} ), \end{align*} where $*$ is the convolution in ${\bf R}^{n+1}$ and ${\mathcal S}^{'}({\bf R}^{n+1})$ is the dual space of the Schwartz space ${\mathcal S}({\bf R}^{n+1})$. In particular, when $\al =0$, we have ${\mathcal L}^p_0({\bf R}^{n+1}) = L^p({\bf R}^{n+1})$. Next, we define the parabolic Besov space.
Let $\phi \in {\mathcal S} ({\bf R}^{n+1})$ be such that \begin{eqnarray*} \left\{\begin{array}{rl} &\hat \phi(\xi,\tau) > 0 \quad \mbox{ on } 2^{-1} < |\xi| + |\tau|^\frac12 < 2 ,\\ & \hat \phi(\xi,\tau)=0 \quad \mbox{ elsewhere }, \\ &\sum_{ -\infty < i < \infty } \hat \phi(2^{-i}\xi, 2^{-2i}\tau) =1 \ ( (\xi,\tau) \neq (0,0)). \end{array}\right. \end{eqnarray*} We define functions $\phi_i, \,\, \psi \in {\mathcal S}({\bf R}^{n+1})$ whose Fourier transforms are given by \begin{eqnarray}\label{psi1} \begin{array}{ll} \widehat{\phi_i}(\xi, \tau) &= \hat \phi(2^{-i} \xi, 2^{-2i} \tau) \quad (i = 0, \pm 1, \pm 2 , \cdots)\\ \widehat{\psi}(\xi, \tau) & = 1- \sum_{i=1}^\infty \hat \phi (2^{-i} \xi, 2^{-2i} \tau). \end{array} \end{eqnarray} Note that $\phi_i(X,t) = 2^{(n+2)i} \phi(2^i X, 2^{2i} t)$. For $\al \in {\bf R}$ we define the parabolic Besov space ${\mathcal B}^{\al,\frac12 \al}_{pq} ({\bf R}^{n+1})$ by \begin{eqnarray*} {\mathcal B}^{\al,\frac12 \al}_{pq} ({\bf R}^{n+1}) = \{ f \in {\mathcal S}^{'}({\bf R}^{n+1}) \, | \, \|f\|_{{\mathcal B}^{\al, \frac12 \al}_{pq}} < \infty \, \} \end{eqnarray*} with the norms \begin{align*} \|f\|_{{\mathcal B}^{\al, \frac12 \al}_{pq}} :& = \| \psi * f\|_{L^p} + ( \sum_{ 1 \leq i < \infty} (2^{\al i} \|\phi_i * f\|_{L^p})^q)^{\frac1q}, \quad 1 \leq q < \infty,\\ \|f\|_{{\mathcal B}^{\al, \frac12 \al}_{p\infty}} :& = \sup_{i \geq 1} (\| \psi * f\|_{L^p} , \,\, 2^{\al i} \|\phi_i * f\|_{L^p}), \end{align*} where $*$ is the convolution in ${\bf R}^{n+1}$. When $p=q$, we simply write ${\mathcal B}^{\al,\frac12 \al}_p$ for ${\mathcal B}^{\al,\frac12 \al}_{pp}$. The following properties can be shown by the same arguments as for the usual Sobolev and Besov spaces in $\R$.
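As an aside before listing them, a parabolic dyadic partition of unity as in \eqref{psi1} can be constructed explicitly by telescoping a smooth cutoff in the anisotropic quantity $\rho(\xi,\tau)=|\xi|+|\tau|^{1/2}$, which scales as $\rho(2^{-i}\xi, 2^{-2i}\tau)=2^{-i}\rho(\xi,\tau)$. The following numerical sketch (ours, for $n=1$) illustrates the construction:

```python
import math

def _bump(x):
    # smooth transition ingredient: e^{-1/x} for x > 0, else 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

def chi(r):
    # smooth cutoff: chi = 1 on [0, 1], chi = 0 on [2, infinity)
    a, b = _bump(2.0 - r), _bump(r - 1.0)
    return a / (a + b) if a + b > 0 else 0.0

def phi_hat(xi, tau):
    # supported exactly on the parabolic annulus 1/2 < |xi| + |tau|^(1/2) < 2
    rho = abs(xi) + math.sqrt(abs(tau))
    return chi(rho) - chi(2.0 * rho)

def partition_sum(xi, tau, M=60):
    # sum over i of phi_hat(2^-i xi, 2^-2i tau); it telescopes to 1
    return sum(phi_hat(2.0 ** -i * xi, 2.0 ** (-2 * i) * tau)
               for i in range(-M, M + 1))

print(partition_sum(0.7, 3.2))   # ~ 1.0 for any (xi, tau) != (0, 0)
```

Because the anisotropic dilation multiplies $\rho$ by an exact power of two, the sum telescopes to $1$ up to machine precision, mirroring the third requirement on $\hat\phi$ above.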
\begin{prop} \label{prop2} \begin{itemize} \item[(1)] The definition of $ {\mathcal B}^{\al, \frac12 \al}_{pq} ({\bf R}^{n+1})$ does not depend on the choice of the function $\phi$. \item[(2)] The real interpolation method gives \begin{eqnarray*} ( {\mathcal L}^{p}_{\al_0} , {\mathcal L}^{p}_{\al_1} )_{\te, q} = {\mathcal B}^{\al,\frac12 \al}_{pq} \end{eqnarray*} for $ 1 \leq p \leq \infty, \, \, \al = (1-\te) \al_0 + \te \al_1, 0 < \te < 1,$ and \begin{eqnarray*} ({\mathcal B}_{pq_0}^{\al_0,\frac12 \al_0} , {\mathcal B}_{pq_1}^{\al_1,\frac12 \al_1} )_{\te, r} = {\mathcal B}^{\al,\frac12 \al}_{pr} \end{eqnarray*} for $\al_0 \neq \al_1, \, \, 1 \leq p, r, q_0, q_1 \leq \infty , \al = (1-\te) \al_0 + \te \al_1$. \item[(3)] For $0 < \al < 2$, the parabolic Besov norm $\| f\|_{{\mathcal B}^{\al,\frac12 \al}_p}$ is equivalent to the norm \begin{align}\label{besovnorm1} \|f\|_{L^p} & + \Big(\int_{\R} \int_{{\bf R} \times {\bf R}} \frac{|f(X,t) - f(X,s)|^p}{|t-s|^{1 + \frac12 p\al}}dtds dX \Big)^{\frac1p} \\ & + \Big(\int_{{\bf R}} \int_{\R \times \R} \frac{|f(X+Y,t) -2 f(X,t) + f(X-Y,t)|^p}{|Y|^{n + p\al}}dYdX dt\Big)^{\frac1p}\nonumber \end{align} if $1 \leq p < \infty$, and to \begin{align}\label{besovnorm2} \|f\|_{L^\infty} & + \sup_{X ,t,s, t \neq s} \frac{|f(X,t) - f(X,s)|}{|t-s|^{ \frac12 \al}} \\ & + \sup_{t, X,Y, Y \neq 0 } \frac{|f(X+Y,t) -2 f(X,t)+f(X-Y,t)|}{|Y|^{\al}}\nonumber \end{align} if $ p = \infty$. \item[(4)] The operator $S_\te: {\mathcal L}^p_{\al} \ri {\mathcal L}^p_{\al +\te}, \,\, S_\te f = H_\te * f$ is an isomorphism for all $\al, \, \te \in {\bf R}$ and $1 \leq p \leq \infty$. \item[(5)] ${\mathcal S}({\bf R}^{n+1})$ is a dense subset of ${\mathcal L}^p_\al({\bf R}^{n+1})$ for all $\al \in {\bf R}$ and $1 \leq p \leq \infty$. \item[(6)] ${\mathcal L}^p_{\al_1} ({\bf R}^{n+1}) \subset {\mathcal L}^p_{\al_2} ({\bf R}^{n+1}) $ for $\al_2 < \al_1$.
\end{itemize} \end{prop} For the details of the proof of Proposition \ref{prop2} we refer to \cite{BL} for $(2)$ (in particular Definition 6.2.2, Theorem 6.2.4 and Theorem 6.4.5 in \cite{BL}), and to \cite{DT2} (Theorem 3) for $(3)$. It is not difficult to derive (4)--(6) (see \cite{BL}). For later use, we define $L^p(\R)$-multipliers (respectively, $L^p({\bf R}^{n+1})$-multipliers) as follows. \begin{defin} We say that $\mu \in {\mathcal S}' (\R)$ is an $L^p(\R)$-multiplier if \begin{align}\label{multiplier norm} \| {\mathcal F}^{-1} (\mu \hat f) \|_{L^p (\R)} \leq M \| f\|_{L^p (\R)} \end{align} for all $f \in {\mathcal S}(\R)$, where ${\mathcal F}^{-1} (f)$ is the inverse Fourier transform of $f$. We call the minimal constant $M$ satisfying \eqref{multiplier norm} the $L^p$-multiplier norm of $\mu$. \end{defin} Similarly, we define $L^p({\bf R}^{n+1})$-multipliers. We recall the Marcinkiewicz multiplier theorem (see Theorem $4.6^{'}$ in \cite{S}). \begin{prop}\label{Marcinkiwitz} Let $\mu$ be a bounded function on ${\bf R}^n \setminus \{ 0\}$. Suppose also that \begin{itemize} \item[(a)] $|\mu( \xi)| \leq B$,\\ \item[(b)] for each $0 < k \leq n$, \begin{align*} \sup_{\xi_{k+1}, \cdots, \xi_{n}} \int_{\rho} |\frac{\pa^k \mu}{\pa \xi_1 \pa \xi_2 \cdots \pa \xi_k}| d\xi_1 \cdots d\xi_k \leq B \end{align*} as $\rho$ ranges over dyadic rectangles of ${\bf R}^k$ (if $k =n$, the ``$\sup$'' sign is omitted), \item[(c)] the condition analogous to (b) is valid for every one of the $n ! $ permutations of the variables $\xi_1, \, \xi_2, \, \cdots, \xi_n$. \end{itemize} Then $\mu$ is an $L^p$-multiplier for $1 < p<\infty$, and the multiplier norm depends only on $B, \, p$ and $n$. \end{prop} We denote by $D^i_{X_k}, \, i \in {\bf N} \cup \{ 0\} $, the $i$-th derivative with respect to $X_k$. When $i =1$, we write $D^1_{X_k} = D_{X_k}$. We also write $D^{\be}_{X} = D_{X_1}^{\be_1} \cdots D_{X_n}^{\be_n}$ for $\be \in ({\bf N} \cup \{0\})^n$.
We denote by $D^\frac12_t $ the pseudo-differential operator whose Fourier transform is defined by $\widehat{D^\frac12_t f}(\tau) = |\tau|^{\frac12} \hat{f} (\tau)$ for a complex-valued function $f$. It is well-known that \begin{align}\label{half} D^{\frac12}_t f(t) = c \int_{{\bf R}} \frac{f(t) - f(s)}{|t-s|^{\frac32}} ds \end{align} for a complex-valued function $f$. For a non-negative integer $i$, we denote by $ D^{ i}_t f$ the $i$-th derivative of $f$ in $t$ and set $ D^{ i + \frac12}_t f = D^\frac12_t D^{i}_t f$. Note that $ D_tf = HD^\frac12_t D^\frac12_t f$, where $H$ is the Hilbert transform in $t$. \section{The properties of parabolic Sobolev and parabolic Besov spaces} \setcounter{equation}{0} \label{sec1-2} In this section, we study the properties of parabolic Sobolev and parabolic Besov spaces. \begin{theo}\label{iterates} Let $ 1 < p < \infty$ and $\al \in {\bf R}$. Then $f \in {\mathcal L}^p_\al ({\bf R}^{n+1})$ if and only if $f, D_{X_k} f, D_t^{\frac12} f \in {\mathcal L}^p_{\al -1} ({\bf R}^{n+1})$ for all $1 \leq k \leq n$. Furthermore, \begin{eqnarray}\label{Iterates} \|f\|_{ {\mathcal L}^p_\al}\approx \| f\|_{{\mathcal L}^p_{\al -1}} + \sum_{ 1 \leq k \leq n}\|D_{X_k} f\|_{{\mathcal L}^p_{\al-1}} + \|D_t^{\frac12} f\|_{{\mathcal L}^p_{\al-1}}. \end{eqnarray} \end{theo} \begin{proof}\ First, we assume $\al =1$. Suppose $f \in {\mathcal L}^p_{1} ({\bf R}^{n+1})$ so that $f = H_{1} * g$ for some $g \in L^p({\bf R}^{n+1})$. Then, for $1 \leq k \leq n$, we have \begin{align}\label{0620} \begin{array}{ll}\vspace{2mm} \widehat{D_{X_k} f } = \frac{-2\pi \xi_k}{(1 +4 \pi^2|\xi|^2 + i \tau)^{\frac12}} \hat g, \quad \widehat{D^{\frac12}_t f} = \frac{|\tau|^{\frac12}}{(1 + 4 \pi^2|\xi|^2 + i \tau)^{\frac12}} \hat{g}.
\end{array} \end{align} Applying Proposition \ref{Marcinkiwitz}, we have that $ \mu_{1k} (\xi, \tau) = \frac{-2\pi \xi_k}{(1 + 4 \pi^2|\xi|^2 + i \tau)^{\frac12}} , \,\, \mu_2(\xi, \tau)=\frac{|\tau|^{\frac12}}{(1 + 4 \pi^2 |\xi|^2 + i \tau)^{\frac12}} $ are $L^p({\bf R}^{n+1})$-multipliers for $1 < p < \infty$. Then, from \eqref{0620}, we get \begin{align*} \|D_{X_k} f\|_{L^p} &= \|{\mathcal F}^{-1}( \mu_{1k}(\xi, \tau) \hat g )\|_{L^p} \lesssim \|g\|_{L^p} = \|f\|_{{\mathcal L}^p_1} \quad 1 \leq k \leq n,\\ \|D_t^{\frac12} f\|_{ L^p}& = \| {\mathcal F}^{-1}( \mu_2(\xi, \tau) \hat g ) \|_{L^p} \lesssim \|g\|_{L^p} = \|f\|_{{\mathcal L}^p_1}. \end{align*} From (6) in Proposition \ref{prop2}, we obtain $\| f\|_{L^p} \lesssim\| f\|_{{\mathcal L}^p_1}$. This proves one direction of Theorem \ref{iterates}. Now, we prove the converse inequality. Suppose $ f, \,\, D^\frac12_t f , \,\, D_{X_k} f\in L^p({\bf R}^{n+1}), \,\, 1 \leq k \leq n$. We claim that $f = H_1 * g$ for some $g \in L^p({\bf R}^{n+1})$ satisfying \begin{align}\label{0627} \| g\|_{L^p} \lesssim \big( \| f\|_{L^p} + \sum_{1\leq k\leq n} \|D_{X_k} f\|_{L^p} + \| D^\frac12_t f\|_{L^p} \big). \end{align} Once this is shown, $f = H_1 * g\in {\mathcal L}^p_1 ({\bf R}^{n+1}) $ with $\| f\|_{{\mathcal L}^p_1}\lesssim\big( \| f\|_{L^p} + \sum_{1\leq k\leq n} \|D_{X_k} f\|_{L^p} + \| D^\frac12_t f\|_{L^p} \big)$, and this will complete the proof of Theorem \ref{iterates}. To prove the claim, let $R_k, \,\, 1 \leq k \leq n$, be the Riesz transforms in $\R$. Then, we have \begin{align*} {\mathcal F}^{-1}(( 1 + |\xi| + |\tau|^{\frac12 }) \hat f) = f + \sum_{1 \leq k \leq n} R_k \frac{\pa f}{\pa X_k} + D^{\frac12}_t f \in L^p ({\bf R}^{n+1}). \end{align*} Set $\hat K (\xi,\tau) = \frac{(1 + 4 \pi^2|\xi|^2 + i\tau)^{\frac12}}{1 + |\xi| + |\tau|^{\frac12}}$ and $g = K *\Big(f + \sum_{1 \leq k \leq n} R_k \frac{\pa f}{\pa X_k} + D^{\frac12}_t f \Big) $.
Applying Proposition \ref{Marcinkiwitz}, we have that $ \hat K (\xi,\tau)$ is an $L^p({\bf R}^{n+1})$-multiplier. Hence we have $g \in L^p({\bf R}^{n+1})$ and \eqref{0627} holds, so \eqref{Iterates} holds for $\al =1$. For general $\al \in {\bf R}$, by (4) in Proposition \ref{prop2}, we have that $S_{\al -1}:{\mathcal L}^p_1 \ri {\mathcal L}^p_\al$ and $S_{\al-1}: L^p \ri {\mathcal L}^p_{\al-1}$ are isomorphisms with inverse $ S^{-1}_{\al -1} = S_{-\al +1}$. Note that $D_{X_k} S_{-\al +1} f = S_{-\al +1} D_{X_k} f $ and $ D^\frac12_tS_{-\al +1} f = S_{-\al +1} D^\frac12_t f$. Hence, we get \begin{align*} f \in {\mathcal L}^p_\al & \Leftrightarrow S^{-1}_{\al -1 } f = S_{-\al +1} f \in {\mathcal L}^p_1\\ & \Leftrightarrow S_{-\al +1} f, \,\, D_{X_k} S_{-\al +1} f (= S_{-\al +1} D_{X_k} f), \,\, D^\frac12_tS_{-\al +1} f (= S_{-\al +1} D^\frac12_t f) \in L^p\\ & \Leftrightarrow f, \,\, D_{X_k} f, \,\, D^\frac12_t f \in {\mathcal L}^p_{\al-1}. \end{align*} This completes the proof of \eqref{Iterates}. \end{proof} \begin{coro}\label{iterates2} Let $ 1 < p < \infty$ and $\al \in {\bf R}$. Then $f \in {\mathcal L}^p_\al ({\bf R}^{n+1})$ if and only if $f, \, D_{X_k} D_{X_l} f, \,D_t f \in {\mathcal L}^p_{\al -2} ({\bf R}^{n+1})$ for all $1 \leq k,l \leq n$. Furthermore, \begin{align} \begin{array}{ll}\vspace{2mm} \|f\|_{ {\mathcal L}^p_\al} \approx \| f \|_{{\mathcal L}^p_{\al -2}} + \sum_{1 \leq k,l\leq n}\|D_{X_k} D_{X_l} f\|_{{\mathcal L}^p_{\al-2}} + \|D_t f\|_{{\mathcal L}^p_{\al-2}}. \end{array} \end{align} \end{coro} \begin{proof} As in the proof of Theorem \ref{iterates}, it suffices to show the corollary when $\al =2$. Suppose $f \in {\mathcal L}^p_2({\bf R}^{n+1})$.
Since ${\mathcal L}_0^p({\bf R}^{n+1})= L^p({\bf R}^{n+1})$, applying Theorem \ref{iterates} twice, we have \begin{align}\label{equiv3-1} \begin{array}{ll}\vspace{2mm} & \| f\|_{ {\mathcal L}^p_2} \approx \| f \|_{ L^p } + \sum_{1 \leq k\leq n}\|D_{X_k } f\|_{L^p } + \|D_t^\frac12 f\|_{L^p } + \sum_{1 \leq k,l\leq n}\| D_{X_k} D_{X_l } f\|_{ L^p } \\ & \hspace{30mm} + \sum_{1 \leq k\leq n}\|D_t^\frac12 D_{X_k} f\|_{ L^p } + \|D_t f\|_{ L^p }. \end{array} \end{align} Hence, if $f \in {\mathcal L}^p_2({\bf R}^{n+1})$, then we have \begin{align*} \| f \|_{L^p } + \sum_{1 \leq k,l\leq n}\|D_{X_k} D_{X_l} f\|_{L^p } + \|D_t f\|_{L^p } \lesssim \|f\|_{ {\mathcal L}^p_2}. \end{align*} Conversely, suppose that $ f, \,\, D_{X_k} D_{X_l} f, \, D_t f \in L^p ({\bf R}^{n+1})$. Applying Proposition \ref{Marcinkiwitz}, we have that $\nu_1(\xi, \tau)=\frac{|\tau|^\frac12}{1 + 4\pi^2|\xi|^2 + i\tau}, \,\, \nu_{2,k}(\xi, \tau)=\frac{2\pi i \xi_k}{1 + 4\pi^2|\xi|^2 + i\tau}, \,\, \nu_{3,k}(\xi, \tau) =\frac{2\pi i \xi_k |\tau|^\frac12}{1 + 4\pi^2|\xi|^2 + i\tau}$ are $L^p({\bf R}^{n+1})$-multipliers for $ 1 < p < \infty $. Then, we have \begin{align} \label{0628} \begin{array}{ll}\vspace{2mm} &\widehat{D_t^\frac12 f} = \nu_1(\xi, \tau) (1 + 4\pi^2|\xi|^2 + i\tau) \hat f, \,\, \widehat{D_{X_k} f} = \nu_{2,k}(\xi, \tau) (1 + 4\pi^2|\xi|^2 + i\tau) \hat f,\\ & \hspace{30mm} \widehat{D^\frac12_t D_{X_k} f} = \nu_{3,k}(\xi, \tau) (1 + 4\pi^2|\xi|^2 + i\tau) \hat f. \end{array} \end{align} Note that $ {\mathcal F}^{-1}((1 + 4\pi^2|\xi|^2 + i\tau) \hat f ) = f - \sum_{1 \leq k \leq n} D^2_{X_k} f + D_t f $. Hence, from \eqref{0628}, we have \begin{align}\label{0620-2} \| D^\frac12_{t} f\|_{ L^p} + \| D_{X_k} f\|_{ L^p} + \| D^\frac12_{t} D_{X_k}f\|_{ L^p} \lesssim \big( \| f\|_{ L^p} + \| D_{t} f\|_{ L^p} + \sum_{1 \leq k, l \leq n}\| D_{X_k} D_{ X_l }f\|_{ L^p}\big).
\end{align} With \eqref{equiv3-1}, \eqref{0620-2} and the assumption, this implies \begin{align*} \|f\|_{ {\mathcal L}^p_2} \lesssim\big( \| f \|_{L^p} + \sum_{1 \leq k,l\leq n}\|D_{X_k} D_{X_l} f\|_{ L^p} + \|D_t f\|_{ L^p}\big). \end{align*} This completes the proof of Corollary \ref{iterates2}. \end{proof} Now, we define the parabolic Sobolev spaces $ \tilde W^{\al, \frac12 \al}_p({\bf R}^{n+1})$ and $W^{2\al, \al}_p({\bf R}^{n+1})$ for a positive integer $\al$ and $1 \leq p \leq \infty$ by \begin{align*} \tilde W^{\al, \frac12 \al}_p({\bf R}^{n+1}) :& = \{ f \in L^p({\bf R}^{n+1}) \, | \, \,\, D_X^\be D^{\frac{l}2}_t f \in L^p({\bf R}^{n+1}), \quad |\be| + l \leq \al \,\, \},\\ W^{2\al, \al}_p({\bf R}^{n+1}) : &= \{ f \in L^p({\bf R}^{n+1}) \, | \, \,\, D_X^\be D^l_t f \in L^p({\bf R}^{n+1}), \quad |\be| + 2 l \leq 2\al \,\, \} \end{align*} with norms \begin{align*} \|f\|_{\tilde W^{\al, \frac12 \al}_p }: = \sum_{|\be| + l \leq \al} \| D^\be_X D^{\frac12 l}_t f\|_{L^p}, \quad \|f\|_{W^{ 2\al, \al}_p }: = \sum_{|\be| + 2l \leq 2\al} \| D^\be_X D^{ l}_t f\|_{L^p}. \end{align*} \begin{rem} \begin{itemize} \item[(1)] From Theorem \ref{iterates} and Corollary \ref{iterates2}, if $\al$ is a non-negative integer and $1 < p < \infty$, then we have \begin{align} {\mathcal L}^p_\al ({\bf R}^{n+1}) = \tilde W^{\al, \frac12 \al}_p ({\bf R}^{n+1}), \quad {\mathcal L}^p_{2\al} ({\bf R}^{n+1}) = \tilde W^{2\al, \al}_p ({\bf R}^{n+1}) = W^{2\al, \al}_p ({\bf R}^{n+1}) \end{align} with equivalent norms. \item[(2)] When $p = 1$ or $p= \infty$, the spaces ${\mathcal L}^p_\al ({\bf R}^{n+1})$ and $\tilde W^{\al, \frac12 \al}_p ({\bf R}^{n+1}) $ are different, and the spaces ${\mathcal L}^p_{2\al} ({\bf R}^{n+1})$, $ \tilde W^{2\al, \al}_p ({\bf R}^{n+1}) $ and $ W^{2\al, \al}_p ({\bf R}^{n+1})$ are mutually different. \end{itemize} \end{rem} Next, we study the properties of parabolic Besov spaces.
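Before doing so, we record the PDE meaning of Corollary \ref{iterates2} (an illustrative restatement, not used later): since $H_2$ has Fourier symbol $(1 + 4\pi^2|\xi|^2 + i\tau)^{-1}$, membership $f \in {\mathcal L}^p_2$ is equivalent to $(1 - \De + D_t) f \in L^p$, where $\De = \sum_{1 \leq k \leq n} D^2_{X_k}$, and the corollary encodes the parabolic a priori estimate

```latex
% Illustrative restatement for 1 < p < \infty: with g := (1 - \Delta + D_t) f,
\| f \|_{L^p} + \| D_t f \|_{L^p}
  + \sum_{1 \leq k, l \leq n} \| D_{X_k} D_{X_l} f \|_{L^p}
\;\lesssim\; \| g \|_{L^p},
% i.e. the heat operator 1 - \Delta + D_t gains two X-derivatives
% (equivalently, one t-derivative) in L^p.
```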
\begin{theo}\label{iterateb} Let $ 1 \leq p \leq \infty$ and $\al \in {\bf R}$. Then $f \in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n+1})$ if and only if $f, D_{X_k} f, D_t^{\frac12} f \in {\mathcal B}_p^{\al -1, \frac12 \al -\frac12} ({\bf R}^{n+1})$ for all $1 \leq k \leq n$. Furthermore, \begin{eqnarray}\label{iteratedb} \|f\|_{ {\mathcal B}^{\al, \frac12 \al}_p} \approx \|f\|_{{\mathcal B}^{\al-1, \frac12 \al -\frac12}_p} + \sum_{1 \leq k \leq n}\|D_{X_k} f\|_{{\mathcal B}^{\al-1, \frac12 \al -\frac12}_p} + \|D^{\frac12}_tf\|_{{\mathcal B}^{\al-1, \frac12 \al -\frac12}_p}. \end{eqnarray} \end{theo} \begin{proof} If $1 < p <\infty$, then \eqref{iteratedb} holds by Theorem \ref{iterates} and the properties of interpolation spaces (see (2) of Proposition \ref{prop2}). Hence we only have to consider the critical cases $p=1$ and $p=\infty$. Since the proofs are exactly the same, we only prove the case $p =1$. Suppose that $ f \in {\mathcal B}^{\al, \frac12 \al }_1({\bf R}^{n+1})$. Then by the definition of the parabolic Besov space, we have \begin{eqnarray*} \|f\|_{{\mathcal B}^{\al, \frac12 \al}_1} = \|f * \psi\|_{L^1} + \sum_{1 \leq i < \infty} 2^{\al i} \|f * \phi_i\|_{L^1} < \infty. \end{eqnarray*} Note that by the construction of $ \psi$ and $\phi_i$ in Section \ref{sec1}, we have $\hat \psi + \hat \phi_1 + \hat \phi_2=1$ in $supp \, (\hat \psi + \hat \phi_1)$ and $\hat \phi_{i-1} + \hat \phi_i + \hat \phi_{i+1}=1$ in $supp \, \hat \phi_i$ for $i \geq 2$. Hence, using $D_{X_k} (f*g) = (D_{X_k} f)* g = f * ( D_{X_k} g)$, we have \begin{align*} (D_{X_k} f )* \psi &= f * \psi * D_{X_k}( \psi + \phi_1 + \phi_2),\\ (D_{X_k} f) * \phi_1 & = f * \phi_1 * D_{X_k}( \psi + \phi_1 + \phi_2),\\ (D_{X_k} f )* \phi_i & = f * \phi_i * D_{X_k}( \phi_{i-1} + \phi_i + \phi_{i+1}),\quad i \geq 2. \end{align*} Note that $\|D_{X_k} \psi\|_{L^1} \lesssim 1$ and $ \|D_{X_k} \phi_i \|_{L^1} \lesssim 2^i$.
Hence, by Young's inequality, we have \begin{align*} \| (D_{X_k} f )* \psi \|_{L^1} & \leq \| f * \psi \|_{L^1} \| D_{X_k}( \psi + \phi_1 + \phi_2) \|_{L^1} \lesssim\| f * \psi \|_{L^1},\\ \| (D_{X_k} f )* \phi_1 \|_{L^1} & \leq \| f * \phi_1 \|_{L^1} \| D_{X_k}( \psi + \phi_1 + \phi_2) \|_{L^1} \lesssim\| f * \phi_1\|_{L^1},\\ \| (D_{X_k} f )* \phi_i \|_{L^1} & \leq \| f * \phi_i\|_{L^1} \| D_{X_k}( \phi_{i-1} + \phi_{i} + \phi_{i+1}) \|_{L^1} \lesssim 2^i\| f * \phi_i \|_{L^1}, \quad i \geq 2. \end{align*} Hence, we have \begin{align*} \|D_{X_k} f\|_{{\mathcal B}^{\al -1,\frac12 \al -\frac12}_1} &= \|D_{X_k} f * \psi\|_{L^1} + \sum_{1 \leq i < \infty} 2^{(\al-1) i} \|D_{X_k} f * \phi_i\|_{L^1}\\ & \lesssim \big( \| f * \psi\|_{L^1} + \sum_{1 \leq i < \infty} 2^{\al i} \| f * \phi_i\|_{L^1} \big)\\ & = \|f\|_{{\mathcal B}^{\al, \frac12 \al}_1}. \end{align*} Similarly, we obtain \begin{align*} (D^{\frac12}_t f) * \psi &= f * \psi * D^\frac12_t( \psi + \phi_1 + \phi_2),\\ (D^{\frac12}_t f) * \phi_1 & = f * \phi_1 * D^\frac12_t( \psi + \phi_1 + \phi_2),\\ (D^{\frac12}_t f) * \phi_i & = f * \phi_i * D^\frac12_t( \phi_{i-1} + \phi_i + \phi_{i+1}), \quad i \geq 2. \end{align*} Note that using \eqref{half} and a change of variables, we have \begin{eqnarray}\label{0618} \begin{array}{ll} \|D^\frac12_t \phi_i\|_{L^1} &= c\int_{{\bf R}^{n+1}}| \int_{{\bf R}} \frac{\phi_i(X,t) - \phi_i (X,s)}{|t-s|^{\frac32}}ds|dXdt\\ & \lesssim2^i \int_{{\bf R}^{n+1}} \int_{{\bf R}} \frac{|\phi (X,t) - \phi (X,s)|}{|t-s|^{\frac32}}dsdXdt \\ & \lesssim2^i\|\phi\|_{{\mathcal B}^{1,\frac12}_1 ({\bf R}^{n+1})},\\ \|D^{\frac12}_t \psi \|_{L^1} &\lesssim\| \psi\|_{{\mathcal B}^{1,\frac12}_1 ({\bf R}^{n+1})}. \end{array} \end{eqnarray} By the same reasoning as for $D_{X_k} f$, using Young's inequality, we have $\|D^\frac12_t f\|_{{\mathcal B}^{\al-1,\frac12 \al -\frac12}_1} \lesssim\| f\|_{{\mathcal B}^{\al, \frac12 \al}_1}$. Hence, we proved one side of \eqref{iteratedb}.
Conversely, we suppose that $\|f\|_{{\mathcal B}^{\al -1,\frac12 \al -\frac12}_1} , \|D_{X_k}f\|_{{\mathcal B}^{\al -1,\frac12 \al -\frac12}_1},\|D^\frac12_t f\|_{{\mathcal B}^{\al -1,\frac12 \al -\frac12}_1} < \infty.$ Since $\hat \phi$ is supported in $\{(\xi,\tau) \in {\bf R}^{n+1} \, | \, 2^{-1} < |\xi| + |\tau|^\frac12 < 2 \}$, we have that $\frac{1}{(- 4\pi^2|\xi|^2 +i\tau)} \hat \phi (\xi, \tau) \in {\mathcal S}({\bf R}^{n+1})$. We define $\Phi$ and $\Phi_i$ as the functions whose Fourier transforms are given by $ \hat \Phi(\xi, \tau)=\frac{1}{ -4\pi^2|\xi|^2 +i\tau} \hat \phi (\xi, \tau) $ and $\hat \Phi_i(\xi, \tau) = \hat \Phi (2^{-i}\xi, 2^{-2i} \tau)$. Then, for $i \geq 2$, we have \begin{align}\label{0628-2} \begin{array}{ll} \vspace{2mm} \widehat{ f* \phi_i} & =\hat f \hat \phi_i ( \hat \phi_{i-1} +\hat \phi_i + \hat \phi_{i+1})\\ \vspace{2mm} &= \hat f \hat \phi_i \frac{-4\pi^2|\xi|^2 + i\tau}{ -4\pi^2|\xi|^2 + i \tau} \big(\hat \phi_{i-1} +\hat \phi_i + \hat \phi_{i+1} \big) \\ \vspace{2mm} &= 2^{-2i} \sum_{1 \leq k \leq n} \widehat {D_{X_k}f} \hat \phi_i \big( \widehat{D_{X_k}\Phi_{i-1}} + \widehat {D_{X_k} \Phi_i} +\widehat{D_{X_k}\Phi_{i+1}} \big)\\ & \quad + 2^{-2i}\widehat{D^\frac12_t f} \hat \phi_i \big( \widehat{HD^\frac12_t \Phi_{i-1}} + \widehat{HD^\frac12_t \Phi_i} + \widehat{HD^\frac12_t \Phi_{i+1}} \big), \end{array} \end{align} where $H$ is the Hilbert transform. We used the fact that $D_t \Phi = H D^{\frac12}_t D^{\frac12}_t \Phi$. Note that $\|D_{X_k}\Phi_i\|_{L^1} \lesssim2^i$. Moreover, \begin{align*} H D^\frac12_t \Phi_i(X,t) &= \lim_{\ep \ri 0} \int_{\ep< |t-s| < \frac{1}{\ep}} \frac{sign(t-s)}{|t-s|^{\frac32}}\Phi_i(X,s)ds\\ &= \lim_{\ep \ri 0} \int_{\ep< |t-s| < \frac{1}{\ep}} \frac{sign(t-s)}{|t-s|^{\frac32}}(\Phi_i(X,s) - \Phi_i (X,t))ds, \end{align*} where $sign(t) =1$ if $t >0$ and $sign(t) =-1$ if $t < 0$.
Hence, using a change of variables (see \eqref{0618}), we get \begin{eqnarray*} \|HD^\frac12_t\Phi_i\|_{L^1} \lesssim\int_{{\bf R}^{n+1}} \int_{{\bf R}} \frac{ |\Phi_i(X,s) - \Phi_i (X,t)|}{|t-s|^{\frac32}}dsdXdt \lesssim2^i \|\Phi\|_{{\mathcal B}^{1,\frac12}_1({\bf R}^{n+1})}. \end{eqnarray*} Hence, applying Young's inequality in \eqref{0628-2}, we have \begin{align}\label{Sovk} \| f* \phi_i\|_{L^1} \lesssim 2^{-i} (\sum_{1 \leq k \leq n} \|D_{X_k}f * \phi_i\|_{L^1} + \|D^{\frac12}_t f * \phi_i \|_{L^1}), \quad i \geq 2. \end{align} Hence by (\ref{Sovk}), we have \begin{align*} \|f\|_{{\mathcal B}^{\al, \frac12 \al}_1} &= \|f * \psi\|_{L^1} + \sum_{1 \leq i < \infty} 2^{\al i} \|f * \phi_i\|_{L^1} \\ & \lesssim\Big( \|f * \psi\|_{L^1} + \| f* \phi_1\|_{L^1} + \sum_{2 \leq i < \infty} 2^{ (\al -1) i} \big( \sum_{1 \leq k \leq n} \|D_{X_k}f * \phi_i\|_{L^1} + \|D^{\frac12}_t f * \phi_i \|_{L^1} \big) \Big)\\ & \lesssim\Big(\|f\|_{{\mathcal B}^{\al -1,\frac12 \al -\frac12}_1} + \sum_{1 \leq k \leq n}\|D_{X_k}f\|_{{\mathcal B}^{\al -1,\frac12 \al -\frac12}_1} + \|D^\frac12_tf\|_{{\mathcal B}^{\al -1,\frac12 \al -\frac12}_1}\Big). \end{align*} This completes the proof of Theorem \ref{iterateb}. \end{proof} By (\ref{besovnorm1}), (\ref{besovnorm2}) and Theorem \ref{iterateb}, we get the following corollary. \begin{coro}\label{iterateb2} \begin{itemize} \item[(1)] Let $ 1 \leq p \leq \infty$ and let $\al \in {\bf R}$ satisfy $2i < \al < 2i+2$ for a non-negative integer $i$. Then $f \in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n+1})$ if and only if $f, \, D^\be_{X} f, D^i_t f \in {\mathcal B}_p^{\al -2i,\frac12 \al -i} ({\bf R}^{n+1})$ for all $|\be| = 2i$. Furthermore, \begin{align*} \|f\|_{ {\mathcal B}^{\al , \frac12 \al}_p} \approx & \|f\|_{{\mathcal B}^{\al -2i,\frac12 \al -i}_p} + \sum_{ |\be| =2 i }\|D^\be_{X} f\|_{{\mathcal B}^{\al -2i,\frac12 \al -i}_p} + \|D^i_t f \|_{{\mathcal B}^{\al -2i,\frac12 \al -i}_p}.
\end{align*} \item[(2)] In particular, for $1 \leq p < \infty$, we have \begin{align*} &\|f\|^p_{ {\mathcal B}^{\al, \frac12 \al}_p} \approx \|f\|^p_{W^{2i, i }_p} + \int_{\R}\!\! \int_{{\bf R} \times {\bf R}} \frac{|D^i_t f(X,t) - D^i_t f(X,s)|^p}{|t-s|^{ 1 + p\frac12(\al -2i )}}dtdsdX \\ & \qquad + \sum_{ |\be| =2i} \int_{{\bf R}} \int_{\R \times \R } \frac{|D^\be_{X} f(X+Y,t) -2 D^\be_{X} f(X,t) + D^\be_{X} f(X-Y,t)|^p}{|Y|^{ n + p(\al -2i )}}dXdYdt \end{align*} and \begin{align*} &\|f\|_{{\mathcal B}^{\al, \frac12 \al}_\infty} \approx \|f\|_{ W^{2 i, i }_\infty} +\sup_{X ,t,s, t \neq s} \frac{| D^i_tf(X,t) - D^i_tf(X,s)|}{|t-s|^{ \frac12 (\al -2i )}} \\ & \hspace{10mm} + \sum_{|\be| =2 i } \sup_{t, X,Y,Y \neq 0 } \frac{|D^\be_{ X} f(X+Y,t) -2D^\be_{X} f(X,t)+D^\be_{X} f(X-Y,t)|}{|Y|^{\al -2 i }}. \end{align*} \end{itemize} \end{coro} \begin{proof} Applying Theorem \ref{iterateb} repeatedly ($2i$ times), we obtain one side of (1). To prove the converse direction of (1), we replace $\frac{1}{ - 4\pi^2|\xi|^2 +i\tau} \hat \phi (\xi, \tau) $ by $\frac{1}{( - 4\pi^2|\xi|^2)^i + (i\tau)^i} \hat \phi (\xi, \tau) $ in \eqref{0628-2} and repeat the proof of Theorem \ref{iterateb}. (2) holds because of (1) and (3) of Proposition \ref{prop2}.
\end{proof} \section{Parabolic Sobolev and parabolic Besov spaces in ${\bf R}^{n}_T$} \setcounter{equation}{0} \label{sec4} If $i$ is a non-negative integer, we define the parabolic Sobolev space $W_p^{2i,i} ({\bf R}^n_T), \,\, 0 < T \leq \infty$, by \begin{eqnarray*} W_p^{2i,i} ({\bf R}^n_T) = \{ f \, | \, D^{\be}_{X}D_t^l f \in L^p ({\bf R}^n_T), \,\, 0 \leq |\be | + 2l \leq 2i \}, \end{eqnarray*} so that the norm in $W^{2i,i}_p ({\bf R}^n_T)$ is given by \begin{align*} \|f\|_{W^{2i,i}_p({\bf R}^n_T)} &= \Big(\sum_{2l + |\be| \leq 2i} \int \int_{{\bf R}^n_T} |D^{\be}_{X} D_t^l f(X,t) |^p dXdt\Big)^{\frac1p}, \quad 1 \leq p < \infty,\\ \|f\|_{W^{2i,i}_\infty({\bf R}^n_T)} & = \sum_{2l + |\be| \leq 2i} \sup_{(X,t) \in {\bf R}^n_T } |D^{\be}_{X} D_t^l f(X,t)|, \quad p= \infty. \end{align*} Let $ 2i< \al <2i+2$. We now define the parabolic Besov space ${\mathcal B}^{\al,\frac12 \al}_p ({\bf R}^n_T).$ We say that $f \in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^n_T)$ if and only if \begin{align*} & \|f\|^p_{W^{2i,i}_p ({\bf R}^n_T)}+\sum_{|\be| + 2l = 2i}\Big[ \int_{\R}\!\! \int_0^T\int_0^T \frac{|D_{X}^\beta D^l_t f(X,t) - D_{X}^\beta D^l_t f(X,s)|^p}{|t-s|^{ 1 + \frac12p (\al -2i)}}dtdsdX \\ & + \int_0^T \int_{\R \times \R } \frac{|D_{X}^\beta D^l_tf(X+Y,t) -2 D_{X}^{\beta}D^l_tf(X,t) + D_{X}^\beta D^l_t f(X-Y,t)|^p}{|Y|^{ n + p(\al -2i)}}dXdYdt\Big] < \infty \end{align*} if $1 \leq p < \infty$, and \begin{align*} & \|f\|_{W^{2i,i}_\infty ({\bf R}^n_T)}+\sum_{|\be| + 2l = 2i}\Big[ \sup_{X,t,s, t \neq s} \frac{|D_{X}^\beta D^l_t f(X,t) - D_{X}^\beta D^l_t f(X,s)|}{|t-s|^{ \frac12 (\al -2i)}} \\ &+ \sup_{t,X,Y, Y\neq 0} \frac{|D_{X}^\beta D^l_tf(X+Y,t) -2 D_{X}^{\beta}D^l_tf(X,t) + D_{X}^\beta D^l_t f(X-Y,t)| }{|Y|^{\al -2i}} \Big] < \infty \end{align*} if $p = \infty$. \begin{prop}\label{prop3} Let $1 \leq p \leq \infty$.
Suppose that there is a bounded linear operator $ E_{ {\bf R}^n_T }: W^{2i,i}_p ({\bf R}^n_T) \ri {\mathcal L}_{2i}^p ({\bf R}^{n+1})$ for all non-negative integers $i$ and $1 \leq p \leq \infty$ such that $E_{ {\bf R}^n_T} f =f$ in ${\bf R}^n_T$. Then for $0 < \te < 1, \,\, i < l$, we get $ (W^{2i,i}_p ( {\bf R}^n_T), W^{2l,l}_p ({\bf R}^n_T))_{\te, p}={\mathcal B}^{\al,\frac12 \al}_p ({\bf R}^n_T)$, where $\al = (1-\te) 2i + \te 2l$. \end{prop} \begin{proof} Applying Theorem 4.12 and Corollary 4.13 in \cite{BS}, for $i \in {\bf N}$, $1 \leq p \leq \infty$ and $0 < \te <1$, we obtain that $(L^p({\bf R}^{n+1}), W^{2i, i}_p({\bf R}^{n+1}))_{\te, p} = {\mathcal B}^{2i \te, i \te}_p({\bf R}^{n+1}) $. Using Propositions 2.4 and 2.17 in \cite{JK}, we obtain Proposition \ref{prop3}. \end{proof} To apply Proposition \ref{prop3}, we define extension operators from $W^{2i,i}_p ({\bf R}^{n}_\infty)$ to ${\mathcal L}^p_{2i} ({\bf R}^{n+1})$ and from $W^{2i,i}_p ({\bf R}^{n}_T)$ to ${\mathcal L}^p_{2i} ({\bf R}^{n+1})$. For $f \in W^{2i,i}_p ({\bf R}^{n}_\infty)$ we define the extension $E_2 f$ of $f$ by \begin{align}\label{extension} E_2 f(X,t) = \left\{\begin{array}{l} f(X,t), \quad t\geq 0,\\ \sum_{1 \leq j\leq 2i+1} \la_j f (X, -jt), \quad t \leq 0, \end{array} \right. \end{align} where the coefficients $\la_1, \cdots , \la_{2i+1}$ are the unique solution of the $(2i+1) \times (2i+1)$ system of linear equations $$ \sum_{1 \leq j \leq 2i+1} (-j)^l \la_j =1, \quad l = 0, 1, \cdots ,2 i. $$ Then $E_2 f \in W^{2i,i}_p ({\bf R}^{n+1})$ with $ E_2f|_{{\bf R}^{n}_\infty} = f$ and $ \|E_2 f\|_{W^{2i,i}_p ({\bf R}^{n+1})} \leq c \| f\|_{W^{2i,i}_p ({\bf R}^{n}_\infty) }$ (see Theorem 4.26 in \cite{A}). We apply (\ref{extension}) to define the extension operator on $W^{2i,i}_p (\R_T )$. Let $g \in W^{2i,i}_p (\R_T )$.
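For concreteness (this explicit computation is not part of the original argument), in the lowest nontrivial case $i=1$ the system determining the reflection coefficients can be written out and solved by hand:

```latex
% i = 1: the 3x3 system \sum_{j=1}^{3} (-j)^l \lambda_j = 1, l = 0, 1, 2, reads
\lambda_1 + \lambda_2 + \lambda_3 = 1, \qquad
-\lambda_1 - 2\lambda_2 - 3\lambda_3 = 1, \qquad
\lambda_1 + 4\lambda_2 + 9\lambda_3 = 1,
% with unique solution
\lambda_1 = 6, \qquad \lambda_2 = -8, \qquad \lambda_3 = 3.
% Consequently E_2 reproduces every monomial t^l, 0 \le l \le 2, across t = 0:
6(-t)^l - 8(-2t)^l + 3(-3t)^l = t^l \quad (l = 0, 1, 2),
% so the t-derivatives of E_2 f up to order 2i = 2 match at t = 0.
```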
We define an extension $E_3$ by \begin{align*} E_3 g(X,t) =\te (t) \left \{\begin{array}{ll} \sum_{1 \leq j \leq 2i+1} \la_j g (X,-jt) & \quad -T < t <0,\\ g(X,t) & \quad 0 < t <T,\\ \sum_{1 \leq j \leq 2i+1} \la_j g (X,- j (2T-t)) & \quad T < t < 2T \end{array} \right. \end{align*} and $E_3 g (X,t) =0$ otherwise, where $\te \in C^\infty_c ({\bf R})$ is such that $\te \equiv 1 $ in $(0, T)$ and $supp \, \te \subset (-T, 2T)$. Then $E_3 g|_{{\bf R}^{n}_T} = g$ and $\|E_3g \|_{W^{2i,i}_p ({\bf R}^{n+1})} \lesssim\| g\|_{W^{2i,i}_p (\R_T )}$. By Proposition \ref{prop3}, we have the following theorem. \begin{theo}\label{RBesov} For $0 < \al $ and $1 \leq p \leq \infty$, ${\mathcal B}^{\al, \frac12 \al}_p (\R_T)$ is a real interpolation space; that is, $(L^p(\R_T), W_p^{2i,i} (\R_T))_{\te, p} = {\mathcal B}^{2\te i, \te i}_p (\R_T)$, $0 < T \leq \infty$. \end{theo} \begin{theo}\label{iterates3} For $\al \geq 2$ and $1 \leq p \leq \infty$, $f \in {\mathcal B}^{\al,\frac12 \al}_p (\R_T)$ if and only if $ f, \,D_{X_k} f, \, D_{X_k} D_{ X_j } f, \, D_t f \in {\mathcal B}^{\al -2,\frac12 \al -1}_p (\R_T)$, $0 < T \leq \infty$. \end{theo} \begin{proof} Because of the similarity of the proofs, we consider only the case of ${\bf R}^{n}_\infty$. We define an extension operator \begin{align*} E_4 f(X,t) =\left \{ \begin{array}{ll} f(X,t) \quad & t> 0,\\ \sum_{1 \leq j \leq 2i+1} (-j) \la_j f(X,-jt)\quad & t < 0. \end{array} \right. \end{align*} Then $E_4 : W^{2l-2,l-1}_p ({\bf R}^{n}_\infty) \ri W^{2l-2,l-1}_p ({\bf R}^{n+1}), \, 0 \leq l \leq i$, is a bounded operator, and so by Theorem \ref{RBesov}, $E_4 : {\mathcal B}^{\al, \frac12 \al}_p (\R_\infty) \ri {\mathcal B}^{\al,\frac12 \al}_p ({\bf R}^{n+1}), \,\, \al > 0,\,\, 1 \leq p \leq \infty$, is a bounded operator. Note that \begin{align}\label{iterates4} D_{X_k} (E_2 f) = E_2 (D_{X_k} f), \,\, D_{X_i }D_{X_k} ( E_2 f) = E_2(D_{X_i}D_{X_k} f) ,\,\, D_t( E_2 f) = E_4 (D_t f).
\end{align} Let $f \in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n}_\infty)$. Then $E_2 f \in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n+1})$ and by Corollary \ref{iterateb2}, we have $$ E_2 f, \,\, D_{X_k}( E_2 f), \,\, D_{X_i}D_{X_k}( E_2 f), \,\, D_t( E_2 f) \in {\mathcal B}^{\al -2,\frac12 \al -1}_p ({\bf R}^{n+1}). $$ Hence by (\ref{iterates4}), we have \begin{align}\label{down} f, \,\, D_{X_k} f, \, D_{X_i}D_{ X_k} f, \,\, D_t f \in {\mathcal B}^{\al -2,\frac12 \al -1}_p ({\bf R}^{n}_\infty). \end{align} Conversely, suppose that \eqref{down} is true. Then $$ E_2f, \, E_2 D_{X_k} f, \, E_2 D_{X_i}D_{X_k}f, \, E_4 D_t f \in {\mathcal B}^{\al -2,\frac12 \al -1}_p ({\bf R}^{n+1}). $$ By (\ref{iterates4}) and Corollary \ref{iterateb2}, we have $E_2 f \in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n+1})$. Hence $E_2 f|_{\R_\infty} = f \in {\mathcal B}^{\al, \frac12 \al}_p(\R_\infty).$ \end{proof} \begin{rem}\label{remark} Let $\al \geq 1$ and let $ u \in {\mathcal B}^{\al, \frac12 \al}_p ({\bf R}^{n}_T)$, $0 < T\leq \infty$. Combining Theorem \ref{RBesov} and Theorem \ref{iterates3}, we obtain the estimate \begin{align} \| D_X u \|_{{\mathcal B}^{\al-1, \frac12 \al -\frac12}_p ({\bf R}^{n}_T)} \lesssim\| u \|_{{\mathcal B}^{\al,\frac12 \al}_p ({\bf R}^{n}_T)}. \end{align} \end{rem} \section{Proof of Theorem \ref{mainresult} } \setcounter{equation}{0} \label{sec5} In this section, we study the relation between the usual Besov spaces ${\mathcal B}_p^{\al-\frac2p}(\R)$ and the parabolic Besov spaces ${\mathcal B}^{\al,\frac12 \al}_p ({\bf R}^{n}_T)$. \begin{theo}\label{frac3p} Let $0 < T < \infty$ and $ f \in {\mathcal B}_p^{- \frac2p}(\R)$, and let $u$ be defined by (\ref{main4}). Then, for $ 1 \leq p \leq \infty$, we have \begin{align}\label{boundary2} \| u\|_{L^p(\R_T)} \lesssim \| f\|_{{\mathcal B}^{-\frac2p}_p (\R)}. \end{align} \end{theo} (Compare with Section 1.8.1 in \cite{Tr3}.)
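As an informal consistency check of the exponent $-\frac2p$ (this heuristic is not used in the proof), take $f = \delta_0$ with $1 \leq p < \infty$, so that $u(\cdot, t) = \Ga(\cdot, t)$ is the Gauss kernel. Then

```latex
% Left side of \eqref{boundary2}: for the Gauss kernel,
\|\Ga(\cdot,t)\|_{L^p({\bf R}^n)} = c\, t^{-\frac{n}{2}\left(1-\frac1p\right)},
\qquad
\int_0^T \|\Ga(\cdot,t)\|_{L^p}^p \, dt < \infty
\;\Longleftrightarrow\; \frac{n(p-1)}{2} < 1.
% Right side: with the dyadic pieces \phi'_i introduced below,
% \phi'_i * \delta_0 = \phi'_i and \|\phi'_i\|_{L^p({\bf R}^n)} = c\, 2^{n i (1-\frac1p)}, so
\delta_0 \in {\mathcal B}^{-\frac2p}_p({\bf R}^n)
\;\Longleftrightarrow\; \sum_{i \geq 1} 2^{\, i (n(p-1) - 2)} < \infty
\;\Longleftrightarrow\; \frac{n(p-1)}{2} < 1,
% so both sides of \eqref{boundary2} are finite in exactly the same range.
```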
We introduce a function $\phi' \in {\mathcal S} ({\bf R}^{n})$, the Schwartz space in $\R$, such that \begin{eqnarray*} \left\{\begin{array}{ll} \hat{ \phi}'(\xi) > 0, & \mbox{ on } 2^{-1} < |\xi| < 2,\\ \hat{\phi}' (\xi) = 0, &\mbox{ elsewhere}, \end{array} \right. \\ \sum_{-\infty < i < \infty} \hat{\phi}'(2^{-i}\xi) =1 ,& ( \xi \neq 0). \end{eqnarray*} We define functions $\phi'_i, \,\,\psi' \in {\mathcal S}(\R)$ whose Fourier transforms are given by \begin{eqnarray}\label{psi} \begin{array}{ll} \hat{\phi'_i}(\xi) &= \hat{\phi}'(2^{-i} \xi), \quad i = 0, \pm 1, \pm 2 , \cdots,\\ \hat{\psi'}(\xi) & = 1- \sum_{1 \leq i < \infty} \hat{\phi}' (2^{-i} \xi). \end{array} \end{eqnarray} Analogously to the parabolic Besov space, we define the Besov space in $\R$. For $\al \in {\bf R}$ we define the Besov space ${\mathcal B}^{\al}_{pq} ({\bf R}^{n})$ by \begin{eqnarray*} {\mathcal B}^{\al}_{pq} ({\bf R}^{n}) = \{ f \in {\mathcal S}^{'}({\bf R}^{n}) \, | \, \|f\|_{{\mathcal B}^{\al}_{pq}} < \infty \, \} \end{eqnarray*} with the norms \begin{align*} \|f\|_{{\mathcal B}^{\al}_{pq}} :& = \| \psi' * f\|_{L^p} + ( \sum_{ 1 \leq i < \infty} (2^{\al i} \|\phi'_i * f\|_{L^p})^q)^{\frac1q}, \quad 1 \leq q < \infty,\\ \|f\|_{{\mathcal B}^{\al}_{p\infty}} :& = \max (\| \psi' * f\|_{L^p} , \,\, \sup_{1 \leq i < \infty} 2^{\al i} \|\phi'_i * f\|_{L^p}), \end{align*} where $*$ is a convolution in ${\bf R}^{n}$. When $p=q$, we simply denote ${\mathcal B}^{\al}_{pp} $ by ${\mathcal B}^{\al}_p$. \begin{lemm}\label{multiplier} Let $\hat{\Psi}'(\xi) = \hat{\psi}'(\xi) + \hat{\phi}'( 2^{-1}\xi) + \hat{\phi}'( 2^{-2}\xi)$ and $\hat{\Phi}' (\xi) = \hat{\phi}'(2^{-1} \xi) + \hat{\phi}' (\xi) + \hat{\phi}' (2\xi)$. Let $\hat{\Phi}^{'}_i (\xi) = \hat{\Phi}^{'}(2^{-i} \xi), \,\, i \geq 2$, and let $\rho_{ti}(\xi) = \hat{\Phi}^{'}_i ( \xi) e^{-t|\xi|^2}$ for each integer $i\geq 2$.
Then the functions $\rho_{ti}$ are $L^p({\bf R}^{n})$-multipliers with norms $M(t,i)$ for $1 \leq p \leq \infty$. Furthermore, for $t > 0$, \begin{align}\label{multiplier2} M(t,i) & \lesssim e^{-\frac14 t2^{2i}}\sum_{0 \leq l \leq L} t^l 2^{2il} \lesssim e^{-\frac18 t2^{2i}}, \end{align} where $L =[\frac{n}2] +1$. \end{lemm} \begin{proof} Let $t > 0$. The $L^p(\R)$-multiplier norms $M(t,i)$ of $\rho_{ti}(\xi) $ are equal to the $L^p(\R)$-multiplier norms of $\rho_{ti}^{'}(\xi) = \hat{\Phi}^{'}(\xi) e^{-t2^{2i} |\xi|^2}$ (see Theorem 6.1.3 in \cite{BL}). To prove our lemma, we make use of Lemma 6.1.5 in \cite{BL}. Let $\be = ( \be_1, \cdots , \be_n)$, where the $\be_i$ are non-negative integers. Then, we have \begin{align*} |D^\be_{\xi} \rho_{ti}^{'}(\xi)| &\lesssim e^{-\frac14t 2^{2i}} \sum_{0 \leq l \leq |\be|} t^l 2^{2il} \chi_{\frac14 < |\xi| < 4} (\xi), \end{align*} where $\chi$ is a characteristic function. Let $L=[\frac{n}2] + 1$ and $\te = \frac{n}{2L}$. Then by Lemma 6.1.5 in \cite{BL}, the $L^p(\R)$-multiplier norms of $\rho^{'}_{ti}$ are dominated by \begin{align*} \|\rho'_{ti}\|_{L^2(\R)}^{1 - \te} \sup_{|\be|= L} \|D^\be \rho^{'}_{ti} \|_{L^2(\R)}^\te &\lesssim e^{-\frac14t 2^{2i}} \sum_{0 \leq l \leq L} t^l 2^{2il}. \end{align*} This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{frac3p}] Since the case $p = \infty$ is similar, we only treat the case $1 \leq p < \infty$. To prove Theorem \ref{frac3p}, we use $\hat{\psi}' (\xi) + \sum_{1 \leq i < \infty} \hat{\phi}'(2^{-i} \xi) =1$ for all $\xi \in \R.$ Note that \begin{align*} \hat u(\xi,t) = \big( \hat{\Psi}'(\xi) \hat{\psi}' (\xi) + \hat{\Psi}'(\xi) \hat{\phi}' (2^{-1} \xi) \big) e^{-t|\xi |^2} \hat f + \sum_{i=2}^\infty \hat{\Phi}'(2^{-i} \xi) \hat{\phi}' (2^{-i} \xi) e^{-t|\xi |^2} \hat f, \end{align*} where $\hat u$ is the Fourier transform in $\R$.
Hence, we have \begin{align*} \int_0^T \int_{\R} | u(X,t)|^pdXdt & \leq c_p\int_0^T \int_{\R} |{\mathcal F}^{-1} \Big( \big(\hat{\Psi}'(\xi) \hat{\psi}'(\xi) + \hat{\Psi}' (\xi) \hat{\phi}'_1 (\xi) \big) e^{-t|\xi|^2} \hat{f} \Big)|^p dXdt\\ & \quad + c_p\int_0^T \int_{\R} |{\mathcal F}^{-1} \Big(\sum_{2 \leq i < \infty} \hat{\Phi}'_i(\xi) e^{-t|\xi|^2} \hat{\phi}'_i(\xi) \hat{f} \Big)|^p dXdt. \end{align*} Note that by Young's inequality, we have \begin{align}\label{norm} \int_{\R} |\Ga(\cdot, t) *\Psi'| dX \leq \int_{\R} |\Psi' (X)| dX < \infty. \end{align} Applying Young's inequality again, the first term is dominated by \begin{align}\label{negative2} \int_0^T \big(\| f * \psi' \|^p _{L^p (\R)} + \| f * \phi'_1 \|^p _{L^p (\R)} \big) dt. \end{align} Since $\hat{\Phi}^{'}_i( \xi) e^{-t |\xi|^2}$ are $L^p(\R)$-multipliers with norms $M(t,i)$ (see Lemma \ref{multiplier}), we have \begin{align*} & \int_0^T \int_{\R} |{\mathcal F}^{-1} \Big(\sum_{ 2 \leq i < \infty} \hat{\Phi}'_i( \xi) e^{-t|\xi|^2} \hat{\phi}'_i(\xi) \hat{f} \Big)|^p dXdt\\ & \leq \int_0^T \Big(\sum_{t2^{2i} \leq 1} M(t,i) \| f * \phi'_i\|_{L^p} \Big)^pdt\\ & \quad + \int_0^T \Big(\sum_{t2^{2i} \geq 1} M(t,i) \| f * \phi'_i\|_{L^p} \Big)^pdt\\ & = I_1 + I_2. \end{align*} By Lemma \ref{multiplier}, for $t 2^{2i} \leq 1$, we have $ M(t,i) \lesssim 1.$ We take $a \in {\bf R}$ satisfying $-\frac2p < a < 0$ and, using H\"older's inequality, we have \begin{align*} I_1 & \lesssim \int_0^T \Big(\sum_{t2^{2i} \leq 1 }2^{-\frac{p}{p-1}ai} \Big)^{p-1} \sum_{t2^{2i} \leq 1} 2^{pai} \| f * \phi'_i\|^p_{L^p}dt \\ & \lesssim \int_0^T t^{\frac12pa } \sum_{t2^{2i} \leq 1} 2^{pai} \| f * \phi'_i\|^p_{L^p}dt\\ & \lesssim \sum_{1 \leq i < \infty} 2^{pai} \| f * \phi'_i\|^p_{L^p} \int_0^{2^{-2i}} t^{\frac12pa }dt\\ & = c \sum_{1\leq i < \infty} 2^{-2i } \| f * \phi'_i\|^p_{L^p}. \end{align*} Now, we estimate $I_2$.
By Lemma \ref{multiplier}, we have that $ M(t,i) \lesssim(t2^{2i})^{-m} \sum_{0 \leq l \leq L} t^l 2^{2il} \lesssim2^{(2L-2m)i} t^{L-m} $ for $t 2^{2i} \geq 1$ and $m>0$. Let us take $m$ and $b$ satisfying $b >0$ and $\frac{p}2(2L -2m) + \frac12 pb +1 < 0$. Then, we get \begin{align*} &I_2 \lesssim \int_0^T \Big(\sum_{t2^{2i} \geq 1} 2^{(2L-2m)i} t^{L-m} \| f * \phi'_i\|_{L^p} \Big)^pdt \\ & \lesssim \int_0^\infty t^{\frac{p}2(2L -2m) } \Big(\sum_{t2^{2i} \geq 1}2^{-\frac{p}{p-1} bi} \Big)^{p-1} \sum_{t2^{2i} \geq 1} 2^{pbi}2^{p(2L-2m)i} \| f * \phi'_i\|^p_{L^p}dt \\ & \lesssim \int_0^\infty t^{\frac{p}2( 2L-2m ) + \frac12pb} \sum_{t2^{2i} \geq 1} 2^{pbi}2^{p(2L-2m)i} \| f * \phi'_i\|^p_{L^p}dt\\ & \lesssim \sum_{1 \leq i < \infty} 2^{pbi}2^{p(2L-2m)i} \| f * \phi'_i\|^p_{L^p} \int_{2^{-2i}}^\infty t^{\frac{p}2(2L -2m ) + \frac12pb}dt\\ & =c\sum_{1 \leq i < \infty} 2^{-2i } \| f * \phi'_i\|^p_{L^p}. \end{align*} This completes the proof of Theorem \ref{frac3p}. \end{proof} \begin{theo}\label{frac2p} Let $1 \leq p \leq \infty$ and $i$ be a non-negative integer. Let $ f \in {\mathcal B}^{2i - \frac2p}(\R)$ and $u$ be defined by (\ref{main4}). Then, for $ T > 0$, we have \begin{align}\label{boundary2} \| u\|_{W^{2i,i}_p({\bf R}^{n}_T)}\lesssim \| f\|_{{\mathcal B}^{2i-\frac2p}_p (\R)}. \end{align} \end{theo} \begin{proof} From Theorem \ref{frac3p}, (\ref{boundary2}) holds for $i =0$. Let $ i > 0$. We denote $\De= \sum_{1 \leq k\leq n} D^2_{X_k}$ and $ \De^{l+1} = \De \De^l$ for $l \geq 1$. Since $D^{l}_t D_X^\be u (X,t)= \De^lD_X^\be u(X,t) = \langle \De^l D_X^\be f, \Ga (X -\cdot,t)\rangle$ for $|\be| + 2l \leq 2 i$, by Theorem \ref{frac3p}, we have \begin{align*} \|D^l_tD_X^\be u\|_{ L^p({\bf R}^{n}_T)} & \lesssim \| \De^l D_X^\be f\|_{{\mathcal B}_p^{-\frac2p} (\R)} \lesssim \| f \|_{{\mathcal B}_p^{2i-\frac2p} (\R)}.
\end{align*} For the last inequality, we used the well-known fact \begin{align}\label{equal} \|f\|_{{\mathcal B}^\al_p(\R)} \approx \|f \|_{{\mathcal B}^{\al -1}_p (\R)} + \| D_X f\|_{{\mathcal B}^{\al -1}_p (\R)} \end{align} for each $\al \in {\bf R}$ and $1 \leq p \leq \infty$ (see \cite{BL}). This completes the proof of Theorem \ref{frac2p}. \end{proof} In fact, for $ i \geq 1$ and $1 < p < \infty$, Theorem \ref{frac2p} was already known (see \cite{La}). \begin{theo}\label{inequality6} Let $f \in {\mathcal B}^{ -\frac2p}_p(\R)$ and $u$ be defined by (\ref{main4}). Then, for $ 1 \leq p \leq \infty$, \begin{align}\label{eequivalent} \| f\|_{{\mathcal B}^{-\frac2p}_p(\R)} & \lesssim \| u\|_{L^p (\R_T)}. \end{align} \end{theo} \begin{proof} Since the proof of the case $p = \infty$ is similar, we only prove the case $1 \leq p < \infty$. Note that the $L^p(\R)$-multiplier norms of $\hat{\phi}' (2^{-i} \xi) e^{|2^{-i} \xi|^2}$ are equal to the $L^p(\R)$-multiplier norm of $\hat{\phi}'(\xi) e^{|\xi|^2}$, where $\hat{\phi}'$ is defined in \eqref{psi} (see Theorem 6.1.3 in \cite{BL}). Using Lemma 6.1.5 in \cite{BL}, we see that the $L^p(\R)$-multiplier norm of $ \hat{\phi}'(\xi) e^{|\xi|^2}$ is finite. Hence, for $1 \leq p < \infty$, we have \begin{align*} (2^{ -\frac2p i}\| f* \phi'_i\|_{L^p (\R)})^p &= 2^{ -2 i } \int_{\R} | {\mathcal F}^{-1} ( \hat{\phi}'(2^{-i}\xi) e^{2^{-2i}|\xi|^2} e^{-2^{-2i}|\xi|^2} \hat{f})|^p dX \\ & \lesssim2^{ -2 i} \int_{\R} | u (X, 2^{ - 2i}) |^p dX \\ & \lesssim \int_{2^{-2i}}^{2^{-2i+2}} \int_{\R} | u (X, 2^{ - 2i}) |^p dX dt \\ & \lesssim \int_{2^{-2i}}^{2^{-2i+2}} \int_{\R} \big( 2^{i(n+2)} \int_{ J_{2^{-i -1}} (X, 2^{-2i} )} |u(Y,s)| dYds \big)^p dXdt \\ & \lesssim \int_{2^{-2i}}^{2^{-2i+2}} \int_{\R} | u (X, t) |^p dX dt, \end{align*} where $J_r (X,t) = \{ (Y,s) \in {\bf R}^{n+1} \, | \, |X-Y| < r ,\,\, |t-s|^\frac12 < r \}$.
Hence, we have \begin{align*} \sum_{ 1 \leq i <\infty} (2^{-\frac2p i } \| f* \phi'_i\|_{L^p (\R)})^p & \lesssim \sum_{1 \leq i <\infty} \int_{2^{-2i}}^{2^{-2i+2}} \int_{\R} | u (X, t) |^p dX dt\\ & \lesssim \int_0^1 \int_{\R} | u (X, t) |^p dX dt. \end{align*} Similarly, the $L^p(\R)$-multiplier norm of $\hat{\psi}'(\xi) e^{\frac12 |\xi|^2}$ is finite. Hence, we have \begin{align*} \| f * \psi'\|^p_{L^p(\R) } & \lesssim \int_{\R} | {\mathcal F}^{-1} ( \hat{\psi}'(\xi) e^{ \frac12 |\xi|^2} e^{- \frac12|\xi|^2} \hat{f})|^p dX \\ & \lesssim \int_{\R} | u (X, \frac12)|^p dX \\ & \lesssim \int_{\R} \int_{J_{\frac{1}4} (X,\frac12)} |u(Y,s)|^p dYds dX \\ & \lesssim \int_0^1 \int_{\R} |u (Y,s)|^p dYds. \end{align*} Hence, we have proved Theorem \ref{inequality6} when $T =1$. For general $T > 0$, we use scaling. Note that \begin{align*} v(X,t) = u(T^\frac12X,T t) = \int_{\R} \Ga(X-Y, t) f_T(Y) dY, \end{align*} where $f_T(Y) = f(T^\frac12Y)$. Hence, we have \begin{align*} \|f_T\|^p_{ {\mathcal B}^{-\frac2p}_p } \lesssim\int_0^1\int_{\R} |v(X,t) |^p dXdt = c T^{-\frac{n+2}2} \int_0^T\int_{\R} |u(X,t) |^p dXdt. \end{align*} Since $\|f \|_{ {\mathcal B}^{-\frac2p}_p } \lesssim_T \|f_T\|_{ {\mathcal B}^{-\frac2p}_p } $, we obtain Theorem \ref{inequality6} for general $0 < T < \infty$. \end{proof} \begin{theo}\label{inequality5} Let $1 \leq p \leq \infty$ and $i$ be a non-negative integer. Let $f \in {\mathcal B}^{2i-\frac2p}_p(\R)$ and $u$ be defined by (\ref{main4}). Then \begin{align}\label{equivalent} \| f\|_{{\mathcal B}^{2i-\frac2p}_p(\R)} \lesssim \| u \|_{W_p^{2i,i}({\bf R}^{n}_T)}. \end{align} \end{theo} \begin{proof} By Theorem \ref{inequality6}, (\ref{equivalent}) holds for $ i=0$. Let $i > 0$. Notice that for $|\be| \leq 2i$, we have $D_{X}^\beta u (X,t) = c_n \int_{\R} t^{-\frac{n}2} e^{-\frac{|X-Y|^2}{4t}} D_{Y}^\beta f (Y) dY.
$ By \eqref{equal} and \eqref{eequivalent}, we have \begin{align*} \| f \|_{{\mathcal B}^{2i -\frac2p}_p (\R )} \lesssim \sum_{|\be| \leq 2i} \| D_{X}^\beta f\|_{{\mathcal B}^{-\frac2p}_p (\R)} \lesssim \sum_{|\be| \leq 2i} \|D_{X}^\beta u \|_{L^p ({\bf R}^{n}_T)} \lesssim \| u\|_{W_p^{2i,i} ({\bf R}^{n}_T)} . \end{align*} This completes the proof of Theorem \ref{inequality5}. \end{proof} Combining Theorems \ref{frac3p}--\ref{inequality5} with the real interpolation property, we obtain the result of Theorem \ref{mainresult}.
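The scaling step at the end of the proof of Theorem \ref{inequality6} rests on the parabolic scaling of the Gaussian kernel, $\Ga(\lambda X, \lambda^2 t)=\lambda^{-n}\Ga(X,t)$. The following Python snippet is not part of the paper; it is a numerical sanity check of this identity under the standard normalization $\Ga(X,t)=(4\pi t)^{-n/2}e^{-|X|^2/(4t)}$ (our assumption):

```python
import math

def heat_kernel(x, t, n):
    # Gaussian kernel Gamma(x, t) = (4 pi t)^(-n/2) exp(-|x|^2 / (4t)) on R^n
    r2 = sum(xi * xi for xi in x)
    return (4.0 * math.pi * t) ** (-n / 2.0) * math.exp(-r2 / (4.0 * t))

# parabolic scaling: Gamma(lam * x, lam^2 * t) = lam^(-n) * Gamma(x, t)
n, lam = 3, 2.5
x, t = (0.3, -1.2, 0.7), 0.8
lhs = heat_kernel(tuple(lam * xi for xi in x), lam ** 2 * t, n)
rhs = lam ** (-n) * heat_kernel(x, t, n)
print(lhs, rhs)  # the two values agree up to rounding
```

The change of variables $Y=T^{\frac12}Z$, $s=Tt$ in the integral representation of $u$ is exactly this identity in disguise.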
\section{Introduction} It is an important problem to estimate the size of a maximum independent set in a graph, and Hoffman's bound\footnote{ See \cite{Haemers} for the history of the bound, and some related results including Delsarte's LP bound and Lov\'asz' theta bound. } is one of the most useful algebraic tools for the problem. Recently, Filmus, Golubev, and Lifshitz \cite{FGL} extended the bound to hypergraphs. In this paper we apply these bounds to some problems concerning multiply intersecting families with biased measures. We start with the easiest and the most basic result about intersecting families with a biased measure. Let $V$ be a finite set and let $\mathcal A\subset 2^V$ be a family of subsets of $V$. For a fixed real number $p$ with $0<p<1$ we define the $p$-biased measure of the family $\mathcal A$ by \[ \mu_p(\mathcal A) = \sum_{A\in\mathcal A} p^{|A|} (1-p)^{|V|-|A|}. \] By definition it follows that $\mu_p(2^V)=1$. We say that $\mathcal A$ is \emph{intersecting} if $A\cap A'\neq\emptyset$ for all $A,A'\in\mathcal A$. A typical intersecting family is \[ \mathcal S=\{A\in 2^V:v\in A\} \] for some fixed $v\in V$. This family is called a \emph{star} centered at $v$. The star can be rewritten as $\{\{v\}\cup B:B\in 2^{W}\}$ where $W=V\setminus\{v\}$, and it follows that \[ \mu_p(\mathcal S)=\sum_{A\in\mathcal S} p\cdot p^{|A|-1} (1-p)^{|V|-|A|} =p \sum_{B\in 2^{W}} p^{|B|} (1-p)^{|W|-|B|} = p. \] Indeed, it is not difficult to show that if $p\leq \frac12$ and $\mathcal A\subset 2^V$ is intersecting, then $\mu_p(\mathcal A)\leq p$, see e.g., \cite{AK,FFFLO}, or Chapter~12 in \cite{FT2018}. We will extend this result in several ways. To state our problems and results we need some more notation and definitions. Let $n, r$ be positive integers and let $[n]:=\{1,2,\ldots,n\}$. We say that a family of subsets $\mathcal A\subset 2^{[n]}$ is \emph{$r$-wise intersecting} if $A_1\cap A_2\cap\cdots\cap A_r\neq\emptyset$ for all $A_1,A_2,\ldots,A_r\in\mathcal A$.
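The two facts just stated, that a star has $\mu_p$-measure $p$ and that $\mu_p(\mathcal A)\leq p$ for every intersecting family $\mathcal A$ when $p\leq\frac12$, can be confirmed by brute force over all families on a small ground set. The following Python check is an illustration only (the choices $n=3$ and $p=0.4$ are ours, not from the paper):

```python
from itertools import combinations

n, p = 3, 0.4
subsets = [frozenset(s) for r in range(n + 1)
           for s in combinations(range(1, n + 1), r)]

def mu_p(family):
    # p-biased measure of a family of subsets of [n]
    return sum(p ** len(A) * (1 - p) ** (n - len(A)) for A in family)

star = [A for A in subsets if 1 in A]
print(mu_p(star))  # the star centered at 1 has measure p

# brute force over all 2^(2^n) families, keeping only the intersecting ones
best = 0.0
for mask in range(1 << len(subsets)):
    family = [subsets[k] for k in range(len(subsets)) if mask >> k & 1]
    if all(A & B for A in family for B in family):
        best = max(best, mu_p(family))
print(best)  # the maximum equals p, attained by a star
```

Note that a family containing $\emptyset$ is automatically rejected, since `A & B` is empty whenever one of the sets is.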
Let $1>p_1\geq p_2\geq\cdots\geq p_n>0$ be real numbers, and let ${\bm p}=(p_1,p_2,\ldots,p_n)$. We define the ${\bm p}$-biased measure (or $\mu_{{\bm p}}$-measure) $\mu_{{\bm p}}:2^{[n]}\to(0,1)$ by \[ \mu_{{\bm p}}(A):=\prod_{i\in A}p_i\prod_{j\in[n]\setminus A}(1-p_j) \] for $A\in 2^{[n]}$, and for a family $\mathcal A\subset 2^{[n]}$ we define \[ \mu_{{\bm p}}(\mathcal A) := \sum_{A\in\mathcal A} \mu_{{\bm p}}(A). \] The star centered at $i\in[n]$ is an $r$-wise intersecting family with $\mu_{{\bm p}}$-measure $p_i$. Fishburn et al.\ \cite{FFFLO} studied the maximal $\mu_{{\bm p}}$-measure for 2-wise intersecting families using combinatorial tools. Then Suda et al.\ \cite{STT} extended their result to cross-intersecting families (see Theorem~\ref{STT-thm} in the last section) by solving a semidefinite programming problem, and posed the following conjecture. \begin{conj}[\cite{STT}]\label{conj1} Let $1>p_1\geq p_2\geq\cdots\geq p_n>0$ and ${\bm p}=(p_1,p_2,\ldots,p_n)$. Let $p_3<\frac12$. If $\mathcal A\subset 2^{[n]}$ is a $2$-wise intersecting family, then $\mu_{{\bm p}}(\mathcal A)\leq p_1$. Moreover, if $p_1>p_3$, or $p_1<\frac12$, then equality holds if and only if $\mathcal A$ is a star centered at some $i\in[n]$ with $p_1=p_i$. \end{conj} This conjecture is true if the condition $p_3<\frac12$ is replaced with $p_2\leq\frac12$, which is proved in \cite{FFFLO} and \cite{STT}. In this paper we apply Hoffman's bound to show the following result which supports the conjecture. \begin{thm}\label{thm1} Let $1>p_1\geq p_2\geq\cdots\geq p_n>0$ and ${\bm p}=(p_1,p_2,\ldots,p_n)$. Let $p_3<\frac12$. Suppose that $p_1\leq\frac12$ or $1-p_2>p_3$. If $\mathcal A\subset 2^{[n]}$ is a $2$-wise intersecting family, then $\mu_{{\bm p}}(\mathcal A)\leq p_1$. Moreover equality holds if and only if $\mathcal A$ is a star centered at some $i\in[n]$ with $p_1=p_i$.
\end{thm} Frankl and the author \cite{FT2003} studied $r$-wise intersecting families with a $\mu_{{\bm p}}$-measure where ${\bm p}=(p,p,\ldots,p)$, and proved that if $p<\frac{r-1}r$ then $\mu_{{\bm p}}(\mathcal A)\leq p$ for every $r$-wise intersecting family $\mathcal A\subset 2^{[n]}$. The proof was combinatorial. Filmus et al.\ \cite{FGL} gave a new proof by extending Hoffman's bound to $r$-uniform hypergraphs ($r$-graphs). In this paper we further extend their method to obtain the following result. \begin{thm}\label{thm2} Let $1>p_1\geq p_2\geq\cdots\geq p_n>0$ and ${\bm p}=(p_1,p_2,\ldots,p_n)$. If $\frac23>p_2$ and $\mathcal A\subset 2^{[n]}$ is a $3$-wise intersecting family, then $\mu_{\bm p}(\mathcal A) \leq p_1$. Moreover equality holds if and only if $\mathcal A$ is a star centered at some $i\in[n]$ with $p_1=p_i$. \end{thm} Friedgut \cite{Friedgut} studied the case $r=2$ and ${\bm p}=(p,p,\ldots,p)$, and found a stability result. We combine his method with the FGL bound to get a stability result for the case $r=3$. \begin{thm}\label{thm3} Let $0<p<\frac 23$ be fixed, and let ${\bm p}=(p,p,\ldots,p)$. Then there exists a positive constant $\epsilon_p$ such that the following holds for all $0<\epsilon<\epsilon_p$. If $\mathcal A\subset 2^{[n]}$ is a $3$-wise intersecting family with $\mu_{{\bm p}}(\mathcal A)=p-\epsilon$, then there exists a star $\mathcal B\subset 2^{[n]}$ such that \begin{itemize} \item[(i)] if $p\leq\frac12$ then $\mathcal A\subset\mathcal B$, and \item[(ii)] if $p>\frac12$ then $\mu_{{\bm p}}(\mathcal A\triangle\mathcal B)<(C_p+o(1))\epsilon$, where $C_p=\frac{16p(1-p)^2}{(2p-1)(3-4p)}$, and the $o(1)$ term vanishes as $\epsilon\to 0$. \end{itemize} \end{thm} Finally we mention that there are different and more combinatorial approaches to the related problems concerning weighted intersecting families, see, e.g., \cite{BE} and \cite{B}.
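To get a feel for the stability constant in Theorem \ref{thm3} (ii), the following Python sweep (ours, purely illustrative) confirms that $C_p=\frac{16p(1-p)^2}{(2p-1)(3-4p)}$ is positive and finite on the whole range $\frac12<p<\frac23$ and blows up as $p\to\frac12^{+}$, so the stability guarantee degrades near $p=\frac12$:

```python
def C(p):
    # stability constant C_p from Theorem 3, case 1/2 < p < 2/3
    return 16 * p * (1 - p) ** 2 / ((2 * p - 1) * (3 - 4 * p))

ps = [0.5 + k * (2 / 3 - 0.5) / 100 for k in range(1, 100)]
vals = [C(p) for p in ps]
print(min(vals))  # positive throughout the range
print(vals[0])    # large near p = 1/2
```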
In Section~2 we prepare tools for the proofs, and then we prove Theorems~\ref{thm1}--\ref{thm3} in Section~3. In Section~4 we discuss an easy generalization and some related problems. \section{Preliminaries}\label{sec:prelim} In this section we collect some tools used to prove our results. In the first two subsections we reproduce the proof of Hoffman's bound for a hypergraph established by Filmus, Golubev, and Lifshitz, because in the next section we will use not only the bound itself but also some equalities and inequalities that appear in the proof. Our formulation and definitions follow those in \cite{Filmus}, which are slightly different from \cite{FGL} (but of course essentially the same). In the last subsection we present some basic facts about spectral information related to families with the $\mu_{\bm p}$-measure. A remark on notation: in this paper we often identify a set and its indicator. Let $V$ be a finite set and let $\{0,1\}^V$ denote the set of boolean functions from $V$ to $\{0,1\}$. For $I\in 2^V$ we write ${\bf 1}_I\in\{0,1\}^V$ to denote the indicator of $I$, that is, \[ {\bf 1}_I(v)=\begin{cases} 1 &\text{if }v\in I,\\ 0 &\text{if }v\not\in I, \end{cases} \] where $v\in V$. We identify $I$ and ${\bf 1}_I$, and we write $v\in{\bf 1}_I$ to mean $v\in I$. For simplicity we just write ${\bf 1}$ to mean ${\bf 1}_V$, so ${\bf 1}(v)=1$ for all $v\in V$. For $x,y\in V$ and a matrix $T$, where the rows and columns are indexed by $V$, we write $(T)_{x,y}$ for the $(x,y)$-entry of $T$. Throughout the paper let $q:=1-p$ and $q_i:=1-p_i$. \subsection{Hoffman's bound for a graph} Let $V$ be a finite set with $|V|\geq 2$. We say that $\mu_2:V\times V\to\mathbb R$ is a symmetric signed measure if \begin{itemize} \item $\mu_2(x,y)=\mu_2(y,x)$ for all $x,y\in V$, and \item $\sum_{x\in V}\sum_{y\in V}\mu_2(x,y)=1$. \end{itemize} Note that $\mu_2(x,y)$ can be negative, which is essential for the proof of Theorem~\ref{thm1}.
Let $\mu_1:V\to\mathbb R$ be the marginal of $\mu_2$, that is, \begin{align}\label{eq2:mu1} \mu_1(x):=\sum_{y\in V}\mu_2(x,y). \end{align} Then $\sum_{x\in V}\mu_1(x)=\sum_{x\in V}\sum_{y\in V}\mu_2(x,y)=1$. \begin{defn} Let $\mu_2:V\times V\to\mathbb R$ be a symmetric signed measure. We say that $G=(V,\mu_2)$ is a weighted graph if \begin{align}\label{positivity} \mu_1(x)>0 \text{ for all } x\in V. \end{align} \end{defn} In this paper we only deal with $\mu_2$ whose marginal $\mu_1$ satisfies \eqref{positivity}. Here $V$ is the vertex set, and each $(x,y)\in V\times V$ is an ordered (directed) edge, loops included, with possibly negative weight $\mu_2(x,y)$. So $(x,y)$ should be considered a non-edge if and only if $\mu_2(x,y)=0$. We say that $I\subset V$ is an \emph{independent set} if $x,y\in I$ implies $\mu_2(x,y)=0$. Write $\mu_1(I)$ for $\sum_{x\in I}\mu_1(x)$, and define the independence ratio $\alpha(G)$ by \[ \alpha(G):=\max\{\mu_1(I):\text{$I$ is an independent set in $G$}\}. \] \begin{example} Let $G=(V,E)$ be a usual simple $d$-regular graph. Let us construct a symmetric measure $\mu_2$ so that $\mu_1$ becomes a uniform measure $\mu_1(x)\equiv 1/|V|$. To this end we just set \[ \mu_2(x,y):=\begin{cases} 0 & \text{if }\{x,y\}\not\in E\\ \frac1{d|V|} & \text{if }\{x,y\}\in E. \end{cases} \] Then $(V,\mu_2)$ is a weighted graph, and in this case $\alpha(G)=|I|/|V|$, where $I\subset V$ is a usual maximum independent set in $G$. \qed \end{example} Let $\mathbb R^V$ be the set of functions from $V$ to $\mathbb R$, and for $f,g\in\mathbb R^V$ let \begin{align*} \mathbb E_{\mu_1}[f]&:=\sum_{x\in V}f(x)\mu_1(x),\\ \mathbb E_{\mu_2}[f,g]&:=\sum_{x\in V}\sum_{y\in V}f(x)g(y)\mu_2(x,y). \end{align*} Since $\mu_2$ is symmetric it follows $\mathbb E_{\mu_2}[f,g]=\mathbb E_{\mu_2}[g,f]$. \begin{fact}\label{fact0} Let $\varphi:={\bf 1}_I\in\{0,1\}^V$ be the indicator of an independent set $I$.
Then we have $\mathbb E_{\mu_1}[\varphi]=\mu_1(I)$ and $\mathbb E_{\mu_2}[\varphi,\varphi]=0$. \end{fact} \begin{proof} Indeed we have \[ \mathbb E_{\mu_1}[\varphi]=\sum_{x\in V}\varphi(x)\mu_1(x)=\sum_{x\in I}\mu_1(x)=\mu_1(I). \] Since $\mu_2(x,y)=0$ for $x,y\in I$ we also have \[ \mathbb E_{\mu_2}[\varphi,\varphi]=\sum_{x\in V}\sum_{y\in V}\varphi(x)\varphi(y)\mu_2(x,y) =\sum_{x\in I}\sum_{y\in I}\mu_2(x,y)=0. \] \end{proof} We define a measure version of the adjacency matrix, which is an extension of the usual adjacency matrix, and we simply call it an adjacency matrix in this paper. \begin{defn} Let $G=(V,\mu_2)$ be a weighted graph. We define the adjacency matrix $T=T(G)$. This is a $|V|\times|V|$ matrix, and for $x,y\in V$ the $(x,y)$-entry of $T$ is given by \begin{align}\label{def:T} (T)_{x,y}=\frac{\mu_2(x,y)}{\mu_1(x)}. \end{align} \end{defn} We introduce an inner product ${\langle} \cdot,\cdot{\rangle}_{\mu_1}:\mathbb R^V\times\mathbb R^V\to\mathbb R$ by \begin{align}\label{inner product} {\langle} f,g{\rangle}_{\mu_1}:=\mathbb E_{\mu_1}[fg]=\sum_{x\in V}f(x)g(x)\mu_1(x). \end{align} Note that the condition \eqref{positivity} is necessary to define the above inner product properly. Clearly ${\langle} f,g{\rangle}_{\mu_1}={\langle} g,f{\rangle}_{\mu_1}$. We list some easy facts. (We include the proof in the Appendix.) \begin{fact}\label{fact1} Let $G=(V,\mu_2)$ be a weighted graph with the adjacency matrix $T$. Let $f,g\in\mathbb R^V$ and $\varphi\in\{0,1\}^V$. \begin{itemize} \item[(i)] ${\langle} f,Tg{\rangle}_{\mu_1}=\mathbb E_{\mu_2}[f,g]$. \item[(ii)] ${\langle} f,Tg{\rangle}_{\mu_1}={\langle} Tf,g{\rangle}_{\mu_1}$, that is, $T$ is self-adjoint. \item[(iii)] $T{\bf 1}={\bf 1}$. \item[(iv)] ${\langle}{\bf 1},{\bf 1}{\rangle}_{\mu_1}=1$. \item[(v)] ${\langle} \varphi,{\bf 1}{\rangle}_{\mu_1}=\mathbb E_{\mu_1}[\varphi]$. \item[(vi)] ${\langle} \varphi,\varphi{\rangle}_{\mu_1}=\mathbb E_{\mu_1}[\varphi]$.
\end{itemize} \end{fact} \begin{setup}\label{setup} Let $G=(V,\mu_2)$ be a weighted graph with the adjacency matrix $T$. By Fact~\ref{fact1} (iii) the matrix $T$ has eigenvector ${\bf 1}$ with the eigenvalue $1$. Since $T$ is self-adjoint, $T$ has $|V|$ real eigenvalues $l_0=1,l_1,l_2,\ldots,l_{|V|-1}$ with the corresponding eigenvectors ${\bm v}_0={\bf 1},{\bm v}_1,{\bm v}_2,\ldots,{\bm v}_{|V|-1}$, that is, $T{\bm v}_i=l_i{\bm v}_i$. We may assume that these vectors form an orthonormal basis of $\mathbb R^V$ with respect to the inner product ${\langle}\cdot,\cdot{\rangle}_{\mu_1}$. Then, for $\varphi\in\{0,1\}^V$, we can expand $\varphi$ using the basis: \begin{align}\label{eq:setup} \varphi=\widehat\varphi_0{\bf 1}+\sum_{i\geq 1}\widehat\varphi_i{\bm v}_i, \end{align} where $\widehat\varphi_i={\langle} \varphi,{\bm v}_i{\rangle}_{\mu_1}$. Let $\lambda_{\min}(T)$ denote the minimum eigenvalue of $T$. \qed \end{setup} \begin{fact}\label{fact2} Let $\varphi\in\{0,1\}^V$. \begin{itemize} \item[(i)] $\mathbb E_{\mu_1}[\varphi]=\widehat\varphi_0$ and $\mathbb E_{\mu_1}[\varphi]=\widehat\varphi_0^2+\sum_{i\geq 1}\widehat\varphi_i^2$. \item[(ii)] $\mathbb E_{\mu_2}[\varphi,\varphi]=\widehat\varphi_0^2+\sum_{i\geq 1}\widehat\varphi_i^2l_i$. \end{itemize} \end{fact} \begin{lemma}\label{E[f,f]>} For $\varphi\in\{0,1\}^V$ we have $\mathbb E_{\mu_2}[\varphi,\varphi]\geq \mathbb E_{\mu_1}[\varphi]\big(1-(1-\lambda_{\min}(T))(1-\mathbb E_{\mu_1}[\varphi])\big)$. \end{lemma} \begin{proof} By Fact~\ref{fact2} (ii) we have \begin{align*} \mathbb E_{\mu_2}[\varphi,\varphi] &= \widehat\varphi_0^2+\sum_{i\geq 1} \widehat\varphi_i^2l_i\\ &\geq \widehat\varphi_0^2+\lambda_{\min}(T)\sum_{i\geq 1} \widehat\varphi_i^2\\ &= \mathbb E_{\mu_1}[\varphi]^2+\lambda_{\min}(T)(\mathbb E_{\mu_1}[\varphi]- \mathbb E_{\mu_1}[\varphi]^2)\quad\text{by Fact~\ref{fact2} (i)}\\ &= \mathbb E_{\mu_1}[\varphi]\big(1-(1-\lambda_{\min}(T))(1-\mathbb E_{\mu_1}[\varphi])\big).
\end{align*} \end{proof} \begin{thm}[Hoffman's bound, see \cite{Haemers}]\label{thm:Hoffman bound for 2-graph} Let $G=(V,\mu_2)$ be a weighted graph with the adjacency matrix $T$. Let $\varphi$ be the indicator of an independent set of $G$. Suppose that $\lambda_{\min}(T)<1$. Then we have \[ 1-\mathbb E_{\mu_1}[\varphi]\geq\frac1{1-\lambda_{\min}(T)}, \] and \[ \alpha(G)\leq\frac{-\lambda_{\min}(T)}{1-\lambda_{\min}(T)}. \] \end{thm} \begin{proof} By Lemma~\ref{E[f,f]>} with $\mathbb E_{\mu_2}[\varphi,\varphi]=0$ it follows \[ 0\geq \mathbb E_{\mu_1}[\varphi]\big(1-(1-\lambda_{\min}(T))(1-\mathbb E_{\mu_1}[\varphi])\big) =\mathbb E_{\mu_1}[\varphi]\big(\lambda_{\min}(T)+(1-\lambda_{\min}(T))\mathbb E_{\mu_1}[\varphi]\big). \] Since $\mathbb E_{\mu_1}[\varphi]>0$ and $\lambda_{\min}(T)<1$ we get the desired inequality. \end{proof} \subsection{Hoffman's bound for a 3-graph} Let $V$ be a finite set with $|V|\geq 2$, and let $\mu_3:V^3\to\mathbb R$ be a symmetric signed measure, that is, \begin{itemize} \item $\mu_3(x,y,z)=\mu_3(x',y',z')$ whenever $(x',y',z')$ is a permutation of $(x,y,z)$, and \item $\sum_{x\in V}\sum_{y\in V}\sum_{z\in V}\mu_3(x,y,z)=1$. \end{itemize} Define the marginals $\mu_2\in\mathbb R^{V^2}$ and $\mu_1\in\mathbb R^V$ as follows: \begin{align*} \mu_2(x,y)&:= \sum_{z\in V}\mu_3(x,y,z).\\ \mu_1(x)&:= \sum_{y\in V}\mu_2(x,y)=\sum_{y\in V}\sum_{z\in V}\mu_3(x,y,z). \end{align*} \begin{defn} Let $\mu_3:V^3\to\mathbb R$ be a symmetric signed measure. We say that $H=(V,\mu_3)$ is a weighted 3-graph if $\mu_2(x,y)>0$ for all $x,y\in V$. \end{defn} Note that $\mu_1(x)>0$ for all $x$ follows from \eqref{eq2:mu1}. We will consider two inner products, one is with respect to $\mu_1(x)$, and the other is with respect to $\mu_2(x,y)/\mu_1(x)$ for fixed $x$. We need the conditions in the above definition to ensure that these inner products are defined properly.
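To illustrate Theorem \ref{thm:Hoffman bound for 2-graph} on the Example above, take the $5$-cycle $C_5$ (our choice), so that $d=2$ and $T$ coincides with $A/d$ for the usual adjacency matrix $A$. Its minimum eigenvalue is $\cos(4\pi/5)\approx -0.809$, and the bound gives $\alpha(G)\leq 0.809/1.809 = 1/\sqrt5\approx 0.447$, consistent with the true independence ratio $2/5$. A Python sanity check (not part of the paper):

```python
import numpy as np

# 5-cycle C5: d = 2, mu_2(x, y) = 1/(d|V|) on edges, so T = A / d
V = 5
A = np.zeros((V, V))
for x in range(V):
    A[x, (x + 1) % V] = A[x, (x - 1) % V] = 1.0
T = A / 2.0
lam_min = float(np.linalg.eigvalsh(T).min())
hoffman = -lam_min / (1.0 - lam_min)  # upper bound on the independence ratio
print(lam_min, hoffman)  # about -0.809 and 0.447; the true ratio is 2/5
```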
We say that a subset $I\subset V$ is an independent set in $H$ if $\mu_3(x,y,z)=0$ for all $x,y,z\in I$, and we define the independence ratio $\alpha(H)$ by \[ \alpha(H):=\max\{\mu_1(I):\text{$I$ is an independent set in $H$}\}. \] Suppose that $H=(V,\mu_3)$ is a weighted 3-graph. Then $G=(V,\mu_2)$ is a weighted 2-graph because $\mu_2$ is a symmetric measure and $\mu_1$ satisfies \eqref{positivity}. The adjacency matrix $T=T(G)$ is defined by \eqref{def:T}. Now we define a link graph relative to $x$ which will be denoted by $H_x$. To this end, for $x,y,z\in V$, we define $\mu_{2,x}\in\mathbb R^{V^2}$ and its marginal $\mu_{1,x}\in\mathbb R^V$ by \begin{align*} \mu_{2,x}(y,z)&:=\frac{\mu_3(x,y,z)}{\mu_1(x)},\\ \mu_{1,x}(y)&:=\sum_{z\in V}\mu_{2,x}(y,z)=\frac{\mu_2(x,y)}{\mu_1(x)}. \end{align*} Then $H_x:=(V,\mu_{2,x})$ is a weighted 2-graph because $\mu_{2,x}$ is a symmetric measure and $\mu_{1,x}$ satisfies \eqref{positivity}. The adjacency matrix $T_x=T_x(H_x)$ is also defined by \eqref{def:T}, so the $(y,z)$-entry of $T_x$ is \begin{align}\label{def:Tx} (T_x)_{y,z}=\frac{\mu_{2,x}(y,z)}{\mu_{1,x}(y)}=\frac{\mu_3(x,y,z)}{\mu_2(x,y)}.\end{align} By definition both $T$ and $T_x$ are self-adjoint, and they have $|V|$ real eigenvalues. We can relate $\mathbb E_{\mu_2}$ and $\mathbb E_{\mu_{1,x}}$ as follows. Here we write $x\in\varphi$ to mean $\varphi(x)=1$. \begin{lemma}\label{claim:E[f,f]} For $\varphi\in\{0,1\}^V$ we have $\mathbb E_{\mu_2}[\varphi,\varphi] \leq \mathbb E_{\mu_1}[\varphi]\,\max_{x\in\varphi}\mathbb E_{\mu_{1,x}}[\varphi]$. \end{lemma} \begin{proof} Note that if $x\not\in\varphi$ then $\varphi(x)=0$ and the term having $\varphi(x)$ does not contribute in the sum below. 
Thus we have \begin{align*} \mathbb E_{\mu_2}[\varphi,\varphi] &= \sum_{x\in V}\sum_{y\in V} \varphi(x)\varphi(y)\mu_2(x,y)\\ &= \sum_{x\in\varphi}\varphi(x)\mu_1(x)\sum_{y\in V} \varphi(y)\frac{\mu_2(x,y)}{\mu_1(x)}\\ &= \sum_{x\in\varphi}\varphi(x)\mu_1(x)\sum_{y\in V} \varphi(y)\mu_{1,x}(y)\\ &\leq\sum_{x\in V}\varphi(x)\mu_1(x)\,\max_{x\in\varphi}\sum_{y\in V}\varphi(y)\mu_{1,x}(y)\\ &= \mathbb E_{\mu_1}[\varphi]\,\max_{x\in\varphi}\mathbb E_{\mu_{1,x}}[\varphi]. \end{align*} \end{proof} \begin{thm}[Hoffman's bound for a 3-graph \cite{FGL}]\label{thm:3-graph Hoffman} Let $H=(V,\mu_3)$ be a weighted $3$-graph. Let $\varphi$ be the indicator of an independent set. Suppose that $\lambda_{\min}(T)<1$ and $\lambda_{\min}(T_x)<1$ for $x\in\varphi$. Then \[ 1-\mathbb E_{\mu_1}[\varphi]\geq\frac1{(1-\lambda_{\min}(T))\max_{x\in\varphi}(1-\lambda_{\min}(T_x))}. \] In particular, if $\varphi$ is the indicator of a maximum independent set, then \[ \alpha(H)\leq 1- \frac1{(1-\lambda_{\min}(T))\max_{x\in\varphi}(1-\lambda_{\min}(T_x))}. \] \end{thm} \begin{proof} Let $I\subset V$ be the independent set in $H$ such that ${\bf 1}_I=\varphi$. By Lemma~\ref{E[f,f]>} we have \begin{align*} \mathbb E_{\mu_2}[\varphi,\varphi]\geq\mathbb E_{\mu_1}[\varphi]\big( 1-(1-\lambda_{\min}(T))(1-\mathbb E_{\mu_1}[\varphi])\big). \end{align*} This together with Lemma~\ref{claim:E[f,f]} yields \[ \mathbb E_{\mu_1}[\varphi]\,\max_{x\in\varphi}\mathbb E_{\mu_{1,x}}[\varphi]\geq \mathbb E_{\mu_1}[\varphi]\big(1-(1-\lambda_{\min}(T))(1-\mathbb E_{\mu_1}[\varphi])\big), \] that is, \begin{align}\label{eq:la1+la2} 1-\mathbb E_{\mu_1}[\varphi]\geq\frac{1-\max_{x\in\varphi}\mathbb E_{\mu_{1,x}}[\varphi]}{1-\lambda_{\min}(T)}. \end{align} Next we bound $\mathbb E_{\mu_{1,x}}[\varphi]$ by using the link graph $H_x:=(V,\mu_{2,x})$ relative to $x\in I$. Note that $I$ is an independent set in $H_x$ as well. Indeed if $y,z\in I$ then $\mu_{2,x}(y,z)=0$ because $\mu_3(x,y,z)=0$.
By applying Theorem~\ref{thm:Hoffman bound for 2-graph} to the adjacency matrix $T_x$ of $H_x$ we get \[ 1-\mathbb E_{\mu_{1,x}}[\varphi]\geq\frac1{1-\lambda_{\min}(T_x)}, \] and \begin{align}\label{eq:1-E[mu_{1,x}]} 1-\max_{x\in\varphi}\mathbb E_{\mu_{1,x}}[\varphi]\geq \frac1{\max_{x\in\varphi}(1-\lambda_{\min}(T_x))}. \end{align} By \eqref{eq:la1+la2} and \eqref{eq:1-E[mu_{1,x}]} we obtain the desired inequality. \end{proof} \subsection{Tools for uniqueness}\label{subsec:unique} In Setup~\ref{setup} any $\varphi\in\{0,1\}^V$ can be expanded in the form in \eqref{eq:setup}. We first show that if $\mathbb E_{\mu_2}[\varphi,\varphi]$ is small, then we only need the eigenvectors corresponding to the largest and the smallest eigenvalues for the expansion. \begin{lemma}\label{reduction1} We assume Setup~\ref{setup}. If \begin{align}\label{eq:red1} \mathbb E_{\mu_2}[\varphi,\varphi]\leq \widehat\varphi_0^2+\lambda_{\min}(T)(\widehat\varphi_0-\widehat\varphi_0^2), \end{align} then $\varphi=\widehat\varphi_0{\bf 1}+\sum_{i\in J}\widehat\varphi_i{\bm v}_i$, where $J=\{i:1\leq i<|V|,\,l_i=\lambda_{\min}(T)\}$. \end{lemma} \begin{proof} By Fact~\ref{fact2} and \eqref{eq:red1} we have \[ \mathbb E_{\mu_2}[\varphi,\varphi]=\widehat\varphi_0^2+\sum_{i\geq 1}\widehat\varphi_i^2l_i \leq\widehat\varphi_0^2+\lambda_{\min}(T)(\widehat\varphi_0-\widehat\varphi_0^2) =\widehat\varphi_0^2+\lambda_{\min}(T)\sum_{i\geq 1}\widehat\varphi_i^2, \] and \[ \sum_{i\geq 1}(l_i-\lambda_{\min}(T))\widehat\varphi_i^2 \leq 0. \] This yields $l_i=\lambda_{\min}(T)$ or $\widehat\varphi_i=0$, and the result follows. \end{proof} Let $1>p_1\geq p_2\geq\cdots\geq p_n>0$ be given. Let $V=2^{[n]}$ and let $\mu_1:V\to(0,1)$ be the measure defined by $\mu_1(S)=\prod_{i\in S}p_i\prod_{j\in [n]\setminus S}q_j$ for $S\in V$. Then we can view $\mathbb R^V$ as a $2^n$-dimensional inner product space over $\mathbb R$, where the inner product is defined by \eqref{inner product}.
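Here is a small numerical illustration of this inner product space (ours; the parameters $n=2$ and ${\bm p}=(0.7,0.4)$ are arbitrary): since $\mu_1=\mu_{\bm p}$ is a probability measure on $V=2^{[n]}$ we have ${\langle}{\bf 1},{\bf 1}{\rangle}_{\mu_1}=1$, and if $\varphi$ is the indicator of the star centered at $1$ then ${\langle}\varphi,\varphi{\rangle}_{\mu_1}=\mathbb E_{\mu_1}[\varphi]=p_1$, as in Fact~\ref{fact1}:

```python
from itertools import product

p = (0.7, 0.4)
n = len(p)
V = [frozenset(i for i in range(1, n + 1) if bits[i - 1])
     for bits in product((0, 1), repeat=n)]

def mu1(S):
    # mu_p-measure of a single subset S of [n]
    out = 1.0
    for i in range(1, n + 1):
        out *= p[i - 1] if i in S else 1.0 - p[i - 1]
    return out

def inner(f, g):
    # <f, g>_{mu_1} = sum_S f(S) g(S) mu_1(S)
    return sum(f(S) * g(S) * mu1(S) for S in V)

one = lambda S: 1.0
star1 = lambda S: 1.0 if 1 in S else 0.0  # indicator of the star centered at 1
print(inner(one, one))      # approximately 1 (mu_1 is a probability measure)
print(inner(star1, star1))  # approximately p_1 = 0.7
```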
We will construct an orthonormal basis that suits our purpose. \begin{fact}\label{T^(i)} Let $p_i>\frac{r-2}{r-1}$ and let \[ T^{(i)}= \left[ \begin{matrix} 1-\frac{p_i}{(r-1)q_i}& \frac{p_i}{(r-1)q_i}\\ \frac1{r-1}& 1-\frac1{r-1} \end{matrix} \right]. \] Then $T^{(i)}$ has eigenvalues 1 and $\lambda_i:=1-\frac{1}{(r-1)q_i}<0$ with the corresponding eigenvectors ${\bm v}_{\emptyset}^{(i)}=\left(\begin{smallmatrix} 1\\1 \end{smallmatrix}\right)$ and ${\bm v}_{\{i\}}^{(i)}=\left(\begin{smallmatrix} c_i\\-\frac1{c_i} \end{smallmatrix}\right)$, where $c_i=\sqrt{p_i/q_i}$. \end{fact} Let $T=T^{(n)}\otimes T^{(n-1)}\otimes\cdots\otimes T^{(1)}$. Then the rows and columns of $T$ are indexed by the order $\emptyset, \{1\}, \{2\}, \{1,2\},\{3\},\{1,3\},\{2,3\},\{1,2,3\},\ldots$. For each $S\in V$ the corresponding indicator is given by the column vector of the matrix \[ \left[ \begin{matrix} 1&0\\ 1&1 \end{matrix} \right] \otimes\cdots\otimes \left[ \begin{matrix} 1&0\\ 1&1 \end{matrix} \right]. \] One can construct the eigenvectors of $T$ by routine computation, and we have the following (see, e.g., \cite{Friedgut, STT}). \begin{fact}\label{ONB} Let $S\in V$ and let ${\bm v}_S$ be the column vector (indexed by $S$) of the matrix \[ C_n:= \left[ \begin{matrix} 1&c_n\\ 1& -\frac1{c_n} \end{matrix} \right] \otimes \left[ \begin{matrix} 1&c_{n-1}\\ 1& -\frac1{c_{n-1}} \end{matrix} \right] \otimes\cdots\otimes \left[ \begin{matrix} 1&c_1\\ 1& -\frac1{c_1} \end{matrix} \right]. \] \begin{itemize} \item[(i)] The $\mathbb R^V$ with the inner product defined by \eqref{inner product} is spanned by the orthonormal basis $\{{\bm v}_S:S\in V\}$. \item[(ii)] The ${\bm v}_S$ is an eigenvector of $T$ with the corresponding eigenvalue $\lambda_S:=\prod_{j\in S}\lambda_j$. In particular, ${\bm v}_{\emptyset}={\bf 1}$ and $\lambda_{\emptyset}=1$.
\item[(iii)] The entry of ${\bm v}_{\{i\}}$ corresponding to $S\in V$ is $c_i$ if $i\not\in S$ and $-1/c_i$ if $i\in S$, and $p_i{\bf 1}-\sqrt{p_iq_i}{\bm v}_{\{i\}}$ is the indicator of the star centered at $i$, i.e., $\{S\in V:i\in S\}$. \item[(iv)] We have ${\bm v}_S=\prod_{i\in S}{\bm v}_{\{i\}}$, where the product is taken componentwise. \end{itemize} \end{fact} For example, the matrix $C_3$ is as follows, where the columns are in the order ${\bm v}_\emptyset$, ${\bm v}_{\{1\}}$, ${\bm v}_{\{2\}}$, ${\bm v}_{\{1,2\}}$, ${\bm v}_{\{3\}}$, ${\bm v}_{\{1,3\}}$, ${\bm v}_{\{2,3\}}$, ${\bm v}_{\{1,2,3\}}$. \[ C_3= \left[ \begin{array}{cccccccc} 1 & c_1 & c_2 & c_1 c_2 & c_3 & c_1 c_3 & c_2 c_3 & c_1 c_2 c_3 \\ 1 & -\frac{1}{c_1} & c_2 & -\frac{c_2}{c_1} & c_3 & -\frac{c_3}{c_1} & c_2 c_3 & -\frac{c_2 c_3}{c_1} \\ 1 & c_1 & -\frac{1}{c_2} & -\frac{c_1}{c_2} & c_3 & c_1 c_3 & -\frac{c_3}{c_2} & -\frac{c_1 c_3}{c_2} \\ 1 & -\frac{1}{c_1} & -\frac{1}{c_2} & \frac{1}{c_1 c_2} & c_3 & -\frac{c_3}{c_1} & -\frac{c_3}{c_2} & \frac{c_3}{c_1 c_2} \\ 1 & c_1 & c_2 & c_1 c_2 & -\frac{1}{c_3} & -\frac{c_1}{c_3} & -\frac{c_2}{c_3} & -\frac{c_1 c_2}{c_3} \\ 1 & -\frac{1}{c_1} & c_2 & -\frac{c_2}{c_1} & -\frac{1}{c_3} & \frac{1}{c_1 c_3} & -\frac{c_2}{c_3} & \frac{c_2}{c_1 c_3} \\ 1 & c_1 & -\frac{1}{c_2} & -\frac{c_1}{c_2} & -\frac{1}{c_3} & -\frac{c_1}{c_3} & \frac{1}{c_2 c_3} & \frac{c_1}{c_2 c_3} \\ 1 & -\frac{1}{c_1} & -\frac{1}{c_2} & \frac{1}{c_1 c_2} & -\frac{1}{c_3} & \frac{1}{c_1 c_3} & \frac{1}{c_2 c_3} & -\frac{1}{c_1 c_2 c_3} \\ \end{array} \right]. \] \begin{lemma}\label{reduction2} Let $L=\{i\in[n]:p_i=p_1\}$ and $\varphi\in\{0,1\}^V$. Suppose that $\lambda_{\min}(T)<\lambda_S$ for all $S\in V\setminus\binom L1$. 
If $\varphi(\emptyset)=0$ and $\varphi([n])=1$, and $\varphi$ is expanded as \begin{align}\label{unique expansion} \varphi=p_1{\bf 1}+\sum_{k\in L}\widehat\varphi_{\{k\}}{\bm v}_{\{k\}}, \end{align} then $\varphi$ is the indicator of a star centered at some $i\in L$. \end{lemma} \begin{proof} We first show that there is only one $i\in [n]$ such that $\varphi=p_1{\bf 1}-\sqrt{p_iq_i}\,{\bm v}_{\{i\}}$. Suppose, to the contrary, that there are distinct $i,j\in L$ such that both $\widehat\varphi_{\{i\}}$ and $\widehat\varphi_{\{j\}}$ are non-zero. Let $\varphi^2\in\{0,1\}^V$ be such that $\varphi^2(x)=\varphi(x)^2$. Then, by Fact~\ref{ONB} (iv), $\varphi^2=(p_1{\bf 1}+\sum_{k\in L}\widehat\varphi_{\{k\}}{\bm v}_{\{k\}})^2$ must contain the term \[ \widehat\varphi_{\{i\}}\widehat\varphi_{\{j\}}{\bm v}_{\{i\}}{\bm v}_{\{j\}} =\widehat\varphi_{\{i\}}\widehat\varphi_{\{j\}}{\bm v}_{\{i,j\}} \] whose coefficient $\widehat\varphi_{\{i\}}\widehat\varphi_{\{j\}}$ is non-zero. But this contradicts the fact that the expansion \eqref{unique expansion} is unique and $\varphi=\varphi^2$. Therefore we can write $\varphi=p_1{\bf 1}+\widehat\varphi_{\{i\}}{\bm v}_{\{i\}}$ for some $i\in[n]$. By Fact~\ref{ONB} (iii) we have ${\bm v}_{\{i\}}(\emptyset)=c_i$ and ${\bm v}_{\{i\}}([n])=-1/c_i$, and so \[ \varphi(\emptyset)=0=p_1+\widehat\varphi_{\{i\}}c_i \text{ and } \varphi([n])=1=p_1-\widehat\varphi_{\{i\}}/c_i. \] Solving the equations we get $\widehat\varphi_{\{i\}}=-\sqrt{p_1q_1}$ and $c_i=c_1$. This means that $i\in L$. Consequently $\varphi=p_i{\bf 1}-\sqrt{p_iq_i}{\bm v}_{\{i\}}$, and by (iii) of Fact~\ref{ONB} we complete the proof. \end{proof} \section{Application} Recall that a family of subsets $\mathcal A\subset 2^{[n]}$ is $r$-wise intersecting if $A_1\cap A_2\cap\cdots\cap A_r\neq\emptyset$ for all $A_1,\ldots,A_r\in\mathcal A$. Let ${\bm p}=(p_1,\ldots,p_n)\in(0,1)^n$ be a fixed real vector.
The $\mu_{{\bm p}}$-measure of a family $\mathcal A\subset 2^{[n]}$ is defined by \[ \mu_{{\bm p}}(\mathcal A):=\sum_{A\in\mathcal A}\prod_{i\in A}p_i\prod_{j\in[n]\setminus A}q_j. \] \subsection{2-wise case: Proof of Theorem~\ref{thm1}} \begin{proof}[Proof of Theorem~\ref{thm1}] The case $n=1$. In this case the only non-empty intersecting family is $\mathcal A=\{\{1\}\}$, and $\mu_{{\bm p}}(\mathcal A)=p_1$, where ${\bm p}=(p_1)$. Nevertheless we rederive this result via Hoffman's bound, because the spectral information obtained here will be reused for the general case $n\geq 2$. Let $V^{(1)}=2^{\{1\}}=\{\emptyset,\{1\}\}$, and define the symmetric signed measure $\mu_2^{(1)}:V^{(1)}\times V^{(1)}\to\mathbb R$ by \[ \mu_2^{(1)}(\emptyset,\{1\})= \mu_2^{(1)}(\{1\},\emptyset)=p_1,\quad \mu_2^{(1)}(\emptyset,\emptyset)=1-2p_1,\quad \mu_2^{(1)}(\{1\},\{1\})=0. \] This induces the marginal \[ \mu_1^{(1)}(\{1\})=p_1,\quad\mu_1^{(1)}(\emptyset)=q_1. \] Then we obtain a weighted 2-graph $G=(V^{(1)},\mu_2^{(1)})$. Note that $\mu_{{\bm p}}=\mu_1^{(1)}$. (Indeed this $\mu_2^{(1)}$ is the only symmetric signed measure which satisfies $\mu_2^{(1)}(\{1\},\{1\})=0$ and $\mu_{{\bm p}}=\mu_1^{(1)}$.) The adjacency matrix $T^{(1)}$ is given by \[ T^{(1)}=\left[ \begin{matrix} 1-\frac{p_1}{q_1}& \frac{p_1}{q_1}\\ 1& 0 \end{matrix} \right], \] where the rows and columns are indexed in the order $\emptyset,\{1\}$. This matrix has eigenvalues $1$ and $-\frac{p_1}{q_1}$. Thus $\lambda_{\min}(T^{(1)})=-\frac{p_1}{q_1}$. Then by Theorem~\ref{thm:Hoffman bound for 2-graph} we have \begin{align*} 1- \alpha(G)&\geq \frac1{1-\lambda_{\min}(T^{(1)})}=\frac1{1+\frac{p_1}{q_1}}=q_1, \end{align*} and $\alpha(G)\leq 1-q_1=p_1$. Now it follows from the definition of $\mu_2^{(1)}$ that a 2-wise intersecting family $\mathcal A\subset V^{(1)}$ is an independent set in $G$. Thus we have shown that $\mu_{\bm p}(\mathcal A)\leq p_1$ in the case $n=1$. \medskip The general case $n\geq 2$.
For $i=1,2,\ldots, n$, let $V^{(i)}=2^{\{i\}}$ and let $\mu_2^{(i)}$ be defined as in the previous $n=1$ case. Let $G^{(i)}=(V^{(i)},\mu_2^{(i)})$ with the adjacency matrix $T^{(i)}$, where \[ T^{(i)}= \left[ \begin{matrix} 1-\frac{p_i}{q_i}& \frac{p_i}{q_i}\\ 1& 0 \end{matrix} \right]. \] Now we define $G=(V,\mu_2)$ to be the product of $G^{(1)},\ldots, G^{(n)}$. To this end let $V=V^{(1)}\times \cdots\times V^{(n)}\cong 2^{[n]}$, and define $\mu_2:V^2\to\mathbb R$ by $\mu_2=\mu_2^{(1)}\times\cdots\times\mu_2^{(n)}$, that is, for $S,S'\in V$, let \[ \mu_2(S,S') :=\mu_2^{(1)}(s_1,s'_1)\times\cdots\times\mu_2^{(n)}(s_n,s_n'), \] where $s_i=S\cap\{i\}$ and $s_i'=S'\cap\{i\}$ for $1\leq i\leq n$. \begin{claim}\label{mu1=mu1^1...} The marginal $\mu_1$ satisfies $\mu_1=\mu_1^{(1)}\times\cdots\times\mu_1^{(n)}$, and $\mu_1=\mu_{\bm p}$. \end{claim} \begin{proof} Indeed, by \eqref{eq2:mu1}, we have \begin{align*} \mu_1(S) &=\sum_{S'\in V}\mu_2(S,S')\\ &=\sum_{S'\in V}\mu_2^{(1)}(s_1,s'_1)\times\cdots\times\mu_2^{(n)}(s_n,s_n')\\ &=\sum_{s_1'\in V^{(1)}}\mu_2^{(1)}(s_1,s_1')\times\cdots\times \sum_{s_n'\in V^{(n)}}\mu_2^{(n)}(s_n,s_n')\\ &=\mu_1^{(1)}(s_1)\times\cdots\times\mu_1^{(n)}(s_n) =\mu_{\bm p}(S). \end{align*} \end{proof} Thus $0<\mu_1(S)<1$ for all $S\in V$, and $G$ is a weighted 2-graph. By construction we see that the adjacency matrix is $T=T^{(n)}\otimes\cdots\otimes T^{(1)}$. We can apply Fact~\ref{T^(i)} with $r=2$ and Fact~\ref{ONB}. Then, for each $S\in V$, $T$ has an eigenvalue $\lambda_S:=\prod_{j\in S}\left(-\frac{p_j}{q_j}\right)$ with the corresponding eigenvector ${\bm v}_S$. Now we determine $\lambda_{\min}(T)=\min_{S}\lambda_S$. \begin{claim} We have $\lambda_{\min}(T)=\lambda_{\{1\}}=-\frac{p_1}{q_1}$, and if $\lambda_S=\lambda_{\min}(T)$ then $S=\{i\}$ with $p_i=p_1$. \end{claim} \begin{proof} Since $p_i\geq p_{i+1}$ we have $\lambda_{\{i\}}=-\frac{p_i}{q_i}\leq -\frac{p_{i+1}}{q_{i+1}} =\lambda_{\{i+1\}}<0$, and $\min_i\lambda_{\{i\}}=\lambda_{\{1\}}$.
The assumption $p_3<\frac12$ means $-1<\lambda_{\{3\}}$, and so $-1<\lambda_{\{i\}}<0$ for all $3\leq i\leq n$. Thus if $\lambda_{\min}(T)=\lambda_S$ then, using Fact~\ref{ONB} (ii), $S$ contains at most one $i$ with $i\geq 3$. In particular if $\lambda_S=\lambda_{\{1\}}$ then $S=\{i\}$ with $p_i=p_1$. If $p_1\leq\frac12$ then $-1\leq\lambda_{\{1\}}$, and $-1\leq\lambda_{\{1\}}\leq\lambda_{\{2\}}<0$. Therefore if $i\in S\subset[n]$ then $\lambda_{\{i\}}\leq\lambda_S$ with equality holding if and only if $S=\{j\}$ with $p_j=p_i$. Thus we get the statement of the claim in this case. If $p_1>\frac12$ then $\lambda_{\{1\}}<-1$. Thus we have $\lambda_{\min}(T)=\min\{\lambda_{\{1\}},\lambda_{\{1,2,3\}}\}$. A simple computation shows that $\lambda_{\{1\}}<\lambda_{\{1,2,3\}}$ is equivalent to $p_3<q_2$, which is our assumption. Thus we get the statement of the claim again. \end{proof} Thus by Theorem~\ref{thm:Hoffman bound for 2-graph} we have $\alpha(G)\leq p_1$. Now let $\mathcal A\subset 2^{[n]}$ be a $2$-wise intersecting family. If $A,B\in\mathcal A$ then there is some $i\in A\cap B$. Then $\mu_2(A,B)=0$ follows from the fact that $\mu_2^{(i)}(\{i\},\{i\})=0$ together with the definition of $\mu_2$. This means that $\mathcal A\subset V$ is an independent set in $G$. Since $\mu_1=\mu_{\bm p}$ we see that $\mu_{\bm p}(\mathcal A)=\mu_1(\mathcal A)\leq\alpha(G)\leq p_1$, which completes the proof of the inequality. Finally we prove the uniqueness. Suppose that $\alpha(G)=p_1$ and let $\varphi$ be the indicator of a maximum independent set. Then $\widehat\varphi_\emptyset={\langle} \varphi,{\bf 1}{\rangle}_{\mu_1}=\mathbb E_{\mu_1}[\varphi]=p_1$. We also have $\mathbb E_{\mu_2}[\varphi,\varphi]=0$ by Fact~\ref{fact0}. Thus, using $\widehat\varphi_\emptyset=p_1$ and $\lambda_{\min}(T)=-\frac{p_1}{q_1}$, we can verify \eqref{eq:red1}, and by Lemma~\ref{reduction1} we have $\varphi=p_1{\bf 1}+\sum_{S\in W}\widehat\varphi_S{\bm v}_S$, where $W=\{S\in V:\lambda_S=\lambda_{\min}(T)\}$.
Since $\lambda_S=\lambda_{\min}(T)$ is equivalent to $S=\{i\}$ with $p_i=p_1$, we can rewrite $\varphi=p_1{\bf 1}+\sum_{k\in L}\widehat\varphi_{\{k\}}{\bm v}_{\{k\}}$, where $L=\{i\in[n]:p_i=p_1\}$. Consequently it follows from Lemma~\ref{reduction2} that $\varphi$ is the indicator of a star centered at some $i\in L$. This completes the proof of Theorem~\ref{thm1}. \end{proof} \begin{example} Define a 2-wise intersecting family $\mathcal A\subset 2^{[n]}$ by $\mathcal A=\{A\in 2^{[n]}:|A\cap[3]|\geq 2\}$, and let ${\bm p}=(p_1,p_2,\ldots,p_n)$. Then \[ \mu_{\bm p}(\mathcal A) = p_1p_2q_3+p_1q_2p_3+q_1p_2p_3+p_1p_2p_3 =p_1p_2+p_1p_3+p_2p_3-2p_1p_2p_3. \] If $p_1=p_2=p_3$ then $\mu_{\bm p}(\mathcal A)=p_1^2(3-2p_1)$ and $\mu_{\bm p}(\mathcal A)>p_1$ iff $\frac12<p_1<1$. If $p_1=p_2$ and $p_3=\frac12$ then $\mu_{\bm p}(\mathcal A)=p_1$. These examples show the sharpness of the condition $p_3<\frac12$ in Conjecture~\ref{conj1} (if it is true) in the following sense. First, for the inequality ($\mu_{\bm p}(\mathcal A)\leq p_1$) we cannot replace the condition with $p_4<\frac12$. Second, to ensure the uniqueness we cannot replace the condition with $p_3\leq\frac12$. \qed \end{example} \subsection{3-wise case: Proof of Theorem~\ref{thm2}} \begin{prop}\label{prop1} Let $\frac23>p_1\geq\frac12$ and $p_1\geq p_2\geq\cdots\geq p_n>0$. Let $\mathcal A\subset 2^{[n]}$ be a $3$-wise intersecting family. Then \[ \mu_{\bm p}(\mathcal A) \leq p_1. \] Moreover equality holds if and only if $\mathcal A$ is a star centered at some $i\in[n]$ with $p_i=p_1$. \end{prop} \begin{proof} The case $n=1$. Let $V^{(1)}=2^{\{1\}}=\{\emptyset,\{1\}\}$, and we will define a symmetric signed measure $\mu_3^{(1)}:V^{(1)}\times V^{(1)}\times V^{(1)}\to\mathbb R$. Here, for simplicity, we write $0$ and $1$ to mean $\emptyset$ and $\{1\}$, e.g., we write $\mu_3^{(1)}(0,1,1)$ to mean $\mu_3^{(1)}(\emptyset,\{1\},\{1\})$.
Now $\mu_3^{(1)}$ is defined by \begin{align*} & \mu_3^{(1)}(0,1,1)= \mu_3^{(1)}(1,0,1)= \mu_3^{(1)}(1,1,0)=\frac12{p_1},\quad \mu_3^{(1)}(0,0,0)=1-\frac32p_1,\\ & \mu_3^{(1)}(1,0,0)= \mu_3^{(1)}(0,1,0)= \mu_3^{(1)}(0,0,1)= \mu_3^{(1)}(1,1,1)=0. \end{align*} Then \[ \mu_2^{(1)}(1,1)=\frac12p_1,\quad \mu_2^{(1)}(1,0)=\mu_2^{(1)}(0,1)=\frac12p_1,\quad \mu_2^{(1)}(0,0)=1-\frac32p_1, \] and \[ \mu_1^{(1)}(1)=p_1,\quad\mu_1^{(1)}(0)=q_1. \] It follows from $0<p_1<\frac23$ that $\mu_1^{(1)}$ and $\mu_2^{(1)}/\mu_1^{(1)}$ take values in $(0,1)$. So we can define a weighted 3-graph $H=(V^{(1)},\mu_3^{(1)})$. Then, from \eqref{def:T} and \eqref{def:Tx}, we have the following matrices. \[ T^{(1)}=\left[ \begin{matrix} 1-\frac{p_1}{2q_1}& \frac{p_1}{2q_1}\\ \frac12 & \frac12 \end{matrix} \right],\quad T^{(1)}_\emptyset=\left[ \begin{matrix} 1&0\\0&1 \end{matrix} \right],\quad T^{(1)}_{\{1\}}=\left[ \begin{matrix} 0&1\\ 1&0 \end{matrix} \right]. \] By direct computation we get the following table concerning spectral information. \begin{center} \begin{tabular}{|c||c|c|c|} \hline & $T^{(1)}$& $T^{(1)}_\emptyset$& $T^{(1)}_{\{1\}}$\\ \hline eigenvalues $\lambda$ & $1,1-\frac1{2q_1}$ & $1,1$ & $1,-1$\\ \hline $\lambda_{\min}$ & $1-\frac1{2q_1}$ & 1 & $-1$\\ \hline \end{tabular} \end{center} Let $\varphi$ be the indicator of a maximum independent set in $H$. Then $\alpha(H)=\mathbb E_{\mu^{(1)}_1}[\varphi]$ and $\emptyset\not\in\varphi$. So, by Theorem~\ref{thm:3-graph Hoffman}, we have \begin{align*} 1- \alpha(H)&\geq \frac1{(1-\lambda_{\min}(T^{(1)}))\max_{x\in\varphi} (1-\lambda_{\min}(T^{(1)}_x))}\\ &=\frac{1}{(1-1+\frac1{2q_1})(1+1)}=q_1, \end{align*} and $\alpha(H)\leq 1-q_1=p_1$. \smallskip The general case $n\geq 2$. For $i=1,2,\ldots, n$ let $V^{(i)}=2^{\{i\}}$ and let $\mu_3^{(i)}$ be defined as in the previous $n=1$ case. Let $H^{(i)}=(V^{(i)},\mu_3^{(i)})$ be the weighted 3-graph.
This induces the weighted 2-graph and the link graphs with the adjacency matrices $T^{(i)},T_\emptyset^{(i)}, T_{\{i\}}^{(i)}$, where \[ T^{(i)}= \left[ \begin{matrix} 1-\frac{p_i}{2q_i}& \frac{p_i}{2q_i}\\ \frac12& \frac12 \end{matrix} \right], \quad T_\emptyset^{(i)}=T_\emptyset^{(1)}, \quad T_{\{i\}}^{(i)}=T_{\{1\}}^{(1)}. \] We will construct a weighted 3-graph $H=(V,\mu_3)$ from $H^{(1)},\ldots, H^{(n)}$. Let $V=V^{(1)}\times \cdots\times V^{(n)}\cong 2^{[n]}$. Define $\mu_3:V^3\to\mathbb R$ by $\mu_3=\mu_3^{(1)}\times\cdots\times\mu_3^{(n)}$. Let $\mu_2\in\mathbb R^{V^2}$ and $\mu_1\in\mathbb R^{V}$ be the marginals. Then, as in Claim~\ref{mu1=mu1^1...}, we see that $\mu_i=\mu_i^{(1)}\times\cdots\times\mu_i^{(n)}$ for $i=1,2$; in particular, $\mu_1=\mu_{\bm p}$. Note also that both $\mu_1$ and $\mu_2/\mu_1$ take values in $(0,1)$; we need the condition $p_1<\frac23$ here. Consequently $H$ is a weighted 3-graph with the adjacency matrix $T=T^{(n)}\otimes\cdots\otimes T^{(1)}$. We apply Fact~\ref{T^(i)} with $r=3$ and Fact~\ref{ONB}. Then, for each $S\in V$, the matrix $T$ has an eigenvalue $\lambda_S:=\prod_{j\in S}\left(1-\frac1{2q_j}\right)$ with the corresponding eigenvector ${\bm v}_S$ from Fact~\ref{ONB}. Since $\frac12\leq p_1<\frac23$ and $\lambda_{\{1\}}=1-\frac1{2q_1}$ we have $-\frac12<\lambda_{\{1\}}\leq 0$ and $\lambda_{\min}(T)=\lambda_{\{1\}}$. The adjacency matrix of the link graph $H_S=(V,\mu_{2,S})$ is $T_S=T^{(n)}_{s_n}\otimes\cdots\otimes T^{(1)}_{s_1}$, where $s_i=S\cap\{i\}$. If $S\neq\emptyset$ then $T_S$ has eigenvalues $\{1,-1\}$ and \begin{align}\label{lambda(T_S)} \lambda_{\min}(T_S)=-1. \end{align} Let $\varphi$ be the indicator of a maximum independent set in $H$. We have $\emptyset\not\in\varphi$ because \[ \mu_3(\emptyset,\emptyset,\emptyset)=\prod_{i=1}^n\mu_3^{(i)}(0,0,0) =\prod_{i=1}^n(1-\tfrac32p_i)\neq 0.
\] Thus, by Theorem~\ref{thm:3-graph Hoffman}, we have \begin{align*} 1- \alpha(H)&\geq \frac1{(1-\lambda_{\min}(T))\max_{S\in\varphi}(1-\lambda_{\min}(T_S))}= \frac1{\left(1-(1-\frac1{2q_1})\right)(1-(-1))}=q_1, \end{align*} and $\alpha(H)\leq 1-q_1=p_1$. Now let $\mathcal A\subset 2^{[n]}$ be a $3$-wise intersecting family. If $A,B,C\in\mathcal A$ then there is some $i\in A\cap B\cap C$. Then $\mu_3(A,B,C)=0$ follows from the fact that $\mu_3^{(i)}(\{i\},\{i\},\{i\})=0$ together with the definition of $\mu_3$. This means that $\mathcal A\subset V$ is an independent set in $H$. Since $\mu_1=\mu_1^{(1)}\times\cdots\times\mu_1^{(n)}=\mu_{\bm p}$ we have $\mu_{{\bm p}}(\mathcal A)=\mu_1(\mathcal A)\leq\alpha(H)\leq p_1$, which completes the proof of the inequality. Finally we show the uniqueness of the equality case. Suppose that $\alpha(H)=p_1$ and let $\varphi$ be the indicator of a maximum independent set $I$ in $H$. Then $\emptyset\not\in I$ and $I$ is also an independent set in the link graph $H_S=(V,\mu_{2,S})$ if $S\neq\emptyset$. Thus by applying Theorem~\ref{thm:Hoffman bound for 2-graph} to $H_S$ with \eqref{lambda(T_S)} we have \begin{align}\label{E_1,x<1/2} \max_{S\in\varphi}\mathbb E_{\mu_1,S}[\varphi]\leq\max_{S\neq\emptyset} \frac{-\lambda_{\min}(T_S)}{1-\lambda_{\min}(T_S)}=\frac12. \end{align} (We note that \eqref{E_1,x<1/2} holds for the indicator of \emph{any} independent set, not necessarily a maximum one, and we will use this fact in the next subsection.) Then by Lemma~\ref{claim:E[f,f]} we have $\mathbb E_{\mu_2}[\varphi,\varphi] \leq \frac{p_1}2$. This together with $\widehat\varphi_\emptyset=p_1$ and $\lambda_{\min}(T)=1-\frac1{2q_1}$ verifies \eqref{eq:red1}, and we can apply Lemma~\ref{reduction1}. Since $\lambda_{\min}(T)$ is attained only by $\lambda_{\{i\}}$ with $i\in J:=\{j\in[n]:p_j=p_1\}$ we have $\varphi=p_1{\bf 1}+\sum_{j\in J}\widehat\varphi_{\{j\}}{\bm v}_{\{j\}}$.
Finally by Lemma~\ref{reduction2} it follows that $\varphi$ is the indicator of a star centered at some $i\in J$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm2}] We note that the $\mu_1$ in Theorem~\ref{thm1} and the $\mu_1$ in Proposition~\ref{prop1} are the same, and moreover $\mu_{\bm p}=\mu_1$. Then Theorem~\ref{thm2} for the case $p_1\leq\frac12$ follows from Theorem~\ref{thm1}, and the case $\frac12\leq p_1<\frac23$ follows from Proposition~\ref{prop1}. Thus we may assume that $p_1\geq\frac23$ and $p_2<\frac 23$. Now we follow the argument in \cite{FFFLO}. Let ${\bm p}=(p_1,p_2,p_3,\ldots,p_n)$ and ${\bm p}'=(p_2,p_2,p_3,\ldots,p_n)$, that is, ${\bm p}'$ is obtained from ${\bm p}$ by replacing $p_1$ with $p_2$. Let $\mathcal A$ be the star centered at $1$, and let $\mathcal B$ be an inclusion maximal $3$-wise intersecting family. Suppose that $\mathcal B\neq\mathcal A$; we will show that $\mu_{\bm p}(\mathcal A)>\mu_{\bm p}(\mathcal B)$. By construction we have $\mu_{\bm p}(\mathcal A)=p_1$ and $\mu_{{\bm p}'}(\mathcal A)=p_2$. Thus $\mu_{\bm p}(\mathcal A)=\frac{p_1}{p_2}\mu_{{\bm p}'}(\mathcal A)$. On the other hand, by Proposition~\ref{prop1}, we have $\mu_{{\bm p}'}(\mathcal B)\leq p_2$. Let $B\in\mathcal B$. If $1\in B$ then $\mu_{\bm p}(B)=\frac{p_1}{p_2}\mu_{{\bm p}'}(B)$. If $1\not\in B$ then $\mu_{\bm p}(B)=\frac{q_1}{q_2}\mu_{{\bm p}'}(B)< \frac{p_1}{p_2}\mu_{{\bm p}'}(B)$, where we used $p_1>p_2$. Since $\mathcal B\neq\mathcal A$ and $\mathcal B$ is inclusion maximal there is some $B\in\mathcal B$ such that $1\not\in B$, e.g., $\{2,3,\ldots,n\}\in\mathcal B$. Thus we have $\mu_{\bm p}(\mathcal B)<\frac{p_1}{p_2}\mu_{{\bm p}'}(\mathcal B)\leq p_1=\mu_{\bm p}(\mathcal A)$, as needed. This means that $\mathcal A$ is the only $3$-wise intersecting family which attains the maximum $\mu_{\bm p}$-measure. \end{proof} \subsection{Stability: Proof of Theorem~\ref{thm3}} Friedgut \cite{Friedgut} obtained a stability result for 2-wise $t$-intersecting families.
The special case $t=1$, which is a stability version of a result by Ahlswede--Katona \cite{AKa}, reads as follows. \begin{prop}[\cite{Friedgut}]\label{stability 2-wise} Let $0<p<\frac 12$ be fixed. Then there exists a constant $\epsilon_p>0$ such that the following holds for all $0<\epsilon<\epsilon_p$. If $\mathcal A\subset 2^{[n]}$ is a $2$-wise intersecting family with $\mu_{\bm p}(\mathcal A)=p-\epsilon$, then there exists a star $\mathcal B\subset 2^{[n]}$ such that $\mu_{\bm p}(\mathcal A\triangle\mathcal B)<(C_p+o(1))\epsilon$, where ${\bm p}=(p,p,\ldots,p)$ and $C_p=\frac{4q^2}{1-2p}$. \end{prop} \noindent For convenience we include the proof in the Appendix. (The constant $C_p$ is not explicitly computed in \cite{Friedgut}.) In this section we adapt Friedgut's proof to 3-wise intersecting families to show the following. \begin{prop}\label{stability 3-wise} Let $\frac12<p<\frac 23$ be fixed. Then there exists a constant $\epsilon_p>0$ such that the following holds for all $0<\epsilon<\epsilon_p$. If $\mathcal A\subset 2^{[n]}$ is a $3$-wise intersecting family with $\mu_{\bm p}(\mathcal A)=p-\epsilon$, then there exists a star $\mathcal B\subset 2^{[n]}$ such that $\mu_{\bm p}(\mathcal A\triangle\mathcal B)<(C_p+o(1))\epsilon$, where ${\bm p}=(p,p,\ldots,p)$ and \[ C_p=\frac{16pq^2}{(2p-1)(3-4p)}. \] \end{prop} For the proof we use the Kindler--Safra theorem, which extends the Friedgut--Kalai--Naor theorem \cite{FKN}. To state the result we need a definition. Let $V=2^{[n]}$. We say that a boolean function $g\in\{0,1\}^V$ \emph{depends on at most one coordinate} if $g$ is one of the following: \begin{itemize} \item[(G1)] there is some $i\in[n]$ such that $g={\bf 1}_{\{i\}}$, or \item[(G2)] there is some $i\in[n]$ such that $g={\bf 1}-{\bf 1}_{\{i\}}$, or \item[(G3)] $g$ is a constant function, that is, $g={\bf 0}$ or $g={\bf 1}$.
\end{itemize} (G1) means that $g$ is the indicator of the star centered at $i$, and (G2) means that $g$ is the indicator of the complement of the star. We can expand any boolean function $f\in\{0,1\}^V$ as $f=\sum_{S\in V}\widehat f_S{\bm v}_S$, where ${\bm v}_S$ is defined in Subsection~\ref{subsec:unique} (see Fact~\ref{ONB}). Let $f^{>1}:=\sum_{|S|>1}\widehat f_S{\bm v}_S$, and let $\|f\|$ denote the square root of ${\langle} f,f{\rangle}_{\mu_{\bm p}}$, where ${\bm p}=(p,\ldots,p)$. For example, if $f={\bf 1}_{\{i\}}$ then $f=p{\bm v}_\emptyset-\sqrt{pq}{\bm v}_{\{i\}}$, and $\|f\|^2=p$, $\|f^{>1}\|=0$. \begin{thm}[Kindler--Safra, Corollary 15.2 in \cite{Kindler}, see also \cite{KS}]\label{KS thm} Let $p\in(0,1)$ be fixed and let $V=2^{[n]}$. Let $f\in\{0,1\}^V$ and $\|f^{>1}\|^2\leq \delta\ll p$. Then there exists $g\in\{0,1\}^V$ which depends on at most one coordinate and $\|f-g\|^2<(4+o(1))\delta$. \end{thm} \noindent The $o(1)$ term is actually smaller than $c_1 \exp(-\frac{c_2}{\delta})$, where $c_1,c_2$ are positive constants depending only on $p$, see Corollary 6.1 in \cite{KS} for more details. \begin{proof}[Proof of Proposition~\ref{stability 3-wise}] Let $V=2^{[n]}$ and let $\mu_3\in\mathbb R^{V^3}$ be the measure defined in the proof of Proposition~\ref{prop1}. By definition of $\mu_3$ with $\frac12<p<\frac23$ we have that $0<\mu_2(x,y)<1$ for all $x,y\in V$, and $\mu_1=\mu_{\bm p}$. Thus we can define a weighted 3-graph $(V,\mu_3)$. Let $\varphi$ be the indicator of $\mathcal A$, and write \[ \varphi=\widehat\varphi_{\emptyset}{\bf 1}+\sum_{S\neq\emptyset}\widehat\varphi_S{\bm v}_S. \] To apply Theorem~\ref{KS thm} we need to show that $\|\varphi^{>1}\|$ is small. By Fact~\ref{fact2} (ii) we have \[ \mathbb E_{\mu_2}[\varphi,\varphi]=\widehat\varphi_{\emptyset}^2+\sum_{|S|=1}\widehat\varphi_S^2 \lambda_S +\sum_{|S|>1}\widehat\varphi_S^2 \lambda_S, \] where $\lambda_S=(1-\frac1{2q})^{|S|}$.
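As a numerical aside, the eigenvalues $\lambda_S=(1-\frac1{2q})^{|S|}$ just introduced can be sanity-checked for sample values of $p\in(\frac12,\frac23)$. The sketch below confirms the ordering of the powers of $\lambda_1:=1-\frac1{2q}$, and the last assertion checks the closed form $\frac1{\lambda_3-\lambda_1}\cdot\frac1{2q}=\frac{4q^2}{(2p-1)(3-4p)}$ that this proof relies on.

```python
# Sanity check (sketch) over sample values of p in (1/2, 2/3).
for p in (0.51, 0.55, 0.60, 0.65):
    q = 1 - p
    lam1 = 1 - 1 / (2 * q)
    assert -0.5 < lam1 < 0                  # lam1 lies in (-1/2, 0)

    # Among the powers lam1**|S|, the minimum is at |S| = 1 and the
    # second minimum at |S| = 3 (the powers alternate in sign and
    # shrink in modulus).
    powers = sorted(range(21), key=lambda k: lam1 ** k)
    assert powers[0] == 1 and powers[1] == 3

    # The constant produced by lam1 and lam3 = lam1**3:
    # (1/(lam3 - lam1)) * (1/(2q)) == 4q^2 / ((2p-1)(3-4p)).
    lam3 = lam1 ** 3
    lhs = (1 / (lam3 - lam1)) * (1 / (2 * q))
    rhs = 4 * q * q / ((2 * p - 1) * (3 - 4 * p))
    assert abs(lhs - rhs) < 1e-9
print("eigenvalue ordering and the constant check out")
```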
Since $-\frac12<1-\frac1{2q}<0$ the minimum and the second minimum eigenvalues come from the cases $|S|=1$ and $|S|=3$, respectively. So let $\lambda_1:=1-\frac1{2q}$ and $\lambda_3:=(1-\frac1{2q})^3$. Then we have \begin{align}\label{E[f,f] for 3-wise} \mathbb E_{\mu_2}[\varphi,\varphi]\geq\widehat\varphi_{\emptyset}^2 +\lambda_1\sum_{|S|=1}\widehat\varphi_S^2 +\lambda_3\sum_{|S|>1}\widehat\varphi_S^2. \end{align} Define $\tau$ by $\sum_{|S|>1}\widehat\varphi_S^2=\tau\widehat\varphi_{\emptyset}$. Then, by Fact~\ref{fact2} (i), $\sum_{|S|=1}\widehat\varphi_S^2=\widehat\varphi_{\emptyset}-\widehat\varphi_{\emptyset}^2-\tau\widehat\varphi_{\emptyset}$. Thus we have \[ \mathbb E_{\mu_2}[\varphi,\varphi]\geq\widehat\varphi_{\emptyset}^2 +\lambda_1(\widehat\varphi_{\emptyset}-\widehat\varphi_{\emptyset}^2-\tau\widehat\varphi_{\emptyset})+\lambda_3\tau\widehat\varphi_{\emptyset}. \] On the other hand we have $\mathbb E_{\mu_1}(\varphi)=\widehat\varphi_\emptyset$ and $\max_{x\in\varphi}\mathbb E_{\mu_1,x}[\varphi]\leq\frac12$ by \eqref{E_1,x<1/2}. Thus it follows from Lemma~\ref{claim:E[f,f]} that $\mathbb E_{\mu_2}[\varphi,\varphi]\leq\frac12\widehat\varphi_{\emptyset}$. So estimating $\mathbb E_{\mu_2}[\varphi,\varphi]/\widehat\varphi_{\emptyset}$ we get \[ \frac12\geq\widehat\varphi_{\emptyset}+\lambda_1(1-\widehat\varphi_{\emptyset}-\tau) +\lambda_3\tau, \] which yields \[ \tau\leq\frac1{\lambda_3-\lambda_1} \left(\frac12-\widehat\varphi_{\emptyset}-\lambda_1(1-\widehat\varphi_{\emptyset})\right) =\frac 1{\lambda_3-\lambda_1}\cdot\frac{\epsilon}{2q} =\frac{4q^2}{(2p-1)(3-4p)}\,\epsilon, \] where we used $\widehat\varphi_{\emptyset}=p-\epsilon$ for the first equality. Since \[ \|\varphi^{>1}\|^2=\sum_{|S|>1}\widehat\varphi_S^2=\tau\widehat\varphi_{\emptyset}<\tau p \] we obtain \[ \|\varphi^{>1}\|^2< \frac{4pq^2}{(2p-1)(3-4p)}\,\epsilon. 
\] By applying Theorem~\ref{KS thm} with $\delta=\frac{4pq^2}{(2p-1)(3-4p)}\epsilon$ we can find $g\in\{0,1\}^V$ which depends on at most one coordinate and $\|\varphi-g\|^2<(4+o(1))\delta$. We claim that $g$ is the indicator of a star, that is, (G1) happens. Note that $\frac12<p<\frac23$ and $p\gg\epsilon+\delta$ by the choice of $\epsilon$. So we have $\|\varphi-{\bf 0}\|^2=\|\varphi\|^2=p-\epsilon\gg \delta$ and $\|\varphi-{\bf 1}\|^2=1-(p-\epsilon)\gg\delta$. Thus (G3) cannot happen. If $g$ is the indicator of the complement of a star, then $\|g\|^2=1-p$ and \[ \|\varphi-g\|^2\geq(\|\varphi\|-\|g\|)^2=(\sqrt{p-\epsilon}-\sqrt{1-p})^2> \frac{16pq^2}{(2p-1)(3-4p)}\epsilon=4\delta \] by choosing $\epsilon\ll p$ small enough (we need to choose $\epsilon$ quite small when $p$ is close to $1/2$). This shows that (G2) cannot happen. So the only possibility is (G1), as needed. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm3}] Let $\mathcal A\subset 2^{[n]}$ be a 3-wise intersecting family with $\mu_{\bm p}(\mathcal A)=p-\epsilon$. First let $\frac12<p<\frac23$. Then (ii) of the theorem follows from Proposition~\ref{stability 3-wise}. Next let $p=\frac12$. It follows from the Brace--Daykin Theorem \cite{BD} that if ${\mathcal F}\subset2^{[n]}$ is a 3-wise intersecting family which is not a subfamily of a star, then \[ |{\mathcal F}|\leq|\{F\subset[n]:|F\cap[4]|\geq 3\}|, \] or equivalently $\mu_{\bm p}({\mathcal F})\leq\frac5{16}$, where ${\bm p}=(\frac12,\ldots,\frac12)$. Thus if $p-\epsilon>\frac5{16}$, that is, $0<\epsilon<\frac3{16}$, then $\mathcal A$ is a subfamily of a star, which shows (i) of the theorem in this case. Finally let $0<p<\frac12$. We say that $\mathcal A$ is 2-wise 2-intersecting if $|A\cap A'|\geq 2$ for all $A,A'\in\mathcal A$. If $\mathcal A$ is \emph{not} 2-wise 2-intersecting, then there exist $A,A'\in\mathcal A$ such that $|A\cap A'|=1$, say, $A\cap A'=\{i\}$.
In this case every $A\in\mathcal A$ must contain $i$ due to the 3-wise intersecting condition. Thus $\mathcal A$ is contained in a star $\mathcal B$ centered at $i$, and we get (i) of the theorem in this case. The only remaining case is that $\mathcal A$ is 2-wise 2-intersecting, and we show that this cannot happen. Let $i\geq 0$ be such that $\frac i{2i+1}\leq p\leq\frac{i+1}{2i+3}$. Then it follows from the Ahlswede--Khachatrian theorem \cite{AK} that $\mu_{\bm p}(\mathcal A)\leq\mu_{\bm p}(\mathcal G_i)$, where \[ \mathcal G_i=\{G\subset[n]:|G\cap[2i+2]|\geq i+2\}. \] A direct computation shows that $\mu_{\bm p}(\mathcal G_i)=\sum_{j=0}^i\binom{2i+2}jp^{2i+2-j}(1-p)^j<p$. So by choosing $\epsilon<\epsilon_p:=p-\mu_{\bm p}(\mathcal G_i)$ we see that $\mu_{\bm p}(\mathcal A)=p-\epsilon>p-\epsilon_p=\mu_{\bm p}(\mathcal G_i)$, a contradiction. \end{proof} \section{Concluding remarks} \subsection{Generalization to $r$-graphs} Filmus et al.\ extended Hoffman's bound to $r$-graphs in \cite{FGL}. We briefly explain how to extend Theorem~\ref{thm:3-graph Hoffman} to an $r$-graph by induction on $r$. Let $V$ be a finite set with $|V|\geq 2$. We define a weighted $r$-graph on $V$ as follows. \begin{defn} Let $\mu_r:V^r\to\mathbb R$ be a symmetric signed measure. We say that $H=(V,\mu_r)$ is a weighted $r$-graph if $\mu_{r-1}(x_1,\ldots,x_{r-1})>0$ for all $x_1,\ldots,x_{r-1}\in V$, where \[ \mu_{r-1}(x_1,\ldots,x_{r-1}):=\sum_{y\in V}\mu_r(x_1,\ldots,x_{r-1},y). \] \end{defn} \noindent For $i=r-2,r-3,\ldots,1$ we define a measure $\mu_i\in\mathbb R^{V^i}$ inductively by \[ \mu_i(x_1,\ldots,x_i):=\sum_{y\in V}\mu_{i+1}(x_1,\ldots,x_i,y). \] Note that $\mu_i(x_1,\ldots,x_i)>0$ for all $x_1,\ldots,x_i\in V$. Let $\varphi$ be the indicator of an independent set $I$ in the weighted $r$-graph $H=(V,\mu_r)$. Then Lemma~\ref{E[f,f]>} and Lemma~\ref{claim:E[f,f]} work for $H$ as well. Here we define $\mu_{r-1,x}(y_2,\ldots,y_r):=\mu_r(x,y_2,\ldots,y_r)/\mu_1(x)$.
Then $\mathbb E_{\mu_{1,x}}[\varphi]$ is bounded from above by $\alpha(H_x)$, where $H_x=(V,\mu_{r-1,x})$ is the link $(r-1)$-graph of $H$ relative to $x$. By the induction hypothesis we can bound $\alpha(H_x)$, and we eventually bound $\mathbb E_{\mu_1}[\varphi]$ using Lemma~\ref{E[f,f]>} and Lemma~\ref{claim:E[f,f]}. To state the bound, for $s=1,\ldots,r-2$, and $S=\{v_1,\ldots,v_s\}$, where $v_1,\ldots, v_s\in V$, let $T_S$ be the adjacency matrix of the link $(r-s)$-graph relative to $S$, defined by \begin{align}\label{matrix for link} (T_S)_{x,y}=\frac{\mu_{s+2}(v_1,\ldots,v_s,x,y)}{\mu_{s+1}(v_1,\ldots,v_s,x)}. \end{align} Let $\lambda_s:=\min_{S}\lambda_{\min}(T_S)$, where the minimum is taken over all $s$-element (multi)subsets $S$ of $I$. Also let $\lambda_0:=\lambda_{\min}(T)$, where $T$ is the adjacency matrix of $H$ defined by \eqref{def:T}. Then the Filmus--Golubev--Lifshitz bound is stated as follows. \begin{align}\label{FGL bound} \mathbb E_{\mu_1}[\varphi]\leq 1-\prod_{s=0}^{r-2}\frac1{1-\lambda_s}. \end{align} With this bound it is not difficult to extend Theorem~\ref{thm2} to $r$-wise intersecting families. \begin{thm}\label{thm2 for r-wise} Let $1>p_1\geq p_2\geq\cdots\geq p_n>0$ and ${\bm p}=(p_1,p_2,\ldots,p_n)$. Let $r\geq 3$. If $\frac{r-1}r>p_2$ and $\mathcal A\subset 2^{[n]}$ is an $r$-wise intersecting family, then $\mu_{\bm p}(\mathcal A) \leq p_1$. Moreover equality holds if and only if $\mathcal A$ is a star centered at some $i\in[n]$ with $p_1=p_i$. \end{thm} The proof of Theorem~\ref{thm2 for r-wise} proceeds exactly as that of Theorem~\ref{thm2}, and the main part is the proof of the following result, which corresponds to Proposition~\ref{prop1}. \begin{prop} Let $\frac {r-1}r>p_1\geq\frac{r-2}{r-1}$ and $p_1\geq p_2\geq\cdots\geq p_n>0$. Let $\mathcal A\subset 2^{[n]}$ be an $r$-wise intersecting family. Then \[ \mu_{\bm p}(\mathcal A) \leq p_1. \] Moreover equality holds if and only if $\mathcal A$ is a star centered at some $i\in[n]$ with $p_i=p_1$.
\end{prop} \begin{proof} The matrices for the proof are different from the ones used in the proof of Theorem~7.1 in \cite{FGL}, because we will introduce a parameter $\epsilon>0$ so that all the measures $\mu_i$ take positive values and \eqref{matrix for link} is well defined. Here we only record the matrices and the corresponding eigenvalues. Otherwise the proof is the same as the one for Proposition~\ref{prop1}. Let $n=1$ and $V=\{\emptyset, \{1\}\}$. We need to define a symmetric measure $\mu_r^{(1)}$. For this we start with a symmetric function $\mu_r^{(1)}:V^r\to \mathbb R$ defined by \[ \mu_r^{(1)}(0^i,1^{r-i}) = \begin{cases} 0 & \text{if }i=0,\\ \frac {p_1}{r-1}-\delta_1& \text{if }i=1,\\ \epsilon &\text{if }2\leq i\leq r-1,\\ 1-\frac{rp_1}{r-1}-\delta_2&\text{if }i=r, \end{cases} \] where $\epsilon,\delta_1,\delta_2$ are small positive constants, and $(0^i,1^{r-i})$ in the LHS means $\emptyset$ repeated $i$ times followed by $\{1\}$ repeated $r-i$ times. Since $\mu_r^{(1)}$ is symmetric, any permutation of $(0^i,1^{r-i})$ takes the same value. We require $\sum_{x\in V^r}\mu_r^{(1)}(x)=1$ for $\mu_r^{(1)}$ to be a measure, that is, \[ \sum_{x\in V^r}\mu_r^{(1)}(x)= \binom r1\left(\frac{p_1}{r-1}-\delta_1\right) +\sum_{i=2}^{r-1}\binom ri\epsilon +\binom rr\left(1-\frac{rp_1}{r-1}-\delta_2\right)=1. \] We also require that the induced measure $\mu_1^{(1)}$ is the $p_1$-biased one, that is, $\mu_1^{(1)}(1)=p_1$, and so \begin{align*} \mu_1^{(1)}(1)&=\sum_{x_2\in V}\mu_2^{(1)}(1,x_2)=\cdots= \sum_{(x_2,\ldots,x_r)\in V^{r-1}}\mu_r^{(1)}(1,x_2,\ldots,x_r)\\ &=\binom{r-1}1\left(\frac{p_1}{r-1}-\delta_1\right) +\sum_{i=2}^{r-1}\binom{r-1}i\epsilon=p_1. \end{align*} These two requirements yield that \[ \delta_1=\frac{2^{r-1}-r}{r-1}\epsilon,\quad \delta_2=\frac{(2^{r-1}-1)(r-2)}{r-1}\epsilon. \] Then $H=(V,\mu_r^{(1)})$ is a weighted $r$-graph by choosing $\epsilon>0$ sufficiently small.
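The two requirements above can be verified numerically: with the stated $\delta_1$ and $\delta_2$, the function $\mu_r^{(1)}$ sums to $1$ over $V^r$ and its marginal satisfies $\mu_1^{(1)}(1)=p_1$. A sketch (the helper name and sample parameters are illustrative):

```python
import math

def check(r, p1, eps):
    # delta_1 and delta_2 as stated in the construction above.
    d1 = (2 ** (r - 1) - r) / (r - 1) * eps
    d2 = (2 ** (r - 1) - 1) * (r - 2) / (r - 1) * eps

    def mu(i):
        # Value of mu_r^{(1)} on tuples with i copies of "0" (empty set)
        # and r - i copies of "1".
        if i == 0:
            return 0.0
        if i == 1:
            return p1 / (r - 1) - d1
        if i == r:
            return 1 - r * p1 / (r - 1) - d2
        return eps                        # the cases 2 <= i <= r-1

    # Total mass: sum over V^r, grouping tuples by the number of zeros.
    total = sum(math.comb(r, i) * mu(i) for i in range(r + 1))
    # Marginal at "1": fix the first coordinate and sum over the rest.
    marginal = sum(math.comb(r - 1, i) * mu(i) for i in range(r))
    return total, marginal

for r in (3, 4, 5, 6):
    p1 = (r - 2) / (r - 1) + 0.01        # sample value in the assumed range
    total, marginal = check(r, p1, eps=1e-4)
    assert abs(total - 1) < 1e-12 and abs(marginal - p1) < 1e-12
print("mu_r^{(1)} is a measure with marginal p1")
```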
For each $s=1,\ldots,r-2$, and $S\in V^s$, the link $(r-s)$-graph $H_S=(V,\mu_{r-s,S}^{(1)})$ is induced from $H$. Let $T^{(1)}(\epsilon)$ and $T_S^{(1)}(\epsilon)$ be the adjacency matrices corresponding to $H$ and $H_S$, respectively, and let $T^{(1)}=\lim_{\epsilon\to 0}T^{(1)}(\epsilon)$ and $T^{(1)}_S=\lim_{\epsilon\to 0}T_S^{(1)}(\epsilon)$. After a somewhat tedious but direct computation one can verify that, for $i\geq 2$ and $j\geq 1$, \begin{align*} T^{(1)}&= \left[ \begin{matrix} 1-\frac{p_1}{(r-1)q_1}& \frac{p_1}{(r-1)q_1}\\ \frac1{r-1}& 1-\frac1{r-1} \end{matrix} \right], & T_0^{(1)}&= \left[ \begin{matrix} 1&0\\ 0&1 \end{matrix} \right], & T_{1^j}^{(1)}&= \left[ \begin{matrix} 0&1\\ \frac1{r-j-1}&\frac{r-j-2}{r-j-1} \end{matrix} \right], \\ T_{0^i1^j}^{(1)}&= \frac12 \left[ \begin{matrix} 1&1\\ 1&1 \end{matrix} \right], & T_{0^i}^{(1)}&= \frac12 \left[ \begin{matrix} 2&0\\ 1&1 \end{matrix} \right], & T_{01^j}^{(1)}&= \frac12 \left[ \begin{matrix} 1&1\\ 0&2 \end{matrix} \right]. \end{align*} The above six matrices have the corresponding eigenvalues below: \begin{align*} &\{1-\tfrac1{(r-1)q_1}, 1\}, && \{1,1\}, && \{-\tfrac1{r-j-1}, 1\}, \\ &\{0,1\}, && \{\tfrac12,1\}, && \{\tfrac12,1\}. \end{align*} Thus we have $\lambda_0^{(1)}:=\lambda_{\min}(T)=1-\frac1{(r-1)q_1}<0$, and $\lambda_s^{(1)}:=\min_{S\in V^s}\lambda_{\min}(T_S)=-\frac1{r-s-1}$ for $s=1,\ldots,r-2$. Let $\varphi$ be the indicator of an independent set in $H$. Then by \eqref{FGL bound} we have $\mathbb E_{\mu_1}[\varphi]\leq 1-\prod_{s=0}^{r-2}\frac1{1-\lambda_s^{(1)}}=p_1$. For the general case let $n\geq 2$ and $V=2^{[n]}$. We define the measure $\mu_r:V^r\to\mathbb R$ by $\mu_r:=\mu_r^{(1)}\times\cdots\times\mu_r^{(n)}$. Then the corresponding adjacency matrices are obtained by taking tensor product of the ones in the $n=1$ case. 
So $T=T^{(n)}\otimes\cdots\otimes T^{(1)}$ with eigenvalues \begin{align}\label{lambda_v} \lambda_v(T):=\prod_{i\in v}\left(1-\frac1{(r-1)q_i}\right) \end{align} for $v\in V$, and $\lambda_0:=\min_{v\in V}\lambda_v(T)=\lambda_{\{1\}} =1-\frac1{(r-1)q_1}$. For $1\leq s\leq r-2$ and $S\in V^s$ we have $T_S=T_S^{(n)}\otimes\cdots\otimes T_S^{(1)}$ with \begin{align}\label{lambda_s} \lambda_s:=\min_{S\in V^s}\lambda_{\min}(T_S)=\lambda_{\min}(T_{1^s})= -\frac1{r-s-1}. \end{align} Finally it follows from \eqref{FGL bound} that $\mathbb E_{\mu_1}[\varphi]\leq p_1$. \end{proof} \begin{conj} The condition $\frac{r-1}r>p_2$ in Theorem~\ref{thm2 for r-wise} can be replaced with $\frac{r-1}r>p_{r+1}$. In particular, Theorem~\ref{thm2} holds if $p_4<\frac 23$ instead of $p_3<\frac 23$. \end{conj} \noindent On the other hand, the condition above cannot be replaced with $\frac{r-1}r>p_{r+2}$. To see this let $\mathcal A=\{A\in 2^{[n]}:|A\cap[r+1]|\geq r\}$, and $p_1=\cdots=p_{r+1}=:p$. Then $\mathcal A$ is an $r$-wise intersecting family with $\mu_{\bm p}(\mathcal A)=(r+1)p^rq+p^{r+1}$. A computation shows that $\mu_{\bm p}(\mathcal A)$ is greater than $p$ provided, e.g., $p\geq 1-\frac1{r^2}$. More generally we can ask the following. \begin{prob} Let $1>p_1\geq p_2\geq\cdots\geq p_n>0$ and ${\bm p}=(p_1,p_2,\ldots,p_n)$. Determine the maximum of $\mu_{\bm p}(\mathcal A)$, where $\mathcal A\subset 2^{[n]}$ is an $r$-wise intersecting family. \end{prob} Proposition~\ref{stability 3-wise} can be extended to $r$-wise intersecting families as follows. \begin{prop}\label{stability r-wise} Let $r\geq 3$ and $\frac{r-2}{r-1}<p<\frac {r-1}r$ be fixed. Then there exists a constant $\epsilon_{r,p}>0$ such that the following holds for all $0<\epsilon<\epsilon_{r,p}$.
If $\mathcal A\subset 2^{[n]}$ is an $r$-wise intersecting family with $\mu_{\bm p}(\mathcal A)=p-\epsilon$, then there exists a star $\mathcal B\subset 2^{[n]}$ such that $\mu_{\bm p}(\mathcal A\triangle\mathcal B)<(C_p+o(1))\epsilon$, where ${\bm p}=(p,p,\ldots,p)$ and \[ C_p=\frac{4(r-1)^2pq^2}{\left((r-1)p-(r-2)\right)\left((2r-3)-2(r-1)p\right)}. \] \end{prop} \begin{proof} The proof is the same as the proof of Proposition~\ref{stability 3-wise}. We estimate \eqref{E[f,f] for 3-wise} from both sides. For the RHS we see from \eqref{lambda_v} that the minimum and the second minimum eigenvalues come from the cases $|v|=1$ and $|v|=3$, so \[ \lambda_1=1-\frac1{(r-1)q}, \quad \lambda_3=\lambda_1^3. \] For the LHS we use Lemma~\ref{claim:E[f,f]} with \eqref{lambda_s}, and we have \[ \mathbb E_{\mu_2}[\varphi,\varphi]\leq\varphi_{\emptyset} \left(1-\prod_{s=1}^{r-2}\frac1{1-\lambda_s}\right)=1-\frac1{r-1}. \] Then we get the $C_p$ exactly in the same way as in the proof of Proposition~\ref{stability 3-wise}. \end{proof} \subsection{More about stability} Let $0<p<1$ and ${\bm p}=(p,\ldots,p)\in(0,1)^n$. In Theorem~\ref{thm3} the statement of stability differs between the two cases (i) and (ii). In (ii) (the case $p>\frac12$) we have the following example: \[ \mathcal A_n:=\left(\{A\in 2^{[n]}:1\in A,\,|A|>\tfrac n2\}\setminus\{[1]\}\right) \sqcup\{[n]\setminus[1]\}. \] Then $\mathcal A_n$ is a 3-wise intersecting family with $\mu_{\bm p}(\mathcal A_n)\to p$ as $n\to\infty$, but $\mathcal A_n$ is not contained in any star. It is worth noting that the stability in (i) (the case $p\leq\frac12$) also differs from the situation in the 2-wise intersecting case in Proposition~\ref{stability 2-wise}. Indeed let \[ \mathcal A'_n:=\left(\{A\in 2^{[n]}:1\in A\}\setminus\{\{1\}\}\right)\cup\{[n]\setminus\{1\}\}, \] then $\mathcal A'_n$ is a 2-wise intersecting family with $\mu_{\bm p}(\mathcal A'_n)\to p$, but no star can contain $\mathcal A'_n$.
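For small $n$ the family $\mathcal A'_n$ above can be checked mechanically; the following brute-force sketch (with illustrative values $n=6$, $p=0.4$) verifies the 2-wise intersecting property and the exact $p$-biased measure:

```python
# Brute-force check that A'_n = ({A : 1 in A} \ {{1}}) u {[n]\{1}} is
# 2-wise intersecting, and that mu_p(A'_n) = p - p*q^(n-1) + q*p^(n-1).
# n = 6 and p = 0.4 are illustrative choices.
from itertools import combinations

def family(n):
    ground = range(1, n + 1)
    sets = [frozenset(s) for k in range(n + 1)
            for s in combinations(ground, k)]
    A = [S for S in sets if 1 in S and S != frozenset({1})]
    A.append(frozenset(range(2, n + 1)))  # the extra set [n] \ {1}
    return A

def measure(A, n, p):
    # product measure mu_p of a family of subsets of [n]
    return sum(p ** len(S) * (1 - p) ** (n - len(S)) for S in A)

n, p = 6, 0.4
q = 1 - p
A = family(n)
assert all(S & B for S in A for B in A)   # every pair of members intersects
assert abs(measure(A, n, p)
           - (p - p * q ** (n - 1) + q * p ** (n - 1))) < 1e-12
```

As $n\to\infty$ the two correction terms vanish, so the measure tends to $p$ even though $\mathcal A'_n$ is contained in no star.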
The constant $C_p$ in Theorem~\ref{thm3} becomes very large when $p$ is slightly more than $\frac12$. This is because our proof relies on Theorem~\ref{KS thm} and we need to distinguish our indicator from the indicator of (G2). But the family corresponding to (G2) is not 3-wise intersecting at all. This suggests that the $C_p$ could be far from the best possible value, especially when $p$ is close to $\frac12$; more boldly, we conjecture the following. \begin{conj} There exists $C_p'$ such that the inequality in (ii) of Theorem~\ref{thm3} can be replaced with $\mu_{{\bm p}}(\mathcal A\triangle\mathcal B)<(C_p'+o(1))\epsilon$, where $C_p'\leq C_p$ and moreover $C_p'$ is increasing in $p$ for $\frac12\leq p<\frac23$. \end{conj} Item (ii) of Theorem~\ref{thm3} can be extended to the $r$-wise intersecting case as in Proposition~\ref{stability r-wise}. So perhaps item (i) could be extended to the $r$-wise intersecting case as well. \begin{prob} Let $r\geq 4$ and $p\leq\frac{r-2}{r-1}$. Is it true that the item (i) of Theorem~\ref{thm3} holds as well for $r$-wise intersecting families? \end{prob} It is also interesting to see whether or not Theorem~\ref{thm3} (and/or Theorem~\ref{KS thm}) can be extended to a general ${\bm p}=(p_1,p_2,\ldots,p_n)$. \begin{prob} What happens if we replace ${\bm p}=(p,p,\ldots,p)$ in Theorem~\ref{thm3} with ${\bm p}=(p_1,p_2,\ldots,p_n)$ where $\frac 23>p_1\geq p_2\geq\cdots\geq p_n$? \end{prob} \subsection{Multiply cross intersecting families} We say that $r$ families $\mathcal A_1,\mathcal A_2,\ldots,\mathcal A_r\subset 2^{[n]}$ are \emph{$r$-cross intersecting} if $A_1\cap A_2\cap\cdots\cap A_r\neq\emptyset$ for all $A_1\in\mathcal A_1,A_2\in\mathcal A_2,\ldots,A_r\in\mathcal A_r$. Let ${\bm p}_1,{\bm p}_2,\ldots,{\bm p}_r\in(0,1)^n$ be given vectors. Then one can ask for the maximum of $\prod_{i=1}^r\mu_{{\bm p}_i}(\mathcal A_i)$ for $r$-cross intersecting families. For the case $r=2$, Suda et al.\ obtained the following result.
\begin{thm}[\cite{STT}]\label{STT-thm} For $i=1,2$ let ${\bm p}_i=(p_i^{(1)},\ldots,p_i^{(n)})$, and $p_i=\max\{p_i^{(\ell)}:\ell\in [n]\}$. Suppose that $p_1^{(\ell)},p_2^{(\ell)}\leq 1/2$ for $\ell\geq 2$. If $\mathcal A_1,\mathcal A_2\subset 2^{[n]}$ are $2$-cross intersecting, then \[ \mu_{{\bm p}_1}(\mathcal A_1) \mu_{{\bm p}_2}(\mathcal A_2)\leq p_1p_2. \] Moreover, unless $p_1=p_2=1/2$ and $|w|\geq 3$, equality holds if and only if both $\mathcal A_1$ and $\mathcal A_2$ are the same star centered at some $\ell\in w$, where $w:=\bigl\{\ell\in [n]:(p_1^{(\ell)},p_2^{(\ell)})=(p_1,p_2)\bigr\}$. \end{thm} Almost nothing is known for the cases $r\geq 3$. Perhaps the easiest open problem is the case when $r=3$ and ${\bm p}_i=(p,p,\ldots,p)$ for all $1\leq i\leq 3$. \begin{conj} Let $p\leq\frac23$ and ${\bm p}=(p,p,\ldots,p)$. If $\mathcal A_1,\mathcal A_2,\mathcal A_3\subset 2^{[n]}$ are $3$-cross intersecting, then $\mu_{{\bm p}}(\mathcal A_1)\mu_{{\bm p}}(\mathcal A_2)\mu_{{\bm p}}(\mathcal A_3)\leq p^3$. \end{conj} \section{Acknowledgment} The author thanks Tsuyoshi Miezaki for valuable discussions. He also thanks the referees for their very careful reading and helpful suggestions. This research was supported by JSPS KAKENHI Grant No. 18K03399.
\section{Introduction} Various cosmological observations, including type Ia supernovae [1], the cosmic microwave background radiation [2] and the large scale structure [3,4], have shown that the universe is undergoing an accelerating expansion and that it entered this accelerating phase only in the recent past. This unexpected observed phenomenon poses one of the most puzzling problems in cosmology today. Usually, it is assumed that there exists in our universe an exotic energy component with negative pressure, named dark energy $(DE)$, which dominates the universe and drives it to an accelerating expansion at recent times. Many candidates for DE have been proposed, such as the cosmological constant, quintessence, phantom, quintom as well as the (generalized) Chaplygin gas, and so on. Alternatively, however, we can take this observed accelerating expansion as a signal of the breakdown of our understanding of the laws of gravitation; thus, a modified theory of gravity is needed. Modified theories of gravity, e.g., scalar-tensor theory, Brans-Dicke theory, string theory, Gauss-Bonnet theory, $f(R)$ theory, $f(T)$ gravity etc., have gained a lot of interest during the last decade. These theories provide a very natural gravitational alternative to DE: the modification of the gravitational action may resolve cosmological problems such as the $DE$ and dark matter $(DM)$ issues. In this paper we focus our attention only on the $f(T)$ theory of gravity. This theory is a generalization of the teleparallel theory of gravity [5], and it has been proposed to account for the present accelerating expansion [6-9]. In teleparallel gravity $(TPG)$, we use the Weitzenb\"{o}ck connection instead of the Levi-Civita connection, which is usually used in $GR$. As a result, in $TPG$, the Weitzenb\"{o}ck spacetime has non-zero torsion and is curvature free.
Similar to $GR$, where the action involves the curvature scalar $R$, the action of $TPG$ is obtained by simply replacing $R$ with the torsion scalar $T$. In analogy to $f(R)$ theory, Bengochea and Ferraro [6] suggested a modified $TPG$ theory, named $f(T)$ theory, by generalizing the action of $TPG$, i.e., by replacing $T$ with $f(T)$. They found that it can explain the observed acceleration of the universe. It is worth mentioning here that the field equations of $f(R)$ theory are of fourth order while the field equations of $f(T)$ theory are of second order, which seem easier to solve. Linder proposed two new $f(T)$ models in order to explain the present cosmic accelerating expansion [7]. He argued that $f(T)$ theory could unify a number of interesting extensions of gravity beyond $GR$, and showed that power-law and exponential models depending upon torsion might lead to a de Sitter fate of the universe. Wu and Yu [10] analyzed the dynamical properties of this theory by using a concrete power-law model and showed that the universe could evolve from the radiation dominated era to the matter dominated era and finally enter an exponential expansion era. Yang [11] introduced some new $f(T)$ models and gave their physical implications and cosmological behavior. Wu and Yu [12] discussed two new $f(T)$ models and showed how the crossing of the phantom divide line takes place; they also explained the observational constraints corresponding to these models. Karami and Abdolmaleki [13] found that the equation of state $(EoS)$ parameter of the holographic and new agegraphic $f(T)$ models always crosses the phantom divide line, whereas the entropy-corrected models do so only under certain conditions on the model parameters. The same authors [14] obtained the $EoS$ parameter of the polytropic, standard, generalized and modified Chaplygin gas models in this modified scenario. Dent, et al. [15] investigated this theory at the background and perturbed levels and also explored it for quintessence scenarios. Li, et al.
[16] explored local Lorentz invariance and remarked that $f(T)$ theory is not locally Lorentz invariant. Chen, et al. [17] investigated expressions for the growth factor, stability and vector-tensor perturbations. Bamba, et al. [18] studied the cosmological evolution of the $EoS$ parameter in exponential, logarithmic and combined $f(T)$ models. Wang [19] sought spherically symmetric static solutions of $f(T)$ models with a Maxwell term and demonstrated that in conformal Cartesian coordinates the Reissner-Nordstrom solution does not exist in this theory. Myrzakulov [20] discussed different $f(T)$ models including scalar fields and gave analytical solutions for the scale factors and scalar fields. Sharif and Rani explored the Bianchi type-I universe using different $f(T)$ gravity models [21]; they also discussed K-essence models in the framework of $f(T)$ gravity. Recently, we explored Kantowski-Sachs universe models in $f(T)$ theory of gravity [22], and some further interesting $f(T)$ models have been explored by different authors in [23]-[25]. In this paper, we explore some $f(T)$ models within the Kantowski-Sachs universe. For this purpose, we use the conservation equation and the $EoS$ parameter, which represent the different phases of the universe. We also discuss the cosmic acceleration of the universe and the $EoS$ parameter by considering two particular $f(T)$ models. The structure of the paper is as follows. In section $2$, we present some basics of the $f(T)$ theory of gravity and the corresponding field equations for the Kantowski-Sachs spacetime. Section $3$ contains a detailed construction of $f(T)$ models by using two different approaches. Section $4$ is devoted to the study of the $EoS$ parameter for two particular models together with a discussion of cosmic acceleration. In the last section, we summarize and conclude the results.
\section{An Overview of Generalized Teleparallel Theory $f(T)$} In this section, we briefly introduce the teleparallel theory of gravity and its generalization to $f(T)$ theory. The Lagrangian densities for teleparallel and $f(T)$ gravity are, respectively, given as follows [22]: \begin{eqnarray} L_T&=&\frac{h}{16 \pi G}T,\\ L_{F(T)}&=&\frac{h}{16 \pi G}F(T), \end{eqnarray} where $T$ is the torsion scalar, $f(T)$ is a general differentiable function of torsion, $G$ is the gravitational constant and $h=\det({h^i}_\mu)$. Mathematically, the torsion scalar is defined as \begin{eqnarray} T= {S_\rho}^{\mu\nu}{T^\rho}_{\mu\nu}, \end{eqnarray} where ${S_\rho}^{\mu\nu}$ is antisymmetric in its upper indices while ${T^\rho}_{\mu\nu}$ is the torsion tensor, antisymmetric in its lower indices. Here ${S_\rho}^{\mu\nu}$ is determined by the relation \begin{eqnarray} S^{\mu\rho\sigma}=\frac{1}{4}(T^{\mu\rho\sigma}+T^{\rho\mu\sigma}- T^{\sigma\mu\rho})-\frac{1}{2}(g^{\mu\sigma}{T^{\lambda\rho}}_\lambda-g^{\rho\mu} {T^{\lambda\sigma}}_\lambda) \end{eqnarray} and ${T^\lambda}_{\mu\nu}$ is defined as [26] \begin{eqnarray} {T^\lambda}_{\mu\nu}={\Gamma^\lambda}_{\nu\mu}-{\Gamma^\lambda}_{\mu\nu}={h^\lambda}_i \left(\partial_\mu{h^i}_\nu-\partial_\nu{h^i}_\mu\right). \end{eqnarray} Here ${h^i}_\mu$ are the components of the non-trivial tetrad field $h_i$ in the coordinate basis. The tetrad field is related to the metric tensor ${g}_{\mu\nu}$ by \begin{eqnarray} g_{\mu\nu}=\eta_{ij} { h^i}_\mu {h^j}_\nu, \end{eqnarray} where $\eta_{ij}$ is the Minkowski metric of the tangent space, $\eta_{ij}=diag(+1,-1,-1,-1)$. For a given metric there exist infinitely many different tetrad fields ${h^i}_\mu $ which satisfy the following properties: \begin{eqnarray} {h^i}_\mu{h_j}^\mu={\delta_j}^i; {h^i}_\mu{h_i}^\nu={\delta_\mu}^\nu. \end{eqnarray} In this paper, the Latin alphabets $(i,j,..
=0,1,2,3)$ will be used to denote the tangent space indices and the Greek alphabets $(\mu,\nu,... =0,1,2,3)$ to denote the spacetime indices. Any variation in the indices from the above mentioned range will be specified when needed. The variation of Eq.(2) with respect to the tetrad field leads to the following field equations \begin{eqnarray} \left[h^{-1} \partial_\mu\left(h{S_i}^{\mu\nu}\right)+{h_i}^\lambda {T^\rho}_{\mu\lambda} {S_\rho}^{\nu\mu}\right]F_T+{S_i}^{\mu\nu}\partial_\mu(T)F_{TT}\nonumber\\+ \frac{1}{4}{h_i}^\nu F =\frac{1}{2}\kappa^{2}{h_i}^\rho {T_\rho}^\nu. \end{eqnarray} Here $F_T=\frac{dF}{dT},\ F_{TT}=\frac{d^2 F}{dT^2},\ \kappa^2=8\pi G,\ {S_i}^{\mu\nu}={h_i}^\rho{S_\rho}^{\mu\nu}$, and ${T_\rho}^\nu$ is the energy-momentum tensor, given as \begin{eqnarray} {T_\rho}^\nu=diag\left(\rho_m, -p_m, -p_m, -p_m\right), \end{eqnarray} where $\rho_m$ is the energy density while $p_m$ is the pressure of the matter inside the universe.\\ \textbf{The Field Equations}\\ The line element for the homogeneous and anisotropic Kantowski-Sachs spacetime is \begin{eqnarray} ds^{2}=dt^{2}-A^{2}(t)dr^{2}-B^{2}(t)\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \end{eqnarray} where the scale factors $A$ and $B$ are functions of cosmic time $t$ only. Using Eqs.$(6)$ and $(10)$, we obtain the tetrad components as follows [27]: \begin{eqnarray} {h^i}_\mu&=&diag\left(1,A,B,B\sin\theta\right),\nonumber\\ {h_i}^\mu&=&diag\left(1,A^{-1},B^{-1},(B\sin\theta)^{-1}\right), \end{eqnarray} which obviously satisfy Eq.$(7)$. Substituting Eqs.$(4)$ and $(5)$ in $(3)$ and using $(10)$, it follows after some manipulation that \begin{eqnarray} T=-2\left(\frac{2\dot{A}\dot{B}}{AB}+\frac{\dot{B}^2}{B^2}\right).
\end{eqnarray} The field equations $(8)$, for $i=0=\nu$ and $i=1=\nu$, turn out to be \begin{eqnarray} F-4\left(\frac{2\dot{A}\dot{B}}{AB}+\frac{\dot{B}^2}{B^2}\right) F_T=2\kappa^{2}\rho_m,\\ 4\left(\frac{\dot{A}\dot{B}}{B}+\frac{A\dot{B}^2}{B^2}+\frac{A\ddot{B}}{B}+\frac{\dot{A}\dot{B}}{AB}\right)F_T -16\frac{A\dot{B}}{B}\left(\frac{\ddot{A}\dot{B}}{AB}+ \frac{\dot{A}\ddot{B}}{AB}\right.\nonumber\\-\left.\frac{\dot{A}^2\dot{B}}{A^2B}-\frac{\dot{A}\dot{B}^2}{AB^2} +\frac{\dot{B}\ddot{B}}{B^2}-\frac{\dot{B}^3}{B^3}\right)F_{TT} -F=2\kappa^2p_m. \end{eqnarray} The conservation equation takes the form \begin{eqnarray} \dot{\rho_m} +\left(\frac{\dot{A}}{A}+2\frac{\dot{B}}{B}\right)\left(\rho_m+p_m\right)=0. \end{eqnarray} The average scale factor $R$, the mean Hubble parameter $H$ and the anisotropy parameter $\Delta$ of the expansion are, respectively, \begin{eqnarray} R=\left(AB^{2}\right)^{\frac{1}{3}},\quad H=\frac{1}{3}\left(\frac{\dot{A}}{A}+2\frac{\dot{B}}{B}\right),\quad \Delta=\frac{1}{3}\sum_{i=1}^{3}\left(\frac{H_i-H}{H}\right)^{2}, \end{eqnarray} where $H_i$ are the directional Hubble parameters, given as \begin{eqnarray} H_1&=&\frac{\dot{A}}{A} , \nonumber\\ H_2&=&\frac{\dot{B}}{B}=H_3. \end{eqnarray} It is mentioned here that the isotropic expansion of the universe is obtained for $\Delta=0$, which further depends upon the values of the scale factors and the parameters involved in the corresponding models [28]-[30]. Equation $(12)$ can be written as \begin{eqnarray} 2T=J-9H^{2}, \quad J=\frac{\dot{A}^{2}}{A^{2}}-\frac{4\dot{A}\dot{B}}{AB}, \end{eqnarray} which implies that \begin{eqnarray} H=\frac{1}{3}\sqrt{J-2T}.
\end{eqnarray} Equations $(13)$ and $(14)$ can be rewritten in the form of the corresponding teleparallel ($F(T)=T$) equations as \begin{eqnarray} \rho_m + \rho_T&=&\frac{1}{2\kappa^{2}}\left[-4\left(\frac{2\dot{A}\dot{B}}{AB}+\frac{\dot{B}^2}{B^2}\right)+T\right], \\ p_m+p_T&=&\frac{1}{2\kappa^{2}}\left[4\left(\frac{\dot{A}\dot{B}}{B}+\frac{A\dot{B}^2}{B^2} +\frac{A\ddot{B}}{B}+\frac{\dot{A}\dot{B}}{AB}\right)-T\right], \end{eqnarray} where $\rho_T$ and $p_T$ are the torsion contributions, given respectively as \begin{eqnarray} \rho_T=\frac{1}{2\kappa^{2}}\left[-4\left(\frac{2\dot{A}\dot{B}}{A B}+\frac{\dot{B}^2}{B^2}\right)\left(1-F_T\right)+T-F\right], \end{eqnarray} and \begin{eqnarray} p_T&=&\frac{1}{2\kappa^{2}}\left[4\left(\frac{\dot{A}\dot{B}}{B}+\frac{A\dot{B}^2}{B^2}+\frac{A\ddot{B}}{B}+\frac{\dot{A}\dot{B}}{AB}\right) (1- F_T)\right.\nonumber\\&+&16\frac{A\dot{B}}{B}\left(\frac{\ddot{A}\dot{B}}{A B}+ \frac{\dot{A}\ddot{B}}{A B}-\frac{\dot{A}^2\dot{B}}{A^2B}-\frac{\dot{A}\dot{B}^2}{AB^2}\right.\nonumber\\ &+&\left.\left.\frac{\dot{B}\ddot{B}}{B^2}-\frac{\dot{B}^3}{B^3}\right)F_{TT} -T+F\right]. \end{eqnarray} The relationship between the energy density $\rho_m$ and the pressure $p_m$ of matter is described by the $EoS$ $p_m=\omega \rho_m$, where $\omega$ is the $EoS$ parameter, which takes different values for normal, relativistic and non-relativistic matter. Using Eqs.$(13)$ and $(14)$, the $EoS$ parameter is obtained as follows \begin{eqnarray} \omega=-1+\frac{4\left(E-U\right)F_T-16ZF_{TT}}{-4UF_T+F}, \end{eqnarray} where \begin{eqnarray} E&=&\frac{\dot{A}\dot{B}}{B}+\frac{A\dot{B}^2}{B^2}+\frac{A\ddot{B}}{B}+\frac{\dot{A}\dot{B}}{AB}, \\ U&=&\frac{2\dot{A}\dot{B}}{A B}+\frac{\dot{B}^2}{B^2}, \\ Z&=&\frac{A\dot{B}}{B}\left[\frac{\ddot{A}\dot{B}}{A B}+ \frac{\dot{A}\ddot{B}}{A B}-\frac{\dot{A}^2\dot{B}}{A^2B}-\frac{\dot{A}\dot{B}^2}{AB^2} +\frac{\dot{B}\ddot{B}}{B^2}-\frac{\dot{B}^3}{B^3}\right].
\end{eqnarray} It is mentioned here that the homogeneous part of Eq.$(13)$ yields the following solution \begin{eqnarray} F(T)=\frac{C_0}{\sqrt{T}}, \end{eqnarray} where $C_0$ is an integration constant. Using this equation in Eq.$(14)$, we obtain \begin{eqnarray} p_m=-\frac{C_0}{2\kappa^{2}}\left(\frac{2E}{T}+\frac{12Z}{T^2}+1\right)\frac{1}{\sqrt{T}}. \end{eqnarray} It is mentioned here that $p_m$ vanishes for the $FRW$ spacetime [31].\\ \section{Construction of Some $F(T)$ Models} Here we construct some $F(T)$ models for different cases of perfect fluid by using two approaches. In the first approach we use the continuity equation $(15)$, while in the second approach the $EoS$ parameter $(26)$ will be used. As the constituents of the universe are non-relativistic matter, radiation and DE, we consider the corresponding values of $\omega$ in the following subsections.\\ \subsection{Using Continuity Equation} In this approach, we use the following relation [32] for the Kantowski-Sachs spacetime \begin{eqnarray}\frac{1}{9}\left(\frac{\dot{A}}{A}+2\frac{\dot{B}}{B}\right)^2=H_0^2 +\frac{\kappa^{2}\rho_0}{3AB^2\sin\theta}, \end{eqnarray} where $H_0$ is the Hubble constant, one of the primary parameters in cosmology, and $\rho_0$ is an integration constant. The value of $H_0$ corresponds to the rate at which the universe is expanding today. This equation implies that \begin{eqnarray} \left(AB^2\sin\theta\right)^{-1}=\frac{3}{\kappa^{2}\rho_0}\left(H^2-H_0^2\right). \end{eqnarray} Using the $EoS$ in Eq.$(15)$, it follows that \begin{eqnarray}\frac{\dot{\rho_m}}{\rho_m}+3H\left(1+\omega\right)=0. \end{eqnarray} The components of the universe are described by the terms dark matter $(DM)$ and dark energy $(DE)$. We consider different cases of fluids and their combinations to construct corresponding $F(T)$ models.
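Each of the cases below reduces to a linear first-order equation of the form $2TF_T+F=g(T)$ with a case-dependent right-hand side $g$; multiplying by $1/(2\sqrt{T})$ turns the left side into $d(\sqrt{T}F)/dT$, which is how the quoted solutions arise. A sympy sketch of this, with $g$ kept generic (a placeholder, not a specific case from the text):

```python
# Verify that F = (C_0 + Integral(g/(2*sqrt(T)))) / sqrt(T) solves
# 2*T*F'(T) + F = g(T); the C_0/sqrt(T) piece is the homogeneous solution.
import sympy as sp

T = sp.symbols('T', positive=True)
C0 = sp.symbols('C_0')
g = sp.Function('g')

# homogeneous part: F = C_0/sqrt(T) solves 2*T*F' + F = 0
F_hom = C0 / sp.sqrt(T)
assert sp.simplify(2 * T * sp.diff(F_hom, T) + F_hom) == 0

# full solution via the integrating factor sqrt(T)
F_full = (C0 + sp.Integral(g(T) / (2 * sp.sqrt(T)), T)) / sp.sqrt(T)
residual = sp.expand(2 * T * sp.diff(F_full, T) + F_full - g(T))
assert sp.simplify(residual) == 0
```

Substituting the right-hand side of each case for $g$ reproduces the integral forms of $F(T)$ quoted below.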
For example, for relativistic matter $\omega=\frac{1}{3}$, for non-relativistic matter it is zero, and for the $DE$ era it is equal to $-1$ [33].\\ \textbf{Case 1 $(\omega=0)$:}\\ This is the case of non-relativistic matter, like cold dark matter ($CDM$) and baryons. It is well approximated as pressureless dust and called the matter dominated era. Inserting $\omega=0$ in Eq.$(34)$ and using Eq.$(33)$, we have \begin{eqnarray}\rho_m=\frac{\rho_c}{AB^2}=\frac{3\rho_c\sin\theta}{\kappa^{2}\rho_0}(H^2-H_0^2), \end{eqnarray} where $\rho_c$ is an integration constant. In terms of the torsion scalar, the above equation becomes \begin{eqnarray} \rho_m=\frac{\rho_c\sin\theta}{3\kappa^{2}\rho_0}\left(J-9H_0^2-2T\right). \end{eqnarray} Substituting the value of $\rho_m$ from Eq.$(36)$ in Eq.$(13)$, we have \begin{eqnarray} 2TF_T+F=\frac{2\rho_c\sin\theta}{3\rho_0}\left(J-9H_0^2-2T\right), \end{eqnarray} which has the solution \begin{eqnarray} F(T)=\frac{\rho_c\sin\theta}{3\rho_0\sqrt{T}}\int{ \frac{J-9H_0^2-2T}{\sqrt{T}}}dT. \end{eqnarray} This will have a unique form once the value of $J$, which involves the unknown scale factors, is known. Thus, for the matter dominated era, we obtain a model in terms of the torsion scalar and the Hubble constant.\\ \textbf{Case 2 ($\omega=\frac{1}{3}$):}\\ Here we consider relativistic matter, like photons and massless neutrinos, with $EoS$ parameter $\omega= \frac{1}{3}$. This case represents the radiation dominated era of the universe. Substituting $\omega= \frac{1}{3}$ in Eq.$(34)$ and making use of Eq.$(33)$, we obtain \begin{eqnarray} \rho_m = \frac{\rho_r\sin^{\frac{4}{3}}\theta} {{3}^\frac{4}{3}\kappa^{\frac{8}{3}}\rho^{\frac{4}{3}}_0}\left(J-9H_0^2-2T\right)^{\frac{4}{3}}, \end{eqnarray} where $\rho_r$ is another integration constant.
Inserting this value of $\rho_m$ in Eq.$(13)$, we get \begin{eqnarray}2TF_T+F= \frac{2\rho _r\sin^{\frac{4}{3}}\theta} {{3}^\frac{4}{3}\kappa^{\frac{2}{3}}\rho^{\frac{4}{3}}_0}\left(J-9H_0^2-2T\right)^{\frac{4}{3}}, \end{eqnarray} which has the solution \begin{eqnarray}F(T)= \frac{\rho _r\sin^{\frac{4}{3}}\theta} {{3}^\frac{4}{3}\kappa^{\frac{2}{3}}\rho^{\frac{4}{3}}_0\sqrt{T}} \int\frac{\left(J-9H_0^2-2T\right)^{\frac{4}{3}}}{\sqrt{T}}dT. \end{eqnarray} This also depends upon the value of $J$ as well as on the torsion scalar and the Hubble constant.\\ \textbf{Case 3 ($\omega=-1$):}\\ This case represents the present $DE$, which constitutes about $74$ percent of the energy density of the universe. $DE$ is assumed to have a large negative pressure in order to explain the observed acceleration of the universe. It is also termed the energy density of vacuum or the cosmological constant $\Lambda$. Substituting $\omega=-1$ in Eq.$(34)$, we get \begin{eqnarray} \rho_m=\rho_d, \end{eqnarray} where $\rho_d$ is an integration constant. Consequently, Eq.$(13)$ takes the form \begin{eqnarray} 2TF_T+F=2\kappa^{2}\rho_d \end{eqnarray} with solution \begin{eqnarray} F(T)=\frac{\kappa^{2}\rho_d}{\sqrt{T}}\int\frac{1}{\sqrt{T}}dT. \end{eqnarray}\\ \textbf{Case 4 (Combination of $\omega=0$ and $\omega=\frac{1}{3}$):}\\ Let us now consider the case when the energy density is a combination of different fluids, the dust fluid and radiation. Taking the average of Eqs.$(36)$ and $(39)$, it follows that \begin{eqnarray} \rho_m=\frac{\rho_c \sin\theta}{6\kappa^{2}\rho_0}\left(J-9H_0^2-2T\right)+\frac{\rho _r\sin^{\frac{4}{3}}\theta} {{2}\cdot{3}^\frac{4}{3}\kappa^{\frac{8}{3}}\rho^\frac{4}{3}_0}\left(J-9H_0^2-2T\right)^{\frac{4}{3}}.
\end{eqnarray} Substituting this value of $\rho_m$ in Eq.$(13)$, we get \begin{eqnarray} 2TF_T+F=\frac{\rho_c\sin\theta}{3\rho_0}\left(J-9H_0^2-2T\right)+\frac{\rho _r\sin^{\frac{4}{3}}\theta} {{3}^\frac{4}{3}\kappa^{\frac{2}{3}}\rho^\frac{4}{3}_0}\left(J-9H_0^2-2T\right)^{\frac{4}{3}} \end{eqnarray} and its solution is \begin{eqnarray} F(T)&=&\frac{\rho_c\sin\theta} {6\rho_0\sqrt{T}}\int\frac{\left(J-9H_0^2-2T\right)}{\sqrt{T}}dT\nonumber\\&+&\frac{\rho _r\sin^{\frac{4}{3}}\theta} {{2}\cdot{3}^\frac{4}{3}\kappa^{\frac{2}{3}}\rho^{\frac{4}{3}}_0\sqrt{T}} \int \frac{\left(J-9H_0^2-2T\right)^{\frac{4}{3}}}{\sqrt{T}}dT. \end{eqnarray}\\ \textbf{Case 5 (Combination of $ \omega =0 $ and $ \omega =-1 $):}\\ The combination of the $EoS$ parameters for the matter dominated era and $DE$ yields \begin{eqnarray} \rho_m=\frac{\rho_c\sin\theta}{6\kappa^{2}\rho_0}\left(J-9H_0^2-2T\right)+\frac{\rho_d}{2}. \end{eqnarray} Inserting this value of $\rho_m$ in Eq.$(13)$, we get \begin{eqnarray} 2TF_T+F=\frac{\rho_c\sin\theta}{3\rho_0}\left(J-9H_0^2-2T\right)+\kappa^{2}\rho_d, \end{eqnarray} yielding \begin{eqnarray} F(T)=\frac{\rho_c\sin\theta} {6\rho_0\sqrt{T}}\int\frac{\left(J-9H_0^2-2T\right)}{\sqrt{T}}dT+\frac{\kappa^{2}\rho_d}{2\sqrt{T}}\int\frac{1}{\sqrt{T}}dT. \end{eqnarray}\\ \textbf{Case 6 (Combination of $\omega=-1$ and $\omega=\frac{1}{3}$):}\\ This case gives the following form of the energy density \begin{eqnarray} \rho_m=\frac{\rho_r \sin^{\frac{4}{3}}\theta} {{2}\cdot{3}^\frac{4}{3}\kappa^{\frac{8}{3}}\rho^{\frac{4}{3}}_0}\left(J-9H_0^2-2T\right)^{\frac{4}{3}}+\frac{\rho_d}{2}.
\end{eqnarray} Substituting this value in Eq.$(13)$, we get \begin{eqnarray} 2TF_T+F =\frac{\rho_r\sin^{\frac{4}{3}}\theta} {{3}^\frac{4}{3}\kappa^{\frac{2}{3}}\rho^{\frac{4}{3}}_0}\left(J-9H_0^2-2T\right)^{\frac{4}{3}}+\kappa^2 \rho_d, \end{eqnarray} which gives \begin{eqnarray} F(T)=\frac{\rho_r\sin^{\frac{4}{3}}\theta} {{2}\cdot{3}^ {\frac{4}{3}} \kappa^{\frac{2}{3}}\rho^{\frac{4}{3}}_0\sqrt{T}} \int\frac{\left(J-9H_0^2-2T\right)^{\frac{4}{3}} }{\sqrt{T}}dT+\frac{\kappa^{2}\rho_d}{2\sqrt{T}}\int\frac{1}{\sqrt{T}}dT. \end{eqnarray} It is mentioned here that cases $4$-$6$ provide $F(T)$ models for combinations of different types of matter. Normally, dark matter and DE are treated as evolving independently. However, there are attempts [34] to include an interaction between them, so that one can gain some insight into the combined effect of different fluids. Dark matter plays a central role in galaxy evolution and has measurable effects on the anisotropies observed in the cosmic microwave background. Although matter makes up a large fraction of the total energy of the universe, its contribution will fall in the far future as DE becomes more dominant. An interaction between dark matter and DE may drive the transition from an early matter dominated era to a phase of accelerated expansion. Using the same phenomenon, DE and different forms of matter are discussed here in the framework of $F(T)$ theory, which may help in discussing the accelerated expansion of the universe.\\ \subsection{Using $EoS$ Parameter} Here we formulate some $F(T)$ models in a slightly different way. We substitute different values of the parameter $\omega$ in Eq.$(26)$ and solve the resulting equation accordingly. Equation $(26)$ can be written as \begin{eqnarray} 16ZF_{TT}-4\left(E+\omega U\right)F_T+\left(1+\omega\right)F=0.
\end{eqnarray} Now, we construct $F(T)$ models in the following cases:\\ \textbf{Case 1:}\\ When we put $\omega=\frac{1}{3}$ in Eq.$(54)$, we obtain \begin{eqnarray} 4Z F_{TT}-\left(E+\frac{U}{3}\right)F_T+\frac{1}{3}F=0. \end{eqnarray} This has the general solution \begin{eqnarray} F(T)&=& C_1\exp \left[\left\{\frac{\left(3E+U\right)+\sqrt{\left(3E+U\right)^2-48Z}}{24Z}\right\}T\right] \nonumber\\ & +&C_2\exp \left[\left\{\frac{\left(3E+U\right)-\sqrt{(3E+U)^2-48Z}}{24Z}\right\}T\right], \end{eqnarray} where $C_1$ and $C_2$ are constants. \\ \textbf{Case 2:} \\ Here we consider the dust case when the pressure is zero, that is, $\omega=0$. Then Eq.$(54)$ takes the form \begin{eqnarray} 16ZF_{TT}-4EF_T+F=0. \end{eqnarray} It has the general solution \begin{eqnarray} F(T)&=&C_3\exp \left[\left\{\frac{E+\sqrt{E^2-4Z}}{8Z}\right\}T\right]\nonumber\\&+& C_4\exp \left[\left\{\frac{E-\sqrt{E^2-4Z}}{8Z}\right\}T\right], \end{eqnarray}\\ where $C_3$ and $C_4$ are constants.\\ \textbf{Case 3:}\\ For $\omega=-1$, Eq.$(54)$ becomes \begin{eqnarray} 4ZF_{TT}-\left(E-U\right)F_T=0, \end{eqnarray} whose general solution is \begin{eqnarray} F(T)=C_5+C_6\exp \left[\left(\frac{E-U}{4Z}\right)T\right], \end{eqnarray} where $C_5$ and $C_6$ are constants. Eqs.$(56)$, $(58)$ and $(60)$ represent the $F(T)$ models corresponding to the radiation, matter and DE phases, respectively. The exponential form of these $F(T)$ models represents a universe which always lies in the phantom or non-phantom phase, depending on the parameters of the models [35].\\ \section{Construction of $EoS$ Parameters and \\Cosmic Acceleration } In this section we derive the $EoS$ parameter for two different $F(T)$ models and also investigate the cosmic acceleration. For this purpose, we evaluate $\rho_{m}$ and $p_{m}$ using the field equations and then construct the corresponding $EoS$ parameters.
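The closed-form solutions quoted in the three cases above can be confirmed by direct substitution. A sympy sketch for the $\omega=-1$ case, treating $E$, $U$ and $Z$ as constants in $T$ (as is done implicitly in the text):

```python
# Check that F(T) = C5 + C6*exp((E-U)*T/(4*Z)) satisfies the DE-era
# equation 4*Z*F'' - (E - U)*F' = 0, with E, U, Z held constant.
import sympy as sp

T, E, U, Z, C5, C6 = sp.symbols('T E U Z C_5 C_6', nonzero=True)

F = C5 + C6 * sp.exp((E - U) * T / (4 * Z))
lhs = 4 * Z * sp.diff(F, T, 2) - (E - U) * sp.diff(F, T)
assert sp.simplify(lhs) == 0
```

The same substitution check works for the other two cases, whose exponents are the roots of the corresponding characteristic equations.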
\\ \subsection{The First Model} Consider the following $F(T)$ model [31] \begin{eqnarray} F= \alpha T + \frac{\beta}{T}, \end{eqnarray} where $\alpha$ and $\beta$ are positive real constants. Inserting this value of $F$ in Eqs.$(13)$ and $(14)$, it follows that\\ \begin{eqnarray} 2 \kappa ^{2} \rho_{m}=\left(-4U+T\right)\alpha + \beta \left(1+4UT^{-1}\right)T^{-1}, \\ 2 \kappa ^{2} p_{m}= \left(4E-T\right)\alpha - \beta \left(4ET^{-1}+32ZT^{-2}+1\right)T^{-1}. \end{eqnarray} Dividing the second of these equations by the first, the $EoS$ parameter is obtained as \begin{eqnarray} \omega=-1+\frac{4\left(E-U\right)\alpha - \beta \left(4\left(E-U\right)+32ZT^{-1}\right)T^{-2}}{\left(-4U+T\right)\alpha + \beta \left(1+4UT^{-1}\right)T^{-1}}. \end{eqnarray} Now, we would like to discuss the last equation for particular values of $\alpha$ and $\beta$. For $\alpha\neq 0$, $\beta=0$, we obtain \begin{eqnarray} \omega=-1+\frac{2}{3}\left(1-\frac{E}{U}\right). \end{eqnarray} This leads to three different cases of $\omega$ representing different phases of the evolution of the universe as follows: \begin{itemize} \item If $\frac{E}{U} >1$ then $\omega <-1$, which corresponds to the phantom accelerating universe. \item When $\frac{E}{U} <1$ then $\omega >-1$, which corresponds to the quintessence region. \item When $\frac{E}{U}=1$, we obtain a universe whose dynamics is dominated by the cosmological constant, with $\omega =-1$. \end{itemize} It is interesting to mention here that this model reduces to the spatially flat Friedmann equation of $GR$ in the limiting case when the anisotropy vanishes. Also, for the case $\alpha\neq 0$, $\beta\neq 0$, no physically transparent results are obtained. \subsection{The Second Model} Assume that $F(T)$ has the form [31] \begin{eqnarray} F= \alpha T + \beta T^{n}, \end{eqnarray} where $n$ is a positive real number. The corresponding field equations become.
\begin{eqnarray} 2 \kappa ^{2} \rho_{m}&=& \left(-4U+T\right)\alpha + \beta \left(-4nUT^{-1}+1\right)T^{n}, \\ 2 \kappa ^{2} p_{m}&=& \left(4E-T\right)\alpha +4n \beta E T^{n-1}-16n(n-1)\beta Z T^{n-2}-\beta T^{n}. \end{eqnarray} Consequently, the $EoS$ parameter takes the form \begin{eqnarray} \omega=-1+\frac{4\left(-U+E\right)\alpha +4n \beta \left(-U+E\right)T^{n-1}-16n(n-1)\beta Z T^{n-2}}{\left(-4U+T\right)\alpha + \beta \left(-4nUT^{-1}+1\right)T^{n}}. \end{eqnarray} The case $\alpha\neq 0$, $\beta=0$ leads to the same discussion as in the first model. For $\alpha=0$, $\beta \neq 0$, we have \begin{eqnarray} \omega=-1+\frac{2n}{2n+1}\left[1- \left\{\frac{E}{U}+ \frac{8n(n-1)Z}{U^{2}}\right\}\right]. \end{eqnarray} For any positive real number $n$, we can discuss the possibilities as follows: \begin{itemize} \item When $\left[\frac{E}{U}+ \frac{8n(n-1)Z}{U^{2}}\right] >1$, Eq.$(70)$ gives $\omega <-1$, which represents the phantom accelerating universe. \item For $\left[\frac{E}{U}+\frac{8n(n-1)Z}{U^{2}}\right]=1$, we obtain $\omega=-1$ and hence the universe rests in the DE era dominated by the cosmological constant. \item The case $\left[ \frac{E}{U}+ \frac{8n(n-1)Z}{U^{2}}\right]<1$ corresponds to the quintessence era because $\omega>-1.$ \end{itemize} Assuming $n=1$ as a particular case in Eqs.$(67)$ and $(68)$, we have \begin{eqnarray} \rho_m&=&\frac{\left(\alpha+\beta\right)\left(-4U+T\right)}{2\kappa^{2}}, \\ p_m&=&\frac{\left(\alpha+\beta\right)\left(4E-T\right)}{2\kappa^{2}}. \end{eqnarray} In the following, we discuss the evolution of the scale factor for the Kantowski-Sachs universe. For this purpose, we assume [31] \begin{eqnarray} p_m=\frac{A_{-1}}{\rho_m}+A_0+A_1\rho_m, \end{eqnarray} where $A_{-1}$, $A_0$, $A_1$ are constants.
Substituting Eqs.$(71)$ and $(72)$ in the above equation, it follows that \begin{eqnarray} 4E-T=\frac{a}{-4U+T}+b+c\left(-4U+T\right), \end{eqnarray} where \begin{eqnarray} a=\frac{4\kappa^{4}A_{-1}}{\left(\alpha+\beta\right)^{2}},~~~ b=\frac{2\kappa^{2}A_0}{\alpha+\beta},~~~c=A_1. \end{eqnarray} This leads to \begin{eqnarray} T&=&\frac{4U+4E-b+8Uc}{2\left(1+c\right)}\nonumber\\ &\pm& \frac{1}{2\left(1+c\right)}\left[\left(4U+4E-b+8Uc\right)^{2}\right.\nonumber\\&-&\left.4\left(1+c\right)\left(16cU^{2}-4bU+a+16UE\right)\right]^{\frac{1}{2}}.\nonumber\\ \end{eqnarray} Substituting this value of the torsion in Eq.$(21)$, we have \begin{eqnarray} H&=&\frac{1}{3}\left[\left|J-\frac{4U+4E-b+8Uc}{1+c}\right.\right.\nonumber\\ &\pm& \frac{1}{1+c}\left\{\left(4E+4U-b+8Uc\right)^2\right.\nonumber\\&-&\left.\left.\left.4\left(1+c\right)\left(16cU^2-4bU+a+16UE\right)\right\}^{\frac{1}{2}}\right|\right]^{\frac{1}{2}}.\nonumber\\ \end{eqnarray} The corresponding average scale factor becomes \begin{eqnarray} R&=&R_0 \exp \left\{\frac{1}{3}\int \left[\left|J-\frac{4U+4E-b+8Uc}{1+c}\right.\right.\right.\nonumber\\&\pm& \frac{1}{1+c}\left\{\left(4E+4U-b+8Uc\right)^2\right.\nonumber\\&-&\left.\left.\left.\left. 4\left(1+c\right)\left(16cU^2-4bU+a+16UE\right)\right\}^{\frac{1}{2}}\right|\right]^{\frac{1}{2}}dT\right\}. \end{eqnarray} As a special case of model $(73)$, if we take $A_{-1}$ as a constant while $A_0=0=A_1$, we obtain the standard Chaplygin gas $EoS$ [36]. In this case, Eqs.$(74)$ and $(75)$ give, respectively, \begin{eqnarray} T&=&2\left(E+U\right)\pm \sqrt{\left\{2\left(E+U\right)\right\}^2-\left(a+16UE\right)}, \\ H&=&\frac{1}{3}\left[\left|J-4\left(E+U\right)\pm 2\sqrt{\left\{2\left(E+U\right)\right\}^2-\left(a+16UE\right)}\right|\right]^{\frac{1}{2}}. \end{eqnarray} The average scale factor for the Chaplygin gas has the form \begin{eqnarray} R=R_0 \exp \left\{\frac{1}{3}\int \left[\left| J-4\left(E+U\right)\pm 2\sqrt{\{2\left(E+U\right)\}^2-\left(a+16UE\right)}\right|\right]^{\frac{1}{2}} dT \right\}. \end{eqnarray} This represents an exponential expansion, which may cause the distance between two non-accelerating observers to increase faster than the speed of light; as a result, the two observers become unable to communicate with each other. Thus, if our universe is approaching a de Sitter universe [7], we would eventually be unable to observe any galaxy other than our own Milky Way. \section{Summary and Conclusion} The study of cosmological models has been a burning issue over the last decade. Researchers have devoted much attention to resolving cosmological problems, including the existence of $DE$ and $DM$ in the universe. Since $GR$ cannot explain the accelerated expansion of the universe, some other framework of gravity is needed to resolve this issue. Among the many alternative theories of gravity, $F(T)$ theory is one of the candidates. The purpose of this paper is to investigate the recently developed $F(T)$ gravity. For this purpose, we have taken the Kantowski-Sachs spacetime, describing a spatially homogeneous but anisotropic universe. Some $F(T)$ models have been constructed by using two different approaches: in the first approach, we have used the continuity equation, while in the second, the $EoS$ is used.
The results obtained in these two approaches are summarized in Tables 1 and 2: \vspace{0.5cm} {\bf {\small Table 1.} {\small Expressions for $F(T)$ using Continuity Equation }} \begin{center} \begin{tabular}{|c|c|} \hline{\bf CASES}&{\bf $F(T)$ }\\ \hline{$1$} & $\frac{\rho_c\sin\theta}{3\rho_0\sqrt{T}}\int{ \frac{J-9H_0^2-2T}{\sqrt{T}}}dT$\\ \hline{ $2$} & $\frac{\rho _r\sin^{\frac{4}{3}}\theta} {{3}^\frac{4}{3}\kappa^{\frac{2}{3}}\rho^{\frac{4}{3}}_0\sqrt{T}} \int\frac{(J-9H_0^2-2T)^{\frac{4}{3}}}{\sqrt{T}}dT$\\ \hline{ $3$} & $\frac{\kappa^{2}\rho_d}{\sqrt{T}}\int\frac{1}{\sqrt{T}}dT $\\ \hline{ $4$} & $\frac{\rho_c\sin\theta} {6\rho_0\sqrt{T}}\int\frac{(J-9H_0^2-2T)}{\sqrt{T}}dT+\frac{\rho _r\sin^{\frac{4}{3}}\theta} {2\cdot{3}^\frac{4}{3}\kappa^{\frac{8}{3}}\rho^{\frac{4}{3}}_0\sqrt{T}} \int \frac{(J-9H_0^2-2T)^{\frac{4}{3}}}{\sqrt{T}}dT $\\ \hline{ $5$} & $\frac{\rho_c\sin\theta} {6\rho_0\sqrt{T}}\int\frac{(J-9H_0^2-2T)}{\sqrt{T}}dT+\frac{\kappa^{2}\rho_d}{2\sqrt{T}}\int\frac{1}{\sqrt{T}}dT $\\ \hline{ $6$} & $\frac{\rho_r\sin^{\frac{4}{3}}\theta} {2\cdot{3}^ {\frac{4}{3}} \kappa^{\frac{2}{3}}\rho^{\frac{4}{3}}_0\sqrt{T}} \int\frac{(J-9H_0^2-2T)^{\frac{4}{3}} }{\sqrt{T}}dT+\frac{\kappa^{2}\rho_d}{2\sqrt{T}}\int\frac{1}{\sqrt{T}}dT $\\ \hline \end{tabular} \end{center} \vspace{0.5cm} {\bf {\small Table 2.} {\small Expressions for $F(T)$ using $EoS$ Parameter }} \begin{center} \begin{tabular}{|c|c|} \hline{\bf CASES}&{\bf $F(T)$}\\ \hline $1$ & $ C_1\exp [\{\frac{(3E+U)+\sqrt{(3E+U)^2-12Z}}{6Z}\}T]$\\& $+C_2\exp [\{\frac{(3E+U)-\sqrt{(3E+U)^2-12Z}}{6Z}\}T] $\\ \hline{ $2$} & $C_3\exp [\{\frac{E+\sqrt{E^2-4Z}}{8Z}\}T]+ C_4\exp [\{\frac{E-\sqrt{E^2-4Z}}{8Z}\}T] $\\ \hline{ $3$} & $C_5+C_6\exp [(\frac{E-U}{4Z})T] $\\ \hline \end{tabular} \end{center} These $F(T)$ gravity models represent three different eras of the universe corresponding to different values of the $EoS$ parameter.
These are the matter, radiation and DE dominated eras corresponding to $\omega=0$, $\omega=\frac{1}{3}$ and $\omega=-1$ respectively, given in Table 1 as cases $1$-$3$. Considering combinations of radiation and matter may yield further interesting results for the evolving universe; using different combinations of the $EoS$ parameter, we obtain three more models, given in Table 1 as cases $4$-$6$. We have also obtained $F(T)$ models in exponential form for some particular values of the $EoS$ parameter, given in Table 2. Determining the evolution of the $EoS$ parameter is one of the major efforts in observational cosmology today. We have considered two well-known $F(T)$ models, given in Eqs.$(59)$ and $(66)$, and found the corresponding expressions for the $EoS$ parameter $\omega$. These expressions have been investigated for some particular values of the parameters $\alpha$ and $\beta$, which yield results corresponding to realistic situations. Further, we discuss the cosmic acceleration for these models. We conclude that our universe would approach a de Sitter universe in the infinite future. The isotropic expansion of the universe is obtained for $\Delta=0$, which depends upon the values of the unknown scale factors and the parameters involved in the corresponding models. \vspace{1.5cm} {\bf References} \begin{description} \item{[1]} Riess, A.G. et al.: Astron. J. \textbf{116}(1998)1009; Perlmutter, S. et al.: Astrophys. J. \textbf{517}(1999)565. \item{[2]} Spergel, D.N. et al.: Astrophys. J. Suppl. \textbf{148}(2003)175. \item{[3]} Tegmark, M.: Phys. Rev. \textbf{D69}(2004)103501. \item{[4]} Eisenstein, D.J. et al.: Astrophys. J. \textbf{633}(2005)560. \item{[5]} Sharif, M. and Amir, M.J.: Mod. Phys. Lett. \textbf{A22}(2007)425; Sharif, M. and Amir, M.J.: Gen. Relativ. Gravit. \textbf{39}(2007)989; Sharif, M. and Amir, M.J.: Int. J. Theor. Phys. \textbf{47}(2008)1742; Sharif, M. and Amir, M.J.: Mod. Phys. Lett. \textbf{A37}(2007)1292; Sharif, M.
and Nazir, K.: Commun. Theor. Phys. \textbf{50}(2008)664; Sharif, M. and Taj, S.: Astrophys. Space Sci. \textbf{75}(2010)325; Hayashi, K. and Shirafuji, T.: Phys. Rev. \textbf{D19}(1979)3524; Sharif, M. and Amir, M.J.: Mod. Phys. Lett. \textbf{A23}(2008)963. \item{[6]} Bengochea, G.R. and Ferraro, R.: Phys. Rev. \textbf{D79}(2009)124019. \item{[7]} Linder, E.V.: Phys. Rev. \textbf{D81}(2010)127301. \item{[8]} Myrzakulov, R.: arXiv:1006.1120; Yerzhanov, K.K., Myrzakulo, S.R., Kulnazarov, I.I. and Kulnazarov, R.: arXiv:1006.389; Wu, P. and Yu, H.: Phys. Lett. \textbf{B692}(2010)176; Yang, R.: arXiv:1007.3571; Isyba, P.Yu., Kulnazarov, I.I., Yerzhanov, K.K. and Myrzakulov, R.: arXiv:1008.0779; Dent, J.B., Dutta, S. and Saridakis, E.N.: arXiv:1008.3188; Wu, P. and Yu, H.: arXiv:1008.3669. \item{[9]} Bamba, K., Geng, C.Q. and Lee, C.C.: arXiv:1008.4036 [astro-ph]. \item{[10]} Wu, P. and Yu, H.: Phys. Lett. \textbf{B692}(2010)176. \item{[11]} Yang, R.J.: Eur. Phys. J. \textbf{C71}(2011)1797. \item{[12]} Wu, P. and Yu, H.: Eur. Phys. J. \textbf{C71}(2011)1552. \item{[13]} Karami, K. and Abdolmaleki, A.: Research in Astron. Astrophys. \textbf{13}(2013)757. \item{[14]} Karami, K. and Abdolmaleki, A.: Journal of Physics: Conference Series \textbf{375}(2012)032009. \item{[15]} Dent, J.B., Dutta, S. and Saridakis, E.N.: JCAP \textbf{1101}(2011)009. \item{[16]} Li, B., Sotiriou, T.P. and Barrow, J.D.: Phys. Rev. \textbf{D83}(2011)064035. \item{[17]} Chen, S.H. et al.: Phys. Rev. \textbf{D83}(2011)023508. \item{[18]} Bamba, K. et al.: JCAP \textbf{1101}(2011)021. \item{[19]} Wang, T.: Phys. Rev. \textbf{D84}(2011)024042. \item{[20]} Myrzakulov, R.: \textit{Accelerating cosmology in F(T) gravity with scalar field}, arXiv:1006.3879. \item{[21]} Sharif, M. and Rani, S.: Mod. Phys. Lett. \textbf{A26}(2011)1657. \item{[22]} Li, B. et al.: Phys. Rev. \textbf{D83}(2011)064035. \item{[23]} Cardone, V.F., Radicella, N. and Camera, S.: Phys. Rev. \textbf{D} (to appear).
\item{[24]} Nashed, G.G.L.: arXiv:1403.6937. \item{[25]} Aghamohammadi, A.: arXiv:1402.2607. \item{[26]} Aldrovandi, R. and Pereira, J.G.: \textit{An Introduction to Geometrical Physics} (World Scientific, 1995). \item{[27]} Vakili, B. and Sepangi, H.R.: JCAP \textbf{09}(2005)008. \item{[28]} Sharif, M. and Zubair, M.: Astrophys. Space Sci. \textbf{330}(2010)399. \item{[29]} Sharif, M. and Kausar, H.R.: Phys. Lett. \textbf{B697}(2011)1. \item{[30]} Tiwari, R.K.: Research in Astron. Astrophys. \textbf{10}(2010)291. \item{[31]} Myrzakulov, R.: Eur. Phys. J. \textbf{C71}(2011)1752. \item{[32]} Elizalde, E. et al.: Class. Quantum Grav. \textbf{27}(2010)095007. \item{[33]} Bean, R.: \textit{TASI Lectures on Cosmic Acceleration}, arXiv:1003.4468. \item{[34]} Chimento, L.P., Jakubi, A.S. and Zimdahl, W.: Phys. Rev. \textbf{D67}(2003)083513. \item{[35]} Bamba, K. et al.: JCAP \textbf{01}(2011)021. \item{[36]} Bilic, N. et al.: J. Phys. \textbf{A40}(2007)6877. \end{description} \end{document}
\section{Acknowledgements} This work acknowledges the support from the National Science Foundation, Grant No.2110279, and the Fulbright-University of Bordeaux Doctoral Research Award.
\section{Introduction} \label{sec:introduction} Percolation is the study of connectivity in random systems, particularly of the transition that occurs when the connectivity first becomes long-ranged \cite{stauffer:aharony}. Examples are the formation of gels in polymer systems \cite{flory:41}, conductivity in random conductor/insulator mixtures \cite{ottavi:clerc:giraud:roussenq:guyon:mitescu:78}, and flow of fluids in random porous materials \cite{larson:scriven:davis:81}. The percolation model has been of immense theoretical interest in the field of statistical mechanics, being a particularly simple example of a system that undergoes a non-trivial phase transition. It is directly related to the Ising model through the Fortuin-Kasteleyn \cite{fortuin:kasteleyn:72} representation of the Potts model. Variations that have received recent attention include $k$-core or bootstrap percolation \cite{dorogovtsev:goltsev:mendes:06}, invasion percolation and watersheds \cite{knecht:trump:benavraham:ziff:12,araujo:andrade:ziff:herrmann:11}, and explosive percolation \cite{achlioptas:dsouza:spencer:09,araujo:andrade:ziff:herrmann:11}. Percolation has also been intensely studied in the mathematical field in recent years \cite{smirnov:werner:01,schramm:smirnov:garban:11,flores:kleban:ziff:11}. In the basic model of random percolation, one considers a lattice of sites (vertices) and bonds (edges), and one randomly occupies a fraction $p$ of either sites or bonds, creating clusters of connected components. Of particular interest is the behavior near the critical threshold $p_c$ where an infinite cluster first appears.
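As an illustration of the basic model (our sketch, not taken from any of the cited implementations), site percolation on an $L\times L$ square lattice with open boundaries can be simulated with a few lines of union-find code:

```python
# Illustrative sketch (ours): site percolation on an L x L square lattice.
# Occupy each site independently with probability p, then count clusters of
# occupied nearest neighbours with a union-find structure.
import random

def percolation_clusters(L, p, seed=None):
    rng = random.Random(seed)
    occupied = [rng.random() < p for _ in range(L * L)]
    parent = list(range(L * L))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for x in range(L):
        for y in range(L):
            i = x * L + y
            if not occupied[i]:
                continue
            # open boundaries: bond to the right and down neighbours only
            if x + 1 < L and occupied[(x + 1) * L + y]:
                union(i, (x + 1) * L + y)
            if y + 1 < L and occupied[x * L + y + 1]:
                union(i, x * L + y + 1)

    roots = {find(i) for i in range(L * L) if occupied[i]}
    return len(roots)  # number of clusters in this one sample
```

Averaging the returned cluster count over many samples and dividing by $L^2$ gives an estimate of the cluster density discussed later in this paper.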
The study of this model has encompassed a wide variety of approaches, including experimental measurements \cite{ottavi:clerc:giraud:roussenq:guyon:mitescu:78}, asymptotic analysis of exact series expansions \cite{domb:pearce:76}, theoretical methods \cite{temperley:lieb:71}, conformal invariance \cite{cardy:92}, Schramm-Loewner Evolution theory \cite{smirnov:werner:01,schramm:smirnov:garban:11}, and numerous types of computer simulation \cite{vyssotsky:etal:61,dean:bird:67,reynolds:stanley:klein:80,tiggemann:01,xu:wang:lv:deng:14, leath:76,hoshen:kopelman:76,newman:ziff:00,newman:ziff:01}. For some classes of 2d models, thresholds can be found exactly \cite{sykes:essam:64,scullard:ziff:06,grimmett:manolescu:14}, and recently methods have been developed to find approximate 2d values to extremely high precision \cite{scullard:jacobsen:12,jacobsen:14,yang:zhou:li:13,jacobsen:15}. Universality has played a central role in the understanding of the critical behavior of the percolation process (and in statistical mechanics in general). First of all, there are universal exponents such as $\alpha$ (related to the number of clusters), $\beta$ (the percolation probability $P_\infty$), $\sigma$ (the inverse of the exponent for the divergence of the typical cluster size), $\nu$ (the correlation length) etc.\ \cite{stauffer:aharony}. For all systems of a given dimensionality, these exponents have universal values, such as $\alpha = -2/3$, $\beta = 5/36$, $\sigma = 36/91$ and $\nu = 4/3$ in two dimensions (2d), independent of the system (lattice, non-lattice, etc.) and the shape of the boundary. This is the strongest form of universality. Secondly, there are quantities, such as the number of clusters of size $s$, $n_s \sim s^{-\tau} f_1(b (p-p_c) s^\sigma)$, whose scaling function $f_1(z)$ is universal, identical for all systems of a given dimensionality, although in order for this universality to be realized, the metric factor $b$ must be adjusted for each system.
One usually assumes $b = 1$ for one system, such as bond percolation on the square lattice, and then chooses $b$ for the other systems to get the behaviors to match. The metric factor compensates for the roles of $L$ and $p$ for the different systems. Here the system is assumed to be infinite, and the scaling function $f_1(z)$ is independent of the system shape that was used in the limiting process to infinity. Thirdly, there are properties that are universal in the sense of being independent of the lattice and percolation type, but still dependent upon the shape of the system, even in the limit that the system size becomes infinite. For example, the finite-size scaling of $P_\infty$ is given by \begin{equation} P_\infty(p,L) \sim L^{-\beta/\nu} f_2(b (p-p_c) L^{1/\nu}) \end{equation} where the scaling function $f_2(z)$ is universal only when comparing different systems of the same shape and boundary condition. (Again, $b$ has to be adjusted to make the different systems coincide, and will be the same $b$ as in $f_1(b(p-p_c)s^\sigma)$.) The reason that shape matters here is that, for $p$ close to $p_c$, the correlation length diverges, and the boundaries of the system are seen. Note $P_\infty = s_\mathrm{max}/L^d$ is just the size of the maximum cluster divided by the area or volume of the system, and the properties of the maximum cluster will depend upon the boundary of the system. Another well-known example of a shape-dependent quantity is the percolation crossing probability, for which Cardy derived his well-known formula for the crossing of a rectangular system of any aspect ratio \cite{cardy:92}. Here the system is made infinite but with the boundary shape fixed in the limiting process. The reference to system shape may seem irrelevant, since usually percolation is related to just connectivity.
However, there are finite-size effects that depend upon the large clusters of a system, and for those clusters there is a unique representation of a lattice in space that makes the cluster growth isotropic. For example, the triangular lattice can be deformed into a square lattice with diagonals in one direction, but in that representation the clusters would grow unequally in the two diagonal directions. To properly characterize the shape of the system, the triangles must be represented equilaterally. One of the earliest and most fundamental quantities to be studied in percolation is simply the number of clusters per site $n(p)$ as a function of the occupation probability $p$ \cite{fisher:essam:61,sykes:essam:64}; this quantity corresponds to the free energy of the percolating system \cite{fortuin:kasteleyn:72}. In an infinite system and for $p$ near $p_c$, $n(p)$ behaves as \begin{equation} \label{eq:rho-singularity} n(p) = A_0 + B_0(p-p_c)+C_0 (p-p_c)^2 + {\mathcal A}^\pm |p-p_c|^{2-\alpha}+\ldots\,. \end{equation} where the first three terms represent the analytical part of $n(p)$, and the last term represents the singular part. $ {\mathcal A}^\pm$ is the amplitude above ($+$) and below ($-$) the critical point $p_c$. In two dimensions, the critical exponent $\alpha$ has the universal value $\alpha = -2/3$ \cite{domb:pearce:76} and ${\mathcal A}^+ = {\mathcal A}^-$. However, the values of ${\mathcal A}^\pm$, as well as those of $A_0$, $B_0$ and $C_0$, are nonuniversal. The subscript $0$ indicates an infinite system. The singularity is a weak one: it is only the third derivative of $n(p)$ that becomes infinite at $p_c$. In terms of the correlation length $\xi\sim |p-p_c|^{-\nu}$ where $d \nu = 2 - \alpha$, the singularity in $n(p)$ is proportional to $\xi^{-d}$, where $d$ is the number of dimensions.
In 1976, Domb and Pearce \cite{domb:pearce:76}, using series analysis, found values of the coefficients $A_0$, $B_0$, $C_0$ and $ {\mathcal A}^\pm$ for two systems: site percolation on the triangular lattice, and bond percolation on the square lattice (see Table \ref{tab:ABC}). They used their results to conjecture that $\alpha = -2/3$, which proved correct. However, there has been little further determination or discussion of these quantities, other than $A_0$, since then. One exception is the finite-size correction to $n(p_c)$, the so-called excess cluster number \cite{ziff:finch:adamchik:97}, where measurements have been made and the shape dependence has been quantified theoretically. However, other correction quantities, and especially the strength of the singularity, have not been studied. In the present paper, we report several new high-precision results for the quantities in (\ref{eq:rho-singularity}), and also discuss, for the first time we believe, many aspects of the finite-size scaling corrections, with a focus on universality. We determine the metric factors $b$ using the same convention as Hu et al., that $b = 1$ for bond percolation on the square lattice, but then also propose an ``absolute'' definition of $b$ by using a fully universal property of the scaling function---the coefficient of the singular behavior, which we can take as equal to unity. We determine this absolute $b$ for site percolation on the triangular, square, honeycomb, and union-jack lattices, and for bond percolation on the square lattice, where $b$ is no longer equal to 1.
\section{Finite-size corrections and scaling theory} The leading amplitude $A_0$ in (\ref{eq:rho-singularity}) gives the critical number of clusters per site $n(p_c)$, and has been found exactly in only two cases: bond percolation on the square lattice, where the number of clusters per bond is \cite{temperley:lieb:71,ziff:finch:adamchik:97} \begin{equation} n(p_c) = A_0 = \frac{24 \sqrt{3} - 41}{32} = 0.017788106\ldots \label{A0squarebond} \end{equation} and bond percolation on the dual triangular and honeycomb lattices, where $n(p_c) = (1/3) [35/4 - 3/p_c^\mathrm{TR} -(1-p_c^\mathrm{TR})^6] = 0.01150783\ldots$ and $n(p_c) = (1/3) [35/4 - 3/p_c^\mathrm{TR} - (p_c^\mathrm{TR})^3] = 0.02331840\ldots$ bond clusters per bond, respectively, with $p_c^\mathrm{TR} = 2 \sin \pi/18$ \cite{baxter:temperley:ashley:78,ziff:finch:adamchik:97}. The next amplitude $B_0$ is known exactly for some systems. Sykes and Essam \cite{sykes:essam:64} showed that for site percolation on infinite planar lattices, \begin{equation} n(p) - \tilde n(1-p) = \phi(p) \label{eq:SykesEssamMatching} \end{equation} where $\tilde n$ represents the number of clusters on the \emph{matching lattice} in which the vertices in every face of the original lattice are completely connected, and $\phi(p)$ is the \emph{matching polynomial} or Euler characteristic \cite{neher:mecke:wagner:08} corresponding to the specific lattice. For all fully triangulated lattices, such as the triangular and union-jack lattices, as well as the square-bond covering lattice, the matching lattice is identical to the original lattice, $p_c = 1/2$, and $\phi(p) = p - 3 p^2 + 2 p^3$ \cite{sykes:essam:64}, implying\begin{equation} n'(p_c) = B_0 = \phi'(1/2)/2 = -1/4\,. \end{equation} For other lattices, we can find exact results if we include the matching lattice. 
For example, for a square (SQ) lattice (site percolation), $\phi(p) = p - 2 p^2 + p^4$, and it follows from (\ref{eq:SykesEssamMatching}) that the following combinations of quantities are known exactly in terms of $p_c$: \begin{equation} \begin{aligned} A_0^\mathrm{SQ}-A_0^\mathrm{NNSQ} &= p_c - 2 p_c^2 + p_c^4 = 0.01349562262604(1), \\ B_0^\mathrm{SQ}+B_0^\mathrm{NNSQ} &= 1 - 4 p_c + 4 p_c^3 = -0.537943928141750(5),\\ C_0^\mathrm{SQ}-C_0^\mathrm{NNSQ} &= - 4 + 12 p_c^2 = 0.2161745687555(3) \end{aligned} \label{eq:ABCmatching} \end{equation} using $p_c$ from \cite{jacobsen:15}, where NNSQ represents the square lattice with next-nearest-neighbor connections, which is the matching lattice of the square lattice. Next we consider the behavior for finite systems. For systems of length scale $L$, (\ref{eq:rho-singularity}) is replaced by \cite{aharony:stauffer:97} \begin{equation} \label{eq:rho-finite} n_L(p) = A_0 + B_0(p-p_c)+C_0 (p-p_c)^2 +L^{-d} f(z)+\ldots\,, \end{equation} where $f(z)$ is the leading scaling function. Here $z = b (p-p_c)L^{1/\nu}$ and $b$ is a metric factor depending on the lattice and percolation type, but not on the shape of the boundary of the system. The subscript $L$ on $n_L(p)$ indicates a finite system. We assume that the boundary conditions are periodic, so there are no surface correction terms. We do not consider higher-order corrections-to-scaling terms, such as $ L^{-2d} g(z)$, here. The scaling function $f(z)$ depends upon the system's shape, boundary conditions and dimensionality, but is universal for all percolation types, including different lattices with site or bond percolation, continuum systems, etc., for systems of the same shape. 
It is analytic around the origin, allowing us to make a Taylor expansion about $z = 0$: \begin{equation} \label{eq:nfinite} n_{L}(p) \sim A + B(p-p_c) + C(p-p_c)^2 + \ldots \end{equation} with \begin{subequations} \label{eq:ABC-scaling} \begin{align} \label{eq:ABC-scaling-a} A &= A_0 + A_1 L^{-d} + \ldots \\ \label{eq:ABC-scaling-b} B &= B_0 + B_1 L^{-d+{1}/{\nu}} + \ldots\\ \label{eq:ABC-scaling-c} C &= C_0 + C_1 L^{-d+{2}/{\nu}}+ \ldots \end{align} \end{subequations} and $A_1 = f(0)$, $B_1 = b f'(0)$, $C_1 = b^2 f''(0)/2$. The metric factor $b$ cancels out in the dimensionless ratio \begin{equation} R = \frac{A_1 C_1}{B_1^2} = \frac{f(0) f''(0)}{2 f'(0)^2} \label{eq:R} \end{equation} which is predicted to be universal for systems of a given shape. By including $A_1$ in this ratio, we also account for different definitions of the unit area of the system in $n(p)$, such as using clusters per bond rather than per site for the square-bond system. For $|z| \gg 1$, $f(z) \sim \hat{\mathcal A^\pm} |z|^{2 - \alpha}$, where $2 - \alpha = d \nu$, $\nu = 4/3$ in 2d and 0.8762 \cite{xu:wang:lv:deng:14} in 3d, and the amplitudes $\hat {\mathcal A^\pm}$ are universal for a given definition of $f(z)$. For large $z$, the behavior is not shape dependent, because $z \propto (L/\xi)^{1/\nu}$ so for $|z| \gg 1$, $\xi \ll L$ and the boundaries are not seen. Substituting $z = b(p-p_c)L^{1/\nu}$ into $f(z)$, we find for $z \gg 1$ that $L^{-d} f(z) \sim \hat{\mathcal A^\pm} b^{2-\alpha}|p-p_c|^{2-\alpha}$, which implies the singular term in (\ref{eq:rho-singularity}) with \begin{equation} \label{eq:universalA} {\mathcal A}^\pm = b^{2 - \alpha} \hat {\mathcal A^\pm}. \end{equation} This equation shows the relation between the universal ($\hat {\mathcal A^\pm}$) and non-universal coefficients (${\mathcal A}^\pm $) for the different systems. Note that this implies $B_1/(- {\mathcal A^\pm})^{1/(2-\alpha)}$ is another universal ratio along with $R$. We discuss these universal ratios below.
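The cancellation of $b$ (and of the unit-of-area convention) in this ratio is elementary, but it can be made explicit with a short symbolic check (ours, not from the paper):

```python
# SymPy sketch (ours): the metric factor b cancels in R = A1*C1/B1**2,
# with A1 = f(0), B1 = b*f'(0), C1 = b**2*f''(0)/2.
import sympy as sp

b, f0, f1, f2 = sp.symbols('b f0 f1 f2', positive=True)
A1, B1, C1 = f0, b * f1, b**2 * f2 / 2

R = sp.simplify(A1 * C1 / B1**2)   # -> f0*f2/(2*f1**2), independent of b
assert b not in R.free_symbols

# An overall rescaling n -> s*n multiplies A1, B1 and C1 alike, so the
# unit-of-area convention cancels too (s^2 against s^2):
s = sp.symbols('s', positive=True)
assert sp.simplify((s * A1) * (s * C1) / (s * B1)**2 - R) == 0
```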
The correction term $A_1$ in (\ref{eq:ABC-scaling}) is the excess cluster number \cite{ziff:finch:adamchik:97}. It is the difference between the actual cluster number $L^2 n(p_c)$ and the expected number $L^2 A_0$, and for compact shapes it is of order 1. Using results from conformal field theory, $A_1$ can be calculated exactly \cite{ziff:lorenz:kleban:99}, with $A_1=0.883576308\ldots$ for a square torus, and $0.878290117\ldots$ for a 60$^\circ$ periodic rhombus \cite{mertens:jensen:ziff:16}, which is equivalent to a rectangle of aspect ratio $\sqrt{3}/2$ with a twist of $1/2$. This rhombus is a natural system boundary shape for triangular, hexagonal, and related systems and is conjectured to give the lowest value of $A_1$ for any repeatable shape of a periodic system \cite{ziff:lorenz:kleban:99}. \begin{table}[htbp] \caption{ Values of the coefficients $A_0$, $B_0$, $C_0$, $A_1$, $B_1$, and $C_1$ in (\ref{eq:nfinite}) and (\ref{eq:ABC-scaling}) for 2D and 3D systems found in previous papers as cited, and in this work by microcanonical MC simulations ($m$), series analysis ($s$), conformal invariance ($c$), or duality ($d$). Numbers in parentheses give errors in the last digit(s). All are for site percolation except for the square-bond case. In the latter case, the results are per bond rather than per site on the lattice, accounting for a factor of two decrease in $A_1$ from the other square-boundary cases SQ and UJ.
} \label{tab:ABC} \begin{tabular}{lcll} \hline \hline Lattice & \multicolumn{1}{c}{$X$} & \multicolumn{1}{c}{$X_0$} & \multicolumn{1}{c}{$X_1$} \\%[1ex] % \hline Square & $A$ &$\phantom{-}0.027 598 1(3)$\textsuperscript{\cite{ziff:finch:adamchik:97}}& $\phantom{-}0.8835(5)$\textsuperscript{\cite{ziff:finch:adamchik:97}}\\ & & $\phantom{-}0.02759791(5)$\textsuperscript{\cite{tiggemann:01}} & $\phantom{-}0.883 576 308...$\textsuperscript{\cite{ziff:lorenz:kleban:99}} \\ & & $\phantom{-}0.02759800(5)$\textsuperscript{\cite{hu:bloete:deng:12}} &\\ & & $\phantom{-}0.02759803(2)$\textsuperscript{m}& $\phantom{-}0.8834(1)$\textsuperscript{m}\\ % & $B$ & $-0.3205738(7)$\textsuperscript{m} &$\phantom{-}0.8708(2)$\textsuperscript{m}\\ % & $C$ &$\phantom{-}1.9669(3)$\textsuperscript{m}&$-3.286(3)$\textsuperscript{m}\\%[2ex] % \hline Honeycomb & $A$ & $\phantom{-}0.03530709(1)$\textsuperscript{m} & $\phantom{-}0.9468(1)$\textsuperscript{m}\\ & & & $\phantom{-}0.946 883 263\ldots$\textsuperscript{c}\\[0.5ex] & $B$ & $-0.4109549(6)$\textsuperscript{m} & $\phantom{-}0.8260(1)$\textsuperscript{m}\\ & $C$ & $\phantom{-}2.3082(2)$\textsuperscript{m} & $-3.898(1)$\textsuperscript{m}\\%[2ex] % \hline Triangular & $A$ & $\phantom{-}0.0168(2)$\textsuperscript{\cite{domb:pearce:76}} &\\ & & $\phantom{-}0.017 630(2)$\textsuperscript{\cite{margolina:etal:84}} & \\ & & $\phantom{-}0.017 626(1)$\textsuperscript{\cite{rapaport:86}} & \\ & & $\phantom{-}0.017 625 5 (5)$\textsuperscript{\cite{ziff:finch:adamchik:97} } & $\phantom{-}0.878(1)$\textsuperscript{\cite{ziff:finch:adamchik:97} } \\ & & $\phantom{-}0.017625277(4)$\textsuperscript{m} & $\phantom{-}0.87839(7)$\textsuperscript{m} \\ & & $\phantom{-}0.017625277368(2)$\textsuperscript{s} & $\phantom{-}0.878290117\ldots$\textsuperscript{c} \\ & $B$ & $-0.2500006(3)$\textsuperscript{m} & $\phantom{-}0.8807(1)$\textsuperscript{m} \\ & & $-1/4$\textsuperscript{d} &\\[1ex] & $C$ & $\phantom{-}1.5(2)$\textsuperscript{\cite{domb:pearce:76}} & \\ & 
& $\phantom{-}1.91392(9)$\textsuperscript{m} & $-3.2909(8)$\textsuperscript{m}\\ & & $\phantom{-}1.91391790(5)$\textsuperscript{s} & \\ \hline Union-Jack & $A$ & $\phantom{-}0.025662605(6)$\textsuperscript{m} & $\phantom{-}0.88345(8)$\textsuperscript{m}\\ & & & $\phantom{-}0.883 576 308...$\textsuperscript{\cite{ziff:lorenz:kleban:99}} \\ & $B$ & $-0.2500005(3)$\textsuperscript{m} & $\phantom{-}0.76074(5)$\textsuperscript{m}\\ & & $-1/4$\textsuperscript{d} & \\[0.5ex] & $C$ & $\phantom{-}1.41334(5)$\textsuperscript{m} & $-2.5206(3)$\textsuperscript{m}\\%[2ex] \hline Square (bond) & $A$ & $\phantom{-}0.0173(3)$\textsuperscript{\cite{domb:pearce:76}} & \\ & & $\phantom{-}0.017788096(3)$\textsuperscript{m} & $\phantom{-}0.44183(1)$\textsuperscript{m}\\ & & $\phantom{-}0.017788106(1)$\textsuperscript{s} & $\phantom{-}0.441 783 154...$ \textsuperscript{\cite{ziff:lorenz:kleban:99}}\\ & & $\phantom{-}0.01778810567665\ldots$ & \\ & & $\phantom{-} = (24 \sqrt{3} - 41)/32$\textsuperscript{\cite{temperley:lieb:71,ziff:finch:adamchik:97}}& \\ & $B$ & $-0.2499995(4)$\textsuperscript{m} & $\phantom{-}0.55504(7)$\textsuperscript{m}\\ & & $-1/4$\textsuperscript{d} & \\ & $C$ & $\phantom{-}1.4(3)$\textsuperscript{\cite{domb:pearce:76}} & \\ & & $\phantom{-}1.87706(4)$\textsuperscript{m}& $-2.6882(5)$\textsuperscript{m} \\ & & $\phantom{-}1.87714(2)$\textsuperscript{s} & \\ \hline Cubic & $A$ & $\phantom{-}0.0524387(3)$\textsuperscript{\cite{tiggemann:01}} & \\ & & $\phantom{-}0.052 438 218(3)$\textsuperscript{\cite{wang:etal:13}} & $\phantom{-}0.6746(3)$\textsuperscript{\cite{wang:etal:13}} \\ & & $\phantom{-}0.052438223(3)$\textsuperscript{m} & $\phantom{-}0.6748(2)$\textsuperscript{m}\\ & $B$ & $-0.4107249(5)$\textsuperscript{m} & $\phantom{-}1.7147(4)$\textsuperscript{m}\\ & $C$ & $\phantom{-}0.4405(6)$\textsuperscript{m} & $-1.004(7)$\textsuperscript{m}\\ \hline \hline \end{tabular} \end{table} \section{Measurements} In order to study these quantities, we carried out extensive 
studies using several different methods. Details will be given in another paper \cite{mertens:jensen:ziff:16}. Many of the results are summarized in Table \ref{tab:ABC}, where previous values are also listed. First of all, we extended the series analysis of $n(p)$ for the triangular lattice to 69th order. In 1976, Domb and Pearce \cite{domb:pearce:76} used a 19th-order analysis to find $\alpha = -0.668(4)$, and they also found accurate values of $A_0$, $B_0$, $C_0$ and $\mathcal A^\pm$. Using Domb and Pearce's powerful substitution $u = p(1-p)$ on $B(u) = \phi(p)/2 + n(p)$ \cite{domb:pearce:76} in our series, we find the very precise result \begin{equation} n(p_c) = A_0 = 0.017625277368(2) \label{triangularsite} \end{equation} and also to high accuracy the exponent $\alpha = -0.6666669(4)$, an unusually precise test of a critical exponent. We also checked the result (\ref{A0squarebond}) for $A_0$ of the square bond lattice, and found agreement using a 72-order series (see Table \ref{tab:ABC}), although the convergence here was slower than for the triangular lattice. Secondly, we found exact results for $n(p)$ for small $L \times L$ systems using the Newman-Ziff (NZ) method \cite{newman:ziff:01}. The NZ method computes $n(p)$ by occupying the sites (or bonds) one by one in random order. The cluster structure can be updated very efficiently because the changes are triggered by local events. For exhaustive enumerations, we have to loop over all $2^{L\times L}$ configurations and record the cluster structure for each. If one does this in the obvious fashion, e.g., by binary counting, consecutive configurations can differ in many sites; even with a Gray code, where consecutive configurations differ in a single site, that site frequently changes from occupied to empty. This is something the NZ method cannot handle, and one must then recompute the cluster structure from the empty lattice.
There is, however, a clever way to loop through all $2^{L\times L}$ configurations by adding an occupied site most of the time, while the number of transitions that require a restart grows only like $O(2^L)$. With this method, exact computation of $n_L(p)$ is possible for $L\leq 7$ \cite{mertens:jensen:ziff:16}. For the square lattice with periodic boundary conditions and $L=3$, for example, the polynomial is \begin{equation} \label{polynomial} \begin{aligned} n_3(p) = &9 p q^8 + 54 p^2 q^7 + 132 p^3 q^6 + 171 p^4 q^5\\ & + 135 p^5q^4 + 84p^6q^3+36p^7q^2+9p^8q +p^9\,, \end{aligned} \end{equation} where $q = 1 - p$. We considered several systems with $L$ up to 7, and the resulting polynomials of order $L^2$ are posted on \cite{mertens:website}. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{cn2-square.pdf} \caption{Second derivative of the cluster density $n_L(p)$ for square lattices of size $L\times L$ for $L=8,16,\ldots,1024$. Error bars are much smaller than the linewidth. The vertical dashed line marks the percolation threshold $p_c$.} \label{fig:2d-example} \end{figure} Thirdly, we carried out Monte-Carlo (MC) simulations using the NZ method, which generates the microcanonical weights---essentially approximations for the coefficients in polynomials such as (\ref{polynomial}), but for much larger systems. In this method, occupied sites are added one at a time, and an efficient union-find procedure is used to update the cluster connectivity. 
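As an independent sanity check (our own, not part of the paper's pipeline), the coefficients of (\ref{polynomial}) can be reproduced by looping over all $2^9$ occupation configurations of the $3\times 3$ torus and counting clusters directly:

```python
from itertools import product

# Brute-force check of the L = 3 polynomial: enumerate all 2^9 occupation
# configurations of the 3x3 periodic square lattice and accumulate, for each
# occupation number k, the total cluster count over all configurations.
# These totals are the coefficients of p^k q^(9-k) in the polynomial above.
L = 3

def neighbors(x, y):
    """4-connected neighbors with periodic boundary conditions."""
    return [((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]

def count_clusters(occupied):
    """Count connected clusters among the occupied sites (depth-first search)."""
    seen, clusters = set(), 0
    for site in occupied:
        if site in seen:
            continue
        clusters += 1
        stack = [site]
        while stack:
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            stack.extend(n for n in neighbors(*s) if n in occupied)
    return clusters

coeff = [0] * (L * L + 1)
for bits in product((0, 1), repeat=L * L):
    occupied = {(i % L, i // L) for i, b in enumerate(bits) if b}
    coeff[len(occupied)] += count_clusters(occupied)

# coeff[1:] reproduces 9, 54, 132, 171, 135, 84, 36, 9, 1
```

The same enumeration, reorganized to add one site per step as described above, is what makes the exhaustive computation feasible up to $L = 7$.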
Once the microcanonical weights $N_{i,L}$ (the average number of clusters when $i$ sites are occupied in a system of linear size $L$) are found, the canonical $p$-dependent expressions follow from a convolution with the binomial distribution: \begin{equation} n_L(p) = \frac{1}{N} \sum_{i=0}^N N_{i,L} \binom{N}{i} p^i (1-p)^{N-i} \label{eq:convolution0} \end{equation} Derivatives $n^{[k]}_L(p)$ can be found by a similar convolution \begin{equation} n^{[k]}_L(p) = \frac{1}{N} \sum_{i=0}^N N_{i,L} {\mathcal D_{k,i}} \binom{N}{i} p^i (1-p)^{N-i} \label{eq:convolution} \end{equation} with ${\mathcal D_{1,i}} = (i - p N)/[p(1-p)]$ and ${\mathcal D_{2,i}}=[i^2 - (1+2(N-1)p) i +N(N-1)p^2]/[p(1-p)]^2$ for the first and second derivatives respectively. In the MC work we considered $L \times L$ systems with $L$ up to $1024$ for site percolation (s) on the SQ, NNSQ, triangular (TR), honeycomb (HC), and union-jack (UJ) lattices, the 3d cubic lattice, and bond percolation (b) on the SQ lattice. For the TR lattice, we used a periodic square lattice with diagonal bonds, so the system shape was effectively a 60$^\circ$ rhombus. For the HC lattice, we also used a square lattice but with half the vertical bonds missing in a brick pattern, so the effective shape was a rectangle with aspect ratio $\sqrt{3}$. For each size and lattice type we computed up to $10^{10}$ samples. Fig.\ \ref{fig:2d-example} shows $n_L''(p)$ for the square-site problem, clearly demonstrating the development of the branch-point singularity, something not calculated before. (Note that peaked plots of closely related ``specific heat'' functions were given by \cite{kirkpatrick:76} and more recently by \cite{hu:bloete:ziff:deng:14}.) \begin{figure} \centering \includegraphics[width=\columnwidth]{square-C.pdf} \caption{An example of a plot of the MC and exact-enumeration data, used to find coefficients given in Table~\ref{tab:ABC}: $n''_{L}(p_c) = 2 C$ for site percolation on square lattices vs.\ $L^{-1/2}$.
The line is a fit of (\ref{eq:ABC-scaling-c}) which yields values for $C_0$ and $C_1$. Error bars of the MC data are much smaller than the size of the symbols.} \label{fig:extrapolations} \end{figure} Finally, we carried out a Monte-Carlo simulation at fixed $p = p_c$, counting clusters and keeping track of $\langle N_c \rangle$, $\langle N_s \rangle$ and $\langle N_c N_s \rangle$, where $N_c$ is the number of clusters and $N_s$ is the number of occupied sites in each sample, with $\langle N_s \rangle / L^2 = p$. These allow $B_0 = n'(p_c)$ to be calculated from \begin{equation} n'(p) =\frac{\langle N_s N_c \rangle - \langle N_s \rangle \langle N_c \rangle }{L^2 p(1-p)} \end{equation} which follows from (\ref{eq:convolution}) for $k = 1$. We carried this out for site percolation simultaneously on the matching SQ and NNSQ lattices, identifying nearest-neighbor clusters on the black sites (occupied with probability $p$) and next-nearest neighbor clusters on the white sites (occupied with probability $1-p$) for each sample. We confirmed our values of $B_0$ and also verified that the matching relation (\ref{eq:ABCmatching}) holds to a high degree of accuracy. Analyzing these results \cite{mertens:jensen:ziff:16}, we find the values of the amplitudes listed in Table \ref{tab:ABC}. Agreement with exact results and with previous values is generally good. The early results of Domb and Pearce \cite{domb:pearce:76} have been vastly improved. Plots of the data of $n_L(p)$, $n_L'(p)$ and $n_L''(p)$ verified that the scaling predicted by (\ref{eq:ABC-scaling}) is correct; for example, the plot for $n_L''(p)$ for site percolation on the square lattice is given in Fig.\ \ref{fig:extrapolations}. 
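The covariance expression for $n'(p)$ above can be checked numerically on a toy model (our substitution, not from the paper: a one-dimensional ring replaces the $L \times L$ lattice so that the exact answer is known in closed form). For site percolation on a ring of $N$ sites, $n(p) = p(1-p) + p^N/N$ exactly, so $n'(p) \approx 1 - 2p$ for large $N$:

```python
import numpy as np

# MC check of n'(p) = [<N_s N_c> - <N_s><N_c>] / (N p (1-p)) on a ring of
# N sites, where n'(p) = 1 - 2p + p^(N-1) exactly (here ~ 0.4 for p = 0.3).
rng = np.random.default_rng(1)
N, p, samples = 500, 0.3, 20000

occ = rng.random((samples, N)) < p           # occupied sites, one row per sample
Ns = occ.sum(axis=1)                         # occupied-site count per sample
# each occupied site whose left neighbor is empty starts a new cluster
Nc = (occ & ~np.roll(occ, 1, axis=1)).sum(axis=1)

deriv = (np.mean(Ns * Nc) - Ns.mean() * Nc.mean()) / (N * p * (1 - p))
# deriv should be close to 1 - 2p = 0.4
```

The identity behind the estimator is the score-function relation $\mathrm{d}\,\mathbb{E}[N_c]/\mathrm{d}p = \mathrm{Cov}(N_s, N_c)/[p(1-p)]$ for a product Bernoulli measure, which holds for any lattice, including the $L\times L$ systems studied here.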
\begin{table}[tpb] \caption{Values of shape-dependent universal quantities $R = A_1 C_1 / B_1^2$, $B_1/b = B_1/(-{\mathcal A^\pm})^{3/8}$ and $C_1/b^2 = C_1/(-{\mathcal A^\pm})^{3/4}$, using $b[\mathcal A^\pm]$abs.\ from Table \ref{tab:metric}.} \begin{center} \begin{tabular}{lcccc} \hline \hline System & Shape & $R$ & $B_1/b$ & $C_1/b^2$\cr \hline SQ,b & square & $-3.855(2)$ & $0.4995(1)$ & $-1.0884(2)$ \\ SQ,s & square & $-3.829(4)$ & $0.5002(1)$ & $-1.084(1)$ \\ UJ,s & square & $-3.8484(5)$ & $0.4999(2)$ & $-1.088(1)$ \\ TR,s & rhomb. & $-3.726(2)$ & $0.5064(1)$ & $-1.0883(2)$ \\ HC,s & $\sqrt{3}$ rect. & $-5.410(3)$ & $0.4393(1)$ & $-1.1024(10)$ \\ \hline \hline \end{tabular} \end{center} \label{tab:B1C1square} \end{table}% Calculating the quantity $R$ of (\ref{eq:R}) we find the values given in Table \ref{tab:B1C1square}. The three square-boundary systems give similar values, consistent with a common value of $R=-3.844(10)$, while for the TR and HC systems, simulated on a rhombus and a rectangle respectively, the value is different. This confirms our expectations about the shape-dependent but otherwise universal behavior of $R$. Relative metric factors $b$ can be calculated from $B_1$ and $C_1$ for systems of the same shape by the equations below (\ref{eq:ABC-scaling}), which imply \begin{align} b/b' &= B_1'/B_1 \label{eq:bB}\\ b/b' &= (C_1'/C_1)^{1/2} \label{eq:bC} \end{align} where the prime indicates a reference system. The relative $b$'s can also be calculated from ${\mathcal A}^\pm$, which is not shape-dependent and therefore can be used for all 2d systems we consider, irrespective of the shape that was used in the simulations: \begin{equation} b/b' = [{\mathcal A}^\pm/({\mathcal A}^\pm)']^{3/8} \label{eq:bA} \end{equation} from (\ref{eq:universalA}).
We can choose a convention, such as that of Hu et al.\ \cite{hu:lin:chen:95,hu:lin:chen:95b}, that $b'=1$ for bond percolation on the square lattice; this yields the values of $b$ given in the first four columns of Table \ref{tab:metric}. Note that, in order to use this system as a reference, we have to multiply the quantities for the square-bond model by 2 to account for the fact that they represent the number of clusters per bond, not per site, and there are two bonds per site on the square lattice. \begin{table}[t] \caption{Metric factor $b$ calculated from $B_1$ of (\ref{eq:bB}), $C_1$ of (\ref{eq:bC}), and $\mathcal A^\pm$ of (\ref{eq:bA}), normalized to those of the SQ,b system (with a factor of two in the coefficients of the SQ,b system because there are two bonds per lattice site). Results for $b$ from Hu et al.\ \cite{hu:lin:chen:95,hu:lin:chen:95b} are also shown. In the last column are the values $b$ based upon the convention $\hat{\mathcal A}^\pm = -1$, calculated from (\ref{eq:absolute}).} \begin{center} \begin{tabular}{lcccc|c} \hline \hline Lattice & $b[B_1]$ & $b[C_1]$ & $b[\mathcal A^\pm]$ & $b$[Hu] & $b[\mathcal A^\pm]$abs.\cr \hline SQ,b & 1& 1 & 1&1 & 2.22254(8) \cr SQ,s & 0.7847(3) & 0.7818(4) & 0.7810(10) & 0.79& 1.7410(6)\cr UJ,s & 0.6854(1) & 0.6847(1) & 0.6815(11) & - & 1.522(6) \cr TR,s & - & - & 0.780(2) & 0.79 & 1.73897548(3) \cr HC,s & - & - & 0.8435(14) & 0.86 & 1.8804(7)\cr \hline \hline \end{tabular} \end{center} \label{tab:metric} \end{table}% The quantity $\mathcal A^\pm$ can be difficult to measure because, for a finite-size system, it represents the behavior for sufficiently large $|p - p_c|$ so that $\xi \ll L$, yet still within the scaling region. Our 2d results for $\mathcal A^\pm$ are given in Table \ref{tab:amplitude}. We also show the values of $\hat{\mathcal A}^\pm$, and for the cases where we have measured values of $b$, we find good evidence of universality of that quantity for systems of different shapes.
\begin{table}[htpb] \caption{The non-universal amplitude $\mathcal A^\pm$ for 2d lattices, with our series ($s$) and MC ($m$) results, along with results from Domb and Pearce \cite{domb:pearce:76}. The final column shows $\hat{\mathcal A}^\pm = b^{-8/3} \mathcal A^\pm$, using our values of $b$ given in the first two columns of Table~\ref{tab:metric}, representing the SQ,b, SQ,s and UJ systems with the same square boundary. The results for our measurements on the SQ,b, SQ,s and UJ,s systems give a fairly consistent value of 8.42. For the last two cases, the HC and TR lattices, we use the values of $b$ from \cite{hu:lin:chen:95} to find $\hat{\mathcal A}^\pm$ from the ${\mathcal A}^\pm$, and find less consistent values of $\hat{\mathcal A}^\pm$. For the square-bond system, we have to double the value of $\hat{\mathcal A}^\pm$ because of the different basis used. These values of $\hat{\mathcal A}^\pm$ are based upon the convention that $b = 1$ for bond percolation on the square lattice. } \begin{center} \begin{tabular}{lll} \hline \hline Lattice & \quad $-\mathcal A^\pm$ & $-\hat{\mathcal A}^\pm$\cr \hline SQ,b & $4.240(15) \textsuperscript{\cite{domb:pearce:76}}$, $4.211(1) \textsuperscript{m}$, 4.2063(2)\textsuperscript{s} & 8.41\cr SQ,s &$ 4.3867(4) \textsuperscript{m}$ & 8.45\cr UJ,s & $3.064(3) \textsuperscript{m}$ & 8.40 \cr TR,s & $4.370(15) \textsuperscript{\cite{domb:pearce:76}} $, $4.379(2) \textsuperscript{m}$, $4.3730310(2) \textsuperscript{s}$ & 8.20 \cr HC,s & $5.387(5) \textsuperscript{m}$ & 8.05\cr \hline \hline \end{tabular} \end{center} \label{tab:amplitude} \end{table} \section{Absolute value of the metric factor $b$} Having verified universality of $\hat{\mathcal A}^\pm$, we can turn this around and use it to propose a definition of $b$ that is not based upon a reference lattice but instead is based upon the universal behavior of $f(z)$.
Because the quantity $\hat{\mathcal A}^\pm$ is independent of both the lattice type and the system shape, it is a natural choice for this purpose. There is freedom to choose an arbitrary overall scale factor for $z$ in $f(z)$, and we can fix that scale factor by requiring $\hat{\mathcal A}^\pm = -1$. By (\ref{eq:universalA}), this choice implies that $b$ can be calculated from \begin{equation} b = (-{\mathcal A}^\pm)^{3/8} \label{eq:absolute} \end{equation} which leads to the values of $b$ given in the last column of Table \ref{tab:metric}. We call these ``absolute'' values of $b$ because we are not assuming $b=1$ for any particular system. Using these values for the absolute metric factor $b$, we can find the shape-dependent but otherwise universal behavior of $f(z)$: \begin{align} f(z) &= f(0) + z f'(0) + z^2 f''(0) /2 + \hat{\mathcal{A}}^\pm |z|^{8/3} \\ &= A_1 + z (B_1/b) + z^2 (C_1/b^2) - |z|^{8/3} \end{align} For our three systems with the square boundary, we find very good consistency in these coefficients (see Table \ref{tab:B1C1square}), yielding \begin{equation} f(z)= 0.883576 + 0.5000(2)z - 1.088(1) z^2 - |z|^{8/3} \label{eq:fsquare} \end{equation} with the intriguing result that $B_1/b = B_1/(-{\mathcal A}^\pm)^{3/8}$ seems to equal exactly $1/2$ for the square boundary. We have no explanation for this value. For the other boundary shapes, we have one system each. For a system with a rhombus boundary, or equivalently a rectangle of aspect ratio $\sqrt{3}/2$ with a twist of $1/2$ (which we used for the TR lattice), we find \begin{equation} f(z)= 0.878290 + 0.5064(1)z - 1.0883(2) z^2 - |z|^{8/3} \label{eq:ftr} \end{equation} For the HC system, where we used a rectangular boundary of aspect ratio $\sqrt{3}$, we find \begin{equation} f(z)= 0.946883 + 0.4393(1) z - 1.1024(10) z^2 - |z|^{8/3} \label{eq:fhc} \end{equation} Thus, we see, as predicted, that systems of different shapes have different forms of $f(z)$ for small $z$.
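The ``absolute'' values of $b$ in the last column of Table \ref{tab:metric} can be reproduced by direct arithmetic. The following sketch (our check, not the paper's code) applies (\ref{eq:absolute}) to the amplitudes of Table \ref{tab:amplitude}, doubling the SQ,b amplitude to convert from per-bond to per-site counting:

```python
# Check that b = (-A^pm)^(3/8) applied to the series/MC amplitudes of
# Table tab:amplitude reproduces the "absolute" column of Table tab:metric.
minus_A = {
    "SQ,b": 2 * 4.2063,   # series value, doubled (two bonds per site)
    "SQ,s": 4.3867,
    "UJ,s": 3.064,
    "TR,s": 4.3730310,
    "HC,s": 5.387,
}
b_table = {"SQ,b": 2.22254, "SQ,s": 1.7410, "UJ,s": 1.522,
           "TR,s": 1.73897548, "HC,s": 1.8804}

b_abs = {k: a ** 0.375 for k, a in minus_A.items()}   # exponent 3/8
for k, b in b_abs.items():
    assert abs(b - b_table[k]) < 2e-3, (k, b)
```

All five lattices agree with the tabulated absolute metric factors to within the quoted uncertainties.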
Interestingly, it seems that $C_1/b^2$ is the same for the 60$^\circ$ rhombus (the TR system) as for the three square systems. However, for the $\sqrt{3}$ rectangle (the HC system), it is somewhat different. We have no explanation for this behavior. Clearly, an interesting area for future study would be to find $f(z)$ for systems of more shapes, and also to verify universality by considering different lattices of a given shape. \section{The function $M_L(p)$} \begin{figure}[htpb] \centering \includegraphics[width=\columnwidth]{square-match.pdf} \caption{$M_L(p)=L^2[n^\mathrm{SQ}(p) - n^\mathrm{NNSQ}(1-p)-\phi(p)]$ vs.\ $p$ from exact enumeration results for $L = 3, 4, 5, 6, 7$ (solid lines) and $L = 8, 12, 16, 24, 32, 48$ from MC (dashed lines), and (inset) as a function of the scaling variable $(p-p_c)L^{1/\nu}$, yielding $f(z)-f(-z)$. In the limit $L \to \infty$, $M_L(p)$ becomes a step function.} \label{fig:matching} \end{figure} We also analyzed the function $M_L(p)=L^2[n_L^{\mathrm{SQ}}(p) - n_L^{\mathrm{NNSQ}}(1-p)-\phi(p)]$, where $\phi(p) = p - 2 p^2 + p^4$ is the matching polynomial (\ref{eq:SykesEssamMatching}) for the square lattice. Note that $M_L(p)/L^2\to0$ as $L\to\infty$, but $M_L(p)$ converges to a step function independent of $L$ that jumps from $-1$ to $+1$ at $p=p_c$; see Fig.~\ref{fig:matching}. At $p_c$, $M_L(p_c)$ appears to vanish as $L^{-4}$ as $L \to \infty$, which implies that finding where $M_L(p) = 0$ is a very sensitive criterion for locating $p_c$. In fact, this is identical to the criterion used by Jacobsen and Scullard \cite{scullard:jacobsen:12,jacobsen:14,jacobsen:15}, whose studies yielded the most precise estimates of percolation thresholds to date. We discuss $M_L(p)$ further in Ref.\ \cite{mertens:ziff:16}, where it is also shown that $M_L(p)$ is related to the probability of the existence of wrapping clusters on the lattice and matching lattice.
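For the smallest system the step behavior of $M_L(p)$ can be checked in closed form. The following sketch (our construction, not from the cited references) combines the exact $L=3$ polynomial (\ref{polynomial}) with the observation that, under next-nearest-neighbor adjacency, the $3\times 3$ torus is a complete graph, so the occupied NNSQ sites always form a single cluster whenever any is occupied:

```python
# Closed-form check that M_3(p) runs from -1 at p = 0 to +1 at p = 1
# and is already small near p_c, using the exact L = 3 polynomial.
def clusters_sq3(p):
    """Expected cluster count on the 3x3 periodic square lattice."""
    q = 1.0 - p
    coeff = [9, 54, 132, 171, 135, 84, 36, 9, 1]
    return sum(c * p ** k * q ** (9 - k) for k, c in enumerate(coeff, 1))

def M3(p):
    phi = p - 2 * p ** 2 + p ** 4      # matching polynomial for the square lattice
    nnsq = 1.0 - p ** 9                # expected NNSQ clusters at occupation 1 - p
    return clusters_sq3(p) - nnsq - 9 * phi

assert M3(0.0) == -1.0 and M3(1.0) == 1.0
assert abs(M3(0.592746)) < 0.05        # small already at p_c for L = 3
```

Even at $L=3$ the zero of $M_L(p)$ lies remarkably close to $p_c$, consistent with the sensitivity of this criterion noted above.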
In the inset to Fig.~\ref{fig:matching} we show a plot of $M_L(p)$ as a function of $(p-p_c)L^{1/\nu}$ for the square-site system. Because of the relations (\ref{eq:ABCmatching}), it follows that in the scaling limit $M_L(p) = f(z) - f(-z)$, all the terms proportional to $L^{2}$ having cancelled out. If we had plotted the inset vs.\ $z = b (p - p_c)L^{1/\nu}$ with $b$ equal to its absolute value $b = 1.741$, then by (\ref{eq:fsquare}) the slope at $z = 0$ would be exactly 1. Finally, we also carried out simulations for site percolation on a cubic lattice in three dimensions, and these results are shown in Table \ref{tab:ABC}. The behavior was found to be consistent with the scaling predictions of equation (\ref{eq:ABC-scaling}). \section{Conclusions} In this paper, we have found many new results concerning the function $n(p)$, including \begin{itemize} \item A discussion of the finite-size corrections to $A$, $B$, and $C$, including a derivation of the scaling of those terms. \item The verification of that scaling on several different system types. \item A discussion of the use of the coefficient ${\mathcal A}^\pm$ of the singular term in $n(p)$ to define an absolute, rather than relative, value of the metric factor $b$. \item A visualization of the formation of a cusp in $n''(p)$, Fig.\ \ref{fig:2d-example}. \item The extension of previous work on metric factors \cite{hu:lin:chen:95} to a new system, the union-jack lattice. This system is interesting to study because it is fully triangulated, so has a site threshold of 1/2, but can be made into a perfect square, so is useful for comparing with other square systems. \item A discussion of shape-dependent universality \cite{ziff:lorenz:kleban:99,aharony:stauffer:97}, as summarized in Table~\ref{tab:universality}. \smallskip \item Application of the Sykes-Essam matching polynomial to find relations for $A$, $B$, and $C$ between a lattice and its matching lattice.
\item Development of new algorithms for carrying out the simulations and series analyses. \item The determination of many precise values concerning $n(p)$, including a very precise determination of $n(p_c)$ for site percolation on the triangular lattice, using a much extended series expansion for that system. \item A discussion of $M_L(p)$, which directly yields an anti-symmetrized version of the scaling function $f(z)$. \item The derivation of universal expressions for $f(z)$ for systems of three different shapes (\ref{eq:fsquare},\ref{eq:ftr},\ref{eq:fhc}), based upon our standard definition of $b$. Note that $f(z)$ is a subtle function to observe, as it corresponds to finite-size corrections to $n(p)$. \end{itemize} \begin{table}[htpb] \caption{Universality properties of various quantities related to $n(p)$. A check in the first column means that the quantity depends upon the shape of the boundary of the system (with periodic b.\ c.); a check in the second column means that the quantity depends upon the lattice and percolation type (site or bond). The final column shows the dependence on dimensionality, which applies to all of the quantities here. } \begin{center} \begin{tabular}{lccc} \hline \hline Quantity & Shape & Lattice & Dimensionality\cr \hline $\alpha,\nu \ldots$ & & & \checkmark \cr $\hat{\mathcal A}^\pm = b^{\alpha-2}{\mathcal A}^\pm$ & & & \checkmark \cr $ A_0, B_0, C_0 $ & &\checkmark & \checkmark \cr $b$ & & \checkmark & \checkmark \cr $ f(z) $ & \checkmark & & \checkmark \cr $ A_1$, $b^{-1}B_1$, $b^{-2}C_1$, $R $ & \checkmark & & \checkmark \cr $B_1$, $C_1$ & \checkmark & \checkmark & \checkmark \cr \hline \hline \end{tabular} \end{center} \label{tab:universality} \end{table}% Future work is suggested to study $n(p)$ and $f(z)$ for different lattices and boundary shapes, as well as the behavior in higher dimensions.
Perhaps new exact results for some of these quantities can also be found, such as $n(p_c)$ for site percolation on the triangular lattice, where we found the precise value (\ref{triangularsite}). The dependence of $B_1/b$ and $C_1/b^2$ on the system shape also seems interesting, since these quantities are related to the scaling function $f(z)$. {\it Acknowledgments:} IJ was supported under the Australian Research Council's Discovery Projects funding scheme by the grant DP140101110, and IJ's computational work was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI), which is supported by the Australian Government. The authors thank Peter Kleban for help in calculating the conformal excess number $A_1$ for the three different system shapes.
\subsection{Assessing the Diversity of a Music List}\label{sec:qual1} Interviewees mentioned several aspects related to what diversity means to them and how they interpreted it while comparing lists of tracks and artists. When asked about their strategy in choosing the most diverse lists, they often started the discussion by stating whether or not they considered themselves experts in EM. Those who did not identify as experts in this genre highlighted how their diversity assessment was mostly based on listenable characteristics. Some of them also recognized how difficult it was to assess the diversity of the lists: ``\textit{It was very difficult} [...] \textit{at some point everything sounded very similar}'' (P4), commented a participant who described herself as a newcomer to EM. Moreover, another newcomer to EM observed: ``\textit{It was a mentally taxing task because I had to kind of create a small description of every track and compare them in my head}'' (P10). On the contrary, interviewees who affirmed familiarity with EM emphasized how they used prior knowledge to categorize artists and tracks, also discussing how this could have introduced some preference or prejudice into their evaluation of diversity: ``\textit{I can feel like I can make a better decision of what is diverse} [...] \textit{but then there is kind of a bias that comes based on the fact that I like this music a lot}'' (P2). Here, we observe a contrast between the greater difficulty experienced by newcomers and the role of prior knowledge as a facilitator for experts in assessing diversity. On the one hand, newcomers without prior knowledge could rely more on generic listenable features while interacting with unknown music (e.g. tempo): ``\textit{People who don't know much about a particular genre, probably would agree on some things that are a bit more generic}'' (P3).
At the same time, they can associate such features with generic or stereotypical representations of the unfamiliar genre: ``\textit{[...] this kind of prejudice or bias we have about music that we do not know because we get to know them through these representations of what is considered to be}'' (P12). On the other hand, experts are assumed to rely strongly on their knowledge, which can stimulate inner reflections and thoughts: ``[Experts] \textit{add more layers of abstraction, or more complexities to the assessment of diversity}'' (P2). Moreover, they may be more attentive to specific differences: ``\textit{If you are familiar with a specific genre, you are more receptive, and you can find distinct patterns}'' (P12). Interestingly, several interviewees declared that participating in the study made them realize the limits of their knowledge about the diversity of the EM scene, both in terms of musical variation and sociodemographic variation: ``\textit{I realized while making the survey that I had a very strict definition about electronic music myself}'' (P12). From this perspective, we observe how the mere exposure to several lists, specifically designed to highlight the diversity of the EM scene, made participants aware of their limited knowledge and of the stereotypical representations that they may, even unconsciously, associate with EM. This supports the idea that exposure to a diverse list of music can stimulate self-reflection about prior beliefs, in line with the findings presented by Clarke et al. in \cite{Clarke2015}. Indeed, in the survey in which interviewees participated, we chose to include non-mainstream tracks and artists, especially from groups that are normally underrepresented in the EM scene, pursuing what Helberger et al. \cite{Helberger2018} define as exposure diversity with an adversarial-deliberative perspective.
Consequently, we provided participants with an additional viewpoint from which to reflect on their knowledge: ``\textit{The electronic music artists that I went to listen to and that I liked before the survey were predominantly white male, which I suppose is still what is predominant in the industry to some extent [...] but definitely it is not the only thing}'' (P13). Starting from these differences in evaluating the diversity of music lists, we now discuss the role of diversity with regard to music recommendations and listening practices. \subsection{Music Recommendation and Diversity}\label{sec:qual2} Interviewees highlighted how recommendation diversity enables a wider range of choices while listening to music, countering the repetitiveness and monotony that could eventually lead to boredom: ``[Diversity] \textit{gives me more opportunities, not only to discover but also the possibility to choose. If everything sounds similar I may get tired after a bit}'' (P4). This relationship between diversity, choice difficulty, and overall satisfaction has already been observed in several studies in the RS literature \cite{Knijnenburg2012}. Nonetheless, participants associate different feelings with diversity as opposed to monotony in the listening experience. On the one hand, there can be satisfaction in listening to what is considered familiar or expected, because of the immediate reward that can be gained: ``\textit{I get frustrated when you get caught going in the same loops, but it is at some level also slightly satisfying [...] it is kind of brain-numbing in a way}'' (P2). A positive aspect linked to the availability of a more diverse set of recommendations is closely related to the idea of serendipity, i.e.
to receive something unexpected but valued positively \cite{Chang2018}: ``\textit{I ended up once listening to a music that was tagged like Afghan metal, something it was great, I would not imagine searching this type of music}'' (P12). However, such a positive vision of the unexpected is not shared by everyone: ``\textit{I am not the one who likes to take the risks [...], because some genres of music to me are definitely awful, and I do not want to be exposed to that}'' (P11). The difficulty of facing something different, and the risk taken in leaving one's comfort zone to listen to music considered diverse, were among the limitations of recommendation diversity that interviewees mentioned: ``\textit{Taking that plunge into something new is really hard, and I do not know if recommender systems could bridge that barrier of making it more attractive}'' (P13). Most of the participants also highlighted that in specific moments diversity can be beneficial (e.g. when one wants to discover), while in others it may be counterproductive (e.g. when one wants to focus): ``\textit{If I click on a playlist very homogenous then I would like to continue with that flow. But if it is something more heterogeneous, I can like it the same [...] in the end} [diversity] \textit{could be positive or negative at the same time}'' (P14). From another perspective, they also affirmed that the most effective way of exposing people to diversity could be to force them to take the risk: ``\textit{When you cannot change the playlist or whatever, you know, you are forced to that, and that is the only way that you can listen to something radically new}'' (P11). Lastly, participants were asked whether they believe that recommendation diversity could be a tool for modifying prior beliefs with regard to a specific musical genre or culture.
They were also asked to provide examples of occasions when they experienced such a change of opinion due to the interaction with Music RS. As a result, some of the participants, presenting their experiences, pointed out that, even when they initially disliked a genre or held prejudices against it, algorithmic recommendations helped them discover new facets of previously disliked music genres, which they eventually ended up enjoying and listening to more often: ``\textit{From the idea of classical music that I had [...] quite boring and everything very similar, thanks to some recommendations I have been able to discover different styles and to be in the mood to try to listen to it}'' (P1); ``\textit{I never liked EDM but [...] algorithms presented to me different tracks, and I found myself listening to it, and noticing the differences within this genre. In the end, I started listening to it more often}'' (P8). \section{Introduction}\label{sec:introduction} \input{1_intro} \section{Background}\label{sec:background} \input{2_background} \section{Methodology}\label{sec:method} \input{3_methodology} \section{Insights from Interviews}\label{sec:insights} \input{4_insights} \section{Conclusion and Future Work}\label{sec:conclusions} \input{5_conclusions} \begin{acks} This work is partially supported by the European Commission under the TROMPA project (H2020 - grant agreement No.\ 770376). It is also part of the project Musical AI - PID2019-111403GB-I00/AEI/ 10.13039/501100011033 funded by the Spanish Ministerio de Ciencia, Innovación y Universidades (MCIU) and the Agencia Estatal de Investigación (AEI). This work is also partially supported by the HUMAINT programme (Human Behaviour and Machine Intelligence), Joint Research Centre, European Commission. The project leading to these results received funding from ``la Caixa'' Foundation (ID 100010434), under the agreement LCF/PR/PR16/51110009. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{secIntro} \IEEEPARstart{S}{parse} linear arrays sample a spatial aperture with fewer sensors than required by a standard half-wavelength sampled array. Many sparse array designs prune or thin a uniform linear array (ULA), so the sparse array sensor locations fall on an underlying half-wavelength lattice \cite{JohnsonBook}. Examples of sparse arrays in this class include minimum redundancy arrays (MRA \cite{MRA}), coprime arrays (CSA \cite{PPCSA}), and nested arrays \cite{PPNested}. These arrays have many array processing applications including source detection \cite{CSADetectionICASSP}, Direction-of-Arrival (DOA) estimation \cite{PPCSAMUSIC}\cite{Aminmultiple} and spatial power spectral density (PSD) estimation \cite{kaushallyaColinear}\cite{YangTSP}. Assuming a large number of snapshots, sparse array processing techniques can localize more sources than sensors by constructing augmented covariance matrices (ACMs) using the second or higher order statistics of the propagating electromagnetic or acoustic field \cite{PPNested}\cite{PillaiORG}. This paper considers the problem of enumerating and estimating the DOAs of more sources than sensors using sparse arrays for temporally broadband signals. When the incoming sources are broadband in temporal frequency, it is possible to combine the spectral information from multiple frequency bands to improve the precision of the spatial correlation estimates. Properly combining data across frequency bands reduces the large number of snapshots required in sparse array processing. This approach will be especially useful in acoustical scenarios, which are often limited in available snapshots due to the relatively slow propagation speed for sound, large array apertures and non-stationary sound fields \cite{BragCox}\cite{Cox}. 
Assuming the signal observation time is much longer than the signal correlation times, a commonly used processing approach is to decompose the broadband data into disjoint and uncorrelated narrow frequency bands using the discrete Fourier transform (DFT) or filter banks \cite{VanTrees}. The simplest follow-up step is to estimate the number of sources or their DOAs separately for each frequency band and then average the results across all bands as the final estimate. This method is referred to as the \textit{incoherent} signal-subspace (ISS) method in the sense that it treats the snapshots from each band as uncorrelated data \cite{ISSM}\cite{MorfISSM}. For source enumeration, the ISS method usually computes information criteria, such as the Akaike information criterion (AIC \cite{AIC}) or Rissanen's minimum description length criterion (MDL \cite{MDL}), for each band and then averages them across all bands to obtain a final estimate \cite{WaxKailathSN}\cite{RajAIC}. For DOA estimation, subspace spectral estimation methods such as MUSIC \cite{MUSIC} are applied to each band, and the resulting pseudo-spectra are averaged across all bands to estimate the source DOAs \cite{ISSM}\cite{HanWidebandSPL}. While the ISS method works well for broadband signals in high-SNR scenarios, its performance can suffer severely for low SNRs and limited snapshots \cite{CSSM}, conditions that frequently occur in underwater acoustical environments. In contrast to the ISS method, the \textit{coherent} signal-subspace (CSS) method exploits the correlations between signal subspaces at different frequencies and combines the narrowband snapshots to construct a single covariance matrix at a focused frequency \cite{CSSM}\cite{hung1988focussing}. The focused covariance matrix can be estimated with a higher statistical precision reflecting the full time-bandwidth product of the broadband sources \cite{krolik1989multiple}.
Narrowband techniques can therefore be applied on the focused covariance matrix with lower thresholds on SNR and snapshots for broadband source enumeration and DOA estimation. The major challenge in the CSS methods is to design focusing algorithms to align the snapshots across frequency bands to coherently estimate a single covariance matrix. Popular broadband focusing algorithms include the rotational signal subspace focusing matrix (RSS, \cite{hung1988focussing}), steered covariance matrices (STCM, \cite{krolik1989multiple}), DFT projection \cite{DFTprojection}, weighted average of signal subspaces (WAVES \cite{WAVES}), beamforming invariance \cite{BICSSM} and auto-focusing \cite{autofocusing}. Many of these focusing algorithms require preliminary estimates of the number of sources and their DOAs, which increases the computational cost and adds bias to the final estimates. Moreover, these algorithms were primarily developed in the context of ULAs and do not apply directly to sparse array data. This paper extends two broadband focusing algorithms originally proposed for ULAs to sparse arrays: spatial periodogram averaging \cite{Hinich} and spatial resampling \cite{krolik1990focused}. Neither of these extensions requires preliminary DOA estimates for broadband focusing, and both can be applied to any sparse array geometry with a contiguous coarray region, including MRAs, CSAs and nested arrays. Constructing the ACM using the correlations estimated from the spatial periodogram and spatially resampled correlations offers processing gains for both source enumeration and localization over the ISS approach in \cite{HanWidebandSPL}, especially in low SNR and few snapshots scenarios. The rest of this paper is organized as follows. Section \ref{Sec2ULA} discusses the broadband signal model and briefly reviews the ISS method for broadband sparse array processing.
Section \ref{Sec3SparseACM} proposes the periodogram averaging (AP) and spatial correlation resampling (SCR) based algorithms for broadband focusing. The focused ACMs from these algorithms are then the inputs for the new MDL-gap source enumeration algorithm and the standard narrowband MUSIC DOA estimator. The performances of the proposed algorithms are compared with extensive numerical simulations in Section \ref{Sec5Simulations}. Section \ref{conclusion} concludes this paper. \section{Incoherent sparse array processing} \label{Sec2ULA} This section first describes the array signal model for broadband sources impinging on a sparse linear array and then reviews the incoherent method for broadband sparse array processing. \subsection{Wideband signal model} Assume a sparse linear array with $N$ sensors and $D$ broadband planewave signals impinging on the array from the far field with different DOAs within the visible region $u_1, u_2,...,u_D \in [-1, 1]$. Here we use the directional cosine $u = \cos(\theta)$ to indicate the source DOA, where $\theta \in [0^o, 180^o]$ is the angle-of-arrival with respect to the array endfire. The signal received by the $n$th sensor at time $t$ can be modeled as \begin{equation} x_n(t) = \sum_{i = 1}^D s_i(t-\tau_n(\theta_i)) + \text{n}_n(t),~n=1,...,N \end{equation} where $\tau_n(\theta_i)$ is the propagation time delay for the $i$th signal arriving at the $n$th sensor and $\text{n}_n(t)$ is the measurement noise at that sensor. We assume both the signals and noise measured by the sensors are samples of wide-sense stationary and ergodic complex Gaussian processes. The time series at each sensor are divided into $L$ segments. Applying the discrete Fourier transform (DFT) to each segment forms multiple non-overlapping narrow frequency bands, from which we extract the frequency domain phasors at the frequencies of interest $f_1,...,f_M \in [f_{min}, f_{max}]$ \cite{VanTrees}. 
The segment duration is assumed much longer than the signal correlation time, such that the different DFT bins are statistically uncorrelated. The vector of DFT coefficients (or complex phasors) for all $N$ sensors and the $l$th snapshot at frequency $f_m$ is \begin{eqnarray} \label{snapshots} \textbf{x}_l(f_m) = \textbf{A}(f_m)\textbf{s}_l(f_m) + \textbf{n}_l(f_m),~m &=& 1,...,M \nonumber \\ l &=& 1,...,L, \end{eqnarray}where $\textbf{x}_l(f_m) $ is the $N\times 1$ DFT coefficients vector, $\textbf{A}(f_m)$ is the $N\times D$ array manifold matrix at temporal frequency $f_m$ and $\textbf{s}_l(f_m)$ is the $D\times 1$ source amplitudes vector. The array manifold corresponding to the $n$th element and the $i$th source at frequency $f_m$ is \begin{equation} \label{arraymanifold} [\textbf{A}(f_m)]_{n,i} = e^{j(2\pi f_m d_n/c) u_i}, \end{equation}where $d_n$ is the location of the $n$th element with respect to the array phase center and $c$ is the field propagation speed. The source signal amplitudes are assumed uncorrelated zero-mean and circular complex Gaussians $s_i(f_m) \sim CN(0,\sigma^2_{i,m}), i = 1,...,D$ and uncorrelated from the noise. The additive noise is assumed zero-mean, white, and circular complex Gaussian $\textbf{n} \sim CN(\textbf{0},\sigma_n^2 \textbf{I}_N)$. \subsection{Incoherent signal subspace method for sparse arrays} \label{secISSMULA} For the broadband signal model in \eqref{snapshots}, the ISS method applies narrowband subspace processing to each frequency band and combines the estimation results across all bands for the final estimate \cite{ISSM}\cite{MorfISSM}. The source enumeration and DOA estimation algorithms are often based on the eigenvalues and eigenvectors of the sample covariance matrices (SCM) computed from each of the complex phasors data in \eqref{snapshots} for a ULA. \cite{HanWidebandSPL} extended the ISS method to sparse linear nested arrays. 
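The frequency-domain snapshot model in \eqref{snapshots}-\eqref{arraymanifold} can be sketched numerically. The following Python/NumPy fragment is an illustrative aid only, not code from this paper; the sound speed value, the fixed random seed, and all function names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def manifold(freq, positions, u, c=1500.0):
    """Array manifold A(f): element (n, i) = exp(j * 2*pi*f*d_n/c * u_i)."""
    return np.exp(1j * 2 * np.pi * freq * np.outer(positions, u) / c)

def snapshots(freq, positions, u, powers, noise_var, L, c=1500.0):
    """Draw L frequency-domain snapshot vectors x_l(f) = A s_l + n_l."""
    N, D = len(positions), len(u)
    A = manifold(freq, positions, u, c)
    # zero-mean circular complex Gaussian source amplitudes and sensor noise
    s = (rng.standard_normal((D, L)) + 1j * rng.standard_normal((D, L))) \
        * np.sqrt(np.asarray(powers, dtype=float)[:, None] / 2)
    n = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) \
        * np.sqrt(noise_var / 2)
    return A @ s + n
```

Consistent with the source and noise assumptions above, for a large number of snapshots the sample covariance of the output approaches $\textbf{A}\,\mathrm{diag}(\sigma^2_{i,m})\,\textbf{A}^H + \sigma_n^2\textbf{I}_N$.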
We review here the ISS data processing procedures in the context of finite snapshots, as shown in Fig. \ref{Periodogramblock}(a). For any particular frequency band $f_m$, the narrowband SCM averaged over $L$ snapshots follows \begin{equation} \label{SCM} \textbf{R}_{xx,m} = \frac{1}{L}\sum_{l=1}^L \textbf{x}_l(f_m) \textbf{x}_l^H(f_m), \end{equation}where $(\cdot)^H$ denotes Hermitian transpose. The SCM is then reconstructed to obtain the $(2P-1)\times 1$ correlation vector corresponding to the contiguous region of the difference coarray \begin{equation} \label{SScorr} \textbf{r}_{m}(k) = \frac{1}{\boldsymbol \eta (k)} \sum_{(n_1,n_2)~\in~\zeta(k)} \left[ \textbf{R}_{xx,m} \right]_{n_1,n_2}, \end{equation} where $\left[ \textbf{R}\right]_{n_1,n_2}$ selects the $(n_1,n_2)$th element of matrix $\textbf{R}$. The set $\zeta(k)$ collects every sensor pair ($n_1,n_2$) separated by the difference coarray index $k = n_1-n_2 \in [1-P,P-1]$ and $\boldsymbol \eta(k) = |\zeta(k)|$ is the co-array weight equal to the cardinality of the set $\zeta(k)$. Note for different sparse array geometries, the co-array span $P$ will be larger than the number of sensors $N$ by different amounts. To exploit fully the degrees-of-freedom (DOFs) offered by the co-array, apply spatial smoothing (SS) to construct a full-rank and positive semi-definite ACM by \cite{PPNested}, \begin{equation} \label{SS-ACM} \textbf{R}_{ss,m} = \frac{1}{P}\sum_{i=1}^P \textbf{v}_m^i (\textbf{v}_m^i)^H, \end{equation}where $\textbf{v}_m^i$ is a $P\times 1$ vector containing the ($P-i+1$)th through ($2P-i$)th elements of $\textbf{r}_{m}(k)$. The spatially smoothed ACM for each frequency band goes through eigenvalue decomposition. The eigenvalues are used to compute the information criteria for each frequency band, which are then averaged across all bands for source enumeration.
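The coarray correlation averaging in \eqref{SScorr} and the spatial smoothing step in \eqref{SS-ACM} can be sketched as follows. This Python/NumPy fragment is a minimal sketch under our own conventions (integer sensor positions on the underlying half-wavelength grid, zero-based lag indexing); it is not the authors' implementation:

```python
import numpy as np

def coarray_correlations(R, grid_idx, P):
    """Average SCM entries over all sensor pairs at each lag k = g_n1 - g_n2."""
    r = np.zeros(2 * P - 1, dtype=complex)   # entry k + (P-1) holds lag k
    eta = np.zeros(2 * P - 1)                # coarray weights eta(k)
    N = len(grid_idx)
    for n1 in range(N):
        for n2 in range(N):
            k = grid_idx[n1] - grid_idx[n2]
            if abs(k) <= P - 1:
                r[k + P - 1] += R[n1, n2]
                eta[k + P - 1] += 1
    return r / eta, eta

def ss_acm(r, P):
    """Spatially smoothed ACM from the (2P-1)-point correlation vector."""
    Rss = np.zeros((P, P), dtype=complex)
    for i in range(P):
        v = r[P - 1 - i : 2 * P - 1 - i]     # i-th length-P subvector of lags
        Rss += np.outer(v, v.conj())
    return Rss / P
```

For a noise-free, unit-power single source the recovered correlations reduce to $e^{j\pi k u_i}$ on the contiguous lag region, and the resulting SS-ACM is rank one.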
The ISS method takes the source enumeration estimate and computes narrowband spatial pseudo-spectra for each frequency band, which are then averaged to obtain a broadband pseudo-spectrum used to estimate the DOAs. Eq. \eqref{SS-ACM} indicates that SS exploits the fourth-order statistics of the propagating field by averaging the covariance matrices computed from the overlapping subarrays of the co-array correlations. For infinite snapshots, the SS eigenvalues are proportional to the squares of the ensemble eigenvalues for a $P$-element ULA \cite{PPNested}\cite{PPLiuSPL}. Thus, information criteria for source enumeration developed for ULA SCM eigenvalues, which are second moments, are more appropriately applied to the square root of the SS-ACM eigenvalues, and not the eigenvalues themselves as in \cite{HanWidebandSPL}. For both fully populated and sparse linear arrays, the ISS method works relatively well for broadband sources in high SNR and snapshot rich scenarios \cite{HanWidebandSPL}\cite{CSSM}. However, the source enumeration and localization performance suffers in low SNR scenarios, for sources with gaps in spectral energy such as harmonic sources, and in snapshot limited scenarios. To address these issues, the following section proposes two coherent broadband focusing algorithms for sparse array processing. \section{Proposed Coherent Wideband Sparse Array Focusing Algorithms} \label{Sec3SparseACM} This section proposes two broadband focusing algorithms for coherent correlation estimation: spatial periodogram averaging (AP) and spatial correlation resampling (SCR). The spatial correlation estimates from either of these two algorithms then populate the diagonals of Hermitian Toeplitz ACMs for subspace processing. The proposed approaches are coherent in the sense that they combine the observed data across all frequency bands to estimate a single broadband ACM from which the number of sources and their DOAs are estimated.
In this sense, the frequency averaging occurs with the narrowband spatial correlation functions, which still include phase terms, in contrast with the incoherent approach, which averages only the real-valued information criteria and pseudo-spectra. Both algorithms can be applied to any sparse array geometry based on a pruned ULA as long as a contiguous coarray region exists. \subsection{Periodogram averaging} \label{subsecAP} \begin{figure*}[!t] \centering \includegraphics[scale=0.18]{BroadbandProcessingDiagrams.eps} \caption{Block diagrams for the incoherent signal subspace method (ISS, panel a) and the proposed periodogram averaging (AP, panel b) and spatial correlation resampling (SCR, panel c) algorithms for broadband sparse array source enumeration and DOA estimation. The $N\times L$ matrix $\textbf{X} = [\textbf{x}_{1}(f_m),...,\textbf{x}_{L}(f_m)]$ collects the DFT coefficients for all $N$ sensors and $L$ snapshots at frequency $f_m$, $m = 1,...,M$.} \label{Periodogramblock} \end{figure*} The spatial periodogram averaging for sparse arrays extends Hinich's broadband beamformer for undersampled ULAs to nonuniform sparse arrays. This approach exploits the frequency diversity obtained through the scanned responses across the signal bandwidth while processing a single ULA \cite{Hinich}. As Fig.~\ref{Periodogramblock}(b) shows, the array frequency snapshot data for each band $f_m,m=1,...,M$ are conventionally beamformed independently via FFT and averaged over all snapshots to estimate the narrowband spatial periodogram $\textbf{t}_m(u)$. The estimated spatial periodogram $\textbf{t}_m(u)$ is the Fourier transform of the estimated spatial auto-correlation function $\textbf{r}_m(k)$ in \eqref{SScorr}, which is routinely used for ACM construction \cite{PPNested}\cite{PPCSAMUSIC}, weighted by the coarray weights $\boldsymbol \eta (k)$.
Specifically, the narrowband periodogram follows \begin{equation} \label{narrowbandperiodogram} \textbf{t}_m(u) = \frac{1}{L} \sum_{l = 1}^L \left|\textbf{w}_m^H(u) \textbf{x}_l(f_m)\right|^2 = \textit{F}_m(\textbf{r}_m(k)\boldsymbol\eta(k)), \end{equation}where $\textbf{w}_m(u)$ is the conventional beamforming weights vector for frequency $f_m$ at steering direction $u$ (equal to the column vector of the steering matrix $\textbf{A}$ in \eqref{snapshots} for direction $u$) and $\textit{F}_m$ is the spatial Fourier transform operator accounting for the different temporal frequencies $f_m$. In broadband processing, only the true source peaks remain fixed in directional cosine $u$ across different frequency bands, while the grating lobes and sidelobes change their locations in $u$ as the temporal frequency varies. Averaging the periodograms across frequencies constructively reinforces the energy at the true source locations while other sidelobes are relatively attenuated \begin{equation} \label{widebandPeriodogram} \textbf{t}(u) = \frac{1}{M}\sum_{m=1}^M \textbf{t}_m(u). \end{equation} Note that the broadband periodogram in \eqref{widebandPeriodogram} has the same functional form as the steered covariance matrix estimate (STCM), which has attractive statistical features expressed in terms of a Wishart characteristic function \cite{krolik1989multiple}. The inverse spatial Fourier transform of the spatial periodogram $\textbf{t}(u)$ estimates the spatial correlation function after normalizing for the coarray weights \begin{equation} \label{correstimatesAP} \tilde{\textbf{r}}(k) = \frac{\textit{F}^{-1}(\textbf{t}(u))}{\boldsymbol\eta(k)},~k = -(P-1),...,(P-1). \end{equation} The estimated broadband correlation function $\tilde{\textbf{r}}(k)$ then populates the diagonals of a Hermitian Toeplitz ACM, as given in Section \ref{subsecACMconstruction}.
The covariance focusing through periodogram averaging simplifies the coherent broadband processing algorithm while maintaining its advantages in low SNR and limited snapshot scenarios. Processing broadband data in the beamspace avoids the complexity of constructing focusing matrices that are commonly required in the coherent algorithms. Substituting \eqref{narrowbandperiodogram}-\eqref{widebandPeriodogram} into \eqref{correstimatesAP}, the correlation estimates can be written as \begin{equation} \tilde{\textbf{r}}(k) = \frac{\textit{F}_c^{-1}\left( \frac{1}{M}\sum_{m=1}^M \textit{F}_m \left(\textbf{r}_m(k)\boldsymbol\eta(k)\right) \right)}{\boldsymbol\eta(k)}, \end{equation} where $\textit{F}_c^{-1}$ is the inverse spatial Fourier transform operator corresponding to the central frequency within the bandwidth. This notation implies that estimating the broadband spatial correlation function through the inverse Fourier transform of the averaged spatial periodograms does not account for the temporal frequency mismatch between frequency bands. To account for this mismatch, it is generally good practice to perform the inverse Fourier transform at the central frequency of the sources' bandwidth. This is similar to choosing the focusing frequency as the central frequency to reduce DOA estimation bias, as suggested in \cite{KrolikBias}. \subsection{Spatial correlation resampling} Another approach for coherent broadband focusing is through spatial resampling \cite{krolik1990focused}. Spatial resampling exploits the structural characteristic that the array manifold in \eqref{arraymanifold} depends on the source temporal frequencies and the element positions only through their product. By adjusting the spatial sampling intervals of the frequency-domain snapshots as a function of the temporal frequency for each of the frequency bands, it is possible to obtain (nearly) the same array manifold vector at different frequencies.
Spatial resampling for broadband processing approaches the performance of the narrowband scenario with a comparable time-bandwidth product \cite{krolik1990focused}. At first glance, the spatial resampling algorithm previously applied to ULAs cannot be directly applied to sparse array data due to the gaps in the spatial sampling. However, the important insight is that the sparse arrays still provide contiguous and uniformly sampled difference co-array functions. This insight allows us to extend the application of the spatial resampling technique to sparse arrays. Rather than directly resampling the array data, we resample the estimated second-order statistics as a function of spatial lag. To make this insight precise, the spatial correlation between the signals received by sensors located at $d_{n_1}$ and $d_{n_2}$ for a single source with amplitude $s$ from direction $u_i$ can be expressed as \begin{equation} E\{x_{n_1}(f_m)x_{n_2}^*(f_m)\} = E\{s s^*\} e^{-j(2\pi f_m/c)(d_{n_1}-d_{n_2})u_i} \end{equation}for frequency band $f_m$. This implies that the array manifold corresponding to the coarray depends on the product of the source temporal frequency $f_m$ and the inter-element spacing $d_{n_1}-d_{n_2}$. Since the contiguous region of the coarray is uniform in spatial lag $k$, applying spatial resampling to the spatial correlations corresponding to this region for all frequency bands will realign the coarray manifolds. The resampling changes the spatial correlation sampling interval for the $m$th band from $d$ to $d_m = d f_0 /f_m$, where $d$ is the physical inter-sensor spacing and $f_0$ is the focus frequency. Unlike the periodogram averaging algorithm discussed in Section \ref{subsecAP}, broadband focusing through spatial correlation resampling explicitly accounts for the coarray manifold mismatches between frequency bands due to different temporal frequencies.
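The core resampling operation, changing the correlation sample spacing from $d$ to $d f_0/f_m$, can be sketched with SciPy's polyphase resampler, which implements a linear-phase FIR upsample-filter-decimate chain with group-delay compensation. This is an illustrative sketch under our own assumptions (the rational approximation tolerance `max_den` and the function name are ours), not the authors' implementation:

```python
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

def resample_correlation(z, f_m, f0, max_den=64):
    """Resample the one-sided correlation z (lags 0..P-1, spacing d) from
    band f_m onto the focus-frequency grid with spacing d*f0/f_m."""
    frac = Fraction(f_m / f0).limit_denominator(max_den)
    K, Lm = frac.numerator, frac.denominator     # K/Lm approximates f_m/f0
    # polyphase upsample-by-K, low-pass filter, decimate-by-Lm
    z_res = resample_poly(z, K, Lm)
    return z_res[: len(z)]                       # keep the coarray support
```

Applied to a band-$f_m$ correlation sequence $e^{j 2\pi f_m k d u/c}$, the output approximates the focused sequence $e^{j 2\pi f_0 k d u/c}$ away from the filter edge-effect region.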
Fig.~\ref{Periodogramblock}(c) demonstrates the data processing procedures for the SCR algorithm for broadband focusing. The data at each frequency band $f_m$ go through the following procedures: 1) Compute the spatial auto-correlation function by averaging all $L$ snapshots, take the portion corresponding to the contiguous region of the coarray and normalize it by the coarray weights $\boldsymbol \eta(k)$ for unbiased narrowband correlation estimates $\textbf{r}_m(k)$ in \eqref{SScorr}. 2) Since the correlation estimate is conjugate symmetric about the coarray center, apply spatial resampling only to the right half of the correlation estimate, $\textbf{z}_m(k) = \textbf{r}_m(k),~k = 0,...,P-1$, to save computation. 3) Choose integers $K_m$ and $L_m$ such that $K_m/L_m = f_m/f_0$. 4) Upsample $\textbf{z}_m(k)$ by inserting $(K_m-1)$ zeros in between each sample of the correlation estimate such that \begin{equation} \textbf{z}'_m(k) = \left\{ \begin{array}{ll} \textbf{z}_m\left(k/K_m\right), & \text{for}~k = 0, K_m, ..., (P-1)K_m\\ 0, & \text{otherwise.} \end{array} \right. \end{equation} 5) Filter the upsampled correlation function $\textbf{z}'_m(k)$ by a linear phase finite impulse response low pass filter with cut-off frequency of $\min(\pi/K_m, \pi/L_m)$ to obtain the interpolated correlations $\textbf{z}'_{m,\text{intp}}(k)$. Shift or re-index $\textbf{z}'_{m,\text{intp}}(k)$ to obtain the correct set of correlations by accounting for the group delay due to linear phase filtering. 6) Decimate $\textbf{z}'_{m,\text{intp}}(k)$ by a factor of $L_m$ such that $\tilde{\textbf{z}}_m(k) = \textbf{z}'_{m,\text{intp}}(L_m k)$ to obtain the focused spatial correlation function.
7) Reconstruct the left half of the resampled correlation estimates using the conjugate symmetry property such that \begin{equation} \tilde{\textbf{r}}_m(k) = \left\{ \begin{array}{ll} \tilde{\textbf{z}}^*_m(-k), & \text{for}~k = -(P-1),...,-1\\ \tilde{\textbf{z}}_m(k), & \text{for}~k = 0,...,P-1 \end{array} \right. \end{equation} The procedures above are repeated for all snapshots at all frequency bands before averaging across all $M$ frequencies to obtain the coherently combined spatial correlation estimates \begin{equation} \label{SRcorr} \tilde{\textbf{r}}(k) = \frac{1}{M}\sum_{m=1}^M \tilde{\textbf{r}}_m(k). \end{equation} The estimated correlation function $\tilde{\textbf{r}}(k)$ then populates the diagonals of a Hermitian Toeplitz ACM as given in Section \ref{subsecACMconstruction}. The spatial resampling procedures are essentially the same as time domain resampling, as described in Fig. 4.28 of \cite{OppenheimDSPbook}, adapted to spatial correlation functions. It is worth noting that, in theory, the focus frequency can be any value equal to or below the array design frequency to avoid spatial aliasing. However, for practical implementation, we choose to focus at the minimum frequency in band such that $f_0 = f_1$. Resampling in this case corresponds to an interpolation or spatial sampling rate increase by a factor of $K_m/L_m$ at the $m$th frequency band. This ensures that no extrapolation is needed in Step 5) and guarantees enough correlation samples to decimate in Step 6), maintaining the same coarray support for the spatial correlation estimates as before resampling. \subsection{Augmented covariance matrix construction} \label{subsecACMconstruction} An alternative approach to SS for ACM construction is through lag redundancy averaging (LRA) \cite{PillaiORG}.
This technique exploits the coarray redundancies by averaging all repeated estimates of the spatial correlation function at any given lag from different sensor pairs and then replacing the individual estimates at that lag by their average \cite{noteLRA}\cite{LRAperformance}. As a result, the constructed ACM is populated with correlation estimates with reduced variances. The LRA-ACM is populated with the spatial correlation estimates from either AP \eqref{correstimatesAP} or SCR \eqref{SRcorr} following \begin{equation} \label{LRA-ACM} \textbf{R}_{\text{LRA}} = \left[ \begin{array}{cccc} \tilde{\textbf{r}}(0) & \tilde{\textbf{r}}(-1) & \cdots & \tilde{\textbf{r}}(1-P) \\ \tilde{\textbf{r}}(1) & \tilde{\textbf{r}}(0) & \cdots & \tilde{\textbf{r}}(2-P) \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{\textbf{r}}(P-1) & \tilde{\textbf{r}}(P-2) & \cdots & \tilde{\textbf{r}}(0) \end{array} \right]. \end{equation} The LRA approach constructs a Hermitian Toeplitz ACM from the correlation estimates, although the ACM is not guaranteed to be positive semi-definite. Compared against the SS-ACM, populating the LRA-ACM is more computationally efficient. For the same sparse array data, note that the LRA-ACM exploits the second-order statistics, whereas the SS-ACM exploits the fourth-order statistics of the propagating field. For finite snapshots, the SS-ACM in \eqref{SS-ACM} can be shown to be explicitly related to the LRA-ACM by \cite{PPLiuSPL} \begin{equation} \textbf{R}_{\text{SS}} = \textbf{R}^2_{\text{LRA}}/P. \end{equation}This implies that $\textbf{R}_{\text{SS}}$ and $\textbf{R}_{\text{LRA}}$ share the same eigen space and the eigenvalues of $\textbf{R}_{\text{SS}}$ are proportional to the squares of the eigenvalues of $\textbf{R}_{\text{LRA}}$. For infinite snapshots, the LRA-ACM approaches the ensemble covariance matrix of a fully populated ULA with probability 1 \cite{PillaiACM}.
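The LRA-ACM construction in \eqref{LRA-ACM} and its relation to the SS-ACM amount to a few lines. The following Python/NumPy sketch uses our own lag-indexing convention and is only illustrative:

```python
import numpy as np
from scipy.linalg import toeplitz

def lra_acm(r, P):
    """Hermitian Toeplitz LRA-ACM from the (2P-1)-point correlation vector r,
    where entry k + (P-1) of r holds the estimate at lag k."""
    col = r[P - 1 : 2 * P - 1]     # first column: lags 0, 1, ..., P-1
    row = r[P - 1 :: -1]           # first row: lags 0, -1, ..., -(P-1)
    return toeplitz(col, row)
```

For conjugate-symmetric correlations the result is Hermitian Toeplitz, and squaring it and dividing by $P$ reproduces the spatially smoothed ACM, mirroring $\textbf{R}_{\text{SS}} = \textbf{R}^2_{\text{LRA}}/P$.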
This convergence implies that it is more reasonable to use the eigenvalue magnitudes and the eigenvectors of the LRA-ACM rather than the SS-ACM for source enumeration and DOA estimation. \subsection{Source Enumeration and DOA estimation} The ACM constructed in \eqref{LRA-ACM} goes through eigenvalue decomposition, with the eigenvalues sorted in descending order by their magnitudes \cite{PPLiuSPL} \begin{equation} |\lambda_1| \geq |\lambda_2| \geq ... \geq |\lambda_k| \geq ... \geq |\lambda_P|, \end{equation} before computing the information criteria for source enumeration. Rissanen proposed estimating the number of sources as the model order that yields the minimum code length over a range of possible number of sources \cite{MDL}\cite{WaxKailathSN}. The MDL criterion is the sum of the negative log-likelihood evaluated at the maximum likelihood estimates of the model parameters and a bias correction term penalizing over-fitting of the model order \begin{equation} \label{MDL} \text{MDL}(q) = - \log \left( \frac{g_q}{a_q} \right)^{(P-q)L} + \frac{1}{2}q(2P-q)\log L, \end{equation} for the possible number of sources $q = 0, ..., P-1$. The functions $g_q = \prod_{j = q+1}^P |\lambda_j|^{1/(P-q)}$ and $a_q = \frac{1}{P-q} \sum_{j = q+1}^P |\lambda_j|$ are, respectively, the geometric and arithmetic means of the $P-q$ smallest eigenvalue magnitudes. The estimated number of sources is $\hat{q} = \arg \min_q \text{MDL}(q)$. Since the ACM in \eqref{LRA-ACM} does not follow a Wishart distribution, there is no theoretical guarantee that the MDL criterion achieves an accurate estimate of the number of sources, especially in under-determined scenarios.
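The MDL criterion in \eqref{MDL} operates directly on the sorted eigenvalue magnitudes; a minimal Python/NumPy sketch (the function name is ours) follows:

```python
import numpy as np

def mdl(eigmag, L):
    """MDL criterion for q = 0..P-1 from eigenvalue magnitudes sorted in
    descending order; L is the number of snapshots."""
    P = len(eigmag)
    vals = []
    for q in range(P):
        tail = np.asarray(eigmag[q:], dtype=float)
        g = np.exp(np.mean(np.log(tail)))    # geometric mean of P-q smallest
        a = np.mean(tail)                    # arithmetic mean of P-q smallest
        vals.append(-(P - q) * L * np.log(g / a)
                    + 0.5 * q * (2 * P - q) * np.log(L))
    return np.array(vals)
```

The estimated source number is the index of the minimum of the returned array, $\hat{q} = \arg \min_q \text{MDL}(q)$.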
\cite{LiuBuckMDLgap} modified the standard MDL criterion in \eqref{MDL} and extended its application to the LRA-ACM for enumerating more sources than sensors using narrowband sparse arrays. The new information criterion, termed MDL-gap, is defined as the first-order backward difference of the MDL criterion normalized by the number of snapshots such that \begin{small} \begin{eqnarray} \label{MDlgapcriteria} \text{MDL-gap}(q) &=& (\text{MDL}(q) - \text{MDL}(q-1))/L \\ \nonumber &=& -\log \left( \frac{(a_{q-1})^{P-q+1}}{|\lambda_q| (a_{q})^{P-q} } \right) + \frac{P-q+1/2}{L} \log L, \end{eqnarray} \end{small}for the possible number of sources $q = 1,...,P-1$. The detected source number is $\hat{q} = \arg \min_q \text{MDL-gap}(q)$. Since the MDL-gap criterion showed improved performance over MDL in enumerating more sources than sensors in the narrowband scenarios \cite{LiuBuckMDLgap}, we here extend its application to the LRA-ACM in \eqref{LRA-ACM} for broadband sources. Assuming the number of sources is accurately estimated, the DOA estimation is performed by directly applying the standard narrowband spectral MUSIC algorithm \cite{MUSIC} to the coherently constructed ACM. Specifically, the eigenvectors corresponding to the $P-D$ least significant eigenvalues of the ACM are extracted to estimate the noise subspace \begin{equation} \label{coherentnoisesub} \textbf{V}_\text{coh}^{\perp} = [\textbf{v}_{D+1},\textbf{v}_{D+2},...,\textbf{v}_P]. \end{equation} Since the source manifold vectors at the focused frequency \begin{equation} \textbf{a}(u_i) = [1,...,e^{j(2\pi f_0 kd/c)u_i},...,e^{j(2\pi f_0 (P-1)d/c)u_i}]^T \end{equation} for each source $i=1,...,D$ are orthogonal to the noise subspace spanned by $\textbf{V}_\text{coh}^{\perp}$, the MUSIC spectrum computed as \begin{equation} \label{cohMUSIC} P_{\text{coh}}(u) = \frac{1}{\textbf{a}(u)^H\textbf{V}_\text{coh}^{\perp} (\textbf{V}_\text{coh}^{\perp})^H \textbf{a}(u)}, \end{equation} will show $D$ peaks at the source locations.
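The spectral MUSIC step in \eqref{coherentnoisesub}-\eqref{cohMUSIC} can be sketched as follows (Python/NumPy). Half-wavelength coarray spacing at the focus frequency, $2 f_0 d/c = 1$, the grid size, and the magnitude-sorted eigenvalue convention for a possibly indefinite ACM are our assumptions:

```python
import numpy as np

def music_spectrum(R, D, P, n_u=1001):
    """Narrowband MUSIC pseudo-spectrum on a P x P focused ACM with D sources,
    scanning directional cosines u in [-1, 1]."""
    w, V = np.linalg.eigh(R)
    idx = np.argsort(np.abs(w))              # sort by eigenvalue magnitude
    Vn = V[:, idx[: P - D]]                  # P - D weakest -> noise subspace
    u = np.linspace(-1, 1, n_u)
    A = np.exp(1j * np.pi * np.outer(np.arange(P), u))   # manifold on u grid
    denom = np.sum(np.abs(Vn.conj().T @ A) ** 2, axis=0)
    return u, 1.0 / denom
```

The $D$ largest peaks of the returned pseudo-spectrum estimate the source directional cosines.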
The source DOAs are then estimated by searching for the highest $D$ peaks in the coherently estimated MUSIC spectra. \section{Comparative Simulation Results and Performance Analysis} \label{Sec5Simulations} This section compares the performance of the proposed AP and SCR based broadband focusing algorithms for source enumeration and DOA estimation in numerical simulations. These approaches are compared against the ISS processing in scenarios with relatively few snapshots. All simulations in this section model the source amplitudes as uncorrelated, complex Gaussians with equal power occupying a bandwidth of 40 Hz around the central frequency of 100 Hz. The broadband sources are decomposed evenly into 41 narrowband components via FFT within the bandwidth. As a benchmark, we compare all simulations against the narrowband (NB) case with a comparable time-bandwidth product to the broadband sources. This means the narrowband sources have 41 times more snapshots than the broadband sources. This comparison with the narrowband case makes clear the performance cost paid by the focusing operations through which the proposed broadband algorithms combine information across the frequency band. For demonstration purposes, we compare an MRA with 6 sensors at locations $[1,2,5,6,12,14]d$. This array offers a contiguous coarray region spanning $k \in [-13,13]$. The fundamental inter-element spacing of the MRA is $d = \lambda/2$, where $\lambda$ is the spatial wavelength at the central frequency $f =$ 100 Hz. The sensor SNR level is defined as the ratio of the power of each source signal to the noise power at a single sensor. The noise is assumed both temporally and spatially white and complex Gaussian occupying the same bandwidth as the sources, uncorrelated with the sources and also uncorrelated between each pair of sensors. The following simulations consider two scenarios focusing on different perspectives.
The first is an over-determined scenario to demonstrate the proposed algorithms' capability to resolve closely spaced sources. The second is an under-determined scenario to demonstrate the proposed algorithms' capability to enumerate and localize more sources than sensors. \subsection{Resolving two closely-spaced sources} In the two-source scenario, we first evaluate the performance of the 4 approaches for source enumeration using the MDL and MDL-gap criteria. Fig. \ref{SampleRealization2sources} compares the sample realizations of the information criteria as a function of the possible number of sources. All information criteria are normalized by their maximum magnitudes respectively for demonstration purposes. All simulations use 3 snapshots/sensor for the broadband approaches and equivalently, 123 snapshots/sensor for the narrowband sources. There are $D = 2$ sources arriving from directions $u = [0, 0.06]$ for the left column and $u = [0, 0.3]$ for the right column. The true number of sources $D=2$ is indicated by orange vertical dashed lines in all panels. For simplicity, all sources are assumed equal power with sensor level SNR = 0 dB. For the widely separated sources, all information criteria show minima at $D = 2$, which implies that all algorithms are able to estimate the true number of sources. The sample realization results indicate all algorithms struggle to enumerate closely spaced sources but start to enumerate correctly when the sources are further separated. \begin{figure} \centerline{\includegraphics[width=10cm]{InfoagainstK-2sources.eps}} \caption{Comparing the sample realizations of the MDL and MDL-gap criteria for the AP, SCR, ISS, and equivalent narrowband scenarios for 2 uncorrelated sources. The two sources are separated by $\Delta u = 0.06$ in the left column and $\Delta u = 0.3$ in the right column. All simulations assume equal power sources with sensor level SNR = 0 dB using 3 snapshots/sensor for broadband sources and 123 snapshots/sensor for narrowband sources.
The results imply all algorithms struggle to enumerate closely spaced sources but start to enumerate correctly when the sources are further separated. } \label{SampleRealization2sources} \end{figure} \begin{figure} \centerline{\includegraphics[width=9cm]{MRA6_SpectralMUSIC_2sources.eps}} \caption{Comparing the (a) AP, (b) SCR, (c) ISS, and (d) equivalent narrowband MUSIC pseudo-spectra for two uncorrelated sources with DOAs $u = [0,0.06]$ indicated by vertical dashed lines. All simulations assume equal power sources with sensor level SNR = 0 dB and 3 snapshots per sensor for each of the 41 frequency bands. The equivalent narrowband case uses 123 snapshots/sensor. The MUSIC spectra imply the proposed AP and SCR approaches are more capable of resolving closely spaced sources than the ISS approach. } \label{MUSICMRA} \end{figure} Assuming the number of sources is accurately estimated, Fig. \ref{MUSICMRA} compares the MUSIC pseudo-spectra of AP, SCR, ISS and the equivalent NB scenario in panels ($a$-$d$) for two closely spaced uncorrelated sources with DOAs at $u = [0,0.06]$, which are within the Rayleigh resolution limit $\Delta u = 0.13$ calculated based on the MRA co-array aperture. The two sources are assumed equal power with sensor level SNR = 0 dB and 3 snapshots/sensor for each of the 41 narrow bands. The equivalent NB case uses 123 snapshots/sensor. Note that the AP, SCR and NB MUSIC spectra all show two discernible peaks near the true DOAs indicated by vertical orange dashed lines. However, the ISS approach fails to resolve these two sources, showing only one unique peak in between the true DOAs instead. The MUSIC spectra imply the proposed AP and SCR approaches are more capable of resolving closely spaced sources than the ISS approach.
To characterize rigorously the resolvability and DOA estimate errors of the two closely spaced sources, we compare the 4 approaches on their probabilities of resolution \cite{kaveh1986statistical} and average root mean square errors (RMSE) for all estimated DOAs against the source separation $\Delta u$, source SNR and number of snapshots/sensor. The DOA estimation performance is characterized by \begin{equation} \text{RMSE} = \sqrt{\sum_{d=1}^D \sum_{j=1}^{J} (\hat{u}_d(j) - u_d)^2/DJ }, \end{equation} where $\hat{u}_d(j)$ is the estimated DOA using the MUSIC algorithm for the $d$-th source in the $j$-th Monte Carlo trial with $d = 1,...,D$ and $j = 1,...,J$. All simulation results are averaged over $J = 500$ independent Monte Carlo trials. Fig. \ref{ProbResoRMSEAgaisntDeltaU2sources}(a) compares the probability of resolution of these 4 approaches as a function of the spacing $\Delta u$ between two sources for SNR = 0 dB and 5 snapshots/sensor. AP and SCR have similar performance in resolving two closely spaced sources, both somewhat worse than the NB case. However, the AP and SCR approaches outperform the ISS approach in their ability to resolve more closely spaced sources. Fig. \ref{ProbResoRMSEAgaisntDeltaU2sources}(b) compares the RMSE for the DOA estimates. For all approaches, the RMSEs decrease as the separation between the two sources increases. The AP, SCR and NB have similar RMSEs, which are lower than the RMSE using the ISS approach. \begin{figure} \centerline{\includegraphics[width=9cm]{MRA_ProbRMSE_DeltaU_2sources_5snaps_0dB_500MC.eps}} \caption{Comparing (a) the probability of resolution and (b) the RMSE of DOA estimates as a function of the spacing between 2 uncorrelated equal power sources with SNR = 0 dB. One source is fixed at broadside and the other source is located away from broadside by $\Delta u$ between [0.01, 0.1].
The simulations for broadband sources use 5 snapshots/sensor and the equivalent narrowband sources use 205 snapshots per sensor. The results indicate the proposed AP and SCR approaches are capable of resolving more closely spaced sources and achieving higher DOA estimate precision than the ISS approach.} \label{ProbResoRMSEAgaisntDeltaU2sources} \end{figure} \subsection{Enumerating/Localizing more sources than sensors} One major advantage that sparse arrays offer over fully populated arrays is the capability of localizing more sources than sensors \cite{PillaiORG}. This section explores the advantages of the proposed AP and SCR approaches over the ISS approach in enumerating and estimating more broadband sources than sensors. We again use the same 6-element MRA as in the previous section, but with 9 uncorrelated equal power sources: 1 at broadside, 4 uniformly spaced in $\theta = (90^o, 135^o]$ and the other 4 uniformly spaced in $u = (0,0.7]$. We first evaluate the performance of the 4 approaches for source enumeration using the MDL and MDL-gap criteria. Fig.~\ref{SampleRealization9sources} compares the sample realizations of the criteria as a function of the hypothesized number of sources. All information criteria are normalized by their respective maximum magnitudes for demonstration purposes. The simulations in the left column of panels $(a,c)$ use 3 snapshots/sensor for the broadband source and equivalently, 123 snapshots/sensor for the narrowband source. The simulations in the right column of panels $(b,d)$ use 10 snapshots/sensor for the broadband source and equivalently, 410 snapshots/sensor for the narrowband source. For all panels, the true number of sources D = 9 is indicated by vertical orange dashed lines. For simplicity, all sources are assumed equal power with sensor level SNR = 0 dB. Panel (a) shows that when the number of sources D = 9 exceeds the number of sensors N = 6, none of the approaches exhibits a minimal MDL value at $\hat{D}$ = 9.
Panel (c) shows that the AP, SCR and NB approaches exhibit minimal MDL-gap values at $\hat{D}$ = 9. However, the ISS approach is not able to estimate $\hat{D}$ = 9 using either criterion at this modest snapshot level. When the number of snapshots increases, panel (b) shows the MDL still fails to estimate $\hat{D}$ = 9 for all four methods of constructing the ACM. However, panel (d) shows that all approaches using the MDL-gap criterion are able to correctly estimate the source number $\hat{D}$ = 9. These simulations imply that the AP and SCR approaches are capable of enumerating more sources than sensors in relatively few snapshots using MDL-gap. However, at least in this example, the ISS approach requires a relatively large number of snapshots to achieve an accurate enumeration of more sources than sensors using MDL-gap. \begin{figure} \centerline{\includegraphics[width=10cm]{InfoagainstK-9sources.eps}} \caption{Comparing the sample realizations of MDL and MDL-gap criteria for the AP, SCR, ISS, and equivalent narrowband scenarios for 9 uncorrelated sources. All simulations assume equal power sources with sensor level SNR = 0 dB and 3 snapshots per sensor on the left column and 10 snapshots per sensor on the right column. The results imply that MDL struggles to enumerate more sources than sensors regardless of the number of snapshots available. However, using the MDL-gap criterion, the proposed AP and SCR approaches require fewer snapshots than ISS for correct source enumeration.} \label{SampleRealization9sources} \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm]{DetectionProbability.eps} \caption{Comparing the probability of correctly enumerating the number of sources using the MDL-gap criterion for different approaches (a) as a function of the number of snapshots per sensor for fixed sensor level SNR = 0 dB and (b) as a function of sensor level SNR for a fixed 5 snapshots per sensor. There are 9 equal power sources impinging on the 6-element MRA.
The results indicate the AP and SCR approaches require fewer snapshots and lower SNR than the ISS approach for source enumeration. } \label{ProbDetection} \end{center} \end{figure} To quantify rigorously the performance of the proposed AP and SCR approaches in enumerating more sources than sensors, Fig. \ref{ProbDetection}(a) compares the probability of correct enumeration using MDL-gap against snapshots/sensor and Fig. \ref{ProbDetection}(b) against sensor level SNR. The detection probability is calculated as the number of Monte Carlo trials correctly estimating $\hat{D}= 9$ sources, normalized over a total of 500 trials. The sensor level SNR is fixed at 0 dB for all 9 sources in the simulations of panel (a). The simulation results show that the detection probabilities using all approaches increase as the number of snapshots increases. In particular, the AP and SCR approaches require far fewer snapshots than the ISS approach to achieve a high detection probability. AP has a higher detection probability than SCR for fewer than 2 snapshots/sensor, but does not converge to 1 as quickly as SCR. In contrast, ISS requires 6 snapshots/sensor to start detecting all sources and 10 snapshots/sensor to achieve a detection probability above $90\%$. Panel (b) evaluates the detection probability as a function of sensor level SNR. The number of snapshots/sensor is fixed at 5 for the broadband source and 205 for the equivalent NB source. The simulation results show that the NB approach requires the lowest SNR level to start correctly detecting all sources. The AP and SCR approaches require an SNR of -9 dB to start detecting all sources. ISS is not able to enumerate all sources at any SNR considered with only 5 snapshots per sensor available.
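The enumeration step behind these detection probabilities can be illustrated with the classical MDL criterion of Wax and Kailath applied to sample-covariance eigenvalues, with the detection probability estimated by Monte Carlo as the fraction of trials returning the true source count. This sketch uses a plain ULA with more sensors than sources and assumed parameters; it does not implement the co-array augmentation or the MDL-gap criterion of the proposed algorithms.

```python
import numpy as np

def mdl(eigvals, n_snapshots):
    """Wax-Kailath MDL criterion; returns MDL(k) for k = 0..M-1 (argmin = source estimate)."""
    lam = np.sort(np.asarray(eigvals))[::-1]       # eigenvalues, descending
    M, K = len(lam), n_snapshots
    out = np.empty(M)
    for k in range(M):
        tail = lam[k:]                             # smallest M-k eigenvalues
        log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))  # log(geo/arith mean) <= 0
        out[k] = -K * (M - k) * log_ratio + 0.5 * k * (2 * M - k) * np.log(K)
    return out

rng = np.random.default_rng(1)
M, D, K, trials = 8, 2, 100, 50                    # illustrative, not the paper's setup
u_true = np.array([-0.3, 0.25])
A = np.exp(1j * np.pi * np.outer(np.arange(M), u_true))
amp = 10 ** (10 / 20)                              # 10 dB per-sensor SNR
hits = 0
for _ in range(trials):
    S = amp * (rng.standard_normal((D, K)) + 1j * rng.standard_normal((D, K))) / np.sqrt(2)
    W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    X = A @ S + W
    R = X @ X.conj().T / K
    hits += int(np.argmin(mdl(np.linalg.eigvalsh(R), K)) == D)
prob_detect = hits / trials                        # fraction of correct enumerations
```

In this well-conditioned setting the probability approaches 1; the low-SNR, snapshot-starved regimes studied above are precisely where the choice of covariance construction (AP, SCR, ISS) changes the outcome.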
\begin{figure} \centerline{\includegraphics[width=9.8cm]{MUSICMRA9sources.eps}} \caption{Comparing the (a) AP, (b) SCR, (c) ISS, and (d) equivalent narrowband MUSIC pseudo-spectra for 9 broadband sources with 1 snapshot/sensor (left column) and 10 snapshots/sensor (right column) for the broadband sources. The AP and SCR MUSIC spectra show sharper peaks than the ISS MUSIC spectra for the same number of snapshots, indicating more precise DOA estimation. } \label{MUSICMRA9sources} \end{figure} \begin{figure} \centerline{\includegraphics[width=9.5cm]{MRA6_RMSE_snapsSNR_500MC_9sourcess.eps}} \caption{Comparing the RMSE of DOA estimates against (a) the number of snapshots/sensor with fixed SNR = -5 dB and (b) the sensor level SNR with a fixed 1 snapshot/sensor for 9 uncorrelated equal power sources. The results indicate the AP and SCR algorithms achieve lower RMSE than the ISS algorithm in low-snapshot and low-SNR scenarios.} \label{MRARMSE9sources} \end{figure} Assuming the number of sources is correctly estimated, we explore the DOA estimation performance of the AP and SCR approaches for scenarios with more sources than sensors. Panels $(1a-1d)$ of Fig. \ref{MUSICMRA9sources} compare the MUSIC pseudo-spectra of AP, SCR, ISS and the equivalent NB approaches for 9 sources with DOAs indicated by vertical orange dashed lines. All sources are assumed equal power with sensor level SNR = 0 dB and 1 snapshot/sensor for each of the 41 frequency bands. The equivalent narrowband case uses 41 snapshots/sensor. Note that the AP, SCR and NB MUSIC spectra all show discernible peaks near the true DOAs. However, the ISS approach shows very shallow (smeared) peaks in its MUSIC spectra and misses detecting some sources. Panels $(2a-2d)$ compare the MUSIC pseudo-spectra of the AP, SCR, and ISS algorithms for 10 snapshots/sensor for the broadband sources and equivalently, 410 snapshots/sensor for the narrowband scenario.
When the number of snapshots increases, the MUSIC spectra for all algorithms have sharper peaks at the true DOAs. However, the MUSIC spectrum for the ISS algorithm is still shallower than those of the other 3 algorithms. Fig. \ref{MRARMSE9sources}(a) compares the RMSEs of all approaches averaged over 500 Monte Carlo trials against the number of snapshots/sensor for the 9 sources with fixed SNR = -5 dB. All RMSEs decrease as the number of snapshots increases. AP and SCR have almost identical RMSEs, which are lower than ISS for fewer than 3 snapshots/sensor, and slightly higher than ISS above 4 snapshots/sensor. ISS converges closer to the narrowband scenario than both AP and SCR above 4 snapshots/sensor. Fig. \ref{MRARMSE9sources}(b) compares the RMSEs of all approaches against sensor level SNR for 9 sources with 1 snapshot/sensor for the broadband scenario, and equivalently 41 snapshots/sensor for the NB scenario. All RMSEs decrease as the SNR level increases. AP and NB have very close RMSEs over the SNR range considered, and both are close to that of SCR. ISS has strictly greater RMSE than AP and SCR for all SNR levels due to the low number of snapshots available. These simulations imply that the AP and SCR approaches have advantages over the ISS approach in enumerating and estimating the DOAs of more broadband sources than sensors, especially in low-SNR scenarios with relatively few snapshots. \section{Conclusion} \label{conclusion} This paper proposed new coherent broadband focusing algorithms for sparse linear array processing. The proposed algorithms extend the concepts of periodogram averaging and spatial resampling developed for ULAs to the correlation estimates for any sparse array geometry. By averaging the spatial periodograms across multiple narrow frequency bands, the sources' spectral information is constructively reinforced in the beamspace.
Alternatively, spatial resampling of the correlation estimates from different frequency bands realigns the co-array manifold mismatches caused by the distinct temporal frequencies of the bands. Both broadband periodogram averaging and spatial correlation resampling thus construct augmented covariance matrices with higher statistical precision than processing only one frequency band with the same number of snapshots. The broadband algorithms usually pay a small penalty for coherent focusing when compared with a narrowband algorithm with the same total number of measurements. The new algorithms proposed in this paper addressed the challenges of sparse array processing in low SNR and snapshot-limited environments. Treating data from other frequency bands as additional snapshots inherently reduces the number of snapshots required, compared to narrowband scenarios, to achieve a given estimation precision. Improving the precision of the spatial correlation estimates benefits sparse array source enumeration and localization tasks in underwater sonar systems, where scenarios are practically challenging due to the slow speed of sound propagation, relatively large apertures and non-stationary fields. \section*{Acknowledgment} This material is based upon research supported by the U.S. Office of Naval Research under award numbers N00014-13-1-0230, N00014-17-1-2397 and N00014-18-1-2415. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction}\label{s:introduction} Ultraluminous infrared galaxies (ULIRGs) exhibit some of the most extreme star formation rates in the Universe. The volume density of higher redshift ULIRGs peaks at z$\sim$2-3 \citep{chapman05a}$-$this is also the peak epoch in the cosmic star formation rate density and volume density of active galactic nuclei \citep[AGN; e.g.][]{fan01a,richards06a}. Not only does this indicate a possible link between supermassive black hole growth and rapid star formation, but it also signals the most active phase in galaxy evolution and formation. ULIRGs exhibit very intense (SFR\simgt200\,\Mpy), short-lived bursts ($\tau\,\sim\,$100\,Myr) of star formation. The possible life cycle of a ULIRG, from star-formation dominated, dust-enshrouded galaxy, to obscured AGN and then luminous quasar \citep[e.g.][]{sanders88b,veilleux09a}, provides a testable evolutionary sequence. The best studied ULIRGs at high redshift are submillimetre galaxies \citep[SMGs;][]{blain02a} which are characterised by their detection at 850$\mu$m\, with S$_{850}\simgt5$\,mJy. While SMGs put powerful constraints on galaxy evolution theories and the environments of extreme star formation \citep{greve05a,tacconi06a,tacconi08a,chapman05a,pope06a}, their selection is susceptible to strong temperature biasing \citep{eales00a,blain04a}. At the mean redshift of radio SMGs, z$\sim$2.2, observations at 850$\mu$m\ sample the Rayleigh-Jeans tail of blackbody emission where the observed flux density may be approximated by S$_{850}\propto$L$_{\rm FIR}\,T_{dust}^{-3.5}$. Due to the strong dependence on dust temperature, the 850$\mu$m\ flux density of warm-dust ULIRGs (T$_{d}\lower.5ex\hbox{\gtsima}$40\,K) might be much lower than that of cooler-dust specimens (T$_{d}=$20-40\,K), thus causing the warmer-dust galaxies to evade submm detection.
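The size of this bias is easily evaluated from the approximation above; the short sketch below (in Python, with illustrative dust temperatures of 52\,K and 30\,K drawn from the warm and cool ranges quoted) compares the 850$\mu$m\ flux densities of two equal-luminosity ULIRGs.

```python
# S_850 proportional to L_FIR * T_dust^-3.5, the Rayleigh-Jeans approximation quoted above
def s850(L_fir, T_dust):
    """Observed 850um flux density, in arbitrary units, at fixed redshift."""
    return L_fir * T_dust ** -3.5

# Two ULIRGs of identical L_FIR: a warm-dust source (52 K) vs. a cool-dust SMG (30 K)
ratio = s850(1.0, 52.0) / s850(1.0, 30.0)
# ratio ~ 0.15: the warm-dust ULIRG appears nearly 7x fainter at 850um,
# and so can drop below a submm survey's detection threshold
```

The choice of 52\,K here echoes the mean warm-dust temperature quoted later in this section; 30\,K is an assumed representative cool-dust value.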
This selection bias suggests that a large fraction of the z$\sim2$ ULIRG population has not been accounted for in current work on high-z star formation. \citet{chapman04a} describe the first observational effort to identify warm-dust ULIRGs as a population, via the selection of submm-faint radio galaxies (SFRGs) with starburst-consistent rest-UV spectra. While they were thought to be ULIRGs by their similarities to SMGs (similar radio luminosities, optical spectra, stellar masses), without detection in the far-infrared (FIR), there was no direct evidence that their luminosities were in excess of 10$^{12}$\,{\rm\,L$_\odot$}. Ideally, detection at shorter infrared wavelengths ($\lambda\,\le\,$500\,$\mu$m) must be used to confirm a ULIRG's luminosity in the absence of submm detection; \citet{casey09b} used 70\,$\mu$m\ detection to confirm that a subset of the SFRG population contains a dominating warmer-dust component with $<T_{d}>$\,=\,52\,K. While a population of warm-dust ULIRGs has been shown to exist, some fundamental questions still remain unanswered: are warm-dust ULIRGs in a post-SMG AGN heated phase? Could they be triggered by different mechanisms than the major mergers said to give rise to cold-dust SMGs? Investigating the molecular gas content is fundamental to the characterisation of star formation properties and gas dynamics of a galaxy population. Molecular line transitions from carbon monoxide (CO) are a direct probe of the vast gas reservoirs that are needed to fuel high star formation rates \citep{frayer99a,greve05a,tacconi06a,tacconi08a,chapman08a}. The gas dynamics derived from these observations shed light on galaxies' evolutionary sequences by measuring how disturbed their gas reservoirs are and how long they can maintain their star formation rates with the observed fuel supply.
Recent simulation work hints that a ULIRG phase may be triggered either by major merger interactions \citep[e.g.][]{narayanan09a} or by steady bombardment from low mass fragments \citep[e.g.][]{dave10a}; linking observations with these different evolutionary scenarios is an essential step in understanding galaxy evolution in the early Universe. In this paper, we present CO molecular gas observations, taken with the IRAM Plateau de Bure Interferometer (PdBI), of twelve SFRGs (and four additional SFRGs from the literature) to compare the population with SMGs and other high redshift star forming galaxies. Section \ref{s:observations} describes the sample selection, molecular gas observations and ancillary data, while section \ref{s:results} presents our results, in the form of derived gas and star formation quantities of the SFRG sample. Section \ref{s:discussion} discusses the gas properties of the sample, compares the population to other high redshift galaxies, and hypothesizes on the role of SFRGs in a broader galaxy evolution context relative to local ULIRGs and SMGs, while section \ref{s:conclusions} concludes. Throughout, we use a $\Lambda$ CDM cosmology with $H_{\rm 0} = 71$\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi~Mpc$^{-1}$, $\Omega_{\rm \Lambda}=0.73$ and $\Omega_{\rm m}=0.27$ \citep{hinshaw09a}. \section{Observations \& Reduction}\label{s:observations} Our sample is drawn from a set of \uJy\ radio galaxies in the GOODS-N, Lockman Hole, Elais-N2, SSA13 and the UDS fields using the \citet{chapman04a} selection of submm-faint star forming radio galaxies (SFRGs). They were detected in ultra-deep VLA radio maps \citep{biggs06a,ivison02a,ivison07a,fomalont06a} with S$_{\rm 1.4\,GHz}\lower.5ex\hbox{\gtsima}$15\uJy\ at $>$3\,$\sigma$ with an approximate upper limit of S$_{\rm 1.4\,GHz}\lower.5ex\hbox{\ltsima}$1\,mJy since strong AGN and radio-bright local galaxies were removed from the sample.
The \uJy\ radio galaxy population was identified in an effort to isolate bright star formers at high redshift, so only the sources with non-AGN photometric redshifts of $z\lower.5ex\hbox{\gtsima}$1 were included \citep[e.g.][]{chapman03b}. Spectroscopic follow up with Keck LRIS revealed starburst spectral features \citep{chapman04a,reddy06a}, mostly at redshifts z\simgt1. The SFRGs with the most reliable spectroscopic redshifts (often due to a strong Ly-$\alpha$ emission peak, $\sigma_{z}\lower.5ex\hbox{\ltsima}$0.005) were chosen for CO observations at the IRAM Plateau de Bure Interferometer. We note that all galaxies in our sample satisfy the $BzK$ 'active' galaxy selection criterion \citep{daddi04a}, and 10/14 satisfy the Dust Obscured Galaxy (DOG) selection \citep{dey08a}. All SFRGs have poor rest-UV photometry, so their selection with respect to BX/BM \citep{steidel04a} is not constrained. Figure~\ref{fig:radiodist} shows the distribution in radio luminosity of the CO-observed SFRG sample relative to the distributions of parent SFRGs, CO-observed SMGs, and parent SMGs. We note that the CO observed sample in this paper is about two times less radio luminous than the CO-observed SMGs which were analyzed in \citet{neri03a}, \citet{greve05a}, and \citet{tacconi06a}$-$an aspect of their selection which traces back to the removal of more luminous spectroscopic AGN from the SFRG sample. The equivalent class of spectroscopic AGN is not removed from the SMG sample since their detection in the FIR provides sufficient evidence that the SMGs are star-formation dominated. This likely represents the dominant selection bias between the populations; we address it further in the discussion in section \ref{ss:volumedensity}. We also discuss the AGN fraction of SFRGs at length in section \ref{ss:agn}.
While lower luminosity SMGs have been observed with PdBI (representing the low luminosity tail on the SMGs in Fig~\ref{fig:radiodist}; Bothwell {\rm et\ts al.}, in preparation), only the published CO observed SMGs are included in this paper for comparison. \begin{figure} \centering \includegraphics[width=0.90\columnwidth]{radiodist.pdf} \caption{The distribution in radio luminosity (and inferred FIR luminosity) of CO observed SFRGs (black line-filled) and SMGs (gray filled) relative to the distribution in radio luminosities of their parent populations: spectroscopically confirmed SFRGs (dashed line) and SMGs (dotted line) without CO observations. } \label{fig:radiodist} \end{figure} \subsection{PdBI Observations}\label{ss:pdbiobs} Nine SFRGs were observed from June 2008 to October 2008, while three additional sources were observed in August through October of 2009. The mean redshift of the sample is $z=1.9\pm0.6$. \begin{table*} \begin{center} \caption{PdBI Observation Properties and FIR, Radio Data for SFRGs} \label{tab:observations} \begin{tabular}{c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c} \hline\hline NAME & z$_{\rm opt}$ & Obs. $^{12}$CO & $\nu_{\rm obs}$ & {\footnotesize BW$^{a}$} & {\footnotesize RMS$_{ct}$} & {\footnotesize RMS$_{ch}$} & Bin & Beamsize & $i$ & S$_{24}$ & S$_{70}$ & S$_{350}$ & S$_{850}$ & S$_{1200}$ & S$_{1.4\,GHz}$ \\ & & Transition & (GHz) & {\footnotesize (GHz)} & {\scriptsize (mJy)} & {\scriptsize (mJy)} & (MHz) & (\arcsec\,$\times$\,\arcsec, $^{o}$) & (mag) & (mJy) & (mJy) & (mJy) & (mJy) & (mJy) & (\uJy) \\ \hline \tett & 1.361 & 2$\to$1 & 97.810 & 0.9 & 0.11 & 0.71 & 20 & 7.1$\times$4.4, 150$^{o}$ & 24.5 & ... & ... & ... & $<$3.5 & ... 
& 83.7$\pm$7.0 \\ \rm RGJ105209 & 2.112 & 3$\to$2 & 111.117 & 1.8 & 0.08 & 0.56 & 40 & 5.1$\times$3.9, 133$^{o}$ & $>$25.1 & 167$\pm$47 & $<$1.8 & $<$33 & $<$4.2 & $<$1.8 & 34.5$\pm$5.5 \\ \tptt & 1.819 & 2$\to$1 & 81.780 & 0.9 & 0.10 & 0.65 & 20 & 6.1$\times$5.1, 98$^{o}$ & 22.8 & 150$\pm$30 & $<$6.2 & ... & $<$1.8 & $<$1.4 & 25.6$\pm$6.2 \\ \rm RGJ123642 & 3.661 & 4$\to$3 & 98.915 & 1.8 & 0.08 & 0.56 & 40 & 7.6$\times$3.9, 49$^{o}$ & 25.9 & $<$15 & $<$1.9 & ... & $<$3.4 & $<$0.9 & 20.1$\pm$8.2 \\ \rm RGJ123644 & 2.095 & 3$\to$2 & 111.727 & 1.6 & 0.10 & 0.66 & 40 & 6.6$\times$3.6, 46$^{o}$ & 24.2 & 123$\pm$29 & $<$1.8 & ... & $<$3.6 & $<$2.8 & 39.6$\pm$8.7 \\ \rm RGJ123645 & 1.433 & 2$\to$1 & 94.755 & 1.8 & 0.06 & 0.39 & 40 & 5.0$\times$4.8, 86$^{o}$ & 23.5 & 172$\pm$34 & 4.8$\pm$0.4 & ... & $<$10.8 & $<$1.6 & 83.4$\pm$9.8 \\ \rm RGJ123653 & 1.275 & 2$\to$1 & 101.335 & 1.8 & 0.08 & 0.56 & 40 & 4.9$\times$4.0, 75$^{o}$ & 22.7 & 164$\pm$33 & 6.6$\pm$0.4 & ... & $<$1.2 & $<$0.7 & 86.7$\pm$8.3 \\ \rm RGJ123707 & 1.489 & 2$\to$1 & 92.623 & 1.8 & 0.08 & 0.56 & 40 & 6.1$\times$4.4, 65$^{o}$ & 22.9 & 588$\pm$63 & $<$1.7 & ... & $<$3.2 & $<$1.7 & 24.1$\pm$8.6 \\ \rm RGJ123711 & 1.996 & 4$\to$3 & 153.885 & 1.8 & 0.09 & 0.60 & 40 & 4.2$\times$2.7, 53$^{o}$ & 24.2 & 473$\pm$57 & 1.4$\pm$0.4 & $<$24 & $<$2.4 & $<$4.4 & 126.3$\pm$8.6 \\ \rm RGJ123718 & 1.512 & 2$\to$1 & 91.775 & 1.8 & 0.06 & 0.39 & 40 & 5.5$\times$4.4, 80$^{o}$ & 23.1 & 73$\pm$23 & $<$1.7 & ... & $<$3.8 & $<$0.6 & 15.2$\pm$6.8 \\ \tott & 1.532 & 2$\to$1 & 91.050 & 0.9 & 0.12 & 0.82 & 20 & 5.8$\times$4.3, 127$^{o}$ & 24.8 & 105$\pm$15$^\ddag$ & ... & ... & $<$3.8 & ... & 44.9$\pm$2.4 \\ \rm RGJ131208 & 2.237 & 3$\to$2 & 106.826 & 1.8 & 0.07 & 0.44 & 40 & 4.5$\times$4.0, 82$^{o}$ & 25.2 & 279$\pm$14$^\ddag$ & ... & $<$82 & $<$3.0 & ... & 37.6$\pm$4.0 \\ {\bf Lit SFRGs:} & & & & & & & & & & & & \\ RG\,J123626$^{a}$ & 1.465 & 2$\to$1 & 93.525 & 1.8 & 0.09 & 0.78 & 12 & & 24.3 & 94$\pm$26 & $<$1.7 & ... 
& $<$7.0 & $<$1.1 & 37.9$\pm$9.3 \\ RG\,J123710$^{a}$ & 1.522 & 2$\to$1 & 91.411 & 1.8 & 0.10 & 0.87 & 12 & & 24.2 & 227$\pm$39 & 3.9$\pm$0.5 & ... & $<$1.8 & $<$1.2 & 38.3$\pm$10.1 \\ {\it ( RG\,J123711$^{b}$} & {\it 1.996 } & {\it 3$\to$2 } & {\it 115.410} & 0.9 & 0.19 & 1.34 & 18 ) & & & & & & & & \\ RG\,J131236$^{b}$ & 2.224 & 3$\to$2 & 106.727 & 0.9 & 0.10 & 0.69 & 18 & & 24.1 & ... & ... & ... & $<$2.2 & ... & 43.9$\pm$7.1 \\ RG\,J163655$^{b}$ & 2.186 & 3$\to$2 & 108.536 & 1.2 & 0.07 & 0.49 & 20 & & 23.5 & $<$150 & ... & 13.7$\pm$6.9 & $<$2.2 & ... & 48.7$\pm$4.3 \\ \hline\hline \end{tabular} \end{center} {\small {\bf Table Notes.} The top 12 sources were observed in our program and the bottom five CO-observed SFRGs are taken from the literature: $^a$ from \citet{daddi08a} and $^b$ from \citet{chapman08a}. BW$^{a}$ denotes the bandwidth of observations. Non-detections are 2$\sigma$ upper limits, magnitudes are in AB, and ellipses denote that no data have been taken of that galaxy at the given wavelength (24$\mu$m, 70$\mu$m, 350$\mu$m, and 1200$\mu$m). \rm RGJ123711\ is listed twice, once for its observations taken under our program, and once for the observations discussed in \citet{chapman08a}. The 24$\mu$m\ flux densities of the SSA13 field sources are marked by $^\ddag$: they are derived from IRS spectral observations since no MIPS imaging exists for this field (Casey et al., in preparation). The RMS$_{ct}$ is the noise of the CO observations averaged over the bandwidth of observations (also over one beamsize at phase centre), while RMS$_{ch}$ is the noise per frequency channel which has width with size Bin, in MHz. The frequency bins are chosen for optimum presentation of the spectra, as shown in Figure~\ref{fig:cospec} and are based on bandwidth. The uncertainties on the radio flux densities are errors on flux integrated measurements and not statistical errors. 
\tett\ was targeted at a redshift of 1.357, which is the redshift of a nearby source; the correct UV spectroscopic redshift is 1.361, which is within the bandwidth of our CO observations. \chap\ has an H$\alpha$ redshift of 2.192 and a UV redshift of 2.186, both within the bandwidth of the literature observations. The redshift for \chan\ has been corrected from an earlier measurement of 2.240. Its CO observations were taken at 2.240, so unfortunately the CO[3-2] line at 2.224 falls at the edge of the PdBI bandwidth, which makes it difficult to put constraining limits on its CO emission. } \end{table*} CO observations were carried out with PdBI in the 5-dish D-configuration (i.e. compact). We used the 2\,mm and 3\,mm receivers tuned to the appropriate frequencies of redshifted CO transitions, as detailed in Table~\ref{tab:observations}. Pointing centres were at the VLA positions of the galaxies; the phase calibrators used included 0221+067, 1044+719, 1418+546, 1308+326, and 0954+658, with flux calibrators such as MWC349, 3C84, and 3C345. The synthesised beam size for the configuration varies from $\approx\,$3-6\arcsec\ FWHM. Receiver Noise Temperature calibration was obtained every 12\,min using the standard hot/cold--load absorber measurements. The antenna gains were found to be consistent with a standard range of values from 24-31\,Jy\,K$^{-1}$. We estimate the flux density scales to be accurate to about $\pm15$\%. Data were recorded using both polarisations, offset in frequency, covering a 1.8\,GHz\ bandwidth, except in the case of \tett, \tott, and \tptt\ which had 0.9\,GHz bandwidth coverage (overlapping polarisations), and \rm RGJ123644\ which had 1.6\,GHz coverage (semi overlapping polarisations). \rm RGJ123644\ had an uncertain spectroscopic redshift and upon initial observations with a 0.9\,GHz bandwidth, a feature was identified at the edge of the bandwidth and subsequent observations were shifted in redshift, thus totaling 1.6\,GHz coverage.
The total on-source integration time varied between $\sim$4-12 hours. The data were processed using the {\sc GILDAS} packages {\sc CLIC} and {\sc MAPPING} and analyzed with our own IDL-based routines. The RMS noise of each object's map is also given in Table~\ref{tab:observations}. For clarity of presentation, we have re-gridded the data to a spectral resolution of 40\,MHz for data with $>$1.5GHz bandwidth and to 20\,MHz for data with $<$1.5GHz bandwidth. The bandwidths, binsizes and noise properties of the maps are given in Table~\ref{tab:observations}. The search for CO line detection was performed by integrating over all possible combinations of channels at every location in the observed maps using our own iterative {\sc GILDAS} script. From the result we measure the data's signal to noise ratio (S/N) at all points in the map, and subsequently investigate signal peaks greater than 4$\sigma$. If a detection with signal strength $>$4$\sigma$ exists within 5\arcsec\ of the target position, the observed galaxy is classified as detected. \subsubsection{Offset CO Detections} Searching for detections within a 5\arcsec\ radius implies that significant positional offsets are acceptable. Only three of our sources (\tett, \rm RGJ105209, and \tptt) have offsets \simgt2\arcsec. Nominally, an offset $>$2\arcsec\ would be too large to attribute to the targeted source; however, we note a few factors which can increase the positional uncertainty to $\sim$5\arcsec. PdBI D-configuration positional uncertainty is governed by the source S/N (which is proportional to beamsize/2(S/N), $\lower.5ex\hbox{\ltsima}$1\arcsec, but might diverge at small S/N), the baseline model uncertainty ($\sim$0.25\arcsec\ for D-config), and the quality of phase calibration (which can translate to 1.5-3\arcsec\ seeing). Since the S/N of these galaxies is small, the nominal uncertainty likely increases from $\sim$2\arcsec to $\sim$3\arcsec.
It is also possible for the bulk of the gas reservoir in a system to be offset from the primary source of starlight, especially in the case of major mergers. In addition, we have found large positional offsets $\sim$5\arcsec\ with strong $>$5$\sigma$ CO detections in the larger samples of SMGs (i.e. CO positional offset from radio centroid, Smail \&\ Bothwell, private communication). Since these detections are centred on the correct redshifts and there do not seem to be other radio/UV/IR sources corresponding with their positions, we treat \tett, \rm RGJ105209, and \tptt\ as detected; however, we label their spectra and CO maps as `OFFSET' and list them separately in Table~\ref{tab:co}, both as tentatively detected and as undetected, giving the upper limits for phase center observations. We plot them in subsequent figures as detected, although we mark them with different symbols to indicate exclusion from key calculations in our analysis. \subsubsection{Remarks on Individual Sources} \chap, \chan, and \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi\ \rm RGJ123711\ are taken from \citet{chapman08a}, the pilot study for the sample observed in this paper. These data, when presented in this paper, are different from the spectra and maps presented in Chapman {\rm et\ts al.}\ due to a different reduction. We decided to re-reduce their data to be consistent with our reduction, making significant improvements in both phase and amplitude calibration. The differences do not change the results of Chapman {\rm et\ts al.}, but a few changes in minor conclusions are noted later on in the results and discussion. The Chapman {\rm et\ts al.}\ CO observations of \chan\ were carried out assuming $z=$2.240; however, more recent rest-UV spectroscopic observations indicate a different rest-UV redshift, $z=$2.224.
The final channel in the data cube might suggest a flux excess at the correct redshift, although we cannot distinguish between a bright line cutoff at the edge, a broad faint line, continuum, or a noise spike. We treat this source as undetected with the associated noise properties of our observations. We also note that \rm RGJ123711\ was observed as part of \citet{chapman08a} in \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi, but newer, higher S/N observations in \ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi\ were taken as part of this program. For comparison, we include the Chapman {\rm et\ts al.}\ results in Tables~\ref{tab:observations} and \ref{tab:co}. In the re-reduction of \chap\ observations, we find a possible companion CO source at the same redshift offset by $\sim$8\arcsec\ to the northwest which is not seen in the Chapman {\rm et\ts al.}\ results; however, significant improvements on phase and amplitude calibration have been made since. This source is discussed briefly in section \ref{ss:derivedprops}. The new data taken for \rm RGJ123711\ reveals a marginal \ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi\ detection for the SMG SMM\,J123711.98+621325.7 (also called HDF\,255); its integrated CO flux (detected at 4.5$\sigma$) is $I_{\ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi}\,=\,$0.54$\pm$0.12\,Jy\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi; its spectrum and properties will be discussed at greater length in a paper summarising CO properties of observed SMGs to date (Bothwell {\rm et\ts al.}\ in preparation). \subsection{Archival Observations}\label{ss:archivalobs} This paper also includes the CO observations of two additional SFRGs from the literature.
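For reference, line luminosities such as those tabulated in Table~\ref{tab:co} follow the standard conversion $L^\prime_{CO} = 3.25\times10^{7}\,I_{CO}\,\nu_{\rm obs}^{-2}\,D_{L}^{2}\,(1+z)^{-3}$ in K\,km\,s$^{-1}$\,pc$^2$, with $I_{CO}$ in Jy\,km\,s$^{-1}$, $\nu_{\rm obs}$ in GHz and the luminosity distance $D_{L}$ in Mpc. The minimal Python sketch below (using the flat cosmology adopted in section \ref{s:introduction}; the numerical integration scheme is our own illustrative choice) reproduces the tabulated value for \rm RGJ131208.

```python
import numpy as np

C_KM_S = 299792.458
H0, OMEGA_M, OMEGA_L = 71.0, 0.27, 0.73       # cosmology adopted in the introduction

def lum_dist_mpc(z, n_steps=20000):
    """Luminosity distance (Mpc) for a flat LambdaCDM cosmology, by trapezoidal integration."""
    zs = np.linspace(0.0, z, n_steps)
    inv_e = 1.0 / np.sqrt(OMEGA_M * (1.0 + zs) ** 3 + OMEGA_L)
    d_comoving = (C_KM_S / H0) * np.sum(0.5 * (inv_e[:-1] + inv_e[1:])) * (zs[1] - zs[0])
    return (1.0 + z) * d_comoving

def l_prime_co(i_co_jy_kms, nu_obs_ghz, z):
    """L'_CO in K km/s pc^2 from velocity-integrated flux (Jy km/s) and observed frequency (GHz)."""
    d_l = lum_dist_mpc(z)
    return 3.25e7 * i_co_jy_kms * nu_obs_ghz ** -2 * d_l ** 2 * (1.0 + z) ** -3

# RGJ131208 (Table 2): I_CO(3-2) = 0.88 Jy km/s at nu_obs = 106.826 GHz, z = 2.237
L_co = l_prime_co(0.88, 106.826, 2.237)       # ~2.4e10 K km/s pc^2, matching the table
```

Converting to the CO(1-0) line luminosity additionally requires an assumed line ratio between the observed transition and CO(1-0), as reflected in the separate $L^\prime_{CO(1-0)}$ column of the table.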
\citet{daddi08a} analyzed \dadb\ and \dada\ as being high redshift normal spiral galaxies which have very low star formation efficiencies \citep[a study expanded upon in][]{daddi10a}. The active $BzK$ galaxies which Daddi {\rm et\ts al.}\ survey have significant overlap with the SFRG population, since their CO sources require radio detection, thus they have ULIRG luminosities implied from the radio. Both \dadb\ and \dada\ are selected as SFRGs via the \citet{chapman04a} method and are likely very-luminous star formers, with higher SFRs than most active $BzK$s. \dadb\ has also been detected at 70$\mu$m\ \citep{casey09b}, directly confirming that it is a ULIRG with a warm dust temperature. Interpreting the $BzK$ active galaxy population as ULIRGs rather than ``high-$z$ normal spirals'' is perhaps sensible for these reasons. \begin{table*} \begin{center} \caption{Derived Gas Properties of submm-faint ULIRGs} \label{tab:co} \begin{tabular}{c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c} \hline\hline NAME & z$_{\rm optical}$ & z$_{CO}$ & Obs. $^{12}CO$ & S/N & I$_{CO}$ & L$^\prime_{CO}$ & I$_{CO(1-0)}$ & L$^\prime_{CO(1-0)}$ & $\Delta V_{CO}$ \\ & & & {\small Transition} & & (Jy\,km\,s$^{-1}$) & (K\,km\,s$^{-1}$\,pc$^2$) & (Jy\,km\,s$^{-1}$) & (K\,km\,s$^{-1}$\,pc$^2$) & (km\,s$^{-1}$) \\ \hline {\bf CO-Detected SFRGs:} & & & & & & & \\ \dadalong & 1.465 & 1.465$\pm$0.002 & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi & 6.8 & 0.60$\pm$0.09 & (1.7$\pm$0.3)$\times$10$^{10}$ & 0.20$\pm$0.10 & (2.3$\pm$0.3)$\times$10$^{10}$ & $\sim$350 \\ \rm RGJ123644.13+621450.7 & 2.095 & 2.090$\pm$0.001 & \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi & 4.3 & 0.79$\pm$0.22 & (1.9$\pm$0.5)$\times$10$^{10}$ & 0.12$\pm$0.03 & (2.5$\pm$0.7)$\times$10$^{10}$ & 214$\pm$132 \\ \rm RGJ123645.88+620754.2 & 1.433 & 1.434$\pm$0.001 & \ifmmode{^{12}{\rm CO}(J\!=\!2\! 
\to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi & 4.1 & 0.64$\pm$0.17 & (1.8$\pm$0.5)$\times$10$^{10}$ & 0.22$\pm$0.06 & (2.4$\pm$0.6)$\times$10$^{10}$ & 320$\pm$144 \\ \dadblong & 1.522 & 1.522$\pm$0.002 & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi & 8.9 & 0.85$\pm$0.10 & (2.5$\pm$0.3)$\times$10$^{10}$ & 0.28$\pm$0.08 & (3.3$\pm$0.3)$\times$10$^{10}$ & $\sim$250 \\ \rm RGJ123711.34+621331.0 & 1.996 & 1.988$\pm$0.002 & \ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi & 6.5 & 1.02$\pm$0.16 & (1.3$\pm$0.2)$\times$10$^{10}$ & 0.10$\pm$0.02 & (2.0$\pm$0.4)$\times$10$^{10}$ & 558$\pm$121 \\ & & 1.996$\pm$0.002 & \ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi & 7.3 & 0.61$\pm$0.08 & (7.8$\pm$1.1)$\times$10$^{9}$ & 0.06$\pm$0.01 & (1.2$\pm$0.1)$\times$10$^{10}$ & 318$\pm$86 \\ & & {\it COMBINED:} & \ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi &10.4 & 1.87$\pm$0.18 & (2.4$\pm$0.2)$\times$10$^{10}$ & 0.19$\pm$0.02 & (3.7$\pm$0.4)$\times$10$^{10}$ & (1400$\pm$100) \\ {\it (RGJ123711.34+621331.0} & {\it 1.996} & {\it 1.995} & \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi &{\it 3.2} & {\it 0.70$\pm$0.22} &{\it (1.5$\pm$0.5)$\times$10$^{10}$} & 0.10$\pm$0.03 & {\it (2.0$\pm$0.6)$\times$10$^{10}$)} & ... \\ \rm RGJ131208.34+424144.4 & 2.237 & 2.237$\pm$0.001 & \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi &5.6 & 0.88$\pm$0.15 & (2.4$\pm$0.5)$\times$10$^{10}$ & 0.13$\pm$0.02 & (3.3$\pm$0.6)$\times$10$^{10}$ & 439$\pm$84 \\ \chaplong & 2.186 & 2.187$\pm$0.002 & \ifmmode{^{12}{\rm CO}(J\!=\!3\! 
\to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi &4.9 & 0.29$\pm$0.06 & (7.8$\pm$1.6)$\times$10$^{9}$ & 0.04$\pm$0.01 & (1.0$\pm$0.2)$\times$10$^{10}$ & 252$\pm$40 \\ {\bf Offset-CO SFRGs:} & & & & & & & \\ {\emph (as detections)} & & & & & & & \\ \tettlong & 1.361 & 1.362$\pm$0.002 & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi &4.2 & 0.44$\pm$0.10 & (1.1$\pm$0.2)$\times$10$^{10}$ & 0.15$\pm$0.04 & (1.5$\pm$0.3)$\times$10$^{10}$ & 557$\pm$107 \\ \rm RGJ105209.31+572202.8 & 2.112 & 2.113$\pm$0.001 & \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi &4.6 & 1.12$\pm$0.25 & (2.8$\pm$0.6)$\times$10$^{10}$ & 0.17$\pm$0.04 & (3.7$\pm$0.7)$\times$10$^{10}$ & 446$\pm$85 \\ \tpttlong & 1.819 & 1.820$\pm$0.001 & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi &4.7 & 0.57$\pm$0.12 & (2.5$\pm$0.5)$\times$10$^{10}$ & 0.19$\pm$0.04 & (3.3$\pm$0.7)$\times$10$^{10}$ & 498$\pm$157 \\ {\emph (as non-detections)} & & & & & & & \\ \tettlong & 1.361 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi & ... & $<$0.20 & $<$4.9$\times$10$^{9}$ & $<$0.07 & $<$6.5$\times$10$^{9}$ & ... \\ \rm RGJ105209.31+572202.8 & 2.112 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi & ... & $<$0.21 & $<$5.1$\times$10$^{9}$ & $<$0.03 & $<$6.9$\times$10$^{9}$ & ... \\ \tpttlong & 1.819 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi & ... & $<$0.20 & $<$8.5$\times$10$^{9}$ & $<$0.07 & $<$1.1$\times$10$^{10}$ & ... \\ {\bf CO-Undetected SFRGs:} & & & & & & & \\ \rm RGJ123642.96+620958.1 & 3.661 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi &... & $<$0.22 & $<$7.8$\times$10$^{9}$ & $<$0.02 & $<$1.3$\times$10$^{10}$ & ... \\ \rm RGJ123653.37+621139.6 & 1.275 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi &... 
& $<$0.22 & $<$4.7$\times$10$^{9}$ & $<$0.07 & $<$6.2$\times$10$^{9}$ & ... \\ \rm RGJ123707.82+621057.6 & 1.489 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi &... & $<$0.23 & $<$6.7$\times$10$^{9}$ & $<$0.08 & $<$8.9$\times$10$^{9}$ & ... \\ \rm RGJ123718.58+621315.0 & 1.512 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi &... & $<$0.15 & $<$4.7$\times$10$^{9}$ & $<$0.05 & $<$6.3$\times$10$^{9}$ & ... \\ \tottlong & 1.532 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi &... & $<$0.24 & $<$7.3$\times$10$^{9}$ & $<$0.08 & $<$9.8$\times$10$^{9}$ & ... \\ \chanlong & 2.224 & ... & \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi &... & $<$0.18 & $<$4.7$\times$10$^{9}$ & $<$0.03 & $<$6.3$\times$10$^{9}$ & ... \\ \hline\hline \end{tabular} \end{center} {\small {\bf Table Notes.} The observed CO properties of the sample. The top seven sources have clean $>$4$\sigma$ detections of CO at phase center, the next three sources (labeled 'Offset') have CO detections at the correct redshift but offset $\sim$3-5\arcsec\ from phase center, and the remaining six sources are undetected in CO. $L_{CO}^{\prime}$ and I$_{CO}$ are given for the observed CO transition, which does not depend on source excitation. Then assuming the \citet{weiss07a} SMG excitation ladder (see text for description), we convert to $I_{CO[1-0]}$ and $L_{CO[1-0]}^\prime$, which is given in the subsequent columns. $\Delta V_{\rm \co}$ is the FWHM of the fitted feature shown in Fig.~\ref{fig:comaps}. The SFRG with a double peaked feature (\rm RGJ123711) has the details of each feature listed separately. I$_{\rm \co}$ 2-$\sigma$ limits for undetected SFRGs are calculated assuming a $\Delta V_{\rm \co}$\,=\,320\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ (the mean FWHM of the detected-SFRG sample). 
} \end{table*} \subsection{Multiwavelength Data}\label{ss:otherdata} Radio fluxes from VLA B-array maps (5\arcsec\ resolution) are taken from \citet{richards00a} and \citet{morrison08a}, where the uncertainty represents the error in the extracted flux measurement rather than statistical error. High-resolution observations from the Multi-Element Radio Linked Interferometer Network \citep[MERLIN;][]{thomasson86a} were obtained for the sources in GOODS-N as described in \citet{muxlow05a}, and a combined MERLIN+VLA map was constructed with an RMS noise of 4.0\,\uJy\,beam$^{-1}$. A similar map was constructed in the Lockman Hole with an RMS noise of 4.5\,\uJy\,beam$^{-1}$ \citep*{biggs08a}. The combined MERLIN+VLA maps have positional accuracies of tens of mas and restoring circular beam sizes of 0.4\arcsec\ (GOODS-N) and 0.5\arcsec\ (Lockman Hole). \rm RGJ105209, \rm RGJ123711\ and \rm RGJ131208\ were all observed at 350$\mu$m\ with the {\sc SHARC2} camera \citep{dowell03a} on the Caltech Submillimeter Observatory and reduced with the {\sc CRUSH} software \citep{kovacs06a}. With on-source integration times of 950s, 1080s and 200s, \rm RGJ105209, \rm RGJ123711\ and \rm RGJ131208\ have flux densities of 19$\pm$7\,mJy, 6$\pm$9\,mJy, and 48$\pm$17\,mJy respectively. A fourth SFRG, \chap, was observed at 350$\mu$m\ with the CSO previously \citep{chapman08a}. None are detected at $>$3$\sigma$. While the acquisition of ground-based 350\,$\mu$m\ observations is laborious and dependent on the best weather conditions ($\tau_{\rm cso}\,<\,$0.06, measured at 225\,GHz), the relative depth of our observations is comparable to the expected confusion limits for the {\it Herschel Space Observatory} at 350\,$\mu$m. While {\it Herschel} will detect $>$40\,mJy 350\,$\mu$m\ sources at high-$z$ regularly, sources with flux density $\lower.5ex\hbox{\ltsima}$20\,mJy will need follow-up from ground-based facilities like the CSO for more precise flux density estimates and better SED constraints.
The 1200$\mu$m\ flux limits for GOODS-N and the Lockman Hole come from the Max-Planck Millimeter Bolometer \citep[MAMBO, with a mean RMS of $\sim$0.8\,mJy;][]{greve08a} and the 850$\mu$m\ flux limits from the Submm Common User Bolometric Array \citep[SCUBA, with a mean RMS of $\sim$1.6\,mJy;][]{borys03a,coppin06a}. All fields are covered with $Spitzer$ IRAC (3.6, 4.5, 5.8, and 8.0$\mu$m) and MIPS (24 and 70$\mu$m), though at the greatest depths in GOODS-N. Optical photometry in GOODS-N is from the $HST$ ACS\footnote{Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (STECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).} using the F435W, F606W, F814W, and F850LP filters (B, V, i and z bands). The Lockman Hole has $HST$ ACS F814W (PI: Chapman HST 7057) in addition to extensive optical photometry from Subaru/Suprime-Cam \citep{miyazaki02a}. X-ray fluxes are measured from the Chandra/XMM maps of GOODS-N \citep{alexander03a}, the Lockman Hole \citep{brunner08a}, SSA13 \citep{mushotzky00a}, and SXDF \citep{ueda08a}. \subsection{\rm RGJ105209} Recent work on mid-infrared spectra of SFRGs from the {\it Spitzer} InfraRed Spectrograph (IRS) (Casey et al., in preparation) has revealed that the 24$\mu$m\ source that lies at the position of \rm RGJ105209\ (at the radio position) has a dominant PAH redshift of 2.37, in contrast to the CO and UV spectroscopic redshift of 2.112. We suspect that two different systems overlap to create these discrepant redshifts, but note that only one source is visible in the rest-UV. The high-resolution MERLIN+VLA radio imaging also shows only one point source, offset by 1.2\arcsec\ from the UV source. Due to positional uncertainties, it is difficult to pinpoint which source is generating the radio emission.
For that reason, we assume in this paper that the radio is associated with the 2.112 CO source. We note that similar positional overlap phenomena have occurred with star forming galaxies before, for example there are several examples of sources with two distinct redshifts for one continuum source in the surveys of \citet{steidel04a} and \citet{reddy06a,reddy08a}, and the density of bright star forming z$\sim$2 sources is high enough that overlap will occur with non-negligible probability. \section{Analysis and Results}\label{s:results} \subsection{Derived Properties from CO Data}\label{ss:derivedprops} \begin{figure*} \centering \includegraphics[width=0.67\columnwidth]{comap021827.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap105209.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap105239.pdf}\\ \includegraphics[width=0.67\columnwidth]{comap123644.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap123645.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap123711.pdf}\\ \includegraphics[width=0.67\columnwidth]{comap131208.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap163655.pdf}\\ \includegraphics[width=0.67\columnwidth]{comap123642.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap123653.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap123707.pdf}\\ \includegraphics[width=0.67\columnwidth]{comap123718.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap131207.pdf}\hspace{0.1in} \includegraphics[width=0.67\columnwidth]{comap131236.pdf} \caption{CO maps of SFRG sample in 40\arcsec\,$\times$\,40\arcsec\ cutouts, sorted first by detected CO (the first eight are detected), then by RA. For those galaxies detected in CO, the maps are integrated over the optimal velocity channels corresponding to the detected CO line. 
Black contours are integer multiples of the RMS starting at 2$\sigma$ and thin gray contours are negative integer multiples of the RMS starting at -2$\sigma$. The relative flux scales on the maps are indicated by the color-bars to the right. The spectra in Figure \ref{fig:cospec} are extracted within one beamsize (white contours in the lower right, outer contour is FWHM) centred on the highest S/N point in the map (large cross). The three sources at top are labelled ``Offset'' since their CO peak is $\sim$3-5\arcsec\ offset from phase center, a greater offset than would be caused by positional inaccuracies in the instrument. For galaxies not detected in CO, the maps are integrated over all velocity channels and spectra are extracted from the map center (small cross). } \label{fig:comaps} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.68\columnwidth]{spectrum-021827.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-10-31.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-105239.pdf}\\ \includegraphics[width=0.68\columnwidth]{spectrum-1816.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-188.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-254.pdf}\\ \includegraphics[width=0.68\columnwidth]{spectrum-13398.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-163655.pdf}\\ \includegraphics[width=0.68\columnwidth]{spectrum-16-27.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-6b.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-1158.pdf}\\ \includegraphics[width=0.68\columnwidth]{spectrum-1335.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-13394.pdf} \includegraphics[width=0.68\columnwidth]{spectrum-131236.pdf}\\ \caption{ The CO spectra for SFRGs. Spectra are extracted within one beamsize centred at the highest S/N peak for detected sources and at phase center for non-detections (i.e. the galaxies' radio positions). Vertical dashed lines indicate the galaxies' optically derived redshifts.
Gray brackets highlight the channel range integrated over to produce the maps in Fig~\ref{fig:comaps}. The first three sources labeled ``Offset'' are detected at large offset positions $\sim$3-5\arcsec\ from phase center. At bottom right, we show the spectrum for \chan, the SFRG observed in \citet{chapman08a} at redshift 2.240; its revised redshift, 2.224, is at the edge of the observed bandwidth.} \label{fig:cospec} \end{figure*} Table~\ref{tab:co} lists the integrated line fluxes and limits for all SFRGs in our sample. Out of the twelve sources in our observing program, seven SFRGs were detected at $\lower.5ex\hbox{\gtsima}$4$\sigma$. Including the literature SFRGs, 10/16 are CO-detected, and one `undetected source' has insufficient data to determine detection (\chan, with an ambiguous redshift). Three of our detected sources have significant positional offsets from their radio positions; we have therefore classified these as tentative (`Offset') detections in Table~\ref{tab:co} and excluded them from calculations in our analysis. The detection fraction of SFRGs, 10/16 (63\%, including offset sources), is similar to the detection fraction of SMGs in \citet{greve05a} and \citet{coppin08a}. While we do not expect continuum flux from any SFRGs, we test for it based on FIR detection limits. To test the detectability of blackbody continuum radiation, we interpolate the best-fit FIR SEDs (see section \ref{ss:dustmass}) to estimate the flux density at the frequency of the PdBI observations ($\nu_{\rm obs}\,=\,80-150\,$GHz). All SFRGs in the sample have estimated continuum flux densities $<$0.13\,mJy, of order the RMS noise in our CO spectra and thus not detectable. Each remaining SFRG spectrum was fit with a three-parameter Gaussian using a least-squares fitting algorithm, where the spectral noise is simulated as a function of the RMS channel noise \citep[see ][ for details]{coppin07a}.
Since no detectable continuum emission is expected, we do not fit line profiles with constant baselines. Figure~\ref{fig:comaps} shows the velocity-averaged spatial maps of the mm line observations (with VLA centroids marked with crosses) and Figure~\ref{fig:cospec} shows the extracted spectra of the CO detections with overlaid Gaussian fits and the original optical spectroscopic redshifts. The 2D maps for CO-detected SFRGs are integrated over the channel range which produces the highest signal-to-noise line profile in the spectrum. The 2D maps of the undetected sample are integrated over all velocity channels. CO-detected SFRGs have CO redshifts which agree with their optical spectroscopic features. Every spectrum was also tested against a fit of two three-parameter Gaussians (i.e. a double-peaked line) using a $\chi^2$ goodness-of-fit test. Only one galaxy's line fits were significantly improved by fitting a double-peaked Gaussian. The detection of \rm RGJ123711\ is double peaked, with one component centred at z$_{\rm \co}$\,=\,1.988 and another at z$_{\rm \co}$\,=\,1.996, which bracket its optical redshift ($z\,=$\,1.995). The line properties of both components of \rm RGJ123711\ are listed separately in Table~\ref{tab:co}; the total system is represented by the sum of the two separate gas masses and the sum of the dynamical masses (we treat them as separate systems since the peaks are asymmetrical and more likely indicative of a merger than a rotating disk). The double-peaked fraction for CO-detected SFRGs is thus 1/8\,$\approx$\,13\%, comparable to 5/30\,$\approx$\,17\%\ for SMGs. The weighted mean line width of this sample, excluding `offset' detections, is 320$\pm$80\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi (note that it increases to 370$\pm$110\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi if the offset sources are detections).
The 2$\sigma$ line intensity limits for the CO-undetected SFRGs are calculated using the following: \begin{equation} I_{\rm CO} < 2\, {\rm RMS}_{\rm ch}\, (\Delta V_{\rm \co}\, dv)^{1/2} \end{equation} where ${\rm RMS}_{\rm ch}$ is the channel noise from Table~\ref{tab:observations}, and $dv$ is the spectral resolution, i.e. the binsize value given in Table~\ref{tab:observations} converted to \ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ \citep[as in][]{boselli02a}. These limits are given in Table~\ref{tab:co}. The spatial maps for the undetected SFRGs (in Figure~\ref{fig:comaps}) are integrated over the entire bandwidth of the observations. None of the maps are corrected for primary beam attenuation; given the 40$\times$40\arcsec\ size, the PBA correction is $<$2 even for all peripheral areas of the map. The spatial map for \chap\ shows another bright companion CO emitter $\sim$8\arcsec\ to the northwest of \chap\ not previously identified in the reduction of Chapman {\rm et\ts al.}\ 2008. Extraction of its spectrum reveals a line centred on the same frequency as the \chap\ \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi\ line; however, there are no radio sources or spectroscopically identified sources nearby. If follow-up CO observations of \chap\ reveal the same feature in different CO transitions, then high-resolution CO will be needed to precisely measure the companion galaxy's position, and thus its multi-wavelength properties. A lack of multiple CO transition data has meant that many past studies have relied on the assumption of constant brightness temperature to convert between higher-transition line luminosities and L$^\prime_{\rm \co(1-0)}$. However, recent observations of multiple \co\ transitions in high redshift galaxies have shown that the transitions are often not in thermal equilibrium \citep{dannerbauer09a}.
The three $z\sim2.5$ SMGs of \citet{weiss07a} are consistent with being in thermal equilibrium up to the J=3 transition but not towards higher-J values, while several other high redshift sources are much more excited (for example, FSC\,10214+4724 \citep{scoville95a} and APM\,08279+5255 \citep{weiss07b}, which have nearly constant brightness temperatures out to J=6) or much less excited (ERO J16450+4626 turns over at CO(3-2)). In this paper we adopt the brightness temperature conversions inferred from the spectral line energy distributions of the three SMGs observed in \citet{weiss07a}; explicitly, we assume the following line flux ratios derived from LVG models: S$_{\rm CO21}$/S$_{\rm CO10}$\,=\,3.0$\pm$0.1, S$_{\rm CO32}$/S$_{\rm CO10}$\,=\,6.8$\pm$0.5 and S$_{\rm CO43}$/S$_{\rm CO10}$\,=\,10.0$\pm$0.8. One SFRG has multiple line data (\rm RGJ123711) and can be analyzed for its excitation; this analysis is presented in section \ref{ss:254excitation}. More recent work \citep{ivison10c,danielson10a} on \ifmmode{^{12}{\rm CO}(J\!=\!1\! \to \!0)}\else{$^{12}${\rm CO}($J$=1$\to$0)}\fi\ observations in SMGs shows that the assumption of a constant brightness temperature can lead to underestimations of gas mass by a factor of $\sim$2. They find ratios of luminosity between \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi\ and \ifmmode{^{12}{\rm CO}(J\!=\!1\! \to \!0)}\else{$^{12}${\rm CO}($J$=1$\to$0)}\fi\ of $r_{3-2/1-0}\,=\,$0.55 and $\sim$0.67, respectively. Our excitation assumption implies a ratio $r_{3-2/1-0}\,\sim\,$0.78, slightly higher than these recent studies, but well within the uncertainties of the $S_{\ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi}$ measurements of Ivison {\rm et\ts al.}\ and Danielson {\rm et\ts al.}.
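The flux-ratio conversion described above reduces to simple arithmetic. The following is an illustrative sketch (not the reduction code used for this paper), assuming the standard line-luminosity relation $L^\prime_{\rm CO} = 3.25\times10^{7}\,I_{\rm CO}\,\nu_{\rm obs}^{-2}\,D_L^{2}\,(1+z)^{-3}$ (with $I_{\rm CO}$ in Jy\,km\,s$^{-1}$, $\nu_{\rm obs}$ in GHz, $D_L$ in Mpc) together with the \citet{weiss07a} flux ratios quoted in the text; all example numbers are round, hypothetical values, not measurements from this paper:

```python
# Sketch of the CO conversions described in the text.
# FLUX_RATIO_TO_10 holds the Weiss et al. (2007) LVG ratios S_CO(J)/S_CO(1-0)
# quoted above; line_luminosity implements the standard line-luminosity
# relation (nu_obs in GHz, D_L in Mpc, I_CO in Jy km/s, L' in K km/s pc^2).

FLUX_RATIO_TO_10 = {"2-1": 3.0, "3-2": 6.8, "4-3": 10.0}

def line_luminosity(i_co_jykms, nu_obs_ghz, d_l_mpc, z):
    """L'_CO in K km/s pc^2 for the observed transition."""
    return 3.25e7 * i_co_jykms * nu_obs_ghz**-2 * d_l_mpc**2 * (1.0 + z)**-3

def i_co_ground(i_co_jykms, transition):
    """Scale a high-J integrated flux to the implied CO(1-0) flux."""
    return i_co_jykms / FLUX_RATIO_TO_10[transition]

def gas_mass(l_prime_co10, x_co=0.8):
    """M_H2 in Msun, using the ULIRG factor X = 0.8 Msun (K km/s pc^2)^-1."""
    return x_co * l_prime_co10

# Example with hypothetical round numbers: a CO(4-3) flux of 1.0 Jy km/s
# implies a CO(1-0) flux of 0.1 Jy km/s under these ratios.
i_10 = i_co_ground(1.0, "4-3")
```

The luminosity distance $D_L$ must come from an assumed cosmology (e.g. an external cosmology calculator); it is left as an input here.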
The derived line luminosities for our observations are listed in Table~\ref{tab:co}, in their observed transition as well as in our conversion to L$^\prime_{\co(1-0)}$. \subsection{CO Excitation of \rm RGJ123711}\label{ss:254excitation} \begin{figure*} \centering \includegraphics[width=0.89\columnwidth]{co-254-both.pdf} \includegraphics[width=0.99\columnwidth]{254ladder.pdf} \caption{ {\it Left:} Both \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi\ and \ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi\ observed transitions for \rm RGJ123711. Determined from the spectral lines seen in the higher S/N CO[4-3] observation, the velocity range of the emitting region which overlaps in each data set is marked by two vertical dashed lines at -700\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ and 300\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi. {\it Right:} the CO SLED normalised to I$_{CO[3-2]}$ for \rm RGJ123711\ overlaid with literature results from other high-redshift sources \citep{weiss07a,danielson10a} and the Milky Way \citep{fixsen99a}. One of these sources is \dadb, an SFRG observed by \citet{dannerbauer09a}. We also include LVG model no. 1 from Dannerbauer {\rm et\ts al.}, representing a low-excitation Milky Way type source. By normalising to I$_{CO[3-2]}$, we illustrate that it is difficult to draw conclusions on the nature of the CO SLED without observing at least three CO transitions, particularly in the region where the SLEDs are suspected of turning over near CO[4-3]. } \label{fig:co254both} \end{figure*} The observations of \rm RGJ123711\ in \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi\ and \ifmmode{^{12}{\rm CO}(J\!=\!4\! \to \!3)}\else{$^{12}${\rm CO}($J$=4$\to$3)}\fi\ allow an analysis of the CO excitation in the system. 
Figure~\ref{fig:co254both} shows the spectrum of both lines and the proposed CO spectral line energy distribution (SLED) for the source; since the \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi\ observations do not cover the entire velocity range of the emitting gas, a direct comparison of the line strengths can only be made in the channels where the data overlap (between -700 and 300\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi), marked by vertical dashed lines in Figure~\ref{fig:co254both}. The ratio of fluxes is then S$_{CO[4-3]}$/S$_{CO[3-2]}$\,=\,1.6$\pm$0.4. We use Monte Carlo testing to estimate the added uncertainty introduced by not including the missing portion of \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi\ (from -1150 to -700\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi) and remeasure this ratio to be S$_{CO[4-3]}$/S$_{CO[3-2]}$\,=\,2.0$\pm$0.8. This agrees with the mean [4--3]/[3--2] ratio observed for SMGs, 1.4$\pm$0.2 \citep{weiss07a}, and with the expected ratio if constant brightness temperature is assumed, 1.8; however, it appears to be inconsistent with the ratio for the Milky Way, $\sim$0.7 \citep{fixsen99a}. We contrast this result with those of \citet{weiss07a}, \citet{dannerbauer09a}, and \citet{danielson10a} in the right panel of Fig.~\ref{fig:co254both}. Although we have taken measurements for $J=$3 and 4 only and other work is restricted to $J\le$3 transitions, we see that \dada\ and \rm RGJ123711, the two SFRGs included here, seem to exhibit different excitation levels: \dada\ is more consistent with Milky Way type excitation while \rm RGJ123711\ is consistent with the \citet{weiss07a} SMGs. The issue of gas excitation is essential to the physical interpretation of high-$z$ star formers, and observations of multiple line transitions are necessary for a full analysis of the population \citep[e.g.
see recent work on low CO transitions, like \ifmmode{^{12}{\rm CO}(J\!=\!1\! \to \!0)}\else{$^{12}${\rm CO}($J$=1$\to$0)}\fi\ in][]{riechers06a,hainline06a,ivison10c,carilli10a}. \subsection{Gas Mass and Dynamical Mass} To derive \ifmmode{{\rm H}_2}\else{H$_2$}\fi\ masses we assume the ULIRG \ifmmode{{\rm H}_2}\else{H$_2$}\fi/\co\ gas conversion factor $X\,=\,M_{\rm \ifmmode{{\rm H}_2}\else{H$_2$}\fi}/L^\prime_{\rm \co}\,=\,$ 0.8\,M$_\odot$\, (K\,km\,s$^{-1}$\,pc$^2$)$^{-1}$ from \citet{downes98a}. It is worth noting, however, that the CO to \ifmmode{{\rm H}_2}\else{H$_2$}\fi\ conversion factor varies substantially depending on the type of source being sampled, and it can range from 0.8 in ULIRGs to 4-5 in normal spiral, late-type galaxies \citep[see discussion in the Appendix of][]{tacconi08a}. Our choice of $X\,=\,$0.8\,{\rm\,M$_\odot$}\,(K\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\,pc$^{2}$)$^{-1}$ is based on the merging, disturbed nature of ULIRGs and their high star formation rates. Mergers are confined by ram pressure which pervades the whole system, while quiescent discs are confined gravitationally, leading to gas fragmenting into clouds (at the Jeans mass) and thus to much higher ratios of \ifmmode{{\rm H}_2}\else{H$_2$}\fi\ mass per unit \co. We note that higher gas conversion factors are possible, and perhaps likely, if SFRGs were to represent an intermediate stage or `less extreme' population of galaxies than SMGs. For this reason, we advise that our \ifmmode{{\rm H}_2}\else{H$_2$}\fi-dependent quantities (like $M_{\ifmmode{{\rm H}_2}\else{H$_2$}\fi}$, gas fraction, etc.) be treated with caution, since the full range of galaxy properties and conversion factors (from $X_{\rm CO}=\,$0.8--4.5) suggests variations of $M_{\ifmmode{{\rm H}_2}\else{H$_2$}\fi}$ of order 0.75\,dex and variations of gas fraction of order 0.6.
We also include gas mass estimates for $X_{\rm CO}=\,$4.5{\rm\,M$_\odot$}\,(K\,km\,s$^{-1}$\,pc$^2$)$^{-1}$ for contrast in Table~\ref{tab:derived} but proceed with $X_{\rm CO}$=0.8 for our analysis. Using X$_{\rm CO}$=0.8 for ULIRGs, the mean gas mass of the sample is (2.1$\pm$0.7)$\times$10$^{10}$\,{\rm\,M$_\odot$}, which is roughly half the mean \ifmmode{{\rm H}_2}\else{H$_2$}\fi\ mass of CO-observed SMGs \citep[5.1$\times$10$^{10}$\,{\rm\,M$_\odot$};][]{greve05a,neri03a}. Dynamical mass is dependent on the galaxy's inclination angle; we use the average inclination correction of $\langle sin\,i \rangle\,=$\,1/2 (in other words $i$\,=\,30$^{o}$, corresponding to a random distribution in galaxy angles between 0 and 90$^{o}$). Thus \begin{equation} M_{\rm dyn}\,sin^{2}\,i\,= \frac{C \sigma^{2} r}{G}\,=\,4.215\times10^{4}\,C \Delta V_{\rm \co}^{2} r \end{equation} where $C\,=$\,4 \citep[the value adopted for mergers;][]{genzel03a}, $r$ is given in kpc and $\Delta V_{\rm CO}$ in \ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi. We note, however, that the difference between assuming a merger ($C=4$), spheroid ($C=5$), or rotating disk ($C=3.4$) is of the order of the uncertainty in M$_{\rm dyn}sin^{2}\, i$ caused by the uncertainty in line width. Deriving a dynamical mass depends on a resolved spatial size measurement ($r$ in Eq.~2), which is unfortunately unavailable with these low-resolution (3-6\arcsec\ beam, D-configuration PdBI) data. In place of measuring CO sizes explicitly, we assume that the SFRGs of this sample have gas emitting regions roughly the size of SMGs \citep{tacconi08a}, which have $R_{\rm 1/2}=$2$\pm$1\,kpc. This is supported by our analysis of the high-resolution MERLIN+VLA radio imaging discussed in section~\ref{sec:merlinvla}, where SFRGs and SMGs are shown to have similar radio sizes and SFRG sizes average to 2.3$\pm$0.8\,kpc.
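The dynamical-mass estimate above is straightforward to evaluate. A minimal sketch, using the merger value $C=4$ and the mean inclination correction $\langle \sin i\rangle\,=$\,1/2 adopted in the text; the line width and radius in the example are the sample means quoted in the text, used here purely for illustration:

```python
# Sketch of the dynamical-mass relation in Eq. (2):
# M_dyn sin^2(i) = 4.215e4 * C * (Delta V_CO)^2 * r,
# with Delta V_CO in km/s, r in kpc, and the result in Msun.

def dyn_mass_sin2i(delta_v_kms, r_kpc, c=4.0):
    """M_dyn sin^2(i) in Msun (C = 4 is the merger value)."""
    return 4.215e4 * c * delta_v_kms**2 * r_kpc

def dyn_mass(delta_v_kms, r_kpc, c=4.0, mean_sin_i=0.5):
    """Inclination-corrected dynamical mass, dividing by <sin i>^2."""
    return dyn_mass_sin2i(delta_v_kms, r_kpc, c) / mean_sin_i**2

# e.g. the sample's mean line width (320 km/s) and mean radio size (2.3 kpc)
m_sin2i = dyn_mass_sin2i(320.0, 2.3)   # ~4e10 Msun before the sin^2(i) term
```

Swapping in $C=5$ (spheroid) or $C=3.4$ (rotating disk) changes the result by less than the line-width uncertainty, as noted above.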
\citet{bothwell10a} have taken high-resolution CO observations of \rm RGJ123711\ and found its size to be consistent with this assumption (its CO emission extends over 12.6\,kpc in one direction and $\sim$5\,kpc in the other, corresponding to a 3.9\,kpc effective radius, only 0.2\,kpc larger than our measured MERLIN+VLA radio size). \rm RGJ123711\ appears to be a merging system with two components of roughly equal spatial extent $\lower.5ex\hbox{\ltsima}$2\,kpc, thus its effective size is about $\sim$2$\times$ larger than the rest of the galaxies in our sample, although we still use the MERLIN+VLA sizes as priors for the CO size due to the lack of resolved CO data. For the sources with measured MERLIN+VLA sizes, we use the $R_{\rm eff}$ as the radius $r$ in Equation 2, and for sources without MERLIN+VLA data, we assume that $r$ is equal to the mean of the measured MERLIN+VLA sizes, $r$\,=\,2.3$\pm$0.8\,kpc. It is important to note that the extent and morphology of the MERLIN+VLA radio emission can affect the interpretation of the dynamical mass estimate. Three objects have large radio/UV offsets or very extended radio emission: RGJ105209 (whose offset is discussed in section 2.4), RGJ123707 (extended across a $\sim$7\,kpc$\times$17\,kpc area), and RGJ123710 (two knots separated by $\sim$8\,kpc). If the CO gas were to trace the MERLIN+VLA morphology perfectly, as is our a priori assumption, then the radius of each knot of radio emission would be used to estimate dynamical masses and these would be summed. Depending on the distribution of these knots, the true dynamical mass would be calculated by considering the size of each knot and the distance separating them; however, within the uncertainty of our measurements and assumptions (e.g. inclination angle, line-width uncertainty and uncertainty on the value of $C$) we consider our approach of circularising the MERLIN+VLA sizes and inferring a radius to be sufficiently accurate.
We caution that dynamical masses are potentially slightly underestimated due to the size and distribution of the CO gas relative to the radio emission. For example, radio sizes are potentially inconsistent with CO sizes, as with the B-configuration data of \dada\ and \dadb\ presented in \citet{daddi10a}, which suggest larger dynamical mass estimates by a factor proportional to the radius. When applied to the whole SFRG sample, we derive a median SFRG dynamical mass of M$_{\rm dyn} = \,$(7.2$^{+6.7}_{-3.4}$)$\times$10$^{10}$\,{\rm\,M$_\odot$}$/\langle sin^{2}\,i \rangle\, \approx \,$2.9$\times$10$^{11}$\,{\rm\,M$_\odot$}. The same value of $C$ is used to compare with the mean dynamical mass of SMGs, which is M$_{\rm dyn}(SMG) = $(1.5$^{+1.4}_{-0.4}$)$\times$10$^{11}$\,{\rm\,M$_\odot$}$/\langle sin^{2}\,i\rangle\,\approx\,$6.0$\times$10$^{11}$\,{\rm\,M$_\odot$}. \begin{figure*} \centering \includegraphics[width=0.66\columnwidth]{cutout-10-31.pdf} \includegraphics[width=0.66\columnwidth]{cutout-105239.pdf} \includegraphics[width=0.66\columnwidth]{cutout-1816.pdf}\\ \includegraphics[width=0.66\columnwidth]{cutout-188.pdf} \includegraphics[width=0.66\columnwidth]{cutout-254.pdf} \includegraphics[width=0.66\columnwidth]{cutout-1627_hc.pdf}\\ \includegraphics[width=0.66\columnwidth]{cutout-6b_hc.pdf} \includegraphics[width=0.66\columnwidth]{cutout-1335_hc.pdf} \includegraphics[width=0.66\columnwidth]{cutout-1158_hc.pdf}\\ \includegraphics[width=0.66\columnwidth]{cutout-2100.pdf} \includegraphics[width=0.66\columnwidth]{cutout-4171.pdf}\\ \caption{ Contours of MERLIN+VLA high-resolution 0.4-0.5\arcsec\ radio emission overlaid on 3\arcsec\,$\times$\,3\arcsec\ optical imaging for the CO-observed SFRGs. The optical imaging of GOODS-N sources is from $HST$-ACS $B$, $V$, and $z$ bands (presented in tricolor). The Lockman Hole sources have ACS $i$-band (black and white cutouts). The MERLIN+VLA beam shape is shown in the lower right of each panel in white (outer contour is the FWHM).
The contour levels plotted are integer multiples of $\sigma$ starting at 3$\sigma$ (except for \rm RGJ123711\ and \rm RGJ123653, where only 3, 5, and 7$\sigma$ contours are shown for clarity). Structure in radio emission is seen in these galaxies on 1-8\,kpc scales. } \label{fig:merlin} \end{figure*} \subsection{MERLIN+VLA Radio Morphology}\label{sec:merlinvla} We use the high-resolution MERLIN+VLA radio maps to map the extent of starburst emission and assess the contribution of AGN by considering their radio morphology, which is shown as contour overlays on optical imaging in Fig.~\ref{fig:merlin}. With resolutions of 0.4\arcsec\ and 0.5\arcsec\ per beam (for GOODS-N and the Lockman Hole respectively), the smallest resolvable structure at z$\sim$1.5 would be 4-5\,kpc across. The typical size of an AGN emission region in a \uJy\ radio galaxy at $z>1$ \citep[see][]{casey09a} at radio wavelengths is much less than 1\,kpc, implying that an AGN-dominated source would be completely unresolved in MERLIN+VLA radio maps \citep[e.g. as seen in the high stellar mass, giant elliptical systems of ][]{casey09a}. Fig.~\ref{fig:merlin} shows that each of these galaxies has extended emission irregularly spread across large regions of the galaxy, suggestive of spatially distributed star formation that is unlikely to be generated by AGN. About 10-60\%\ of the total flux from each source is estimated to be resolved out by the high-resolution imaging, depending on whether the galaxy's morphology is extended or compact. We measure an effective radius of the star-forming area by isolating the regions where MERLIN+VLA radio emission is significant at $>$3$\sigma$; the square root of this surface area divided by $\pi$ then gives an effectively circularised radius, R$_{\rm eff}$, which is given in Table \ref{tab:derived}.
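The circularisation step described above can be sketched as follows; the toy map, noise level, and pixel scale are hypothetical, not taken from the MERLIN+VLA data.

```python
import numpy as np

def effective_radius(radio_map, sigma_rms, pix_kpc):
    """Circularised effective radius (kpc) of the >3-sigma emission region."""
    mask = radio_map > 3.0 * sigma_rms
    area_kpc2 = mask.sum() * pix_kpc**2   # surface area of significant emission
    return np.sqrt(area_kpc2 / np.pi)     # R_eff = sqrt(A / pi)

def sfr_density(sfr, r_eff_kpc):
    """SFR surface density (M_sun/yr/kpc^2) over the circularised area."""
    return sfr / (np.pi * r_eff_kpc**2)

# toy example: an 8x8-pixel "galaxy" at 10 sigma on a unit-noise background
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[28:36, 28:36] += 10.0
r = effective_radius(img, 1.0, 0.5)       # assumed 0.5 kpc per pixel
```

The same circularised radius also sets the surface area used for the star formation rate densities discussed later.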
While the morphologies are irregular (and are not in fact circular), we find that the effective radii average to 2.3$\pm$0.8\,kpc, which agrees with the size measurements of SMGs in \citet{biggs08a} and \citet{chapman04b}. We use the agreement of MERLIN+VLA SFRG and SMG sizes to partly justify our assumption of similar CO sizes between the populations. Since as much as 60\%\ of the radio flux can be resolved out in the high-resolution maps, we caution that this effective radius might be an underestimate, and that it could increase by factors of 1.1-1.5$\times$ if all of the flux is accounted for; however, this adjustment is not made to our effective radii measurements since it is highly uncertain and relies on assumptions of the distribution of star formation activity in the outskirts of each source. Similarly, this underestimate of effective radius would propagate to the derivation of dynamical masses. \begin{table*} \begin{center} \caption{Other Derived Properties of CO-observed SFRGs} \label{tab:derived} \begin{tabular}{c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c} \hline\hline NAME & {\it z} & L$_{\rm FIR}$ & SFR$_{\rm radio}$ & SFR$_{\rm UV}$ & M$_{\rm \ifmmode{{\rm H}_2}\else{H$_2$}\fi}$[U] & M$_{\rm \ifmmode{{\rm H}_2}\else{H$_2$}\fi}$[Sp] & M$_{\rm dyn}$sin$^2i$ & SFE & R$_{\rm eff}$ & $\Sigma_{\rm SFR}$& M$_\star$ & Class$^{\rm X}$ & Class$^{\rm IR}$ & Class$^{\rm CO}$ \\ & & {\scriptsize (10$^{12}${\rm\,L$_\odot$})} & {\scriptsize ({\rm\,M$_\odot$\,yr$^{-1}$})} & {\scriptsize ({\rm\,M$_\odot$\,yr$^{-1}$})} & ({\rm\,M$_\odot$}) & ({\rm\,M$_\odot$}) & ({\rm\,M$_\odot$}) & {\scriptsize ({\rm\,L$_\odot$}/{\rm\,M$_\odot$})} & (kpc) & ($\dagger$) & ({\rm\,M$_\odot$}) & & & \\ \hline \tett... & 1.362 & 3.3$^{+1.0}_{-0.8}$ & 570$^{+170}_{-120}$ & 9$^{+5}_{-3}$ & 1.2$\times$10$^{10}$ & 6.5$\times$10$^{10}$ & 1.2$\times$10$^{11}$ & 280 & ... & ... & 2.3$\times$10$^{11}$ & SB & SB & SB \\ \rm RGJ105209... 
& 2.113 & 4.4$^{+2.2}_{-1.5}$ & 750$^{+380}_{-250}$ & 20$^{+30}_{-10}$ & 3.0$\times$10$^{10}$ & 1.7$\times$10$^{11}$ & 7.4$\times$10$^{10}$ & 150 & 2.2 & 49 & 1.0$\times$10$^{11}$ & SB & SB & SB \\ \tptt... & 1.820 & 2.2$^{+1.8}_{-1.0}$ & 380$^{+300}_{-170}$ & 90$^{+30}_{-20}$ & 2.6$\times$10$^{10}$ & 1.5$\times$10$^{11}$ & 9.6$\times$10$^{10}$ & 80 & 2.3 & 23 & 1.2$\times$10$^{11}$ & SB & SB & SB \\ \dada... & 1.465 & 1.9$^{+1.5}_{-0.5}$ & 320$^{+250}_{-140}$ & 7$^{+5}_{-3}$ & 1.8$\times$10$^{10}$ & 1.0$\times$10$^{11}$ & 5.8$\times$10$^{10}$ & 110 & 2.8 & 13 & 2.8$\times$10$^{10}$ & SB & SB & SB \\ \rm RGJ123642... & 3.661 & 1.1$^{+1.7}_{-0.7}$ & 1900$^{+3000}_{-1100}$ & 30$^{+20}_{-10}$ & $<$1.0$\times$10$^{10}$ & $<$5.9$\times$10$^{10}$ & ... & $>$110 & 1.4 & 310 & 4.1$\times$10$^{10}$ & SB+ & SB & SB \\ \rm RGJ123644... & 2.090 & 5.0$^{+3.5}_{-2.1}$ & 850$^{+600}_{-350}$ & 30$^{+30}_{-20}$ & 2.0$\times$10$^{10}$ & 1.1$\times$10$^{11}$ & 2.1$\times$10$^{10}$ & 250 & 2.7 & 37 & 1.1$\times$10$^{11}$ & SB+ & SB+ & SB \\ \rm RGJ123645... & 1.434 & 3.8$^{+1.4}_{-1.0}$ & 660$^{+250}_{-180}$ & 30$^{+8}_{-6}$ & 1.9$\times$10$^{10}$ & 1.1$\times$10$^{11}$ & 5.2$\times$10$^{10}$ & 200 & 3.0 & 23 & 4.9$\times$10$^{10}$ & SB & SB & SB \\ \rm RGJ123653... & 1.275 & 2.9$^{+0.9}_{-0.7}$ & 500$^{+160}_{-120}$ & 40$^{+10}_{-10}$ & $<$5.0$\times$10$^{9}$ & $<$2.8$\times$10$^{10}$ & ... & $>$580 & 2.8 & 20 & 3.6$\times$10$^{10}$ & SB & AGN & SB+ \\ \rm RGJ123707... & 1.489 & 1.2$^{+1.6}_{-0.7}$ & 210$^{+280}_{-120}$ & 50$^{+20}_{-10}$ & $<$7.1$\times$10$^{9}$ & $<$4.0$\times$10$^{10}$ & ... & $>$170 & 1.3 & 39 & 2.3$\times$10$^{10}$ & SB+ & SB & SB \\ \dadb... & 1.522 & 2.1$^{+1.8}_{-1.0}$ & 350$^{+310}_{-170}$ & 10$^{+10}_{-5}$ & 2.7$\times$10$^{10}$ & 1.5$\times$10$^{11}$ & 2.0$\times$10$^{10}$ & 80 & 1.9 & 30 & 2.4$\times$10$^{10}$ & SB & SB & SB \\ \rm RGJ123711... 
& 1.996 & 8.8$^{+6.1}_{-3.6}$ & 1500$^{+1000}_{-600}$ & 30$^{+10}_{-9}$ & 3.0$\times$10$^{10}$ & 1.7$\times$10$^{11}$ & 1.1$\times$10$^{11}$ & 290 & 3.7 & 34 & 1.2$\times$10$^{11}$ & SB+ & AGN & SB \\ \rm RGJ123718... & 1.512 & 0.8$^{+1.5}_{-0.5}$ & 140$^{+250}_{-90}$ & 40$^{+10}_{-10}$ & $<$5.0$\times$10$^{9}$ & $<$2.8$\times$10$^{10}$ & ... & $>$160 & 1.1 & 36 & 1.6$\times$10$^{11}$ & SB & AGN & SB \\ \tott... & 1.532 & 2.5$^{+0.6}_{-0.5}$ & 420$^{+100}_{-80}$ & 9$^{+7}_{-4}$ & $<$7.8$\times$10$^{9}$ & $<$4.4$\times$10$^{10}$ & ... & $>$320 & ... & ... & 6.0$\times$10$^{9}$ & SB & SB & SB \\ \rm RGJ131208... & 2.237 & 5.6$^{+1.9}_{-1.4}$ & 960$^{+330}_{-250}$ & 20$^{+9}_{-6}$ & 2.6$\times$10$^{10}$ & 1.5$\times$10$^{11}$ & 7.5$\times$10$^{10}$ & 220 & ... & ... & 1.8$\times$10$^{10}$ & SB & AGN & SB \\ \chan... & 2.224 & 6.6$^{+3.3}_{-2.2}$ & 1100$^{+600}_{-400}$ & 40$^{+20}_{-20}$ & $<$5.0$\times$10$^{9}$ & $<$2.8$\times$10$^{10}$ & ... & $>$1300 & ... & ... & 2.4$\times$10$^{10}$ & AGN & SB & AGN \\ \chap... & 2.187 & 6.8$^{+2.1}_{-1.6}$ & 1200$^{+400}_{-300}$ & 70$^{+40}_{-30}$ & 8.0$\times$10$^{9}$ & 4.5$\times$10$^{10}$ & 2.5$\times$10$^{10}$ & 850 & ... & ... & 4.1$\times$10$^{10}$ & SB & SB+ & AGN \\ \hline\hline \end{tabular} \end{center} {\small {\bf Table Notes.} Derived gas properties of SFRGs. L$_{\rm FIR}$ is derived from radio luminosity, and the associated SFR is thus labelled $SFR_{\rm radio}$.
Conversion to gas mass assumes the typical ULIRG conversion factor $X=$\,0.8\,{\rm\,M$_\odot$}\,(K\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\,pc$^2$)$^{-1}$ \citep{downes98a} for $M_{\ifmmode{{\rm H}_2}\else{H$_2$}\fi}$ [U], the default assumption of gas mass for this sample, although we note that higher conversion factors ($\sim$4.5\,{\rm\,M$_\odot$}\,(K\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\,pc$^2$)$^{-1}$) might be appropriate for a subset of these galaxies which exhibit normal spiral galaxy characteristics \citep[see][]{daddi10a}; these gas masses are given in $M_{\ifmmode{{\rm H}_2}\else{H$_2$}\fi}$ [Sp]. The total dynamical mass is calculated by assuming a size of CO emission, which we do not directly measure, but assume is equal to the effective radius of radio emission, given as $R_{\rm eff}$; when a radio size is not available we assume a radius of $\sim$2.3\,kpc, the mean MERLIN+VLA radius, consistent with SMG sizes. The star formation efficiency (SFE) is L$_{\rm FIR}$ divided by the ULIRG molecular gas mass, M$_{\rm \ifmmode{{\rm H}_2}\else{H$_2$}\fi}$. SFR$_{UV}$ has not been corrected for extinction. The star formation rate density, $\Sigma_{\rm SFR}$, has units of {\rm\,M$_\odot$}\,yr$^{-1}$\,kpc$^{-2}$ ($\dagger$). Since these galaxies were selected in the radio, we provide a measure of their starburst and AGN content by classifying them into starburst (SB), AGN, and starburst/AGN mix (SB+) under five separate AGN selection criteria. Class$^{\rm X}$ marks any galaxy with X-ray luminosity $>$5$\times$10$^{43}$\,erg\,s$^{-1}$ as ``AGN'', $<$1$\times$10$^{43}$\,erg\,s$^{-1}$ as ``SB'' and intermediate-luminosity galaxies as ``SB+''. X-ray data are taken from \citet{alexander03a}, \citet{brunner08a}, \citet{ueda08a}, and \citet{mushotzky00a}.
Class$^{\rm IR}$ marks galaxies with significant 8$\mu$m\ flux excess (relative to the stellar population fits) as ``AGN'', galaxies with marginally significant $<$2$\sigma$\ 8$\mu$m\ excess as ``SB+'' and galaxies with no 8$\mu$m\ excess as ``SB'' (see Figure~\ref{fig:midir}). Class$^{\rm CO}$ marks galaxies with unusually high FIR-to-CO ratios as ``AGN'' ($\log(L_{FIR}/L_{CO}^\prime)\,>\,$2.7), intermediate ratios as ``SB+'' (2.5$\,<\,\log(L_{FIR}/L_{CO}^\prime)\,<\,$2.7), and low FIR-to-CO ratios as ``SB'' ($\log(L_{FIR}/L_{CO}^\prime)\,<\,$2.5). A fourth class may be defined as a measure of the star formation rate density, $\Sigma_{\rm SFR}$; all of the galaxies for which we have $\Sigma_{\rm SFR}$ measurements satisfy the ``SB'' criterion, however, with star formation rate densities $<$200\,{\rm\,M$_\odot$}\,yr$^{-1}$\,kpc$^{-2}$. A fifth class is defined by their rest-UV/optical spectroscopic properties, which, by construction of the sample selection, are uniformly ``SB.'' } \end{table*} \subsection{Star Formation Rates, Densities and Efficiencies}\label{ss:sfrsfe} We estimate SFRs from VLA radio luminosities, using the radio/FIR correlation for star-forming galaxies \citep[e.g.][]{helou85a,condon92a,sanders96a}: \begin{equation} L_{\rm FIR}\,=\,(3.583\times10^{-48})\,D_{L}^{2}\,S_{1.4}\,(1\,+\,z)^{(1\,-\,\alpha)} \end{equation} where $D_L$ is the luminosity distance in cm, $L_{\rm FIR}$ is evaluated from 8-1000$\mu$m\ and is in {\rm\,L$_\odot$}, and $S_{1.4}$ is the radio flux density at 1.4\,GHz in units of \uJy. A Salpeter initial mass function is assumed. The factor of (1\,+\,z)$^{(1 - \alpha)}$ accounts for bandwidth compression and the radio K-correction to the rest-frame luminosity; $\alpha$ is the synchrotron slope index, here taken to be 0.75 \citep{yun01a}. Since this is an empirical relation defined by the mean ratio $q$ between FIR and radio luminosities, there are a few different versions in use in the literature.
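A hedged numerical sketch of the relation above follows; the flat-$\Lambda$CDM parameters used for the luminosity distance are illustrative assumptions and are not quoted in the text.

```python
import numpy as np

C_KM_S = 2.998e5       # speed of light, km/s
CM_PER_MPC = 3.086e24  # centimetres per Mpc

def lum_dist_cm(z, h0=71.0, om=0.27, ol=0.73):
    """Luminosity distance (cm) for an assumed flat LCDM cosmology."""
    zs = np.linspace(0.0, z, 10000)
    inv_e = 1.0 / np.sqrt(om * (1.0 + zs)**3 + ol)
    # trapezoidal integral for the comoving distance, in Mpc
    dc = (C_KM_S / h0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))
    return dc * (1.0 + z) * CM_PER_MPC

def l_fir_from_radio(s14_uJy, z, alpha=0.75):
    """L_FIR (L_sun) from a 1.4 GHz flux density (uJy), per the equation above."""
    return 3.583e-48 * lum_dist_cm(z)**2 * s14_uJy * (1.0 + z)**(1.0 - alpha)
```

For a 100\,$\mu$Jy source at $z\approx2$ this yields an $L_{\rm FIR}$ of order 10$^{12}$\,L$_\odot$.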
This relation uses the bolometric ratio between FIR flux and radio flux of $q_{\rm IR}\,=\,$2.46 \citep[e.g.][]{ivison10a,ivison10b,casey10a}. The relation is traditionally defined by $q$\,=\,2.35 \citep{sanders96a} and was used by \citet{chapman05a} to calculate FIR luminosities for the redshift-identified SMGs (the derived FIR luminosities differ by only $\sim$20\%\ between the different assumed $q$ ratios). In most cases, the FIR luminosities derived from the radio are consistent with the submm detection limits and any flux density measurements at shorter wavelengths in the FIR (e.g. at 70$\mu$m\ or at 350$\mu$m). The FIR luminosity is then converted into a star formation rate (SFR) using the following relation: \begin{equation} {\rm SFR}\,(M_{\odot}\,{\rm yr}^{-1})\,=\,1.7\times10^{-10}\,L_{\rm FIR}\,(L_{\odot}) \end{equation} from \citet{kennicutt98a}. The interquartile range (25th-75th percentile) of star formation rates is 400-1600\Mpy\ with median 700\Mpy. Both derived quantities, L$_{\rm FIR}$ and SFR, are given in Table~\ref{tab:derived}. We also derive UV-inferred SFRs using $v$-band and $i$-band optical photometry. Since all SFRGs are quite optically faint ($i>22$) and FIR luminous, they are potentially subject to significant (yet uncertain) extinction factors. While not taken into account for the UV-inferred SFRs themselves, the large extinction factors may be deduced from the large disparities between SFR$_{\rm radio}$ and SFR$_{\rm UV}$ given in Table~\ref{tab:derived}, with a typical ratio of SFR$_{\rm radio}$/SFR$_{\rm UV}$\,=\,30 (based on the previous section, we do not attribute this ratio to AGN contamination). Additional evidence that extinction caused by dust is significant comes from the comparison between the rest-UV morphologies and the MERLIN+VLA radio morphologies seen in Fig.~\ref{fig:merlin}.
The brightest and/or bluest components of the rest-UV and radio emission often do not coincide, indicating that the dustiest, most FIR-luminous regions of each galaxy could be highly obscured. Total star formation rate densities, $\Sigma_{\rm SFR}$, are estimated by dividing these SFRs (from VLA fluxes) by the surface areas of the MERLIN+VLA radio emission regions. We measure an effective star formation surface area by isolating the regions where MERLIN+VLA radio emission is significant at $>$3$\sigma$ (the values for the effective radius of radio emission and the SFR density, $\Sigma_{\rm SFR}$, are given in Table~\ref{tab:derived}). The median total SFR density is 30$^{+40}_{-20}$\,\Mpy\,kpc$^{-2}$. Note that this is potentially an overestimate of the SFR density, since the radio flux contained within the $>$3$\sigma$ MERLIN+VLA region constitutes only a fraction of the total VLA radio flux (this fraction is estimated to be 40-90\%). For this reason we also compute alternate star formation rate densities using the total integrated MERLIN+VLA flux within the $>$3$\sigma$ emission area. We convert the flux to a radio luminosity, then a FIR luminosity, then an SFR, and divide it by the area. The median value for this alternate SFR density is 10$^{+20}_{-10}$\,\Mpy\,kpc$^{-2}$, which is, on average, three times lower than the total $\Sigma_{\rm SFR}$. Comparing either SFR density measurement to its theoretical maximum, the maximum gas density divided by the local dynamical time \citep[see equation 5 of][ we use t$_{dyn}$=4$\times$10$^{7}$yr]{elmegreen99a}, we can determine whether the implied SFR density exceeds the theoretical prediction. While local ULIRGs with $\Sigma_{\rm SFR}\approx$200\,\Mpy\,kpc$^{-2}$ are forming stars at their theoretical maximum \citep[e.g.][]{tacconi06a}, only one of the SFRGs exceeds this limit, which is a dust opacity Eddington limit for SFR density \citep[see][]{thompson05a}.
This source is \rm RGJ123642, the highest-redshift source in our sample; its low S/N in the radio means that the measured effective radius is unusually small. Even assuming unresolved radio profiles, only \rm RGJ123642\ exceeds the maximal starburst density, highlighting that even unresolved radio emission can be dominated by a starburst. In contrast, it would be unlikely for radio emission this faint and extended at these redshifts to be driven by AGN. The star formation efficiency (SFE) can be calculated by dividing the FIR luminosity by the \ifmmode{{\rm H}_2}\else{H$_2$}\fi\ gas mass. This calculation is contingent on the gas reservoir being in the same region as the starburst, an assumption that needs to be investigated in more detail through future high-resolution multiple-$J$ CO maps and resolved FIR emission maps. The mean SFE for SFRGs is 280$\pm$260\,{\rm\,L$_\odot$}/{\rm\,M$_\odot$}. Put another way, the median depletion timescale (defined as M$_{\rm \ifmmode{{\rm H}_2}\else{H$_2$}\fi}$/SFR, assuming 100\%\ efficiency) for SFRGs is $\sim$34\,Myr with interquartile range 20-55\,Myr. It should be noted here that a different gas conversion factor (e.g. 4.5\,{\rm\,M$_\odot$}\,(K\,km\,s$^{-1}$\,pc$^2$)$^{-1}$) would increase the depletion timescale and decrease the star formation efficiency by a factor of 5.6. \subsection{Dust Temperature and Dust Mass}\label{ss:dustmass} We fit FIR spectral energy distributions (SEDs) to the FIR flux density limits at 70$\mu$m, 350$\mu$m, 850$\mu$m, and 1200$\mu$m\ (see Table~\ref{tab:observations} for details). Our fitting method follows the methodology of \citet{chapman04a}, \citet{chapman05a}, and \citet{casey09b}: a single-temperature modified blackbody fit. We fix the dust emissivity $\beta$ to 1.5 and use the FIR/radio correlation to infer a FIR luminosity from the radio flux.
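The single-temperature modified blackbody used in such fits can be sketched as below; the normalisation is arbitrary and the wavelength grid is illustrative.

```python
import numpy as np

H_PLANCK = 6.626e-27  # erg s
K_BOLTZ = 1.381e-16   # erg / K
C_LIGHT = 2.998e10    # cm / s

def modified_blackbody(wave_um, t_dust, beta=1.5):
    """Greybody S_nu ~ nu^beta * B_nu(T_dust), in arbitrary units."""
    nu = C_LIGHT / (wave_um * 1e-4)                     # microns -> Hz
    b_nu = (2 * H_PLANCK * nu**3 / C_LIGHT**2) / (
        np.exp(H_PLANCK * nu / (K_BOLTZ * t_dust)) - 1.0)
    return nu**beta * b_nu

# peak wavelength shifts blueward for warmer dust (beta fixed to 1.5)
waves = np.linspace(20.0, 1000.0, 5000)                 # rest-frame microns
peak_warm = waves[np.argmax(modified_blackbody(waves, 65.0))]
peak_cold = waves[np.argmax(modified_blackbody(waves, 41.0))]
```

In the actual fits the curve is anchored to the radio-inferred $L_{\rm FIR}$ and compared with the (mostly limiting) FIR flux densities.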
Since we are limited by a lack of data in the FIR for most of our sample (many sources only have limits), our derived dust temperatures are not well constrained. However, if the uncertainties in our assumed radio/FIR correlation and dust emissivity are taken as given, then we deduce temperature uncertainties of order 20-30\,K. This is calculated by assuming $L_{\rm FIR}$ scales with $L_{\rm radio}$; a higher $L_{\rm radio}$ implies a higher $L_{\rm FIR}$, and a FIR SED with a high $L_{\rm FIR}$ can then be constrained by the FIR flux density measurements, even in the case of upper limits. Despite the large uncertainties, our measurements (whose statistical mean is $\sim$65\,K) are consistent with a selection of warmer dust galaxies, even though there may be outliers. We also estimate dust masses for the SFRG sample by using the following relation \begin{equation} M_{\rm dust} (M_\odot) \simeq \frac{S_{\rm obs}D^2_L}{\kappa_{\nu} B(\nu_{\rm obs},T_d)} \end{equation} where $\kappa_{\nu}$ is the dust mass absorption coefficient as a function of rest wavelength: $\kappa_{850}$\,=\,0.15\,m$^2$\,kg$^{-1}$, $\kappa_{250}$\,=\,0.29$\pm$0.03\,m$^2$\,kg$^{-1}$ \citep{wiebe09a} and $\kappa_{70}$\,=\,1.2\,m$^2$\,kg$^{-1}$ \citep{weingartner01a}. Again, the lack of FIR data means that our dust mass calculations are not well constrained; our SFRGs have a mean dust mass upper limit (at 2$\sigma$) of $\langle M_{\rm dust} \rangle<$2$\times$10$^{9}$\,{\rm\,M$_\odot$}. A quick comparison to the mean gas mass (2$\times$10$^{10}$\,{\rm\,M$_\odot$}) implies mean dust-to-gas ratios of $<$1/10 (well within expectation). \subsection{Stellar Mass}\label{ss:stellarpops} \begin{figure*} \centering \includegraphics[width=1.99\columnwidth]{allcomidir.pdf} \caption{ Stellar population models are fit to SFRG optical and near-IR photometric points.
We estimate stellar mass using the rest-frame H-band magnitudes which are inferred from the fits; the method is described in section~\ref{ss:stellarpops}. The SFRGs which are detected in CO have their names enclosed in boxes.} \label{fig:midir} \end{figure*} To estimate the galaxies' stellar masses, we combine the photometric points in the optical ($HST$ ACS $B$, $V$, $i$, and $z$ for GOODS-N, $HST$ ACS $i$ for the Lockman Hole, and Subaru $i$ for SSA13) and the mid-IR ($Spitzer$-IRAC 3.6$\mu$m, 4.5$\mu$m, 5.8$\mu$m\ and 8.0$\mu$m). For $z=$1-3, these photometric points cover rest-frame 1.6$\mu$m\ where stellar emission peaks. We use the {\sc hyperz} photometric redshift code \citep{bolzonella00a} to fit this photometry to several stellar population SEDs \citep{bruzual03a}. Using the best-fitting stellar population SED, we estimate the rest-frame H-band magnitude. While rest-frame K-band is often used to measure stellar mass for high-redshift galaxies \citep[using the method outlined by ][]{borys05a}, recent work has shown that stellar masses derived from K-band are overestimated due to an increased contribution from the AGN power law and a decreased contribution from starlight at longer wavelengths \citep[see][]{hainline09a}. The mean light-to-mass ratio, $L_{H}$/M, can range from 5-10\,{\rm\,L$_\odot$}/{\rm\,M$_\odot$}, and using the \citet{bruzual03a} models, is set at 5.6\,{\rm\,L$_\odot$}/{\rm\,M$_\odot$} for SMGs as in Hainline {\rm et\ts al.}. The mean rest-frame H-band absolute magnitude is -25.4$\pm$0.8 with an inferred average stellar mass of 7$\times$10$^{10}$\,{\rm\,M$_\odot$}. These masses are consistent with the H-band-derived SMG stellar masses (H-band absolute magnitudes average -25.8$\pm$0.2 and masses average 2$\times$10$^{11}$\,{\rm\,M$_\odot$}) found by \citet{hainline09a} and a factor of $\sim$2 lower than the K-band SMG masses derived by \citet{borys05a}. Figure~\ref{fig:midir} illustrates the best stellar population models with respect to the galaxies' photometric data.
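The H-band light-to-mass conversion can be sketched as follows; the solar H-band absolute magnitude adopted here is an assumption not quoted in the text, so the result only roughly reproduces the quoted mean mass.

```python
def stellar_mass(abs_h_mag, light_to_mass=5.6, h_mag_sun=3.32):
    """Stellar mass (M_sun) from a rest-frame H-band absolute magnitude.

    light_to_mass: assumed L_H/M ratio of 5.6 L_sun/M_sun (as for SMGs);
    h_mag_sun: assumed solar H-band absolute (Vega) magnitude.
    """
    l_h = 10.0 ** (-0.4 * (abs_h_mag - h_mag_sun))  # luminosity in L_sun
    return l_h / light_to_mass

# mean rest-frame H-band absolute magnitude of the sample
m_star = stellar_mass(-25.4)
```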
If an excess flux density is detected above the stellar model SED at observed 8$\mu$m\ then an AGN might be contributing significantly to the near-infrared luminosities (see section \ref{ss:agn} for an analysis of AGN contamination). While there are potential flux excesses in seven of the 15 sources illustrated, the excess is only significant ($>$2$\sigma$) in \rm RGJ123653\ and \rm RGJ123718. The former is discussed in detail in \citet{casey09b}, who conclude that the AGN contribution is insignificant based on combined evidence from the X-ray (very low flux, consistent with a starburst), extended radio emission (inconsistent with AGN radio emission), and the absence of a clear power law dominating the near- to mid-IR data. The latter has extended radio emission, no X-ray detection, and is the faintest radio galaxy of our sample ($\sim$15\uJy); it is therefore unlikely to be dominated by a very powerful AGN. There is no clear relation between a system's stellar mass and its resulting detection in CO. The galaxies' formation timescales are given by $\tau_{\rm form} \propto M_\star/{\rm SFR}$; the median $\tau_{\rm form}$ of the sample is 10\,Myr. While this quantity could be overestimated if M$_\star$ is overestimated, the formation timescales likely represent a lower limit on the time it took to build up the stellar population, since the star formation rates are hypothetically near their peak during the ULIRG phase ($\lower.5ex\hbox{\gtsima}$200\,\Mpy) in comparison to most galaxies at the same epoch (1-10\,\Mpy). The hypothesis that SFRGs are near their peak is based on the fact that their SFRs are not sustainable beyond $\sim$100\,Myr, but it is unlikely that these systems are that young; it is more probable that SFRGs evolved more slowly, building up stellar mass gradually until a trigger led to an extreme starburst phase.
\section{Discussion}\label{s:discussion} Here we explore the relationship between the CO observations of SFRGs and their other multi-wavelength properties, drawing on comparisons with other galaxy populations. CO observations provide a unique and independent probe of the galaxies' star formation properties by constraining the molecular gas reservoirs which fuel the star formation. In Table~\ref{tab:summary} we have summarised many of the measured physical properties of SFRGs relative to SMGs for quick reference. Note, however, that the SMGs of Table~\ref{tab:summary} are those with CO observations, largely limited to the bright SMG subsample. After comparing with other populations, we discuss the implications that these observations have for the study of all high-$z$ ULIRGs and how improved targeted observations, in both molecular interstellar medium lines and FIR continuum, will enable thorough, unhindered analysis of ULIRG evolution and extreme star formation at z$\sim$2. The original motivation for segregating submm-faint and submm-bright populations for CO observations is the premise that they exhibit similar extreme starburst qualities, and thus similar molecular gas properties, but differ in dust distribution, whereby slightly warmer dust systems are undetectable at 850$\mu$m\ (the usual SMG-selection band). Surveying SFRGs in CO provides confirmation, through the detection of vast gas reservoirs, that SMGs are not the only significant population of ULIRGs at high-$z$. The basic physical premise of the comparison is that the dust distribution in SFRGs would need to be clumpier or more compact to be heated to slightly higher dust temperatures. Since SFRG radio morphologies on average seem to be similarly extended and irregular, SFRGs are suggested to be less homogeneously diffuse than SMGs \citep{menendez-delmestre09a,hainline09a}, with more concentrated clumps, yet spread over the same large area.
\begin{table} \begin{center} \caption{SFRG Properties Summarised Relative to SMGs} \label{tab:summary} \begin{tabular}{l@{ }cc} \hline\hline Property & SFRGs & SMGs$^a$ \\ \hline $\langle z\rangle$ & 1.8$\pm$0.7 & 2.4$\pm$0.6 \\ L$_{\rm FIR}$ ({\rm\,L$_\odot$}) & (4$\pm$3)$\times$10$^{12}$ & (8$\pm$4)$\times$10$^{12}$ \\ SFR (\Mpy) & 700$\pm$500 & 1400$\pm$500 \\ $\langle$ R$_{\rm eff} \rangle$ (kpc) $^b$ & 2.3$\pm$0.8 & 2.7$\pm$0.4 \\ $\Sigma_{\rm SFR}$ (\Mpy\,kpc$^{-2}$) $^b$ & 30$^{+40}_{-20}$ & 60$^{+140}_{-40}$ \\ T$_{\rm dust}$ (K) $^c$ & 66$\pm$15 & 41$\pm$5 \\ M$_{\rm dust}$ ({\rm\,M$_\odot$}) $^c$ & $<$2$\times$10$^9$ & 9$\times$10$^8$ \\ H-band Mag & -25.4$\pm$0.8 & -25.8$\pm$0.2 \\ M$_{\rm \star}$ ({\rm\,M$_\odot$}) & (7$\pm$6)$\times$10$^{10}$ & (2$\pm$1)$\times$10$^{11}$ \\ $\langle \tau_{form} \rangle$ (Myr) & 10$^{+15}_{-6}$ & 14$^{+8}_{-4}$ \\ M$_{\rm \ifmmode{{\rm H}_2}\else{H$_2$}\fi}$ ({\rm\,M$_\odot$}) & (2.1$\pm$0.7)$\times$10$^{10}$ & (5.1$\pm$2.8)$\times$10$^{10}$ \\ $\Delta V_{\rm \co}$ (\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi) & 320$\pm$80 & 530$\pm$150 \\ SFE ({\rm\,L$_\odot$}/{\rm\,M$_\odot$}) & 280$\pm$260 & 450$\pm$170 \\ M$_{\rm dyn}\,sin^{2}\,i$ ({\rm\,M$_\odot$}) $^d$ & (7.2$^{+6.8}_{-3.4}$)$\times$10$^{10}$ & (1.5$^{+1.4}_{-0.4}$)$\times$10$^{11}$ \\ $\langle \tau_{\rm depl} \rangle$ (Myr) & 34$\pm$24 & 40$^{+50}_{-30}$ \\ f$_{gas}=\langle M_{gas}/M_{dyn} \rangle$ $^e$ & 0.07$^{+0.11}_{-0.02}$ & 0.09$^{+0.09}_{-0.07}$ \\ f$_{stars}=\langle M_{\star}/M_{dyn} \rangle$ $^e$ & $\sim$0.2 & $\sim$0.3 \\ $S_{\rm CO(4-3)}/S_{\rm CO(3-2)}$ $^f$ & 2.0$\pm$0.8 & 1.4$\pm$0.2 \\ $f_{\rm AGN}$ $^g$ & 0.3$\pm$0.1 & 0.4$\pm$0.2 \\ \hline\hline \end{tabular} \end{center} {\small $^a$ All aggregate properties of SMGs are measured with respect to the CO-observed subset from \citet{neri03a}, \citet{greve05a}, and \citet{tacconi06a}. $^b$ Effective radius and SFR density are measured for MERLIN+VLA imaged SFRGs only. 
The effective size of SMGs is from \citet{biggs08a} and the SFR density is the SMG SFR divided by the area corresponding to the mean R$_{\rm eff}$. $^c$ T$_{\rm dust}$ and M$_{\rm dust}$ fits for SFRGs are described in section \ref{ss:dustmass}; while both measurements are highly uncertain due to poor FIR flux density constraints, their calculation is useful for comparison with SMGs from \citet{chapman05a} and \citet{kovacs06a}. $^d$ Dynamical mass assumes an effective radius of 2\,kpc for SFRGs and SMGs without radio size measurements; for SMGs, this size is supported by measurements from \citet{tacconi08a}, and for SFRGs the CO size has been measured for only one source \citep{bothwell10a}, where it is similar to the MERLIN+VLA radio sizes (R$_{\rm eff}$). $^e$ The gas and stellar fractions represent the fraction of each galaxy's total mass in gas (or in stars). Note: the gas fraction does not include the 40\%\ correction for helium. $^f$ The ratio of CO line fluxes represents the source excitation. The value for SFRGs is based on the single measurement of \rm RGJ123711\ described in section \ref{ss:254excitation}, and the SMG measurement is taken from the three SMGs measured by \citet{weiss07a}. $^g$ The AGN fraction (within a population) is estimated for the SFRG sample as described in section~\ref{ss:sfes}, while the AGN fraction of CO-observed SMGs is taken from their rest-UV spectral classification (note that none of the SFRGs have AGN spectral signatures). The entire SMG population is estimated to have an AGN fraction of $\sim$0.25 \citep[see][]{alexander05a}.
``Class$^{\rm X}$'' classifies SFRGs based on X-ray luminosity, ``Class$^{\rm IR}$'' classifies according to 8$\mu$m\ flux excess in the near-IR, and ``Class$^{\rm CO}$'' classifies according to FIR-to-CO luminosity ratio (where an unusually high ratio can be accounted for by an AGN contributing significantly to the radio luminosity, and thus an overestimated FIR luminosity). A fourth and a fifth class could also be defined, from rest-UV/optical spectroscopic features and from the star formation rate density traced by the MERLIN+VLA morphologies; however, as noted above, all of the sample's morphologies are consistent with starbursts under both criteria. While we leave the reader the flexibility to interpret the AGN content of the SFRG sample as they see fit, we infer a total SFRG AGN fraction of 0.3$\pm$0.1 to first order (included in Table~\ref{tab:summary}), the mean AGN-dominated fraction from the three Table~\ref{tab:derived} classifications. This agrees with earlier measurements and estimates for SFRGs given in \citet{casey09a} and \citet{casey09b}. Also, the possibility exists that weak beaming of low-luminosity radio jets could inflate the radio luminosity by factors of a few for some SFRGs \citep[see][for a detailed discussion of radio beaming in more compact SFRGs]{casey09a}. Again, we caution that AGN contamination has the potential to strongly bias the CO observations and their interpretation for individual objects. Figure~\ref{fig:merlin} shows that SFRGs have faint optical luminosities, suggestive of heavy reddening or obscuration by dust. This idea is supported further by the large discrepancy between star formation rates derived from radio luminosity versus rest-UV flux densities.
Even after correction for dust extinction using the UV slope and the dust extinction models of \citet{calzetti94a}, the UV-derived SFR can be a factor of 10-100 lower than the radio-inferred SFR, demonstrating that SFRGs are far dustier than normal dust extinction laws predict (which also holds for SMGs). Their radio morphologies are primarily extended and irregular, thus attributable to star formation and not compact AGN, as is also the case for SMGs \citep{biggs08a,chapman04b}. We also note a difference in the effective MERLIN+VLA radii between SFRGs that have been detected in CO ($R_{\rm eff}\,=\,$2.7$\pm$0.6\,kpc) and those that have not ($R_{\rm eff}\,=\,$1.7$\pm$0.8\,kpc). While the difference is not statistically significant, the sources with smaller effective radii also have lower S/N in the radio, implying lower FIR luminosities. If we assume the inferred radio/FIR and the ULIRG $L^\prime_{\rm CO}$/$L_{\rm FIR}$ relations, we predict integrated CO fluxes of 0.1-0.3\,Jy\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ for the CO-undetected sources based on their radio flux densities (requiring particularly sensitive observations with our setup). Figure~\ref{fig:midir} shows that most SFRGs have only a minor AGN contribution in the near-IR; most emission is dominated by stars. The stellar masses of SFRGs average to 7$\times$10$^{10}$\,{\rm\,M$_\odot$}, consistent with the SMG masses derived by \citet{hainline09a} using rest-frame H-band luminosity but a factor of $\sim$2$\times$ lower than the SMG masses of \citet{borys05a} from rest-frame K-band luminosity.
The first SMGs to be observed in CO \citep{neri03a,greve05a,tacconi06a} were among the brightest SMGs with spectroscopic redshifts ($L_{\rm FIR}\lower.5ex\hbox{\gtsima}$10$^{12.5}$\,{\rm\,L$_\odot$}) since brighter $L_{\rm FIR}$ systems had a higher likelihood of being CO detected. We note that the luminosity distribution of SMGs plotted in Fig.~\ref{fig:radiodist} includes many lower-luminosity SMGs which have recently been observed in CO, in parallel to our SFRG observing programs, but have not yet been analyzed or published (Bothwell {\rm et\ts al.}, in preparation). Most of the ``bright SMGs'' (those with published CO spectra) have AGN signatures in their rest-UV spectra, a property which would exclude them from SFRG selection if they were submm-faint. This means that similarly bright radio sources which are submm-faint, comparable to the ``bright SMGs,'' were excluded from our sample due to AGN contamination. Besides the luminosity bias introduced by weeding out AGN as revealed by rest-UV/optical spectroscopy, a further bias exists due to spectroscopic incompleteness of the SFRG population. Since their radio emission was more likely thought to be dominated by AGN, SFRGs were not followed up in rest-UV/rest-optical spectroscopy nearly as thoroughly or completely as SMGs. This likely means that the absolute brightest SMGs were CO observed while a large sample of bright SFRGs could have been excluded from CO observations due to a lack of reliable redshift information or potentially strong AGN contamination. Furthermore, the original SFRG selection of \citet{chapman04a} had the added criterion of a faint optical magnitude, $i>$\,23, which made redshift measurement from rest-UV spectra more difficult. There has been some anecdotal indication, however, that warmer dust systems might not exist at the highest luminosities with the same frequency as colder dust systems, as revealed by 250$\mu$m-selected HyLIRG populations \citep{casey10a}.
We note that the mis-identification of optical counterparts to SMGs has potentially led to a lower CO-detection rate for that population than other CO-observed galaxy populations; some of the radio galaxy SMG-counterparts might not be starbursts and might have intrinsically low FIR luminosities, thus CO luminosities. We note that SFRGs (although not faced with the issue of matching FIR positions to a correct radio counterpart) could also suffer from the selection of non-starburst radio galaxies. In the sections below, we frequently discuss how SFRGs relate to SMGs: both the CO-observed 'bright' subsample of SMGs and extrapolations based on preliminary analysis on the fainter, more numerous sample of SMGs (Bothwell, private communication). The overall luminosity bias existing in these distinct samples must be kept in mind when population comparisons are made. \subsection{Star Formation Efficiencies}\label{ss:sfes} \begin{figure*} \centering \includegraphics[width=0.99\columnwidth]{bw_lfir_lco.pdf} \includegraphics[width=0.99\columnwidth]{bw_lfirlco_z.pdf} \caption{{\it Left:} FIR Luminosity against CO Luminosity, L$^\prime_{CO[1-0]}$. SFRGs ({\it black circles}) lie in the same luminosity space as local ULIRGs \citep[{\it gray triangles};][]{solomon97a}, while SMGs \citep[$crosses$;][]{greve05a} have higher luminosities. SFRGs detected in CO are solid while SFRGs without CO detection are open circles. The tentative identifications or ``offset'' CO sources are circles with gray centres. We also overplot data of spiral galaxies for perspective \citep[{\it gray diamonds};][]{solomon88a,gao04a}, and include the best-fit observed relations between L$_{\rm FIR}$ and L$^\prime_{\rm CO}$ for local spirals (dotted line) and local ULIRGs/SMGs (dashed line). {\it Right:} the star formation efficiency, given by L$_{\rm FIR}$/L$_{\rm CO}$, is plotted with redshift. SFRGs appear to share the same range of SFEs as SMGs despite their fainter luminosities. 
The mean SFE and 1-$\sigma$ bounds of local ULIRGs are illustrated by the horizontal solid and dashed lines. } \label{fig:lfirlco} \end{figure*} The relationship between star formation rate and molecular gas mass is paramount to a galaxy's evolutionary interpretation. This is measured by comparing the FIR luminosity with CO line luminosity, as we show in Figure~\ref{fig:lfirlco}. In this context, SFRGs appear to have similarly high CO luminosities as SMGs, and most SFRGs lie slightly above the `ULIRG' star formation efficiency powerlaw relation, $L^\prime_{\rm CO}\,\propto\,L_{\rm FIR}$$^{0.61}$, which is followed by both local ULIRGs and SMGs. However, SFRGs are inconsistent with the star formation efficiency relation which describes local spiral galaxies \citep[$L^\prime_{\rm CO}\,\propto\,L_{\rm FIR}$$^{0.93}$;][]{solomon88a,gao04a}. The SMGs shown on Fig.~\ref{fig:lfirlco} are those of \citet{neri03a}, \citet{greve05a} and \citet{tacconi06a}. As mentioned in section~\ref{ss:brightSMGs}, SFRGs are a factor of $\sim$2 less luminous in radio (thus L$_{\rm FIR}$) than CO-observed SMGs. Due to their relatively high luminosities with respect to local ULIRGs, the SMGs have been described as the {\it scaled-up} high-redshift analogues to local ULIRGs \citep{tacconi06a}. In contrast, SFRGs probe a luminosity regime \simlt10$^{12.5}$\,{\rm\,L$_\odot$}\ closer to the locus of local ULIRGs at $\sim$10$^{12}$\,{\rm\,L$_\odot$}. While SFRGs might seem to be better analogues of local ULIRGs than SMGs in luminosity space, we note that the stellar and gas properties of the two populations are quite distinct: local ULIRGs are more compact, with lower stellar masses than SFRGs and SMGs \citep{dasyra06a}, and have lower CO luminosities by a factor of $\sim$2-3. The star formation efficiencies (SFEs) of SMGs and SFRGs span the range 70-1000\,{\rm\,L$_\odot$}/{\rm\,M$_\odot$}.
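As a concrete illustration of the quantities compared in Figure~\ref{fig:lfirlco}, the star formation efficiency can be computed directly from the two luminosities. The short sketch below is illustrative only: the input luminosities are hypothetical (not measurements of individual sources in our sample), and it adopts the ULIRG-like CO-to-H$_2$ conversion factor of 0.8\,$M_\odot$ per K\,km\,s$^{-1}$\,pc$^2$ used later in the text.

```python
# Illustrative sketch (not results from this paper): star formation
# efficiency, SFE = L_FIR / M_gas, with M_gas = alpha_CO * L'_CO.
# alpha_CO = 0.8 Msun per (K km/s pc^2) is the ULIRG-like conversion
# factor adopted in the text; the input luminosities are hypothetical.

ALPHA_CO_ULIRG = 0.8  # Msun / (K km/s pc^2)

def sfe(L_fir, L_co_prime, alpha_co=ALPHA_CO_ULIRG):
    """Star formation efficiency in Lsun/Msun."""
    m_gas = alpha_co * L_co_prime   # molecular gas mass [Msun]
    return L_fir / m_gas            # [Lsun/Msun]

# A hypothetical ULIRG-luminosity source:
print(sfe(L_fir=1.0e12, L_co_prime=2.6e10))   # ~48 Lsun/Msun
```

Note that the quoted SFE values also fold in the (large) uncertainty in $\alpha_{\rm CO}$ itself, which the single fixed value above does not capture.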
The median SFE of the CO-detected SFRG sample is 280$\pm$260\,{\rm\,L$_\odot$}\,{\rm\,M$_\odot$}$^{-1}$, which is statistically consistent with the mean SFE for SMGs, 450$\pm$170\,{\rm\,L$_\odot$}\,{\rm\,M$_\odot$}$^{-1}$, although both values incorporate the large uncertainties of the gas conversion factor and FIR-derived SFR. \citet{chapman08a} highlights that the two pilot program \ifmmode{^{12}{\rm CO}(J\!=\!3\! \to \!2)}\else{$^{12}${\rm CO}($J$=3$\to$2)}\fi\ detections of \rm RGJ123711\ and \chap\ have exceptionally high SFEs and hypothesised that SFRGs, with further observation, might show similarly high SFEs compared to SMGs. \citet{daddi08a} analyzed the CO content of two $BzK$ selected galaxies (\dadb\ and \dada, also selected as SFRGs and included in our analysis) and claimed that they had relatively low, Milky Way/``normal spiral'' efficiencies, emphasising the difference between them and the high-efficiency ULIRGs. Our large sample of SFRGs, including both the Chapman et al. and Daddi et al. subsamples, reveals a much wider spread in star formation efficiencies, suggestive of a wide range in gas states and a possible range of galaxy states, although more SFRGs are consistent with the less-efficient Daddi et al. sample. The SFRGs not detected in CO and those far below the ULIRG $L_{\rm FIR}$/$L_{\rm CO}^\prime$ relation would appear to be very efficient star formers (less gas to fuel their high SFRs); however, this assumes that AGN contamination is minimal. AGN contamination is more likely than super-efficient star formation and occurs when an AGN boosts the radio-inferred FIR luminosity, and thus the inferred star formation rate; as the empirical relation between $L_{\rm FIR}$ and $L_{\rm CO}^\prime$ suggests, AGN contaminated sources would be fainter in CO gas than predicted. We can then classify SFRGs in terms of CO luminosity to FIR luminosity (i.e.
the ratio of $L_{\rm CO}^\prime$/$L_{\rm FIR}$), where low ratios are designated 'AGN' in 'CLASS$_{CO}$' in Table~\ref{tab:derived}. While the scatter of local ULIRGs around the $L_{\rm CO}^\prime$/$L_{\rm FIR}$ relation is minimal, $\sim$0.3\,dex, both the SFRG and SMG populations have more significant scatter below the relation; this is consistent with many SFRGs and SMGs having powerful AGN which boost the radio (thus FIR) luminosity. The scatter of SFRGs above the ULIRG $L_{\rm CO}^\prime$/$L_{\rm FIR}$ relation is suggestive of low star formation efficiencies as presented by \citet{daddi08a} and \citet{daddi10a}. We caution that for sources like \dadb\ and \dada, which have these low star formation efficiencies, it might be more appropriate to assume that the gas properties are more similar to spiral/disk galaxies than ULIRGs, particularly when converting to molecular gas mass using the CO/\ifmmode{{\rm H}_2}\else{H$_2$}\fi\ conversion factor. This factor differs greatly between ULIRGs (0.8 $M_\odot$/(K\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\,pc$^{2}$)) and Milky Way type disk galaxies (4.5 $M_\odot$/(K\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\,pc$^{2}$)). Since the inferred gas masses differ so greatly given these different assumptions, we give $M_{\ifmmode{{\rm H}_2}\else{H$_2$}\fi}$ for both ULIRG and spiral/disk galaxy assumptions in Table~\ref{tab:derived} but proceed with our interpretation using the ULIRG conversion factor. \subsection{Line Widths: Implications for Merger Stage} \begin{figure} \centering \includegraphics[width=0.90\columnwidth]{nfwhm.pdf} \caption{The distribution in CO line widths for the SFRG sample compared to local ULIRGs \citep{solomon97a} and SMGs. The offset/tentative SFRGs (open histogram) are added on top of the remaining SFRG sample (hashed area). The distribution for local ULIRGs has been re-normalised with respect to the total number of SFRGs for a clearer comparison.
The SMG distribution has been corrected for overestimation in line widths which is caused by fitting a single Gaussian to a double-peaked CO line, and it includes the samples of \citet{neri03a}, \citet{greve05a}, and \citet{tacconi06a}, as well as some yet unpublished SMG observations (Bothwell {\rm et\ts al.}, in preparation). } \label{fig:nfwhm} \end{figure} The distribution in CO line widths provides important insight into the galaxies' dynamics. Figure~\ref{fig:nfwhm} shows the $\Delta V_{\rm \co}$ full width at half maximum (FWHM) distributions for our SFRGs, SMGs (both 'bright' SMGs and the fainter sample observed in CO only recently; Smail, private communication) and local ULIRGs \citep{solomon97a}. No inclination angle corrections have been applied to the line widths presented here. The line widths of SMGs were adjusted to correct for the prior exclusion of double-peaked Gaussians \citep[see the details of this correction in][]{coppin08a}; this has reduced the mean SMG line width from 600\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ to 530\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi. Despite the adjustment, SMGs seem quite distinct from SFRGs and local ULIRGs in having a high-FWHM tail in their distribution. This high-FWHM tail is seen only in the bright subsample of SMGs originally surveyed in CO gas. While it could be attributed to selection bias, in that wide CO features are only detectable in the brightest objects where S/N is much higher, we highlight that there was significant improvement in receiver sensitivity between observations of these bright and wide SMG CO lines and the fainter SFRG observations, so the data have comparable S/N. This raises the possibility that only the brightest subsample of high-$z$ ULIRGs ($L_{\rm FIR}\lower.5ex\hbox{\gtsima}$10$^{13}$\,{\rm\,L$_\odot$}) have wide CO line widths ($\lower.5ex\hbox{\gtsima}$500\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi).
Broad dispersion-dominated CO lines (and irregular double-peaked profiles) in the highest luminosity systems are suggestive of early stage major mergers where two gas rich disks are infalling. Local ULIRGs in contrast have a much narrower line width distribution and are a factor of $\sim$5-10 fainter in $L_{\rm FIR}$. For this reason, local ULIRGs are often said to be in a late starburst phase, at a coalesced point during a merger \citep[when progenitors have coalesced into a single system; for a review see][]{sanders96a}. SFRGs and more modest-luminosity SMGs are difficult to place in this evolutionary sequence, but their populations are not likely to be exclusively dominated by either beginning or ending merger sub-stages. The lower dynamical masses of local ULIRGs (typical sizes $R\,=$\,1\,kpc and $\Delta V\,=$\,300\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi) could be due to downsizing$-$where extreme starbursts today are less massive than those at high-$z$. Both SFRGs and SMGs are consistent with this picture since they seem to be a factor of $\sim$2 larger \citep[$R_{1/2}\,\lower.5ex\hbox{\gtsima}$2\,{\rm\,kpc}, see measurements of CO size in][]{tacconi08a,daddi10a,bothwell10a}. The subsample of SMGs which have very broad features may be in a particular stage of merger where their line profile becomes broadened, perhaps the observation of two distinct gas components with very small physical separation. This would appear to increase our dynamical mass estimates of these systems. However, the radial `size' and merger correction factor $C$ would both need to be reassessed (neither of which has been measured on a case by case basis for these sources) before a physical interpretation of higher dynamical mass estimates can be made.
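The dynamical-mass scaling invoked here can be made explicit with a short numerical sketch. This is illustrative only: the geometric/merger correction factor $C$ has not been measured case by case for these sources, so the value used below is an assumption, and the input line width and radius are representative rather than fits to any individual galaxy.

```python
# Illustrative dynamical-mass sketch: M_dyn ~ C * sigma^2 * R / G,
# with sigma estimated from the CO FWHM assuming a Gaussian profile.
# The geometric/merger factor C is NOT measured for these sources;
# C = 2.0 below is a placeholder assumption.

G = 4.302e-3                  # gravitational constant [pc Msun^-1 (km/s)^2]
FWHM_TO_SIGMA = 1.0 / 2.355   # Gaussian FWHM -> velocity dispersion

def dynamical_mass(fwhm_kms, radius_pc, C=2.0):
    """Dynamical mass in Msun from a CO FWHM [km/s] and a radius [pc]."""
    sigma = fwhm_kms * FWHM_TO_SIGMA
    return C * sigma**2 * radius_pc / G

# SMG-like line width (530 km/s) within a 2 kpc radius:
print(f"{dynamical_mass(530.0, 2000.0):.2e}")  # ~5e10 Msun
```

The quadratic dependence on the line width is the point: a source in the broad-FWHM tail yields a several-times-larger $M_{\rm dyn}$ than a 300\,km\,s$^{-1}$ local-ULIRG-like line at fixed size and $C$.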
Although more observations are needed to draw firm conclusions, an alternative explanation for the narrower line widths observed in modest-luminosity sources, including SFRGs, is that the population consists of fewer major mergers than the very bright systems. Observational evidence indicates that anywhere from 50-90\%\ of local ULIRGs have undergone recent mergers \citep[e.g.][]{lawrence89a,melnick90a,clements96a}, but that a sizable fraction might be triggered by other mechanisms. Recent work from \citet{genzel08a} suggests that high star formation rates in secularly evolving disk galaxies may be caused by rapid rotation or smooth accretion of material from their surroundings (e.g. minor merging or tidal accretion). In addition, theoretical work indicates that the high star formation rates and IR luminosities in ULIRGs could often be generated by minor mergers and turbulent disk processes \citep[e.g.][]{monaco04a,dib06a}. ULIRGs driven by secular processes would exhibit narrow CO line widths, consistent with the SFRGs presented in this paper. High spatial resolution gas observations are needed to determine the true nature of their dynamics, however. Recent observations and simulation work have reiterated the idea that $>$10$^{13}$\,{\rm\,L$_\odot$}\ ULIRGs are more often in early-stage major mergers. \citet{tacconi08a} and \citet{engel10a} show that most SMGs at z$\sim$2 exhibit disturbed gas morphologies rather than smoothly rotating disks, and simulations and semi-analytic SMG models tell us that major mergers likely initiate most ultraluminous phases of high-$z$ star formation seen in SMGs \citep{narayanan09a,swinbank08a,baugh05a}. However, recent work from \citet{dave10a} indicates that ULIRGs may also be driven by continual bombardment by very low mass fragments onto a $\sim$10$^{11}$\,{\rm\,M$_\odot$}\ galaxy. Cold streams feeding continual gas buildup \citep[e.g.][]{dekel09a} have also been raised as a possible origin.
The tail of large CO line widths in SMGs provides a crucial piece of observational evidence that some SMGs are much more highly disturbed and represent a different phase than SFRGs and local ULIRGs. We note that the mean gas fraction, defined as the gas mass to dynamical mass ratio, of SFRGs is $\langle$M$_{\rm gas}$/M$_{\rm dyn}\rangle$=\,$f_{\rm gas}$=\,0.07$^{+0.11}_{-0.02}$, consistent with the same ratio for SMGs, which have $\langle$M$_{\rm gas}$/M$_{\rm dyn}\rangle\,\sim\,$0.09$^{+0.09}_{-0.07}$ (after correction for a 30$^\circ$ inclination angle). While this comparison is between SFRGs which have lower luminosities than CO-observed SMGs, the same molecular gas fraction suggests that the two populations are likely in similar evolutionary stages. We note, however, that if we assume a CO/\ifmmode{{\rm H}_2}\else{H$_2$}\fi\ gas conversion factor consistent with spirals instead of ULIRGs, the gas mass fraction increases substantially to $\sim$0.60. While overall, SFRG and SMG properties are more consistent with ULIRGs, it is possible that a few outliers, for example \dadb\ and \dada\ described in \citet{daddi08a} and a few of the SMGs exhibiting unusually low SFEs, are much more gas rich than the ULIRGs which comprise the rest of the populations. \subsection{Comparison with Other Populations} An evolutionary transition stage between ULIRG and quasar has been explored extensively by work on Dust Obscured Galaxies \citep[DOGs;][]{dey08a,pope08b}, a population of galaxies with warm dust temperatures and more modest SFRs than the brightest SMGs, consistent with $\sim$10$^{12}$\,{\rm\,L$_\odot$}\ ULIRGs. Ten of fourteen SFRGs with 24$\mu$m\ flux measurements satisfy DOG selection.
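Returning to the gas fractions quoted above, their sensitivity to the CO-to-H$_2$ conversion factor can be sketched numerically. The two $\alpha_{\rm CO}$ values are those adopted in the text; the input $L^\prime_{\rm CO}$ and dynamical mass below are hypothetical, chosen only to illustrate how the choice of conversion factor propagates.

```python
# Illustrative sketch of how the choice of CO -> H2 conversion factor
# propagates into gas mass and gas fraction. The two alpha_CO values
# are those quoted in the text; L'_CO and M_dyn here are hypothetical.

ALPHA_ULIRG  = 0.8   # Msun / (K km/s pc^2)
ALPHA_SPIRAL = 4.5   # Msun / (K km/s pc^2), Milky Way-like disks

def gas_mass(L_co_prime, alpha_co):
    """Molecular gas mass [Msun] from L'_CO [K km/s pc^2]."""
    return alpha_co * L_co_prime

L_co_prime = 2.6e10   # K km/s pc^2 (hypothetical source)
M_dyn      = 3.0e11   # Msun (hypothetical dynamical mass)

for name, alpha in [("ULIRG", ALPHA_ULIRG), ("spiral", ALPHA_SPIRAL)]:
    m = gas_mass(L_co_prime, alpha)
    print(f"{name}: M_H2 = {m:.1e} Msun, f_gas = {m / M_dyn:.2f}")

# The spiral conversion raises M_H2 and f_gas by 4.5/0.8 ~ 5.6x
```

The $\sim$5-6$\times$ jump between the two rows is the entire systematic: at fixed dynamical mass, the inferred gas fraction scales linearly with $\alpha_{\rm CO}$.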
While DOG selection is quite broad and selects nearly all SFRGs and many SMGs (requiring 24$\mu$m\ flux densities $>$100\,\uJy\ and red optical to IR colors), a subset of DOGs with spectroscopic redshifts have detailed near-IR to FIR photometric constraints \citep{bussmann09a} which show that AGN may contribute significantly to their bolometric luminosities, and as a result, most have warm dust temperatures ($T_d$\,\lower.5ex\hbox{\gtsima}\,45\,K). Due to their high AGN fraction \citep[dependent on luminosity and only $\ll$0.5 in the faintest, S$_{24}\,<\,$0.5\,mJy subset, e.g.][]{pope08b} and heavy dust obscuration, many DOGs are believed to lie at the transition phase between SMGs (or star-forming ULIRG) and luminous quasar \citep{pope08b} and overlap with the SFRG population. While SFRGs might have warm dust temperatures like some DOGs, we find that most are dominated by star formation and not AGN. This is largely a function of the aggressive selection of SFRGs, meant to weed out strong AGN by their spectral indicators in the rest-UV/optical and the observation of minimal 8$\mu$m\ flux excess and of extended radio emission. High star formation rates and a low AGN fraction (with respect to a higher AGN fraction in DOGs) are strong evidence that SFRGs are at a similar ULIRG phase to SMGs despite their warm dust. \begin{figure} \centering \includegraphics[width=0.90\columnwidth]{noeske-sfrmstar2.pdf} \caption{ The star formation rate per unit stellar mass against stellar mass. We compare SFRGs ({\it large circles}) to SMGs ({\it crosses}) and z$\sim$1 starburst galaxies \citep[gray triangles and squares;][]{noeske07a}. We also overplot the derived redshift dependent relations (at $z$=1 and $z$=2, dot-dashed lines) found for GOODS galaxies in \citet{daddi07a}. SFRGs not detected in CO are open circles while detected SFRGs are filled. 
Like SMGs, SFRGs have very large star formation rates per stellar mass compared with ``blue sequence'' galaxies, although their mean stellar mass ($\sim$7$\times$10$^{10}$\,{\rm\,M$_\odot$}) is less than the mean SMG stellar mass ($\sim$2$\times$10$^{11}$\,{\rm\,M$_\odot$}). } \label{fig:noeske} \end{figure} The molecular gas fractions of z$\sim$2 normal starburst galaxies have been estimated at $f_{\rm gas}\sim$0.4-0.5 \citep[assuming a $L^\prime_{\rm CO}/L_{\rm FIR}$ prior;][]{erb06a}. The gas fraction in SMGs (and a few BX active galaxies) has been measured to be $f_{\rm gas}\sim$0.3-0.5 \citep[these values do not take inclination into account, which is a factor of 1/4][]{tacconi08a}. We measure an internal gas fraction of our SFRG sample of $\sim$0.07, which is consistent with SMGs, both the Tacconi et al. estimate ($f_{\rm gas}\sim$0.1 if corrected for inclination) and our reassessment of the same data ($f_{\rm gas}\,=\,$0.09$^{+0.08}_{-0.06}$). All $f_{\rm gas}$ measurements for high-$z$ ULIRGs ($f_{\rm gas}\sim$0.1) are lower than the inferred gas fractions for normal z$\sim$2 galaxies ($f_{\rm gas}\sim$0.5) from \citet{erb06a}. This could indicate that the more modest-luminosity galaxies, consisting of gas-rich disks, have low dynamical masses but proportionately high gas mass, meaning they would be good progenitor candidates for ULIRG systems, if set on collision courses with other gas-rich disks. During the ULIRG starburst phase, the gas mass would start to decrease with rapid star formation. In contrast, we recognize that by using a higher $X_{\rm \co}$ conversion factor more consistent with quiescent disks, the measured gas fractions of these ULIRGs would increase by $\sim$6$\times$, making their gas fractions consistent with the estimate for normal $z\sim$2 galaxies. Future observations of more modest-luminosity galaxies in CO gas \citep[e.g.
like the recent work of][]{tacconi10a} are needed to truly understand the sequencing and gas properties of ULIRGs and their progenitors. Figure~\ref{fig:noeske} highlights the unusually high star formation rates per unit mass of SFRGs and SMGs above normal starbursts \citep[AEGIS samples;][]{noeske07a}, for a wide range of stellar masses. The relation between stellar mass and specific star formation rate has been shown to evolve with redshift \citep[e.g.][]{daddi07a}; however, SMGs and SFRGs still lie at higher star formation rates than galaxies of equal mass at the same redshift \citep[e.g.][]{da-Cunha10a}. This enhanced SFR per unit mass suggests that SFRG and SMG star formation processes are fundamentally different from SF processes in more modest luminosity galaxies, despite the overall range of stellar mass spanning almost two orders of magnitude. The $BzK$ active galaxy selection \citep{daddi04a} is meant to select moderately star-forming ($\sim$100-200\,\Mpy), massive ($\sim$10$^{11}$\,{\rm\,M$_\odot$}) galaxies at high redshift$-$systems which are typically below the ULIRG star formation rate threshold ($\lower.5ex\hbox{\gtsima}$200\,\Mpy). \citet{daddi10a} detect several active $BzK$ galaxies in \ifmmode{^{12}{\rm CO}(J\!=\!2\! \to \!1)}\else{$^{12}${\rm CO}($J$=2$\to$1)}\fi\ with surprisingly high gas masses given their star formation rates, indicating that they exhibit gas properties more consistent with normal Milky Way type galaxies, but at 'scaled-up' luminosities, stellar masses and gas masses. It is important to note, however, that the active $BzK$ galaxies observed in CO were all radio-detected; in other words, most of them have SFRs above the ULIRG cutoff ($>$200\,\Mpy) and might otherwise be characterized as SMGs or SFRGs. All SFRGs in this paper satisfy the active $BzK$ selection criterion, indicating that $BzK$ selection might probe both massive star-bursting galaxies and massive, extreme, dusty starbursts.
Similarly, it appears as if SFRG selection might select both high-$z$ ULIRG merging systems {\it and} extreme gas-rich disk galaxies. \subsection{Volume Density}\label{ss:volumedensity} It is probable that the luminosity bias described in section~\ref{ss:brightSMGs} has a significant effect on how we can interpret the CO observations of either population. Spectroscopic incompleteness in the SFRG population (as discussed in section~\ref{ss:brightSMGs}) makes the volume density difficult to calculate. We estimate that about 35\,\%\ of a complete sample of \uJy\ radio galaxies ($\sim$0.7\,arcmin$^{-2}$) with S$_{1.4}>$20\,\uJy\ have not been observed spectroscopically \citep[see][]{chapman03b}. The majority of these are submillimetre faint (since most spectroscopic observations of \uJy\ radio galaxies have been of SMGs). Of the submm-faint galaxies which were spectroscopically observed, 65\,\%\ have confirmed redshifts, roughly half of which are low luminosity AGN and the other half star forming galaxies. This means that by completing spectroscopic observations of known submm-faint \uJy\ radio galaxies in well surveyed fields, the source density of star-forming, submm-faint ULIRGs with redshifts would likely increase by $\sim$30\,\%, increasing the source density of all high-$z$ ULIRGs by $\lower.5ex\hbox{\gtsima}$15\%, which could also increase the ULIRG contribution (including SMGs) to the cosmic star formation rate density at its peak \citep[e.g. see][]{bouwens09a,goto10a}. Future observations from {\sc SCUBA2} and the {\it Herschel Space Observatory} in the 50-500\,$\mu$m\ range will dramatically improve the census of warmer dust ULIRGs (and all dusty starburst galaxies) at z$\sim$2 through continued discovery, thus allowing more thorough follow-up of the submm-faint (and 50-500$\mu$m\ bright) radio sample.
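The bookkeeping behind this completeness estimate can be sketched with the fractions quoted above. This is a toy calculation: treating essentially all of the spectroscopically unobserved sources as submm-faint is a simplifying upper-bound assumption, so the result should be read as an optimistic yield rather than a prediction.

```python
# Toy bookkeeping for spectroscopic follow-up of uJy radio galaxies,
# using the fractions quoted in the text. Treating ALL unobserved
# sources as submm-faint is an assumption (an upper bound).

density_total   = 0.7    # arcmin^-2, S_1.4 > 20 uJy radio galaxies
frac_unobserved = 0.35   # not yet observed spectroscopically
frac_redshift   = 0.65   # of observed submm-faint sources with z
frac_starform   = 0.5    # of those, fraction that are star forming

# Expected yield of NEW star-forming, submm-faint sources with z:
new_sf = density_total * frac_unobserved * frac_redshift * frac_starform
print(f"{new_sf:.3f} arcmin^-2")   # ~0.08 arcmin^-2
```

Under these assumed success rates, completing the follow-up would add roughly 0.08 star-forming, submm-faint ULIRGs per arcmin$^{2}$ with redshifts, in line with the modest ($\sim$30\%) relative increase argued for above.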
With improved interferometric millimetre line observations at low and high frequencies from the Atacama Large Millimeter Array (ALMA), we will be able to target these high redshift sources in multiple CO transitions, thus removing an observational bias towards certain J-transitions of CO and enabling more accurate calculation of gas masses. Without such strong temperature biases in gas and dust observations, a more complete interpretation of high redshift ultraluminous galaxies will finally be possible. \section{Conclusions}\label{s:conclusions} We have presented CO molecular gas observations of a sample of submillimetre-faint, star-forming radio galaxies (SFRGs). Due to their non-detection at submillimetre wavelengths and lack of dominant AGN, these ultraluminous, \uJy\ radio galaxies are thought to be dominated by star formation but have warmer dust temperatures than SMGs. Out of 16 CO-observed SFRGs (12 from this paper and 4 from the literature), 10 are detected with a mean CO luminosity of $L^\prime_{\rm CO[1-0]}\,\sim\,$2.6$\times$10$^{10}$\,K\,km\,s$^{-1}$\,pc$^2$, which is slightly less luminous than the CO-observed SMG sample, despite being $\sim$2$\times$ less luminous in the radio. We attribute the luminosity difference to a selection bias but suggest that physical driving mechanisms might differ between the very bright ($>$10$^{13}$\,{\rm\,L$_\odot$}) and moderately bright ($\sim$10$^{12}$\,{\rm\,L$_\odot$}) populations. High-resolution radio imaging from MERLIN+VLA shows that the radio emission in the SFRG sample is resolved and extended with mean effective radii of $\sim$2\,kpc, suggesting that the SFRG radio luminosities are dominated by star formation rather than AGN. The MERLIN+VLA size constraints are consistent with similarly analyzed SMG MERLIN+VLA sizes. While we note that AGN do not dominate our sample, it is possible that several of our sources have non-negligible AGN due in part to their selection as radio galaxies.
Due to limited FIR data, we use the FIR/radio correlation to derive $L_{\rm FIR}$ and then compute extinction-free star formation rates from the FIR. The star formation efficiencies (SFEs) of SFRGs are comparable within large uncertainties to those of SMGs and local ULIRGs, even though a few sources appear to have very high SFEs \citep[like those in][]{chapman08a} or very low SFEs \citep[like those in][]{daddi08a}. Those with perceived very high SFEs are more likely AGN-dominated than super-efficient; their FIR luminosities as calculated from the radio are probably overestimated. SFRGs have narrower CO line widths than the bright subsample of SMGs at the same redshifts ($\Delta V_{\rm SFRG}\,\sim\,$320\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ and $\Delta V_{\rm SMG}\,\sim\,$530\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi), suggesting that SFRGs might have less disturbed dynamical environments. The line width distribution is potentially suggestive of different evolutionary stages or processes between SFRGs and SMGs; however, the observed difference with SMGs could be due to a S/N or luminosity bias. The former would mean that intrinsically broad lines would have underestimated FWHMs due to low S/N. The latter is due to the more thorough spectroscopic sampling of the SMG population. SMGs have higher spectroscopic completeness and also include many objects with AGN signatures in the rest-UV/optical. Any SFRGs which have similar AGN signatures were culled from the sample, thus eliminating some of the potentially brightest SFRGs (most of the bright SMGs which have been surveyed in CO contain optical AGN). While less luminous SMGs (at the same luminosities as SFRGs) exist, few have been observed in CO, thus it is difficult to rule out a dependence of the CO line width distribution on CO luminosity.
Despite selection biases, we have explored the possible physical scenarios triggering warm-dust ULIRGs in contrast to the well studied cold-dust ULIRGs. SFRGs appear to bridge the gap between the properties of $>$10$^{13}$\,{\rm\,L$_\odot$}\ SMGs and $\sim$10$^{12}$\,{\rm\,L$_\odot$}\ local ULIRGs. Quantitatively, their extended radio emission suggests sizes consistent with SMGs, implying much larger dynamical masses than local ULIRGs. We show that SFRGs have the same AGN fraction as SMGs and are therefore unlikely to represent a `post-SMG' AGN turn-on phase. Luminous SMGs have been characterised as an early infall stage during a major merger, and local ULIRGs are often described as late-stage major mergers. Here, we suggest that SFRGs (and less luminous SMGs) span the range of states during peak merger interaction. \section*{Acknowledgments} We thank the anonymous referee for detailed, helpful comments which helped improve this paper greatly. This work is based on observations carried out with the IRAM Plateau de Bure Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). We acknowledge the use of GILDAS software ({\tt http://www.iram.fr/IRAMFR/GILDAS}). This work is also based, in part, on observations by the University of Manchester at Jodrell Bank Observatory on behalf of STFC, and the VLA of the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. CMC thanks the Gates Cambridge Trust for support, IRS thanks STFC for support, and KC is supported by an STFC Postdoctoral Fellowship. Support for this work was provided by NASA through Hubble Fellowship grant HST-HF-51268.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555.
\section{Introduction} The 4D light field camera can simultaneously record spatial and angular information of light rays incident at pixels of the tensor by inserting a microlens array between the main lens and image sensor. Due to its abundant information captured through one imaging, compared to traditional 2D images, the additional angular information can help to synthesize focal stacks and all-focus images with the rendering \cite{levoy1996light} and post-capture refocusing technique \cite{ng2005light}. These advantages make the light field data has been successfully applied to depth estimation. Light field depth estimation has become a popular research topic and plays a more and more important role in a wide range of applications, such as scene reconstruction \cite{kim2013scene}, image super-resolution \cite{pujades2014bayesian,wanner2013variational}, object tracking \cite{yang2013new}, saliency detection \cite{piao2019deep}, image segmentation \cite{zhu20174d}. According to the input data type, existing depth estimation methods from light field can be categorized into three types: depth estimation based on epipolar plane images (EPIs) \cite{kim2014cost,heber2017neural}, depth estimation based on sub-aperture images (sub-apertures) \cite{tomioka2017depth,wang2016depth} and depth estimation based on the focal stack \cite{pertuz2013analysis,tao2013depth,lin2015depth}. The EPIs-based depth estimation explores geometry structures in EPIs to capture the depth information. The sub-apertures-based depth estimation usually predicts depth maps by regarding sub-aperture images as a multi-view stereo configuration. The depth estimation based on focal stack often estimates the depth by utilizing some cues of the focal stack such as defocus cue, shading cue, symmetry, etc. Despite the above three types of the traditional methods have achieved great success, there are still some challenges that limit their application. 
For example, because of depending on domain-specific prior knowledge to increase the robustness, the generalization ability in different scenarios is limited to get the unsatisfactory depth map. With the development of deep learning, learning-based light field depth estimation emerges and alleviates this problem. Many methods based on deep learning capture depth cues from EPIs or sub-aperture images, while less methods focus on the focal stack. Based on this observation, we plan to predict depth maps with the focal stack in this paper. Focal stack generated from light field contains the focusness information which can help focus at the object in different range of depth. However, focal slices only contain the local focusness information of a scene. In other words, the defocus information may decrease the accuracy of the prediction results. Therefore, it is not enough to get a more robust depth by only using the focal stack \cite{hazirbas2018deep}. Considering the RGB image contains global and high quality structure information, we have strong reasons to believe that incorporating the focal stack and RGB images is helpful for light field depth estimation. In order to improving the performance of depth estimation, there are still some issues needed to be consider. First, we need to focus on the fact that depth value of each pixel is related to the neighboring pixel. Therefore, it is difficult to predict the accurate depth value when considered in isolation, as local image evidence is ambiguous. So, how to effectively capture contextual information to find the long-range correlations between features is essential for reasoning small and thin objects and modeling object co-occurrences in a scene. Second, since the RGB image contains more internal details and the focal slices contain abundant depth information, how to effectively capture and integrate the complementary information between them is an important aspect we should concentrate on. 
In this paper, our method confronts these challenges successfully. In summary, our main contributions are as follows: \begin{itemize} \item We propose a graph convolution-based context reasoning unit (CRU) to comprehensively extract contextual information from the focal stack and RGB images, respectively. This design explores the internal spatial correlations between different objects and regions, which helps clarify local depth confusion and improves the accuracy of depth estimation. \item We propose an attention-guided cross-modal fusion module (CMFA) to integrate the different information in the focal stack and RGB images. It captures complementary information from paired focal-slice and RGB features through cross-residual connections to enhance the features, and then learns multiple attention weights to integrate the multi-modal information effectively, which helps compensate for the detail loss caused by defocus blur. \item Extensive experiments on two light field datasets show that our method achieves consistently superior performance over state-of-the-art approaches. Moreover, our method is successfully applied to a dataset collected by mobile phones, which demonstrates that it is not limited to light field focal stacks and is practical for daily life. \end{itemize} \section{Related Work} {\bfseries{Traditional methods. }}For depth estimation from light field images, traditional methods make use of sub-aperture images, epipolar plane images (EPIs) or focal stacks. In terms of sub-aperture images, Georgiev and Lumsdaine \cite{georgiev2010reducing} compute a normalized cross correlation between microlens images to estimate disparity maps; Bishop and Favaro \cite{bishop2011light} propose an interactive method for multi-view stereo images; Yu \emph{et al}.
\cite{yu2013line} analyze the 3D geometry of lines and compute disparity maps through line matching between the sub-aperture images; Heber and Pock \cite{heber2014shape} use low-rank structure regularization to align the sub-aperture images for estimating disparity maps; Jeon \emph{et al}. \cite{jeon2015accurate} use a cost volume to estimate multi-view stereo correspondences with sub-pixel accuracy. Early work on EPIs can be traced back to the research by Bolles \emph{et al}. \cite{bolles1987epipolar}, who estimate the 3D structure by detecting edges in EPIs and then fitting straight-line segments to the edges. Zhang \emph{et al}. \cite{zhang2016robust} propose a local depth estimation method which employs matching lines and a spinning parallelogram operator to remove the effect of occlusion; Zhang \emph{et al}. \cite{zhang2016light} exploit the linear structure of EPIs and locally linear embedding to predict the depth map; Johannsen \emph{et al}. \cite{johannsen2016sparse} employ a specially designed sparse decomposition which leverages the orientation-depth relationship in EPIs; Wanner and Goldluecke \cite{wanner2013variational} compute the vertical and horizontal slopes of EPIs using a structure tensor, formulate depth map estimation as a global optimization problem and refine the initial disparity maps using a fast total variation denoising filter; Sheng \emph{et al}. \cite{sheng2018occlusion} propose a method which combines local depth with occlusion orientation and employs multi-orientation EPIs. Tao \emph{et al}. \cite{tao2013depth} exploit both defocus and correspondence cues from the focal stack using a contrast-based measure to achieve better performance. However, these methods are usually too dependent on prior knowledge to generalize easily to other datasets. \noindent{{\bfseries{Learning-based Methods.
}}}Recently, convolutional neural networks have performed very well on computer vision tasks such as segmentation \cite{wang2019fast} and classification \cite{kipf2016semi}. However, there are fewer learning-based methods for depth estimation from light field images. Heber \emph{et al}. \cite{heber2016convolutional} propose a network consisting of encoder and decoder parts to predict EPI line orientations. Luo \emph{et al}. \cite{luo2017epi} propose an EPI-patch based CNN architecture, and Zhou \emph{et al}. \cite{zhou2018scale} introduce a scale- and orientation-aware EPI-patch learning network. Shin \emph{et al}. \cite{shin2018epinet} propose a fully convolutional network for depth estimation that exploits light field geometry. Anwar \emph{et al}. \cite{anwar2017depth} exploit dense overlapping patches to predict depth from a single focal slice. Hazirbas \emph{et al}. \cite{hazirbas2018deep} propose the first deep-learning-based method to compute depth from a focal stack. These methods have greatly improved the prediction results, but there is still room for improvement. Moreover, they pay little attention to the combination of the focal stack and the corresponding RGB images. In this paper, we propose a deep learning-based method which effectively incorporates an RGB image and a focal stack to improve the prediction. \section{Method} In this section, we focus on how to make effective use of the RGB image and the focal stack to predict depth maps. First, we briefly introduce the overall architecture, which can be trained end-to-end, in Sec.3.1. Then we detail our context reasoning unit (CRU) and its key component in Sec.3.2. Finally, we elaborate on the attention-guided cross-modal fusion module (CMFA), which effectively captures and integrates the paired focal-slice and RGB features to significantly improve the performance, in Sec.3.3. \begin{figure*}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{fig/Fig2.pdf} \end{center} \vspace{-4mm} \caption{The whole pipeline of our method.
It consists of the encoder and decoder. } \label{fig:long} \vspace{-6mm} \end{figure*} \subsection{The Overall Architecture} Our network architecture consists of an encoder and a decoder. It aims to comprehensively extract and effectively integrate features from the focal stack and the RGB image. The overall framework is shown in Fig.1. The encoder has a symmetric two-stream feature extraction structure: a focal stack stream and an RGB stream. Each stream contains a backbone, a VGG-16 \cite{simonyan2014very} whose last pooling and fully-connected layers are discarded, and the context reasoning unit (CRU). The decoder consists of a progressive hierarchical fusion structure which contains our proposed attention-guided cross-modal fusion module (CMFA). For the sake of explanation, the shape of features is denoted as $N\times W\times H\times C $, where N, W, H and C represent the number of focal slices, width, height and channels respectively. Specifically, given the RGB image $I_0$ and the focal stack consisting of 12 focal slices $\{I_1,I_2,...I_{12}\}$, we separately feed them into the backbones. Then, with the raw side-out light field features $\{{F_{focal\_i} }\}_{i = 3}^5$ ($12\times W\times H\times C$) and RGB features $\{{F_{rgb\_i} }\}_{i = 3}^5$ ($1\times W\times H\times C$) from the last three layers, we respectively utilize the CRU to comprehensively extract contextual information from them. The CRU finds long-range correlations between features to explore the internal spatial correlation of the focal slices and RGB images. In the decoder, we boost our model with the proposed attention-guided cross-modal fusion module (CMFA), which integrates the paired features $\{{{F_{focal\_i} }}'\}_{i = 3}^5$ ($12\times W_{1}\times H_{1}\times C_{1}$) and $\{{{F_{rgb\_i} }}'\}_{i = 3}^5$ ($1\times W_{1}\times H_{1}\times C_{1}$) coming from the CRU to get $\{{F_i }\}_{i = 3}^5$ ($1\times W_{2}\times H_{2}\times C_{2}$).
Finally, the multi-level features are decoded by the top-down architecture, and the depth maps are supervised by the ground truths. \subsection{Context Reasoning Unit (CRU)} As the in-focus and out-of-focus regions of focal slices indicate different depth information and the RGB image contains more detailed information about a scene, it is important to extract contextual information to capture long-range correlations between features and explore the internal spatial correlations in scenes. To this end, we propose a context reasoning unit (CRU). Different from general context aggregation modules which use a pure ASPP \cite{DBLP:journals/corr/ChenPK0Y16} or combine the ASPP with an image-level encoder, our CRU not only captures the spatial correlations between large objects with multiple dilated convolutions, but also pays more attention to small and thin objects by capturing more abstract features in the image with multiple graph convolutions. As illustrated in Fig.1, the CRU consists of three branches. The top one is a short-connection operation which learns the residual information, the middle branch contains the multiple dilated convolutions and the bottom branch contains the multiple graph convolutions. In the unit, the output features from the middle and bottom branches are concatenated and convolved, and then added to the features from the top branch to obtain the final refined features. The multiple dilated convolutions consist of a cross-channel learner and an atrous spatial pyramid pooling. They learn complex cross-channel interactions with a $1\times1$ convolution, and extract features via dilated convolutions with small rates of 3, 5 and 7, thus capturing multi-scale spatial information. The resulting features are dominated by large objects, which effectively models the spatial correlations between the large objects in the image.
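To make the multi-scale claim concrete, the effective kernel size of a dilated convolution is $k_{\mathrm{eff}} = k + (k-1)(d-1)$, so the rates 3, 5 and 7 cover regions of increasing scale. A minimal sketch (the $3\times3$ kernel size is an assumption, as the paper does not state it):

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective kernel size of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

# With assumed 3x3 kernels, rates 3, 5 and 7 span 7x7, 11x11 and 15x15 regions.
spans = {d: effective_kernel(3, d) for d in (3, 5, 7)}
print(spans)  # {3: 7, 5: 11, 7: 15}
```

This is why the parallel dilated branches together act like a multi-scale spatial pyramid without increasing the parameter count of a single large kernel.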
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{fig/Fig3.pdf} \end{center} \vspace{-4mm} \caption{ The architecture of the multiple graph convolutions.} \label{fig:short1} \vspace{-6mm} \end{figure} The multiple graph convolutions model interdependencies along the channel dimension of the network's feature maps. Different from previous methods which adopt image-level encoder structures, such as pure \emph{fc} layers \cite{eigen2014depth} or global average pooling \cite{hu2018squeeze}, our design can effectively model and communicate region-level clues with fewer parameters and more nodes. Compared to \cite{chen2019graph}, we establish multiple node topological graphs in parallel to cover regions at different scales. The number of nodes in each graph changes dynamically according to the spatial size of the input features. This enables our network to refine the spatial relationships between different regions and adapt to small and thin objects effectively. Therefore, we can produce coherent predictions that consider all objects and regional differences in the image. Fig.2 shows a schematic illustration of our design. Specifically, take the raw side-out focal features $F_{focal\_i}$ ($ 12\times W\times H\times C$) as an example. Given the input features $X=F_{focal\_i}$, we establish three node topological graphs through three parallel branches to refine the spatial relationships. In the \emph{i}-th branch (\emph{i} = 1, 2, 3), the process can be divided into three steps: 1) Space projection: mapping the features from the Coordinate Space \emph{S} to the Interaction Space \emph{I}. We first reduce the dimension of $X$ with $\psi _i(X)$ and formulate the projection function $\varphi_i(X)=B_i$. In practice, $\psi _i(X)$ is achieved by a $1\times1$ convolution layer with $C_i$ channels and $\varphi _i(X)$ is achieved by a $1\times1$ convolution layer with $ N_i=\frac{W \times H}{{4 \times 2^{i - 1} }}$ channels.
Therefore, the input features $X$ are projected to new features $V_i$ ($12 \times N_i \times C_i$) in the space \emph{I}. $V_i$ integrates information from different regions of the focal slices. Note that $N_i$ is the number of nodes and it changes dynamically according to the spatial size of the raw features. This design helps our network effectively adapt to features with different scales. 2) Feature graph convolution: reasoning about relations with graphs. After projection, we build a fully-connected graph with adjacency matrix $A_i$ ($ 12 \times N_i \times N_i $) in the Interaction Space \emph{I}, where each node contains a feature descriptor. Therefore, the context reasoning problem is simplified to capturing interactions between nodes, which is achieved by two 1D convolution layers along the channel and node directions. With the adjacency matrix and the layer-specific trainable edge weights $W_i$, we diffuse information across nodes to get the node-feature matrix $M_i$. 3) Reprojection: mapping the features back to the Coordinate Space \emph{S} from the Interaction Space \emph{I}. After the reasoning, we map the new features $M_i$ back into the original coordinate space with another mapping function ${B_i}^T$ to get $Y_i$ ($ 12\times W \times H\times C_i$). $Y_i$ is extended to the original dimension by a $1\times1$ convolutional layer. Finally, we add the output features $\{ {Y_i } \}_{i = 1}^3$ to the original features $X$. Another convolution layer is attached to produce the final feature $X_{out}$ ($ 12\times W\times H\times C$).
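The three steps above can be illustrated with a minimal single-branch NumPy sketch. This is only a shape-level illustration under stated assumptions: the slice dimension is omitted, random matrices stand in for the learned $1\times1$ convolutions and the learned adjacency, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8; C = 16; C_i = 4
N_i = (H * W) // 4                          # node count for branch i = 1: N_i = HW / (4 * 2^(i-1))

X = rng.standard_normal((H * W, C))         # one flattened feature map in coordinate space S

# On a flattened map, 1x1 convolutions reduce to matrix multiplies.
W_psi = rng.standard_normal((C, C_i)) * 0.1   # psi_i: channel reduction to C_i
W_phi = rng.standard_normal((C, N_i)) * 0.1   # phi_i: produces the projection matrix B_i
A = rng.standard_normal((N_i, N_i)) * 0.1     # adjacency matrix A_i (learned in the paper)
W_g = rng.standard_normal((C_i, C_i)) * 0.1   # trainable edge weights W_i
W_up = rng.standard_normal((C_i, C)) * 0.1    # 1x1 conv extending back to C channels

# 1) Space projection: V_i = B_i * psi_i(X), from S to interaction space I.
B = (X @ W_phi).T                 # (N_i, HW)
V = B @ (X @ W_psi)               # (N_i, C_i) node features

# 2) Feature graph convolution: M_i = (V_i - A_i V_i) W_i.
M = (V - A @ V) @ W_g

# 3) Reprojection: Y_i = B_i^T M_i, then extend channels and add residually.
Y = (B.T @ M) @ W_up              # (HW, C)
X_out = X + Y                     # one branch of X_out = X + Y_1 + Y_2 + Y_3
```

In the full unit, the two remaining branches repeat the same computation with $N_2 = HW/8$ and $N_3 = HW/16$ nodes, and their reprojected outputs are summed into the residual.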
The implementation process can be defined as follows: \begin{equation} \begin{array}{l} \{ {V_i }\}_{i = 1}^3 = \{ {B_i \psi _i (X)}\}_{i = 1}^3 = \{ \varphi_i(X) \psi _i(X)\}_{i = 1}^3 \end{array} \end{equation} \begin{equation} \begin{array}{l} \{ {M_i } \}_{i = 1}^3 = \{ (V_i - A_{i} V_i )W_i \}_{i = 1}^3 \end{array} \end{equation} \begin{equation} \begin{array}{l} \{ {Y_i }\}_{i = 1}^3 = \{ {(B_i )^T M_i } \}_{i = 1}^3 \end{array} \end{equation} \begin{equation} \begin{array}{l} X_{out} = X + Y_1 + Y_2 + Y_3, \end{array} \end{equation} For the RGB images, the same operations are performed on the RGB features generated from the backbone. \subsection{Attention-guided Cross-Modal Fusion Module (CMFA)} Defocus blur can lead to detail loss, which negatively affects the accuracy of the depth map. To address this challenge, with the contextual information extracted from the focal-slice features and RGB features by the context reasoning unit (CRU), we aim to fuse them to compensate for the detail loss. A straightforward approach is to simply concatenate the RGB features and the focal-slice features, but this not only ignores the relative contributions of the different focal-slice features and RGB features to the final result, but also severely destroys the spatial correlation between focal slices. Therefore, we design an attention-guided cross-modal fusion module to effectively integrate the implicit depth information in focal slices and the abundant content information in RGB images. As shown in Fig.1, the process of our module can be divided into two steps: 1) capturing complementary information to enhance the features; 2) fusing the enhanced features. Considering the difference between the focal features and RGB features, we first augment them to highlight the complementarity of the multi-modal information.
In the first step, we introduce cross-modal residual connections, achieved by simple 3D and 2D convolutions, to capture complementary information from the paired features $\{{{F_{focal\_i} }}'\}_{i = 3}^5$ and $\{{{F_{rgb\_i} }}'\}_{i = 3}^5$. We then add the complementary information to the features of the other modality, and another $1 \times 1$ 2D convolution is attached to learn deeper representations, yielding the enhanced paired features $\{{{F_{focal\_i} }}''\}_{i = 3}^5$ and $\{{{F_{rgb\_i} }}''\}_{i = 3}^5$. Using cross-modal residual connections to extract complementary characteristics from the paired features can be equivalently posed as approximating a residual function, and this reformulation disambiguates the multi-modal combination. In the second step, inspired by \cite{meng2019frame}, we aggregate the enhanced RGB features and focal-slice features with multiple attention weights. Taking the enhanced ${F_{focal\_i}}''$ and ${F_{rgb\_i}}''$ as an example, we concatenate them along the slice dimension and denote them as \emph{N} slice features $\{ {f_{i}^j } \}_{j = 1}^N$ (\emph{N}=13). First, in order to concentrate on the depth information of every slice and the content information of the RGB image, we assign a coarse self-attention weight to every slice $f_{i}^j$. With these self-attention weights, we aggregate all the slice features into a global feature $F_{f_1}$. Because $F_{f_1}$ contains all the focal-slice features and the RGB structure information over the entire depth range, we associate each slice's features with the global representation $F_{f_1}$ to learn more reliable relation-attention weights that improve the fusion results. With the self-attention weights and relation-attention weights, we integrate all the slice features to get the refined feature representation $F_{f_2}$. Finally, with a simple convolution layer, we get the final fusion result $F_{i}$.
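The two fusion steps above can be sketched in NumPy. This is a minimal single-level illustration under stated assumptions: dropout is omitted, the fully connected layers are random projection vectors, and all tensor sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, H, W = 13, 8, 4, 4                      # 12 focal slices + 1 RGB feature
f = rng.standard_normal((N, C, H, W))         # enhanced slice features f_i^j

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
W_fc1 = rng.standard_normal(C) * 0.1          # fc producing the scalar self-attention weight
W_fc2 = rng.standard_normal(2 * C) * 0.1      # fc on the concatenated (slice, global) descriptor

# Self-attention weights gamma_j from globally average-pooled slice descriptors.
pooled = f.mean(axis=(2, 3))                  # (N, C) global average pooling
gamma = sigmoid(pooled @ W_fc1)               # (N,)

# Global feature F_f1: gamma-weighted average over all slices.
F1 = (gamma[:, None, None, None] * f).sum(0) / gamma.sum()

# Relation-attention weights lambda_j from each slice concatenated with F_f1.
cat = np.concatenate([f, np.broadcast_to(F1, f.shape)], axis=1)   # (N, 2C, H, W)
lam = sigmoid(cat.mean(axis=(2, 3)) @ W_fc2)  # (N,)

# Refined representation F_f2: (gamma * lambda)-weighted average of the concatenations.
w = gamma * lam
F2 = (w[:, None, None, None] * cat).sum(0) / w.sum()
```

A final convolution (not shown) would map $F_{f_2}$ back to the fused output $F_i$; in this sketch that would be one more learned projection over the $2C$ channels.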
The process can be defined as: \begin{equation} \begin{array}{l} \gamma _j = \sigma( {fc({dropout( {avgpooling(f_{i}^j )})})}) \end{array} \end{equation} \begin{equation} \begin{array}{l} F_{f_1} = \frac{{\sum\nolimits_{j = 1}^N{\gamma_j f_{i}^j}}}{{\sum\nolimits_{j = 1}^N {\gamma _j }}} \end{array} \end{equation} \begin{equation} \begin{array}{l} \lambda _j = \sigma( {fc({dropout( {avgpooling({\rm{C}}(f_{i}^j ,F_{f_1} ))})})}) \end{array} \end{equation} \begin{equation} \begin{array}{l} F_{f_2} = \frac{{\sum\nolimits_{j = 1}^N {\gamma _j \lambda _j {\rm{C}}( {f_{i}^j ,F_{f_1} })} }}{{\sum\nolimits_{j = 1}^N {\gamma _j \lambda _j } }} \end{array} \end{equation} \begin{equation} \begin{array}{l} F_i = conv(F_{f_2} ), \end{array} \end{equation} where $\sigma$ is the sigmoid function, $\gamma _j$ is the self-attention weight and $\lambda _j$ is the relation-attention weight of the \emph{j}-th slice features, and \emph{C} is the concatenation operation. In summary, this module makes effective use of the complementarity between the focal slices and the RGB image. \section{Experiments} \subsection{Dataset} To evaluate the performance of our proposed network, we conduct experiments on two public light field datasets, DUT-LFDD \cite{piao2019depth} and the LFSD dataset \cite{li2014saliency}, as well as a mobile phone dataset \cite{suwajanakorn2015depth}. \noindent {\bfseries{DUT-LFDD:}} This dataset contains 967 real-world light field samples captured by a Lytro camera. Each light field consists of an RGB image, a focal stack with 12 focal slices focused at different depths and a corresponding ground truth depth map. We select 630 samples for training and the remaining 337 samples for testing. Previous studies \cite{eigen2014depth,eigen2015predicting} show that data augmentation helps improve accuracy and avoid over-fitting.
Therefore, to augment the training set, we flip the input images horizontally with $50\%$ probability, taking care to swap all images so that they remain in the correct positions relative to each other. We also rotate them by a random degree in the range [-5, 5]. In addition, we apply color augmentation, adopting random brightness, contrast and saturation values sampled from uniform distributions in the range [0.6, 1.4]. \noindent {\bfseries{LFSD:}} This dataset is proposed by Li \emph{et al}. It contains 100 light field scenes captured by a Lytro camera, including 60 indoor and 40 outdoor scenes. Each scene contains an all-in-focus image, a focal stack with 12 focal slices and a depth map. \noindent {\bfseries{Mobile Phone Dataset:}} This dataset is captured with a Samsung Galaxy phone during auto-focusing. Each scene consists of a series of focal slices focused at different depths. It contains 13 scenes, such as plants, fruits, windows, etc. The size of every image is $640 \times 340$. \subsection{ Experiment setup} \noindent {\bfseries{Evaluation Metrics.}} In order to comprehensively evaluate the various methods, we adopt six evaluation metrics commonly used in depth estimation: root mean squared error (rms), mean absolute relative error (abs rel), squared relative error (sq rel) and accuracy with a threshold $\delta _i$. \noindent {\bfseries{Implementation details.}} Our method is implemented with the PyTorch toolbox and trained on a PC with a GTX 2080 GPU. The input focal stack and RGB images are uniformly resized to $256 \times 256$. We use the Adam method for optimization with an initial learning rate of 0.0001, reduced to 0.00003 at 40 epochs. We initialize the backbone of our encoder with the corresponding pre-trained VGG-16 net. The other layers in the network are randomly initialized. The batch size is 1 and the maximum number of epochs is set to 50.
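For reference, the error and accuracy metrics above can be computed as follows. This is the standard formulation of these metrics; the exact averaging order and any validity masking used in the paper are assumptions:

```python
import numpy as np

def depth_metrics(pred, gt):
    """rms, abs rel, sq rel and the threshold accuracies delta_i = 1.25**i."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    rms = np.sqrt(np.mean((pred - gt) ** 2))
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    # delta_i: fraction of pixels whose max(pred/gt, gt/pred) is below 1.25**i.
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [np.mean(ratio < 1.25 ** i) for i in (1, 2, 3)]
    return rms, abs_rel, sq_rel, deltas

# A perfect prediction gives zero error and delta accuracies of 1.
rms, abs_rel, sq_rel, d = depth_metrics([1.0, 2.0, 4.0], [1.0, 2.0, 4.0])
```

Lower is better for the three error metrics and higher is better for the $\delta_i$ accuracies, matching the $\downarrow$/$\uparrow$ arrows in the tables.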
To improve the prediction, we employ the loss function in \cite{hu2019revisiting}, which consists of an L1 loss, a gradient loss and a surface normal loss; we set all loss weights to 1 in all experiments. \subsection{Ablation Studies} In this section, we conduct ablation studies on each component of our network and further explore their functions. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.85\linewidth]{fig/Fig7.pdf} \end{center} \vspace{-4mm} \caption{The baseline network, which takes the RGB image and the focal stack as input. } \label{fig:short2} \vspace{-6mm} \end{figure} \noindent {\bfseries{Baseline.}} For a fair comparison, we design a baseline network and denote it as 'Baseline'. As shown in Fig.3, the baseline network consists of two streams: a focal stack stream and an RGB stream. Each stream adopts VGG-16 as the backbone. We also use Conv 2D and Conv 3D blocks to replace the CRU and CMFA of the proposed network, respectively. Each Conv 2D or Conv 3D block contains 6 plain 2D or 3D convolution layers. This ensures that our baseline has relatively high expressive power. We use plain concatenation to fuse the last three RGB features and the corresponding focal-slice features from the encoder network, and then predict the depth map through the decoding network. \noindent {\bfseries{Effect of the multi-modal input.}} To show the advantage of the multi-modal input, we compare the RGB stream, the focal stack stream and the baseline. Specifically, the RGB stream and focal stack stream are the corresponding streams of the baseline network, denoted as 'rgb' and 'focal stack' respectively. As shown in Table 1 and Table 2, the 'Baseline' outperforms 'rgb' and 'focal stack' by a large margin. The depth maps in Fig.4 also confirm that the combination of RGB images and the focal stack achieves better performance than using single-modal information alone.
\begin{table}[!ht] \centering \setlength{\tabcolsep}{2mm} \begin{threeparttable} \caption{Quantitative results of the ablation analysis on DUT-LFDD for our network. Note that $\delta_i = 1.25^{i} (i=1, 2, 3)$.} \label{tab:performance_comparison} \begin{tabular}{ccp{1.05cm}<{\centering}p{0.9cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}} \toprule \multicolumn{1}{c}{\multirow{2}{*}{\scriptsize{Methods}}}& \multicolumn{4}{c}{\scriptsize error metric$\downarrow$}&\multicolumn{3}{c}{\scriptsize accuracy metric$\uparrow$}\cr \cmidrule(lr){2-4} \cmidrule(lr){5-7} &rms &abs rel&sq rel&$\delta_1$&$\delta_2$&$\delta_3$\cr \midrule \multirow{1}{*} {\scriptsize {rgb }}&\scriptsize.4161&\scriptsize.1977&\scriptsize.1140&\scriptsize{.6520}&\scriptsize{.9164}&\scriptsize{.9867}\cr \multirow{1}{*} {\scriptsize {focal stack}}&\scriptsize.3856&\scriptsize.1830&\scriptsize.0978&\scriptsize{.6927}&\scriptsize{.9298}&\scriptsize{.9890}\cr \multirow{1}{*} {\scriptsize {Baseline}}&\scriptsize.3739&\scriptsize.1727&\scriptsize.0899&\scriptsize{.7020}&\scriptsize{.9373}&\scriptsize{.9919}\cr \multirow{1}{*} {\scriptsize {+CRU}}& \scriptsize{.3393}&\scriptsize{.1616}&\scriptsize.0793&\scriptsize{.7431}&\scriptsize{.9546}&\scriptsize{.9945}\cr \multirow{1}{*} {\scriptsize {+CMFA}}& \scriptsize{.3372}&\scriptsize{.1578}&\scriptsize.0767&\scriptsize{.7488}&\scriptsize{.9551}&\scriptsize{.9943}\cr \multirow{1}{*} {\scriptsize {+CRU(md)+CMFA}}&\scriptsize.3240&\scriptsize.1533&\scriptsize.0738&\scriptsize.7672&\scriptsize.9586&\scriptsize.9943\cr \multirow{1}{*} {\scriptsize {+CRU(mg)+CMFA}}&\scriptsize.3134&\scriptsize.1493&\scriptsize.0697&\scriptsize.7757&\scriptsize.9644&\scriptsize.9945\cr \multirow{1}{*} {\scriptsize {+CRU+CMFA(Ours)}}&\scriptsize.3029&\scriptsize.1455&\scriptsize.0668&\scriptsize.7859&\scriptsize.9685&\scriptsize.9956\cr \bottomrule \end{tabular} \end{threeparttable} \end{table} \begin{table}[!ht]
\centering \setlength{\tabcolsep}{2mm} \begin{threeparttable} \caption{Quantitative results of the ablation analysis on LFSD for our network.} \label{tab:performance_comparison} \begin{tabular}{ccp{1.05cm}<{\centering}p{0.9cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}} \toprule \multicolumn{1}{c}{\multirow{2}{*}{\scriptsize{Methods}}}& \multicolumn{4}{c}{\scriptsize error metric$\downarrow$}&\multicolumn{3}{c}{\scriptsize accuracy metric$\uparrow$}\cr \cmidrule(lr){2-4} \cmidrule(lr){5-7} &rms&abs rel&sq rel&$\delta_1$&$\delta_2$&$\delta_3$\cr \midrule \multirow{1}{*} {\scriptsize {rgb}}&\scriptsize.4637&\scriptsize.2098&\scriptsize.1286&\scriptsize{.6013}&\scriptsize{.8855}&\scriptsize{.9797}\cr \multirow{1}{*} {\scriptsize {focal stack}}&\scriptsize.4106&\scriptsize.1821&\scriptsize.1000&\scriptsize{.6757}&\scriptsize{.9198}&\scriptsize{.9885}\cr \multirow{1}{*} {\scriptsize {Baseline}}&\scriptsize.4029&\scriptsize.1791&\scriptsize.0957&\scriptsize{.6785}&\scriptsize{.9327}&\scriptsize{.9899}\cr \multirow{1}{*} {\scriptsize {+CRU}}& \scriptsize{.3727}&\scriptsize{.1660}&\scriptsize.0833&\scriptsize{.7136}&\scriptsize{.9420}&\scriptsize{.9951}\cr \multirow{1}{*} {\scriptsize {+CMFA}}& \scriptsize{.3647}&\scriptsize{.1622}&\scriptsize.0807&\scriptsize{.7257}&\scriptsize{.9412}&\scriptsize{.9937}\cr \multirow{1}{*} {\scriptsize {+CRU(md)+CMFA}}&\scriptsize.3503&\scriptsize.1566&\scriptsize.0753&\scriptsize.7513&\scriptsize.9546&\scriptsize.9947\cr \multirow{1}{*} {\scriptsize {+CRU(mg)+CMFA}}&\scriptsize.3316&\scriptsize.1490&\scriptsize.0669&\scriptsize.7718&\scriptsize.9658&\scriptsize.9964\cr \multirow{1}{*} {\scriptsize {+CRU+CMFA(Ours)}}&\scriptsize.3167&\scriptsize.1426&\scriptsize.0627&\scriptsize.7814&\scriptsize.9686&\scriptsize.9964\cr \bottomrule \end{tabular} \end{threeparttable} \end{table} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{fig/Fig8.pdf}
\end{center} \vspace{-4mm} \caption{The visual results of the ablation analysis.} \label{fig:short2} \vspace{-6mm} \end{figure} \noindent {\bfseries{Effect of the Context Reasoning Unit (CRU).}} The CRU is proposed to comprehensively extract contextual information from the focal-slice and RGB features in order to explore the internal spatial correlations. It effectively reasons about the relationship between the in-focus and defocused regions of focal slices and the overall structural relations between different regions of RGB images. To verify the effectiveness of the CRU, we use it to replace the Conv 2D of the baseline network and evaluate its performance (denoted as '+CRU'). As shown in Table 1 and Table 2, '+CRU' significantly outperforms 'Baseline' on all evaluation metrics. Compared to the baseline, as shown in the $1^{st}$ row of Fig.4, the depth changes on small and thin objects are more obvious with the help of the CRU. Its relational reasoning ability can better mine the correlations between objects and different regions within the image to clarify local depth confusion. \noindent {\bfseries{Effect of the Attention-guided Cross-Modal Fusion module (CMFA).}} The CMFA is proposed to integrate the rich information in focal slices and RGB images to compensate for the detail loss caused by defocus blur. To prove that our fusion method captures complementary features better than simple concatenation, we use the CMFA (denoted as '+CMFA') to replace the simple concatenation and plain 3D convolution layers of the 'Baseline' between the corresponding hierarchical features. The quantitative results in Table 1 and Table 2 and the visual results in Fig.4 both show that '+CMFA' fuses the information of the paired focal-slice and RGB features better than the baseline network. It greatly reduces the estimation error over the dataset and achieves impressive accuracy improvements.
\noindent {\bfseries{Effect of the CRU and CMFA.}} To prove that our reasoning unit and fusion module can jointly extract and aggregate the different information in the focal stack and the RGB image, we build our final method by using them together. Moreover, we specifically explore the effectiveness of each component in the CRU. For convenience, we denote the multiple dilated convolutions and the multiple graph convolutions as 'CRU (md)' and 'CRU (mg)', respectively. Compared to '+CRU' and '+CMFA', combining the 'CRU (md)' module with 'CMFA' (denoted as '+CRU(md)+CMFA') and combining the 'CRU (mg)' module with 'CMFA' (denoted as '+CRU(mg)+CMFA') both obtain a clear improvement. The best results are obtained by combining the two modules together, denoted as '+CRU+CMFA'. As shown in Table 1 and Table 2, compared to 'Baseline', the RMSE value is reduced by nearly ${7\%}$ on DUT-LFDD and ${9\%}$ on LFSD. Our '+CRU+CMFA' greatly reduces the estimation error over the entire dataset and achieves impressive accuracy improvements, which shows that our modules effectively achieve their respective functions without interfering with each other. From the visual results in Fig.4, we can clearly observe that the depth maps of '+CRU+CMFA' contain more complete information. This clearly proves that our method can better refine and fuse the focal-slice and RGB features. \subsection{Comparisons with State-of-the-arts} We compare the results of our method with six state-of-the-art methods, including both deep-learning-based methods (\emph{DDFF} \cite{hazirbas2018deep}, \emph{EPINet} \cite{shin2018epinet}) and non-deep-learning methods marked with * (\emph{PADMM}$^*$ \cite{javidnia2018application}, \emph{VDFF}$^*$ \cite{moeller2015variational}, \emph{LFACC}$^*$ \cite{jeon2015accurate}, \emph{LF}$_-$\emph{OCC}$^*$ \cite{wang2015occlusion}).
For fair comparisons, we use the parameter settings provided by the authors and adjust some of the parameters to fit different datasets as needed. Note that because the LFSD dataset does not contain multi-view images, results of some methods are not available. \begin{table*}[!ht] \centering \setlength{\tabcolsep}{1mm} \begin{threeparttable} \caption{Quantitative comparisons with state-of-the-art methods. From top to bottom: DUT-LFDD dataset, LFSD dataset. $*$ denotes non-deep-learning methods. The best results are shown in \textbf{boldface}.} \label{tab:performance_comparison} \begin{tabular}{ccp{0.8cm}<{\centering}p{1.05cm}<{\centering}p{0.9cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}} \toprule \multicolumn{1}{c}{\multirow{2}{*}{type}}& \multicolumn{1}{c}{\multirow{2}{*}{methods}}& \multicolumn{4}{c}{error metric}&\multicolumn{3}{c}{accuracy metric}\cr \cmidrule(lr){3-5} \cmidrule(lr){6-8} &{}&rms&abs rel&sq rel&$\delta_1$&$\delta_2$&$\delta_3$\cr \midrule \multirow{7}{*}{DUT-LFDD}&Ours&{\bfseries{.3029}}&{\bfseries{.1455}}&{\bfseries{.0668}}&{\bfseries{.7859}}&{\bfseries{.9685}}&{\bfseries{.9956}}\cr &DDFF&.5282&.2666&.1838&.4817&.8196&.9658\cr &EPINet&.4974&.2324&.1434&.5010 &.8375 &.9837\cr &VDFF$^*$&.7326&.3689&.3303&.3348&.6283&.8407\cr &PADMM$^*$&.4730&.2253&.1509&.5891&.8560&.9577\cr &LFACC$^*$&.6897&.3835&.3790&.4913&.7549&.8783\cr &LF$_-$OCC$^*$&.6233&.3109&.2510&.4524&.7464&.9127\cr \midrule \multirow{8}{*}{LFSD}&Ours&{\bfseries{.3167}}&{\bfseries{.1426}}&{\bfseries{.0627}}&{\bfseries{.7814}}&{\bfseries{.9686}}&{\bfseries{.9964}}\cr &DDFF&.6222&.3593&.2599&.3447&.7352&.9476\cr &EPINet&-&-&-&-&-&- \cr &VDFF$^*$&.7842&.4736&.5076&.3631&.5932&.8150\cr &PADMM$^*$&.3395&.1798&.1012&.7661&.9363&.9744\cr &LFACC$^*$&-&-&-&-&-&-\cr &LF$_-$OCC$^*$&-&-&-&-&-&-\cr \bottomrule \end{tabular} \end{threeparttable} \vspace{-2mm} \end{table*} \begin{figure*}[!ht] \begin{center}
\includegraphics[width=0.9\linewidth]{fig/Fig4.pdf} \end{center} \vspace{-4mm} \caption{Comparisons with other methods on the DUT-LFDD dataset: RGB images, the corresponding ground truth, and the estimated results of different methods.} \label{fig:long} \vspace{-2mm} \end{figure*} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{fig/Fig5.pdf} \end{center} \vspace{-4mm} \caption{The visual results of our method on the LFSD dataset.} \label{fig:short1} \vspace{-4mm} \end{figure} \noindent {\bfseries{Quantitative Evaluation.}} As shown in Table 3, compared to other methods, our network achieves significantly superior performance in terms of all evaluation metrics on the LFSD and DUT-LFDD datasets, indicating that our model is more powerful. More specifically, due to the limited generalization ability caused by their reliance on prior knowledge, none of the non-deep-learning methods obtain better results on either dataset. Compared to DDFF, our method successfully introduces the RGB information, which significantly enhances the depth map. Moreover, benefiting from more thorough contextual-information reasoning and fusion of multi-modal features, our method still achieves much better performance than DDFF and EPINet. \noindent {\bfseries{Qualitative Evaluation. }}We also visually compare our method with representative methods on the DUT-LFDD and LFSD datasets. As shown in Fig.5 and Fig.6, our method is able to handle a wide range of challenging scenes. In challenging cases, such as similar foreground and background, multiple or transparent objects, and complex backgrounds, our method can highlight depth changes and fine detail information. In contrast, most other methods fail to predict correct depth values due to the lack of high-level contextual reasoning or a robust multi-modal fusion strategy.
However, the proposed network is able to utilize both cross-modal and cross-level complementary information to cooperatively learn discriminative depth cues. Although the boundaries of the resulting depth maps appear slightly blurry, they are more accurate and contain much less noise. These visual results convincingly verify that our proposed network is more effective and robust. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{fig/Fig6.pdf} \end{center} \vspace{-2mm} \caption{The visual results of our method on the mobile phone dataset.} \label{fig:short2} \vspace{-4mm} \end{figure} \noindent {\bfseries{Adaptation on the Mobile Phone Dataset.}} To demonstrate the applicability of our method, we evaluate our framework on the smart-phone camera dataset \cite{suwajanakorn2015depth}, which is pre-aligned. Note that alignment should be considered for practical applications. We feed 12 focal slices and the RGB image from this dataset into our framework. As shown in Fig.7, our method generalizes easily to the mobile phone dataset. Compared with the results from DDFF, our method captures more detailed information. This demonstrates that our method is well suited for daily-life scenarios. \section{Conclusions} In this paper, we propose an effective network for predicting depth maps from the focal stack and the RGB image, which can be trained end-to-end. Our method improves performance in the following aspects: 1) it comprehensively extracts contextual information and reasons about internal spatial correlations using an effective context reasoning unit (CRU); 2) it effectively fuses paired contextual information extracted from the focal stack and the RGB image with an attention-guided cross-modal fusion module (CMFA). We thoroughly validate the effectiveness of each component in the network and show a gradual, cumulative increase in accuracy.
Experimental results also demonstrate that our method achieves new state-of-the-art performance on two light field datasets and a mobile phone dataset. \bibliographystyle{splncs04}
\section{Introduction} The last few decades have been characterized by an exponential growth in the number of available news sources, making the task of following news stories in real time very difficult to perform manually. As such, a demand has risen for systems capable of monitoring and organizing articles into news stories. Most approaches to this task focus mainly on the English language \cite{newslens,dense_vs_sparse,entity_aware}, with multilingual systems being highly dependent on language-specific features such as the entities of a given document \cite{batch_clustering,miranda}. Such approaches perform poorly in a multilingual scenario and are hard to extend to low-resource languages. Taking those limitations into account, we propose an online news clustering system that is able to cluster documents across languages (for which pretrained multilingual contextual embeddings exist) while maintaining performance in monolingual scenarios. The contributions described in the paper are: \textbf{(i)} We develop a system that is able to cluster documents without depending on language-specific features; \textbf{(ii)} We empirically demonstrate that the use of multilingual contextual embeddings as the document representation significantly improves clustering quality; \textbf{(iii)} We propose a method to train a classifier to merge similar clusters in an online setting, and demonstrate its impact on obtaining state-of-the-art results for multilingual clustering; \textbf{(iv)} We show that our system performs well on languages not seen during training and we describe a zero-shot experimental setting for Chinese, Russian, French, Italian, Slovenian and Croatian. \section{Related Work} The Topic Detection and Tracking (TDT) task \cite{tdt} has the goal of arranging a stream of news articles into topic clusters called stories. Regarding batch clustering approaches, Laban et al.
introduce \textit{newsLens} \cite{newslens}, a batch-based approach to news clustering. \textit{NewsLens} constructs its stories by extracting keywords from the articles and linking them through a community detection algorithm. Staykovski et al. \cite{dense_vs_sparse} follow up on \textit{newsLens} by implementing a sparse approach based on TF-IDF bag-of-words document representations, and compare it against a dense doc2vec \cite{doc2vec} representation approach. Linger et al. \cite{batch_clustering} extend the aforementioned studies to a crosslingual setting by processing batches of articles into monolingual topics and using a fine-tuned multilingual DistilBERT \cite{distilbert} to link topics across languages. For online clustering, Miranda et al. \cite{miranda} approach the problem by processing a continuous stream of multilingual documents into monolingual and crosslingual clusters. Each document is first associated with a monolingual cluster through sparse features, and crosslingual clusters are computed by linking different monolingual clusters using crosslingual word embeddings \cite{gardner}. Saravanakumar et al. \cite{entity_aware} propose an online news clustering system based on the non-parametric K-means algorithm. Their approach uses both sparse and dense features, with a main emphasis on a fine-tuned entity-aware BERT model \cite{bert_base} to produce dense document representations, and is evaluated on the English language. Our system follows the described online clustering approaches to the TDT task: while Miranda et al.'s system was bound to specific individual models for each language because it processes monolingual clusters, our approach leverages dense multilingual document representations without the need to first process the documents into monolingual clusters.
This is accomplished by using a single crosslingual representation for the documents and consequently training our ranking and classification models at a crosslingual level, which also allows for a fully dense clustering space and guarantees that our system is not limited to the English language, unlike Saravanakumar et al.'s approach. \begin{comment} explanation for "much simpler system" \end{comment} \section{The Clustering Algorithm} \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{distilclustering.drawio.pdf} \caption{Representation of our clustering system's ranking, acceptance and merge steps.} \label{fig:architecture} \end{figure*} Our main focus for this task is to build an online multilingual news clustering system that depends as little as possible on language-specific features, in order to process news articles in zero-shot languages (for which we have no clustering training data) without a considerable loss in performance. Previous approaches mostly focus on a single language \cite{newslens,dense_vs_sparse,entity_aware} or a specific set of languages \cite{batch_clustering,miranda}. \begin{comment} In particular, extracting information about entities can be a challenging task for low-resource languages. \end{comment} Our system is composed of four main steps (partially displayed in Figure \ref{fig:architecture}): obtaining the document representations, computing the best-ranked cluster, deciding whether the document accepts the best-ranked cluster and enters it, and merging clusters that pertain to the same story. \subsection{Document Representation} \label{doc_repr} In contrast to previous work, we use a representation for each document that does not depend on its language, thus eliminating the need to distinguish between monolingual and crosslingual representations.
Each document comprises two components: a set of dense vectors $d^r$ corresponding to a contextual representation of the document, and a temporal representation $d^{ts}$. To obtain the dense vectors we use \textit{distiluse}\footnote{https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2}, a model produced through knowledge distillation \cite{knowledge_distil} on Multilingual DistilBERT \cite{distilbert} by using mUSE (multilingual Universal Sentence Encoder) \cite{muse} as its teacher model. Similarly to mUSE, this model aligns text at the sentence level \cite{sentencebert} into a shared semantic space; thus, similar sentences in different languages will be closely mapped in the vector space. The model supports over 50 languages, and does not require specification of the input language. For each document, $d^r$ contains three dense representations: $d_{1}^r$ corresponds to its body+title, $d_{2}^r$ to its f.p. (first paragraph), and $d_{3}^r$ to its f.p.+title. For the first paragraph, the retrieved representation corresponds directly to the output of the model's encoder, while for the first paragraph + title, mean pooling is performed between the two output vectors corresponding to each component. Finally, for the body + title, the body is segmented into paragraphs and each paragraph is processed individually by the encoder, with the final output vector obtained through mean pooling of each paragraph's representation and the title. The title and the first paragraph of a document are used as features with the intuition that sentences at the beginning of a news document usually have the greatest importance to the article \cite{word_importance, automatic_summarization}. Additionally, a representation of the title alone was not used as a feature because certain news articles contain only the text of the article and no title.
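The three dense views described above (body+title, f.p., f.p.+title) can be sketched as follows. Here `encode` is a stand-in for the distiluse sentence encoder (any function mapping a list of texts to a matrix of embeddings, e.g. sentence-transformers' `model.encode`), and the pooling follows the mean-pooling scheme described in the text:

```python
import numpy as np

def document_representations(title, paragraphs, encode):
    """Build the three dense views of a document (a sketch).

    `encode` stands in for the distiluse sentence encoder; it maps a
    list of texts to a matrix with one embedding per text.
    """
    title_vec = encode([title])[0]
    para_vecs = encode(paragraphs)              # one vector per body paragraph
    fp_vec = para_vecs[0]                       # d2: first paragraph only
    fp_title_vec = np.mean([fp_vec, title_vec], axis=0)   # d3: f.p. + title
    # d1: mean pooling over all paragraph vectors and the title vector
    body_title_vec = np.mean(np.vstack([para_vecs, title_vec[None]]), axis=0)
    return body_title_vec, fp_vec, fp_title_vec
```

The function is encoder-agnostic, so it works with any sentence-embedding model that maps texts to fixed-size vectors.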
Regarding the temporal representation, we follow previous approaches \cite{miranda} and represent the temporal component of a document as the value of its timestamp at the level of the day. When comparing a document's timestamp $d^{ts}$ against a given cluster's timestamp $c^{ts}$, we compute the Gaussian similarity between the two timestamps (with $\mu$ and $\sigma$ corresponding to hyper-parameters) as represented in the function below: \begin{equation} score^{ts}(d^{ts}, c^{ts}) = \exp \left(-\frac{\left((d^{ts} - c^{ts}) - \mu\right)^2}{2\sigma^2}\right) \end{equation} Clusters are likewise divided between dense ($c^r$) and temporal ($c^{ts}$) representations, with each cluster keeping three centroids for the document representations (body+title $c^r_1$, f.p. $c^r_2$, f.p.+title $c^r_3$) that correspond to the average of the respective representations of each accepted document. When a document's representations are received, each centroid is updated to take them into account. A cluster also maintains timestamps for the newest document ($c^{ts}_1$), the oldest ($c^{ts}_2$), and the mean timestamp over all documents in the cluster ($c^{ts}_3$); two timestamps are compared with the Gaussian similarity $score^{ts}$ defined above, as proposed in previous work \cite{miranda}. After a cluster is created, it is stored in the cluster pool, a structure that is responsible for maintaining the clusters and archiving old clusters as the system grows in size. \subsection{Cluster Ranking and Acceptance Models} \label{rank_merge} After computing its representations, a given document $d$ is compared against each cluster $c$ in the cluster pool in order to retrieve the cluster most similar to $d$. To determine this similarity, we compute each cluster's ranking score, and the best-ranked cluster is then evaluated by the acceptance model.
If the cluster is accepted by the model, then $d$ enters the cluster and its representations are updated; otherwise, a new cluster containing $d$ is created. Temporal features are computed through the aforementioned $score^{ts}$ function, and the dense features are obtained by computing the cosine similarity ($score^{cos}$), with $d^{r}$ being a given representation of the document and $c^{r}$ a representation of the cluster, as follows: \begin{equation} score^{cos} (d^{r}, c^{r}) = \frac{d^{r} \cdot c^{r}}{|d^{r}| |c^{r}|} \end{equation} The ranking score for a cluster $c$ given a document $d$ and the ranking model's learned SVM weights $u^r$ and $u^{ts}$ is represented as follows: \begin{equation} \begin{split} score^{rank}(d,c) {} & = \sum_{i = 1}^{3}\left( score^{cos}(d_{i}^{r},c_{i}^{r}) \cdot u_{i}^{r} \right) + \sum_{j = 1}^{2}\left( score^{cos}(d_{j+1}^{r},c_{1}^{r}) \cdot u_{j+3}^{r} \right) \\ & + \sum_{k = 1}^{3}\left( score^{ts}(d^{ts},c_{k}^{ts}) \cdot u_{k}^{ts} \right) \end{split} \end{equation} The ranking model takes the form of a Rank-SVM model \cite{ranksvm}, which we train using a scheme similar to Miranda et al.'s \cite{miranda}. Given the training partition, each document generates a positive example corresponding to its gold cluster, and 20 negative examples for the 20 best-ranked clusters that are not the gold cluster. These examples are then used to train a Rank-SVM, yielding the learned weights $u^r$ and $u^{ts}$ for each of the features.
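A minimal sketch of the ranking score above: `cos` implements $score^{cos}$, `score_ts` the day-level Gaussian similarity, and the weights `u_r`, `u_ts` are assumed to come from an already-trained Rank-SVM (the training itself is not shown):

```python
import math

def cos(a, b):
    # score^cos: cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def score_ts(d_ts, c_ts, mu=0.0, sigma=1.0):
    # score^ts: Gaussian similarity of day-level timestamps
    return math.exp(-((d_ts - c_ts) - mu) ** 2 / (2 * sigma ** 2))

def score_rank(doc, cluster, u_r, u_ts, mu=0.0, sigma=1.0):
    """score^rank: weighted sum of 5 cosine and 3 temporal features (sketch).

    doc = (d1, d2, d3, d_ts); cluster = (c1, c2, c3, ts1, ts2, ts3);
    u_r (5 weights) and u_ts (3 weights) come from the trained Rank-SVM.
    """
    d1, d2, d3, d_ts = doc
    c1, c2, c3, ts1, ts2, ts3 = cluster
    feats = [cos(d1, c1), cos(d2, c2), cos(d3, c3),   # matched views
             cos(d2, c1), cos(d3, c1)]                # f.p. views vs body centroid
    feats += [score_ts(d_ts, t, mu, sigma) for t in (ts1, ts2, ts3)]
    weights = list(u_r) + list(u_ts)
    return sum(f * w for f, w in zip(feats, weights))
```

The acceptance score has the same shape with its own weights $v^r$, $v^{ts}$ plus a bias term, so the same feature extraction can be reused for both models.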
After computing the best-ranked cluster $c$ for a given document $d$, the acceptance model determines whether the document enters the cluster by computing its acceptance score through an SVM, given a bias parameter $b$ and a set of similarity features with learned weights $v^r$ and $v^{ts}$, which takes the following form: \begin{equation} \begin{split} score^{accept}(d,c) {} & = \sum_{i = 1}^{3}\left( score^{cos}(d_{i}^{r},c_{i}^{r}) \cdot v_{i}^{r} \right) + \sum_{j = 1}^{2}\left( score^{cos}(d_{j+1}^{r},c_{1}^{r}) \cdot v_{j+3}^{r} \right) \\ & + \sum_{k = 1}^{3}\left( score^{ts}(d^{ts},c_{k}^{ts}) \cdot v_{k}^{ts} \right) + b \end{split} \end{equation} If $score^{accept}$ is greater than zero, then $d$ is accepted into $c$; otherwise, a new cluster is created and initialized with $d$. The acceptance model is an SVM trained on the training partition of the dataset: each document generates a positive sample for its corresponding gold cluster, and its second-best-ranked cluster is given as a negative example. \subsection{Merging Clusters} After a cluster receives a new document, we rank its similarity to each of the other clusters in the cluster pool using the ranking model (described in Section \ref{rank_merge}). Each candidate cluster is then evaluated by a third SVM model, which we call the \textit{cluster merge model}, and the documents from each cluster with a positive merge decision are inserted into the source cluster. The intuition for this model is to find separate clusters that have grown to pertain to the same story, and subsequently merge them. This may happen throughout the clustering process: when only a few documents pertaining to a given story have entered the system, the acceptance model may mistakenly assign those documents to separate clusters. As more relevant documents enter the system, those clusters may end up at similar points in the vector space, and thus should be merged.
For this model, we use the eight features specified in Section \ref{rank_merge} as well as $score^{accept}$, plus two additional features corresponding to the size of each cluster of the evaluated pair, with the intuition that merging mostly concerns clusters of small size. The $score^{size}$, given a cluster $c$ with $k$ documents and a size limit vector $v$ of length $n$, is represented by the following equation: \begin{equation} score^{size}(c, v) = \sum_{i=1}^{n}\left(\begin{cases} \frac{1}{n}, \textnormal{ if } k > v[i]; \\ 0, \textnormal{ if } k \leq v[i] \end{cases} \right) \end{equation} To train the model, we generate a dataset by sampling pairs of clusters and labeling them according to whether they should be kept separate or merged. For each pair, we evaluate the relative F1 given the gold label of each document in the cluster: if the computed value is higher when the clusters are merged, a positive sample is produced and the clusters are merged for the training; otherwise, a negative sample is generated. This is done without forcing documents into their gold clusters, and with the ranking and acceptance models trained accordingly. \section{Experiments} \subsection{Dataset} \begin{table} \caption{\label{tab:dataset} Statistics for the train and test partitions of the dataset.
The training dataset does not contain documents in Slovenian, Croatian, French, Russian or Italian.} \centering \small \setlength{\tabcolsep}{3pt} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline & \textbf{Language} & en & es & de & zh & sl & hr & fr & ru & it \\ \hline \textbf{Train} & \textbf{Docs} & 12233 & 4527 & 4043 & 10 & - & - & - & - & - \\ & \textbf{Clusters} & 593 & 416 & 377 & 1 & - & - & - & - & - \\ \hline \textbf{Test} & \textbf{Docs} & 8726 & 2177 & 2101 & 440 & 37 & 13 & 61 & 231 & 88 \\ & \textbf{Clusters} & 222 & 149 & 118 & 9 & 3 & 2 & 2 & 1 & 2 \\ \hline \end{tabular} \end{table} We follow previous work on this task and evaluate our system on a news clustering dataset \cite{rupnik}. Besides the three main languages (English, Spanish and German), this dataset also provides a significant amount of documents in Chinese and Russian, as well as documents in Slovenian, Croatian, French and Italian. These samples allow us to roughly preview the performance of the system on languages other than the ones it was trained on. The dataset is composed of 34,687 news documents, and it is divided into two sets: a training set comprising 20,813 articles, and a test set containing 13,874 articles. The articles in the training set are dated from \textit{18-12-2013} to \textit{02-11-2014}, while the articles in the test set are dated between \textit{02-11-2014} and \textit{27-08-2015}, thus guaranteeing that the articles in the test set are newer and their themes have not been observed in the training set. Further statistics regarding the dataset are presented in Table \ref{tab:dataset}.
\subsection{Evaluation} \begin{table*}[] \caption{\label{tab:monolingual} Results for monolingual clustering on the test dataset.} \centering \small \setlength{\tabcolsep}{3pt} \begin{tabular}{c|c|ccc|ccc|c} \hline \multicolumn{1}{c|}{\textbf{Language}} & \multicolumn{1}{c|}{\textbf{Systems}} & \multicolumn{3}{c|}{\textbf{BCubed}} & \multicolumn{3}{c|}{\textbf{Standard}} & \multicolumn{1}{c}{\textbf{Clusters}} \\ & \multicolumn{1}{l|}{} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \multicolumn{1}{l}{} \\ \hline & Miranda et al. & 92.36 & 94.27 & 90.25 & 94.03 & 98.14 & 90.25 & 326 \\ & Staykovski et al. & 94.41 & 95.16 & 93.66 & 98.11 & 97.60 & 98.63 & 484 \\ English & Linger et al. & 93.86 & 94.19 & 93.55 & \textbf{98.31} & 98.21 & 98.42 & 298 \\ & Saravanakumar et al. & \textbf{94.76} & 94.28 & 95.25 & - & - & - & - \\ & Ours & 92.43 & 92.76 & 92.10 & 96.46 & 96.50 & 96.41 & 470 \\ \hline & Miranda et al. & 91.61 & 96.44 & 87.25 & 96.83 & 97.01 & 96.65 & 281 \\ Spanish & Linger et al. & \textbf{91.79} & 93.76 & 90.08 & \textbf{97.68} & 98.02 & 97.34 & 267 \\ & Ours & 90.39 & 95.01 & 86.20 & 95.48 & 95.48 & 95.48 & 293 \\ \hline & Miranda et al. & 93.64 & 98.92 & 88.90 & 97.19 & 99.86 & 94.67 & 229 \\ German & Linger et al. & \textbf{94.62} & 95.13 & 94.31 & 98.70 & 99.16 & 98.24 & 205 \\ & Ours & 93.71 & 97.68 & 90.04 & \textbf{99.07} & 99.64 & 98.50 & 217 \\ \hline \end{tabular} \end{table*} Regarding evaluation metrics, we follow the same approach as \cite{dense_vs_sparse,batch_clustering} and report the F1 score and the BCubed F1 \cite{bcubed} score, as well as the associated Precision and Recall scores. Each sample document of the test dataset contains a label with the expected cluster ID, and since the clusters described in the test dataset are monolingual, the crosslingual connections are given by a positive/negative label between two clusters. 
As such, for the standard F1 score, a \textit{true positive} is a pair of documents accepted into the same cluster whose cluster labels match (monolingual) or share a positive connection (crosslingual). A \textit{false positive} is a pair of documents accepted into the same cluster whose cluster labels neither match nor share a positive connection. A \textit{true negative} is a pair of documents accepted into different clusters whose cluster labels neither match nor share a positive connection, and a \textit{false negative} is a pair of documents accepted into different clusters whose cluster labels match (monolingual) or share a positive connection (crosslingual). For the BCubed F1 score, as previously described by Staykovski et al. \cite{dense_vs_sparse} and Amigó et al. \cite{bcubed}, the BCubed precision of a document corresponds to the proportion of documents in its cluster whose cluster label is the same, including itself. The BCubed recall of a document is the proportion of documents with the same label as that document (in the whole dataset) that appear in its cluster.
The correctness between two documents $i$ and $j$, given the label $L_i$ and the cluster $C_i$ of each document $i$, is computed as follows: \begin{equation} Correctness(i, j) = \begin{cases} 1, \textnormal{ if } L_i = L_j \textnormal{ and } C_i = C_j \\ 0,\textnormal{ otherwise} \end{cases} \end{equation} The overall BCubed precision, recall and F1 score are computed as follows: \begin{equation} \begin{split} \textnormal{BCubed } P &= Avg_i[Avg_{j : C_j=C_i}[Correctness(i,j)]] \\ \textnormal{BCubed } R &= Avg_i[Avg_{j : L_j=L_i}[Correctness(i,j)]] \\ \textnormal{BCubed } F_1 &= 2 \cdot \frac{\textnormal{BCubed } P \cdot \textnormal{BCubed } R}{\textnormal{BCubed } P + \textnormal{BCubed } R} \end{split} \end{equation} For the monolingual evaluation, we evaluate the clustering performance of our model on the three main languages of the dataset by performing clustering using only the documents of the specified language, while the crosslingual evaluation uses the entirety of the test set regardless of language. In order to evaluate the results, a gold set of cluster labels is provided for each document that indicates its expected cluster. The clusters are typically multilingual, and in accordance with previous work \cite{miranda, batch_clustering}, the crosslingual evaluation takes into account both monolingual and crosslingual connections between documents of a cluster. \subsection{Experimental Results} \begin{table}[] \caption{\label{tab:crosslingual} Crosslingual clustering results on the test dataset.} \centering \small \setlength{\tabcolsep}{3pt} \begin{tabular}{l|ccc|ccc|c} \hline \multicolumn{1}{c|}{\textbf{Systems}} & \multicolumn{3}{c|}{\textbf{BCubed}} & \multicolumn{3}{c|}{\textbf{Standard}} & \multicolumn{1}{l}{\textbf{Clusters}} \\ \multicolumn{1}{c|}{} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \multicolumn{1}{l}{} \\ \hline Miranda et al. & - & - & - & 84.0 & 83.0 & 85.0 & - \\ Linger et al.
& 82.06 & 80.25 & 83.97 & 86.49 & 85.11 & 87.92 & 606 \\ \hline (Ours) 4-F Rank+Acc. & 88.02 & 91.31 & 84.95 & 92.34 & 97.26 & 87.09 & 957 \\ (Ours) 8-F Rank+Acc. & 89.24 & 92.62 & 86.11 & 93.76 & 97.66 & 90.15 & 1023 \\ (Ours) 8-F Rank+Acc.+M. & \textbf{90.10} & 89.70 & 90.51 & \textbf{97.21} & 97.01 & 97.42 & 812 \\ \hline \end{tabular} \end{table} As shown in Table \ref{tab:monolingual}, in the monolingual evaluation our system is on par with Miranda et al.'s in English and German on both metrics, but is surpassed by Linger et al.'s system on both metrics in all languages, except for the standard F1 in German. For crosslingual clustering, as shown in Table \ref{tab:crosslingual}, our system achieves state-of-the-art performance on BCubed F1 \cite{bcubed} (+8.04) and on the standard F1 (+11.33), despite producing a larger number of clusters. Furthermore, we perform an ablation study that shows the relative importance of the system components. 4-F Rank+Acc. refers to the clustering system with a 4-feature ranking and acceptance model, which uses only $score^{cos}(d^{r}_1, c^{r}_{1})$ and the timestamp features. Adding the other features (8-F Rank+Acc.) improves both the standard (+1.42) and BCubed F1 (+1.22). Finally, adding the cluster merge model (8-F Rank+Acc.+M.) results in gains for both the standard (+3.35) and BCubed F1 (+0.86).
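The BCubed scores reported in these tables can be sketched directly from the per-document precision/recall averages described in the evaluation section. A minimal illustration for the monolingual case, with gold labels and predicted cluster ids as parallel lists:

```python
def bcubed(labels, clusters):
    """BCubed precision, recall and F1 over parallel lists of gold labels
    and predicted cluster ids (a sketch of the metric, monolingual case)."""
    n = len(labels)
    p_sum = r_sum = 0.0
    for i in range(n):
        same_cluster = [j for j in range(n) if clusters[j] == clusters[i]]
        same_label = [j for j in range(n) if labels[j] == labels[i]]
        correct = [j for j in same_cluster if labels[j] == labels[i]]
        p_sum += len(correct) / len(same_cluster)  # per-doc BCubed precision
        r_sum += len(correct) / len(same_label)    # per-doc BCubed recall
    p, r = p_sum / n, r_sum / n
    return p, r, 2 * p * r / (p + r)
```

The quadratic pairwise scan is fine for evaluation-sized data; grouping documents by cluster and by label first would make it linear.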
\begin{table}[] \caption{\label{tab:other_langs} Clustering results on other languages.} \centering \small \setlength{\tabcolsep}{3pt} \begin{tabular}{l|ccc|ccc|c} \hline \multicolumn{1}{c|}{\textbf{Languages}} & \multicolumn{3}{c|}{\textbf{BCubed}} & \multicolumn{3}{c|}{\textbf{Standard}} & \multicolumn{1}{l}{\textbf{Clusters}} \\ \multicolumn{1}{c|}{} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \multicolumn{1}{l}{} \\ \hline Chinese & 96.18 & 100.00 & 92.65 & 99.07 & 100.00 & 98.16 & 28 \\ Slovenian & 76.92 & 100.00 & 62.50 & 79.67 & 100.00 & 66.21 & 12 \\ Croatian & 77.85 & 100.00 & 63.73 & 74.99 & 100.00 & 60.00 & 5 \\ French & 98.50 & 100.00 & 97.04 & 99.69 & 100.00 & 99.39 & 3 \\ Russian & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1 \\ Italian & 98.86 & 100.00 & 97.75 & 98.78 & 100.00 & 97.59 & 3 \\ \hline \end{tabular} \end{table} Given the nature of our system, we evaluated it on the remaining languages of the dataset as shown in Table \ref{tab:other_langs}. Our ranking, acceptance and cluster merge models were not trained on any data from these languages (with the exception of Chinese), making this a zero-shot clustering scenario. Chinese, French, Russian and Italian document clustering had high F1-scores, with results above 95\%, and both Slovenian and Croatian had initial clustering scores above 70\%. \section{Conclusion} We presented a clustering model that produces state-of-the-art results at a multilingual level without depending on language-specific features, and that maintains quality at a monolingual level on-par to previous work on news clustering. \begin{comment} Our model was evaluated on English, German and Spanish, as well as on a significant amount of documents in Chinese, Croatian, Slovenian, French, Russian and Italian despite not seeing these languages during training. 
\end{comment} We demonstrated that it is possible to improve results by utilizing contextual embeddings to represent documents at a crosslingual level, and how a linear SVM can be trained to perform such a task. By reducing the complexity of the clustering space, we motivate future research on topics such as clustering that takes user feedback into account, and high-performance vector search to improve clustering speed and scalability. Our system also enables computational efficiency improvements by allowing most operations to be parallelized. We make our code available as open-source\footnote{\textit{https://github.com/Priberam/projected-news-clustering}}. \section*{Acknowledgements} This work is supported by the EU H2020 SELMA project (grant agreement No 957017).
\section{Introduction} The prediction of landslides, in particular the discovery of the triggering mechanism, is one of the challenging problems in earth science. The term landslide has been defined in the literature as a movement of a mass of rock, debris or earth down a slope under the force of gravity \cite{Varnes1958, Cruden1991}. Landslides occur in nature in very different ways, and it is possible to classify them on the basis of the involved material and the type of movement \cite{Varnes1978}. Landslides can be triggered by different factors, but in most cases the trigger is an intense or prolonged rainfall. Rainfall-induced landslides involve different fields, such as engineering geology, soil mechanics, hydrology and geomorphology \cite{Crosta2007}. With the rapid development of computers and advanced numerical methods, detailed mathematical models are increasingly being applied to the investigation of complex process dynamics such as flow-like landslides or debris flows. In the literature, two approaches have been proposed to evaluate the dependence of landslides on rainfall measurements. The first approach relies on dynamical models, while the second is based on the definition of empirical rainfall thresholds above which the triggering of one or more landslides is possible \cite{Segoni2009, Martelloni2011}. Several methods have been developed to simulate the propagation of a landslide; most of the numerical methods are based on a continuum approach using an Eulerian point of view \cite{Crosta2003, Patra2005}. An alternative to these approaches is to use Lagrangian discrete-particle methods, which represent the material as an ensemble of interacting elements, called particles or grains. The commonly adopted term for numerical methods for discrete systems made of non-deformable elements is the discrete element method (DEM), which is particularly suitable to model granular materials, debris flows and flow-like landslides \cite{Iardanoff2010}.
The DEM is very closely related to molecular dynamics (MD); the former method is generally distinguished by the inclusion of rotational degrees of freedom as well as stateful contact and often complicated geometries. As usual, the computational load can become very onerous as the complexity or the number of individual elements increases. The inclusion of a more detailed description of the units allows for more realistic simulations. However, the accuracy of the simulation has to be compared with the available experimental data. While for laboratory experiments it is possible to collect very accurate data, this is not possible for real landslides. These arguments motivated us to explore the consequences of reducing the complexity of the model as much as possible. In this paper we present a toy model applied to the study of the triggering and progression of particles down a slope, whose displacement is induced by rainfall \cite{Massaro2011}. The inclusion of the effect of fluids on a granular material is a challenging problem. The main hypothesis of our model is that the static friction decreases as a result of the rain, which acts as a lubricant; this friction law is inspired by Jop et al. \cite{Jop2006}. At present we do not claim to be able to simulate a real landslide or debris flow; rather, we want to explore a new alternative approach useful for this kind of problem. The resulting numerical method, similar to that of molecular dynamics (MD), is based on the use of an interaction potential, i.e. the 2-1 Lennard-Jones one. This approach is particularly suited for the inclusion of nonlinear terms such as those given by instantaneous changes of velocities, constitutive relations among different quantities, chemical reactions, etc. This flexibility has also been exploited in the modeling of continuous materials by means of ``mesoscale'' models.
Although the model is still schematic, and known constitutive relations are not yet included, its emerging behavior is quite promising. The results are consistent with the behavior of real shallow landslides induced by rainfall. Emerging phenomena such as fractures, detachments and arching can be observed in the simulations. In particular, the model reproduces well the time distribution of local avalanches inside the landslide, analogous to the Omori distributions observed for earthquakes. These power laws are in general considered the signature of self-organizing phenomena. As in other models, this self-organization is related to a large separation of time scales. The main advantage of these particle methods is the capability of following the trajectory of a single particle, possibly identifying its dynamical properties. \section{The Model} We are interested in modeling superficial landslides; therefore, we describe an inclined soil layer as a two-dimensional structure formed by a set of masses or blocks. The model is based on the interaction forces that act among blocks in the coordinate system along the surface. The triggering conditions are based on a modified Mohr-Coulomb law. The forces that act on the particles are the following. \subsection*{Force of gravity} \begin{equation} \boldsymbol{F}^{(g)}_{i} = g \sin(\alpha) (m_{i}+w_{i}(t),0), \label{gravity} \end{equation} where $g$ is the gravity acceleration, $\alpha$ is the angle of the slope (assumed constant), $m_i$ is the dry mass, variable from block to block, and $w_i$ is the water absorbed cumulatively in time, defined as: \begin{equation} w_{i}(t) = \int \sigma_{wi} (t)\;\mathrm{d} t, \label{infiltration} \end{equation} where $\sigma_{wi}(t)$ is the rate of water absorption due to the rainfall.
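As an illustration, the downslope gravity force of Eq. \eqref{gravity} and a discretized version of the infiltration integral \eqref{infiltration} can be sketched in a few lines of Python (function and variable names are ours, and a constant absorption rate is assumed for the discretized integral):

```python
import math

def gravity_force(m_i, w_i, alpha, g=9.81):
    """Gravity force on block i in slope coordinates:
    only the component along the slope (x) is non-zero."""
    return (g * math.sin(alpha) * (m_i + w_i), 0.0)

def absorbed_water(sigma_w, dt, n_steps):
    """Cumulative absorbed water w_i(t) as a discrete sum of the
    infiltration integral, assuming a constant rainfall rate sigma_w."""
    return sigma_w * dt * n_steps
```

With a constant rate the sum reduces to $\sigma_w \Delta t \, n$; a time-dependent $\sigma_{wi}(t)$ would simply be accumulated step by step instead.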
\subsection*{Static Friction} The static friction $\boldsymbol{F}^{(s)}_{i}$ is given by: \begin{equation} \boldsymbol{F}^{(s)}_{i} = (m_{i} + w_{i}(t))g\cos(\alpha)(\mu_s \exp(-w_0 t) + \mu_{slow}(1-\exp(-w_0 t))). \label{static} \end{equation} The force in Eq. \eqref{static} depends on two friction coefficients, $\mu_s$ and $\mu_{slow}$, respectively the initial coefficient at $t=0$ and the final one for $t\rightarrow \infty$, with $\mu_s>\mu_{slow}$. In synthesis, the effect of rainfall is to decrease the friction on the sliding surface of the landslide over time (through the rate constant $w_0$ of the exponential). Moreover, the friction coefficients $\mu_s$ and $\mu_{slow}$ vary randomly (in small increments) with the position, thus modeling the roughness of the sliding surface. \subsection*{Dynamic Friction} When the block is moving, the applied force is: \begin{equation} \boldsymbol{F}^{(d)}_{i} = (m_{i} + w_{i}(t))g\cos(\alpha)(\mu_d \exp(-w_0 t) + \mu_{dlow}(1-\exp(-w_0 t))) \cdot \left(-\frac{\boldsymbol{v}}{\lvert\boldsymbol{v}\rvert}\right). \label{dynamic} \end{equation} Eq. \eqref{dynamic} is similar to Eq. \eqref{static}, but the direction of the force is opposed to the velocity. The friction coefficients (static and dynamic) are randomly assigned to spatial zones according to a Gaussian distribution; in this way we model a rough sliding surface. Similarly to the static case, the coefficients $\mu_d$ and $\mu_{dlow}$ vary randomly. \subsection*{Interaction forces among blocks} The interaction force between two blocks or particles is defined through a potential that, in the absence of experimental data, we model after a $2-1$ Lennard-Jones one (Eq. \eqref{lennard}). The justification of this choice is given in the section “Simulation Methodology”.
\begin{equation} \boldsymbol{F}^{(i)}_{ij}=-\boldsymbol{F}^{(i)}_{ji} =-\nabla \left( 4\varepsilon \cdot \left[\left(\frac{r}{R_{ij}}\right)^{-2b} - \left(\frac{r}{R_{ij}}\right)^{-b}\right] \right), \label{lennard} \end{equation} where $R_{ij}=1$ is the equilibrium distance between two blocks, $b=1$ and $r$ is the distance between the two blocks: $r= \sqrt{(x_{j} - x_{i})^{2} + (y_{j} - y_{i})^{2}}$. \subsection*{Force of cohesion} At the beginning, the system is prepared in equilibrium, that is, the blocks are disposed on a regular grid. We can assume, in agreement with the Mohr-Coulomb law as modified by Terzaghi \cite{Terzaghi1943} (see Eq. \eqref{coulomb}), the presence of a shear stress, due to a cohesion force, even in a condition of zero normal stress; this principle is expressed as: \begin{equation} \tau_{f} = c' + \sigma' \tan(\phi'). \label{coulomb} \end{equation} In order for the rupture to start, the shear stress on the sliding surface must equal an “adhesive” part $c'$ plus a frictional part $\sigma' \tan(\phi')$. Therefore, in analogy with this criterion, the motion of the single block will not be initiated until the active forces (defined in the third line of Eq. \eqref{condition}) exceed the static friction threshold plus a cohesion term (which depends on the position in a stochastic way). Moreover, we consider a speed threshold $\boldsymbol{v}_d$ for the static-dynamic transition. Summing all up, the block is at rest under the following conditions: \begin{equation} \begin{split} \lvert \boldsymbol{F}^{(a)}_{i} \rvert < \boldsymbol{F}^{(s)}_{i} + C'_i, \\ \lvert \boldsymbol{v}_{i} \rvert < \boldsymbol{v}_{d},\\ \boldsymbol{F}_{i}^{(a)} = \boldsymbol{F}^{(g)}_{i} + \sum_{j=1}^n\boldsymbol{F}_{ij} - \mu\boldsymbol{v}_{i}, \end{split} \label{condition} \end{equation} where $\boldsymbol{F}_{i}^{(a)}$ and $\boldsymbol{v}_{i}$ represent the active forces and the speed, $C_i'$ is the cohesion term, variable with the position, and $\mu \boldsymbol{v}_i$ is the viscosity term.
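The rest condition above reduces, in scalar form, to a simple threshold test. A minimal Python sketch follows, assuming the friction coefficient relaxes exponentially from $\mu_s$ to $\mu_{slow}$ at rate $w_0$ as described for Eq. \eqref{static} (function names and the scalar reduction are ours):

```python
import math

def static_friction(m_i, w_i, alpha, t, mu_s, mu_slow, w0, g=9.81):
    """Time-decaying static friction threshold: the coefficient
    interpolates from mu_s at t = 0 to mu_slow as t -> infinity."""
    mu = mu_s * math.exp(-w0 * t) + mu_slow * (1.0 - math.exp(-w0 * t))
    return (m_i + w_i) * g * math.cos(alpha) * mu

def is_at_rest(F_active, v, friction_threshold, cohesion, v_d):
    """Mohr-Coulomb-like rest condition: the block stays still while
    the active force is below friction + cohesion and it is slow."""
    return (abs(F_active) < friction_threshold + cohesion) and (abs(v) < v_d)
```

Rain thus lowers the threshold monotonically, so a block with fixed active force can pass from rest to motion purely through the time decay of the friction term.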
For $b=1$ the potential energy, obtained from $dV = -Fdr$, becomes: \begin{equation} V = -k \int \left(\frac{1}{r}-\frac{1}{r^2}\right)dr = -k \left(\ln r + \frac{1}{r}-1\right), \label{potential} \end{equation} in which we choose $-1$ as the arbitrary constant of integration so as to have zero potential energy at the equilibrium distance. \section{Simulation Methodology} In our simulations we consider an interaction among those particles at a distance below a given threshold, which in our units is $2^{1/2}$. At the beginning the particles are arranged on a regular grid, i.e., at the instant $t=0$ each block is placed at a node of a regular rectangular grid, and therefore every mass interacts with the eight blocks placed at the nearest and next-to-nearest nodes (Figure 1a). At each time step, the interactions are re-calculated for each mass within the interaction range. This technique is used in molecular dynamics and is consistent with the principle of action and reaction (Figure 1b). \begin{figure}[t!] \centering \subfigure[] {\includegraphics[width=3.5cm]{1a.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=3.5cm]{1b.eps}} \caption{\label{fig:initial}(a) At $t=0$ we have an interaction among second neighbours. (b) Recalculation of the interactions between particles: at each time step, the interactions are re-calculated for each mass within the interaction range.} \end{figure} In our simulations, and generally in MD, the positions and velocities are updated using a first- or second-order Verlet algorithm \cite{Verlet1967}. This algorithm provides a good numerical approximation and is very stable. It also does not require large computational power, as the forces are calculated once per time step.
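As a concrete sketch, the radial pair force implied by the integrand of Eq. \eqref{potential} and one Verlet update can be written as follows. The paper does not specify which Verlet variant is used; this is the velocity form, in one dimension, with our own naming:

```python
def lj21_force(r, k=1.0):
    """Radial pair force from the 2-1 Lennard-Jones-like potential
    V(r) = -k (ln r + 1/r - 1); it vanishes at the equilibrium
    distance r = 1 (model units)."""
    return k * (1.0 / r - 1.0 / r ** 2)

def velocity_verlet_step(x, v, a, force, m, dt):
    """One velocity-Verlet step: the force is evaluated once per
    time step, as noted in the text."""
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = force(x_new) / m
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new, a_new
```

In the full model the same update is applied per component, summing the pair force over the neighbours within the $2^{1/2}$ cutoff together with gravity, friction and viscosity.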
When a block is in a state of motion, the total force that acts on it is given by the sum of the active forces and the force of dynamic friction, \begin{equation} \boldsymbol{F}_{i}^{tot} = \boldsymbol{F}_{i}^{(a)} + \boldsymbol{F}_{i}^{(d)}. \label{int} \end{equation} We have to define a starting time of the landslide, for instance the time of the first block detachment. In the case of uniform rainfall, it is simple to deduce this time theoretically, i.e., we can write, in the limit of equilibrium conditions, for the single mass $i$, \begin{equation} \lvert \boldsymbol{F}_{i}^{(a)} \rvert= \boldsymbol{F}_{i}^{(s)} + C'_{i}, \label{int1} \end{equation} \begin{equation} \boldsymbol{F}_{i}^{(a)} = \boldsymbol{F}_{i}^{(g)} + \sum_{j=1}^{8} \boldsymbol{F}_{ij} - \mu\boldsymbol{v}_i. \label{int2} \end{equation} But since initially the masses are arranged on a regular grid, the interaction term is null, as is the viscosity term that depends on the velocity: \begin{equation} \begin{split} \sum_{j=1}^{8} \boldsymbol{F}_{ij}=0,\\ \boldsymbol{v}_i = 0, \label{int22} \end{split} \end{equation} thus \begin{equation} \lvert \boldsymbol{F}_{i}^{(g)} \rvert = \boldsymbol{F}_{i}^{(s)} + C'_i, \label{int13} \end{equation} that is: \begin{equation} \begin{split} m_i^*g \sin(\alpha) &= m_i^*g \cos(\alpha) (\mu_s \exp (-w_0 T_p) + \mu_{slow}(1-\exp(-w_0T_p))) + C'_i, \\ m_i^* &= m_i +\Delta w T_{pn},\\ T_{pn} &= T_p/\Delta t, \end{split} \label{int3} \end{equation} where $T_p$ is the triggering time of the particle, $T_{pn}$ the number of temporal steps in the simulation and $\Delta t$ the amplitude of the simulation step $(\Delta t =0.01)$. Solving Eq. \eqref{int13} we obtain: \begin{equation} \frac{A_0 + B_0 T_p}{K + B T_p}=\exp(w_0T_p), \label{int4} \end{equation} where $A_0=(\mu_s - \mu_{slow})m_i$, $B_0=(\mu_s - \mu_{slow})\frac{\Delta w}{\Delta t}$, $A=(\tan(\alpha)-\mu_{slow})m_i$, $B=(\tan(\alpha)-\mu_{slow})\frac{\Delta w}{\Delta t}$ and $K = A - \frac{C'}{g \cos(\alpha)}$. Eq.
\eqref{int4} is a transcendental equation, solvable with numerical methods. An example of a simulation is reported in \figurename~\ref{fig:second}. The triggering time of the particles varies from $80$ to $180$ temporal steps for a subset of particles, depending on the random variables (cohesion, friction, mass), in the coordinate system of the slope. In \figurename~\ref{fig:third}(a) the triggering time versus slope is shown for different values of cohesion. \begin{figure}[t!] \centering {\includegraphics[width=12cm]{2a.eps}} \caption{\label{fig:second} Triggering time of a subset of particles depending on the random variables (cohesion, friction, mass) in the coordinate system of the slope.} \end{figure} \begin{figure}[t!] \centering \subfigure[] {\includegraphics[width=6cm]{3a.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=6cm]{4a.eps}} \caption{\label{fig:third}(a) Triggering time of the particles versus slope for different values of cohesion. (b) Triggering time of landslides versus slope for increasing values of the threshold $\epsilon$ (Eq. \eqref{int5}).} \end{figure} Actually, the sliding blocks could stop again after the first detachment, so our first definition of the starting time is not accurate. A more sensible definition of the starting time is based on the motion of the center of mass of the system. Since our system is discrete, we get: \begin{equation} X_c(T^*)- X_c(0) > \epsilon, \qquad\hbox{where } X_c(t) = \frac{\sum_i m_i^*x_i(t)}{\sum_i m_i^*}. \label{int5} \end{equation} In other words, we consider the starting time $T^*$ as the time at which the center of mass is displaced by more than a distance $\epsilon$ from its starting position (assumed to be zero). See also \figurename~\ref{fig:third}(b). \section{Simulation results} \begin{figure}[t!]
\centering \subfigure[] {\includegraphics[width=6cm]{7.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=6cm]{10.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=6cm]{13.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=6cm]{14.eps}} \caption{\label{fig:settima}(a) Mean kinetic energy increment distribution of the landslide, in the case $\mu = 0.05$: it shows a Gaussian behavior. (b) Mean kinetic energy increment distribution, in the case $\mu = 0.01$: it shows a log-normal behavior; the distribution of the logarithm of the same data is obviously Gaussian. (c) Mean kinetic energy increment distribution, in the case $\mu = 0$ (exponential interpolation). (d) Mean kinetic energy increment distribution, in the case $\mu = 0$ (power law interpolation).} \end{figure} In the simulations, an interesting behavior emerges by varying the coefficient of viscosity. For high values of viscosity, the model reproduces well the observed behavior of slow shallow landslides, exhibiting a Gaussian distribution of the mean kinetic energy increments (\figurename~\ref{fig:settima}(a)), \begin{equation} f(x) = a_1 \exp(-d), \label{int6} \end{equation} where $d = \frac{(x-b_1)^2}{c_1}$. Lowering the viscosity coefficient, the model exhibits a lognormal distribution (\figurename~\ref{fig:settima}(b); Eq. \eqref{int6} with $x$ replaced by the logarithm of the data). With null viscosity, the simulation data can be fitted by an exponential (\figurename~\ref{fig:settima}(c), Eq. \eqref{int7}): \begin{equation} f(x) = a \exp(bx), \label{int7} \end{equation} or by a power law: \begin{equation} f(x) = a x^b, \label{int8} \end{equation} which seems to fit the data better (\figurename~\ref{fig:settima}(d), Eq. \eqref{int8}). We also measure the intervals between the triggering times of local avalanches, i.e.
we measure the time intervals between subsequent simulation steps ($t, t+1$) at which blocks start to move: in all cases a power law distribution is observed, but with the power coefficient decreasing with viscosity. This is consistent with the local triggering being more frequent when the values of $\mu$ are close to zero (\figurename~\ref{fig:ottava}(a), \figurename~\ref{fig:ottava}(b) and \figurename~\ref{fig:ottava}(c)). Several authors \cite{Turcotte2004, Turcotte1997, Malamud2004} have observed that natural hazards such as landslides, earthquakes and forest fires exhibit a power law distribution. \begin{figure}[t!] \centering \subfigure[] {\includegraphics[width=4cm]{8.eps}} \hspace{1mm} \subfigure[] {\includegraphics[width=4cm]{11.eps}} \hspace{1mm} \subfigure[] {\includegraphics[width=4cm]{15.eps}} \caption{\label{fig:ottava} Power law distributions of the differences of the triggering times of the particles for the simulations with $\mu = 0.05$ (a), $\mu = 0.01$ (b) and $\mu = 0$ (c).} \end{figure} \begin{figure}[t!] \centering \subfigure[] {\includegraphics[width=6cm]{5.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=6cm]{6.eps}} \caption{\label{fig:quinta} (a) An example of a simulation in the inclined coordinate system: in this case arching phenomena have emerged (in red the still particles, in green the particles in motion). (b) The displacements of the particles in the inclined coordinates, relative to the simulation reported in \figurename~\ref{fig:quinta}(a).} \end{figure} \begin{table}[htb] \caption{\label{tab:table1}Kinetic energy distribution (KDE) varying the coefficient of viscosity $\mu$.} \begin{center} \begin{tabular}{ccc|ccc} KDE & $\mu$ = $0.05$\footnotemark[1]&$\mu$ = $0.01$\footnotemark[2]& &$\mu =0 $\footnotemark[3]&\\ \hline \hline \cline{4-6} SSE& 2702 & 1.862 & & E. D.& P.D. \\ R-square& 0.991 & 1 & SSE &50.4&26.95 \\ Adj. R-square & 0.991&1&R-square &0.947&0.972\\ RMSE &7.583&0.331&Adj. R-square&0.944&0.970\\ $a_{1}$ & 295.3 & 370.4 & RMSE & 1.833 & 1.34\\ $b_1$ & 25.26 & 18.07 & $a$ &15.61 & 33.28\\ $c_1$ & 3.901 & 0.515 & $b$ & -0.168 &-1.033\\ \hline \end{tabular} \end{center} \footnotesize{[1] Gaussian distribution of energy. [2] Log-normal distribution of energy. [3] Distribution interpolation with exponential (E.D.) and power law (P.D.).} \end{table} \begin{table}[htb] \caption{\label{tab:table2}Landslide triggering time ($T_{r}$) distribution varying the coefficient of viscosity $\mu$.} \begin{center} \begin{tabular}{cccc} $T_r$ distribution \footnotemark[1] &$\mu=0.05$ &$\mu=0.01$&$\mu=0$\\ \hline \hline SSE & 72.41 & 26.31 & 119.2\\ R-Square &0.9942&0.9816&0.9979\\ Adj. R-square &0.9937&0.9803&0.9976\\ RMSE&2.691&1.324&2.504\\ $a$&152&37.06&48.33\\ $b$ &-2.508&-1.485&-1.183\\ \hline \end{tabular} \end{center} \footnotesize{[1] Power law interpolation.} \end{table} All results of the distribution interpolations are reported in Table~\ref{tab:table1} and Table~\ref{tab:table2}, using some estimators of the fitting accuracy: \begin{equation}\label{e:barwq}\begin{split} SSE&=\sum_{i=1}^n(y_i- f(x_i))^2,\\ R^2&=1-\frac{SSE}{SST},\qquad SST=\sum_{i=1}^n(y_i-\bar{y})^2,\\ \bar{R}^2&=1-(1-R^2)\frac{n-1}{n-p-1},\\ RMSE&=\sqrt{\frac{SSE}{n-m}}, \end{split}\end{equation} where the first estimator is the \emph{Sum of Squared Errors (SSE)}, the second is the \emph{Coefficient of Determination} ($R^2$), the third is \emph{R Bar Squared} ($\bar{R}^2$, with $p$ the number of fitted parameters) and the last is the \emph{Root Mean Square Error (RMSE)}, with $m$ the number of fitted coefficients. In \figurename~\ref{fig:quinta}(a) and \figurename~\ref{fig:quinta}(b) the results of a simulation are shown. The behavior of the model is similar to that of real landslides: phenomena like fractures, arching and detachments are generated spontaneously during the evolution of the system (\figurename~\ref{fig:quinta}(a)). In \figurename~\ref{fig:quinta}(b) it is possible to observe the variations in the displacements of the particles.
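The goodness-of-fit estimators of Eq. \eqref{e:barwq}, used in the tables above, can be computed directly; below is a small Python helper with our own naming, where $p$ (the number of fitted parameters) is assumed to serve both in the adjusted $R^2$ and in the RMSE denominator:

```python
def fit_statistics(y, y_fit, p):
    """Goodness-of-fit estimators: SSE, R^2, adjusted R^2 and RMSE
    for n data points y, fitted values y_fit and p fitted parameters."""
    n = len(y)
    y_bar = sum(y) / n
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_fit))
    sst = sum((yi - y_bar) ** 2 for yi in y)
    r2 = 1.0 - sse / sst
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    rmse = (sse / (n - p)) ** 0.5
    return sse, r2, adj_r2, rmse
```

A perfect fit returns $SSE=0$, $R^2=\bar{R}^2=1$ and $RMSE=0$, which is the sanity check used when comparing the exponential and power-law interpolations.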
The higher displacements are observed at the base of the landslide, while smaller displacements and emerging phenomena, like arching, are observed in the bulk of the landslide. At this point it is possible to discuss the choice of the 2-1 Lennard-Jones potential: in our simulations we tuned the powers of the potential, and the 2-1 Lennard-Jones form yields results similar to real landslides in terms of velocity behavior, making it possible to assess the triggering, for example, with the Fukuzono method \cite{Fukuzono1985}. \begin{figure}[t!] \centering \subfigure[] {\includegraphics[width=4cm]{9.eps}} \hspace{1mm} \subfigure[] {\includegraphics[width=4cm]{12.eps}} \hspace{1mm} \subfigure[] {\includegraphics[width=4cm]{16.eps}} \caption{\label{fig:ottavaa} (a) Landslide mean velocity for the simulation with $\mu = 0.05$: the behavior, after an initial acceleration, is similar to stick-slip dynamics. (b) Landslide mean velocity for the simulation with $\mu = 0.01$: the behavior is typical of some real cases with acceleration phases. (c) Landslide mean velocity for the simulation with $\mu = 0$: the behavior is typical of some real cases with rapid acceleration phases (similar to rapid shallow landslides).} \end{figure} In \figurename~\ref{fig:ottavaa}(a), \figurename~\ref{fig:ottavaa}(b) and \figurename~\ref{fig:ottavaa}(c) the trends of the modulus of the landslide mean velocity are reported. In all cases, by varying the coefficient of viscosity, we observe a transient with a rapid acceleration; in particular, for null viscosity (\figurename~\ref{fig:ottavaa}(c)), we observe the typical trend of rapid landslides \cite{Sornette2004}. \begin{figure}[t!] \centering \subfigure[] {\includegraphics[width=6cm]{17.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=6cm]{18.eps}} \caption{\label{fig:diciassette} (a) Inverse of the mean velocity during the simulation time of the landslide; in the square, the interval used for the failure time assessment with the Fukuzono method.
(b) Square of \figurename~\ref{fig:diciassette}(a): determination of the failure time with the Fukuzono method; the simulation data show a convex behavior.} \end{figure} \begin{figure}[t!] \centering \subfigure[] {\includegraphics[width=6cm]{19.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=6cm]{20.eps}} \caption{\label{fig:19} (a) Simulation in the coordinate system of the slope at $t = 184$ (in red the still particles, in green the particles in motion). (b) Relative mean displacement between neighbouring vertical layers of particles along the $x$ axis of the slope at $t = 184$.} \end{figure} In this case (rapid landslide), the failure time is estimated by using the Fukuzono method \cite{Fukuzono1985}; see \figurename~\ref{fig:diciassette}(a) and \figurename~\ref{fig:diciassette}(b). The time of triggering is calculated with the calibration of the function: \begin{equation} \frac{1}{\nu} = \left[\beta(\alpha-1)\right]^{\frac{1}{\alpha-1}}(t_r-t)^{\frac{1}{\alpha-1}}, \label{fukuzono} \end{equation} where $\nu$ is the mean velocity of the landslide (i.e. of all the particles in motion), $t_r$ is the time of failure, $t$ the time of simulation, while $\alpha$ and $\beta$ are constants. With the calibration we obtain $\alpha = 0.8836$, $\beta = 2.6618$ and $t_r = 184$. The behavior of this simulation is similar to that of real landslides \cite{Suwa2010}. In \figurename~\ref{fig:19}(a) the status of the landslide at $t = 184$ is shown, while in \figurename~\ref{fig:19}(b) the relative mean displacement between neighbouring vertical layers of particles along the $x$ axis of the slope at $t = 184$ is reported. This distance is defined, for particle positions $x_{ij}$, as: \begin{equation} \frac{1}{N_r} \sum_{i=1}^{N_r}(x_{i,j+1}-x_{i,j}), \label{fukuzone} \end{equation} where $N_r$ is the number of particles in each vertical layer. \begin{figure}[t!]
\centering {\includegraphics[width=12cm]{21.eps}} \caption{\label{fig:21} Relative mean displacement in time between the vertical particle layers $x_1$-$x_2$ (initial layers) and $x_{30}$-$x_{31}$ (central layers): highlighted are the time interval where the fracture forms during the initial acceleration phase and the interval where the failure time is estimated with the inverse of the velocity \cite{Fukuzono1985}.} \end{figure} \begin{figure}[t!] \centering \subfigure[] {\includegraphics[width=6cm]{22.eps}} \hspace{10mm} \subfigure[] {\includegraphics[width=6cm]{23.eps}} \caption{\label{fig:22} (a) Simulation in the coordinate system of the slope at $t = 220$ (in red the still particles, in green the particles in motion). (b) Simulation in the coordinate system of the slope at $t = 260$ (in red the still particles, in green the particles in motion).} \end{figure} In \figurename~\ref{fig:21} the relative mean displacement in time between the vertical particle layers $x_1$-$x_2$ (initial layers) and $x_{30}$-$x_{31}$ (central layers) is shown. Note the time interval where the fracture forms during the initial acceleration phase and the interval where the failure time is estimated with the inverse of the velocity \cite{Fukuzono1985}. Finally, in \figurename~\ref{fig:22}(a) and \figurename~\ref{fig:22}(b) the progression of the simulated landslide is reported (time steps $t = 220$ and $t = 260$). \section{Conclusion} A computational 2D mesoscopic model for shallow landslides triggered by rainfall has been proposed. It is based on interacting particles to describe the features of granular material along a slope, where a horizontal layer with a thickness of one particle is arranged. For shallow instability movements we consider that the triggering is caused by the decrease of static friction along the sliding surface.
Particle triggering is governed by two conditions, i.e., a threshold speed of the particles and the static friction between particles and the slope surface, based on the Mohr-Coulomb failure criterion. For the prediction of the positions of the particles, during and after a rainfall, we use the molecular dynamics (MD) method, which is very suitable for simulating this type of system. The results are satisfactory enough to claim that this type of modeling could represent a new method to simulate landslides triggered by rainfall. In our simulations emerging phenomena such as fractures, detachments and arching can be observed. In particular, the model reproduces well the energy and time distributions of avalanches, analogous to the Gutenberg-Richter and Omori power law distributions observed for earthquakes. In particular, the distribution of the landslide mean kinetic energy shows a transition from Gaussian to power law, passing through lognormal, as the coefficient of viscosity decreases to zero. This behavior is compatible with slow landslides (high viscosity) and rapid landslides (low viscosity). The main advantage of these Lagrangian methods is the capability of following the trajectory of a single particle, possibly identifying its dynamical properties. Finally, for a large range of model parameter values, we observed in our simulations a velocity pattern, with acceleration increments, typical of real landslides \cite{Sornette2004}. \section*{Acknowledgements} We thank the \emph{Ente Cassa di Risparmio di Firenze} for its support under the contract \emph{Studio dei fenomeni di innesco e propagazione di frane in relazione ad eventi di pioggia e/o terremoti per mezzo di modelli matematici ed esperimenti di laboratorio su mezzi granulari}. \section*{Bibliography}
\section{Introduction} The notion of dichotomy spectrum of linear time-varying systems originated in the work of Sacker and Sell in the 1970s (see \cite{SackerSell1978}). Since then this notion has played an important role in the qualitative theory of time-varying systems, including the stability theory (see \cite{Barreira}), the linearization theory (see \cite{Cuong,Palmer1974}), the invariant manifold theory (see \cite{APS,PS,Barreira}), the normal form theory (see \cite{Siegmund2002b}), the bifurcation theory (see \cite{Rasmussen}), etc. Due to the wide application of the dichotomy spectrum in the qualitative theory of time-varying systems, it is of particular importance to know whether we can control this spectrum. More concretely, we are interested in the discrete time-varying linear control system \[ x_{n+1}=A_nx_n+B_n u_n. \] The question is whether, for a given compact set written as the union of finitely many disjoint intervals, there exists a linear feedback $u_n=U_nx_n$ for which the dichotomy spectrum of the closed-loop system \[ x_{n+1}=(A_n+B_nU_n)x_n \] is equal to the given compact set (assignability of the dichotomy spectrum). In this paper, we show that uniform complete controllability implies assignability of the dichotomy spectrum. Note that uniform complete controllability is also a sufficient condition for arbitrary assignability of the Lyapunov spectrum of time-varying control systems, see \cite{Popova,Babiarz,Babiarz_2018}. Recall that the Lyapunov spectrum of a time-varying system consists of all possible average growth rates of solutions of this system, and it is known that the Lyapunov spectrum is a subset of the dichotomy spectrum. Hence, our result on assigning the dichotomy spectrum implies the result on assigning the Lyapunov spectrum in \cite{Babiarz}; see Remark \ref{Comparison} for more details.
The structure of the paper is as follows: The first part of Section \ref{Section2} presents the basic concept of the dichotomy spectrum of discrete time-varying systems (Subsection \ref{Subsection2.1}). The main result about assignability of the dichotomy spectrum is stated in Subsection \ref{Subsection2.2}. The proof of the main result is presented in Subsection \ref{Subsection3.3} of Section \ref{Section3}. The other two subsections of Section \ref{Section3} prepare for the proof and have the following structure: Subsection \ref{Subsection3.1} is devoted to proving a result on the dichotomy spectrum of upper-triangular discrete time-varying systems, while Subsection \ref{Subsection3.2} recalls a result from \cite{Babiarz} on transforming a uniformly completely controllable linear system into an upper-triangular linear system. In the Appendix, we recall the notion of dichotomy spectrum for continuous time-varying systems. A relation between the dichotomy spectra of continuous time-varying systems and the associated $1$-time discrete time-varying systems is established in Lemma \ref{TechnicalLemma}. \\ \noindent \textbf{Notations}: For $d,s\in\N$, let $\mathcal L^{\infty}(\T,\R^{d\times s})$, where $\T$ stands for $\Z,\Z_{\geq 0}$ or $\Z_{\leq 0}$, denote the space of sequences $M=(M_n)_{n\in\T}$ with $M_n\in\R^{d\times s}$ satisfying \[ \|M\|_{\infty}:=\sup_{n\in\T} \|M_n\|<\infty. \] For $d\in\N$, let $\mathcal L^{\rm Lya}(\T,\R^{d\times d})$ denote the set of all Lyapunov sequences $M=(M_n)_{n\in\T}$ in $\R^{d\times d}$, i.e. $M\in \mathcal L^{\infty}(\T,\R^{d\times d})$ such that the inverse sequence $M^{-1}:=(M_n^{-1})_{n\in\T}$ exists and $M^{-1}\in \mathcal L^{\infty}(\T,\R^{d\times d})$.
\section{Preliminaries and main results}\label{Section2} \subsection{Dichotomy spectrum of discrete time-varying linear systems}\label{Subsection2.1} Consider the discrete time-varying linear system \begin{equation}\label{ED_01} x_{n+1}=M_nx_n,\qquad \hbox{for } n\in\Z, \end{equation} where $ M:=(M_n)_{n\in\Z}\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d})$. Let $\Phi_M(\cdot,\cdot):\Z\times \Z \rightarrow \R^{d\times d}$ denote the \emph{evolution operator} generated by \eqref{ED_01}, i.e. \[ \Phi_M(m,n):= \left\{ \begin{array}{ll} M_{m}\dots M_{n+1}, & \hbox{ if } m>n,\\[1ex] \id, & \hbox{ if } m=n,\\[1ex] M_{m+1}^{-1}\dots M_{n}^{-1}, & \hbox{ if } m<n. \end{array} \right. \] Next, we introduce the notions of one-sided and two-sided dichotomy spectrum of \eqref{ED_01}. These notions are defined in terms of exponential dichotomies. Recall that system \eqref{ED_01} is said to admit an exponential dichotomy (ED) on $\T$, where $\T$ is either $\Z,\Z_{\geq 0}$ or $\Z_{\leq 0}$, if there exist $K,\alpha>0$ and a family of projections $(P_n)_{n\in\T}$ in $\R^{d\times d}$ such that for all $m,n\in\T$ we have \[ \begin{array}{cll} \|\Phi_M(m,n)P_n\| & \leq K e^{-\alpha(m-n)} & \quad \hbox{ for } m\geq n,\\[1.5ex] \|\Phi_M(m,n)(\id-P_n)\| & \leq K e^{\alpha(m-n)} & \quad \hbox{ for } m\leq n, \end{array} \] see \cite{Poetyzche}. \begin{definition}[Dichotomy spectrum for discrete time-varying linear systems]\label{Definition_DiscreteED} The \emph{dichotomy spectra} of \eqref{ED_01} on $\Z, \Z_{\geq 0}, \Z_{\leq 0}$ are defined, respectively, as follows: \begin{eqnarray*} \Sigma_{\rm ED}(M) &:=& \big\{\gamma\in\R: x_{n+1}=e^{-\gamma }M_n x_n \hbox{ has no ED on } \Z\big\}, \\[1.5ex] \Sigma_{\rm ED}^{+}(M) &:=& \big\{\gamma\in\R: x_{n+1}=e^{-\gamma}M_n x_n \hbox{ has no ED on } \Z_{\geq 0}\big\}, \\[1.5ex] \Sigma_{\rm ED}^{-}(M) &:=& \big\{\gamma\in\R: x_{n+1}=e^{-\gamma}M_n x_n \hbox{ has no ED on } \Z_{\leq 0}\big\}.
\end{eqnarray*} \end{definition} \begin{remark} In \cite{Aulbach,Poetyzche}, the definition of the dichotomy spectrum is slightly different from Definition \ref{Definition_DiscreteED}, in that the authors consider shifted systems of the form \[ x_{n+1}=\frac{1}{\beta}M_nx_n,\qquad\hbox{where } \beta\in (0,\infty). \] Since there is a one-to-one correspondence between $\beta\in (0,\infty)$ and $e^{-\gamma}$, where $\gamma\in \R$, there is a one-to-one correspondence between the spectra in Definition \ref{Definition_DiscreteED} and the ones introduced in \cite{Aulbach,Poetyzche}. \end{remark} Thanks to the above remark and the spectral theorem proved in \cite{Aulbach,Poetyzche}, the spectrum $\Sigma_{\rm ED}(M)$ (and likewise $\Sigma_{\rm ED}^{+}(M)$ and $\Sigma_{\rm ED}^{-}(M)$) is given as the union of at most $d$ disjoint intervals. The corresponding notions of dichotomy spectrum for continuous time-varying linear systems are introduced in the Appendix. \subsection{Setting and the statement of the main result}\label{Subsection2.2} Consider a discrete time-varying linear control system \begin{equation}\label{MainEq} x_{n+1}=A_nx_n+ B_n u_n, \end{equation} where $A=(A_n)_{n\in\mathbb Z}\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d})$ and $B=(B_n)_{n\in\mathbb Z}\in \mathcal L^{\infty}(\Z,\R^{d\times s})$. Let $x(\cdot,n,\xi,u)$ denote the solution of \eqref{MainEq} satisfying $x(n)=\xi$. Now, we recall the notion of uniform complete controllability of \eqref{MainEq}, see also \cite{Babiarz}. \begin{definition}[Uniform complete controllability]\label{UniformControllability} System \eqref{MainEq} is called \emph{uniformly completely controllable} if there exist a positive number $\alpha$ and a natural number $K$ such that for all $\xi\in\R^d$ and $k_0\in\Z$ there exists a control sequence $u_n$, $n=k_0,k_0+1,\dots,k_0+K-1$, such that \[ x(k_0+K,k_0,0,u)=\xi \] and \[ \|u_n\|\leq \alpha \|\xi\|\qquad\hbox{for all } n=k_0,k_0+1,\dots,k_0+K-1.
\] \end{definition} For a bounded sequence of linear feedback controls $U=(U_n)_{n\in\Z}\in\mathcal L^{\infty}(\Z,\R^{s\times d})$, the corresponding closed-loop system is \begin{equation}\label{Closedloop} x_{n+1}=(A_n+B_n U_n)x_n. \end{equation} In the case that $A+BU\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d})$, the dichotomy spectrum of \eqref{Closedloop} is denoted by $\Sigma_{\mathrm{ED}}(A+BU)$. \begin{definition} The dichotomy spectrum of \eqref{Closedloop} is called \emph{assignable} if for arbitrary disjoint closed intervals $[a_1,b_1],\dots,[a_{\ell},b_{\ell}]$, where $1\leq \ell\leq d$, there exists a bounded linear feedback control $U\in \mathcal L^{\infty}(\Z,\R^{s\times d})$ such that $A+BU\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d})$ and \[ \Sigma_{\mathrm{ED}}(A+BU)=\bigcup_{i=1}^{\ell}[a_i,b_i]. \] \end{definition} We now state the main result of this paper. \begin{theorem}[Assignability of the dichotomy spectrum of discrete time-varying linear systems]\label{MainTheorem} Suppose that system \eqref{MainEq} is uniformly completely controllable. Then, the dichotomy spectrum of \eqref{Closedloop} is assignable. \end{theorem} \begin{remark}\label{Comparison} (i) Recall that for a discrete time-varying linear system \begin{equation}\label{Remark_Eq1} x_{n+1}=M_nx_n,\qquad \hbox{where } M:=(M_n)_{n\in\Z}\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d}), \end{equation} the \emph{Lyapunov exponent} of a non-trivial solution $\Phi_M(n,0)\xi$ of \eqref{Remark_Eq1} is given by \[ \chi(\xi):=\limsup_{n\to\infty}\frac{1}{n}\log\|\Phi_M(n,0)\xi\|. \] The Lyapunov spectrum of \eqref{Remark_Eq1} is defined as \[ \Sigma_{\rm Lya}(M):=\big\{\chi(\xi): \xi\in\R^d\setminus\{0\}\big\}. \] It is known that $\Sigma_{\rm Lya}(M)$ consists of at most $d$ elements (cf. \cite[Chapter II]{Andrianova}). Furthermore, suppose that $\Sigma_{\rm ED}(M)$ is represented as a disjoint union of $\ell$ intervals $\bigcup_{i=1}^{\ell}[a_i,b_i]$.
Then, \begin{equation}\label{Remark_Eq2} \Sigma_{\rm Lya}(M)\subset \Sigma_{\rm ED}(M),\qquad \Sigma_{\rm Lya}(M)\cap [a_i,b_i]\not=\emptyset, \end{equation} see, e.g., \cite{Johnson}. (ii) Suppose that system \eqref{MainEq} is uniformly completely controllable. Now, let $\{\lambda_1,\dots,\lambda_{\ell}\}$ be an arbitrary set of $\ell$ real numbers, where $1\leq \ell\leq d$. Let $a_i=b_i=\lambda_i$ for $1\leq i\leq \ell$. By virtue of Theorem \ref{MainTheorem}, there exists a bounded linear feedback control $U=(U_n)_{n\in\Z}$ such that $A+BU\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d})$ and $\Sigma_{\rm ED}(A+BU)=\bigcup_{i=1}^{\ell}\{\lambda_i\}$. This together with \eqref{Remark_Eq2} implies that \[ \Sigma_{\rm ED}(A+BU)=\Sigma_{\rm Lya}(A+BU)=\bigcup_{i=1}^{\ell}\{\lambda_i\}. \] Consequently, for discrete time-varying linear control systems, assignability of the dichotomy spectrum implies assignability of the Lyapunov spectrum. \end{remark} \section{Proof of the main results}\label{Section3} The main ingredient of the proof consists of two parts. In the first part, we extend a result in \cite{Battelli} to obtain an explicit computation of the dichotomy spectrum of a special upper-triangular linear difference system. Concerning the second part, we first extend the result in \cite[Theorem 4.6]{Babiarz} to two-sided linear systems and then use this result to find a suitable linear feedback control such that the closed-loop system \eqref{Closedloop} is kinematically equivalent to an upper-triangular linear difference system. \subsection{Dichotomy spectrum of upper-triangular linear difference systems}\label{Subsection3.1} In the first part of this subsection, we extend a part of the result in \cite{Battelli} about the representation of the dichotomy spectrum of block upper-triangular differential equations in terms of the dichotomy spectra of the subsystems to discrete time-varying systems. To do this, we recall this result for continuous time-varying systems.
\begin{theorem}\label{TechnicalTheorem} Consider an upper-triangular linear differential equation \begin{equation*}\label{BlockForm_DifferentialEquations} \dot x(t)=W(t)x(t),\qquad\hbox{where } W(t)= \left( \begin{array}{cc} X(t) & Z(t)\\[0.5ex] 0 & Y(t) \end{array} \right), \end{equation*} where $X:\R\rightarrow \R^{k\times k}, Y: \R\rightarrow \R^{(d-k)\times (d-k)}, Z:\R\rightarrow \R^{k\times (d-k)}$ are measurable and essentially bounded. Then, \begin{equation*}\label{Relation_ContinousTime} \Sigma^{\pm}_{\rm ED}(X)\cup\Sigma^{\pm}_{\rm ED}(Y) \subset \Sigma_{\rm ED}(W)\subset \Sigma_{\rm ED}(X)\cup\Sigma_{\rm ED}(Y), \end{equation*} where $\Sigma^{\pm}_{\rm ED}(X):=\Sigma^{+}_{\rm ED}(X)\cup \Sigma^{-}_{\rm ED}(X), \Sigma^{\pm}_{\rm ED}(Y):=\Sigma^{+}_{\rm ED}(Y)\cup \Sigma^{-}_{\rm ED}(Y)$. \end{theorem} \begin{proof} See \cite[Section 4]{Battelli}. \end{proof} Consider discrete time-varying system \begin{equation}\label{BlockForm} x_{n+1}=D_nx_n,\qquad\hbox{where } D_n= \left( \begin{array}{cc} A_n & C_n\\[0.5ex] 0 & B_n \end{array} \right), \end{equation} where $A=(A_n)_{n\in\Z}\in\ \mathcal L^{\rm Lya}(\Z,\R^{k\times k}), B=(B_n)_{n\in\Z}\in\mathcal L^{\rm Lya}(\Z,\R^{(d-k)\times (d-k)})$, and $C=( C_n)_{n\in\Z}\in\mathcal L^{\infty}(\Z,\R^{k\times (d-k)})$. \begin{theorem}[Dichotomy spectrum of upper-triangular discrete time-varying linear systems]\label{KeyTheorem} Let $\Sigma_{\rm ED}(D)$ denote the dichotomy spectrum of \eqref{BlockForm}. Then, \begin{equation}\label{Relation} \Sigma^{\pm}_{\rm ED}(A)\cup\Sigma^{\pm}_{\rm ED}(B) \subset \Sigma_{\rm ED}(D)\subset \Sigma_{\rm ED}(A)\cup\Sigma_{\rm ED}(B), \end{equation} where $\Sigma^{\pm}_{\rm ED}(A):=\Sigma^{+}_{\rm ED}(A)\cup \Sigma^{-}_{\rm ED}(A), \Sigma^{\pm}_{\rm ED}(B):=\Sigma^{+}_{\rm ED}(B)\cup \Sigma^{-}_{\rm ED}(B)$. 
\end{theorem} \begin{proof} Define a measurable and bounded function $W:\R\rightarrow \R^{d\times d}$ of the form $W(t)=\left( \begin{array}{cc} X(t) & Z(t)\\[0.5ex] 0 & Y(t) \end{array} \right)$, where \[ X(t)= A_n, Y(t)=B_n, Z(t)=C_n\qquad \hbox{ for } t\in [n,n+1), n\in\Z. \] Obviously, equation \eqref{BlockForm} is the $1$-time discrete time-varying system associated with \begin{equation*}\label{Cont_Eq1} \dot x=W(t)x,\qquad \hbox{ where } t\in\R, \end{equation*} (see Appendix for the notion of the associated $1$-time discrete time-varying system). Then, by virtue of Lemma \ref{TechnicalLemma} we have \begin{equation}\label{New_Eq1} \begin{array}{ccc} \Sigma^{\pm}_{\rm ED}(A)\cup\Sigma^{\pm}_{\rm ED}(B)&=& \Sigma^{\pm}_{\rm ED}(X)\cup\Sigma^{\pm}_{\rm ED}(Y),\\[1ex] \Sigma_{\rm ED}(D)&=& \Sigma_{\rm ED}(W),\\[1ex] \Sigma_{\rm ED}(A)\cup\Sigma_{\rm ED}(B)&=&\Sigma_{\rm ED}(X)\cup\Sigma_{\rm ED}(Y). \end{array} \end{equation} On the other hand, by definition of $W(t)$ and Theorem \ref{TechnicalTheorem} we have \[ \Sigma^{\pm}_{\rm ED}(X)\cup\Sigma^{\pm}_{\rm ED}(Y) \subset \Sigma_{\rm ED}(W)\subset \Sigma_{\rm ED}(X)\cup\Sigma_{\rm ED}(Y), \] which together with \eqref{New_Eq1} proves \eqref{Relation}. The proof is complete. \end{proof} In the final part of this subsection, we study a special class of upper-triangular discrete time-varying systems whose dichotomy spectrum is given as the union of the dichotomy spectra of the subsystems corresponding to the diagonal entries. More concretely, let $(p^1_n)_{n\in\Z},(p^2_n)_{n\in\Z},\dots,(p^d_n)_{n\in\Z}$ be scalar Lyapunov sequences satisfying \begin{equation}\label{Eq1} p^i_{n}=p^i_{-n}\qquad\hbox{ for all } n\in\Z, i=1,\dots,d. \end{equation} For each $i=1,\dots,d$, we denote by $\Sigma_{\rm ED}(p^i)$ the dichotomy spectrum of the scalar linear system \[ z_{n+1}= p^i_n z_n\qquad\hbox{ for } n\in\Z.
\] \begin{proposition}\label{MainProposition} Let $(C_n)_{n\in\Z}$, where $C_n=(c^{(n)}_{ij})_{1\leq i,j\leq d}$, be an arbitrary bounded sequence of upper-triangular matrices in $\R^{d\times d}$ satisfying that \[ c^{(n)}_{ii}= p^i_n\qquad \hbox{ for all } n\in \Z, i=1,\dots,d. \] Then, the dichotomy spectrum $\Sigma_{\rm ED}(C)$ of the system $x_{n+1}=C_n x_n$ is given by \[ \Sigma_{\rm ED}(C)=\bigcup_{i=1}^d \Sigma_{\rm ED}(p^i). \] \end{proposition} \begin{proof} Using Theorem \ref{KeyTheorem}, we obtain that \[ \bigcup_{i=1}^d \Sigma_{\rm ED}^{\pm}(p^i)\subset\Sigma_{\rm ED}(C)\subset \bigcup_{i=1}^d \Sigma_{\rm ED}(p^i), \] where $\Sigma_{\rm ED}^{\pm}(p^i)=\Sigma_{\rm ED}^{+}(p^i)\cup \Sigma_{\rm ED}^{-}(p^i)$. Thus, to complete the proof it is sufficient to show that \begin{equation}\label{Aim} \Sigma_{\rm ED}(p^i)\subset \Sigma_{\rm ED}^{\pm}(p^i)\qquad\hbox{ for all } i=1,\dots,d. \end{equation} For this purpose, let $i\in\{1,\dots,d\}$ and $\gamma\not\in \Sigma_{\rm ED}^{+}(p^i)$ be arbitrary. Then, by Definition \ref{Definition_DiscreteED} one of the following alternatives holds:\\ \noindent \emph{(A1)} There exist $K,\alpha>0$ such that \begin{equation}\label{A1} |p^i_{m-1}\dots p^i_n|\leq K e^{(\gamma-\alpha)(m-n)} \qquad\hbox{ for } m,n\in\Z_{\geq 0}\hbox{ with } m\geq n. \end{equation} Thus, by \eqref{Eq1} we also have that \[ |p^i_{m-1}\dots p^i_n|= \left\{ \begin{array}{ll} |p^i_{m-1}\dots p^i_0||p^i_{1}\dots p^i_{-n}|\leq K^2 e^{(\gamma-\alpha)(m-n)} & \hbox{ for } m\geq 0\geq n,\\[1.5ex] |p^i_{-(m-1)}\dots p^i_{-n}|\leq K e^{(\gamma-\alpha)(m-n)} & \hbox{ for } 0\geq m \geq n. \end{array} \right. \] It means that the shifted system \[ z_{n+1}=e^{-\gamma}p^i_nz_n,\qquad\hbox{where } n\in\Z \] exhibits an exponential dichotomy on $\Z$. 
Consequently, $\gamma\not\in \Sigma_{\rm ED}(p^i)$.\\ \noindent \emph{(A2)} There exist $K,\alpha>0$ such that \begin{equation*} \left|\frac{1}{p^i_{m}}\dots \frac{1}{p^i_{n-1}}\right|\leq K e^{(\gamma+\alpha)(m-n)} \qquad\hbox{ for } m,n\in\Z_{\geq 0}\hbox{ with } m\leq n, \end{equation*} which implies that \begin{equation*} \left|p^i_{m}\dots p^i_{n-1}\right|\geq \frac{1}{K} e^{(\gamma+\alpha)(n-m)} \qquad\hbox{ for } m,n\in\Z_{\geq 0}\hbox{ with } n\geq m. \end{equation*} Thus, by \eqref{Eq1} we also have that \[ \left|p^i_{m}\dots p^i_{n-1}\right| = \left\{ \begin{array}{ll} |p^i_{-m}\dots p^i_{1}||p^i_{0}\dots p^i_{n-1}|\geq \frac{1}{K^2} e^{(\gamma+\alpha)(n-m)} & \hbox{ for } n\geq 0\geq m,\\[1.5ex] |p^i_{-m}\dots p^i_{-(n-1)}|\geq \frac{1}{K} e^{(\gamma+\alpha)(n-m)} & \hbox{ for } 0\geq n\geq m. \end{array} \right. \] It means that the shifted system \[ z_{n+1}=e^{-\gamma}p^i_nz_n,\qquad\hbox{where } n\in\Z \] exhibits an exponential dichotomy on $\Z$. Therefore, in this alternative we also arrive at $\gamma\not\in \Sigma_{\rm ED}(p^i)$. Since $\gamma\not\in \Sigma_{\rm ED}^{+}(p^i)$ is arbitrary, it follows that $\Sigma_{\rm ED}(p^i)\subset \Sigma_{\rm ED}^{+}(p^i)$. This shows \eqref{Aim} and the proof is complete. \end{proof} \subsection{Upper-triangularization of uniformly completely controllable systems}\label{Subsection3.2} Recall that two discrete time-varying linear systems \[ x_{n+1}=A_n x_n,\quad y_{n+1}=B_n y_n\qquad \hbox{ for } n\in\T \quad (\T~ \hbox{stands for} ~\Z_{\ge 0}~\hbox{or}~ \Z), \] where $(A_n)_{n\in\T}, (B_n)_{n\in\T}\in \mathcal L^{\rm Lya}(\T,\R^{d\times d})$, are called \emph{kinematically equivalent} (or also called dynamically equivalent) if there exists a transformation $(T_n)_{n\in\T}\in \mathcal L^{\rm Lya}(\T,\R^{d\times d})$ such that \[ A_nT_n=T_{n+1}B_n\qquad\hbox{ for all } n\in\T.
\] It was proved in \cite[Theorem 4.6]{Babiarz} that for a uniformly completely controllable one-sided discrete time-varying control system and a given diagonal discrete time-varying system, there is a bounded feedback control such that the corresponding closed-loop system is dynamically equivalent to an upper-triangular system whose diagonal part coincides with the given diagonal system. Under a slight modification, this result can be extended to two-sided discrete time-varying control systems and we arrive at the following result. \begin{theorem}[Upper-triangularization of uniformly completely controllable two-sided discrete time-varying systems]\label{Upper_Theorem} Consider a uniformly completely controllable two-sided discrete time-varying control system \begin{equation}\label{Upper_Eq1} x_{n+1}=A_nx_n+B_n u_n,\qquad\hbox{for } n\in\Z, \end{equation} where $ A=(A_n)_{n\in\mathbb Z}\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d}), B=(B_n)_{n\in\mathbb Z}\in \mathcal L^{\infty}(\Z,\R^{d\times s})$. Let $(p^i_n)_{n\in\Z}, i=1,\dots,d,$ be arbitrary scalar positive Lyapunov sequences. Then, there exist a sequence of upper-triangular matrices $(C_n)_{n\in\Z}\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d})$, where $C_n=(c^{(n)}_{ij})_{1\leq i,j\leq d}$ with $c^{(n)}_{ii}=p^i_n$, and a bounded feedback control $U=(U_n)_{n\in\Z}\in \mathcal L^{\infty}(\Z,\R^{s\times d})$ such that the following systems \begin{equation*} x_{n+1}=(A_n+B_nU_n)x_n,\quad y_{n+1}=C_ny_n \qquad \hbox{ for } n\in \Z \end{equation*} are kinematically equivalent. \end{theorem} \begin{proof} See \cite[Theorem 4.6]{Babiarz}. \end{proof} \subsection{Proof of the main result}\label{Subsection3.3} \begin{proof}[Proof of Theorem \ref{MainTheorem}] Let $[a_1,b_1],\dots,[a_{\ell},b_{\ell}]$, where $1\leq \ell\leq d$, be arbitrary disjoint closed intervals.
For $1\leq i\leq \ell$, we define a positive scalar sequence $(p^i_n)_{n\in\Z}$ with $p^i_n=p^i_{-n}$ for $n\in\Z$ and \begin{equation}\label{Construction} p^i_n= \left\{ \begin{array}{ll} e^{a_i}, & \hbox{for } n\in [2^{2m},2^{2m+1}), m\in\Z_{\geq 0} ; \\[1ex] e^{b_i}, & \hbox{for } n\in [2^{2m+1},2^{2m+2}), m\in\Z_{\geq 0};\\[1ex] e^{a_i}, & \hbox{for } n=0. \end{array} \right. \end{equation} Consider the corresponding linear scalar system \begin{equation}\label{CorrespondingSystem} z_{n+1}=p^i_n z_n\qquad\hbox{ for } n\in\Z. \end{equation} By virtue of Proposition \ref{MainProposition}, the dichotomy spectrum of \eqref{CorrespondingSystem} satisfies $\Sigma_{\rm ED}(p^i)=\Sigma_{\rm ED}^{+}(p^i)$. By \eqref{Construction}, it is easy to see that $\Sigma_{\rm ED}^{+}(p^i)=[a_i,b_i]$ and then we arrive at \begin{equation}\label{ComputationED} \Sigma_{\rm ED}(p^i)=[a_i,b_i]\qquad\hbox{ for } i=1,\dots,\ell. \end{equation} For $\ell+1\leq i\leq d$, let $p^i_n=p^1_n$. According to Theorem \ref{Upper_Theorem}, there exist a bounded feedback control $U=(U_n)_{n\in\Z}\in \mathcal L^{\infty}(\Z,\R^{s\times d})$ and a sequence of upper-triangular matrices $(C_n)_{n\in\Z}\in \mathcal L^{\rm Lya}(\Z,\R^{d\times d})$, where $C_n=(c^{(n)}_{ij})_{1\leq i,j\leq d}$ with $c^{(n)}_{ii}=p^i_n$, such that \begin{equation*} x_{n+1}=(A_n+B_nU_n)x_n,\quad y_{n+1}=C_ny_n \qquad \hbox{ for } n\in \Z \end{equation*} are kinematically equivalent. This together with Proposition \ref{MainProposition} and \eqref{ComputationED} implies that \[ \Sigma_{\rm ED}(A+BU)=\Sigma_{\rm ED}(C)=\bigcup_{i=1}^d \Sigma_{\rm ED}(p^i)=\bigcup_{i=1}^{\ell}[a_i,b_i]. \] The proof is complete. \end{proof} \section{Appendix}\label{Section4} Consider a continuous time-varying linear system \begin{equation}\label{ED_01_Cont} \dot x(t)=W(t)x(t),\qquad t\in\R, \end{equation} where $W:\R\rightarrow \R^{d\times d}$ is measurable and bounded.
Let $\Phi_W(\cdot,\cdot):\R\times \R \rightarrow \R^{d\times d}$ denote the \emph{evolution operator} generated by \eqref{ED_01_Cont}, i.e. $\Phi_W(\cdot,s)\xi$ solves \eqref{ED_01_Cont} with the initial condition $x(s)=\xi$. Next, we introduce the notions of the one-sided and two-sided dichotomy spectra of \eqref{ED_01_Cont}. These notions are defined in terms of exponential dichotomy. Recall that system \eqref{ED_01_Cont} is said to admit an exponential dichotomy on $\T$, where $\T$ is either $\R,\R_{\geq 0}$ or $\R_{\leq 0}$, if there exist $K,\alpha>0$ and a family of projections $P:\T\rightarrow \R^{d\times d}$ such that for all $t,s\in\T$ we have \[ \begin{array}{cll} \|\Phi_W(t,s)P(s)\| & \leq K e^{-\alpha(t-s)} & \quad \hbox{ for } t\geq s;\\[1.5ex] \|\Phi_W(t,s)(\id-P(s))\| & \leq K e^{\alpha(t-s)} & \quad \hbox{ for } t\leq s. \end{array} \] \begin{definition}[Dichotomy spectrum for continuous time-varying linear systems]\label{DefinitionofED} The dichotomy spectra of \eqref{ED_01_Cont} on $\R, \R_{\geq 0}, \R_{\leq 0}$ are defined, respectively, as follows: \begin{eqnarray*} \Sigma_{\rm ED}(W) &:=& \big\{\gamma\in\R: \dot x=(W(t)-\gamma \id) x \hbox{ has no ED on } \R\big\}, \\[1.5ex] \Sigma_{\rm ED}^{+}(W) &:=& \big\{\gamma\in\R: \dot x=(W(t)-\gamma \id) x \hbox{ has no ED on } \R_{\geq 0}\big\}, \\[1.5ex] \Sigma_{\rm ED}^{-}(W) &:=& \big\{\gamma\in\R: \dot x=(W(t)-\gamma \id) x \hbox{ has no ED on } \R_{\leq 0}\big\}. \end{eqnarray*} \end{definition} It is proved in \cite{Siegmund,Kloeden} that $\Sigma_{\rm ED}(W)$ (also $\Sigma_{\rm ED}^{+}(W)$ and $\Sigma_{\rm ED}^{-}(W)$) is a compact set consisting of at most $d$ disjoint intervals. Now, we introduce systems associated with \eqref{ED_01_Cont}. The following system \begin{equation}\label{AssociatedEq} x_{n+1}=A_n x_n, \qquad \hbox{ where } A_n:=\Phi_W(n+1,n), \end{equation} is called the \emph{$1$-time discrete time-varying linear system associated with \eqref{ED_01_Cont}}, see also \cite{Cuong}.
Obviously, the evolution operator $\Phi_A(\cdot,\cdot):\Z\times\Z\rightarrow \R^{d\times d}$ is given by \begin{equation}\label{AssociatedEvolution} \Phi_A(m,n)=\Phi_W(m,n)\qquad\hbox{for } m,n\in\Z. \end{equation} The following lemma shows that the dichotomy spectra of \eqref{ED_01_Cont} and \eqref{AssociatedEq} coincide. \begin{lemma}\label{TechnicalLemma} The following statements hold: \[ \Sigma_{\rm ED}(W)=\Sigma_{\rm ED}(A), \quad \Sigma^+_{\rm ED}(W)=\Sigma^+_{\rm ED}(A),\quad \Sigma^{-}_{\rm ED}(W)=\Sigma^{-}_{\rm ED}(A). \] \end{lemma} \begin{proof} We only prove $\Sigma_{\rm ED}(W)=\Sigma_{\rm ED}(A)$; by using similar arguments we also have $ \Sigma^+_{\rm ED}(W)=\Sigma^+_{\rm ED}(A),\Sigma^{-}_{\rm ED}(W)=\Sigma^{-}_{\rm ED}(A)$. We divide the proof of this fact into two steps:\\ \noindent \emph{Step 1}: We show that $\Sigma_{\rm ED}(A)\subset \Sigma_{\rm ED}(W)$. For this purpose, let $\gamma\not\in \Sigma_{\rm ED}(W)$ be arbitrary. Then, by Definition \ref{DefinitionofED} and the fact that $e^{-\gamma(t-s)}\Phi_W(t,s)$ is the evolution operator of the shifted system \[ \dot x=(W(t)-\gamma \id) x, \] there exist $K,\alpha>0$ and a family of projections $P:\R\rightarrow \R^{d\times d}$ such that \[ \begin{array}{cll} \|\Phi_W(t,s)P(s)\| & \leq K e^{(\gamma-\alpha)(t-s)} & \quad \hbox{ for } t\geq s,\\[1.5ex] \|\Phi_W(t,s)(\id-P(s))\| & \leq K e^{(\gamma+\alpha)(t-s)} & \quad \hbox{ for } t\leq s. \end{array} \] In particular, by letting $P_n:=P(n)$ for $n\in\Z$ and using \eqref{AssociatedEvolution} we arrive at the following properties of the evolution operator $\Phi_A(m,n)$ generated by \eqref{AssociatedEq} \[ \begin{array}{cll} \|\Phi_A(m,n)P_n\| & \leq K e^{(\gamma-\alpha)(m-n)} & \quad \hbox{ for } m\geq n,\\[1.5ex] \|\Phi_A(m,n)(\id-P_n)\| & \leq K e^{(\gamma+\alpha)(m-n)} & \quad \hbox{ for } m\leq n. \end{array} \] Consequently, the shifted discrete time-varying system \[ x_{n+1}=e^{-\gamma}A_nx_n,\qquad n\in\Z, \] exhibits an exponential dichotomy.
Thus, $\gamma\not\in \Sigma_{\rm ED}(A)$.\\ \noindent \emph{Step 2}: We show that $\Sigma_{\rm ED}(W)\subset \Sigma_{\rm ED}(A)$. For this purpose, let $\gamma\not\in \Sigma_{\rm ED}(A)$ be arbitrary. By Definition \ref{Definition_DiscreteED}, there exist $K,\alpha>0$ and a family of projections $(P_n)_{n\in\Z}$ in $\R^{d\times d}$ such that \begin{equation}\label{Estimate1} \begin{array}{cll} \|\Phi_A(m,n)P_n\| & \leq K e^{(\gamma-\alpha)(m-n)} & \quad \hbox{ for } m\geq n,\\[1.5ex] \|\Phi_A(m,n)(\id-P_n)\| & \leq K e^{(\gamma+\alpha)(m-n)} & \quad \hbox{ for } m\leq n. \end{array} \end{equation} We define a map $P:\R\rightarrow \R^{d\times d}$ by \[ P(t):=\Phi_{W}(t,n)P_n\Phi_W(n,t)\qquad \hbox{ for } t \in [n,n+1), n\in\Z. \] Since $W(\cdot)$ is measurable and essentially bounded, i.e. $\mbox{ess}\sup_{t\in\R}\|W(t)\|<\infty$, it follows from Gronwall's inequality that \[ \kappa:=\sup_{|t-s|\leq 1} \|\Phi_W(t,s)\|<\infty. \] Thus, for any $t\geq s$, by letting $m:=\left\lceil t \right\rceil$ (the smallest integer greater than or equal to $t$), $ n:=\left\lfloor s \right\rfloor$ (the largest integer less than or equal to $s$) and using \eqref{Estimate1} we have \begin{eqnarray*} \|\Phi_W(t,s)P(s)\| &=& \|\Phi_W(t,s)\Phi_W(s,n)P_n\Phi_W(n,s)\|\\[0.5ex] &\leq & \kappa^2 \|\Phi_W(m,n)P_n\|\\[0.5ex] &\leq& \kappa^2 K e^{2|\gamma-\alpha|} e^{(\gamma-\alpha)(t-s)}. \end{eqnarray*} Similarly, for $t\leq s$ we have \[ \|\Phi_W(t,s)(\id-P(s))\| \leq \kappa^2 K e^{2|\gamma+\alpha|} e^{(\gamma+\alpha)(t-s)}, \] which implies that the shifted continuous time-varying system \[ \dot x= (W(t)-\gamma \id) x \] exhibits an exponential dichotomy. Thus, $\gamma\not\in \Sigma_{\rm ED}(W)$ and the proof is complete. \end{proof} \section*{Acknowledgement} This research is funded by Vietnam National University of Civil Engineering (NUCE) under grant number 202-2018/KHXD-TD.
Our attention to the problem of assignability of the dichotomy spectrum of linear discrete time-varying systems was initiated by the visit of Dr. Artur Babiarz, Prof. Adam Czornik and Dr. Michal Niezabitowski to the Institute of Mathematics, VAST in 2017. The authors thank them for an interesting and highly motivating introduction to this problem. \section*{References}
\section{Introduction}\label{sec:I} The possibility of measuring parity non-conservation (PNC) in atoms was first considered by Zeldovich in 1959\,\cite{Zeldovich}. However, atomic PNC experiments began only after Bouchiat and Bouchiat showed that parity mixing of atomic states scales as $\sim\rm{Z}^3$, so that measurable signals could be obtained for high-$Z$ atoms\,\cite{Bouchiat1}. For high-$Z$, the degree of $s$-$p$ parity mixing in some atomic states is of order $10^{-12}-10^{-10}$. The precise measurement of this atomic PNC can provide a stringent low-energy test of the standard model\,\cite{Diener}, of inter-nucleon weak interactions, and of nuclear structure\,\cite{Ginges}. \\ \indent There are several methods by which the parity mixing can be measured, and for each method the optimal atomic candidates are usually different. For example, the Stark interference technique has been used to measure PNC in Cs\,\cite{Wieman}, Yb\,\cite{Tsigutkin}, and Dy\,\cite{Nguyen}, and proposed for Fr\,\cite{Bouchiat2008} and Rb\,\cite{Sheng2010,DzubaRb2012}; the optical rotation technique has been used successfully for Tl\,\cite{Vetter}, Bi\,\cite{McPherson}, and Pb\,\cite{Meekhof1}; the ac-Stark shift method is proposed for Ba$^+$\,\cite{Sherman05} and Ra$^+$\,\cite{Wansbeek,Nunez2013} ions; and the hyperfine transition method is proposed for K\,\cite{Potassium}, Rb\,\cite{Sheng2010}, and Fr\,\cite{Gomez2006}. To date, the most successful atomic PNC measurement has been the 0.35\% precision measurement of nuclear-spin-independent PNC in Cs\,\cite{Wieman}. As the precision in the atomic theory of other PNC candidates is not expected to significantly surpass the theoretical precision of Cs, current experiments are aiming at other important goals that are not dependent on extremely precise atomic theory calculations.
Examples include the measurement of atomic PNC on a chain of isotopes\,\cite{Dzuba1986,Fortson1990}, and the measurement of nuclear spin-dependent effects\,\cite{Ginges}. Therefore, along these lines, PNC experiments are in progress as mentioned above\,\cite{Tsigutkin, Nguyen, Sherman05, Nunez2013}.\\ \indent Due to the difficulty of controlling all the relevant parameters to the required precision, there have been only a handful of successful atomic PNC measurements, and even the few successful experiments have typically required 10-20 years to yield precise results \,\cite{Wieman, Vetter, Tsigutkin}. In addition, some of the current atomic PNC experiments are no longer tabletop, as they are performed on radioactive isotopes with short half lives of a few minutes, such as on Fr at TRIUMF\,\cite{Gomez2006} and Ra$^+$ at KVI \,\cite{Wansbeek,Nunez2013}.\\ \indent Recently, our group has proposed an extension of the optical rotation technique, with the use of a novel bow-tie optical cavity\,\cite{Bougas}. We show in detail that the proposed cavity-enhanced technique produces large experimental optical rotation signals and robust experimental checks, and allows new atomic candidates to be considered. Specifically, our proposal has several potential advantages, which solve some of the problems of past PNC optical rotation experiments. These advantages include the following:\\ \indent (a) The effective optical-rotation pathlength is enhanced using a high-finesse cavity, by $2\mathcal{F}/\pi$ where $\mathcal{F}$ is the finesse of the cavity (for high-finesse cavities, $\mathcal{F}\sim10^4-10^5$), allowing the study of PNC in atomic systems for which single-pass optical rotation from available column densities is otherwise too small. 
We focus on metastable states in Hg and Xe\,\cite{Bougas}, and ground-state\,I atoms\,\cite{PNCI}, for which the single-pass optical rotation from available column densities requires enhancement by between $10^2$ and $10^4$ cavity passes to produce measurable signals. In addition, the proposed atomic systems are compatible with a high-finesse optical cavity, as high atomic densities can be produced at around room temperature (for the case of Tl, Bi, and Pb, temperatures in excess of 1000\,K were required, which is difficult to combine with high-transmission windows and a stable optical cavity).\\ \indent (b) Two novel signal reversals are introduced. The main limitation in the original optical rotation experiments was the lack of rapid subtraction procedures or signal reversals. The proposed signal reversals are effected either by inverting the longitudinal magnetic field in the cavity, or by shifting the cavity resonance to an opposite polarization mode. These signal reversals can be performed at a high repetition rate, and allow the absolute optical rotation to be measured, without needing to remove the gas sample from the cavity. In addition, as metastable Hg and Xe and ground-state I can be produced by optical pumping, photodissociation or electrical discharge, the concentration of these species can be varied very quickly, giving an additional rapid subtraction procedure.\\ \indent (c) Of the proposed PNC candidates, both Hg and Xe have several commercially available stable isotopes. In addition, Hg and Xe each have two isotopes with an odd-neutron nucleus ($^{199}$Hg, $^{201}$Hg, and $^{129}$Xe, $^{131}$Xe). Moreover, iodine has a radioactive isotope, $^{129}$I, which can be commercially obtained. Both I isotopes have an odd-proton nucleus.
Therefore, nuclear spin-dependent effects can be measured for both odd-neutron and odd-proton nuclei, and PNC measurements can be performed along a chain of isotopes\,\cite{Dzuba1986, Fortson1990, Brown2009}. \\ \indent The aim of this paper is to explain the main features of the cavity-enhanced PNC optical rotation scheme in depth and to present simulated experimental signals for Hg, Xe, and I. In Section\,\ref{sec:II} we describe in brief the origin of the PNC optical rotation. In Section\,\ref{sec:AtomicSystems} we introduce the atomic systems considered for future PNC investigations using the optical rotation technique. In addition, we examine the experimental feasibility of PNC measurements in these atomic systems. In Section\,\ref{sec:III} we describe the properties of the cavity-enhanced scheme and derive the eigenmodes of a bow-tie cavity with circular birefringence (Faraday rotation and PNC optical rotation) and linear birefringence, and discuss how the signal reversals are implemented. In Section\,\ref{sec:IV} we simulate the PNC lineshapes for several transitions in Hg, Xe, and I, for a range of experimental conditions, and discuss the results.\\ \section{PNC Optical Rotation}\label{sec:II} \indent In this section we present the physics of the PNC optical rotation technique. We note that the equations appearing here are expressed in S.I. units and the presented formulas largely follow the structure of Refs.\,\cite{Sand,Vet}, with helpful material coming from Refs.\,\cite{Sob,Ver,Rus,Sid,Pur,Kat,Steck}. In addition, the derivation of the PNC rotation angle also draws from Refs.\,\cite{Ginges,DzuFlaXeHg,ForLew}.\\ \newline \indent A PNC neutral-current interaction between the electrons and the nucleus of an atom mixes the parity eigenstates of the atom. This PNC-induced mixing allows for a weak electric-dipole transition, with amplitude $E1_{\rm{PNC}}$, between states of the same parity.
The size of $E1_{\rm{PNC}}$ increases approximately as $\sim\rm{Z}^3$ and is inversely proportional to the energy difference between the states of opposite parity mixed by the weak interaction\,\cite{Bouchiat1}, and typically is of order 10$^{-11}$-10$^{-10}\,\rm{e}\alpha_B$ ($e$ is the charge of the electron, and $\alpha_B$ the Bohr radius). Measurement of this small parity nonconserving amplitude is achieved through its interference with a larger parity conserving amplitude. \\ \indent In a PNC optical rotation experiment, the parity conserving amplitude is an allowed magnetic-dipole amplitude $M1$. The interference between the dominant $M1$ allowed amplitude and the PNC-induced $E1_{\rm{PNC}}$ amplitude leads to optical activity. The PNC-induced optical rotation $\varphi_{_{\mathrm{PNC}}}$ arises due to the difference in the indices of refraction for left- and right-circularly polarized light in the vicinity of the magnetic dipole resonance: \begin{equation}\label{eq:phinq} \varphi_{_{\mathrm{PNC}}} = \frac{\omega l}{c} \frac{n'_+ - n'_-}{2} = \frac{\pi l}{\lambda} (n'_+ - n'_-), \end{equation} where $l$ is the length of vapor, $\lambda$ is the optical wavelength, $\omega$ is the optical frequency, and $n^{\prime}_{\pm}$ are the real parts of the refractive indices for left- and right-circularly polarized light, respectively (which are functions of the optical frequency $\omega$). \\%_______ M1 dipole interaction \subsection{$M1$ Magnetic dipole interaction} \indent We assume a magnetic dipole interaction of a laser beam with an atomic vapor.
Treating the transition as a damped oscillator with a damping factor $\Gamma$, the index of refraction can be put in the form: \begin{equation}\label{eq:ngen} n = n' + i n'' = 1 + \frac{\pi\mu_{\rm o} e^2}{4 m \omega_{\rm o}}\: \rho f\: \L (\omega-\omega_{\rm o}), \end{equation} \noindent where $\omega_{\rm o}$ is the resonant transition frequency, $\rho$ the vapor density, $f$ the oscillator strength and $\L=\L'+i\L''$ the Lorentz lineshape function (given in Eqs. \eqref{eq:lorentzD} and \eqref{eq:lorentzA}). Assuming that the transition is an isolated $J\rightarrow J'$ line without hyperfine structure, we have: \begin{equation}\label{eq:f} f= \frac{2\, m\, \omega_{\rm o}}{3\, \hbar\, e^2}\: \frac{M1^2}{2 J+1}, \end{equation} \noindent where $M1\equiv\mean{M1} \equiv \JRME{\mu^{(1)}}$ is the reduced matrix element for the magnetic dipole operator. Taking into account the Doppler broadening mechanism of the thermal vapor (see Appendix \ref{app:LorDop}), and using Eq.\,\eqref{eq:f}, we can put Eq.\,\eqref{eq:ngen} in the form: \begin{equation}\label{eq:nmatel} n = 1 + \frac{\pi\mu_{\rm o}}{2 \hbar}\: \frac{\rho}{2 J+1}\: \frac{M1^2}{3}\: \mathcal{V}(\omega-\omega_{\rm o}). \end{equation} The Voigt profile functions in $\mathcal{V} = \mathcal{V}' + i \mathcal{V}''$ are given in Eqs. \,\eqref{eq:voigtA} and \eqref{eq:voigtD}. Assuming a non-zero nuclear spin, $I$, we must take into account the hyperfine structure. Using: \begin{widetext} \begin{equation}\label{eq:FvsJRME} \FRME{T^{(k)}}=(-1)^{I+k+J+F'}\sqrt{\left(2 F+1\right)\left(2 F'+1\right)}\sixj{J}{k}{J'}{F'}{I}{F} \JRME{T^{(k)}}, \end{equation} \end{widetext} \noindent where $k$ is the tensor rank of the operator $T$, and from the fact that the population density of the ground hyperfine state $F$ is: \begin{equation} \rho(F)=\frac{2 F+1}{(2 J+1)(2 I+1)}\, \rho, \end{equation} \noindent then, from Eq. 
\eqref{eq:nmatel}, we get by summing over final states and averaging over initial states: \begin{equation}\label{eq:nF} n = 1 + n_{\rm o}\:\sum_{F,F'} C_{FF'}\: \mathcal{V}_{FF'}(\omega), \end{equation} \noindent where we have defined: \begin{align}\label{eq:no} n_{\rm o}&= \frac{\pi\,\mu_{\rm o}}{2\, \hbar}\: \frac{\rho}{2 J+1}\: \frac{M1^2}{3},\\ \label{eq:CFF} C_{FF'} &= \frac{\left(2 F+1\right)\left(2 F'+1\right)}{2 I+1}\sixj{J}{1}{J'}{F'}{I}{F}^2, \end{align} and $\mathcal{V}_{FF'}(\omega)\equiv\mathcal{V}(\omega-\omega_{FF'})$ for a specific $F\rightarrow F^{\prime}$ transition. Note that $n_{\rm o}$ is not a dimensionless quantity.\\ \subsection{$E1_{\rm PNC}$ electric dipole interaction} \indent The PNC-induced electric dipole term is included in the above formulas by performing the following substitution in Eq.\,\eqref{eq:nF} and Eq.\,\eqref{eq:no}: \begin{equation} \frac{M1^2}{3}\rightarrow \mid\!\JRME{q\, i\, d^{(1)}_q+\mu^{(1)}_q}\!\mid^2, \end{equation} \noindent where $d^{(1)}$ is the electric dipole operator, $q=\pm1$, and the $i$ ensures that $E1_{\rm PNC}\equiv\mean{E1_{\rm PNC}}\equiv\JRME{id^{(1)}}$ is purely imaginary\,\cite{Bouchiat1}. \\ \indent The difference between the refractive indices for left- ($\sigma^+$) and right- ($\sigma^-$) circularly polarized light is proportional to: \begin{equation}\label{eq:dnPNC} n_+ - n_- \propto 2\,i\, M1 \left(E1_{\rm PNC}-E1_{\rm PNC}^*\right) = -4 M1^2 \mathcal{R}, \end{equation} \noindent where we used $E1_{\rm PNC}^*=-E1_{\rm PNC}$ and introduced the factor $\mathcal{R}$: \begin{equation}\label{eq:R} \mathcal{R}\equiv{\rm Im}\left(\frac{E1_{\rm PNC}}{M1}\right). \end{equation} Using Eq.\,\eqref{eq:phinq} and Eq.\,\eqref{eq:dnPNC}, the PNC optical rotation angle is given by: \begin{equation}\label{eq:phiPNC} \varphi_{_{\mathrm{PNC}}} = -\frac{4 \pi l}{\lambda} \:[n(\omega)- 1]\:\mathcal{R} \end{equation} \noindent where $n(\omega)$ is the index of refraction of the medium (Eq.
\eqref{eq:nF}), which is a function of the transition frequency $\omega$. The proportionality relation between the PNC optical rotation angle $\varphi_{_{\mathrm{PNC}}}$ and the ratio $\mathcal{R}$ serves as the basis for this experimental technique.\\ \newline \indent Note that the corresponding electric dipole formulas for Eqs.\,\eqref{eq:f}, \eqref{eq:ngen} and \eqref{eq:no} are obtained simply by substituting $\mu_{\rm o} \rightarrow 1/\varepsilon_{\rm o}$ and $\mean{M1}\rightarrow\mean{E1}$ (with $\mean{M1}$ in $\mu_{\rm B}$ and $\mean{E1}$ in $e \alpha_{\rm o}$). \subsection{Nuclear spin-dependent PNC effects -- Anapole moment} \indent Nuclear spin-dependent (NSD) contributions to atomic parity violation arise due to: (a) neutral weak-current interactions between the electron and the nucleus\,\cite{Novikov}, (b) electromagnetic interaction of the electron with the nuclear anapole moment\,\cite{Khriplovich1980}, and (c) spin-independent electron-nucleon weak interactions combined with magnetic hyperfine interactions\,\cite{Flambaum1985k2}.
These contributions can be included in a dimensionless constant $\varkappa$, proportional to the strength of the NSD-PNC interaction\,\cite{Ginges,Flambaum1985,AnapoleFlam1984}: \begin{equation}\label{eq:varkappa} \varkappa=\varkappa_A-\frac{\mathcal{K}-1/2}{\mathcal{K}}\varkappa_2+\frac{I+1}{\mathcal{K}}\varkappa_{\mathcal{Q}_{\rm W}}, \end{equation} where $\mathcal{K}=(-1)^{I+\frac{1}{2}-l}(I+1/2)$ ($l$ is the orbital angular momentum of the valence nucleon), $\varkappa_2\approx-0.05$\,\cite{Ginges,Flambaum1985k2} corresponds to the weak neutral currents, $|\varkappa_{\mathcal{Q}_{\rm W}}|\approx0.02$\,\cite{Ginges} appears as a radiative correction to the NSI part, and $\varkappa_A$ is the nuclear anapole moment contribution to the NSD-PNC effects.\\ \indent The nuclear anapole moment $\varkappa_A$ is given by (in a simple valence model)\,\cite{AnapoleFlam1984}: \begin{equation} \varkappa_A=1.15\times10^{-3}A^{2/3}\mu_m g_m, \label{eq:anapole}\end{equation} where $A$ is the number of nucleons and $\mu_m$ is the magnetic moment of the unpaired nucleon ($\mu_{p}=+2.8$ and $\mu_{n}=-1.9$). The dimensionless constant $g_m$ gives the strength of the weak interactions between the nucleons. Theoretical estimates suggest that for neutrons $g_{n}\approx-1$ and for protons $g_{p}\approx+4.5$\,\cite{AnapoleFlam1997}. From Eq.\,\eqref{eq:anapole} we see that the nuclear anapole moment scales with the number of nucleons ($\varkappa_A\propto A^{2/3}$). For this reason, the anapole moment gives the largest contribution to NSD parity-violating effects in heavy atoms\,\cite{Ginges}.
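As a numerical illustration of the scaling in Eq.\,\eqref{eq:anapole}, the following sketch evaluates $\varkappa_A$ for an unpaired-proton and an unpaired-neutron nucleus (the isotope choices here are ours, for illustration only):

```python
def kappa_A(A, mu, g):
    """Valence-model anapole moment, kappa_A = 1.15e-3 * A^(2/3) * mu * g."""
    return 1.15e-3 * A ** (2.0 / 3.0) * mu * g

# Unpaired proton (e.g. A = 133, as for Cs): mu_p = +2.8, g_p ~ +4.5.
k_p = kappa_A(133, +2.8, +4.5)    # ~0.38
# Unpaired neutron (e.g. A = 129, as for 129Xe): mu_n = -1.9, g_n ~ -1.
k_n = kappa_A(129, -1.9, -1.0)    # ~0.06
print(k_p, k_n)
```

Both values land at the order of $0.1$, in line with the estimates of the following paragraph.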
Using Eq.\,\eqref{eq:anapole}, we see that the value of the anapole moment is $\varkappa_A\approx0.1-1$\,\cite{Khriplovich1980,AnapoleFlam1984}.\\ \indent The PNC matrix element is expressed in terms of a nuclear-spin-independent (NSI) and a nuclear-spin-dependent (NSD) component as follows\,\cite{Ginges}: \begin{widetext} \begin{equation}\label{eq:E1PNCSISD} \FRME{E1_{\rm PNC}}=\FRME{E1_{\rm PNC}^{\rm (SI)}}+\FRME{E1_{\rm PNC}^{\rm (SD)}}=K_{FF'} E1_{\rm PNC} (1+r_{FF'}\varkappa), \end{equation} \end{widetext} \noindent where $K_{FF'}$ is the angular factor (using $k=1$ in Eq.\,\eqref{eq:FvsJRME}), $r_{FF'}$ is the ratio of spin-dependent to spin-independent PNC amplitudes, and $\varkappa$ is given by Eq.\,\eqref{eq:varkappa}. From Eq.\,\eqref{eq:E1PNCSISD} we see that, by measuring the PNC amplitudes for two different hyperfine components of a specific transition, the value of $\varkappa$ can be extracted from the ratio of the measured amplitudes.\\ \indent The PNC rotation angle can be split into an NSI and an NSD part: \begin{equation}\label{eq:phiPNCSISD} \varphi_{_{\mathrm{PNC}}} = \varphi_{\rm SD} + \varphi_{\rm SI} = -\frac{4 \pi l}{\lambda} \:[n(\omega) - 1]\:(\mathcal{R}_{\rm SI} + \mathcal{R}_{\rm SD}). \end{equation} Calculated values of $E1_{\rm PNC}$, $\mathcal{R}$ and of the ratios $r_{FF'}$ (and thus of $\mathcal{R}_{\rm SI}$ and $\mathcal{R}_{\rm SD}$) for the various proposed transitions in Xe, Hg and I can be found in Refs.\,\cite{DzuFlaXeHg} and \cite{PNCI}.\\ \begin{center} \begin{table*}[ht] \caption{Reduced matrix elements for the $M1$ and $E1_{\rm PNC}$, and $\mathcal{R}\equiv \text{Im}(E1_{\rm PNC})/M1$ for the proposed atomic transitions.
Note that for one absorption length $\varphi_{_{\mathrm{PNC}}}^{\rm{max}}\approx \mathcal{R}/2$.} \label{t:pnc}
\begin{tabular}{c c c c c c c cc c c c c}
\hline\hline\\[-1.4ex]
&$Z$ &Transition & $\lambda$ &$\quad$& $M1$ &$\quad$& Isotopes &$\quad$& Im$(E1_{\rm PNC})$ &$\quad$& $\mathcal{R}$ &$\quad$\\
& & & (nm) && ($\mu_B$) && with $I\neq0$ && $\times10^{-10}$\,$e\alpha_B$ && $\times10^{-8}$ &$\quad$\\[1.3ex]
\hline\\[-1.7ex]
I &53& $^2$P$_{3/2}\rightarrow ^2$P$_{1/2}$ & 1315 && 1.15 && $^{127}$I && 0.335(67) && 0.80(16) &$\quad$\\[1.3ex]
\hline\\[-1.3ex]
\multirow{3}{*}{} && $^3$P$^{\circ}_{0}\rightarrow ^1$P$^{\circ}_{1}$ & 609 && 0.229 && $\quad$ && 3.4(2),\,3.5(2) && 41(2),\,42(2) &$\quad$\\
Hg&80&$^3$P$^{\circ}_{1}\rightarrow ^1$P$^{\circ}_{1}$ & 682 && 0.199 && \{$^{199}$Hg,\,$^{201}$Hg\} && 5.3(3),\,5.4(3) && 73(4),\,74(4) &$\quad$\\
&& $^3$P$^{\circ}_{2}\rightarrow ^1$P$^{\circ}_{1}$ & 997 && 0.272 && $\quad$ && 3.7(2),\,3.8(2) && 37(2),\,38(2) &$\quad$\\[1.3ex]
\hline\\[-1.7ex]
Xe &54& $6s\rm{}^2[3/2]^{^{\text{o}}}_{2} \rightarrow 6s^{\prime} \rm{}^2[1/2]^{^{\text{o}}}_{1}$ & 988 && 1.22 && \{$^{129}$Xe,\,$^{131}$Xe\} && 3.17(31),\,3.23(32) && 7.1(7),\,7.3(7) &$\quad$\\[1.3ex]
\hline\hline
\end{tabular}
\end{table*}
\end{center}
\begin{figure}
\includegraphics[width=\linewidth]{energyLevels.pdf}
\caption{\label{fig:EnergyLevels} Partial energy level diagram of Xe, Hg and I (not to scale) showing the proposed $E1_{\rm PNC}$ and $M1$ transitions. In addition, the hyperfine structure levels for the odd isotopes of Xe, Hg and I are presented.
For each atomic system, indicated in green are the individual $F\rightarrow F^{\prime}$ transitions constituting the separated hyperfine groups of Figs.\,\ref{fig:Hg},\,\ref{fig:Xe},\,\ref{fig:Iodine}.} \end{figure} \section{Atomic Systems \& Experimental Feasibility}\label{sec:AtomicSystems} \subsection{PNC candidates: Xe, Hg \& I} \indent We have identified the following favorable PNC transitions in the atomic systems of Xe, Hg and I: (a) In metastable Xe, the $M1$ transition $(^2P^{^o}_{3/2})6s \text{ } ^2[3/2]^{^o}_{2}\rightarrow (^2P^{^o}_{1/2})6s \text{ } ^2[1/2]^{^o}_{1}$ with transition wavelength $\lambda=988$\,nm, (b) in metastable Hg, the transitions $6s6p \text{ } ^3P^{^o}_J \rightarrow 6s6p \text{ }^1P^{^o}_{1}$ at 609\,nm ($J=0$), 682\,nm ($J=1$), and 997\,nm ($J=2$), and (c) the spin-orbit transition of $^{127}$I, $^{2}$P$_{3/2}\rightarrow ^{2}$P$_{1/2}$ with transition wavelength 1315\,nm. Partial energy diagrams of the three proposed atomic systems are presented in Fig.\,\ref{fig:EnergyLevels}. \\ \indent In Bougas {\it et al.}\,\cite{Bougas}, preliminary atomic calculations for the magnetic-dipole ($M1$) and the PNC electric-dipole ($E1_{\rm PNC}$) transition amplitudes for the proposed transitions in metastable Xe and Hg were presented (note that the simulations presented in Ref.\,\cite{Bougas} were based on these preliminary calculations). More recently, Dzuba and Flambaum\,\cite{DzuFlaXeHg}, using the configuration interaction technique, presented new calculations for the relevant transition dipole amplitudes of the proposed transitions in Xe and Hg. In particular, for the case of Hg, the spin-forbidden $M1$ transition amplitudes were overestimated in Ref.\,\cite{Bougas} and the new calculated numbers for the $M1$ dipole amplitudes were found to be strongly suppressed. In the case of Xe, the $M1$ dipole amplitude was found to differ by 6\% from the initial calculation presented in Ref.\,\cite{Bougas}.
In this article we use the transition amplitudes presented in Ref.\,\cite{DzuFlaXeHg} for the simulations of the expected PNC optical rotation signal under specific experimental conditions (see Section \ref{sec:IV}). In Table \ref{t:pnc} we summarize the results presented in Ref.\,\cite{DzuFlaXeHg}, along with the preliminary atomic calculations for the dipole transition amplitudes of the proposed PNC optical-rotation scheme in ground state I, as presented in Ref.\,\cite{PNCI}.\\ \subsection{Experimental feasibility} \indent In the optical rotation experiments using Tl, Bi and Pb vapors, PNC optical rotation angles of $\sim$1\,$\mu$rad were measured (in the case of Tl with an experimental precision of 1\%)\,\cite{Vetter,McPherson,Meekhof1}. In order to achieve PNC rotation angles of the order of $\sim1\mu$rad, column densities of $\sim10^{18}-10^{19}$\,cm$^{-2}$ thermal atoms were required, which correspond to optical depths of 10-60. Using Eq.\,\eqref{eq:phiPNC} an estimate for the maximum expected PNC optical rotation signal can be given. Assuming a Lorentzian dispersion curve, Eq.\,\eqref{eq:phiPNC} yields $\varphi_{_\text{PNC}}\approx \mathcal{R}/2$ for one absorption length.\\ \indent The production of Xe metastable states $6s \text{ } ^2[3/2]^{^o}_{2}$ and Hg $^3P_J$ has been realized using low-pressure electrical discharge lamps \cite{Lawler,Busshian} or optical pumping \cite{Happer}, yielding steady-state densities of about $10^{12}\,\text{cm}^{-3}$, allowing column densities of about $10^{14}\,\text{cm}^{-2}$ (over a single-pass path-length of 100\,cm). Similarly, high iodine atom densities of $\sim10^{16}$\,cm$^{-3}$ have been achieved in glow discharges (requiring high precursor and carrier gas pressures). 
Also, the photodissociation of $\rm{I}_2$ molecules is expected to yield atomic densities of $10^{14}-10^{16}$\,cm$^{-3}$ of ground-state $^2\rm{P}_{3/2}$ iodine atoms, thus obtaining single-pass column densities of $10^{18}\,\text{cm}^{-2}$ for an interaction path length of 100\,cm\,\cite{PNCI}.\\ \indent Given the calculated values of $\mathcal{R}$ for Xe, Hg and I (Table\,\ref{t:pnc}), and the experimentally feasible column densities for each of the proposed atomic systems stated above, we see that single-pass PNC optical rotation angles of $\sim10^{-11}-10^{-9}$\,rad are expected. The polarization rotation noise per unit bandwidth in a balanced polarimeter is $\sim2$\,nrad/$\sqrt{\rm Hz}$ (assuming shot-noise-limited detection for a probe beam with a power of $\sim$10\,mW). This reasoning dictates that an additional enhancement factor ($\sim10^2-10^4$) is necessary to achieve measurable signals.\\ \indent In the following section we revisit the experimental technique proposed in Ref.\,\cite{Bougas}, and describe in detail the principles of the cavity-enhanced scheme as well as the measurement procedure.\\
\section{Cavity-Enhanced Polarimetry}\label{sec:III} \indent In comparison to a single-pass instrument, a cavity-enhanced polarimeter introduces a phase-shift enhancement factor of $2\mathcal{F}/\pi$, where $\mathcal{F}\equiv \pi \sqrt[4]{R_t}/(1-\sqrt{R_t})$ is the finesse of the polarimeter ($R_t\!=\!R_1R_2R_3R_4$, where $R_i$ is the reflectivity of the $i$th mirror). Using high-finesse cavities, measurements with shot-noise-limited phase-shift resolution at the level of $3\times10^{-13}$\,rad have been demonstrated\,\cite{Durand}.
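The required enhancement can be quantified with the finesse expression given above; a short numerical sketch (the mirror reflectivity $R_i=0.9999$ is an assumption, consistent with the photon-lifetime estimate quoted later in the text):

```python
import math

def finesse(R_total):
    """Finesse F = pi * R_t^(1/4) / (1 - sqrt(R_t)) of the four-mirror cavity."""
    return math.pi * R_total ** 0.25 / (1.0 - math.sqrt(R_total))

R_t = 0.9999 ** 4              # four equal mirrors (assumed reflectivity 0.9999)
F = finesse(R_t)               # ~1.6e4
enhancement = 2 * F / math.pi  # phase-shift enhancement factor, ~1e4

# A single-pass PNC rotation of ~1e-10 rad is lifted to ~1e-6 rad, above the
# ~nrad/sqrt(Hz) shot-noise floor of a balanced polarimeter.
phi_enhanced = enhancement * 1e-10
print(F, phi_enhanced)
```

This places the expected signals at the $\mu$rad level already measured in the single-pass Tl, Bi and Pb experiments.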
\\ \indent In Ref.\,\cite{Bougas}, a cavity-enhanced polarimetric technique implementing signal reversals was proposed for the enhancement and precise measurement of the PNC optical rotation angle $\varphi_{_{\mathrm{PNC}}}$ of Eq.\,\eqref{eq:phiPNC}. The experimental scheme consists of a four-mirror cavity in a bow-tie configuration. A four-mirror cavity design has three main advantages over linear cavities: (a) it provides the ability to measure simultaneously polarization effects of different symmetry under time-reversal (as in the case of magneto-optical effects and natural optical activity) without altering the apparatus during measurements, (b) it supports counter-propagating beams, which give an immediate signal reversal, and (c) it avoids mechanical adjustments of possible intracavity optical elements, as in the case of two-mirror cavities used for the measurement of natural optical activity in the gas phase, where intracavity quarter-wave plates needed to be modulated mechanically\,\cite{Poirson}. In this section, we present the eigenpolarization theory for the cavity-enhanced polarimeter, based on the Jones matrix calculus\,\cite{Jones, Byer, Vaccaro}, and discuss in detail the principles of the proposed experimental technique.\\ \indent In the Jones matrix formalism, the effect of any optical element on the polarization state vector of the laser light is described as a linear operator, expressed by a $2\times 2$ matrix whose matrix elements are in general complex. The direct incorporation of amplitude and phase information allows for the investigation of coherent phenomena. Furthermore, since the incident CW and CCW beams will be mode-matched into the TEM$_{00}$ mode of the four-mirror cavity, we focus our analysis on the polarization properties of the longitudinal modes for either propagation direction. In addition, changes in the spatial profile of the laser beams, introduced by the intracavity elements, are neglected.
The Jones matrices corresponding to each of the optical elements used in the proposed apparatus are denoted hereafter by boldface letters $\mathbf{J}$. \begin{figure}[h!] \begin{center} $\begin{array}{l@{\hspace{0.21in}}r} \multicolumn{1}{l}{\mbox{(a)}} &\\ \includegraphics[angle=0, width=0.9\linewidth]{ExpScheme_DishcargeCell+TGG+532nm_CoilRasterized.pdf}&\\[-0.1cm] \multicolumn{1}{l}{\mbox{(b)}} &\\ \includegraphics[angle=0, width=0.9\linewidth]{CCavityModeFeqSpectrumCases.pdf}&\\[-0.1cm] \end{array}$ \end{center} \caption{\small{(color online). (a) Proposed experimental setup. The input laser beam is split into two parts of equal intensity and orthogonal polarizations. The laser frequency is brought into resonance with the nearly degenerate R$_{_{\text{CW}}}$-L$_{_{\text{CCW}}}$ modes of the cavity. Upon exit from the cavity, the counterpropagating outputs are recombined into linearly polarized light, and analyzed with linear and circular balanced polarimeters (BP1 and BP2, respectively). The 532\,nm laser beam that will be used for the production of high atomic iodine densities through the photodissociation of I$_2$ is also depicted. (b) Cavity frequency polarization-spectrum: i) Faraday effect splits the cavity spectrum into $R$ and $L$ modes by $2\omega_{_\text{F}}=2\theta_{_{\text{F}}}(c/L)$ (two-fold degeneracy); ii) the PNC optical rotation further splits the CW and CCW modes by $2\omega_{_\text{PNC}}=2\varphi_{_{\mathrm{PNC}}}(c/L)$, while the cavity modes remain circular polarization states; iii) in the presence of linear birefringence ($\delta\neq0$) the frequency splitting of the eigenmodes increases as $\omega^{\prime}_{_{\text{F}}}=(1/q)\,\omega_{_\text{F}}$, while the measured PNC-induced splitting is reduced as $\omega^{\prime}_{_{\text{PNC}}}=q\,\omega_{_{\text{PNC}}}$ ($0\leqslant q\leqslant1$, see Fig.\,\ref{fig:QFactor}); the eigenmodes transform into elliptical states as observed from the different amplitudes of the output light (see text for discussion).
For the simulations, we assumed that the CW input beam was $p$-polarized, while the CCW beam was $s$-polarized. In i)-iii), the gray dashed line corresponds to the four-fold degenerate axial mode of an isotropic cavity.}} \label{fig:PNCexp} \end{figure} \begin{figure}[h!] \includegraphics[angle=0, width=1.\linewidth]{ModeSplittingv3.pdf} \caption{\small{(color online) The presence of linear birefringence $\delta$ prevents the enhancement of the PNC optical rotation $\varphi_{_{\mathrm{PNC}}}$. The resonance peaks of the cavity eigenpolarization frequency spectrum are presented as a function of the ratio $\delta/\alpha$. As $\delta$ increases, and therefore the magnitude of the total cavity anisotropies increases, the frequency difference between the respective CW (or CCW) R-L modes increases by $1/q$, while the PNC-induced frequency splitting (exaggerated here for clarity) decreases by $q$ (Eq.\,\eqref{eq:QFactor}).}} \label{fig:QFactor} \end{figure} \subsection{Jones Matrices for Polarization Optics} The Jones matrix for reflection is the same for CW and CCW propagation and is given by: \begin{equation} \mathbf{J}_{{\rm M}_i}(\delta_i)=\sqrt{R_i} \left(\begin{array}{cc}-e^{i\delta_i/2} & 0 \\0 & e^{-i\delta_i/2} \end{array}\right), \end{equation} where the index $i$ ranges from 1 to 4. We assume that the Fresnel amplitude reflection coefficients for the \textit{s} and \textit{p} polarizations are equal in magnitude (an assumption expressed by the common factor $\sqrt{R_i}$), which is a good approximation for near-normal angle-of-incidence reflections, as in the case of a bow-tie cavity. The differential $s$-$p$ phase shift $\delta_{i}=\delta_p-\delta_s$ represents the linear birefringence obtained upon mirror reflection.
For non-normal incidence, these $s$-$p$ phase shifts can be of the order of $10^{-3}$ rad, while for normal incidence of the order of $10^{-5}$ to $10^{-6}$ rad at a specific design wavelength (for gyro-quality super-mirrors at normal incidence, the linear birefringences can be as low as $\sim\!0.1\,\mu$rad)\,\cite{Hall}.\\ \indent In the presence of a longitudinal magnetic field, a medium becomes circularly birefringent, an effect otherwise known as the Faraday effect\,\cite{BudkerRMP}. The Faraday optical rotation is expressed as: $\theta_{_{\text{F}}}=V\text{B}l$, where B is the magnetic field strength along the direction of light propagation, $l$ is the pathlength of interaction, and $V$ is the Verdet constant of the medium. The Jones matrix for the Faraday rotation is an SU(2) rotation matrix with argument $\theta_{_{\text{F}}}$: \begin{equation} \mathbf{J}_{_{\text{F}}}(\theta_{_{\text{F}}})=\left(\begin{array}{cc} \cos\theta_{_{\text{F}}}& -\sin\theta_{_{\text{F}}} \\ \sin\theta_{_{\text{F}}} & \cos\theta_{_{\text{F}}} \end{array}\right). \end{equation} Note that the physical direction of the polarization rotation is defined by the magnetic field orientation. Due to the non-reciprocal nature of the Faraday effect, when either the magnetic field or the direction of propagation of the light reverses, the sign of rotation reverses (in the light-frame). Thus, for CCW propagation, the Faraday rotation becomes $\theta^{\rm{ccw}}_{_{\rm{F}}} = -\theta^{\rm{cw}}_{_{\text{F}}}$.
As we shall see, this directional symmetry breaking, induced by the Faraday effect, is essential to our signal reversals.\\ \indent The Jones matrix representing the PNC optical rotation will also be an SU(2) rotation matrix with argument $\varphi_{_{\mathrm{PNC}}}$: \begin{equation} \mathbf{J}_{_{\text{PNC}}}(\varphi_{_{\mathrm{PNC}}})=\left(\begin{array}{cc} \cos\varphi_{_{\mathrm{PNC}}}& -\sin\varphi_{_{\mathrm{PNC}}} \\ \sin\varphi_{_{\mathrm{PNC}}} & \cos\varphi_{_{\mathrm{PNC}}} \end{array}\right), \end{equation} where $\varphi_{_{\mathrm{PNC}}}$ is given by Eq.\,\eqref{eq:phiPNC}. The PNC optical rotation, being a pseudoscalar quantity, is odd under parity transformations \emph{and} even under time-reversal transformations. Therefore, the Jones matrix describing PNC optical rotation will be the same for both CW and CCW propagation directions, $\varphi_{_{\mathrm{PNC}}}^{\rm{cw}}=\varphi_{_{\mathrm{PNC}}}^{\rm{ccw}}$. \\ \indent Finally, anisotropies such as imperfections of transmission optics, thermal or stress-induced birefringences, and stray magnetic fields, can be described as linear birefringent optical elements. The Jones matrix for a general linear wave-retarder, which introduces a differential phase shift $\delta^{\prime}$, and whose ``fast axis'' is oriented at an angle $\theta$ with respect to the x-axis, is given by: \begin{equation} \mathbf{J}(\theta,\delta^{\prime})=S(\theta)\times \left(\begin{array}{cc} e^{i\delta^{\prime}/2} & 0 \\0 & e^{-i\delta^{\prime}/2} \end{array}\right) \times S(-\theta), \end{equation} where $S(\theta)$ describes a general SU(2) rotation matrix. Reversing the direction of propagation (in the light frame) reverses the sign of the angle $\theta$, which specifies the orientation of the retardation axes. For the mirror-reflection linear birefringence, we use $\mathbf{J}(\theta,\delta^{\prime})$ for $\theta=0$.
Note that the eigenvectors of $\mathbf{J}(\theta,\delta^{\prime})$ are linear polarization states.\\ \subsection{CW and CCW round-trip matrices} \indent The round-trip Jones matrices for the CW (CCW) propagation are obtained by the ordered multiplication of the Jones matrices representing the optical elements. A convenient starting point for the analysis is the point labeled $S$ in Fig. \ref{fig:PNCexp}, from which the different propagation directions are defined. The round-trip Jones matrices are given by: \begin{equation} \mathbf{J}^{_{\text{CW}}}\!=\!\mathbf{J}_{\rm M_2}(^\delta\!/\!_4)\!\cdot\! \mathbf{J}_{\rm M_3}(^\delta\!/\!_4)\!\cdot\! \mathbf{J}(\varphi_{_\text{PNC}})\!\cdot\! \mathbf{J}(\theta_{\text{F}}) \!\cdot\! \mathbf{J}_{\rm M_4}(^\delta\!/\!_4) \!\cdot\! \mathbf{J}_{\rm M_1}(^\delta\!/\!_4) \label{eq:RTcw} \end{equation} for the CW propagation path, and \begin{equation} \mathbf{J}^{_{\text{CCW}}}\!=\!\mathbf{J}_{\rm M_2}(^\delta\!/\!_4)\!\cdot\! \mathbf{J}_{\rm M_3}(^\delta\!/\!_4)\!\cdot\! \mathbf{J}(-\theta_{_\text{F}})\!\cdot\! \mathbf{J}(\varphi_{_\text{PNC}}) \!\cdot\! \mathbf{J}_{\rm M_4}(^\delta\!/\!_4) \!\cdot\! \mathbf{J}_{\rm M_1}(^\delta\!/\!_4) \label{eq:RTccw} \end{equation} for the CCW propagation path. Here, we define $\delta$ as the total single-pass linear birefringence. Note that reversing the order of the individual operators and changing the sign of each Faraday rotation angle for the CW (CCW) path produces the CCW (CW) path (if an additional linear birefringent element is present, then the sign of its respective orientation angle should also be reversed so as to obtain the CCW propagation matrix).\\ \indent The Jones matrices for the Faraday rotation and the PNC rotation commute, a property that reflects the fact that rotations about the same axis are additive ($\mathbf{J}(\varphi_{_\text{PNC}})\!\cdot\! \mathbf{J}(\theta_{\text{F}})=\mathbf{J}(\varphi_{_\text{PNC}}+\theta_{\text{F}})$).
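The additivity of coaxial rotations, together with the sign reversal of the Faraday term between the two propagation directions, can be checked numerically; a minimal sketch using bare $2\times2$ complex matrices (the angle values are arbitrary test inputs):

```python
import cmath
import math

def matmul(A, B):
    """Product of two 2x2 (complex) matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rot(a):
    """SU(2) rotation matrix: the form of both J_F (Faraday) and J_PNC."""
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

def retarder(d):
    """Linear birefringence J(0, d): fast axis along x (theta = 0)."""
    return [[cmath.exp(1j * d / 2), 0], [0, cmath.exp(-1j * d / 2)]]

theta_F, phi_PNC, delta = 0.02, 1e-6, 1e-3   # arbitrary test values (rad)

# Rotations about the same axis are additive: J(phi).J(theta) = J(phi+theta).
lhs = matmul(rot(phi_PNC), rot(theta_F))
rhs = rot(phi_PNC + theta_F)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# Round trips with R = 1: the Faraday angle reverses sign for CCW propagation,
# so the net single-pass rotations are theta_F + phi_PNC (CW) and
# -theta_F + phi_PNC (CCW), sandwiched between the mirror birefringences.
J_cw = matmul(retarder(delta / 2),
              matmul(rot(theta_F + phi_PNC), retarder(delta / 2)))
J_ccw = matmul(retarder(delta / 2),
               matmul(rot(-theta_F + phi_PNC), retarder(delta / 2)))
```

The two products reproduce the compact forms used in the text, with net rotations of opposite Faraday sign for the two directions.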
Therefore, the total single-pass optical rotation is different for the CW and CCW counterpropagating beams: \begin{equation}\alpha_{_{\text{CW}}}\!=\!\theta_{_{\text{F}}}+\varphi_{_{\mathrm{PNC}}}\quad \text{and} \quad \alpha_{_{\text{CCW}}}\!=-\!\theta_{_{\text{F}}}+\varphi_{_{\mathrm{PNC}}}. \label{eq:AlphaCWCCW} \end{equation} This directional symmetry breaking is key for distinguishing the PNC- and Faraday-type optical rotations and thus for the sensitive measurement of the PNC optical rotation angle.\\ \indent Re-writing Eq.\,\eqref{eq:RTcw} and Eq.\,\eqref{eq:RTccw} in a compact form, we have: \begin{align} \mathbf{J}^{_{\text{CW}}}& = R^2\!\cdot\! \mathbf{J}(0,\delta/2)\!\cdot\! \mathbf{J}(\alpha_{_\text{CW}})\!\cdot\! \mathbf{J}(0,\delta/2), \label{eq:Jcw} \\ \mathbf{J}^{_{\text{CCW}}}& = R^2\!\cdot\! \mathbf{J}(0,\delta/2) \!\cdot\! \mathbf{J}(\alpha_{_\text{CCW}})\!\cdot\! \mathbf{J}(0,\delta/2), \label{eq:Jccw} \end{align} where we omit the mirror index under the assumption that all four mirrors have the same reflectivity and linear birefringence.\\ \subsection{Frequencies and polarizations of cavity spectrum} \indent The allowed polarizations of the cavity modes, along with their respective frequencies, are determined by the anisotropies of the cavity. Using the explicit form of the transfer matrices for CW and CCW propagation (Eqs.\,\eqref{eq:Jcw} and \eqref{eq:Jccw}) we can obtain the eigensystem for both directions as a function of the anisotropy parameters ($\theta_{_\text{F}}$, $\varphi_{_\text{PNC}}$, and $\delta$)\,\cite{Byer}. For the following discussion, we set $R=1$, as we are interested only in the properties of the frequency spectrum of the optical resonator.\\ \indent The matrices $\mathbf{J}^{_{\text{CW}}}$ and $\mathbf{J}^{_{\text{CCW}}}$ are $2\times2$ unitary matrices.
Therefore, each matrix has two eigenvalues and two eigenvectors; the eigenvectors $\nu_{\pm}$ are generally complex, orthogonal vectors, and represent the eigenpolarizations of each cavity mode. The eigenvalues can be written in the form $\lambda_{\pm}=e^{\pm i\Phi}$. The phase of each eigenvalue is the round-trip optical phase shift obtained during light propagation, and therefore yields the frequency splittings of the eigenmodes. \\ \indent In the simple case of an isotropic cavity ($\alpha=0$ and $\delta=0$), the four eigenmodes are degenerate and any polarization state can couple into the cavity ($\mathbf{J}^{_{\text{CW}}}$ and $\mathbf{J}^{_{\text{CCW}}}$ become proportional to the identity matrix for $\alpha=0$ and $\delta=0$). The introduction of anisotropies lifts this four-fold degeneracy. Therefore, in the most general case, the spectrum of the cavity is represented by four non-degenerate modes of elliptical polarization, whose frequencies lie above and below the degenerate frequency of the isotropic case. We examine three cases.\\ \indent i) \emph{$\theta_{_\text{F}}\neq0$, $\varphi_{_{\mathrm{PNC}}}=0$, and $\delta=0$ : } The Jones matrices for CW and CCW become: \begin{equation} \mathbf{J}^{_{\text{CW}}}=\mathbf{J}_{_{\text{F}}}(\theta_{_{\text{F}}}) \quad\text{and}\quad \mathbf{J}^{_{\text{CCW}}}=\mathbf{J}_{_{\text{F}}}(-\theta_{_{\text{F}}}). \end{equation} \indent It is easy to verify that the allowed eigenpolarizations of a rotation matrix are circular polarization states. Therefore, in the presence of single-pass Faraday rotation $\theta_{_\text{F}}$, the spectrum splits into right circular (RCP) and left circular (LCP) polarization modes; the frequency splitting is equal to $2\omega_{_\text{F}}=2\theta_{_\text{F}}(c/L)$, where $c$ is the speed of light and $L$ is the round-trip cavity length.
The non-reciprocal nature of the Faraday effect, embedded in the change of sign of the Faraday rotation when the direction of propagation is reversed, is directly reflected in the frequency spectrum of the cavity. The $R_{_{\text{CW}}}$ mode is degenerate with the $L_{_{\text{CCW}}}$ mode, while the $R_{_{\text{CCW}}}$ mode is degenerate with the $L_{_{\text{CW}}}$ mode (see Fig. \ref{fig:PNCexp}(b), (i)).\\ \indent ii) \emph{$\theta_{_\text{F}}, \varphi_{_{\mathrm{PNC}}}\neq0$, and $\delta=0$ : } For single-pass rotations $\varphi_{_\text{PNC}}$ and $\theta_{_\text{F}}$, and in the absence of any linear birefringence ($\delta=0$), the round-trip matrices for CW and CCW correspond to rotation matrices with arguments $\alpha_{_{\text{CW}}}$ and $\alpha_{_{\text{CCW}}}$ (Eq. \eqref{eq:AlphaCWCCW}): \begin{align} \mathbf{J}^{_{\text{CW}}}=\mathbf{J}(\alpha_{_{\text{CW}}}) \quad\text{and}\quad \mathbf{J}^{_{\text{CCW}}}=\mathbf{J}(\alpha_{_{\text{CCW}}}). \end{align} The eigenpolarizations remain circular polarization states for both propagation directions, since the transfer matrices are simply rotation matrices. Their respective eigenvalues are $\lambda^{\pm}_{_\text{CW}}=e^{\pm i\alpha_{_{\rm{cw}}}}$ and $\lambda^{\pm}_{_\text{CCW}}=e^{\pm i\alpha_{_{\text{ccw}}}}$. The difference in rotation (Eq. \eqref{eq:AlphaCWCCW}) results in splitting the CW and CCW modes by $2\omega_{_\text{PNC}}=2\varphi_{_\text{PNC}}(c/L)$, yielding the four-mode structure depicted in Fig. \ref{fig:PNCexp}(b), case (ii).\\ \indent iii) \emph{$\theta_{_{\text{F}}}$,\, $\varphi_{_{\mathrm{PNC}}}$, and $\delta \neq 0$ : } Linear birefringence prevents the enhancement of circular birefringence through the transformation of a linearly polarized beam into a circular one. If, however, a large circular birefringence is induced, then the effects of linear birefringence will be averaged out\,\cite{Bougas,Vaccaro}.
Using the general form of the CW and CCW matrices (Eq.\,\eqref{eq:Jcw} and \eqref{eq:Jccw}) we demonstrate how the extraction of $\varphi_{_{\mathrm{PNC}}}$ is affected in the presence of $\delta$. Expanding Eq. \eqref{eq:Jcw} and \eqref{eq:Jccw}, we get: \begin{align} \mathbf{J}^{_{\text{CW}}}&=\left(\begin{array}{cc} e^{\frac{i\delta}{2}}\cos(\theta_{_{\text{F}}}\!+\varphi_{_{\mathrm{PNC}}})& -\sin(\theta_{_{\text{F}}}\!+\varphi_{_{\mathrm{PNC}}}) \\ \sin(\theta_{_{\text{F}}}\!+\varphi_{_{\mathrm{PNC}}}) &e^{-\frac{i\delta}{2}} \cos(\theta_{_{\text{F}}}\!+\varphi_{_{\mathrm{PNC}}}) \end{array}\right)\\ \mathbf{J}^{_{\text{CCW}}}&=\left(\begin{array}{cc} e^{\frac{i\delta}{2}}\cos(\theta_{_{\text{F}}}\!-\varphi_{_{\mathrm{PNC}}})& \sin(\theta_{_{\text{F}}}\!-\varphi_{_{\mathrm{PNC}}}) \\ -\sin(\theta_{_{\text{F}}}\!-\varphi_{_{\mathrm{PNC}}}) &e^{-\frac{i\delta}{2}} \cos(\theta_{_{\text{F}}}\!-\varphi_{_{\mathrm{PNC}}}) \end{array}\right). \end{align} The eigenvalues and eigenvectors are: \begin{align} \lambda^{\pm}_{_\text{cw}}&=\cos\alpha_{_{\text{cw}}}\cos\frac{\delta}{2} \mp i\sqrt{1-\cos^2\alpha_{_{\text{cw}}} \cos^2\frac{\delta}{2} }\nonumber\\ \quad & \nonumber \\ \nu^{\pm}_{_\text{cw}}\!&\propto\! \left(\! \begin{array}{ccc} \csc\alpha_{_{\text{cw}}}\left(\!\cos\alpha_{_{\text{cw}}} \sin\frac{\delta}{2}\! \mp \! \sqrt{1-\cos^2\alpha_{_{\text{cw}}} \cos^2\frac{\delta}{2}\!}\right) \\ \quad \\ -i \\ \end{array}\! \right)\!. \label{eq:eigensystemCW} \end{align} for the CW transfer matrix, and \begin{align} \lambda^{\pm}_{_\text{ccw}}&=\cos\alpha_{_{\text{ccw}}}\cos\frac{\delta}{2} \mp i\sqrt{1-\cos^2\alpha_{_{\text{ccw}}} \cos^2\frac{\delta}{2} }\nonumber \\ \quad & \nonumber \\ \nu^{\pm}_{_\text{ccw}}\!&\propto\!\left(\! \begin{array}{ccc}\! \csc\alpha_{_{\text{ccw}}}\!\left(\!\cos\alpha_{_{\text{ccw}}} \sin\frac{\delta}{2}\! \mp \! \sqrt{1-\cos^2\alpha_{_{\text{ccw}}} \cos^2\frac{\delta}{2}\!}\right) \\ \quad \\ i \\ \end{array}\! \right)\!. 
\label{eq:eigensystemCCW} \end{align} for the CCW transfer matrix. We see that in the most general case the polarization eigenstates, for both the CW and CCW modes, are represented by orthogonal ellipses and their frequency splitting is proportional to $\Gamma=\cos^{-1}[ \cos\alpha\cos(\delta/2)]$. \\ \indent Linear birefringence $\delta$ prevents the effective amplification of circular birefringence $\alpha$ by transforming the cavity modes into elliptical polarization states. Therefore, the measurement of $\varphi_{\text{PNC}}$ in the presence of linear birefringence will be reduced to: $\varphi^{\prime}_{\text{PNC}}=q\,\varphi_{\text{PNC}}$, where $q$ ($0\leqslant q\leqslant1$) is the reduction factor. From Eqs.\,\eqref{eq:eigensystemCW} and \eqref{eq:eigensystemCCW}, we obtain the form of the reduction factor for $\varphi_{_\text{PNC}}\ll1$: \begin{equation}\label{eq:QFactor} q=\frac{\Gamma-\Gamma|_{\varphi_{_{\mathrm{PNC}}}=0}}{\varphi_{_{\mathrm{PNC}}}}=\frac{\cos(\delta/2)\sin\theta_{_\text{F}}}{\sqrt{1-\cos^2(\delta/2)\cos^2\theta_{_\text{F}}}}+\mathcal{O}(\varphi_{_{\mathrm{PNC}}}). \end{equation} \indent In Fig.\,\ref{fig:QFactor}, we investigate the effect of the linear birefringence $\delta$ as a function of the ratio of the total linear birefringence anisotropy over the total circular birefringence, $\delta/\alpha$. The introduction of this extra anisotropy ($\delta$) will increase the frequency splitting of the modes for each sense of propagation. This effective increase in frequency exactly mirrors the simultaneous decrease of the PNC-induced splitting, i.e.~$2\Gamma/(2\Gamma|_{\delta=0})\equiv1/q$ (Fig. \ref{fig:QFactor}).
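Equation\,\eqref{eq:QFactor} can be checked directly against the round-trip phase $\Gamma=\cos^{-1}[\cos\alpha\cos(\delta/2)]$; a brief numerical sketch (the angle values are arbitrary):

```python
import math

def Gamma(alpha, delta):
    """Round-trip eigenmode phase: Gamma = arccos[cos(alpha) cos(delta/2)]."""
    return math.acos(math.cos(alpha) * math.cos(delta / 2))

theta_F, delta, phi_PNC = 0.1, 0.05, 1e-8    # arbitrary test values (rad)

# Numerical reduction factor: shift of Gamma per unit of phi_PNC.
q_num = (Gamma(theta_F + phi_PNC, delta) - Gamma(theta_F, delta)) / phi_PNC

# Closed form of Eq. (QFactor).
q_ana = (math.cos(delta / 2) * math.sin(theta_F)
         / math.sqrt(1 - math.cos(delta / 2) ** 2 * math.cos(theta_F) ** 2))

assert abs(q_num - q_ana) < 1e-5
# With no linear birefringence (delta = 0) there is no reduction: q -> 1.
assert abs(Gamma(theta_F + phi_PNC, 0.0) - Gamma(theta_F, 0.0) - phi_PNC) < 1e-12
```

For these inputs $q\approx0.97$; the reduction becomes severe only when $\delta$ grows comparable to $\alpha$.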
Fig.\,\ref{fig:PNCexp}\,(b) case (iii), and Fig.\,\ref{fig:QFactor}, show simulations based on Eq.\,\eqref{eq:eigensystemCW} and Eq.\,\eqref{eq:eigensystemCCW}, which demonstrate how the presence of linear birefringence prohibits the enhancement of circular birefringence, as the PNC-induced mode splitting vanishes for large $\delta/\alpha$. Note that the cavity's eigenpolarization modes become more elliptical with increasing $\delta$; the input beams are linearly polarized, and therefore the induced ellipticity is reflected in the different intensity amplitudes of the cavity (output) modes. Therefore, to ensure $q\cong1$, one must satisfy $\alpha\gg\delta$ (see also the relevant discussion in Ref.\,\cite{Bougas}).\\ \subsection{Principles of the Measurement} \indent The principles of the measurement have been described previously in Ref.\,\cite{Bougas} and are briefly discussed here. For the following discussion we assume a bow-tie four-mirror cavity with round-trip cavity length $L=7.5$\,m.\\ \indent A laser beam is split into two beams of equal intensity and orthogonal linear polarizations. The $s$-polarized laser beam is frequency-locked to the R$_{_{\text{CW}}}$ mode using the Pound-Drever-Hall (PDH) scheme \cite{PDH}. Note that alternative locking schemes have also demonstrated shot-noise-limited phase-shift measurements (see Ref.\,\cite{Durand} and references therein). The PNC-related mode splitting is equal to $2\omega_{_{\text{PNC}}}=2\varphi_{_{\mathrm{PNC}}} c/L$. For the different values of $\mathcal{R}$ presented in Table\,\ref{t:pnc}, we get $\omega^{\rm max}_{_{\text{PNC}}}\sim150$\,mHz--15\,Hz. The PNC-induced mode splitting is much smaller than the cavity linewidth $\Delta\omega_{\rm cav.}$ (for $L=7.5$\,m and $\mathcal{F}\sim1.5\times10^4$, $\Delta\omega_{\rm cav.}=2\pi\times2.5$\,kHz). Therefore, the $p$-polarized laser beam excites the nearly degenerate L$_{_{\text{CCW}}}$ mode (see Fig.\,\ref{fig:PNCexp}\,(b)).
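The scale hierarchy invoked here can be verified with a few lines (variable names ours), taking the upper estimate $\omega_{_{\text{PNC}}}/2\pi\sim15$\,Hz quoted above:

```python
c = 2.998e8                  # speed of light, m/s
L = 7.5                      # round-trip cavity length, m
finesse = 1.5e4

fsr = c / L                  # free spectral range: ~40 MHz
linewidth = fsr / finesse    # cavity linewidth (FWHM): ~2.7 kHz
f_pnc_max = 15.0             # upper estimate of the PNC splitting, Hz

# The PNC splitting is orders of magnitude below the cavity linewidth,
# so the p-polarized beam indeed excites the nearly degenerate mode.
print(fsr / 1e6, linewidth, f_pnc_max / linewidth)
```

The computed linewidth, $\approx2.7$\,kHz, agrees with the quoted $2\pi\times2.5$\,kHz to within the rounding of the finesse.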
The spatial recombination of the R$_{_{\text{CW}}}$ and L$_{_{\text{CCW}}}$ output beams produces a linearly polarized beam rotated by $N\varphi_{_{\mathrm{PNC}}}$, where $N$ is the average number of round-trip cavity passes. The rotation angle $N\varphi_{_{\mathrm{PNC}}}$ will be measured with a balanced polarimeter. Note that the spatial recombination of the two output beams is expected to be a source of depolarization, for which the signal needs to be corrected. Therefore, we propose the use of two separate balanced polarimeters, implementing rotating half-wave and quarter-wave plates respectively, yielding the complete set of Stokes parameters of the output recombined light (see Ref.\,\cite{Berry}). \\ \indent Observe that if the CW and CCW beams are brought into resonance with the R$_{_{\text{CCW}}}$-L$_{_{\text{CW}}}$ mode pair, the recombination of the exit beams will now give a signal of $-N\varphi_{_{\mathrm{PNC}}}$, yielding thus a net difference in polarization rotation of $2N\varphi_{_{\mathrm{PNC}}}$. This is accomplished through the use of two signal reversals. First, the frequency of the laser can be brought into resonance with the R$_{_{\text{CCW}}}$-L$_{_{\text{CW}}}$ mode pair with the use of an acousto-optic modulator (AOM). Second, reversing the magnetic field is equivalent to the interchange of the CW and CCW beams, and thus the laser will couple to the R$_{_{\text{CCW}}}$-L$_{_{\text{CW}}}$ mode pair. These two novel signal reversals allow for the absolute measurement of the PNC optical rotation, avoiding the need for cell removal, as was required in previous PNC optical rotation experiments. Additionally, these reversals can be performed at high frequencies of $\sim1$\,kHz, allowing for effective subtraction of experimental drifts. Note that the frequency of the reversals is constrained by the photon lifetime inside the cavity (for $R=R_1R_2R_3R_4=0.9999^4$, $\tau_{\rm{photon}}=L/(c|{\rm ln}R|)\sim63\,\mu\rm{sec}$).
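The photon-lifetime constraint can be made concrete with a short numerical sketch (variable names ours):

```python
import math

c = 2.998e8                          # speed of light, m/s
L = 7.5                              # round-trip cavity length, m
R = 0.9999**4                        # product of the four mirror reflectivities

tau = L / (c * abs(math.log(R)))     # photon lifetime: ~63 microseconds
f_rev = 1e3                          # proposed reversal frequency, Hz

# Each reversal period spans many photon lifetimes, so the intracavity
# field fully re-equilibrates between reversals.
print(tau * 1e6, (1.0 / f_rev) / tau)
```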
\\ \indent In the previous subsections, we saw that a linear birefringence can suppress the enhancement of the PNC optical rotation. In general, linear birefringences, originating from mirror-reflection phase shifts and from thermal and/or stress-induced effects, are expected to be $\sim\!10^{-3}$\,rad (per single pass, per reflection or transmission). Inducing a large circular birefringence protects the coherent accumulation of the PNC optical rotation inside the cavity. The circular birefringence can be induced using the Faraday effect of the proposed transitions themselves. Theoretical calculations for the Faraday effect on the $M1$ transitions under consideration, and the proposed column densities, yield $\theta_{_{\text{F}}}\!\sim\!10^{-3}$ rad for a $200$\,G magnetic field\,\cite{Sand,Vet}. An alternative is the use of an anti-reflection (AR) coated high-Verdet glass window inside the cavity, for example a dense flint glass. A Terbium Gallium Garnet (TGG) crystal has a Verdet constant of V$\sim\,$45\,$\mu$rad\,G$^{-1}$\,cm$^{-1}$ with losses of $\sim10^{-4}$\,mm$^{-1}$ at 1064\,nm. For a 1\,mm crystal thickness and magnetic fields of 3000\,G one obtains $\theta_{_\text{F}}\!\sim\!13.5$\,mrad, ensuring that $\alpha\gtrsim10\delta$, for which the depolarization factor $q\gtrsim0.9993$ (see also discussion in Ref.\,\cite{Bougas}). Finally, note that in the case of large linear birefringences, a compensator (for example, a thin MgF$_2$ window) with a high-quality antireflection coating can be placed appropriately to reduce the cavity's total linear birefringence, and therefore to satisfy the condition $\alpha\gtrsim10\delta$. \\ \indent As a final remark, note that metastable Xe and Hg are produced in a discharge lamp or via optical pumping, or, in the case of I, from molecular photodissociation, and can thus be ``switched'' on and off.
This gives us an additional subtraction procedure which allows for the real-time investigation of the ``empty'' cavity and thus of possible experimental errors.\\ \section{Theoretical Simulations}\label{sec:IV} \subsection{Optical Absorption Length} Upon exiting the cavity, the recombined laser beams will be analyzed by a balanced polarimeter. The detected signals will be of the form: \begin{equation}\label{eq:signal} S=2 N\varphi_{_{\mathrm{PNC}}}(\omega) \times T(\omega), \end{equation} where $N$ is the average number of round-trip cavity passes, $\varphi_{_{\mathrm{PNC}}}(\omega)$ denotes the dispersive line-shape of the PNC optical rotation, and $T(\omega)$ is the transmission of the light beam through the vapor, which is governed by the Beer-Lambert law\,\cite{Sob}: \begin{equation}\label{eq:Tr} T(\omega) = \frac{I(\omega)}{I_{\rm o}}= e^{-A(\omega)}\equiv e^{-\rho\sigma(\omega) l}. \end{equation} Here, $A(\omega)$ is defined as the absorptivity in terms of the interaction path-length $l$, the number density of the atoms $\rho$, and the absorption cross section $\sigma(\omega)$, which is a function of the optical frequency. $I_{\rm o}$ is the intensity of the incident laser beam. \\ \indent The absorption cross section, $\sigma$, is given by the expression:\\ \begin{equation}\label{eq:sigma} \sigma(\omega) = \sigma_{\rm o}\sum_i\sum_{F,F'} b_i\:C_{FF'} \mathcal{V}''_{FF',i}(\omega), \end{equation} where $b_i$ is the abundance of isotope $i$ (see Appendix \ref{app:n}), the $C_{FF'}$ are geometry factors (Eq.\,\eqref{eq:CFFQ}) and $\mathcal{V}''_{FF'}$ is the absorptive part of the Voigt profile (given in Eq.\,\eqref{eq:voigtA}). In the equation above, the \emph{integrated absorption cross section}, $\sigma_{\rm o}$, is: \begin{equation}\label{eq:sigmao} \sigma_{\rm o} = \frac{\pi\mu_{\rm o}\omega_{JJ'}}{\hbar~c}\: \frac{1}{2 J+1}\: \frac{M1^2}{3}.
\end{equation} \noindent Note that $\sum_i\sum_{F,F'} b_i\:C_{FF'} = 1$ and, since $\int_{0}^{\infty}\mathcal{V}''(\omega) d\omega = 1$, we get $\int_{0}^{\infty}\sigma(\omega) d\omega = \sigma_{\rm o}$, hence $\sigma_{\rm o}$ justifies its name (note, however, that $\sigma_{\rm o}$ has units of area times frequency, rather than of area).\\ \indent The extremely long effective path-lengths that can be realized in stable high-finesse optical cavities lead to large effective resonant absorption lengths. In Fig.\,\ref{fig:SvsOD} we present calculations for the maximum PNC optical rotation signal expected, as a function of the resonant absorption optical length ($l_0$). The PNC optical rotation signal is proportional to the product $\varphi_{_{\mathrm{PNC}}}(\omega)\times T(\omega)$ (Eq.\,\eqref{eq:signal}), i.e. proportional to the product of a dispersive line-shape profile and an absorption line-shape profile. For resonant optical depths $l_0\ll1$, the maximum PNC optical rotation angle increases linearly with increasing column density, i.e. $\varphi_{_{\mathrm{PNC}}}^{\max} \propto \rho l$ (where $\rho$ is the density and $\rho l$ is the column density of the vapor), as seen in the first inset of Fig.\,\ref{fig:SvsOD}. For optical depths $l_0\gg1$, the vapor is optically thick near the line center, where $\varphi_{_{\mathrm{PNC}}}$ is largest, and the rotation there can no longer be observed. The effective maximal rotation angle is shifted further off resonance as $\sqrt{\rho l}$, and $\varphi_{_{\mathrm{PNC}}}^{\max } \propto \sqrt{\rho l}$, as can be shown by maximizing the product of dispersion and transmission. Therefore, the rotation angle can still be increased with increasing column density for $l_0\gg1$, only at a rate slower than linear (see second inset of Fig.\,\ref{fig:SvsOD}).
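The crossover from linear to square-root scaling can be reproduced with a toy model; the sketch below (our construction, using a Lorentzian absorption profile and its dispersive counterpart rather than the full Voigt profiles) maximizes the product of dispersion and transmission numerically:

```python
import numpy as np

def peak_signal(od):
    """Peak of dispersion x transmission for a model Lorentzian line of
    resonant optical depth `od`; x is the detuning in units of half-widths."""
    x = np.linspace(0.01, 200.0, 400001)
    dispersion = od * x / (1.0 + x**2)   # dispersive (Kramers-Kronig) partner
    absorption = od / (1.0 + x**2)       # of the Lorentzian absorption profile
    return np.max(dispersion * np.exp(-absorption))

# Optically thin: doubling the column density doubles the peak (linear).
thin = peak_signal(0.02) / peak_signal(0.01)
# Optically thick: a fourfold density increase only doubles the peak (sqrt).
thick = peak_signal(400.0) / peak_signal(100.0)
print(thin, thick)   # both ratios come out close to 2
```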
For example, in the case of Tl and Pb, vapor densities producing values of $A(\omega)\sim10$--60 absorption lengths at the line center of the $M1$ transitions were realized, thus obtaining PNC rotation angles of about $10^{-6}$\,rad at the dispersion peaks.\\ \begin{figure} \centering \includegraphics[width=\linewidth]{./SvsOD.pdf} \caption{\small{The PNC optical rotation signal is proportional to the product $\varphi_{_{\mathrm{PNC}}}(\omega)\times T(\omega)$ (Eq.\,\eqref{eq:signal}). Assuming Voigt line shape profiles, the maximum PNC rotation signal is plotted as a function of the resonant optical depth (OD). We demonstrate that the signal scales linearly with OD when the vapor is optically thin, and continues to increase with a square-root dependence as the vapor becomes thicker. The $y$-axis is given in units of $\mu$rad, and we assumed $\mathcal{R}=14\times10^{-8}$.}} \label{fig:SvsOD} \end{figure} \subsection{PNC Optical Rotation Simulations}\label{subsec:Sims} \indent In this section we present theoretical simulations of the PNC optical rotation signals for the proposed transitions in Xe, Hg and I, where we explore a range of experimentally feasible parameters. We assume a four-mirror bow-tie cavity of round-trip cavity length $L=7.5$\,m (free spectral range FSR\,$=\,40$\,MHz), each mirror having a reflectivity of $R=99.99$\% (enhancement factor $N$\,$\sim$\,$10^4$), and a gas-cell (lamp) path-length of $l=1.5$\,m. We present the enhanced PNC optical rotation $2N\varphi_{_{\mathrm{PNC}}}(\omega)$, multiplied by the transmission, which depends on the absorptivity of the specified transition through the atomic medium (Sec.\,\ref{sec:IV}\,A).\\ \begin{figure}[h!] \begin{center} \includegraphics[angle=0, width=0.95\linewidth]{HgLB.pdf}\\[-1.4ex] \end{center} \caption{\small{(color online).
Theoretical simulations of the PNC optical rotation signal in Hg vs optical frequency for two cases. (a) Simulations for the three proposed transitions (transition wavelengths $\lambda=609$, 682, 997\,nm) assuming a discharge lamp filled with isotopically pure $^{202}$Hg. For all the initial states, $^3P_0$, $^3P_1$ and $^3P_2$, we use densities $\rho=5\times10^{12}$\,cm$^{-3}$. In addition, identical Lorentz line-widths $\Gamma_{\rm L}=2\pi\times100$\,MHz for all transitions are used, while the Doppler line-widths for the 609, 682, and 997\,nm transitions are $\sim2\pi\times$\,183, 163, and 112\,MHz respectively. The (red) points in the $^3P_2\rightarrow{}^1P_1$ transition are separated by one FSR ($2\pi\times40$\,MHz), and the inset shows the reversal mechanism, which allows alternation between the different polarization mode pairs, yielding a net polarization difference of $2N\varphi_{_{\mathrm{PNC}}}$. (b) Isotopically pure odd-isotope $^{199}$Hg with $\rho=5\times10^{12}$\,cm$^{-3}$. The inset shows the full hyperfine structure of the transition. The effect of the nuclear anapole moment is presented, setting $\varkappa=1$ to yield visibly large signal differences. See text for a detailed discussion.}} \label{fig:Hg} \end{figure} \indent {\bf Hg}: In Fig.\,\ref{fig:Hg} we present the theoretical PNC optical rotation simulations for the proposed transitions in Hg (using the values for $\mathcal{R}$ from Ref.\,\cite{DzuFlaXeHg} as presented in Table\,\ref{t:pnc}). In Fig.\,\ref{fig:Hg} (a), we assume equal densities $\rho=5\times10^{12}$\,cm$^{-3}$ of pure $^{202}$Hg for all the initial states of the proposed PNC transitions ($^3P_0$, $^3P_1$ and $^3P_2$), produced in a discharge lamp (or using an optical pumping scheme).
The line-shape is a Voigt profile, with a Doppler contribution to the line-width of $\Gamma_{\rm D}\simeq2\pi\times$\,267, 238, and 163\,MHz for the 609, 682, and 997\,nm transitions respectively (see Eq.\,\ref{eq:Doppler} for $\sim320$\,K). The Lorentzian contribution for all three lines was taken to be $\Gamma_{\rm L}=2\pi\times100$\,MHz. This assumption is based on the fact that in a low-pressure discharge lamp ($<10$\,mTorr), the pressure-broadening mechanisms are negligible compared to other homogeneous broadening mechanisms\,\cite{Lawler}. Therefore, the main contributions come from radiative processes. Lines originating from the $^3P_J$ states have Lorentz line-widths on the order of 20\,MHz, and lines originating from the $^1P_1$ state on the order of 100\,MHz\,\cite{Lawler}. Assuming an effective path-length of $150\times10^4$\,cm, we get column densities that correspond to 12, 3 and 3 absorption lengths for the 609, 682, and 997\,nm transitions, respectively. \\ \indent In Fig.\,\ref{fig:Hg}\,(b) we examine the nuclear spin-dependent PNC effects for the 682\,nm transition in $^{199}$Hg (nuclear spin $I=1/2$). Using the values calculated by Dzuba and Flambaum in Ref.\,\cite{DzuFlaXeHg} for the PNC amplitudes between different hyperfine components, and by setting $\varkappa=1$, we see that the peak signals differ by about $+5.4$\% and $-8.6$\%, resulting in total signal differences of up to $\sim14$\%. The actual value of $\varkappa$ can be estimated using Eq.\,\eqref{eq:varkappa} and Eq.\,\eqref{eq:anapole} to be $\sim0.1$ for the Hg nucleus. Therefore, an experimental precision of at least 0.25\% is necessary to measure the NSD-PNC effects at the 6$\sigma$ level in the 682\,nm transition of $^{199}$Hg.
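The quoted Doppler widths are reproduced by the $1/e$ half-width convention $\Gamma_{\rm D}=(\omega_0/c)\sqrt{2k_BT/m}$, which we assume is the convention of Eq.\,\ref{eq:Doppler}; a quick cross-check (function name ours):

```python
import math

kB = 1.380649e-23     # Boltzmann constant, J/K
c = 2.998e8           # speed of light, m/s
amu = 1.66054e-27     # atomic mass unit, kg

def doppler_mhz(lambda_nm, mass_amu, temp_k):
    """1/e Doppler half-width Gamma_D/(2*pi) in MHz."""
    nu0 = c / (lambda_nm * 1e-9)
    return nu0 * math.sqrt(2 * kB * temp_k / (mass_amu * amu)) / c / 1e6

# 202Hg at ~320 K: reproduces the quoted 267, 238, 163 MHz
for lam in (609.0, 682.0, 997.0):
    print(lam, round(doppler_mhz(lam, 202.0, 320.0)))
```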
Note that, similarly to the case of I\,\cite{PNCI}, the PNC signals for the two hyperfine groups, $F\,=\,1/2\rightarrow F^{\prime}\,=\,1/2$ and $F\,=\,1/2\rightarrow F^{\prime}\,=\,3/2$, deviate in opposite directions, a signature that serves as an important experimental check.\\ \begin{figure}[h!] \begin{center} \includegraphics[angle=0, width=0.95\linewidth]{Xe.pdf} \end{center} \caption{\small{(color online). Theoretical prediction of the PNC optical rotation signal for metastable Xe vs optical frequency. (a) For a discharge lamp filled with isotopically pure metastable $^{136}$Xe, of density corresponding to 12 absorption lengths. (b) The (red) points in the $^3P_2\rightarrow{}^1P_1$ transition are separated by one FSR ($2\pi\times40$\,MHz). See text for a detailed discussion. All values taken from Ref.\,\cite{DzuFlaXeHg}. }} \label{fig:Xe} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[angle=0, width=0.95\linewidth]{iodine_plausibility_diagramm_withSim.pdf} \end{center} \caption{\small{(color online) \emph{(top figure)} Theoretical prediction of the PNC optical rotation signal for the $^{2}$P$_{3/2}\rightarrow ^{2}$P$_{1/2}$ transition in $^{127}$I (we assume a column density of $\rho l=1.75\times10^{21}$\,cm$^{-2}$, and $\Gamma_{\rm L}=2\pi\times 3$\,MHz). \emph{(lower figures)} Calculations of the maximum (peak) PNC optical rotation angle are presented, as a function of the Lorentzian broadening of the line and the average number of passes $N$ (proportional to the finesse of the cavity). The simulations are performed for two different extreme-case densities, $\rho=10^{14}$ and $10^{16}$\,cm$^{-3}$, assuming constant interaction path-length and temperature ($\Gamma_{\rm D}=2\pi\times151$\,MHz). 
The non-smooth features in the simulations result from the fact that the peak rotation is not always associated with the same hyperfine component, but switches between hyperfine components (depicted with black circles in the top figure).}} \label{fig:Iodine} \end{figure} \indent {\bf Xe}: In Fig.\,\ref{fig:Xe} the theoretical simulations for the expected PNC rotation signals for metastable Xe are presented. In the simulations presented in Fig.\,\ref{fig:Xe} (a) we assume densities of $\rho=1\times10^{12}$\,cm$^{-3}$ of $^{136}$Xe, which can be produced in a discharge lamp. The Doppler width is $\Gamma_{\rm D}\simeq2\pi\times192$\,MHz (300\,K) and the Lorentz width $\Gamma_{\rm L}\simeq2\pi\times60$\,MHz, based on preliminary measurements on a low-pressure He-Xe discharge lamp performed in our lab, and on measurements presented in Ref.\,\cite{Busshian}. Assuming $l_{\rm eff}=150\times10^4$\,cm, we calculate column densities that correspond to 12 absorption lengths at the center of the absorption. Fig.\,\ref{fig:Xe}\,(b) shows the PNC optical rotation signal for the case of pure $^{129}$Xe (with nuclear spin $I=1/2$), demonstrating a resolved hyperfine structure. Assuming the same density, Doppler and Lorentz widths as in the simulations for $^{136}$Xe, we obtain column densities that correspond to 7 absorption lengths (at maximum absorption). Similarly to Hg, we set $\varkappa=1$ to see the experimental sensitivity to NSD effects. Using the values from Ref.\,\cite{DzuFlaXeHg}, we see a total signal difference of up to $\sim$\,6.2\%. As the actual value of $\varkappa$ is again expected to be $\sim0.1$ (Xe has an odd-neutron nucleus), an experimental precision of about 0.1\% (6$\sigma$ precision) is required to measure the nuclear anapole moment in Xe.\\ \indent In addition, Hg and Xe have large distributions of stable isotopes ($\Delta N/N=8/120$ and 12/76, respectively).
Ratios of atomic PNC measurements along an isotope chain of the same element can exclude large errors associated with atomic-structure effects\,\cite{Dzuba1986} and are sensitive to variations in the neutron distribution\,\cite{Fortson1990, Brown2009}. \\ \indent {\bf $^{127}$I}: In Ref.\,\cite{PNCI}, investigations of the expected PNC optical rotation signal in the 1315\,nm transition in $^{127}$I were presented. Here we explore further the range of experimental conditions for which a measurable PNC optical rotation signal is achievable. In Fig.\,\ref{fig:Iodine} we present the maximum (peak) PNC optical rotation angle as a function of the Lorentzian broadening of the line and the average number of passes $N$ (proportional to the finesse of the cavity), for two different extreme-case densities, $\rho=10^{14}$ and $10^{16}$\,cm$^{-3}$ (the former is the minimum density needed to produce observable PNC signals and the latter is the largest that can be produced using the photodissociation method\,\cite{PNCI}). Note that the peak optical rotation is not always associated with the same hyperfine component, but switches between hyperfine components depending on the experimental conditions. This peak switching is responsible for the kinks present in the curves of Fig.\,\ref{fig:Iodine}. Finally, we propose the production of these densities from the photodissociation of I$_2$ with 532\,nm radiation (see relevant discussion in Ref.\,\cite{PNCI}).\\ \indent Using the values presented in Fig.\,\ref{fig:Iodine}, we see that for densities of $\rho=10^{16}$\,cm$^{-3}$, a Lorentzian contribution of $\Gamma_{\rm L}=2\pi\times10$\,MHz, and an average number of $N=400$ passes, a peak optical rotation angle $\varphi_{_{\mathrm{PNC}}}^{\rm max}$ of $\sim1\,\mu$rad is expected. Setting $\varkappa=1$, we observe NSD-PNC signal differences of about $\sim8.5$\%.
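The precision requirements quoted for Hg, Xe, and I follow from a simple scaling of the $\varkappa=1$ simulation results by the expected $|\varkappa|$ and the desired number of standard deviations; a quick check (function name ours; $|\varkappa|\approx0.1$ for Hg and Xe as stated above, and the Cs-based estimate $|\varkappa|\approx0.38$ for $^{127}$I):

```python
def required_precision(diff_at_kappa1, kappa_expected, n_sigma=6):
    """Fractional measurement precision needed to resolve the NSD-PNC
    signal difference at the n_sigma level."""
    return diff_at_kappa1 * kappa_expected / n_sigma

print(required_precision(0.14, 0.1))      # Hg, 682 nm: ~0.23% (quoted: 0.25%)
print(required_precision(0.062, 0.1))     # Xe: ~0.1%
p_iodine = required_precision(0.085, 0.38)
print(p_iodine, p_iodine * 1e-6)          # 127I: ~0.54%, i.e. ~5 nrad on a ~1 urad peak
```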
Using the previously measured value of $\varkappa$ for Cs\,\cite{Wieman} as the expected value for the anapole moment in iodine ($\varkappa(^{127}{\rm I})~ \simeq ~ -\varkappa(^{133}{\rm Cs}) \simeq ~-0.38(6)$), we see that a measurement of about $\sim$0.5\% sensitivity, corresponding to a 5\,nrad detection sensitivity, is required to measure the NSD-PNC effects in $^{127}$I with a 6$\sigma$ precision (see also discussion in Ref.\,\cite{PNCI}). \\ \section{Conclusions} \indent In this article we presented the fundamental elements of a cavity-enhanced polarimetric measurement of PNC optical rotation. The polarization eigenstates of a four-mirror bow-tie cavity supporting counter-propagating beams were presented. We demonstrated how an absolute measurement of the PNC optical rotation is possible even in the presence of linear birefringence. The measurement procedure and the availability of robust subtraction procedures using two distinct signal reversals were also discussed. Furthermore, theoretical simulations for the expected PNC optical rotation signals, utilizing the cavity-enhanced optical rotation technique under experimentally feasible parameters, were presented. These suggest that, for the proposed systems and experimental conditions, measurements of odd-neutron and odd-proton NSD-PNC effects are experimentally feasible. In addition, all the proposed systems are suitable for PNC measurements along a chain of isotopes, particularly Xe, which has the largest distribution of stable isotopes. Finally, we demonstrate that, particularly for the case of $^{127}$I, large optical rotation signals are expected.
We argue that the proposed experimental conditions, and the corresponding expected signal values and detection sensitivities for the proposed transition in iodine, compare favorably to those of successful PNC optical-rotation experiments\,\cite{Vetter,McPherson,Meekhof1}, suggesting that iodine is the most favorable candidate for future PNC optical rotation experiments, currently pursued in our laboratory.\\ \acknowledgements The authors would like to thank Dr. Ren\'e Bussiahn for providing the discharge lamp, on which preliminary measurements were performed, and for helpful discussions, and V. A. Dzuba and V. V. Flambaum for supporting atomic structure calculations and discussions. LB thanks Annie Clark for fruitful discussions. This work was supported by the European Research Council (ERC) grant TRICEPS (GA No. 207542) and by the National Strategic Reference Framework (NSRF) grant Heracleitus II (MIS 349309-PE1.30), co-financed by the EU (European Social Fund) and Greek national funds.
\subsection{Application to $q$-Gaussian} Let us start with a preliminary result and consider the integral, for $q>1$, $a$ and $b$ in general complex and $\Re(b)>0$, \begin{multline} \int\exp_q(ia z-bz^2)\dd z\\ =\int\exp_q\left[-\left(\sqrt{b}z-\frac{ia}{2\sqrt b}\right)^2-\frac{a^2}{4b}\right]\dd z\\ =\frac{1}{\sqrt b}\int_{\gamma}\exp_q\left[-\zeta^2-\frac{a^2}{4b}\right]\dd\zeta\\ =\frac{\e_q^{-\frac{a^2}{4b}}}{\sqrt b}\int_{\gamma}\exp_q\left[-\zeta^2\left(\e_q^{-\frac{a^2}{4b}}\right)^{q-1}\right]\dd\zeta. \end{multline} Here $\gamma$ is the line $\gamma=\{z\in\mathds{C}\colon z=\sqrt{b}t-\frac{ia}{2\sqrt b},\ t\in\mathds{R}\}$. Observe that the poles of the integrand are at \begin{equation} \zeta_\pm=\pm i\sqrt{\frac{1}{q-1}+\frac{a^2}{4b}}. \end{equation} For $q\to 1$, $\zeta_\pm\to\pm i\infty$. If we consider a $q$-Gaussian distribution, with $q\in\left[1,1+\frac{1}{d}\right)$, \begin{equation} G_q(x)=\sqrt{\frac{\beta}{\pi}}\frac{\sqrt{q-1}\Gamma\left(\frac{1}{q-1}\right)}{\Gamma\left(\frac{3-q}{2q-2}\right)}\e_q^{-\beta x^2}\equiv g_q \e_q^{-\beta x^2}, \end{equation} we have \begin{multline} \hat G_q(k;x)\coloneqq\int G_q(z)\odot_q\e_q^{ik(z-x)}\,dz\\ =g_q\e_q^{-ikx}\int \exp_q\left[i A(x)k z-B(x) z^2\right]\,dz. \end{multline} In the previous expression we have introduced the complex quantities \begin{equation} A(x)=\left(\frac{g_q}{\exp_q(-ikx)}\right)^{q-1},\quad B(x)=\frac{\beta}{\exp^{q-1}_q(-ikx)}. \end{equation} Observe that $\Re(B(x))=\beta>0$.
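The pole location can be verified numerically; a minimal sketch (sample values ours), using the representation $\exp_q(w)=[1+(1-q)w]^{1/(1-q)}$, which is singular where the base vanishes:

```python
import cmath

def qexp_base(w, q):
    """Base of exp_q(w) = [1 + (1-q)*w]^(1/(1-q)); exp_q is singular
    where this base vanishes."""
    return 1 + (1 - q) * w

q, a, b = 1.5, 1.0, 2.0                     # sample values with Re(b) > 0
zeta_plus = 1j * cmath.sqrt(1/(q - 1) + a**2/(4*b))
w = -(zeta_plus**2 + a**2/(4*b))            # argument of exp_q in the integrand
print(abs(qexp_base(w, q)))                 # vanishes (up to rounding) at the pole
```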
\iffalse \begin{figure*}\centering \subfigure[Real part of $\e_{\phi}^z$.\label{f:qexprph}]{\includegraphics[width=0.30\textwidth]{PlotRphi.pdf}} \subfigure[Imaginary part of $\e_{\phi}^z$.\label{f:qexpiph}]{\includegraphics[width=0.30\textwidth]{PlotIphi.pdf}} \subfigure[Argument $\arg\left(\e_{\phi}^z\right)$.\label{f:qexparph}]{\includegraphics[width=0.30\textwidth]{PlotArgphi.pdf}} \caption{Real part, imaginary part and argument of the $q$-exponential function in Eq.~\eqref{qexp} for $\eta=\phi=\frac{1+\sqrt{5}}{2}$ on the complex plane respect to the variable $z=x+iy$. Observe the presence of the pole at $z_\eta=\phi$. The branch cut in the complex plane is depicted in white.}\label{f:qexpphi} \end{figure*} \begin{figure*}\centering \subfigure[Real part of $\e_{5}^z$.\label{f:qexpr}]{\includegraphics[width=0.3\textwidth]{PlotR.pdf}} \subfigure[Imaginary part of $\e_{5}^z$.\label{f:qexpi}]{\includegraphics[width=0.3\textwidth]{PlotI.pdf}} \subfigure[Argument $\arg\left(\e_{5}^z\right)$.\label{f:qexpar}]{\includegraphics[width=0.3\textwidth]{PlotArg.pdf}} \caption{Real part, imaginary part and argument of the $q$-exponential function in Eq.~\eqref{qexp} for $\eta=5$ on the complex plane respect to the variable $z=x+iy$. Observe the presence of the pole at $z_\eta=5$. The branch cut in the complex plane is depicted in white.}\label{f:qexp} \end{figure*}\fi
\section{\label{sec:level1}Introduction} One of the main goals of thermodynamics has been to study heat engines and thermodynamic processes, dating back to the now famous work of Sadi Carnot in 1824 \cite{Carnot}. With the advent of nanophysics and the control of quantum systems down to single atoms, a better understanding of thermodynamics on the basis of quantum mechanics is necessary. Since the first attempts to analyze thermodynamic machines on the quantum level \cite{Scovil1959, Geusic1967}, considerable progress has been made in the last decades. Different kinds of models have been studied, such as machines built of harmonic oscillators, of uncoupled spins, of particles in a potential, or of different three-level systems \cite{Feldmann2003, Palao2001, Segal2006, Bender2000, Allahverdyan2005}. The question of a possible violation of the second law of thermodynamics in the quantum regime has also come up now and then. For a heat engine such a violation would lead to an efficiency larger than the Carnot efficiency. All attempts to demonstrate such a violation have failed, and the apparent paradoxes could be resolved, e.g., with the help of Maxwell's demon \cite{Kieu2006}. Two-level systems (TLS) like spins or qubits are the essential ingredients for quantum computation \cite{Loss1998}. Much effort has been directed towards the control of small clusters and chains of qubits in quantum optical systems \cite{Cirac2000}, nuclear magnetic resonance \cite{Gershenfeld1997} and solid state systems \cite{Makhlin1999}. A serious problem in any such realization is the interaction of the respective quantum network with its environment. In the present work we study a model consisting of three TLSs arranged in a chain in contact with two baths of different temperatures, as studied for transport scenarios, e.g., in \cite{Saito2000,Michel2003,Michel2004}. Here, an energy gradient on the system and an incoherent driving of the TLS in the middle allow this system to act as a thermodynamic machine. For possible experiments the setup may require more TLSs.
Under special conditions the Carnot efficiency may be reached by a TLS heat engine but can never be exceeded: if the Carnot efficiency is reached, the machine flips its function, e.g., from a heat pump to a heat engine. We start with a discussion of the concepts of work and heat. This is done by considering the change of the energy expectation value of a quantum system. With the help of the Gibbs relation, heat can be associated with the change of the occupation numbers of a quantum system, whereas work is associated with the change of the spectrum. We then introduce our thermodynamic machine consisting of three TLSs \cite{Henrich2006}. Thermodynamic properties can be imparted to this system by an appropriate embedding into a larger quantum environment \cite{GeMiMa2004, Henrich2005, Michel2005}, without the need of any thermal bath. In the present context, though, it is much simpler to settle for the open-system approach based on a quantum master equation (QME). In Sec.~\ref{sec:level2} the QME used will be introduced. Our numerical results are detailed in Sec.~\ref{sec:level4}. In Sec.~\ref{sec:level5} we compare the numerical investigation with an ideal TLS machine where ideal process steps are assumed. The obtained result is rather general and valid for any kind of TLS machine. \section{\label{sec:level23}Thermodynamic Variables} \subsection{Work and Heat} To describe thermodynamic processes and machines one first has to define the pertinent variables heat, work, temperature and entropy for the system under consideration.
Starting from the energy expectation value \begin{equation} U=\left\langle E \right\rangle = \Tr{\lbrace \Hop \op \rho \rbrace } \end{equation} for a quantum system $\Hop$ with discrete spectrum ($\op \rho$ is the density operator) and considering the temporal change of $\left\langle E \right\rangle$ \begin{equation} \dod {}{t} \left\langle E \right\rangle = \Tr{\left\lbrace \dod {}{t}\Hop \op \rho\right\rbrace } + \Tr{\left\lbrace \Hop \dod{}{t} \op \rho\right\rbrace }, \label{eq5} \end{equation} the change of work $W$ can be associated with the first term of (\ref{eq5}), where only the spectrum changes, \begin{equation} \dod {}{t} W=\Tr{\left\lbrace \dod {}{t}\Hop \op \rho\right\rbrace }=\sum_i \dot{E}^i p^i. \label{eq6} \end{equation} Here $\dot{E}^i$ is the change per time of the $i$-th eigenvalue and $p^i$ is the corresponding occupation probability. The change of heat $Q$ is then the second part of (\ref{eq5}), \begin{equation} \dod {}{t} Q=\Tr{\left\lbrace \Hop \dod {}{t} \op \rho\right\rbrace }=\sum_i \dot{p}^i E^i. \label{eq7} \end{equation} Equation (\ref{eq5}) thus boils down to the famous Gibbs relation \begin{equation} \Delta U=\Delta W+\Delta Q, \label{eq8} \end{equation} where $\Delta U$ is the energy change of the system. For cyclic processes, work $\Delta W$ and heat $\Delta Q$ can also be calculated with the help of the $S T$-diagram. For a closed path in the $S T$-plane $\Delta U = 0$ and thus \begin{equation} \Delta W = - \Delta Q =- \oint T \dd S. \label{eq9} \end{equation} While connected to bath $\alpha$, $\Delta Q_\alpha$ can alternatively be calculated from the respective heat current $J_\alpha$ over one cycle of duration $\tau$, \begin{equation} \Delta Q_\alpha=\int_0^\tau J_\alpha \dd t. \label{eq10} \end{equation} Typically there are two baths, $\alpha=h,c$, and thus two contributions (see Fig.~\ref{fig1}) \begin{equation} \Delta Q= \Delta Q_h + \Delta Q_c.
\label{eq10b} \end{equation} \subsection{Temperature and Entropy} The temperature of a system can be defined if the state in the energy eigenbasis is canonical. For a TLS $\mu$ the temperature is given by \begin{equation} T_\mu=-\frac{E^1_\mu-E^0_\mu}{\ln p^1_\mu-\ln p^0_\mu}, \label{eq11} \end{equation} with occupation probability $p_\mu^i$ of the energy level $E_\mu^i$. Because all coherences will be damped out by the bath, it is always possible to assign a local temperature to a single TLS. The von Neumann entropy \begin{equation} S_\mu=-\Tr{\left\lbrace \op \rho_\mu \ln\op \rho_\mu\right\rbrace }=-\sum_i p^i_\mu \ln p^i_\mu \label{eq12} \end{equation} can then be taken as the thermodynamic entropy. \subsection{Efficiencies} The efficiency of a heat pump is defined as the ratio of the heat $\Delta Q_h$ pumped per cycle into the hot reservoir to the work applied, \begin{equation} \eta^p=-\Delta Q_h/\Delta W, \label{eq14} \end{equation} which reduces for the Carnot heat pump to \begin{equation} \eta_\text{Car}^p=T_h/(T_h-T_c). \label{eq15} \end{equation} For the heat engine the efficiency is \begin{equation} \eta^e=-\Delta W/\Delta Q_h, \label{eq15a} \end{equation} which in the Carnot case leads to \begin{equation} \eta_\text{Car}^e=1-T_c/T_h. \label{eq16} \end{equation} \section{Driven Spin System} \subsection{Hamilton-Model} \begin{figure} \centering \includegraphics[width=.45 \textwidth]{fig1.eps} \caption{Schematic representation of the system under investigation: an inhomogeneous three-spin chain interfaced between two baths.
Spin~1 (with energy splitting $\delta_1$) and spin~3 ($\delta_3$) act as filters, whereas spin~2 [$\delta_2(t)$] serves as the working gas via deformation of its spectrum.} \label{fig1} \end{figure} The model under investigation is an inhomogeneous spin chain with nearest neighbor coupling of Heisenberg type, described by the Hamiltonian \begin{equation} \Hop =\sum_{\mu=1}^3 \frac{\delta_\mu}{2} \Z_\mu + \lambda \sum_{\mu=1}^{2} \sum_{i=x,y,z} \op \sigma^i_\mu \otimes \op \sigma_{\mu+1}^i . \label{eq1} \end{equation} The $\op \sigma^i_\mu$ are the Pauli operators of the $\mu$th spin, and $\lambda$ is the coupling strength, which is chosen to be small compared to the local Zeeman splittings $\delta_\mu$, $\lambda \ll \delta_\mu$. Because $\delta_\mu \ne \delta_{\mu+1}$ we call the spin chain inhomogeneous. We will need at least three spins in order to have this system work as a thermodynamic pump or machine. The spin chain is brought into local contact with two baths at different temperatures as depicted in Fig.~\ref{fig1}. The detuning between spin 1 and spin 3 is $\delta_{13}=(\delta_1-\delta_3)/2>0$. \subsection{\label{sec:level2}Quantum Master Equation} There are different ways to describe the thermal behavior of quantum systems coupled to environments. Examples are the path integral method \cite{Weiss} or schemes based on the complete Schr\"odinger dynamics of the small system embedded into a larger quantum environment \cite{GeMiMa2004, Henrich2005, Michel2005}. Because it is much simpler for the present context we settle for a master equation approach. Such an approach has been widely applied to describe system-bath models \cite{Breuer,Kubo}. To derive the master equation for our model one usually starts from the Liouville-von-Neumann equation for the total system (we set $\hbar$ and the Boltzmann constant $k_B$ equal to 1) \begin{equation} \dod{}{t} \op \rho(t) = -\iu \Kom{\Hop}{\op \rho(t)}.
\label{eq1y} \end{equation} The Hamiltonian is composed of three terms, \begin{equation} \Hop =\Hop_\text{s}+\Hop_\text{env}+\kappa \Hop_\text{int}, \label{eq1b} \end{equation} with $\Hop_\text{s}$ the Hamiltonian of the relevant system, $\Hop_\text{env}$ the environment/bath Hamiltonian, and $\Hop_\text{int}$ the system-bath interaction with coupling strength $\kappa$. Using a projection operator technique up to second order in $\kappa$ together with the Born-Markov approximation, the dynamics of the reduced system density operator $ \op \rho_\text{s} (t)$ reads \begin{align} & \dod{}{t} \op \rho_\text{s} (t) = \notag \\ & - \kappa^2 \int_{t_0}^{t} \dd s \Tr_\text{env} \left\lbrace {\Kom{\Hop_\text{int}(t)}{\Kom{\Hop_\text{int}(t-s)}{\op \rho_\text{s}(t)\otimes \op \rho_\text{env}}}} \right\rbrace , \label{eq1c} \end{align} where $\op \rho_\text{env}$ is a fixed state of the environment and $\text{Tr}_\text{env}$ denotes the trace over all degrees of freedom of the environment (see \cite{Breuer}). In general the interaction Hamiltonian $\Hop_\text{int}$ is defined as \begin{equation} \Hop_\text{int}=\sum_i \op X_i \otimes \op B_i, \label{eq1d} \end{equation} where $\op X_i$ operates on the system and $\op B_i$ on the environment. For the coupling with a single spin we take $\op X = \X$. Inserting (\ref{eq1d}) into (\ref{eq1c}) and transforming to the Schr\"odinger picture, the following compact form can be obtained: \begin{equation} \dod{}{t}\op \rho_\text{s}=-\iu \Kom{\Hop_\text{s}}{\op \rho_\text{s}} + \mathcal{\op D}(\op \rho_\text{s}). \label{eq1e} \end{equation} As in \cite{Saito2000} we use the dissipator $\mathcal{\op D}(\op \rho_\text{s})$ \begin{equation} \op{\mathcal{D}}(\op \rho) = \Kom {\op X}{\op R \op \rho}+\Kom{\op X}{\op R \op\rho}^\dagger\, \label{eq1f} \end{equation} with \begin{equation} \Bra{l}\op R \Ket{m}=\Bra{l} \op X \Ket{m} \Phi(E_l-E_m), \label{eq1z} \end{equation} suppressing the system label s in the following.
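This dissipator structure is a direct transcription away from code. The sketch below is an illustration (not the code behind the paper's numerics): the rate function $\Phi$ and the matrix $\op R$ are taken as inputs, and only convention-independent properties of $\op{\mathcal D}$ are checked, namely that it annihilates the trace and preserves Hermiticity of the state.

```python
import numpy as np

def R_matrix(X, energies, phi):
    # matrix elements R_lm = X_lm * Phi(E_l - E_m) in the energy eigenbasis
    E = np.asarray(energies, dtype=float)
    return X * phi(E[:, None] - E[None, :])

def dissipator(X, R, rho):
    # D(rho) = [X, R rho] + [X, R rho]^dagger
    C = X @ R @ rho - R @ rho @ X
    return C + C.conj().T

# toy two-level example with an illustrative (made-up) rate function phi
X = np.array([[0.0, 1.0], [1.0, 0.0]])
R = R_matrix(X, [-0.5, 0.5], lambda w: np.exp(-w))
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real            # normalized density matrix
D = dissipator(X, R, rho)
```

Trace conservation holds for any choice of $\op X$, $\op R$ and $\op\rho$, since the trace of a commutator vanishes; the overall sign and rate conventions depend on the definitions above.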
$\Bra{l}$ and $\Ket{m}$ are system eigenstates with the respective energy eigenvalues $E_{l/m}$. $\Phi(E_l-E_m)=\Phi(\omega_{lm})$ is the bath correlation tensor \begin{equation} \Phi(\omega_{lm})=\int_0^\infty \text{e}^{\iu \omega_{lm} s} \langle \op B(s) \op B(0) \rangle \dd s, \label{eq1g} \end{equation} containing the bath correlation function \begin{equation} \langle \op B(s) \op B(0) \rangle=\Tr_\text{env}{\lbrace \op B(s) \op B(0) \op \rho_\text{env}\rbrace }. \label{eq2x} \end{equation} Assuming that the state of the bath is a thermal one, \begin{equation} \op \rho_\text{env} = \frac{\text{e}^{- \beta \Hop_\text{env}}}{Z_\text {env}} \label{eq2y} \end{equation} ($Z_\text{env}$ being the partition function), and that the bath consists of uncoupled harmonic oscillators, $\Phi(\omega_{lm})$ takes the form \begin{equation} \Phi(\omega_{lm})=\kappa\left( \frac{\theta (\omega_{lm})}{\text{e}^{\omega_{lm}\beta_\alpha}-1}+\theta (\omega_{ml})\frac{\text{e}^{\omega_{ml}\beta_\alpha}}{\text{e}^{\omega_{ml}\beta_\alpha}-1}\right). \label{eq2e} \end{equation} Here $\theta(\omega_{lm})$ is the step function and $\beta_\alpha$ the respective inverse bath temperature. For a three-spin chain between two heat baths of different temperatures $T_h$ and $T_c$ and local coupling at the two chain boundaries with \begin{align} \op X_h & = \op \sigma_1^x \otimes \op 1_2 \otimes \op 1_3, \label{eq2c} \\ \op X_c & = \op 1_1 \otimes \op 1_2 \otimes \op \sigma^x_3, \label{eq2d} \end{align} we get, instead of (\ref{eq1e}) (cf. \cite{Saito2000}), \begin{equation} \dod{}{t} \op \rho=-\text{i} \Kom{\Hop}{\op \rho}+\op{\mathcal{D}}_h(\op \rho)+\op{\mathcal{D}}_c(\op \rho). \label{eq2} \end{equation} The stationary state of (\ref{eq1e}) for fixed $\delta_\mu$ is easily shown to be canonical, of the form $\op \rho^\text{stat}=\text{e}^{-\beta \Hop_\text{s}}/ \Tr{\{ \text{e}^{-\beta \Hop_\text{s}}\}}$.
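The rate function above is easy to implement and to sanity-check. The sketch below (illustrative, with $\kappa$ and $\beta$ as plain parameters) verifies the detailed-balance property $\Phi(-\omega)=\text{e}^{\omega\beta}\,\Phi(\omega)$, which is what makes the stationary state canonical:

```python
import numpy as np

def phi(omega, beta, kappa=1.0):
    # rate function for a bath of uncoupled harmonic oscillators (omega != 0):
    # absorption ~ n(omega) for omega > 0, emission ~ n(|omega|) + 1 for omega < 0
    if omega > 0:
        return kappa / np.expm1(omega * beta)
    return kappa * np.exp(-omega * beta) / np.expm1(-omega * beta)

beta, w = 0.4, 1.75
ratio = phi(-w, beta) / phi(w, beta)   # detailed balance: equals exp(w * beta)
```

Degenerate transitions ($\omega=0$) are excluded here, since the Bose factor diverges in that limit.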
However, with both baths in place the spin chain might be viewed as a molecular bridge generating a stationary leakage current $J_L=J_h=-J_c$. Here the heat current $J_\alpha$ between the 3-spin system and the bath $\alpha$ can be defined by the energy dissipated via bath $\alpha$ (cf.~\cite{Breuer}), \begin{equation} J_\alpha = \Tr{\{ \Hop \op {\mathcal{D}}_\alpha(\op \rho)\} }. \label{eq3} \end{equation} In the following a current out of bath $\alpha$ into the machine will be defined as positive. \section{\label{sec:level4}Numerical Results} \subsection{\label{sec:level4a} Non-equilibrium stationary states of a spin chain} First we note that the heat current through a spin chain depends on the local Zeeman splittings within the system. To analyze the heat current we solve (\ref{eq2}) and calculate the stationary state of the system, $\op \rho^\text{stat}$. With this solution and with the help of (\ref{eq3}) we can then calculate the currents $J_\alpha$ for each bath. We consider a system with $\delta_1=\delta_3=1$. Both heat currents (\ref{eq3}) as functions of $\delta_2$ are shown in Fig.~\ref{fig2}. $J_h$ is positive and the relation $J_h = -J_c$ is fulfilled. If $\delta_2=\delta_1=\delta_3$ (the homogeneous case) the heat currents reach their maximum. By detuning the local energy splitting it is thus possible to uncouple the respective bath from the rest of the system. This resonance effect will now be used to build a quantum thermodynamic machine out of three spins.
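The chain Hamiltonian entering these computations can be assembled from Pauli matrices with Kronecker products. A minimal numpy sketch (a hedged illustration of the model, not the authors' code; the open-chain coupling runs over the two nearest-neighbor bonds):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2, dtype=complex)

def embed(op, site, n=3):
    # place a single-spin operator at position `site` (0-based) in an n-spin chain
    mats = [one] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def chain_hamiltonian(deltas, lam):
    # local Zeeman terms plus Heisenberg coupling on the nearest-neighbor bonds
    n = len(deltas)
    H = sum(0.5 * d * embed(sz, mu, n) for mu, d in enumerate(deltas))
    for mu in range(n - 1):
        for s in (sx, sy, sz):
            H = H + lam * embed(s, mu, n) @ embed(s, mu + 1, n)
    return H

H = chain_hamiltonian([2.25, 2.0, 1.75], lam=0.01)   # delta_2 at its mean value
```

For $\lambda=0$ the spectrum is the set of sums $\pm\delta_\mu/2$, which provides a quick consistency check of the construction.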
\begin{figure} \centering \includegraphics[width=.46 \textwidth]{fig2.eps} \caption{Stationary heat current $J_h$ [see (\ref{eq3})] (from hot bath) and $J_c$ (to cold bath) as functions of the local energy splitting $\delta_2$ of spin 2 with $\delta_1=\delta_3=1, T_h=2.63$ and $T_c=2.5$.} \label{fig2} \end{figure} \subsection{\label{sec:level4b}Time dependent behavior: spin system as thermodynamic heat pump or machine} \subsubsection{\label{sec:level4b1}The heat current} The above situation changes when the energy splitting of spin 2 is chosen to be time-dependent, i.e., \begin{equation} \delta_2(t)=\sin(\omega t) b + b_0. \label{eq13} \end{equation} Different energy splittings of the boundary spins, e.g., $\delta_1=2.25$ and $\delta_3=1.75$, are used to install left/right-selective resonance effects. The parameters of (\ref{eq13}) are chosen as $b_0=(\delta_1+\delta_3)/2$, and $b$ is the detuning $\delta_{13}$. To enable the bath to damp the system, $\omega \ll \delta_2$ must be fulfilled. For solving (\ref{eq2}) we have used a fourth-order Runge-Kutta algorithm. At each time step the bath correlation function is calculated explicitly. We choose the following parameters for our numerical results: $\lambda = 0.01$, $\kappa=0.001$, $\delta_1=2.25$, $\delta_3=1.75$, $\omega=1/128$, $T_c=2.5$ and $T_h$ is varied, unless stated otherwise. Both coupling parameters $\lambda$ and $\kappa$ are chosen to stay in the weak coupling limit. Now when spin 2 is driven periodically as in (\ref{eq13}), we can distinguish four different steps: \begin{enumerate} \item Spin 2 (the ``working gas'') is in resonance with spin 3 [$\delta_2(t) \approx \delta_3$] and thus couples with bath $c$ at temperature $T_c$. Because of this energy resonance the current $J_c$ via spin 3 will be large, whereas the current $J_h$ via spin 1 will be negligible. The occupation probabilities of spin 2 and 3 approach each other and so do the respective local temperatures.
\item Quasi-adiabatic step: Spin 2 is out of resonance with spin 3 [$\delta_1 > \delta_2(t) > \delta_3$]; now $J_c$ is suppressed while $J_h$ stays nearly unchanged. The occupation probability of spin 2 does not change significantly and there is almost no change in the entropy $S_2$. \item Spin 2 is in resonance with spin 1 [$\delta_2(t) \approx \delta_1$] and by that in contact with bath $h$ at temperature $T_h$. $J_h$ is large whereas $J_c$ is very small. The local temperatures of spin 1 and 2 nearly equal each other. \item Quasi-adiabatic step, as in step 2. \end{enumerate} \begin{figure} \centering% \includegraphics[width=.46 \textwidth]{fig3.eps} \caption{Heat currents $J_\alpha(t)$ for the heat pump over one cycle with duration $\tau=2 \pi/\omega=804.25$ for $T_h=2.63$ and $T_c=2.5$. The peaks are a result of the resonance effect.} \label{fig3} \end{figure} Fig.~\ref{fig3} shows the heat currents $J_\alpha$ of both baths over one period with the bath temperatures $T_h=2.63$ and $T_c=2.5$. As can be seen, the resonance effect decouples spin 2 from the respective bath if its energy splitting differs from that of the boundary (filter) spins. This decoupling is never perfect, though. As a consequence there is a leakage current $J_L$ which will be discussed in more detail later on. \subsubsection{\label{sec:level4b2}Heat, Work and Efficiencies} That the studied system indeed works as a heat pump can be seen from the $S_2T_2$-diagram of spin 2 in Fig.~\ref{fig4}. The local entropy $S_2$ of spin 2 is given by (\ref{eq12}) and the local temperature $T_2$ by (\ref{eq11}). The four different steps as explained in Sec.~\ref{sec:level4b1} are shown, as well as the direction of circulation. \begin{figure} \centering% \includegraphics[width=.46 \textwidth]{fig4.eps} \caption{$S_2T_2$-diagram for the quantum heat pump for $T_h=2.63, T_c=2.5$ ($\Delta T=0.13$) and $\tau=2 \pi/\omega=804.25$.
The arrows indicate the direction of circulation.} \label{fig4} \end{figure} To determine the efficiency of this heat pump one needs to know the quantity of heat $\Delta Q_h$ pumped to the hot bath and the work $\Delta W$ used. $\Delta Q_h$ can be calculated by integrating the heat current $J_h$ over one period [cf.~(\ref{eq10})]. The exchanged work $\Delta W$ is given by the area enclosed in the $S_2T_2$-plane according to (\ref{eq9}). We find that indeed $\Delta W+\Delta Q_c+\Delta Q_h=0$ in all cases, thereby confirming the use of $T_2$ and $S_2$ as effective thermodynamic variables. In contrast to the Carnot model our machine works in finite time. If driven too fast, the bath is not able to damp the system; if driven too slowly (quasi-stationarily, i.e. $\omega \ll \kappa$), the system would have reached its momentary steady state transport configuration. The $S_2T_2$-area then vanishes as depicted in Fig.~\ref{fig7}. This is caused by the leakage current. \begin{figure} \centering \includegraphics[width=.46 \textwidth]{fig7} \caption{$S_2T_2$-diagram for the quasi-statically driven quantum heat pump (with parameters as in Fig.~\ref{fig4}). Because of the leakage current the enclosed $S_2T_2$-area vanishes and no work is exchanged.} \label{fig7} \end{figure} Figure~\ref{fig5} shows the Carnot efficiency for the heat pump, $\eta_\text{Car}^p$, and for the machine, $\eta_\text{Car}^e$, together with the respective efficiencies of our quantum heat pump, $\eta_\text{qm}^p$ [according to (\ref{eq14})], and machine, $\eta_\text{qm}^e$ [according to (\ref{eq15a})], as functions of the temperature difference $\Delta T=T_h-T_c$. We point out the following interesting findings: \begin{itemize} \item The efficiency curve of the quantum heat pump or machine is always below the respective Carnot efficiency. As expected, the 2nd law is never violated. \item For $\Delta T=0$, $\eta_\text{qm}^p$ neither diverges nor goes to zero.
This means that the machine can start out of equilibrium and begin to cool a reservoir. \item At a specific temperature difference $\Delta T$, here $\Delta T_\text{max} \approx 0.6$, the heat pump switches to operate as a heat engine. To illustrate this fact, Fig.~\ref{fig11} shows the area in the $S_2T_2$-plane for $\Delta T=0.83 > \Delta T_\text{max}$. As depicted, the direction of circulation has reversed. \end{itemize} \begin{figure} \centering \includegraphics[width=.46 \textwidth]{fig11} \caption{$S_2T_2$-diagram for the quantum heat engine ($\Delta T = 0.83 > \Delta T_\text{max}$) with $T_h=3.33, T_c=2.5$ and $\tau=2 \pi/\omega=804.25$. The arrows indicate the direction of circulation.} \label{fig11} \end{figure} \begin{figure} \centering \includegraphics[width=.47 \textwidth]{fig5} \caption{The Carnot efficiency $\eta_\text{Car}^p$ and the efficiency $\eta_\text{qm}^p$ of the quantum heat pump ($\Delta T < \Delta T_\text{max}$) and $\eta_\text{Car}^e$ and $\eta_\text{qm}^e$ of the heat engine ($\Delta T > \Delta T_\text{max}$) as functions of the temperature difference $\Delta T$. The following parameters are chosen: $T_c=2.5, \delta_1=2.25, \delta_3=1.75$ and $\tau=2 \pi/\omega=804.25$.} \label{fig5} \end{figure} To make the last point more plausible, Fig.~\ref{fig6} shows the work $\Delta W$ and the heats $\Delta Q_h$ and $\Delta Q_c$ as functions of $\Delta T$. While $\Delta T$ is increasing, $\Delta Q_h$, $\Delta Q_c$ and $\Delta W$ are decreasing, until first $\Delta Q_c$ changes its sign, then $\Delta Q_h$ and last $\Delta W$. At the point where $\Delta W=0$ (for $\Delta T=\Delta T_\text{max}$) only the leakage current $J_L$ is flowing from the hot bath to the cold one. Beyond this $\Delta T_\text{max}$ the system starts to work as an engine.
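For a two-level working gas, the area integral (\ref{eq9}) can be evaluated directly from sampled occupation probabilities via (\ref{eq11}) and (\ref{eq12}). The sketch below is our illustration with constructed cycle data (not the paper's simulation output): it checks the trapezoidal $\oint T \dd S$ against the closed form $\Delta W=(\delta_1-\delta_3)(p_c-p_h)$, which one can derive for an ideal four-stroke cycle, $p$ denoting the excited-state occupation:

```python
import numpy as np

def T_of(p, delta):
    # local temperature of a TLS, p = excited-state occupation
    return -delta / (np.log(p) - np.log(1 - p))

def S_of(p):
    # von Neumann entropy of the diagonal TLS state
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

d1, d3, Tc, Th = 2.25, 1.75, 2.5, 3.33
p_c = 1.0 / (1.0 + np.exp(d3 / Tc))   # occupation after cold contact (splitting d3)
p_h = 1.0 / (1.0 + np.exp(d1 / Th))   # occupation after hot contact (splitting d1)

# only the two bath-contact strokes contribute (the adiabats have dS = 0)
W = 0.0
for delta, (pa, pb) in ((d1, (p_c, p_h)), (d3, (p_h, p_c))):
    p = np.linspace(pa, pb, 20001)
    T, S = T_of(p, delta), S_of(p)
    W -= np.sum(0.5 * (T[1:] + T[:-1]) * np.diff(S))   # -closed-integral T dS
```

Along a stroke at fixed splitting $\delta$ one has $T\,\dd S=\delta\,\dd p$ exactly, which is why the trapezoidal sum reproduces the closed form to high accuracy.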
\begin{figure} \centering \includegraphics[width=0.46 \textwidth ]{fig6} \caption{Heat $\Delta Q_c$ and $\Delta Q_h$ and work $\Delta W$ performed over one cycle as functions of the temperature difference $\Delta T$ (same parameters as in Fig.~\ref{fig5}). The inset shows these functions around the point $\Delta T=\Delta T_\text{max}$ in more detail. $\Delta Q_L$ is the leakage heat per cycle.} \label{fig6} \end{figure} \section{\label{sec:level5}Analytical Results} \subsection{Ideal Quantum Machine} To understand the above numerical results we compare them to the maximum heat and work which could be pumped or extracted by a TLS quantum machine. All process steps will be taken to be ideal. By ideal we mean that we have total control of each process step. Then no leakage current will disturb the system and the heat exchange at bath contact will be without loss. In addition we assume a machine which performs work only during the adiabatic steps. Heat will be exchanged only if the machine is in contact with a bath. This can be compared with the Otto cycle \cite{Feldmann2003, Feldmann2004}. We start with spin~2 in contact with spin~3 and thus with the cold bath. The state of spin~2 after this contact is a canonical one of the form \begin{equation} \op \rho_\text{s} = \frac{1}{Z} \left( \begin{array}{cc} \text{e}^{\delta_3/(2T_c)} & 0 \\ 0 & \text{e}^{-\delta_3/(2T_c)} \end{array}\right). \label{eq50} \end{equation} $Z$ is the partition function, and we have assumed that the energy of the ground state is $E_2^0=E_3^0=-\frac{\delta_3}{2}$ and that of the excited state $E_2^1=E_3^1=\frac{\delta_3}{2}$ because both spins are in resonance. After this equilibration with the cold bath at $T_c$, spin~2 is driven until its local energy splitting is equal to that of spin~1 ($E_2^0=E_1^0=-\frac{\delta_1}{2}$ and $E_2^1=E_1^1=\frac{\delta_1}{2}$). The work for this step can be calculated with (\ref{eq6}). This step is adiabatic as $\op \rho_2$ does not change.
The work $W_{3 \rightarrow 1}$ is then given by the energy difference before and after reaching the splitting of spin~1, \begin{equation} W_{3 \rightarrow 1}=\frac{1}{2} (\delta_3 - \delta_1) \tanh \left( \frac{\delta_3}{2T_c} \right). \label{eq51} \end{equation} In contact with spin~1, spin~2 exchanges heat $\Delta Q_h^\text{id}$ with the hot bath at $T_h$. No work will be done; only the occupation probabilities of spin~2 will change, towards a thermal state with $T_2=T_h$. The exchanged heat can be calculated from the energy difference before and after thermalization, \begin{equation} \Delta Q_h^\text{id}=\frac{\delta_1}{2} \left[ \tanh \left( \frac{\delta_3}{2T_c}\right) - \tanh \left(\frac{\delta _1}{2T_h} \right)\right]. \label{eq52} \end{equation} Then spin~2 is driven back to the energy splitting of spin~3 ($E_2^0=E_3^0=-\frac{\delta_3}{2}$ and $E_2^1=E_3^1=\frac{\delta_3}{2}$). The work $W_{1 \rightarrow 3}$ for this step is given by \begin{equation} W_{1 \rightarrow 3}=\frac{1}{2} (\delta_1 - \delta_3) \tanh \left( \frac{\delta_1}{2T_h} \right). \label{eq53} \end{equation} Finally the heat $\Delta Q_c^\text{id}$, \begin{equation} \Delta Q_c^\text{id}=\frac{\delta_3}{2} \left[ \tanh\left( \frac{\delta_1}{2T_h}\right)-\tanh \left(\frac{\delta _3}{2T_c} \right)\right], \label{eq54} \end{equation} will be exchanged with the cold bath via spin~3. The total work $\Delta W_\text{tot}$ is given by \begin{equation} \Delta W_\text{tot}=W_{3 \rightarrow 1}+W_{1 \rightarrow 3}. \label{eq54b} \end{equation} The Gibbs relation \begin{equation} \Delta W_\text{tot}+\Delta Q_h^\text{id}+\Delta Q_c^\text{id}=0 \label{eq54c} \end{equation} can easily be verified. With the help of (\ref{eq51})--(\ref{eq54b}) it is now possible to calculate the efficiency of this ideal machine.
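The four ideal strokes are easy to tabulate numerically; the following sketch (parameter values as in the text) evaluates the stroke formulas above and confirms the Gibbs relation:

```python
import numpy as np

def ideal_cycle(d1, d3, Th, Tc):
    a = np.tanh(d3 / (2 * Tc))     # polarization after cold contact
    b = np.tanh(d1 / (2 * Th))     # polarization after hot contact
    W31 = 0.5 * (d3 - d1) * a      # work for the stroke delta_3 -> delta_1
    Qh  = 0.5 * d1 * (a - b)       # heat exchanged with the hot bath
    W13 = 0.5 * (d1 - d3) * b      # work for the stroke delta_1 -> delta_3
    Qc  = 0.5 * d3 * (b - a)       # heat exchanged with the cold bath
    return W31 + W13, Qh, Qc

W, Qh, Qc = ideal_cycle(2.25, 1.75, 3.33, 2.5)
eta_p = -Qh / W                    # pump efficiency, equals d1 / (d1 - d3)
```

Note that the pump efficiency comes out independent of the bath temperatures, as derived below.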
For the heat pump we get \begin{equation} \eta_\text{id}^p=\frac{-\Delta Q_h^\text{id}}{\Delta W_\text{tot}}=\frac{\delta_1}{\delta_1 - \delta_3}, \label{eq55} \end{equation} for the machine \begin{equation} \eta_\text{id}^e=\frac{-\Delta W_\text{tot}}{\Delta Q_h^\text{id}}=\frac{\delta_1 - \delta_3}{\delta_1}. \label{eq56} \end{equation} This result is similar to that obtained by Kieu \cite{Kieu2004,Kieu2006} and is the maximum a TLS can reach. Here we want to compare the efficiency of the ideal pump $\eta_\text{id}^p$ and engine $\eta_\text{id}^e$ with the respective Carnot efficiencies for the parameters used for our numerical results. \begin{figure}[h] \centering \includegraphics[width=0.5 \textwidth ]{fig8} \caption{Carnot efficiency $\eta_\text{Car}^p$ for the heat pump and $\eta_\text{Car}^e$ for the engine as functions of the temperature difference $\Delta T$, with $T_c=2.5, \delta_1=2.25$ and $\delta_3=1.75$ as in Fig.~\ref{fig5}. $\eta_\text{id}^p$ and $\eta_\text{id}^e$ are the efficiencies of the ideal pump/engine [see (\ref{eq55}) and (\ref{eq56})]. $\tilde \eta^p_\text{id}=12.36$ and $\Delta \tilde T_\text{max}=0.22$ can be realized for $\delta_1=1.904$, $\delta_3=1.75$ and $T_c=2.5$.} \label{fig8} \end{figure} Figure~\ref{fig8} shows the Carnot efficiencies as well as the ones from (\ref{eq55}) and (\ref{eq56}). $\eta_\text{id}^{p/e}$ is always below $\eta_\text{Car}^{p/e}$ until it reaches a maximal temperature difference $\Delta T_\text{max}$ (with $T_c=2.5, \delta_1=2.25$ and $\delta_3=1.75$ we get $\Delta T_\text{max}=0.714$). At this temperature the heat pump is working losslessly, yet no heat can be pumped. Just like the quasi-stationary Carnot heat pump, this pump has zero power. Only in this particular case is $\eta_\text{id}^p = \eta_\text{Car}^p$. By further increasing the temperature $T_h$ the heat pump starts working as a heat engine.
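The crossover point follows from equating (\ref{eq55}) with the Carnot value: $\delta_1/(\delta_1-\delta_3)=T_h/(T_h-T_c)$ gives $T_h/T_c=\delta_1/\delta_3$ and hence $\Delta T_\text{max}=T_c(\delta_1/\delta_3-1)$. A two-line check reproduces the numbers quoted above:

```python
def dT_max(d1, d3, Tc):
    # eta_id^p = eta_Car^p  <=>  T_h / T_c = d1 / d3
    return Tc * (d1 / d3 - 1.0)

print(round(dT_max(2.25, 1.75, 2.5), 3))    # 0.714, as in the text
print(round(dT_max(1.904, 1.75, 2.5), 2))   # 0.22, as in Fig. 8
print(round(1.904 / (1.904 - 1.75), 2))     # 12.36, the ideal pump efficiency there
```

The same expression also explains why moving $\delta_1$ towards $\delta_3$ trades a smaller working window $\Delta T_\text{max}$ for a larger ideal efficiency.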
Figure~\ref{fig9} illustrates this behavior; there $\Delta W^\text{id}, \Delta Q_h^\text{id}$ and $\Delta Q_c^\text{id}$ are depicted as functions of $\Delta T$. At $\Delta T_\text{max}$ no heat $\Delta Q_h^\text{id}$ is pumped, and therefore no work is used and no heat is exhausted to do work. \begin{figure}[h] \centering \includegraphics[width=0.46 \textwidth ]{fig9} \caption{Work $\Delta W^\text{id}$, heat $\Delta Q_h^\text{id}$ from/to the hot bath and heat $\Delta Q_c^\text{id}$ from/to the cold bath for the ideal machine as functions of the temperature difference $\Delta T$, with $T_c=2.5, \delta_1=2.25$ and $\delta_3=1.75$ as in Fig.~\ref{fig6}. At $\Delta T=\Delta T_\text{max}$, $\eta_\text{id}^p = \eta_\text{Car}^{p/e}$ and therefore $\Delta W^\text{id}=0$, $\Delta Q_h^\text{id}=0$ and $\Delta Q_c^\text{id}=0$.} \label{fig9} \end{figure} This is qualitatively the same behavior as our model shows in Fig.~\ref{fig5} and Fig.~\ref{fig6}. Two differences can be seen. First, the critical temperature difference in our numerical results deviates from the theoretically expected one: from the numerics we get $\Delta T_\text{max}\approx 0.6$. Second, the inset in Fig.~\ref{fig6} shows that $\Delta Q_c$ changes its sign before $\Delta Q_h$ does. Both effects are due to the leakage current, as will be explained below. For a given bath temperature (like $T_c=2.5$ in our example) it is possible to influence $\Delta T_\text{max}$ by changing the energy splittings $\delta_1$ and/or $\delta_3$. In Fig.~\ref{fig8} a different efficiency $\tilde \eta_\text{id}^p$ is also depicted. $\tilde \eta_\text{id}^p$ can be realized by decreasing $\delta_1$, so that $\Delta T_\text{max}$ is decreased to $\Delta \tilde T_\text{max}$. \subsection{Quantum machine with leakage current} The efficiency of an ideal two-level quantum machine is independent of $\Delta T$ except at $\Delta T= \Delta T_\text{max}$, where it jumps between its heat pump and its heat engine value.
The efficiency obtained from the numerical simulation deviates somewhat from this expected behavior. For the heat pump the efficiency of our model is even larger than the ideal one (see Fig.~\ref{fig10}). To understand this effect we analyze the leakage current from a phenomenological point of view. \begin{figure} \centering \includegraphics[width=0.5 \textwidth ]{fig10} \caption{Fitted efficiency $\eta_\text{qm}^p$ for the quantum heat pump and quantum engine $\eta_\text{qm}^e$ as functions of the temperature difference $\Delta T$, with $T_c=2.5, \delta_1=2.25$ and $\delta_3=1.75$ as in Fig.~\ref{fig5}. $\eta_\text{id}^p$ and $\eta_\text{id}^e$ are the efficiencies of the ideal pump/engine [see (\ref{eq55}) and (\ref{eq56})].} \label{fig10} \end{figure} First we assume that the leakage current causes the gas spin~2 to approach a thermal state which is not in accordance with the bath temperature. In this case $\Delta Q_h$ and $\Delta Q_c$ will be decreased. This effect is responsible for the vanishing of $\eta_\text{qm}^{p/e}$ before reaching $\Delta T_\text{max}$. But it cannot explain why the efficiency $\eta_\text{qm}^p$ is sometimes larger than $\eta_\text{id}^p$. Taking into account that also less work is performed due to the leakage current, it is possible to find a larger efficiency. This can be interpreted as the gas spin~2 not ``seeing'' the full energy splitting $\delta_1$. As shown in Fig.~\ref{fig10}, our phenomenological model fits the numerical data quite well. For the efficiency of the heat engine it can be seen from Fig.~\ref{fig10} that it is always below that of the ideal engine, $\eta_\text{qm}^e < \eta_\text{id}^e$. \section{Conclusion} We have studied a driven 3-spin system coupled to two split heat baths. We have shown that such small quantum networks may be used not only as quantum information processors but also as quantum thermodynamic machines.
For the latter proposal we would primarily exploit the (time-dependent) deformation of discrete spectra and the associated resonance transfer. While interesting functionality appears already for $N = 3$ spins, larger spin networks subject to such very limited control could also be envisaged without losing inherent stability: ultimately this stability is dictated by the increase of entropy, i.e., by the second law of thermodynamics. For a thermodynamic TLS machine working with ideal heat transport and adiabatic steps we have derived an ideal efficiency. This efficiency is independent of the bath temperatures. By tuning the energy splitting of the TLS the quantum thermodynamic machine can be used as a heat pump or heat engine. The Carnot efficiency will only be reached when a TLS machine is working losslessly. Taking dissipation into account, it is possible to understand the leakage current present in our numerics from a phenomenological point of view. Surprisingly, a leakage current could even increase the efficiency of a heat pump, whereas for a heat engine it only decreases the efficiency. There are a number of different options for implementations \cite{Haefner2005, Maklin2001} and also various possibilities to introduce the time-dependent control. For simplicity we have restricted ourselves here to external driving; alternatively one might look for autonomous system designs \cite{Tonner2005}, e.g., by using a mechanical oscillator (cantilever) \cite{Schwab2005}. Artificial autonomous nanomotors powered by visible light have recently been demonstrated experimentally \cite{Balzani2006}. From a fundamental point of view several interesting questions remain: What is the status of thermodynamic variables for such quantum systems? To what extent are these measurable in the nano-domain, without being operators? And if measured, how would the measurement result fluctuate \cite{Esposito2006}?
As noted already, a two-level system diagonal in its local energy basis can always be described as canonical with some temperature $T$, i.e., there is conceptually no room for nonequilibrium here. It is remarkable that for periodic operation work can then be associated with the area defined by the closed path in the effective entropy-temperature plane for the driven spin, as in macroscopic models. This may challenge the subjective-ignorance interpretation of non-pure states as classical mixtures, i.e., the assumption that the individual spin is either up or down at any time. If the thermal state is taken to result from quantum entanglement with the environment \cite{GeMiMa2004}, this classical picture is no longer needed; those concepts from quantum information seem to be more appropriate here. \begin{acknowledgments} We thank J. Gemmer, F. Rempp, G. Reuther, H. Schmidt, H. Schr\"oder, J. Teifel, P. Vidal and H. Weimer for fruitful discussions. We thank the Deutsche Forschungsgemeinschaft for financial support. \end{acknowledgments} \bibliographystyle{apsrev}
\section{ Introduction and preliminaries} The notion of selectors comes from {\it Topology}. Let $X$ be a topological space, let $exp \ X$ denote the set of all non-empty closed subsets of $X$ endowed with some (initially, the Vietoris) topology, and let $\mathcal{F}$ be a non-empty closed subset of $exp \ X$. A continuous mapping $f: \mathcal{F}\rightarrow X$ is called an $\mathcal{F}$-selector of $X$ if $f(A)\in A$ for each $A\in \mathcal{F}.$ Formally, coarse spaces, introduced independently in \cite{b9} and \cite{b13}, can be considered as asymptotic counterparts of uniform topological spaces. But actually, this notion is rooted in {\it Geometry, Geometrical Group Theory} and {\it Combinatorics}, see \cite{b13}, \cite{b3}, \cite{b5} and \cite{b9}. The investigation of selectors of coarse spaces was initiated in \cite{b8}. We begin with some basic definitions. \vspace{5 mm} Given a set $X$, a family $\mathcal{E}$ of subsets of $X\times X$ is called a {\it coarse structure} on $X$ if \begin{itemize} \item{} each $E \in \mathcal{E}$ contains the diagonal $\bigtriangleup _{X}:=\{(x,x): x\in X\}$ of $X$; \vspace{3 mm} \item{} if $E$, $E^{\prime} \in \mathcal{E}$ then $E \circ E^{\prime} \in \mathcal{E}$ and $ E^{-1} \in \mathcal{E}$, where $E \circ E^{\prime} = \{ (x,y): \exists z\;\; ((x,z) \in E, \ (z, y)\in E^{\prime})\}$, $ E^{-1} = \{ (y,x): (x,y) \in E \}$; \vspace{3 mm} \item{} if $E \in \mathcal{E}$ and $\bigtriangleup_{X}\subseteq E^{\prime}\subseteq E$ then $E^{\prime} \in \mathcal{E}$. \end{itemize} Elements $E\in\mathcal E$ of the coarse structure are called {\em entourages} on $X$. For $x\in X$ and $E\in \mathcal{E}$ the set $E[x]:= \{ y \in X: (x,y)\in E\}$ is called the {\it ball of radius $E$ centered at $x$}. Since $E=\bigcup_{x\in X}( \{x\}\times E[x]) $, the entourage $E$ is uniquely determined by the family of balls $\{ E[x]: x\in X\}$.
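As a concrete illustration (ours, not from the paper), the metric coarse structure on $\mathbb{Z}$ has the base $E_r=\{(x,y): |x-y|\le r\}$. A small Python sketch represents entourages as sets of pairs and checks the composition law $E_r\circ E_s=E_{r+s}$ on a finite window of the integers (away from the window's boundary, where the intermediate point $z$ always exists):

```python
def E(r, pts):
    # entourage of radius r over a finite window `pts` of the integers
    return {(x, y) for x in pts for y in pts if abs(x - y) <= r}

def compose(E1, E2):
    # E1 o E2 = {(x, y) : exists z with (x, z) in E1 and (z, y) in E2}
    return {(x, y) for (x, z) in E1 for (z2, y) in E2 if z == z2}

def ball(Ent, x):
    # the ball E[x] = {y : (x, y) in E}
    return {y for (x0, y) in Ent if x0 == x}

window = range(-20, 21)
E2, E3 = E(2, window), E(3, window)
```

The restriction to inner points matters: near the edge of the window a witness $z$ may fall outside it, which is a finite-truncation artifact, not a failure of the law.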
A subfamily ${\mathcal E} ^\prime \subseteq\mathcal E$ is called a {\em base} of the coarse structure $\mathcal E$ if each set $E\in\mathcal E$ is contained in some $E^\prime \in\mathcal E^\prime$. The pair $(X, \mathcal{E})$ is called a {\it coarse space} \cite{b13} or a {\em ballean} \cite{b9}, \cite{b12}. A coarse space $(X, \mathcal{E})$ is called {\it connected} if, for any $x, y \in X$, there exists $E\in \mathcal{E}$ such that $y\in E[x]$. A subset $Y\subseteq X$ is called {\it bounded} if $Y\subseteq E[x]$ for some $E\in \mathcal{E}$ and $x\in X$. If $(X, \mathcal{E})$ is connected then the family $\mathcal{B}_{X}$ of all bounded subsets of $X$ is a bornology on $X$. We recall that a family $\mathcal{B}$ of subsets of a set $X$ is a {\it bornology} if $\mathcal{B}$ contains the family $[X] ^{<\omega} $ of all finite subsets of $X$ and $\mathcal{B}$ is closed under finite unions and taking subsets. A bornology $\mathcal B$ on a set $X$ is called {\em unbounded} if $X\notin\mathcal B$. A subfamily $\mathcal B^{\prime}$ of $\mathcal B$ is called a {\em base} for $\mathcal B$ if, for each $B \in \mathcal B$, there exists $B^{\prime} \in \mathcal B^{\prime}$ such that $B\subseteq B^{\prime}$. Each subset $Y\subseteq X$ defines a {\it subspace} $(Y, \mathcal{E}|_{Y})$ of $(X, \mathcal{E})$, where $\mathcal{E}|_{Y}= \{ E \cap (Y\times Y): E \in \mathcal{E}\}$. A subspace $(Y, \mathcal{E}|_{Y})$ is called {\it large} if there exists $E\in \mathcal{E}$ such that $X= E[Y]$, where $E[Y]=\bigcup _{y\in Y} E[y]$. Let $(X, \mathcal{E})$, $(X^{\prime}, \mathcal{E}^{\prime})$ be coarse spaces. A mapping $f: X \to X^{\prime}$ is called {\it macro-uniform } if for every $E\in \mathcal{E}$ there exists $E^{\prime}\in \mathcal{E}^{\prime}$ such that $f(E[x])\subseteq E^{\prime}[f(x)]$ for each $x\in X$. If $f$ is a bijection such that $f$ and $f ^{-1 }$ are macro-uniform, then $f $ is called an {\it asymorphism}.
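For instance (our illustration), the doubling map $f(x)=2x$ on $\mathbb{Z}$ with the metric entourages $E_r$ is macro-uniform, with witness $E^\prime=E_{2r}$: $f(E_r[x])\subseteq E_{2r}[f(x)]$ for every $x$. A quick finite check:

```python
def ball(x, r):
    # the ball E_r[x] = {y : |x - y| <= r} on the integers
    return set(range(x - r, x + r + 1))

f = lambda x: 2 * x   # the doubling map

def macro_uniform_witness(r):
    # verify f(E_r[x]) subset of E_{2r}[f(x)] on a finite window of integers
    return all({f(y) for y in ball(x, r)} <= ball(f(x), 2 * r)
               for x in range(-50, 51))
```

Since $f$ is a bijection onto $2\mathbb{Z}$ rather than $\mathbb{Z}$, it is not an asymorphism of $\mathbb{Z}$, but its image $2\mathbb{Z}$ is a large subspace, illustrating the notion of coarse equivalence introduced below.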
If $(X, \mathcal{E})$ and $(X^{\prime}, \mathcal{E}^{\prime})$ contain large asymorphic subspaces, then they are called {\it coarsely equivalent.} For a coarse space $(X,\mathcal{E})$, we denote by $exp \ X$ the family of all non-empty subsets of $X$ and by $exp \ \mathcal{E}$ the coarse structure on $exp \ X$ with the base $\{ exp \ E : E\in \mathcal{E}\}$, where $$(A,B)\in exp \ E \Leftrightarrow A \subseteq E[B], \ \ B\subseteq E[A],$$ and say that $(exp \ X, exp \ \mathcal{E} )$ is the {\it hyperballean} of $(X,\mathcal{E})$. For hyperballeans, see \cite{b4}, \cite{b10}, \cite{b11}. Let $\mathcal{F}$ be a non-empty subspace of $exp \ X$. We say that a macro-uniform mapping $f: \mathcal{F} \longrightarrow X$ is an $\mathcal{F}$-{\it selector} of $(X,\mathcal{E})$ if $f(A)\in A$ for each $A\in \mathcal{F}$. In the cases $\mathcal{F}= [X]^2$, $\mathcal{F}= \mathcal{B}_X$ and $\mathcal{F}= exp \ X$, an $\mathcal{F}$-selector is called a $2$-{\it selector}, a {\it bornologous selector} and a {\it global selector}, respectively. We recall that a connected coarse space $(X,\mathcal{E})$ is {\it discrete} if, for each $E\in \mathcal{E}$, there exists a bounded subset $B$ of $(X,\mathcal{E})$ such that $E[x]=\{x\}$ for each $x\in X\setminus B$. Every bornology $\mathcal{B}$ on a set $X$ defines the discrete coarse space $X_\mathcal{B} = (X,\mathcal{E} _\mathcal{B})$, where $\mathcal{E}_\mathcal{B}$ is the coarse structure with the base $\{ E_B: B\in \mathcal{B}\}$, $E_B [x]=B$ if $x\in B$ and $E_B [x]= \{x\}$ if $x\in X\setminus B$. On the other hand, every discrete coarse space $(X, \mathcal{E})$ coincides with $X_\mathcal{B}$, where $\mathcal{B}$ is the bornology of bounded subsets of $(X, \mathcal{E})$. \vspace{7 mm} {\bf Theorem 1 [8].
} {\it For a bornology $\mathcal{B}$ on a set $X$, the discrete coarse space $X_{\mathcal{B}}$ admits a 2-selector if and only if there exists a linear order $\leq$ on $X$ such that the family of intervals $\lbrace [a,b]: a,b \in X, a\leq b \rbrace$ is a base for $\mathcal{B}$. \vspace{7 mm} } In Section 2, we analyze interrelations between linear orders compatible with coarse structures and selectors. In Section 3, we apply the obtained results to characterize cellular ordinal coarse spaces which admit global selectors. We conclude with Section 4 on selectors of universal spaces. \section{ Selectors and orderings } {\bf Proposition 1.} Let $(X, \mathcal{E})$ be a coarse space, $f: [X]^2 \rightarrow X$, $f(A)\in A$ for each $A\in [X]^2 $. Then the following statements are equivalent: \vspace{5 mm} {\it (i)} {\it $f$ is a 2-selector}; \vspace{3 mm} {\it (ii)} {\it for every $E\in \mathcal{E}$, there exists $F\in \mathcal{E}$ such that $E\subseteq F$ and if $\lbrace x,y\rbrace \in [X]^2$, $f(\lbrace x,y\rbrace)=x$ $(f(\{x,y\})=y)$ and $y\in X\setminus F[x]$ then $f(\lbrace x^\prime,y\rbrace)=x^\prime$ $(f(\{x^\prime,y\})=y)$ for each $x^\prime\in E[x]$. \vspace{5 mm} Proof.} $(i) \Rightarrow(ii)$. Let $E\in \mathcal{E}$. Since $f$ is macro-uniform, there exists $F\in \mathcal{E}$, $F=F^{-1}$, $E\subseteq F$ such that, for any $(A, A^\prime) \in exp \ E$, we have $(f(A), f(A^\prime)) \in F$. Let $A=\lbrace x, y \rbrace$, $A^\prime=\lbrace x^\prime, y \rbrace$, $x^\prime\in E[x]$, $f(\lbrace x, y\rbrace)=x$. Then $(f(\lbrace x, y\rbrace), f(\lbrace x^\prime, y\rbrace)) \in F$, i.e. $(x, f(\lbrace x^\prime, y\rbrace)) \in F$; since $y\in X\setminus F[x]$, it follows that $f(\lbrace x^\prime, y\rbrace) = x^\prime$. The case $(f(\{x,y\})=y)$ is analogous. \vspace{5 mm} $(ii) \Rightarrow(i)$. Let $E\in \mathcal{E}$, $E= E^{-1}$ and let $F\in \mathcal{E}$, $F= F^{-1}$ be given by $(ii)$. To verify that $f$ is macro-uniform, we show that if $A, A^\prime\in [X]^2$ and $(A, A^\prime)\in exp \ E $ then $(f(A), f(A^\prime))\in F $.
Let $A=\lbrace x,y\rbrace$, $f(\lbrace x,y\rbrace)=x$, $A^\prime=\lbrace x^\prime,y^\prime\rbrace, f(\lbrace x^\prime,y^\prime\rbrace) =x^\prime$. We suppose that $(x,x^\prime)\notin F$ and $f(\lbrace x^\prime,x\rbrace) =x$. By the choice of $F$, $f(\lbrace x^\prime,z\rbrace) =z$ for each $z\in E[x]$. Since $E[x]\cap A^\prime \neq \emptyset$, we have $y^\prime \in E[x]$ so $f(\{x^\prime, y^\prime \})= y^\prime$, contradicting $f(\{x^\prime, y^\prime \})= x^\prime$. Hence, $(x,x^\prime)\in F$. The case $(f(\{x^\prime,x\})=x^\prime)$ is analogous. $ \ \ \ \Box $ \vspace{7 mm} Let $(X, \mathcal{E})$ be a coarse space. We say that a linear order $\leq$ on $X$ is {\it compatible with the coarse structure} $\mathcal{E}$ if, for every $E\in \mathcal{E}$, there exists $F\in \mathcal{E}$ such that $E\subseteq F$ and if $\{x,y\}\in [X]^2$, $x< y$ $(y<x)$ and $y\in X\setminus F[x]$ then $x^\prime <y$ $(y<x^\prime)$ for each $x^\prime \in E[x]$. \vspace{7 mm} {\bf Proposition 2. } {\it Let $(X, \mathcal{E})$ be a coarse space and let $\leq$ be a linear order on $X$ compatible with $\mathcal{E}$. Then the following statements hold: \vspace{3 mm} $(i)$ the mapping $f: [X]^2 \rightarrow X$, defined by $f(A)= min \ A$, is a 2-selector of $(X, \mathcal{E})$; \vspace{3 mm} $(ii)$ for every $E\in \mathcal{E}$, there exists $H\in \mathcal{E}$ such that $E\subseteq H$ and if $A, A^\prime \in [X]^2$ and $(A, A^\prime) \in exp \ E$ then $(min \ A, \ min \ A^\prime) \in H$; \vspace{3 mm} $(iii)$ if $(X, \mathcal{E})$ is connected then, for any $a, b \in X$, $a<b$, the interval $[a,b]=\{ x\in X: a\leq x \leq b \}$ is bounded in $(X, \mathcal{E})$. \vspace{5 mm} Proof. } The statement $(i)$ follows from Proposition 1, and $(ii)$ follows from $(i)$. To prove $(iii)$, we use the connectedness of $(X, \mathcal{E})$ to find $E\in \mathcal{E}$, $E=E^{-1}$ such that $(a,b)\in E$. Then we take $F\in \mathcal{E}$, $F= F^{-1}$ given by the definition of an order compatible with the coarse structure.
We assume that $[a,b]$ is unbounded and choose $ c \in [a,b]$, $a< c <b$ such that $c\in X\setminus F[a]$. Then $x<c$ for each $x\in E[a]$, in particular $b<c$, and we get a contradiction. $ \ \ \ \Box $ \vspace{7 mm} {\bf Proposition 3. } {\it Let $(X, \mathcal{E})$ be a coarse space and let $\leq$ be a well-order on $X$ compatible with $\mathcal{E}$. Then $(X, \mathcal{E})$ has a global selector. \vspace{5 mm} Proof. } For each $A\in exp \ X$, we put $f(A)=min \ A $ and note that $f$ is a global selector. $ \ \ \ \Box $ \vspace{7 mm} {\bf Proposition 4. } {\it Let $(X, \mathcal{E})$ be a connected coarse space with the bornology $\mathcal{B}$ of bounded subsets, and let $X_\mathcal{B}$ denote the discrete coarse space defined by $\mathcal{B}$. If $f$ is a 2-selector of $(X, \mathcal{E})$ then $f$ is a 2-selector of $X_\mathcal{B}$. \vspace{5 mm} Proof. } For each $B\in \mathcal{B}$, we denote by $E_B$ the set $\{(x,y): x,y\in B \}\cup \vartriangle_X$. Then $\{E_B : B\in\mathcal{B}\}$ is a base of the coarse structure of $X_\mathcal{B}$ and $E_B \in \mathcal{E}$ for each $B\in\mathcal{B}$. Let $A, A^\prime \in [X]^2$ and $(A, A^\prime) \ \in exp \ E_B$. Since $f$ is a 2-selector of $(X, \mathcal{E})$, there exists $F\in \mathcal{E}$, $F=F^{-1}$ such that $(f(A), f(A^\prime))\in F$. If $A\cap B=\emptyset$ then $A= A^\prime$. If $A\subseteq B$ then $A^\prime\subseteq B$, so $(f(A), f(A^\prime))\in E_B$. Let $A=\{b, a\}$, $A^\prime=\{b^\prime, a\}$, $b\in B$, $b^\prime\in B$ and $a\in X\setminus B$. If $a\in F[\{b, b^\prime\}]$ then $f(A), f(A^\prime)\in F[\{b, b^\prime\}] $. If $a\notin F[\{b, b^\prime\}]$ then either $f(A)= f(A^\prime) = a $ or $f(A), f(A^\prime)\in \{b, b^\prime\} $. In all considered cases, we have $(f(A), f(A^\prime))\in E_{F[B] }$. Hence, $f$ is a 2-selector of $X_\mathcal{B}$. $\ \ \ \Box $ \vspace{7 mm} {\bf Proposition 5. } {\it Let $(X, \mathcal{E})$, $(X^\prime, \mathcal{E}^\prime)$ be coarsely equivalent.
If $(X^\prime, \mathcal{E}^\prime)$ admits a global selector then $(X, \mathcal{E})$ admits a global selector. The same is true for 2-selectors and bornologous selectors. \vspace{5 mm} Proof. } We consider the case of a global selector. Let $f^\prime : exp \ X^\prime \rightarrow X^\prime$ be a global selector of $(X^\prime, \mathcal{E}^\prime)$. We suppose first that $(X, \mathcal{E})$, $(X^\prime, \mathcal{E}^\prime)$ are asymorphic and $h: (X, \mathcal{E})\rightarrow (X^\prime, \mathcal{E}^\prime)$ is an asymorphism. We denote by $\overline{h}$ the natural extension $\overline{h}: exp \ X\rightarrow exp \ X^\prime$ of $h$. Then a straightforward verification gives that $h^{-1} f^\prime \overline{h}$ is a global selector of $(X, \mathcal{E})$. \vspace{3 mm} Now let $X^\prime$ be a large subset of $(X, \mathcal{E})$, $\mathcal{E}^\prime = \mathcal{E}|_{X^\prime} $, and let $f^\prime : exp \ X^\prime\rightarrow X^\prime$ be a global selector of $(X^\prime, \mathcal{E}^\prime)$. We take $H\in \mathcal{E}$ such that $X=H [X^\prime]$. Let $Y\in exp \ X$. For each $y\in Y$, we pick $z_y\in X^\prime$ such that $y\in H[z_y]$. Let $Z=\{ z_y : y\in Y \}$ and $z=f^\prime (Z)$. We take $x_z\in Y$ such that $x_z \in H[z]$ and put $f(Y)=x_z$. Then a straightforward verification gives that $f: exp \ X\rightarrow X$ is a global selector of $(X, \mathcal{E})$. $ \ \ \ \Box $ \vspace{7 mm} {\bf Question 1. } {\it Let $\leq$ be a linear order on $X$ compatible with $\mathcal{E}$. Is $\mathcal{E}$ an interval coarse structure?} \vspace{7 mm} {\bf Question 2. } {\it Let a coarse space $(X, \mathcal{E})$ admit a global selector. Does there exist a linear order on $X$ compatible with $\mathcal{E}$? } \vspace{7 mm} {\bf Question 3. } {\it Let a coarse space $(X, \mathcal{E})$ admit a 2-selector. Does $(X, \mathcal{E})$ admit a bornologous selector?} \vspace{7 mm} {\bf Question 4. } {\it Let a coarse space $(X, \mathcal{E})$ admit a bornologous selector. Does $(X, \mathcal{E})$ admit a global selector?
} \section{ Selectors of cellular spaces} Let $(X, \mathcal{E})$ be a coarse space. An entourage $E\in \mathcal{E}$ is called {\it cellular} if $E$ is an equivalence relation. If $(X, \mathcal{E})$ is connected and $ \mathcal{E}$ has a base consisting of cellular entourages then $(X, \mathcal{E})$ is called {\it cellular}. By [12, Theorem 3.1.3], $(X, \mathcal{E})$ is cellular if and only if $asdim \ (X, \mathcal{E})=0$. Every discrete coarse space and every coarse space of an ultrametric space are cellular. Following [12, p.63], we say that a coarse space $(X, \mathcal{E})$ is {\it ordinal} if $ \mathcal{E}$ has a base well-ordered by inclusion. We note that if $ \mathcal{E}$ has a base linearly ordered by inclusion then $(X, \mathcal{E})$ is cellular. For the structure of cellular ordinal spaces, see \cite{b1}. \vspace{5 mm} Let $\kappa$, $\gamma$ be cardinals. Following \cite{b1}, we denote $\kappa^{<\gamma}= \{ (x_\alpha)_{\alpha<\gamma} : x_\alpha \in \kappa, \ x_\alpha =0 \ $ for all but finitely many $ \ \ \alpha<\gamma \}$, $K_{\alpha}= \{ ((x_\beta)_{\beta<\gamma}, \ (y_\beta)_{\beta<\gamma}) : x_\beta =y_\beta$ for each $\beta\geq \alpha \}$. We take the coarse structure $\mathcal{K}_\gamma$ with the base $\{K_\alpha: \alpha<\gamma \}$ and observe that each entourage $K_\alpha$ is cellular. Thus, the macrocube $(\kappa^{<\gamma} , \mathcal{K}_\gamma)$ is cellular and ordinal. We denote ${\bf 0} = (x_\alpha)$, $x_\alpha =0$ for each $\alpha<\gamma$ and, for $x=(x_\alpha)_{\alpha<\gamma}, \ x\neq {\bf 0}$, $max \ x= max \ \{ \alpha : x_\alpha \neq 0 \}$. Given any $x=(x_\alpha)_{\alpha<\gamma}, \ y= (y_\alpha)_{\alpha<\gamma}$, $x\neq {\bf 0}$, $y\neq {\bf 0}$, we write $x\prec y$ if either $max \ x < max \ y$ or $max \ x = max \ y=\alpha$ and $x_\alpha < y_\alpha$. Also, {\bf 0} $\prec x$ for $x\neq $ {\bf 0}. Then $\preceq$ is a total order on $\kappa^{<\gamma}$ compatible with the coarse structure $\mathcal{K}_\gamma$. \vspace{7 mm} {\bf Theorem 2.
} {\it Every cellular ordinal space $(X, \mathcal{E})$ admits a well-ordering compatible with $\mathcal{E}$. \vspace{3 mm} Proof. } We put $\kappa=|X|$. By [1, Lemma 5.1], there exists an asymorphic embedding $f: (X, \mathcal{E})\rightarrow (\kappa^{<\kappa}, \mathcal{K}_\kappa)$. The total order $\preceq$ defined above on $\kappa^{<\kappa}$ induces the total order $\preceq_{f(X)}$ on $f(X)$ compatible with the coarse structure of the subspace $f(X)$ of $(\kappa^{<\kappa}, \mathcal{K}_\kappa)$. Applying $f^{-1}$, we get the desired order on $(X, \mathcal{E})$. $ \ \ \ \Box $ \vspace{7 mm} {\bf Theorem 3. } {\it Every cellular ordinal space $(X, \mathcal{E})$ admits a global selector $f: exp \ X \rightarrow X$. \vspace{3 mm} Proof. } Apply Theorem 2 and Proposition 3. $ \ \ \ \Box $ \vspace{7 mm} {\bf Question 5. } {\it How can one detect whether a given cellular coarse space admits a global selector?} \vspace{7 mm} Now we apply the obtained results to coarse spaces of groups. Let $G$ be a group with the identity $e$. We denote by $\mathcal{E}_G$ the coarse structure of $G$ with the base $$\{\{ (x, y)\in G\times G : y\in Fx\}: F\in [G]^{<\omega}, \ e\in F\}$$ and say that $(G, \mathcal{E}_G)$ is the {\it finitary coarse space } of $G$. It should be mentioned that finitary coarse spaces of groups are used as tools in {\it Geometric Group Theory}, see \cite{b3}, \cite{b5}. \vspace{7 mm} {\bf Theorem 4. } {\it If a group $G$ is uncountable then $(G, \mathcal{E}_G)$ does not admit a 2-selector. \vspace{3 mm} Proof. } We note that the bornology of bounded subsets of $(G, \mathcal{E}_G)$ is $[G]^{<\omega}$. Apply Proposition 4 and Theorem 1. $ \ \ \ \Box $ \vspace{7 mm} It is easy to see that $(G, \mathcal{E}_G)$ is cellular if and only if $G$ is locally finite (i.e. each finite subset of $G$ generates a finite subgroup). \vspace{7 mm} {\bf Theorem 5. } {\it If $G$ is a countable locally finite group then the finitary coarse space $(G, \mathcal{E}_G)$ admits a global selector. \vspace{3 mm} Proof.
} We note that $\mathcal{E}_G$ has a countable base and apply Theorem 3.$ \ \ \ \Box $ \vspace{7 mm} Any two countable locally finite groups are coarsely equivalent \cite{b2}; for the classification of countable locally finite groups up to asymorphism, see \cite{b6}. \section{Selectors of universal spaces} Let $X$ be a set and let $E \subseteq X\times X$, $\vartriangle_X \subseteq E$. We say that an entourage $E$ is \vspace{7 mm} \begin{itemize} \item{} {\it locally finite} if $E [x]$, $E^{-1} [x]$ are finite for each $x\in X$; \vspace{3 mm} \item{} {\it finitary} if there exists a natural number $n$ such that $|E[x]|< n$, $|E^{-1}[x]| < n$ for each $x\in X$. \end{itemize} \vspace{5 mm} A coarse space $(X, \mathcal{E})$ is called {\it locally finite (finitary)} if each entourage $E\in \mathcal{E}$ is locally finite (finitary). If $E, H$ are locally finite (finitary) then $E\circ H$, $E^{-1}$ are locally finite (finitary). We denote \vspace{4 mm} $ \ \ \Lambda= \{ E: E\subseteq \omega\times\omega, \ E $ is a locally finite entourage$\},$ $ \ \ \mathcal{F}= \{ E: E\subseteq \omega\times\omega, \ E $ is a finitary entourage$\},$ \vspace{4 mm} \noindent and say that $(\omega, \Lambda)$ (resp. $(\omega, \mathcal{F})$) is the universal {\it locally finite} (resp. {\it finitary}) space. We denote by $S_\omega$ the group of all permutations of $\omega$ and by $id$ the identity permutation. By [7, Theorem 3], the coarse structure $\mathcal{F}$ has the base $$\{\{ (x,y): x\in Fy\}: F \in [S_\omega]^{<\omega}, \ \ id \in F \}. $$ {\bf Theorem 6.} {\it The coarse space $(\omega, \Lambda)$ admits a global selector. \vspace{5 mm} Proof.} We denote by $\leq$ the natural order on $\omega$, prove that $\leq$ is compatible with $\Lambda$ and apply Proposition 3. For $E\in \Lambda$, let $\overline{E}= \{ (x, y): min \ E [x] \leq y \leq max \ E[x]\}. $ Clearly, $\overline{E}\in \Lambda$. If $x, y\in \omega$, $x<y$ and $y\in \omega\setminus \overline{E}[x]$ then $x^\prime < y$ for each $x^\prime \in E[x]$.
$ \ \ \ \Box $ \vspace{7 mm} {\bf Theorem 7.} {\it The coarse space $(\omega, \mathcal{F})$ does not admit a 2-selector. \vspace{5 mm} Proof.} We suppose the contrary and let $f$ be a 2-selector of $(\omega, \mathcal{F})$. We define a binary relation $ \prec$ on $\omega$ by $x\prec y$ if and only if $x\neq y$ and $f(\{x,y\})=x$. Then we choose inductively an injective sequence $(a_n)_{n\in\omega}$ in $\omega$ such that either $a_i\prec a_j$ for all $i<j$ or $a_j \prec a_i$ for all $i<j$. We consider only the first case, the second is analogous. We partition $\{a_n : n<\omega \}$ into consecutive (with respect to $\prec$) intervals $\{ T_n : n<\omega\}$, where $T_n$ has length $2n+1$. We define a permutation $h$ of $\omega$ of order 2 as follows. For $x\in \omega \setminus \{ a_n: n<\omega \}$, $hx=x$. We take $T_n$, $T_n = \{ a_m , \dots , a_{m+2n} \}$, and put $h a_m =a_{m+2n}$, $h a_{m+1} =a_{m+2n-1}, \dots ,$ $h a_{m+n} =a_{m+n}$. We put $F=\{h, id\}$, $E=\{ (x,y): y\in Fx \}$. Since $f$ is macro-uniform, there exists $H\in \mathcal{F}$, $H=H^{-1}$ such that if $A, A^\prime \in [\omega]^2$, $A\subseteq E[A^\prime]$, $A^\prime\subseteq E[A]$ then $(f(A), f(A^\prime)) \in H$. We take $k$ such that $|H[x]|< k$ for each $x\in \omega$. Let $n>k$. Since $(\{a_{m+i}, a_{m+n}\},$ $ \{a_{m+n}, a_{m+2n -i }\})\in exp \ E$ for each $i\in \{0, \dots , n-1 \}$, we get $(a_{m+i}, a_{m+n})\in H$ for each such $i$, contradicting $|H[a_{m+n}]|<k$. $\ \ \Box $
\section{Introduction} An important achievement of the research community in logic is the invention of proof assistants. Such tools allow for interactively writing proofs, which are then checked automatically and can then be reused in other developments. Unfortunately, a proof written in a proof assistant cannot be reused in another one, which makes each tool isolated in its own library of proofs. This is especially the case when considering two proof assistants with incompatible logics, as in this case simply translating from one syntax to another would not work. Therefore, in order to share proofs between systems it is very often required to perform logical transformations. One approach to share proofs from a proof assistant $ A $ to a proof assistant $ B $ is to define a transformation acting directly on the syntax of $ A $ and then implement it using the codebase of $ A $. However, this code would be highly dependent on the implementation of $ A $ and can easily become outdated if the codebase of $ A $ evolves. Moreover, if there is another proof assistant $ A' $ whose logic is very similar to the one of $ A $, then this transformation would have to be implemented a second time in order to be used with $ A' $. \vspace{-1em} \paragraph*{Dedukti} The logical framework \textsc{Dedukti}{} \cite{dedukti} is a good candidate for a system where multiple logics can be encoded, allowing for logical transformations to be defined uniformly \textit{inside} \textsc{Dedukti}{}. Indeed, first, the framework was already shown to be sufficiently expressive to encode the logics of many proof assistants \cite{thU}. Moreover, previous works have shown how proofs can be transformed inside \textsc{Dedukti}{}.
For instance, Thiré describes in \cite{sttfa} a transformation to translate a proof of Fermat's Little Theorem from the Calculus of Inductive Constructions to Higher Order Logic (HOL), which can then be exported to multiple proof assistants such as \textsc{HOL}, \textsc{PVS}, \textsc{Lean}, etc. Géran also used Dedukti to export the formalization of Euclid's Elements Book 1 in \textsc{Coq} \cite{geocoq} to several proof assistants \cite{yoan}. \vspace{-1em} \paragraph*{(Im)Predicativity} One of the challenges in proof interoperability is sharing proofs coming from impredicative proof assistants (the majority of them) to predicative ones such as \textsc{Agda}{}. Indeed, impredicativity, which is the ability in a logic to quantify over arbitrary entities, regardless of size considerations, is incompatible with predicative systems, in which each entity can only quantify over smaller ones. Therefore, it is clear that any proof that uses such characteristic in an essential way cannot be translated to a predicative system. Nevertheless, one can wonder if most proofs written in impredicative systems really need impredicativity and, if not, how one could devise a way for detecting and translating them to predicative systems. \vspace{-1em} \paragraph*{Our contribution} In this paper, we tackle this problem by proposing an algorithm that tries to do precisely this. This algorithm was implemented on top of the \textsc{DkCheck}{} type-checker for \textsc{Dedukti}{} with the tool \textsc{Predicativize}, allowing for the translation of proofs semi-automatically inside \textsc{Dedukti}{}. These proofs can then be exported to \textsc{Agda}{}, the main proof assistant based on predicative type theory. This tool has been used to translate many proofs semi-automatically to \textsc{Agda}{}, including \textsc{Matita}'s arithmetic library. 
It contains many non-trivial proofs, and in particular a proof of Bertrand's Postulate, which was the subject of a whole publication \cite{bertrand} -- thanks to our tool, the same hard work did not have to be repeated in order to make it available in \textsc{Agda}{}. \vspace{-1em} \paragraph*{Outline} We start in Section \ref{sec:dedukti} with an introduction to \textsc{Dedukti}, before moving to Section \ref{sec:firstlook}, where we present informally the problems that appear when translating proofs to predicative systems. We then introduce in Section \ref{sec:upp} a predicative universe-polymorphic system, which is a subsystem of \textsc{Agda}{} and is used as the target of the translation. This is followed by Section \ref{sec:alg}, the main one, where we present our algorithm. Section \ref{sec:solving} then proposes an (incomplete) unification algorithm for universe levels, which is used by the predicativization algorithm. We then introduce the tool \textsc{Predicativize} in Section \ref{sec:predicativize}, and describe the translation of \textsc{Matita}'s library in Section \ref{sec:matita}. We end with some remarks in Section \ref{sec:conc}. The proofs not given in the main body of the article can be found in the long version (see link on the first page).
\section{Dedukti} \label{sec:dedukti} \tolong{ \begin{figure} {\small\begin{center} \AxiomC{} \RightLabel{\texttt{Empty}} \UnaryInfC{$-; -~\texttt{well-formed}$} \DisplayProof \hskip 1.5em \AxiomC{$\Sigma;- \vdash A : \textbf{s}$} \RightLabel{\texttt{Decl-cons}} \LeftLabel{$c \notin \Sigma$} \UnaryInfC{$\Sigma, c : A;-~\texttt{well-formed}$} \DisplayProof \end{center} \begin{center} \AxiomC{$\Sigma;- \vdash M : A$} \RightLabel{\texttt{Decl-def}} \LeftLabel{$c \notin \Sigma$} \UnaryInfC{$\Sigma, c : A := M;-~\texttt{well-formed} $} \DisplayProof \hskip 1.5em \AxiomC{$\Sigma;\Gamma \vdash A : \textbf{\textup{Type}}$} \RightLabel{\texttt{Decl-var}} \LeftLabel{$x \notin \Gamma$} \UnaryInfC{$\Sigma;\Gamma, x : A~\texttt{well-formed} $} \DisplayProof \end{center} \begin{center} \AxiomC{$\Sigma;\Gamma~\texttt{well-formed}$} \RightLabel{\texttt{Cons}} \LeftLabel{$c : A\text{ or }c : A := M \in \Sigma$} \UnaryInfC{$\Sigma;\Gamma \vdash c : A $} \DisplayProof \hskip 1.5em \AxiomC{$\Sigma;\Gamma~\texttt{well-formed}$} \RightLabel{\texttt{Var}} \LeftLabel{$ x : A \in \Gamma $} \UnaryInfC{$\Sigma;\Gamma \vdash x : A $} \DisplayProof \end{center} \begin{center} \AxiomC{$ \Sigma;\Gamma~\texttt{well-formed} $} \RightLabel{\texttt{Sort}} \UnaryInfC{$\Sigma;\Gamma \vdash \textbf{\textup{Type}} : \textbf{\textup{Kind}}$} \DisplayProof \hskip 1.5em \AxiomC{$\Sigma;\Gamma \vdash M : A $} \AxiomC{$\Sigma;\Gamma \vdash B : \textbf{s} $} \RightLabel{\texttt{Conv}} \LeftLabel{$A \equiv B$} \BinaryInfC{$\Sigma;\Gamma \vdash M : B $} \DisplayProof \end{center} \begin{center} \AxiomC{$\Sigma; \Gamma, x : A \vdash B : \textbf{s} $} \RightLabel{\texttt{Prod}} \UnaryInfC{$\Sigma;\Gamma \vdash \Pi x : A . B : \textbf{s} $} \DisplayProof \hskip 1.5em \AxiomC{$\Sigma;\Gamma \vdash M : \Pi x : A . 
B $} \AxiomC{$\Sigma; \Gamma \vdash N : A $} \RightLabel{\texttt{App}} \BinaryInfC{$\Sigma;\Gamma \vdash M N : B\{N/x\} $} \DisplayProof \end{center} \begin{center} \AxiomC{$\Sigma; \Gamma, x : A \vdash B : \textbf{s} $} \AxiomC{$\Sigma; \Gamma, x : A \vdash M : B $} \RightLabel{\texttt{Abs}} \BinaryInfC{$\Sigma;\Gamma \vdash \lambda x : A . M :\Pi x : A . B $} \DisplayProof \end{center} \caption{Typing rules for \textsc{Dedukti}} \label{typing-dk}} \end{figure} } In this work we use \textsc{Dedukti}{} \cite{dedukti, thU} as the metatheory in which we express the various logic systems and define our proof transformation. Therefore, we start with a quick introduction to this system. The logical framework \textsc{Dedukti}{} has the syntax of the $ \lambda $-calculus with dependent types ($ \lambda\Pi $-calculus). \begin{align*} A, B, M, N &::= x \mid c \mid M N \mid\lambda x : A . M\mid\Pi x : A. B \mid \textbf{\textup{Type}} \mid\textbf{\textup{Kind}} \end{align*} Here, $ c $ ranges in a set of constants $ \mathcal{C} $, and $ x $ ranges in an infinite set of variables $ \mathcal{V} $ disjoint from $ \mathcal{C} $. We call a type of the form $ \Pi x : A.B $ a \textit{dependent product}, and we write $ A \to B $ when $ x $ does not appear free in $ B $. We use $ \textbf{s} $ to refer to either $ \textbf{\textup{Type}} $ or $ \textbf{\textup{Kind}} $. A \textit{context} $ \Gamma $ is a finite sequence of entries of the form $ x : A $. A \textit{signature} $ \Sigma $ is a finite sequence of entries of the form $ c : A $ (constant declarations) or $ c : A := M $ (definitions). It can be useful to split the signature into a global signature $ \Sigma $ and a local signature $ \Delta $ defined on top of the global one. The global signature holds the definition of the object logic we are working in, whereas the local one holds axioms and definitions inside the logic. 
For instance, when working with natural numbers in predicate logic we would have $ \land : Prop \to Prop \to Prop \in \Sigma $, as $ \land $ is in the definition of predicate logic, but $ + : Nat \to Nat \to Nat \in \Delta$, given that the natural numbers and addition are not part of predicate logic, but can be defined on top of it. The main difference between \textsc{Dedukti}{} and the $ \lambda \Pi $-calculus is that we also consider a set $ \mathscr{R} $ of \textit{rewrite rules}, which are pairs of the form $ c~l_{1}...l_{k} \xhookrightarrow{} r$ where $ l_{1},...,l_{k},r $ are terms. Given a signature $\Sigma,\Delta$, we also consider the $ \delta $ rules allowing for the unfolding of definitions: we have $ c \xhookrightarrow{} M \in \delta$ for each $ c :A := M \in \Sigma,\Delta $. We then denote by $ \xhookrightarrow{}_\mathscr{R} $ the closure by context and substitution of $ \mathscr{R} $, and by $ \xhookrightarrow{}_\delta $ the closure by context of $ \delta $. Finally, we write $\xhookrightarrow{}_{\beta\mathscr{R}\delta}$ for $\xhookrightarrow{}_\beta \cup \xhookrightarrow{}_\mathscr{R} \cup \xhookrightarrow{}_\delta$ and $ \equiv_{\beta\mathscr{R}\delta} $ for its reflexive, symmetric and transitive closure. Rewriting allows us to define equality by computation, but not all equalities can be defined like this in a well-behaved way, e.g. the commutativity of some operator. Therefore, we also consider \textit{rewriting modulo equations} \cite{blanqui03rta}. If $\mathscr{E}$ is a set of pairs of Dedukti terms (written as $M \approx N$), we write $\simeq_{\mathscr{E}}$ for its congruence closure -- that is, its reflexive, symmetric and transitive closure by context and substitution.
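To make this concrete, consider the following small example (illustrative only, not part of a theory used later in the paper): addition on natural numbers is computed by rewrite rules, while its commutativity, which cannot be oriented into a well-behaved rule, is kept as an equation.

```latex
% Illustrative fragment: computation rules for + go into R,
% while commutativity is declared as an equation in E.
\begin{align*}
0 + y &\xhookrightarrow{} y          && \in \mathscr{R}\\
(S~x) + y &\xhookrightarrow{} S~(x + y) && \in \mathscr{R}\\
x + y &\approx y + x                 && \in \mathscr{E}
\end{align*}
```

With this setup we have, for instance, $(S~0) + x \equiv x + (S~0)$: the left-hand side rewrites with $\xhookrightarrow{}_{\mathscr{R}}$ to $S~x$, and the right-hand side is related by $\simeq_{\mathscr{E}}$ to $(S~0) + x$, so both sides are $\equiv$-related to $S~x$.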
Because $\mathscr{R}$ and $\mathscr{E}$ are usually kept fixed, in the following we write $\xhookrightarrow{}$ for $\xhookrightarrow{}_{\beta\mathscr{R}\delta}$, $\simeq$ for $\simeq_{\mathscr{E}}$ and $\equiv$ for the reflexive, symmetric and transitive closure of $\xhookrightarrow{}\cup\simeq_{\mathscr{E}}$. One very important notion that we will use in this work is that of a \textit{theory}, which is a triple $ (\Sigma, \mathscr{R}, \mathscr{E}) $ where $ \Sigma $ is a global signature and all constants appearing in $ \mathscr{R} $ and $\mathscr{E}$ are declared in $ \Sigma $. Theories are used to define in \textsc{Dedukti}{} the object logics in which we work (for instance, predicate logic). The typing rules for \textsc{Dedukti}{} are given in \tolong{Figure \ref{typing-dk}}\toshort{Appendix \ref{sec:dktyping}, along with some basic metaproperties that we use in the subsequent proofs}. We remark in particular that the conversion rule of the system allows one to exchange types which are equivalent modulo $ \equiv $, which can use not only $\beta$ but also $\delta$, $\mathscr{R}$ and $\mathscr{E}$. \tolong{ We recall the following basic metaproperties of \textsc{Dedukti}{}. Proofs can be found in \cite{frederic-phd, saillard15phd}. \begin{theorem}[Basic metaproperties] ~ \begin{enumerate} \item Weakening: If $ \Sigma;\Gamma \vdash M : A $, $ \Gamma \subseteq \Gamma' $ and $ \Sigma;\Gamma'~\textup{\texttt{well-formed}} $ then $ \Sigma;\Gamma' \vdash M : A $ \item Substitution Lemma: If $ \Sigma; \Gamma,x:B,\Gamma' \vdash M : A $ and $ \Sigma;\Gamma \vdash N : B $ then $ \Sigma;\Gamma,\Gamma'\{N/x\}\vdash M\{N/x\} : A\{N/x\} $ \item Well-sortedness: If $ \Sigma;\Gamma \vdash M : A $ then either $ A = \textbf{\textup{Kind}} $ or $ \Sigma;\Gamma \vdash A : \textup{\textbf{s}} $ for $ \textbf{\textup{s}} = \textbf{\textup{Type}}$ or $ \textbf{\textup{Kind}} $.
\item Subject reduction of $ \delta $: If $ \Sigma;\Gamma \vdash M : A $ and $ M \xhookrightarrow{}_\delta M' $ then $ \Sigma;\Gamma \vdash M' : A $ \item Subject reduction of $ \beta $: If injectivity of dependent products holds, then $ \Sigma;\Gamma \vdash M : A $ and $ M \xhookrightarrow{}_\beta M' $ implies $ \Sigma;\Gamma \vdash M' : A $. \item Contexts are well typed: If $ x : A \in \Gamma $ then $ \Sigma;\Gamma \vdash A : \textbf{\textup{Type}} $ \item Signatures are well typed: If $ c : A \in \Sigma$ then $ \Sigma;- \vdash A : \textup{\textbf{s}}$ and if $ c : A := M \in \Sigma $ then $ \Sigma;-\vdash M : A$ \item Inversion of typing: Suppose $ \Sigma;\Gamma \vdash M : A $ \begin{itemize} \item If $ M = x $ then $ x : A' \in \Gamma $ and $ A \equiv A' $ \item If $ M = c $ then $ c : A' \in \Sigma $ and $ A \equiv A' $ \item If $ M = \textbf{\textup{Type}} $ then $ A \equiv \textbf{\textup{Kind}} $ \item $ M= \textbf{\textup{Kind}} $ is impossible \item If $ M = \Pi x : A_1. A_2 $ then $ \Sigma; \Gamma,x:A_1 \vdash A_2 : \textbf{\textup{s}} $ and $ \textbf{\textup{s}} \equiv A $ \item If $ M = M_1 M_2 $ then $ \Sigma; \Gamma \vdash M_1 : \Pi x: A_1.A_2 $, $ \Sigma;\Gamma \vdash M_2 : A_1 $ and $ A_2\{M_2/x\} \equiv A $ \item If $ M = \lambda x : B. N $ then $ \Sigma; \Gamma, x:B \vdash C:\textbf{\textup{s}} $, $ \Sigma;\Gamma,x:B \vdash N:C $ and $ A \equiv \Pi x:B.C $ \end{itemize} \item Weak permutation: If $\Sigma; \Gamma, x :A, y : B, \Gamma' \vdash M : C$ and $\Gamma \vdash B : \textbf{\textup{Type}} $ then $\Sigma; \Gamma, y : B, x :A,\Gamma' \vdash M : C$. \end{enumerate} \end{theorem} } \subsection{Defining Pure Type Systems in Dedukti} \label{subsec:pts} We briefly review how Pure Type Systems (PTSs) \cite{pts} can be defined in \textsc{Dedukti}{} \cite{dowek2007} (other approaches also exist, such as \cite{felicissimo:LIPIcs.FSCD.2022.25}), as we will need this in the rest of the article.
Recall that in PTSs, universes and function types can be specified by a set $ \mathcal{S} $ of universes, and two relations $ \mathcal{A}\subseteq\mathcal{S}^2$ and $ \mathcal{R}\subseteq\mathcal{S}^3$ -- which we suppose to be functional relations here, as is usually the case. These specify that, if $ (s_1,s_2) \in \mathcal{A} $, then $ s_1 $ is of type $ s_2 $, and if $ (s_1,s_2,s_3) \in \mathcal{R} $ then when $ A : s_1$ and $ B :s_2 $ we have $ \Pi x : A. B : s_3 $. Given a PTS specification $ (\mathcal{S},\mathcal{A},\mathcal{R}) $, we can define the corresponding PTS with a \textsc{Dedukti}{} theory in the following manner. We first start with the definition of universes. For each universe $ s \in \mathcal{S} $, we declare a \textsc{Dedukti}{} type $ U_s : \textbf{\textup{Type}}$ holding the types in the universe $ s $. We then also declare a function symbol $ El_s : U_s \to \textbf{\textup{Type}} $ mapping each member of $ U_s $ to the type of its elements. We might see the elements of $ U_s $ as the codes for the types in $ s $, and $ El_s $ as the decoding function, mapping a code to its true type. In order to represent the fact that a universe $ s_1 $ is a member of $ s_2 $ when $ (s_1,s_2) \in \mathcal{A} $ we add the constant $ u_{s_1} : U_{s_2} $. However, now the universe $ s_1 $ is represented both by $El_{s_2}~u_{s_1}$ and $U_{s_1}$. Therefore, we add the rewrite rule $ El_{s_2}~u_{s_1} \xhookrightarrow{} U_{s_1} $, stating that $ u_{s_1} $ decodes to $ U_{s_1} $. Finally, to define dependent functions, for each $ (s_1,s_2,s_3) \in \mathcal{R} $ we add a symbol $ \pi_{s_1,s_2} : \Pi A:U_{s_1}. (El_{s_1}~A \to U_{s_2}) \to U_{s_3} $. Intuitively, the type $ El_{s_3}~(\pi_{s_1,s_2}~A~(\lambda x.B)) $ should hold the functions from $ x:El_{s_1}~A $ to $ El_{s_2}~B $, where $ x $ might occur in $ B $. To make this representation explicit, we add a rewrite rule $ El_{s_3}~(\pi_{s_1,s_2}~A~B) \xhookrightarrow{} \Pi x : El_{s_1}~A. El_{s_2}~(B~x) $.
Because the type of functions from $ x:A $ to $ B $ is now represented by the framework's function type, the framework's abstraction and application can be used to represent the ones of the encoded system. In the following, we allow ourselves to write $ \pi_{s_1,s_2}~A~(\lambda x. B) $ informally as $ \pi_{s_1,s_2}~x : A. B $ in order to improve clarity. When $ x \not \in FV(B) $, we might also write $ A \leadsto_{s_1,s_2} B $. \begin{figure} \noindent\parbox{.5\textwidth}{ \begin{align*} &U_s : \textbf{\textup{Type}} &\text{for }s \in \mathcal{S}\\ &El_s :U_s \to \textbf{\textup{Type}} &\text{for }s \in \mathcal{S} \end{align*}} \parbox{.5\textwidth}{ \begin{align*} &u_{s_1} : U_{s_2}&\text{for } (s_1,s_2) \in \mathcal{A}\\ &El_{s_2}~u_{s_1} \xhookrightarrow{} U_{s_1}&\text{for } (s_1,s_2) \in \mathcal{A} \end{align*}} \vspace{-2.3em} \begin{align*} &\pi_{s_1,s_2} : \Pi A : U_{s_1}. (El_{s_1}~A \to U_{s_2}) \to U_{s_3} &\text{for }(s_1,s_2,s_3) \in \mathcal{R}\\ &El_{s_3}~(\pi_{s_1,s_2}~A~B) \xhookrightarrow{} \Pi x : El_{s_1}~A. El_{s_2}~(B~x)&\text{for }(s_1,s_2,s_3) \in \mathcal{R} \end{align*}\vspace{-2em} \caption{The \textsc{Dedukti}{} theory which defines the PTS specified by $ (\mathcal{S} , \mathcal{A}, \mathcal{R})$} \end{figure} \section{An informal look at the challenges of proof predicativization} \label{sec:firstlook} In this informal section we present the problem of proof predicativization and discuss the challenges that arise through the use of examples. Even though the examples might be unrealistic, they showcase real problems we found during our first predicativization attempt, of Fermat's little theorem library in HOL \cite{sttfa} -- some of them being already noted in \cite{delort:hal-02985530}. We first start by defining the theories \textbf{I} and \textbf{P}, which we will use to represent the core logics of impredicative and predicative proof assistants. 
These theories are defined as Pure Type Systems as explained in Subsection \ref{subsec:pts} and are described by the specifications below. Remember that a universe $ s $ is said to be impredicative when it is closed under dependent products indexed by some bigger sort, that is, for some $ s' $ with $ (s, s') \in \mathcal{A} $ we have $ (s', s, s) \in \mathcal{R} $. Therefore, $ \textbf{I} $ is an impredicative system and $ \textbf{P} $ is a predicative one. \noindent\parbox{.5\textwidth}{ \begin{align*} \mathcal{S}_{\boldsymbol{I}} &= \{*, \square\}\\ \mathcal{A}_{\boldsymbol{I}} &= \{(*, \square)\} \\ \mathcal{R}_{\boldsymbol{I}} &= \{(*, *, *), (\square, *, *), (\square,\square,\square)\} \end{align*} } \parbox{.5\textwidth}{ \begin{align*} \mathcal{S}_{\boldsymbol{P}} &= \mathbb{N} \\ \mathcal{A}_{\boldsymbol{P}} &= \{(n, n+1) \mid n \in \mathbb{N} \} \\ \mathcal{R}_{\boldsymbol{P}} &= \{(n, m, \max\{n,m\}) \mid n,m \in \mathbb{N}\} \end{align*} } In this setting, the problem of proof predicativization consists in defining a transformation that, given a local signature $ \Delta $ with $ \Sigma_I, \Delta;-~\texttt{well-formed} $, translates it into a local signature $ \Delta' $ with $ \Sigma_P,\Delta';-~\texttt{well-formed} $. Stated informally, we would like to translate constants (which represent axioms) and definitions (which also represent proofs) from \textbf{I} into \textbf{P}. Note in particular that such a transformation is not applied to a single term but to a sequence of constants and definitions, which can be related by dependency -- this dependency turns out to be a major issue as we will see. In the following we represent the local signature $ \Delta $ in a more readable way as a list of entries $\textbf{constant}~c : A$ and $ \textbf{definition}~c : A := M $. Now that our basic notions are explained, let us dive into proof predicativization.
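The impredicativity criterion just recalled is easy to check mechanically. The following Python sketch (hypothetical code for illustration only, working on finite fragments of the specifications) tests it on $\textbf{I}$ and on an initial segment of $\textbf{P}$:

```python
# Sketch (illustration only) of the impredicativity criterion above:
# a universe s is impredicative when, for some s' with (s, s') in A,
# we have (s', s, s) in R.
def is_impredicative(s, axioms, rules):
    return any(a == s and (b, s, s) in rules for (a, b) in axioms)

# Finite encodings of the specification of I and of a fragment of P.
A_I = {("star", "box")}
R_I = {("star", "star", "star"), ("box", "star", "star"), ("box", "box", "box")}

def A_P(n):   # axioms (k, k+1) for k < n
    return {(k, k + 1) for k in range(n)}

def R_P(n):   # rules (k, m, max(k, m)) for k, m < n
    return {(k, m, max(k, m)) for k in range(n) for m in range(n)}
```

As expected, $*$ is impredicative in \textbf{I} because of the rule $(\square,*,*)$, while no level of the tested fragment of \textbf{P} is.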
For our first step, consider a very simple development showing that for every type $ P $ in $ * $ we can build an element of $ P \leadsto_{*,*} P $ -- if $ * $ is a universe of propositions, then this is just a proof that each proposition in $ * $ implies itself. \begin{flalign*} &\textbf{definition}~thm_1 : El_{*}~(\pi_{\square,*}~P :u_{*}.P \leadsto_{*,*}P) := \lambda P : U_{*}. \lambda p : El_*~P. p& \end{flalign*}% To translate this simple development, the first idea that comes to mind is to define a mapping on universes: the universe $ * $ is mapped to $ 0 $ and the universe $ \square $ is mapped to $ 1 $. However, because our syntax in Dedukti is heavily annotated, we should only apply this map to the constants $u$ and $U$, which represent the universes, and then try to recalculate the annotations of the other constants $El$ and $ \pi$ (remember that $\leadsto$ is just an alias for $\pi$). This would then yield the following local signature, which is indeed valid in \textbf{P}. \begin{flalign*} &\textbf{definition}~thm_1 : El_{1}~(\pi_{1,0}~P :u_{0}.P \leadsto_{0,0} P) := \lambda P : U_{0}. \lambda p : El_0~P. p& \end{flalign*}% This naive approach however quickly fails when considering other cases. For instance, suppose now that one adds the following definition -- once again, if $ *$ is a universe of propositions, then this is just a proof of the proposition $ (\forall P . P \Rightarrow P) \Rightarrow \forall P . P \Rightarrow P $. \begin{flalign*} &\textbf{definition}~thm_3 : El_*~((\pi_{\square,*}~P:u_*. P \leadsto_{*,*}P) \leadsto_{*,*} \pi_{\square,*}~P:u_*. P \leadsto_{*,*}P) \\ &\hspace{5em} := thm_1~(\pi_{\square,*}~P:u_*. P \leadsto_{*,*}P)& \end{flalign*}% If we try to perform the same syntactic translation as before, we get the following result: \begin{flalign*} &\textbf{definition}~thm_3 : El_1~((\pi_{1,0}~P:u_0. P \leadsto_{0,0}P) \leadsto_{1,1} \pi_{1,0}~P:u_0. P \leadsto_{0,0}P) \\&\hspace{5em}:= thm_1~(\pi_{1,0}~P:u_0. 
P \leadsto_{0,0}P)& \end{flalign*}% However, one can verify that this term is not well-typed. Indeed, in the original term one quantifies over all types in $ * $ in the term $ \pi_{\square,*}~P:u_*. P \leadsto_{*,*}P $, and because of impredicativity this term stays at $ * $. However, in \textbf{P} quantifying over all elements of the universe $ 0 $ in $ \pi_{1,0}~P:u_0. P \leadsto_{0,0}P $ raises the type to the universe $ 1 $. As $ thm_1 $ expects a term in the universe $ 0 $, the term $ thm_1~(\pi_{1,0}~P:u_0. P \leadsto_{0,0}P) $ is not well-typed. This suggests that impredicativity introduces a kind of \textit{typical ambiguity}, as it allows us to hide in a single universe $ * $ all kinds of bigger types which would have to be placed in bigger universes in a predicative setting. Hence, in order to handle cases like this one, which arise frequently in practice, we should not translate every occurrence of $ * $ as $ 0 $ naively as we did, but try to compute for each occurrence of $ * $ some natural number $ i $ such that replacing it by $ i $ would produce a valid result. Thankfully, performing this kind of transformation is exactly the goal of \textsc{Universo} \cite{thire}. This tool allows one to transport typing derivations between two PTS specifications. To understand how this works, let us come back to the previous example. \textsc{Universo}{} starts here by replacing each sort occurrence by a fresh metavariable $ l_k $ representing a natural number. \begin{flalign*} &\textbf{definition}~thm_1 : El_{l_1}~(\pi_{l_2,l_3}~P:u_{l_4}.P \leadsto_{l_5,l_6} P) := \lambda P : U_{l_7}. \lambda p : El_{l_8}~P. p&\\ &\textbf{definition}~thm_3 : El_{l_{9}}~((\pi_{{l_{10}},{l_{11}}}~P:u_{l_{12}}. P \leadsto_{{l_{13}},{l_{14}}} P) \leadsto_{{l_{15}},{l_{16}}} \pi_{{l_{17}},{l_{18}}}~P:~u_{l_{19}}.P \leadsto_{{l_{20}},{l_{21}}} P) \\ &\hspace{5em} := thm_1~(\pi_{{l_{22}},{l_{23}}}~P:u_{l_{24}}.
P \leadsto_{{l_{25}},{l_{26}}} P)& \end{flalign*}% These of course are not valid proofs in \textbf{P}, but in the following step \textsc{Universo}{} typechecks this development and generates constraints in the process. These constraints are then given to an SMT solver, which is used to compute for each metavariable $ l $ a natural number so that the local signature is valid in \textbf{P}. For instance, applying \textsc{Universo}{} to our previous example would produce the following valid local signature in \textbf{P}. {\begin{flalign*} &\textbf{definition}~thm_1 : El_{2}~(\pi_{2,1}~P:u_{1}. P \leadsto_{1,1} P) := \lambda P : U_{1}. \lambda p : El_{1}~P. p&\\ &\textbf{definition}~thm_3 : El_{1}~((\pi_{1,0}~P:u_{0}.P \leadsto_{0,0} P) \leadsto_{1,1} \pi_{1,0}~P:u_{0}. P \leadsto_{0,0} P) \\&\hspace{5em}:= thm_1~(\pi_{1,0}~P:u_{0}. P \leadsto_{0,0} P)& \end{flalign*}}% By using \textsc{Universo} it is possible to go much further than with the naive syntactic translation. Still, this approach also fails when employed on real libraries. To see the reason, consider the following minimal example, in which one uses an element of $ \pi_{\square,*}~P:u_*.P \leadsto_{*,*} P $ twice to build another element of the same type. {\begin{flalign*} &\textbf{definition}~thm_1 : El_*~(\pi_{\square,*}~P:u_*. P \leadsto_{*,*} P) := \lambda P : U_*. \lambda p : El_*~P. p&\\ &\textbf{definition}~thm_2 : El_*~(\pi_{\square,*}~P:u_*. P \leadsto_{*,*} P) := thm_1~(\pi_{\square,*}~P:u_*. P \leadsto_{*,*} P)~thm_1& \end{flalign*}}% If we repeat the same procedure as before, we get the following local signature, which generates unsolvable constraints. {\begin{flalign*} &\textbf{definition}~thm_1 : El_{l_{1}}~(\pi_{l_{2},l_{3}}~P:u_{l_{4}}. P \leadsto_{l_{5},l_{6}} P) := \lambda P : U_{l_{7}}. \lambda p : El_{l_{8}}~P. p&\\ &\textbf{definition}~thm_2 : El_{l_{9}}~(\pi_{l_{10},l_{11}}~P:u_{l_{12}}.P \leadsto_{l_{13},l_{14}} P) := thm_1~(\pi_{l_{15},l_{16}}~P:u_{l_{17}}.
P \leadsto_{l_{18},l_{19}} P)~thm_1& \end{flalign*}}% This happens because the application $ thm_1~(\pi_{l_{15},l_{16}}~P:u_{l_{17}}. P \leadsto_{l_{18},l_{19}} P)~thm_1 $ forces $ l_4 $ to be both $ l_{17} $ and $ l_{17} +1 $, which is impossible. This example suggests that impredicativity hides not only the fact that types are stratified, but also the fact that they can be used at any level of this stratification. For instance, in our example we would like to use $ thm_1 $ once with $ l_4 = l_{17} $ and another time with $ l_4 = l_{17}+1 $. In general, when trying to translate libraries using \textsc{Universo} we found that at very early stages a translated proof or object was already needed at multiple universes at the same time, causing the translation to fail. Therefore, in order to properly compensate for the lack of impredicativity, our solution uses \textit{universe polymorphism}, a type theory feature (also present in \textsc{Agda}) that allows defining terms that can later be used at multiple universes \cite{typechecking-with-universes,coq}. Our translation works by trying to compute for each definition or declaration its most general universe polymorphic type, and using it when translating the subsequent declarations or definitions. To understand how this is done precisely, let us first introduce universe polymorphism, which is the subject of the following section. \section{A Universe-Polymorphic Predicative Type System} \label{sec:upp} In this section, we define the Universe-Polymorphic Predicative Type System (or just \textbf{UPP}{}), which enriches the Predicative PTS \textbf{P} with prenex universe polymorphism \cite{typechecking-with-universes,guillaume}. This is in particular a subsystem of the one underlying the \textsc{Agda} proof assistant \cite{agda}.
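The unsolvability of the constraints on $l_4$ and $l_{17}$ can be checked concretely. The sketch below (hypothetical code, with levels represented as a variable plus an offset of successors) searches for an assignment by brute force; for this particular pair of constraints the bound is immaterial, since the two equations contradict each other for every value:

```python
from itertools import product

# A level expression is "variable + offset": ("l17", 1) stands for l17 + 1.
def eval_level(expr, env):
    var, offset = expr
    return env[var] + offset

# Brute-force search for a satisfying assignment (values up to `bound`);
# returns an environment, or None when no assignment is found.
def solve(constraints, variables, bound=8):
    for values in product(range(bound + 1), repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(eval_level(a, env) == eval_level(b, env) for a, b in constraints):
            return env
    return None

# The two constraints forced on thm_1's level: l4 = l17 and l4 = l17 + 1.
unsat = [(("l4", 0), ("l17", 0)), (("l4", 0), ("l17", 1))]
```

Dropping either of the two constraints immediately makes the problem solvable, which is exactly what universe polymorphism achieves by allowing a different instantiation at each use site.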
As usual, we define this system as a \textsc{Dedukti}{} theory $ \textbf{UPP}=(\Sigma_{UPP},\mathscr{R}_{UPP},\mathscr{E}_{UPP}) $. The main change with respect to \textbf{P} is that, instead of indexing the constants $ El_s, U_s, u_s, \pi_{s_1,s_2} $ externally, we index them inside the framework \cite{assaf}. To do this, we first introduce a syntax for \textit{universe levels} inside \textsc{Dedukti}{} by the following grammar \[ l, l' ::= i \in \mathcal{I} \mid \textup{\texttt{z}} \mid \textup{\texttt{s}}~l \mid l \sqcup l' \]where the constants $\textup{\texttt{z}}, \textup{\texttt{s}}$ and $\sqcup$ are defined below and $\mathcal{I}\subsetneq \mathcal{V}$ is a set of level variables. We also enforce that level variables can only be substituted by other levels. \noindent\parbox{.36\textwidth}{ \begin{align*} &Level : \textbf{\textup{Type}}\\ &\textup{\texttt{z}} : Level \end{align*}} \parbox{.5\textwidth}{ \begin{align*} &\textup{\texttt{s}} : Level \to Level\\ &\sqcup : Level \to Level \to Level\quad\text{(written infix)} \end{align*}} The definitions in the theory \textbf{P} of $ El_s, U_s, u_s, \pi_{s_1,s_2} $ and the related rewrite rules are then replaced by the following ones.\footnote{Note that in the following rewrite rules we do not need to require $i'$ to be equal or convertible to $\textup{\texttt{s}}~i$ or $i_{A}\sqcup i_{B}$, given that, for well-typed instances of the rule, this is ensured by typing \cite{blanqui05mscs, assaf, saillard2015type, blanqui2020type}.} \noindent\parbox{.36\textwidth}{ \begin{align*} &U : Level \to \textbf{\textup{Type}}\\ &El : \Pi i : Level. U~i \to \textbf{\textup{Type}}\\ &u : \Pi i : Level. U~(\textup{\texttt{s}}~i) \end{align*}} \parbox{.5\textwidth}{ \begin{align*} &\pi : \Pi (i_A~i_B : Level)~(A : U~i_A). (El~i_A~A \to U~i_B) \to U~(i_A \sqcup i_B)\\ &El~i'~(u~i) \xhookrightarrow{} U~i\\ &El~i'~(\pi~i_A~i_B~A~B) \xhookrightarrow{} \Pi x : El~i_A~A.
El~i_B~(B~x) \end{align*}} However, we still allow ourselves to write $ El_l, U_l, u_l,\pi_{l,l'} $ in order to improve clarity. We also reuse the previous convention to write $ \pi_{l,l'}~A~(\lambda x.B) $ as $ \pi_{l,l'}~x:A.B $, or even $ A \leadsto_{l,l'} B $ when $ x \not \in FV(B) $. Now, (prenex) universe polymorphism can be represented directly with the use of the framework's function type \cite{assaf}. Indeed, if a definition contains free level variables, it can be made universe polymorphic by abstracting over such variables. The following example illustrates this. \begin{example} The universe polymorphic identity function is given by \[ id = \lambda (i : Level). \lambda (A : U_i). \lambda (a : El_i~A). a \] which has type $ \Pi i : Level. El_{(\textup{\texttt{s}}~i)}~(\pi_{ (\textup{\texttt{s}}~i),i}~A : u_i. A \leadsto_{i,i}A)$. This then allows $ id $ to be used at any universe level: for instance, we can obtain the identity function at level $ \textup{\texttt{z}}$ with the application $ id~\textup{\texttt{z}}$, which has type $ El_{(\textup{\texttt{s}}~\textup{\texttt{z}})}~(\pi_{(\textup{\texttt{s}}~\textup{\texttt{z}}),\textup{\texttt{z}}}~A :u_\textup{\texttt{z}}. A \leadsto_{\textup{\texttt{z}},\textup{\texttt{z}}}A) $. \end{example} Finally, to complete our definition we need to specify the definitional equality satisfied by levels, which is the one generated by the following equations \cite{agda}. Note that, as stated before, we enforce that $i, i_{1},i_{2},i_{3} \in \mathcal{I}$ can only be replaced by other levels.
\begin{align*} &i_{1}\sqcup (i_{2} \sqcup i_{3}) \approx (i_{1}\sqcup i_{2}) \sqcup i_{3} &&\textup{\texttt{s}}~(i_{1} \sqcup i_{2}) \approx \textup{\texttt{s}}~i_{1} \sqcup \textup{\texttt{s}}~i_{2} &&i \sqcup \textup{\texttt{z}} \approx i\\ &i_{1}\sqcup i_{2}\approx i_{2} \sqcup i_{1} &&i \sqcup \textup{\texttt{s}}~i \approx \textup{\texttt{s}}~i &&i \sqcup i \approx i \end{align*} This definition is justified by the following property. Given a function $\sigma : \mathcal{I} \to \mathbb{N}$, define the interpretation $\trans{l}_{\sigma}$ of a level $ l $ by interpreting the symbols $ \textup{\texttt{z}}, \textup{\texttt{s}} $ and $ \sqcup $ as zero, successor and max, and by interpreting each variable $i$ by $\sigma(i)$. \begin{proposition}\label{semanticlvl} We have $l_{1} \simeq l_{2}$ iff $\trans{l_{1}}_{\sigma} = \trans{l_{2}}_{\sigma}$ holds for all $\sigma$. \end{proposition} \tolong{ \begin{proof} Note that for each $l \approx l' \in \mathcal{E}_{UPP}$ we have $\trans{l}_{\sigma}=\trans{l'}_{\sigma}$ for all $\sigma$, and thus the direction $\Rightarrow$ can be shown by an easy induction on $l_{1}\simeq l_{2}$. For the other direction, suppose that we have $\trans{l_{1}}_{\sigma} = \trans{l_{2}}_{\sigma}$ for all $\sigma$, and let us show $l_{1}\simeq l_{2}$. First note that we can show $l \sqcup \textup{\texttt{s}}^{n}~l \simeq \textup{\texttt{s}}^{n}~l$ for all $n, l$ by induction on $n$. Using this identity and the others in $\mathcal{E}_{UPP}$, we can show that any level $l$ is related by $\simeq$ to a level $\hat{l}$ of the form $\textup{\texttt{s}}^{k}~\textup{\texttt{z}}\sqcup\textup{\texttt{s}}^{n_{i_{1}}}~i_{1}\sqcup ... \sqcup \textup{\texttt{s}}^{n_{i_{p}}}~i_{p}$, where $i_{1}...i_{p}$ are pairwise distinct variables and $n_{i_{m}} \leq k$ for all $m = 1,...,p$. By doing this for $l_{1}$ and $l_{2}$, we get $l_{1} \simeq \hat{l_{1}}$ and $l_{2} \simeq \hat{l_{2}}$, and thus $\trans{\hat{l_{1}}}_{\sigma}=\trans{\hat{l_{2}}}_{\sigma}$ for all $\sigma$.
By varying $\sigma$ over suitable functions we can show that their normal forms are equal up to reordering, and thus are identified by $\simeq$. \end{proof}} This also shows that our definition of $\simeq$ agrees with the one used in other works about universe levels \cite{guillaume, gaspard, blanqui22fscd}. The following basic properties show that $\xhookrightarrow{}$ and $\simeq$ interact well. \begin{proposition}~ \begin{enumerate} \item $\xhookrightarrow{}$ is confluent \item If $M \simeq N \xhookrightarrow{} N'$ then, for some $M'$, we have $M \xhookrightarrow{} M' \simeq N'$. \item If $M \equiv N$ then $M \reds M' \simeq N' \invreds N$. \end{enumerate} \end{proposition} \tolong{ \begin{proof} \begin{enumerate} \item Follows from the fact that our rewrite rules define an orthogonal combinatory rewrite system \cite{CRS}. \item By induction on the rewrite context of $N \xhookrightarrow{} N'$. The induction steps are easy, we only show the base cases. \begin{enumerate} \item If $N = El~j~(u~i) \xhookrightarrow{} U~i$, then we have $M = El~j'~(u~i')$ with $j \simeq j'$ and $i \simeq i'$. Therefore, $El~j'~(u~i')\xhookrightarrow{} U~i' \simeq U~i$. \item If $N = El~j~(\pi~i_{1}~i_{2}~A~B) \xhookrightarrow{} \Pi x : El~i_{1}~A.El~i_{2}~(B~x)$, then we have $M = El~j'~(\pi~i'_{1}~i'_{2}~A'~B')$, with $P \simeq P'$ for $P=j,i_{1},i_{2},A,B$. Therefore $El~j'~(\pi~i'_{1}~i'_{2}~A'~B') \xhookrightarrow{} \Pi x : El~i'_{1}~A'.El~i'_{2}~(B'~x) \simeq \Pi x : El~i_{1}~A.El~i_{2}~(B~x)$. \item If $N$ is a $\beta$-redex, we have two possibilities. \begin{enumerate} \item $N = (\lambda x . P_{1}) P_{2} \xhookrightarrow{} P_{1}\{P_{2}/x\}$ with $x \not \in \mathcal{I}$. Then $M = (\lambda x. P_{1}')P_{2}'$ with $P_{1} \simeq P_{1}'$ and $P_{2}\simeq P_{2}'$, and we can show $P_{1}\{P_{2}/x\} \simeq P'_{1}\{P'_{2}/x\}$. Therefore $(\lambda x. P_{1}')P_{2}' \xhookrightarrow{} P_{1}'\{P_{2}'/x\} \simeq P_{1}\{P_{2}/x\}$. \item $N = (\lambda i. 
P) l \xhookrightarrow{} P\{l/i\}$ where $i \in \mathcal{I}$. Therefore $M = (\lambda i. P')l'$ with $P \simeq P'$ and $l \simeq l'$. Moreover, because $i$ is a level variable, and we suppose that only levels can be substituted for level variables, $l$ must be a level, and so must $l'$. Using this, we can show $P\{l/i\}\simeq P'\{l'/i\}$. Therefore, $(\lambda i. P')l' \xhookrightarrow{} P'\{l'/i\} \simeq P\{l/i\}$. \end{enumerate} \end{enumerate} \item If $M \equiv N$, then we have $M(\simeq(\xhookrightarrow{} \cup \xhookleftarrow{})^{*})^{n}N$ for some $n$. We prove the result by induction on $n$, the base case being trivial. For the inductive step, we have \[M (\simeq(\xhookrightarrow{} \cup \xhookleftarrow{})^{*})^{n}P(\simeq(\xhookrightarrow{} \cup \xhookleftarrow{})^{*}) N\] for some $P$. By confluence of $\xhookrightarrow{}$ we have $P \simeq ~ \reds \invreds N$ and thus by iterating $(2)$ we get $P \reds ~\simeq ~\invreds N$. By IH we have $M \reds ~ \simeq ~ \invreds P $. Then, joining the rewrite sequences gives $M \reds~\simeq~\invreds\reds~\simeq~\invreds N$. By confluence again we have $M \reds~\simeq~\reds\invreds~\simeq~\invreds N$. Now it suffices to iterate $(2)$ once again to conclude. \qedhere \end{enumerate} \end{proof} } Using the third property, one can apply known techniques to show that $\xhookrightarrow{}$ satisfies subject reduction \cite{blanqui2020type, saillard15phd} (this can also be automatically verified using \textsc{DkCheck} or \textsc{Lambdapi}). \begin{proposition} If $\Sigma_{UPP}, \Delta; \Gamma \vdash M : A$ and $M \xhookrightarrow{} M'$ then $\Sigma_{UPP}, \Delta;\Gamma \vdash M' : A$. \end{proposition} The third property is also very important from a practical perspective: it shows that in order to check $M \equiv N$ one does not need to use matching modulo $\simeq$. \section{The algorithm} \label{sec:alg} We are now ready to define the (partial) translation of a local signature $ \Delta $ to the theory $ \textbf{UPP} $.
The idea of the translation is that we traverse the signature $ \Delta $ and at each step we try to compute the most general universe polymorphic version of a definition or constant. The result of a previously translated definition or declaration can then be used at multiple levels for translating entries occurring later in the signature. In order to understand all the following steps intuitively, we will make use of a running example. \begin{example} The last example of Section \ref{sec:firstlook} corresponds to the local signature {\begin{align*} \Delta_I=&~thm_1 : El_*~(\pi_{\square,*}~P:u_*. P \leadsto_{*,*} P) := \lambda P : U_*. \lambda p : El_*~P. p~,&\\ &~thm_2 : El_*~(\pi_{\square,*}~P:u_*. P \leadsto_{*,*} P) := thm_1~(\pi_{\square,*}~P:u_*. P \leadsto_{*,*} P)~thm_1& \end{align*}}% which is well-formed in the theory $ \textbf{I} $. Let us suppose that the first entry of the signature has already been translated, giving the following signature $ \Delta_{thm_1} $. {\begin{align*} \Delta_{thm_1}=&~thm_1 : \Pi i : Level.El_{(\textup{\texttt{s}}~i)}~(\pi_{(\textup{\texttt{s}}~i),i}~P:u_i. P \leadsto_{i,i} P) := \lambda i:Level.\lambda P : U_i. \lambda p : El_i~P. p \end{align*}}% Therefore, as a running example, we will translate step by step the second entry $ thm_2 $. \end{example} Let us start with some basic auxiliary definitions. Given a local signature $ \Delta $ such that $ \Sigma_{UPP},\Delta;-~\texttt{well-formed} $ and a constant $ c $ occurring in $ \Delta $, let us define $ \textsc{Arity}(c) $ as the greatest natural number $ k $ such that the type of $ c $ is of the form $ \Pi i_1~..~i_k:Level. A $. Informally, it is the number of level arguments that this constant expects. For instance, we have $ \textsc{Arity}(thm_1)=1 $. Using this function, let us define $ \textsc{InsertMetas}(M) $, by the following equations. This function allows us to insert the fresh level variables that will be used to compute the constraints. 
We suppose that the inserted variables come from a dedicated subset of level variables $ \mathcal{M} \subsetneq \mathcal{I} $ and that each inserted variable is fresh. \noindent{\small\parbox{.4\textwidth}{ \begin{align*} &\textsc{InsertMetas}(El_s) = El_i\\ &\textsc{InsertMetas}(U_s) = U_i \end{align*}} \parbox{.5\textwidth}{ \begin{align*} &\textsc{InsertMetas}(u_s) = u_i \\ &\textsc{InsertMetas}(\pi_{s_1,s_2}) = \pi_{i,j} \end{align*}} \vspace{-1.5em} \begin{align*} &\textsc{InsertMetas}(c) = c~i_1 ... i_k \text{ where }k = \textsc{Arity}(c) \text{ and } c \neq El_s,U_s,u_s,\pi_{s_1,s_2}\\ &\textsc{InsertMetas}(M) = M \text{ if }M \text{ is a variable } x \text{ or } \textbf{\textup{Type}} \text{ or } \textbf{\textup{Kind}}\\ &\textsc{InsertMetas}(\Pi x : A . B) = \Pi x : \textsc{InsertMetas}(A). \textsc{InsertMetas}(B)\\ &\textsc{InsertMetas}(\lambda x : A . M) = \lambda x : \textsc{InsertMetas}(A). \textsc{InsertMetas}(M)\\ &\textsc{InsertMetas}(M N) = \textsc{InsertMetas}(M)~\textsc{InsertMetas}(N) \end{align*}}% \begin{example} By applying $ \textsc{InsertMetas} $ to the type and body of $ thm_2 $ we get {\small\begin{align*} &\textsc{InsertMetas}(El_*~(\pi_{\square,*}~P:u_*.P \leadsto_{*,*}P)) = El_{i_1}~(\pi_{i_2,i_3}~P:u_{i_4}.P \leadsto_{i_5,i_6} P)\\ &\textsc{InsertMetas}(thm_1~(\pi_{\square,*}~P:u_*.P \leadsto_{*,*} P)~thm_1) = thm_1~i_7~(\pi_{i_8,i_9}~P:u_{i_{10}}. P \leadsto_{i_{11},i_{12}}P)~(thm_1~i_{13}) \end{align*}}% \end{example} \begin{remark} Note that because our first step erases the universes that appear in the terms, this translation is defined for all PTS local signatures, and not only those in \textbf{I}. Therefore, it can be applied to proofs coming from systems featuring much more complex universe hierarchies than \textbf{I}, such as the PTSs underlying the type systems of \textsc{Coq} and \textsc{Matita}. \end{remark} Once the fresh level variables are inserted, the next step is to compute the constraints between levels.
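A possible implementation of $ \textsc{InsertMetas} $ as a recursive traversal is sketched below (hypothetical Python code with an ad hoc first-order term representation; not the actual implementation):

```python
import itertools

def insert_metas(term, arity):
    """Replace every universe annotation by a fresh metavariable, and apply
    each constant c to Arity(c) fresh level metavariables.
    Terms are tuples: ("var", x) | ("const", c) | ("U", s) | ("u", s)
      | ("El", s, a) | ("pi", s1, s2, A, B) | ("app", M, N)
      | ("lam", x, A, M) | ("prod", x, A, B)."""
    fresh = (f"?l{n}" for n in itertools.count(1))   # fresh metavariable supply

    def go(t):
        tag = t[0]
        if tag == "var":
            return t
        if tag == "const":                           # c  ~>  c i1 ... ik
            out = t
            for _ in range(arity.get(t[1], 0)):
                out = ("app", out, ("lvl", next(fresh)))
            return out
        if tag in ("U", "u"):                        # one annotation each
            return (tag, next(fresh))
        if tag == "El":
            return ("El", next(fresh), go(t[2]))
        if tag == "pi":                              # two annotations
            return ("pi", next(fresh), next(fresh), go(t[3]), go(t[4]))
        if tag == "app":
            return ("app", go(t[1]), go(t[2]))
        if tag in ("lam", "prod"):
            return (tag, t[1], go(t[2]), go(t[3]))
        raise ValueError(f"unknown term tag: {tag}")

    return go(term)
```

For instance, applied to a representation of $ thm_1~(u_*) $ with $ \textsc{Arity}(thm_1)=1 $, the traversal first applies $ thm_1 $ to a fresh level metavariable and then replaces the annotation of $ u_* $ by another one.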
To do this, we use an approach similar to \cite{typechecking-with-universes} and define a bidirectional type checking/inference algorithm. Figures \ref{cstr-conv} and \ref{cstr-type} define rules for computing constraints required for two terms to be convertible or for a term to be typable, respectively. In these rules, we write $ \hat{M} $ for the weak head normal form of $ M $ when it exists --- thus $\hat{(-)}$ is a partial function, but becomes total if we suppose the termination of $\xhookrightarrow{}$. As usual, $ M \Rightarrow A $ denotes type inference, whereas $ M \Leftarrow A $ denotes type checking. We also write $ M \Rightarrow_{sort} \textbf{s} $ or $ M \Rightarrow_\Pi \Pi x : A. B$ as a shorthand for $ M \Rightarrow A' $ and $ \hat{A'}=\textbf{s} $ or $ \hat{A'}=\Pi x: A. B $ respectively. \begin{figure}[ht] {\small \begin{center} \end{center} \begin{center} \AxiomC{$ l, l'~Level $} \UnaryInfC{$l \teq^? l' \downarrow \{l = l'\}$} \DisplayProof \hskip 1.5em \AxiomC{$ M = x, c, \textbf{\textup{Type}}, \textbf{\textup{Kind}}$} \UnaryInfC{$M \teq^? M \downarrow \emptyset$} \DisplayProof \hskip 1.5em \AxiomC{$ M \teq^? M' \downarrow C_1$} \AxiomC{$ \hat{N} \teq^? \hat{N'} \downarrow C_2$} \BinaryInfC{$M N \teq^? M' N' \downarrow C_1 \cup C_2$} \DisplayProof \end{center} \begin{center} \AxiomC{$ \hat{A} \teq^? \hat{A'} \downarrow C_1$} \AxiomC{$ \hat{B} \teq^? \hat{B'} \downarrow C_2$} \BinaryInfC{$ \Pi x: A. B \teq^? \Pi x : A'. B' \downarrow C_1 \cup C_2$} \DisplayProof \hskip 0.7em \AxiomC{$ \hat{A} \teq^? \hat{A'} \downarrow C_1$} \AxiomC{$ \hat{M} \teq^? \hat{M'} \downarrow C_2$} \BinaryInfC{$\lambda x : A. M \teq^? \lambda x : A'. M' \downarrow C_1 \cup C_2$} \DisplayProof \end{center}} \vspace{-1em} \caption{Inference rules for computing constraints for two terms in whnf to be convertible} \label{cstr-conv} \end{figure} Intuitively, these judgments define a conditional typing relation that depends on the constraints being satisfied. 
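As an illustration, the constraint-collection reading of the rules in Figure \ref{cstr-conv} can be sketched as follows (hypothetical code; terms are assumed to be already in weak head normal form, so $\hat{(-)}$ is the identity here, and $\alpha$-renaming of bound variables is ignored):

```python
def whnf(t):
    # Placeholder: we assume terms are already in weak head normal form.
    return t

def conv_constraints(m, n):
    """Collect the set of level equations required for m and n (in whnf)
    to be convertible, following the rules of the convertibility figure.
    Terms: ("lvl", i) | ("var", x) | ("const", c) | ("Type",) | ("Kind",)
         | ("app", M, N) | ("prod", x, A, B) | ("lam", x, A, M)."""
    if m[0] == "lvl" and n[0] == "lvl":              # l =? l'  ->  {l = l'}
        return {(m[1], n[1])}
    if m[0] in ("var", "const", "Type", "Kind") and m == n:
        return set()                                 # identical atoms: no constraint
    if m[0] == "app" and n[0] == "app":              # heads as-is, arguments in whnf
        return conv_constraints(m[1], n[1]) | conv_constraints(whnf(m[2]), whnf(n[2]))
    if m[0] in ("prod", "lam") and n[0] == m[0] and m[1] == n[1]:
        return (conv_constraints(whnf(m[2]), whnf(n[2]))
                | conv_constraints(whnf(m[3]), whnf(n[3])))
    raise ValueError("terms are not convertible for any level assignment")
```

On the pair $ El_{i_1}~A \teq^? El_{i_2}~A $ (encoded with applications), the collected set is the single equation $ i_1 = i_2 $, as prescribed by the rules.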
This intuition is formalized by the following results. \begin{definition} Given a level substitution $ \theta $ (sending level variables to levels) and a set of constraints $ C $, containing pairs of levels $ l = l' $, we write $ \theta \vDash C $ when $ l\theta \lvleq l' \theta \text{ for all } l = l' \in C $. \end{definition} \begin{lemma} If $ M \teq^? N \downarrow C $ and $ \theta \vDash C $ then $ M \theta \teq N\theta $. \end{lemma} \tolong{\begin{proof} By induction on $M \teq^{?} N \downarrow C$. All cases are similar; we treat the application case as an example. By IH we have $M \theta \equiv M' \theta$ and $\hat{N}\theta \equiv \hat{N'} \theta$. Because $N \reds \hat{N}$ and $N' \reds \hat{N'}$, we also have $N\theta \reds \hat{N}\theta$ and $N'\theta \reds \hat{N'}\theta$, and thus $N \theta \equiv \hat{N}\theta \equiv \hat{N'}\theta \equiv N' \theta$. Therefore, $(M N)\theta = (M \theta) (N \theta) \equiv (M' \theta) (N' \theta)=(M'N')\theta$. \end{proof}} \begin{figure} {\footnotesize \begin{center} \AxiomC{$ c : A := M \in \Sigma_{UPP},\Delta \text{ or } c : A \in \Sigma_{UPP},\Delta $} \RightLabel{\textsc{Cons}} \UnaryInfC{$\Sigma_{UPP},\Delta;\Gamma \vdash c \Rightarrow A \downarrow \emptyset $} \DisplayProof \hskip 1.5em \AxiomC{$ x : A \in \Gamma $} \RightLabel{\textsc{Var}} \UnaryInfC{$\Sigma_{UPP},\Delta;\Gamma \vdash x \Rightarrow A\downarrow \emptyset $} \DisplayProof \end{center} \begin{center} \AxiomC{$i \in \mathcal{M}$} \RightLabel{\textsc{Lvl-Var}} \UnaryInfC{$\Sigma_{UPP},\Delta;\Gamma \vdash i \Rightarrow Level\downarrow \emptyset $} \DisplayProof \hskip 1.5em \AxiomC{} \RightLabel{\textsc{Sort}} \UnaryInfC{$\Sigma_{UPP},\Delta;\Gamma \vdash \textbf{\textup{Type}} \Rightarrow\textbf{\textup{Kind}}\downarrow \emptyset$} \DisplayProof \end{center} \begin{center} \AxiomC{$\Sigma_{UPP},\Delta;\Gamma \vdash A \Leftarrow \textbf{\textup{Type}} \downarrow C_1$} \AxiomC{$\Sigma_{UPP},\Delta;\Gamma, x : A \vdash B \Rightarrow_{sort} \textbf{s}
\downarrow C_2$} \RightLabel{\textsc{Prod}} \BinaryInfC{$\Sigma_{UPP},\Delta;\Gamma \vdash \Pi x : A. B \Rightarrow \textbf{s} \downarrow C_1 \cup C_2$} \DisplayProof \end{center} \begin{center} \AxiomC{$\Sigma_{UPP},\Delta;\Gamma \vdash A \Leftarrow \textbf{\textup{Type}} \downarrow C_1$} \AxiomC{$\Sigma_{UPP},\Delta;\Gamma , x : A \vdash M \Rightarrow B \downarrow C_3$} \AxiomC{$\Sigma_{UPP},\Delta;\Gamma ,x : A \vdash B \Rightarrow_{sort} \textbf{s}\downarrow C_2$} \RightLabel{\textsc{Abs}} \TrinaryInfC{$\Sigma_{UPP},\Delta;\Gamma \vdash \lambda x : A. M \Rightarrow \Pi x : A. B \downarrow C_1 \cup C_2 \cup C_3$} \DisplayProof \end{center} \begin{center} \AxiomC{$\Sigma_{UPP},\Delta;\Gamma \vdash M \Rightarrow_\Pi \Pi x: A.B \downarrow C_1$} \AxiomC{$\Sigma_{UPP},\Delta;\Gamma \vdash N \Leftarrow A \downarrow C_2$} \RightLabel{\textsc{App}} \BinaryInfC{$\Sigma_{UPP},\Delta;\Gamma \vdash M N \Rightarrow B\{N/x\} \downarrow C_1 \cup C_2$} \DisplayProof \end{center} \begin{center} \AxiomC{$\Sigma_{UPP},\Delta;\Gamma \vdash M \Rightarrow A \downarrow C_1$} \AxiomC{$\hat{A} \teq^? \hat{B} \downarrow C_2$} \RightLabel{\textsc{Check}} \BinaryInfC{$\Sigma_{UPP},\Delta;\Gamma \vdash M \Leftarrow B \downarrow C_1 \cup C_2$} \DisplayProof \end{center}} \vspace{-1em} \caption{Inference rules for computing constraints for a term to be typable} \label{cstr-type} \end{figure} Let us write $ \vec{i}_X $ for the free level variables in $ X $. We also shorten $ \vec{i} : Level $ as $ \vec{i} $. \begin{lemma}\label{cstr-calc} Given a level substitution $ \theta $, suppose $ \Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta},\Gamma\theta~\textup{\texttt{well-formed}} $. 
\begin{itemize} \item If $\Sigma_{UPP},\Delta;\Gamma\vdash M \Rightarrow A \downarrow C $ and $ \theta \vDash C $ then $ \Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup\vec{i}_{M\theta}\cup\vec{i}_{A\theta},\Gamma\theta\vdash M\theta : A\theta $ \item If $\Sigma_{UPP},\Delta;\Gamma\vdash M \Leftarrow A \downarrow C $, $ \theta \vDash C $ and $ \Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup\vec{i}_{A\theta},\Gamma\theta \vdash A\theta : \textbf{\textup{s}} $ then we have $ \Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup\vec{i}_{M\theta}\cup\vec{i}_{A\theta},\Gamma\theta\vdash M\theta : A\theta $ \end{itemize} \end{lemma} \tolong{\begin{proof} By induction on the derivation, the cases \textsc{Var}, \textsc{Cons}, \textsc{Lvl-Var}, and \textsc{Sort} being easy. \textbf{Case Prod}: We can show $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta},\Gamma\theta \vdash \textbf{\textup{Type}} : \textbf{\textup{Kind}}$, therefore by the IH on the first premise we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta},\Gamma\theta \vdash A \theta : \textbf{\textup{Type}}$. Using this, we can also derive $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta},\Gamma\theta, A\theta~\textup{\texttt{well-formed}}$. Therefore, by IH on the second premise we have $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta}\cup\vec{i}_{T \theta},\Gamma\theta, A\theta \vdash B \theta : T \theta$, where $T \xhookrightarrow{}^{*} \textbf{s}$. Therefore, we also have $T\theta \xhookrightarrow{}^{*} \textbf{s}$, and thus we can derive $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta}\cup\vec{i}_{T \theta},\Gamma\theta, A\theta \vdash B \theta : \textbf{s}$. 
By applying the substitution lemma with the substitution sending all $i \in \vec{i}_{T\theta}\setminus (\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta})$ to $\textup{\texttt{z}}$, we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta},\Gamma\theta, A\theta \vdash B \theta : \textbf{s}$. By weakening on $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta},\Gamma\theta \vdash A \theta : \textbf{\textup{Type}}$, we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup \vec{i}_{B \theta},\Gamma\theta \vdash A \theta : \textbf{\textup{Type}}$. Hence, it suffices to apply \texttt{Prod} to conclude $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup \vec{i}_{B \theta},\Gamma\theta \vdash \Pi x : A\theta. B \theta : \textbf{s}$. \textbf{Case Abs}: We can show $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta},\Gamma\theta \vdash \textbf{\textup{Type}} : \textbf{\textup{Kind}}$, therefore by the IH on the first premise we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta},\Gamma\theta \vdash A \theta : \textbf{\textup{Type}}$. Using this, we can also derive $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta},\Gamma\theta, A\theta~\textup{\texttt{well-formed}}$. Therefore, by IH on the second premise we have (1) $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{B\theta},\Gamma\theta, A\theta \vdash M \theta :B\theta$. By IH now on the third premise we have $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta}\cup\vec{i}_{T \theta},\Gamma\theta, A\theta \vdash B \theta : T \theta$, where $T \xhookrightarrow{}^{*} \textbf{s}$. 
Therefore, we also have $T\theta \xhookrightarrow{}^{*} \textbf{s}$, and thus we can derive $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta}\cup\vec{i}_{T \theta},\Gamma\theta, A\theta \vdash B \theta : \textbf{s}$. By applying the substitution lemma with the substitution sending all $i \in \vec{i}_{T\theta}\setminus (\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta})$ to $\textup{\texttt{z}}$, we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta},\Gamma\theta, A\theta \vdash B \theta : \textbf{s}$. By applying weakening, we can derive (2) $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta}\cup \vec{i}_{M\theta},\Gamma\theta, A\theta \vdash B \theta : \textbf{s}$. Using (1) and (2), we apply \texttt{Abs} to conclude $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{A \theta}\cup\vec{i}_{B \theta}\cup \vec{i}_{M\theta},\Gamma\theta \vdash \lambda x : A \theta. M\theta:\Pi x : A \theta. B \theta$. \textbf{Case App}: By IH on the first premise, we have $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{T\theta},\Gamma\theta \vdash M\theta : T \theta$, where $T \xhookrightarrow{}^{*} \Pi x : A. B$. We thus also have $T \theta \reds \Pi x : A \theta. B \theta$, and thus by well-sortedness, subject reduction and the rule \texttt{Conv} we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{T\theta},\Gamma\theta \vdash M\theta : \Pi x :A \theta. B\theta$. By applying the substitution lemma with the substitution sending all $i \in \vec{i}_{T\theta}\setminus (\vec{i}_{\Gamma\theta}\cup\vec{i}_{M \theta}\cup\vec{i}_{A \theta}\cup\vec{i}_{B \theta})$ to $\textup{\texttt{z}}$, we get (1) $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{A\theta}\cup \vec{i}_{B\theta},\Gamma\theta \vdash M\theta : \Pi x :A \theta. B\theta$. 
By well-sortedness and inversion, we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{A\theta}\cup \vec{i}_{B\theta},\Gamma\theta \vdash A \theta : \textbf{\textup{Type}}$. By applying the substitution lemma with the substitution sending all $i \in (\vec{i}_{B\theta}\cup\vec{i}_{M\theta})\setminus (\vec{i}_{\Gamma \theta}\cup\vec{i}_{A \theta})$ to $\textup{\texttt{z}}$, we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{A\theta},\Gamma\theta \vdash A \theta : \textbf{\textup{Type}}$. Hence, we can apply the IH to the second premise, which gives (2) $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{N\theta}\cup \vec{i}_{A\theta},\Gamma\theta \vdash N \theta :A \theta$. By applying weakening to (1) and (2) we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{N\theta}\cup \vec{i}_{A\theta}\cup \vec{i}_{B\theta},\Gamma\theta \vdash M\theta : \Pi x :A \theta. B\theta$ and $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{N\theta}\cup \vec{i}_{A\theta}\cup \vec{i}_{B\theta},\Gamma\theta \vdash N \theta :A \theta$. By applying the rule \texttt{App} we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{N\theta}\cup \vec{i}_{A\theta}\cup \vec{i}_{B\theta},\Gamma\theta \vdash (M\theta) (N\theta) : B\theta \{N\theta/x\}$. Finally, by applying the substitution lemma with the substitution sending all $i \in \vec{i}_{A\theta}\setminus (\vec{i}_{\Gamma \theta}\cup\vec{i}_{M \theta}\cup\vec{i}_{N \theta}\cup \vec{i}_{B \theta})$ to $\textup{\texttt{z}}$, we conclude $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta}\cup \vec{i}_{N\theta}\cup \vec{i}_{B\theta},\Gamma\theta \vdash (MN) \theta : (B\{N/x\})\theta$. \textbf{Case Check}: By hypothesis, we have (1) $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{B\theta}, \Gamma \theta\vdash B \theta : \textbf{s}$. 
By the IH applied to the first premise, we have (2) $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{M\theta}\cup \vec{i}_{A\theta},\Gamma\theta\vdash M \theta : A \theta$. By applying weakening to (1) and (2) we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta}\cup \vec{i}_{M\theta} \cup \vec{i}_{A\theta}\cup \vec{i}_{B\theta},\Gamma\theta\vdash B \theta : \textbf{s}$ and $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{M\theta}\cup \vec{i}_{A\theta}\cup \vec{i}_{B\theta},\Gamma\theta\vdash M \theta : A \theta$. By Lemma \ref{cstr-calc}, we have $ \hat{A}\theta \equiv\hat{B}\theta$, and thus $A \theta \equiv B \theta$. Hence, by the rule \texttt{Conv} we get $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{M\theta}\cup \vec{i}_{A\theta}\cup \vec{i}_{B\theta},\Gamma\theta\vdash M \theta : B \theta$. Finally, by applying the substitution lemma with the substitution sending all $i \in \vec{i}_{A\theta}\setminus (\vec{i}_{\Gamma \theta}\cup\vec{i}_{M \theta}\cup \vec{i}_{B \theta})$ to $\textup{\texttt{z}}$, we conclude $\Sigma_{UPP},\Delta;\vec{i}_{\Gamma\theta} \cup \vec{i}_{M\theta}\cup \vec{i}_{B\theta},\Gamma\theta\vdash M \theta : B \theta$. \end{proof}} \begin{example} We can use the rules with our running example. We first calculate $\Sigma_{UPP},\Delta_{thm_1};- \vdash El_{i_1}~(\pi_{i_2,i_3}~P:u_{i_4}.P \leadsto_{i_5,i_6}P) \Rightarrow_{sort} \textbf{s} \downarrow C_1$. Therefore, any substitution $ \theta $ with $ \theta \vDash C_1 $ applied to the previous term results in a valid type. We then calculate {\begin{align*} \Sigma_{UPP},\Delta_{thm_1};- \vdash ~&thm_1~i_7~(\pi_{i_8,i_9}~P:u_{i_{10}}. P\leadsto_{i_{11},i_{12}}P)~(thm_1~i_{13}) \\ &\Leftarrow El_{i_1}~(\pi_{i_2,i_3}~P:u_{i_4}. 
P\leadsto_{i_5,i_6}P) \downarrow C_2 \end{align*}}%
This gives $ C_1 \cup C_2 = \{i_8 = \textup{\texttt{s}}~i_{10}, i_{11} = i_{10},i_{12}=i_{10},i_9=i_{10}\sqcup i_{12}, i_8 \sqcup i_9 = i_7, i_{13}=i_{10}, i_1 = i_2\sqcup i_3, \textup{\texttt{s}}~i_4 = i_2, i_5 = i_4, i_6 = i_4, i_3 = i_5\sqcup i_6, i_4 = i_{10}\}$. \end{example} Once the constraints are computed, the next step is to solve them. However, as explained in Section \ref{sec:firstlook}, we do not want a numerical assignment of level variables that satisfies the constraints, but rather a general symbolic solution which allows the term to be instantiated later at different universe levels. This leads us to use equational unification, but as levels are not purely syntactic entities, one needs to devise a unification algorithm for the equational theory $ \lvleq $. For now, let us postpone this to the next section and assume we are given a (partial) function $ \textsc{Unify} $ which computes from a set of constraints $ C $ a unifier $ \theta $ -- that is, a substitution satisfying $ \theta \vDash C $. We do not, however, assume that $ \theta $ is the most general unifier -- as we show later in Theorem \ref{no-mgu}, such a most general unifier might not exist. After a unifier $ \theta $ is found, the final step is to apply it. \begin{example} Given the previously computed constraints $ C_1 \cup C_2 $, we can compute the substitution $ \theta $ which sends all variables to $ i_4 $, except for $ i_1, i_2, i_7,i_8 $, which are sent to $ \textup{\texttt{s}}~i_4 $, and verify that $ \theta \vDash C_1\cup C_2 $. By applying $ \theta $, and by Lemma \ref{cstr-calc}, we have {\begin{align*} \Sigma_{UPP},\Delta_{thm_1};i_4 \vdash ~&thm_1~(\textup{\texttt{s}}~i_4)~(\pi_{(\textup{\texttt{s}}~i_4),i_4}~P:u_{i_{4}}. P\leadsto_{i_{4},i_{4}}P)~(thm_1~i_{4}) \\ &: El_{(\textup{\texttt{s}}~i_4)}~(\pi_{(\textup{\texttt{s}}~i_4),i_4}~P:u_{i_4}.
P\leadsto_{i_4,i_4}P) \end{align*}} Note that in this term, the constant $ thm_1 $ is used at two different universe levels. \end{example} \begin{figure} {\footnotesize \begin{flalign*} |-|=&~ -\\ |\Delta, c : A|= &\text{ let }A' = \textsc{InsertMetas}(A)\\ &\text{ let }C \text{ be such that } \Sigma_{UPP},|\Delta|;-\vdash A' \Rightarrow_{sort} \textbf{s} \downarrow C\text{ else }raise~\bot\text{ if no such }C\\ &\text{ let }\theta = \textsc{Unify}(C)\text{ else }raise~\bot\text{ if no such }\theta\\ &\text{ let }\vec{i} = \vec{i}_{A'\theta}\\ &~|\Delta|, c : \Pi \vec{i} : Level. A'\theta\\ |\Delta, c : A := M|= &\text{ let }A', M' = \textsc{InsertMetas}(A), \textsc{InsertMetas}(M)\\ &\text{ let }C_1 \text{ be such that } \Sigma_{UPP},|\Delta|;-\vdash A' \Rightarrow_{sort} \textbf{s} \downarrow C_1 \text{ else }raise~\bot\text{ if no such }C_1\\ &\text{ let }C_2 \text{ be such that } \Sigma_{UPP},|\Delta|;-\vdash M' \Leftarrow A' \downarrow C_2, \text{ else }raise~\bot\text{ if no such }C_2\\ &\text{ let }\theta = \textsc{Unify}(C_1 \cup C_2)\text{ else }raise~\bot\text{ if no such }\theta\\ &\text{ let }\tau = i \mapsto \text{if}~ i \in \vec{i}_{M'\theta}\setminus \vec{i}_{A'\theta} \text{ then }\textup{\texttt{z}} \text{ else }i \\ &\text{ let }\vec{i} = \vec{i}_{A'\theta}\\ &~|\Delta|, c : \Pi \vec{i} : Level. A'\theta := \lambda \vec{i} : Level.M'\theta\tau \end{flalign*}} \vspace{-2em} \caption{Pseudocode of the predicativization algorithm} \label{predicativize-alg} \end{figure} The final algorithm can now be described by the pseudocode in Figure \ref{predicativize-alg}. The algorithm might fail at any point when either it is not able to compute the constraints, or if the unification algorithm is not capable of inferring a substitution from the constraints. 
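For concreteness, the claim $ \theta \vDash C_1 \cup C_2 $ of the example above can also be checked mechanically. The following sketch (ours, not part of the tool) interprets levels over the natural numbers -- $ \textup{\texttt{z}} $ as $ 0 $, $ \textup{\texttt{s}} $ as successor and $ \sqcup $ as maximum -- applies $ \theta $, and compares both sides of every constraint for several values of $ i_4 $:

```python
# Sanity check (ours): the theta of the example satisfies C1 ∪ C2 under
# the interpretation z = 0, s = successor, ⊔ = max.
def s(l): return ('s', l)
def v(i): return ('v', i)
def lub(a, b): return ('max', a, b)

def ev(l, sigma):
    """Evaluate a level term under a numeric assignment of its variables."""
    if l[0] == 'z': return 0
    if l[0] == 's': return 1 + ev(l[1], sigma)
    if l[0] == 'v': return sigma[l[1]]
    return max(ev(l[1], sigma), ev(l[2], sigma))

# The constraints C1 ∪ C2 computed above (i_n written as the integer n).
C = [(v(8), s(v(10))), (v(11), v(10)), (v(12), v(10)),
     (v(9), lub(v(10), v(12))), (lub(v(8), v(9)), v(7)), (v(13), v(10)),
     (v(1), lub(v(2), v(3))), (s(v(4)), v(2)), (v(5), v(4)),
     (v(6), v(4)), (v(3), lub(v(5), v(6))), (v(4), v(10))]

# theta sends every variable to i4, except i1, i2, i7, i8, sent to s i4.
def subst(l):
    if l[0] == 'v': return s(v(4)) if l[1] in (1, 2, 7, 8) else v(4)
    if l[0] == 's': return s(subst(l[1]))
    return lub(subst(l[1]), subst(l[2]))

ok = all(ev(subst(a), {4: n}) == ev(subst(b), {4: n})
         for a, b in C for n in range(8))
print(ok)  # True
```

Since after substitution both sides are levels in the single variable $ i_4 $, agreement on a range of values is a good sanity check, although it is of course no substitute for the symbolic argument of Lemma \ref{cstr-calc}.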
However, if the algorithm returns, its correctness is guaranteed by the following theorem: \begin{theorem}\label{algcorrect} If $ |\Delta| $ is defined, then $ \Sigma_{UPP},|\Delta|;-~\textup{\texttt{well-formed}}$. \end{theorem} \begin{proof} By induction on $ \Delta $, the base case being trivial. For the induction step, we have either $ \Delta = \Delta'; c : A $ or $ \Delta = \Delta'; c : A := M $. In both cases, if $ |\Delta| $ is defined, then so is $ |\Delta'| $, and thus by induction hypothesis we have $ |\Delta'|;-~\textup{\texttt{well-formed}} $. We proceed with a case analysis on the entry. \textbf{Definition:} As $ \Sigma_{UPP},|\Delta'|; -\vdash A'\Rightarrow T \downarrow C_1 $ and $ \theta \vDash C_1 $, by Lemma \ref{cstr-calc} we get $ \Sigma_{UPP},|\Delta'|;\vec{i}_{A'\theta}\cup\vec{i}_{T\theta} \vdash A'\theta:T\theta $. Because $ \hat{T}= \textbf{s} $, we also have $ T\theta \xhookrightarrow{}^* \textbf{s} $, hence we can derive $ \Sigma_{UPP},|\Delta'|; \vec{i}_{A'\theta}\cup\vec{i}_{T\theta} \vdash A'\theta : \textbf{s} $. By applying the substitution lemma with the substitution sending every variable $ i $ in $ \vec{i}_{T\theta} $ but not in $ \vec{i}_{A'\theta}$ to $ \textup{\texttt{z}} $, we get $\Sigma_{UPP},|\Delta'|; \vec{i}_{A'\theta} \vdash A'\theta : \textbf{s}$. Because we also have $ \Sigma_{UPP},|\Delta'|;-\vdash M' \Leftarrow A' \downarrow C_2 $ and $ \theta \vDash C_2 $, by Lemma \ref{cstr-calc} again we get $ \Sigma_{UPP},|\Delta'|;\vec{i}_{M'\theta}\cup\vec{i}_{A'\theta}\vdash M'\theta:A'\theta $. By applying the substitution lemma with the substitution $\tau$, we get $\Sigma_{UPP},|\Delta'|;\vec{i}\vdash M'\theta\tau:A'\theta $, where $\vec{i}:= \vec{i}_{A'\theta}$. Finally, by abstracting each free level variable, we get $ \Sigma_{UPP},|\Delta'|;-\vdash \lambda \vec{i} : Level.M'\theta\tau : \Pi\vec{i}: Level.A'\theta $. 
Hence, we can derive $ \Sigma_{UPP},|\Delta'|,c:\Pi \vec{i} : Level.A'\theta := \lambda \vec{i}:Level.M'\theta\tau;-~\textup{\texttt{well-formed}}$. \textbf{Constant:} Similar to the previous case.\qedhere \end{proof} \begin{remark} One could also wonder whether the algorithm always terminates and produces a result (be it a valid signature or $ \bot $). By supposing strong normalization for \textbf{UPP}, and by checking at each step of the rules in Figure \ref{cstr-type} that the constraints are consistent, one could show termination of the algorithm by using a technique similar to the one in \cite{typechecking-with-universes}. As we do not investigate strong normalization of \textbf{UPP} in this paper, we leave termination of the algorithm for future work. However, as we will see in Section \ref{sec:matita}, when using it in practice we were able to translate many proofs without non-termination issues. \end{remark} Our algorithm relies on an unspecified function $ \textsc{Unify} $ in order to solve the constraints. In order to fully specify it, we thus present a unification algorithm for $ \lvleq $ in the next section. As we will discuss, the unification algorithm we propose is not guaranteed to always find a most general unifier whenever there is one. However, note that our predicativization algorithm can in principle be used with any unification algorithm for $ \lvleq $. Therefore, if we obtain a better unification algorithm in the future, we do not have to modify the algorithm of Figure \ref{predicativize-alg}. \section{Solving level constraints} \label{sec:solving} Before addressing the problem of how to solve level constraints, the first natural question that comes to mind is whether one can always find a most general unifier (mgu) when the constraints are solvable. The following result answers this negatively. \begin{theorem}\label{no-mgu} Not all solvable problems of unification modulo $ \lvleq $ over levels have most general unifiers.
\end{theorem} \begin{proof} Consider the equation $ \textup{\texttt{s}}~i_1 = i_2 \sqcup i_3 $, which is a solvable unification problem, and suppose it had a mgu $ \theta $. Note that $ \theta_1 = i_1 \mapsto \textup{\texttt{z}}, i_2 \mapsto \textup{\texttt{s}}~\textup{\texttt{z}}, i_3 \mapsto \textup{\texttt{z}} $ is also a solution, thus there is some $ \tau $ such that $ i_3\theta \tau \lvleq \textup{\texttt{z}} $. Therefore, there can be no occurrence of $ \textup{\texttt{s}} $ in $ i_3\theta $. By taking $ \theta_2 = i_1 \mapsto \textup{\texttt{z}}, i_2 \mapsto \textup{\texttt{z}}, i_3 \mapsto \textup{\texttt{s}}~\textup{\texttt{z}} $ we can show similarly that there can be no occurrence of $ \textup{\texttt{s}} $ in $ i_2\theta $. But by taking the substitution $ \theta' = \_ \mapsto \textup{\texttt{z}}$ mapping all variables to $ \textup{\texttt{z}} $, we get $ (i_2 \sqcup i_3)\theta\theta' \lvleq \textup{\texttt{z}} $, which cannot be equivalent to $ (\textup{\texttt{s}}~i_1)\theta\theta' $. Hence, $ \textup{\texttt{s}}~i_1 = i_2 \sqcup i_3 $ has no mgu. \end{proof} Therefore, no unification algorithm for $ \lvleq $ can always produce a mgu. Hence, our algorithm will produce three kinds of results: either it produces a substitution, in which case it is a mgu; or it produces $ \bot $, in which case there is no solution to the constraints; or it gets stuck on a set of constraints that it cannot handle. Still, it is not guaranteed to compute a mgu whenever there is one, as it may get stuck even when the constraints are solvable. Before presenting the algorithm, the first issue we have to address is the fact that levels can have multiple equivalent representations. It would be convenient if we had a syntactic way to compare them. Thankfully, previous works have already addressed this problem. Let us assume from this point on that level variables in $ \mathcal{I} $ admit a total order $ < $.
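Returning for a moment to Theorem \ref{no-mgu}, its counterexample can be double-checked concretely. The following sketch (ours) uses the usual interpretation of levels as natural numbers, with $ \textup{\texttt{s}} $ read as successor and $ \sqcup $ as maximum:

```python
# Numeric double-check (ours) of the counterexample in the no-mgu theorem:
# levels over the naturals, s read as +1 and ⊔ as max.
def solves(i1, i2, i3):
    return 1 + i1 == max(i2, i3)        # s i1 = i2 ⊔ i3

assert solves(0, 1, 0)                  # theta_1: i2 -> s z, others -> z
assert solves(0, 0, 1)                  # theta_2: i3 -> s z, others -> z

# A candidate mgu would have to map i2 and i3 to s-free levels, but then
# instantiating every variable by z makes the right-hand side evaluate to 0
# while the left-hand side is at least 1:
print(solves(0, 0, 0))                  # False
```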
Given a strictly increasing sequence of level variables $ V = i_1,...,i_k $, and a $ V $-indexed family of levels $ \{l_i\}_{i \in V} $, let $ \sqcup_{i \in V}l_i $ denote the term $ l_1 \sqcup (l_2 \sqcup ... (l_{k-1} \sqcup l_k) ...) $. Moreover, given a natural number $ k $, let $ \textup{\texttt{s}}^{k}~l $ be inductively defined by $ \textup{\texttt{s}}^{0}~l = l $ and $ \textup{\texttt{s}}^{n+1}~l = \textup{\texttt{s}}~(\textup{\texttt{s}}^n~l) $. \begin{definition}[Level normal form] A level is in normal form when it is of the form $\textup{\texttt{s}}^k~\textup{\texttt{z}} \sqcup (\sqcup_{i \in V} \textup{\texttt{s}}^{n_i}~i) $ with $ n_i \leq k $ for all $ i $. \end{definition} Previous works \cite{guillaume,gaspard,blanqui22fscd} have established that for every level $ l $ there is a unique level in normal form, which we refer to as $ \hat{l} $, with $ l \lvleq \hat{l} $ -- see for instance Lemma 6.2.5 of \cite{gaspard}. We will not explicitly describe the algorithm for computing normal forms here, as this has already been thoroughly explained in previous works, such as \cite{guillaume,blanqui22fscd} -- we note nevertheless that this procedure is sketched in the proof of Proposition \ref{semanticlvl}. We also define a notion of normal form for constraints. \begin{definition} A constraint $ l_1 = l_2 $ is said to be in normal form if \begin{enumerate} \item Both $ l_1, l_2 $ are in normal form -- so we write $ l_p = \textup{\texttt{s}}^{k_p}~\textup{\texttt{z}} \sqcup (\sqcup_{i \in V_p} \textup{\texttt{s}}^{n^p_i}~i) $ for $ p = 1, 2 $ \item If $ i \in V_1 \cap V_2 $, then $ n^1_i = n^2_i $ \item At least one of the numbers in $ \{k_1, k_2\}\cup \{n^1_i\}_{i \in V_1}\cup \{n^2_i\}_{i \in V_2} $ is equal to $ 0 $ \end{enumerate} \end{definition} Every constraint can be put in normal form, and for this we can use the algorithm in Figure \ref{normal-cstr}.
From the second line on, we use $ k_p, n^p_i $ to refer to the indices in the normal forms of $ l_1, l_2 $. Moreover, the pseudocode should be read imperatively, in the sense that $ l_1, l_2, V_1, V_2 $ are updated at each step. \begin{figure}[ht] {\footnotesize\begin{align*} &\text{let }l_1, l_2 = \hat{l_1}, \hat{l_2}\\ &\text{for each } i \in V_1\cap V_2\\ &\quad\text{if } n^1_i < n^2_i\text{ then remove } \textup{\texttt{s}}^{n^1_i}~i \text{ from }l_1 \\ &\quad\text{else if } n^1_i > n^2_i\text{ then remove } \textup{\texttt{s}}^{n^2_i}~i \text{ from }l_2 \\ &\text{subtract the minimum value of the set } \{k_1, k_2\}\cup \{n^1_i\}_{i \in V_1}\cup \{n^2_i\}_{i \in V_2}\text{ from all of its elements} \end{align*}} \vspace{-2em} \caption{Imperative algorithm for putting a constraint in normal form} \label{normal-cstr} \end{figure} \newcommand{\cstrnf}[1]{\widehat{#1}} Given a set of constraints $ C $, let $ \cstrnf{C} $ denote the result of putting all constraints of $ C $ in normal form by applying the algorithm of Figure \ref{normal-cstr}. \begin{lemma}\label{cstr-nf} For all substitutions $ \theta $, we have $ \theta \vDash C $ iff $ \theta \vDash \cstrnf{C} $. \end{lemma} \tolong{\begin{proof} It suffices to show that for each step transforming $ l_1 = l_2 $ into $ l_1' = l_2' $, we have $ l_1\theta \lvleq l_2 \theta $ iff $ l_1'\theta \lvleq l_2'\theta $. For the first part, which transforms $ l_1 = l_2$ into $ \hat{l_1} = \hat{l_2} $, because we have $ l_1 \lvleq \hat{l_1}$, $ l_2 \lvleq \hat{l_2} $, we deduce $ l_1\theta \lvleq \hat{l_1}\theta$ and $ l_2\theta \lvleq \hat{l_2}\theta $, from which the result follows. For the second part, it suffices to show that for any $ l_1, l_2, l, n $ with $ n > 0 $, we have $(\star)$ $l_1 \sqcup l \lvleq l_2 \sqcup (\textup{\texttt{s}}^n~l) \iff l_1 \lvleq l_2 \sqcup (\textup{\texttt{s}}^n~l)$.
Note that for any $ p_1, p_2, m, q \in \mathbb{N} $ with $ q >0 $ we have $max\{p_1, m\} = max\{p_2, q + m\} \iff p_1 = max\{p_2, q + m\}$. Indeed, the direction $ \Leftarrow $ is clear, whereas for $ \Rightarrow $ if we had $ m = max\{p_2, q + m\} $ then we would have $ m > m $, contradiction. We then can show $ (\star) $ by applying this fact together with Lemma \ref{semanticlvl}. For the third part, it suffices to note that $ \textup{\texttt{s}}^m~l \sqcup \textup{\texttt{s}}^m~l' \lvleq \textup{\texttt{s}}^m~(l \sqcup l') $ and that $ \textup{\texttt{s}}^m~l \lvleq \textup{\texttt{s}}^m~l' $ iff $ l \lvleq l' $. \end{proof}} By putting constraints in normal form, we can help our unification algorithm to find a solution, as shown by the following example. \begin{example} Consider the constraint $ i\sqcup \textup{\texttt{s}}~(i \sqcup \textup{\texttt{s}}~j) = j \sqcup \textup{\texttt{s}}~(\textup{\texttt{s}}~i) $ -- which, as we will see, cannot be treated by our unification algorithm if it is not normalized first. By first computing the level normal form of each side, we get $ \textup{\texttt{s}}^2~\textup{\texttt{z}} \sqcup \textup{\texttt{s}}~i \sqcup \textup{\texttt{s}}^2~j = \textup{\texttt{s}}^2~\textup{\texttt{z}} \sqcup \textup{\texttt{s}}^2~i \sqcup j $. As both variables appear in both sides, we remove from each of the sides the occurrence with the smaller index, giving $ \textup{\texttt{s}}^2~\textup{\texttt{z}} \sqcup \textup{\texttt{s}}^2~j = \textup{\texttt{s}}^2~\textup{\texttt{z}} \sqcup \textup{\texttt{s}}^2~i$. Finally, as the minimum among all indices is $ 2 $, we subtract this from all of them, giving $ \textup{\texttt{z}} \sqcup j = \textup{\texttt{z}} \sqcup i $ -- a constraint that can be treated by our unification algorithm. \end{example} Write $\mathcal{I}(l)$ for the level variables appearing in $l$. 
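The normalization walk-through of the example above can be reproduced with a short sketch (ours, purely illustrative): we represent a normal form $ \textup{\texttt{s}}^k~\textup{\texttt{z}} \sqcup (\sqcup_{i \in V} \textup{\texttt{s}}^{n_i}~i) $ as a pair $ (k, \{i \mapsto n_i\}) $, where $ k $ is the maximum of the constant part and of all the $ n_i $, which is sound because every level variable denotes a natural number:

```python
# Sketch (ours) of level and constraint normalization.  A normal form
# s^k z ⊔ (⊔_{i∈V} s^{n_i} i) is represented as the pair (k, {i: n_i});
# level terms are ('z',), ('s', l), ('v', x) and ('max', l1, l2).

def nf(l):
    """Level normal form; k is the max of the constant part and all n_i."""
    if l[0] == 'z':
        return (0, {})
    if l[0] == 'v':
        return (0, {l[1]: 0})
    if l[0] == 's':
        k, vs = nf(l[1])
        return (k + 1, {i: n + 1 for i, n in vs.items()})
    k1, v1 = nf(l[1])
    k2, v2 = nf(l[2])
    vs = {i: max(v1.get(i, 0), v2.get(i, 0)) for i in set(v1) | set(v2)}
    return (max([k1, k2] + list(vs.values())), vs)

def nf_constraint(l1, l2):
    """The imperative algorithm of the figure: normalize both sides, drop
    the smaller occurrence of each shared variable, then subtract the
    minimum of all remaining indices."""
    (k1, v1), (k2, v2) = nf(l1), nf(l2)
    for i in set(v1) & set(v2):
        if v1[i] < v2[i]:
            del v1[i]
        elif v1[i] > v2[i]:
            del v2[i]
    m = min([k1, k2] + list(v1.values()) + list(v2.values()))
    return ((k1 - m, {i: n - m for i, n in v1.items()}),
            (k2 - m, {i: n - m for i, n in v2.items()}))

# The example constraint:  i ⊔ s (i ⊔ s j)  =  j ⊔ s (s i)
i, j = ('v', 'i'), ('v', 'j')
lhs = ('max', i, ('s', ('max', i, ('s', j))))
rhs = ('max', j, ('s', ('s', i)))
assert nf(lhs) == (2, {'i': 1, 'j': 2})            # s² z ⊔ s i ⊔ s² j
assert nf(rhs) == (2, {'i': 2, 'j': 0})            # s² z ⊔ s² i ⊔ j
assert nf_constraint(lhs, rhs) == ((0, {'j': 0}),  # z ⊔ j
                                   (0, {'i': 0}))  # z ⊔ i
```

The assertions reproduce exactly the level normal forms and the final normalized constraint $ \textup{\texttt{z}} \sqcup j = \textup{\texttt{z}} \sqcup i $ computed in the example.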
Given a substitution $ \theta $, we define the sets $dom~\theta = \{i \mid i \neq i \theta\}$ and $range~\theta = \cup_{i\in dom~\theta} \mathcal{I}(i\theta)$, and the substitutions $ \widehat{\theta} = \{i \mapsto \widehat{i\theta}\}_{i \in dom~\theta} $, and $ \theta\{l/j\} = \{i \mapsto i\theta\{l/j\}\}_{i \in dom~\theta}$. Finally, we also define $\mathcal{I}(\theta) = range~\theta\cup dom~\theta$. \begin{figure}[ht] {\footnotesize\begin{flalign*} &(\textbf{Trivial}) &\{l = l\} \cup C ; \theta &\leadsto C ; \theta\\ &(\textbf{Orient}) & \{l = l'\} \cup C ; \theta &\leadsto \{l' = l\} \cup C ; \theta&\text{if }l' = \textup{\texttt{z}} \text{ or }\textup{\texttt{z}} \sqcup i \\ &(\textbf{Eliminate 1}) &\{\textup{\texttt{z}} \sqcup i = l\} \cup C ; \theta &\leadsto \cstrnf{C\{l/i\}} ; \widehat{\theta\{l/i\}}, i \mapsto l &\text{if }i \notin l\\ &(\textbf{Eliminate 2}) &\{\textup{\texttt{z}} \sqcup i = l\} \cup C ; \theta &\leadsto &\text{if }\textup{\texttt{s}}^m~i \in l\text{ with }m = 0\\ & & & \hspace{-6em}\text{let }l' = l \{i' / i\} \text{ in }\cstrnf{C\{l'/i\}} ; \widehat{\theta\{l'/i\}}, i \mapsto l' & \text{for some } i'\in \mathcal{I}_{fresh}\setminus \mathcal{I}(i, l,C, \theta) \\ &(\textbf{Decompose}) &\{\textup{\texttt{z}} = \textup{\texttt{z}} \sqcup (\sqcup_{i \in V}~i) \} \cup C ; \theta &\leadsto \{\textup{\texttt{z}} \sqcup i = \textup{\texttt{z}}\}_{i \in V} \cup C; \theta &\\ &(\textbf{Clash}) &\{\textup{\texttt{z}} = l \} \cup C ; \theta &\leadsto \bot &\hspace{-1em}\text{ if } \textup{\texttt{s}}^{n}~i \in l\text{ or } \textup{\texttt{s}}^{n} ~\textup{\texttt{z}} \in l \text{ with } n\neq 0 \end{flalign*}} \vspace{-2em} \caption{Unification algorithm for $ \lvleq $} \label{unification} \end{figure} We are now ready to present the unification algorithm, whose rules are given in Figure \ref{unification}. 
Steps are represented by rules of the form $ C; \theta \leadsto C'; \theta' $, with the pre-conditions that constraints in $ C $ are in normal form, $dom~\theta $ is disjoint from $ range~\theta$, the image of $ \theta $ contains only levels in normal form, and $dom~\theta$ is disjoint from $ \mathcal{I}(C) $ -- these properties are preserved by each step. In rule (Eliminate 2), $\mathcal{I}_{fresh}\subsetneq \mathcal{I}$ is an infinite set of fresh level variables. Finally, it may happen that, for some non-empty sets of constraints $C$, no rule applies. This corresponds to the cases in which our algorithm gets stuck and does not produce a solution. Let us write $ \theta_1\subseteq\theta_2 $ when for all $ i \in dom~\theta_1 $, $ i\theta_1 = i\theta_2 $ and $dom~\theta_{2} \setminus (dom~\theta_{1})\subseteq \mathcal{I}_{fresh}$ -- that is, $\theta_{2}$ extends $\theta_{1}$ only inside $\mathcal{I}_{fresh}$. The following lemma is key in showing the main properties of our algorithm. \begin{lemma}[Key lemma]\label{key} Suppose $ C;\theta \leadsto C';\theta' $. For all $ \tau $, if $ \tau \vDash C $ and $ \tau \lvleq \tau\circ\theta$ then there is a substitution $ \tau' $ with $ \tau \subseteq \tau' $ such that (1) $ \tau' \vDash C' $ and (2) $ \tau' \lvleq \tau'\circ\theta' $. Conversely, for all $ \tau $, if $ \tau \lvleq \tau \circ \theta' $ and $ \tau \vDash C'$, then $ \tau \lvleq \tau \circ \theta $ and $ \tau \vDash C $. \end{lemma} \tolong{\begin{proof} We first start with the following auxiliary lemma: \begin{lemma}\label{aux} Suppose $ i \theta \lvleq l\theta $. Then we have \begin{enumerate} \item $ l'\theta \lvleq l' \{l/i\}\theta $, for all $ l' $ \item $ \theta \vDash C $ iff $ \theta \vDash C\{l/i\} $. \end{enumerate} \end{lemma} \begin{claimproof} Part (1) can be shown by a simple induction on $ l' $. 
For part (2), given $ l_1 = l_2 \in C $, we have $ l_1 \theta \lvleq l_1 \{l/i\}\theta$ and $ l_2 \theta\lvleq l_2 \{l/i\}\theta $, and thus $ l_1 \theta\lvleq l_2 \theta$ iff $ l_1 \{l/i\}\theta \lvleq l_2 \{l/i\}\theta $. \end{claimproof} We now continue with the proof of Lemma \ref{key}. It is done by case analysis on the rule, the cases \textbf{Trivial} and \textbf{Orient} being trivial. In the following, we might use Lemmas \ref{aux} and \ref{cstr-nf} implicitly. \textbf{Eliminate 1}: Suppose $ \tau \vDash \{\textup{\texttt{z}} \sqcup i = l\}\cup C $ and $ \tau \lvleq \tau \circ \theta $, and pose $ \tau' = \tau $. First note that we have $ i\tau \lvleq l\tau $, and thus $ i\tau' \lvleq l\tau' $. \begin{enumerate} \item We have $ \tau \vDash C $, but because $ i\tau \lvleq l\tau $ we get $ \tau \vDash C\{l/i\} $ and thus $ \tau \vDash \cstrnf{C\{l/i\}} $. Because $\tau'=\tau$, this shows (1). \item It suffices to show $j\tau' \simeq j \theta' \tau'$ for $j \in dom~\theta' = dom~\theta \cup \{i\}$. For all $ j \in dom~\theta $, we have $ j\tau'\lvleq j\theta\tau' \lvleq j\theta\{l/i\}\tau' \lvleq \widehat{j\theta\{l/i\}}\tau' = j \theta'\tau'$, showing $j\tau' \simeq j\theta' \tau'$ for $j \in dom~\theta$. For $j = i$, this follows from the fact that $i\tau' \simeq l \tau'$ and $i\theta'=l$. \end{enumerate} Conversely, suppose now $ \tau \vDash \cstrnf{C\{l/i\}} $ and $ \tau \simeq \tau \circ \theta'$. As $ i\theta' = l $, it follows that $ i\tau \lvleq l \tau $, showing $ (\textup{\texttt{z}} \sqcup i)\tau \lvleq l\tau $. Moreover, $ \tau\vDash\cstrnf{C\{l/i\}} $ implies $ \tau \vDash C\{l/i\} $, which then implies $ \tau \vDash C $. Finally, for $ j \in dom~\theta $, $ j\tau \lvleq j\theta'\tau = \widehat{j\theta\{l/i\}} \tau\lvleq j\theta\{l/i\} \tau \simeq j\theta\tau $, showing $\tau \simeq \tau \circ \theta$. \textbf{Eliminate 2}: Suppose $ \tau \vDash \{\textup{\texttt{z}} \sqcup i = l\}\cup C $ and $ \tau \lvleq \tau \circ \theta $. 
Pose $ \tau' = \tau, i' \mapsto i\tau $. First note that $l' \tau' = l\{i'/i\}\tau' \simeq l \tau' \simeq (\textup{\texttt{z}} \sqcup i)\tau'\simeq i\tau'$, and thus $ i\tau' \lvleq l'\tau' $. \begin{enumerate} \item We have $ \tau \vDash C $ and thus $\tau' \vDash C$, but because $ i\tau' \lvleq l'\tau' $ we get $ \tau' \vDash C\{l'/i\} $ and thus $ \tau' \vDash \cstrnf{C\{l'/i\}} $. \item It suffices to show $j\tau' \simeq j \theta' \tau'$ for $j \in dom~\theta' = dom~\theta \cup \{i\}$. For all $ j \in dom~\theta $, we have $ j\tau'\lvleq j\theta\tau' \lvleq j\theta\{l'/i\}\tau' \lvleq \widehat{j\theta\{l'/i\}}\tau' = j \theta'\tau'$, showing $j\tau' \simeq j\theta' \tau'$ for $j \in dom~\theta$. For $j = i$, this follows from the fact that $i\tau' \lvleq l' \tau'$ and $i\theta'=l'$. \end{enumerate} Conversely, suppose now $ \tau \vDash \cstrnf{C\{l'/i\}} $ and $ \tau = \tau \circ \theta'$. Let us first show that $(\textup{\texttt{z}} \sqcup i)\tau \simeq l \tau$. We can decompose $l$ as $l_{0} \sqcup i \simeq l$, where $l_{0}$ does not contain $i$. Because $ i \tau \simeq i \theta'\tau$, we have $ i \tau \simeq l' \tau \simeq l_{0}\tau \sqcup i' \tau $, and thus $(\textup{\texttt{z}} \sqcup i)\tau \simeq l' \tau \simeq l_{0}\tau \sqcup i'\tau \simeq l_{0}\tau \sqcup l_{0}\tau \sqcup i'\tau \simeq l_{0}\tau \sqcup l' \tau \simeq l_{0}\tau \sqcup i \tau \simeq l \tau $, showing $(\textup{\texttt{z}} \sqcup i)\tau \simeq l \tau$. To conclude $\tau \vDash \{\textup{\texttt{z}} \sqcup i = l\}\cup C$ it now suffices to show $\tau \vDash C$, but $ \tau\vDash\cstrnf{C\{l'/i\}} $ implies $ \tau \vDash C\{l'/i\} $, which then implies $ \tau \vDash C $. Finally, for $ j \in dom~\theta $, $ j\tau \lvleq j\theta'\tau = \widehat{j\theta\{l'/i\}} \tau\lvleq j\theta\{l'/i\} \tau \simeq j\theta\tau $, showing $\tau \simeq \tau \circ \theta$. 
\textbf{Decompose}: Suppose $ \tau \vDash \{\textup{\texttt{z}} = \textup{\texttt{z}} \sqcup (\sqcup_{i \in V}i) \} \cup C $ and $ \tau \lvleq \tau \circ \theta $. Pose $ \tau' = \tau $. Point (2) is trivial. For (1), as $ \textup{\texttt{z}} \lvleq \sqcup_{i \in V} i\tau $, we must have $ i\tau \lvleq \textup{\texttt{z}} $ for all $ i \in V $. Thus $ (\textup{\texttt{z}} \sqcup i)\tau' \lvleq \textup{\texttt{z}} \tau'$ for all $ i \in V $. Finally, for all $ l_1 = l_2 \in C $, we have $ l_1\tau' \lvleq l_2 \tau' $ by hypothesis. A symmetric reasoning shows also the converse statement. \end{proof}} It is clear that $ \leadsto$ does not always terminate, given that some rules create constraints and in particular that the rule (Orient) can loop. However, it is easy to check that these created constraints can always be eliminated by applying other rules. Indeed, the constraints created by (Decompose) can be eliminated by using (Eliminate 1), and the constraint created by (Orient) can be eliminated either by (Eliminate 1), (Eliminate 2), (Clash) or (Decompose), whose created constraints are eliminated once again using (Eliminate 1). Let $\leadsto_{0}$ be the relation that packs all of this into a single reduction, which therefore never creates constraints. \begin{lemma}\label{termination} $ \leadsto_0 $ terminates. \end{lemma} \tolong{\begin{proof} Each step of $ \leadsto_0 $ decreases the number of constraints in $C$. \end{proof}} In the following theorems, we suppose that $\mathcal{I}(C) $ and $ \mathcal{I}_{fresh}$ are disjoint. \begin{theorem}\label{nosol} If $ C; id \leadsto_0^* \bot $, then for no $ \theta $ we have $ \theta \vDash C $. \end{theorem} \tolong{\begin{proof} If $ C; id \leadsto_0^* \bot $, then the calculation finishes with (Clash). 
If $ \theta \vDash C $, then by iterating Lemma \ref{key} we get that for some $ \theta' $, $ \textup{\texttt{z}}\theta' \lvleq (\textup{\texttt{s}}^{k}~\textup{\texttt{z}} \sqcup (\sqcup_{i \in V}\textup{\texttt{s}}^{n_i}~i))\theta' $, where $k>0$ or $n_{i}>0$ for some $i$. But for any $ \sigma $ we have $ \trans{\textup{\texttt{z}}\theta'}_\sigma = 0 $ and $ \trans{(\textup{\texttt{s}}^{k}~\textup{\texttt{z}} \sqcup (\sqcup_{i \in V}\textup{\texttt{s}}^{n_i}~i))\theta'}_\sigma > 0 $, contradiction. \end{proof}} \begin{theorem} If $ C; id \leadsto_0^* \emptyset; \theta $, then $ \theta $ is a most general unifier. \end{theorem} \begin{proof} Let $ \tau $ be a unifier. We thus have $ \tau \vDash C$. Moreover, we have $ \tau \lvleq \tau \circ id $. By iterating Lemma \ref{key}, we get a substitution $ \tau' $ such that $ \tau' \lvleq \tau' \circ \theta $ and $\tau \subseteq \tau'$. Because $dom~\tau' \setminus (dom~\tau) \subseteq \mathcal{I}_{fresh}$, which is disjoint from $\mathcal{I}(C)$, we have $ i\tau = i\tau' $ for $ i \in \mathcal{I}(C) $. Hence, for $ i \in \mathcal{I}(C) $ we have $ i\tau \lvleq i\theta\tau' $, showing that $ \tau $ is an instance of $ \theta $. To show that $ \theta $ is a unifier, note that $ \theta = \theta \circ \theta $ and $ \theta \vDash \emptyset $, hence by iterating Lemma \ref{key} in the inverse direction we get $ \theta \vDash C $. \end{proof} We have seen that when the algorithm finishes with $ \emptyset;\theta $, then $ \theta $ is a mgu, and when it finishes with $ \bot $, then there is no solution to the constraints. However, the algorithm can also get stuck on constraints that it does not know how to solve. In practice, it is very unsatisfying for the unification to get stuck, as this means that the whole predicativization algorithm has to halt. Thus, in order to prevent this, in our implementation we extended the unification with heuristics that are \textit{only} applied when none of the presented rules applies. 
Then, whenever the heuristics are applied, the universe polymorphic definition or declaration that is produced might not be the most general one. \section{\textsc{Predicativize}, the implementation} \label{sec:predicativize} In this section we present \textsc{Predicativize}, an implementation of our algorithm. It is publicly available at \url{https://github.com/Deducteam/predicativize/}. Our tool is implemented on top of \textsc{DkCheck}{} \cite{saillard15phd}, a type-checker for \textsc{Dedukti}{}, and thus relies neither on the codebase of \textsc{Agda}{} nor on that of any other proof assistant. Like \textsc{Universo}{} \cite{thire}, our implementation instruments \textsc{DkCheck}{}'s conversion checking in order to implement the constraint computation algorithm described in Section \ref{sec:alg}. Because the currently available type-checkers for \textsc{Dedukti}{} do not implement rewriting modulo for equational theories other than AC (associative commutative), we used Genestier's encoding of levels \cite{guillaume} in order to define the theory \textbf{UPP} in a \textsc{DkCheck}{} file. To see how everything works in practice, we invite the reader to download the code and run \texttt{make running-example}, which translates our running example and produces a \textsc{Dedukti}{} file \texttt{output/running\_example.dk} and an \textsc{Agda}{} file \texttt{agda\_output/running-example.agda}. In order to test the tool with a more realistic example, the reader can also run \texttt{make test\_agda}, which translates a proof of Fermat's little theorem from the \textsc{Dedukti}{} encoding of \textsc{HOL} \cite{sttfa} to \textbf{UPP}{}. In the following, let us go through some important particularities of how the tool works. \vspace{-1em} \paragraph*{User added constraints} As we have seen, our tool tries to compute the most general type for a definition or declaration to be typable.
However, it is not always desirable to have the most general type, as shown by the following example. \begin{example}\label{succ} Consider the local signature {\small\begin{align*} \Delta = Nat : U_\square; zero : El_\square~Nat; succ : El_\square~(Nat\leadsto_{\square,\square}Nat) \end{align*}}%
defining the natural numbers in \textbf{I}. The translation of this signature by our algorithm is {\small\begin{align*} |\Delta| = Nat : \Pi i : Level. U_i; zero : \Pi i: Level.El_i~(Nat~i); succ :\Pi i~j : Level. El_{(i \sqcup j)}~((Nat~i)\leadsto_{i,j} (Nat~j)) \end{align*}}%
However, we would normally like to impose that $ i $ be equal to $ j $ in the type of $ succ $, or even to impose that $ Nat $ not be universe polymorphic. \end{example} In order to solve this problem, we added to \textsc{Predicativize} the possibility for the user to add constraints, in such a way that we can for instance impose $ Nat $ to be in $ U_\textup{\texttt{z}} $, or $ i = j $ in the type of the successor. Adding constraints can also be useful to help the unification algorithm. \vspace{-1em} \paragraph*{Rewrite rules} \label{subsec:rewrite} The algorithm that we presented and proved correct covers two types of entries: definitions and constants. This is enough for translating proofs written in higher-order logic or similar systems, in which every step either states an axiom or makes a definition or proof. However, when dealing with full-fledged type theories, such as those implemented by \textsc{Coq} or \textsc{Matita}, which also feature inductive types, it is customary to use rewrite rules to encode recursion and pattern matching. If we simply ignored these rules when performing the translation, we would run into problems, as the entries that appear afterwards may need those rewrite rules to typecheck. Therefore, our implementation extends the presented algorithm and also translates rewrite rules.
In order to do this, we use \textsc{DkCheck}{}'s subject reduction checker to generate constraints and proceed similarly to the algorithm. Because this feature is still work in progress, this step can require user intervention in some cases. When this happens, the user has to manually add constraints over some symbols to help the translation. \rewriterules{ However, one of the particularities is that we then need to linearize\footnote{That is, replace non-linear occurrences by fresh variables.} the left-hand side of rewrite rules after translating them, as shown by the following example. \begin{example} Suppose we have in \textbf{I} the signature $ \Sigma = f : Type \to Type \to Type, \bot : Prop $ and the rewrite rule $ f~x~x \xhookrightarrow{} Prop $. The translation of $ \Sigma $ gives $ |\Sigma| = f : \forall i j k. Set_i \to Set_j \to Set_k, \bot : \forall i. Set_i $. For the rewrite rule, by replacing the metavariables in the rule, we would get $ f\cdot i_1 \cdot i_2 \cdot i_3~x~x \xhookrightarrow{} Set_{i_4} $. Then, by using \textsc{DkCheck}{}'s subject reduction check to generate constraints, we get $ i_3 = i_4 $, thus leading to the rule $ f\cdot i_1 \cdot i_2 \cdot i_3~x~x \xhookrightarrow{} Set_{i_3} $. Suppose now we would like to translate $ \Sigma; - \vdash \bot : f~Prop~Prop $. To do this, we typecheck $ |\Sigma|;- \vdash \bot\cdot i_1 : f\cdot i_2 \cdot i_3 \cdot i_4~Set_{i_5}~Set_{i_6} $ and collect the constraints. However, because $ Set_{i_5} \neq Set_{i_6} $ the rewrite rule $ f\cdot i_1 \cdot i_2 \cdot i_3~x~x \xhookrightarrow{} Set_{i_3} $ does not apply here, and thus the constraint generation phase fails. However, if we take instead the rule $ f\cdot i_1 \cdot i_2 \cdot i_3~x~y \xhookrightarrow{} Set_{i_3} $ then it is possible to generate the constraints. \end{example} This can sometimes cause an undesired effect.
Because the linearization is performed \textit{after} the rule is translated, the translated rule may no longer preserve typing. In such specific cases, the user then has to manually add constraints over the used symbols, as shown in the following example. \begin{example}\label{cstr-rules} Consider the signature $ \Sigma = \bot : Prop, P : Prop, c : \bot \to P, g : P \to \bot $ with the rule $ g~(c~x) \xhookrightarrow{} x $. By translating this, we get \[ |\Sigma| = \bot : \forall i. Set_i, P : \forall i. Set_i, c : \forall i j. \bot \cdot i \to P \cdot j, g : \forall i j. P \cdot i \to \bot\cdot j \] and $ g\cdot i \cdot j~(c\cdot i'\cdot j'~x) \xhookrightarrow{} x $. However, the translated rewrite rule does not preserve typing anymore. But if in the translation of $ c $ and $ g $ we impose $ i = j $, we get \[ |\Sigma| = \bot : \forall i. Set_i, P : \forall i. Set_i, c : \forall i. \bot \cdot i \to P \cdot i, g : \forall i. P \cdot i \to \bot\cdot i \] Now the rule is translated as $ g\cdot i~(c\cdot i'~x) \xhookrightarrow{} x $, which preserves typing. \end{example} Even though this requires human intervention, among the three kinds of entries (definitions, declarations and rewrite rules), rewrite rules are the least common, and thus this intervention is still feasible for small to medium-sized libraries. To cope with larger libraries, we would like in the future to investigate possible ways to automate this procedure. } \vspace{-1em} \paragraph*{Agda output} \label{subsec:agda} \textsc{Predicativize}{} produces files in \textbf{UPP}{}, which is a subsystem of the encoding of \textsc{Agda}. In order to translate these files to \textsc{Agda}{} itself, we also integrated in \textsc{Predicativize}{} a translator that performs a simple syntactical translation from the \textsc{Agda}{} encoding in \textsc{Dedukti}{} to \textsc{Agda}{}.
For instance, \texttt{make test\_agda\_with\_typecheck} translates Fermat's Little Theorem proof from HOL to \textsc{Agda}{} and typechecks it. \section{Translating Matita's arithmetic library to Agda} \label{sec:matita} \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{diagram.pdf} \end{center} \caption{Diagram representing the translation of \textsc{Matita}'s arithmetic library into \textsc{Agda}} \label{matita} \end{figure} We now discuss how we used \textsc{Predicativize} to translate \textsc{Matita}'s arithmetic library to \textsc{Agda}{}. The translation is summarized in Figure \ref{matita}, where \textbf{DK}[X] stands for the encoding of system X in \textsc{Dedukti}{}. \textsc{Matita}'s arithmetic library was already available in \textsc{Dedukti}{} thanks to \textsc{Krajono} \cite{assaf}, a translator from \textsc{Matita} to the encoding \textbf{DK}[Matita] in \textsc{Dedukti}{}. Therefore, the first step of the translation was already done for us. Then, using \textsc{Predicativize} we translated the library from \textbf{DK}[Matita] to \textbf{DK}[Agda] (which is a supersystem of \textbf{UPP}). As the encoding of \textsc{Matita}'s recursive functions uses rewrite rules, their translation required some user intervention to add constraints over certain symbols, as mentioned in the previous section. Once this step is done, the library is known to be predicative, as it typechecks in \textbf{DK}[Agda]. We then used \textsc{Predicativize} to translate these files to \textsc{Agda}{} files. However, because the rewrite rules in the \textsc{Dedukti}{} encoding cannot be translated to \textsc{Agda}{}, and given that they are needed for typechecking the proofs, the library does not typecheck directly. Therefore, to finish our translation we had to define the inductive types and recursive functions manually in \textsc{Agda}{}. 
This step of our translation admittedly requires some time; however, the effort is orders of magnitude less than rewriting the whole library in \textsc{Agda}{}, especially given that the great majority of the library is made of proofs, whose translations we did not need to change. Note that this manual step is not exclusive to our work, as it is also needed in \cite{sttfa}. Defining inductive types also required us to add constraints. For instance, we saw in Example \ref{succ} that the successor symbol is translated as $ succ :\Pi i~j : Level. El_{(i \sqcup j)}~((Nat~i)\leadsto_{i,j} (Nat~j)) $, but in order to be able to implement this symbol as a constructor of an inductive type, we need to impose $ i = j $. If one then wishes to align $ Nat $ with the built-in type of natural numbers in \textsc{Agda}, one would also have to impose $ i = \textup{\texttt{z}} $, which would then allow us to replace $ Nat $ by the built-in type in the result of the translation. The result of this translation is available at \url{https://github.com/thiagofelicissimo/matita_lib_in_agda} and, as far as we know, contains the very first proofs in \textsc{Agda}{} of Bertrand's Postulate and Fermat's Little Theorem. It also contains a variety of other interesting results such as the Binomial Law, the Chinese Remainder Theorem, and the Pigeonhole Principle. Moreover, this library typechecks with \textsc{Agda}{}'s \texttt{--safe} flag, attesting that it does not use any unsafe features. \section{Conclusion} \label{sec:conc} We have tackled the problem of sharing proofs with predicative systems by proposing an algorithm for it. Our implementation allowed us to translate many non-trivial proofs from \textsc{Matita}'s arithmetic library to \textsc{Agda}{}, showing that our algorithm works well in practice. Our solution uses unification modulo arithmetic equivalence on universe levels. We designed an incomplete algorithm for this problem which is powerful enough for our needs.
Still, one can wonder whether there is an algorithm which always finds a most general solution when there is one, or whether this problem is undecidable. One could improve our algorithm by using ACUI unification to solve constraints not containing $\textup{\texttt{s}}$. However, there are problems admitting most general unifiers that would still not be handled by this extension. \textsc{Agda}{} also features an algorithm for solving level metavariables which uses an approach different from ours, but it does not seem to have been formalized in the literature. Therefore, the question of whether such an algorithm exists seems to be open. For future work, we would also like to look at possible ways of making \textsc{Predicativize} less dependent on user intervention. In particular, the translation of inductive types and recursive functions involves some considerable manual work. Thus, if we want to be able to translate larger libraries, there is definitely a need for automating this step.
\section{Introduction} Throughout this work, we consider $q\in(0,1)$. The $q$-number is defined to be the number of the form \begin{align*} \left[ \alpha \right]_q = \frac{{q^\alpha - 1}}{{q - 1}}, \qquad \text{for any } \alpha \in \mathbb{C}. \end{align*} In particular, if $\alpha =n\in \mathbb{N}$, then the positive $q$-integer is defined to be \begin{align*} \left[n \right]_q = \frac{{q^n - 1}}{{q - 1}} =1+q+q^2+\cdots + q^{n-1}. \end{align*} In special cases, we have $\left[1 \right]_q=1$, $\left[0 \right]_q=0$, and $\left[\infty \right]_q=\frac{1}{1-q}$. We define the $q$-factorial of the number $[n]_q$ and the $q$-binomial coefficient by \begin{align*} \left[ 0 \right]_q ! &= 1, \qquad \left[ n \right]_q ! = \left[ n \right]_q \left[ {n - 1} \right]_q \cdots \left[ 2 \right]_q \cdot \left[ 1 \right]_q \qquad \left[ {\begin{array}{*{20}c} n \\ j \\ \end{array}} \right]_q = \frac{{\left[ n \right]_q !}}{{\left[ j \right]_q !\left[ {n - j} \right]_q !}} \end{align*} with the convention that \begin{align*} \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array}} \right]_q&=1,\qquad \left[ {\begin{array}{*{20}c} 0 \\ j \\ \end{array}} \right]_q=0, \,\,\forall j\ge1. \end{align*} The $q$-Pochhammer symbol is defined to be \begin{align*} \left( {x - a} \right)_q^n = \prod\limits_{j = 0}^{n - 1} {\left( {x - q^j a} \right)},\,\,\,\text{with}\,\, \left( {x - a} \right)_q^{(0)}=1,\,\,\,\text{and}\,\, \left( {x - a} \right)_q^{ - n} = \frac{1}{{\left( {x - q^{ - n} a} \right)_q^n }}. \end{align*} This formula plays an important role in combinatorics. For instance, for the expression $\left( {1 + x} \right)_q^n$ this formula makes sense even as $n=\infty$: \begin{align*} \left( {1 + x} \right)_q^\infty = \prod\limits_{j = 0}^{\infty} {\left( {1 + q^j x} \right)}. \end{align*} The above infinite product converges if $q\in \left(0,1\right)$.
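For concreteness, these $q$-quantities are easy to compute numerically. The following Python sketch is ours and purely illustrative; it implements the $q$-number, the $q$-factorial, the $q$-binomial coefficient, and the finite $q$-Pochhammer symbol:

```python
def q_number(n, q):
    """[n]_q = (q**n - 1)/(q - 1) = 1 + q + ... + q**(n-1)."""
    return (q**n - 1) / (q - 1)

def q_factorial(n, q):
    """[n]_q! = [n]_q [n-1]_q ... [1]_q, with [0]_q! = 1."""
    result = 1.0
    for k in range(1, n + 1):
        result *= q_number(k, q)
    return result

def q_binomial(n, j, q):
    """q-binomial coefficient [n; j]_q = [n]_q! / ([j]_q! [n-j]_q!)."""
    return q_factorial(n, q) / (q_factorial(j, q) * q_factorial(n - j, q))

def q_pochhammer(x, a, q, n):
    """(x - a)_q^n = prod_{j=0}^{n-1} (x - q**j a)."""
    result = 1.0
    for j in range(n):
        result *= x - q**j * a
    return result

print(q_number(3, 0.5))    # [3]_q = 1 + q + q^2, prints 1.75
print(q_number(500, 0.5))  # approaches [infinity]_q = 1/(1-q), prints 2.0
```

For $q\in(0,1)$, $[n]_q$ indeed tends to $\frac{1}{1-q}$ as $n$ grows, matching the limit noted above.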
We adopt the symbol \begin{align*} \left( {1 + x} \right)_q^\alpha = \frac{{\left( {1 + x} \right)_q^\infty }}{{\left( {1 + q^\alpha x} \right)_q^\infty }} \end{align*} for any number $\alpha$. Clearly, this definition coincides with the definition of $\left( {1 + x} \right)_q^n $ when $\alpha=n\in \mathbb{N}$. \begin{lemma}\cite{K}\label{lem1} For any two numbers $\alpha$ and $\beta$, we have \begin{align*} \left( {1 + x} \right)_q^\alpha =\frac{ \left( {1 + x} \right)_q^{\alpha + \beta }}{ \left( {1 + q^\alpha x} \right)_q^\beta } \end{align*} and \begin{align*} D_q \left( {1 + x} \right)_q^\alpha = \left[ \alpha \right]_q \left( {1 + qx} \right)_q^{\alpha - 1}. \end{align*} \end{lemma} The $q$-derivative of any real-valued function $f$ is defined to be \begin{align*} D_q f\left( x \right) = \frac{{f\left( {qx} \right) - f\left( x \right)}}{{\left( {q - 1} \right)x}}, \qquad x\ne0. \end{align*} Clearly, as $q\to 1^-$, $D_q f\left( x \right)$ tends to $f^{\prime}(x)$, provided that $f$ is differentiable. Two fundamental $q$-binomial formulas are well known in the literature: the $q$-Gauss binomial formula, which has the form \begin{align*} \left( {1 + x} \right)_q^n = \sum\limits_{j = 0}^n {\left[ {\begin{array}{*{20}c} n \\ j \\ \end{array}} \right]_q q^{j\left( {j - 1} \right)/2} x^j } \end{align*} and the $q$-Heine binomial formula \begin{align*} \frac{1}{\left( {1 - x} \right)_q^n} = \sum\limits_{j = 0}^\infty {\left[ {\begin{array}{*{20}c} {n + j - 1} \\ j \\ \end{array}} \right]_q x^j }. \end{align*} Since \begin{align*} \mathop {\lim }\limits_{n \to \infty } \left[ {\begin{array}{*{20}c} n \\ j \\ \end{array}} \right]_q = \frac{1}{{\left( {1 - q} \right)\left( {1 - q^2 } \right) \ldots \left( {1 - q^j } \right)}}, \end{align*} applying this limit to the $q$-Gauss and $q$-Heine binomial formulas yields two formal power series in $x$.
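The differentiation rule in Lemma \ref{lem1} can be checked numerically for an integer exponent, where $(1+x)_q^n$ is a finite product. A small Python sketch, ours and for illustration only:

```python
def q_number(n, q):
    """[n]_q = (q**n - 1)/(q - 1)."""
    return (q**n - 1) / (q - 1)

def one_plus_x_qn(x, q, n):
    """(1 + x)_q^n = prod_{j=0}^{n-1} (1 + q**j x)."""
    result = 1.0
    for j in range(n):
        result *= 1 + q**j * x
    return result

def D_q(f, x, q):
    """q-derivative (f(qx) - f(x)) / ((q - 1) x), for x != 0."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

# Check D_q (1+x)_q^n = [n]_q (1+qx)_q^{n-1} at a sample point.
q, n, x = 0.7, 4, 0.3
lhs = D_q(lambda t: one_plus_x_qn(t, q, n), x, q)
rhs = q_number(n, q) * one_plus_x_qn(q * x, q, n - 1)
```

Here $(1+qx)_q^{n-1}$ is obtained by evaluating the same product at $qx$ with $n-1$ factors; the two sides agree up to floating-point rounding.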
Taking the limit $n \to \infty$ in the two binomial formulas, we obtain \begin{align} \left( {1 + x} \right)_q^\infty = \sum\limits_{j = 0}^\infty { q^{j\left( {j - 1} \right)/2} \frac{{x^j }}{{\left( {1 - q} \right)\left( {1 - q^2 } \right) \ldots \left( {1 - q^j } \right)}} },\label{eq1.1} \end{align} and \begin{align} \frac{1}{\left( {1 - x} \right)_q^\infty} = \sum\limits_{j = 0}^\infty { \frac{{x^j }}{{\left( {1 - q} \right)\left( {1 - q^2 } \right) \ldots \left( {1 - q^j } \right)}} }.\label{eq1.2} \end{align} These two series are very useful in the theory of $q$-calculus, since they were used to define the $q$-analogues of the exponential function. From \eqref{eq1.2}, \begin{align*} \frac{1}{\left( {1 - x} \right)_q^\infty} =\sum\limits_{j = 0}^\infty {\frac{{\left( {{\textstyle{x \over {1 - q}}}} \right)^j }}{{\left( {{\textstyle{{1 - q} \over {1 - q}}}} \right)\left( {{\textstyle{{1 - q^2 } \over {1 - q}}}} \right) \cdots \left( {{\textstyle{{1 - q^j } \over {1 - q}}}} \right)}}} = \sum\limits_{j = 0}^\infty {\frac{{\left( {{\textstyle{x \over {1 - q}}}} \right)^j }}{{\left[ 1 \right]_q \left[ 2 \right]_q \cdots \left[ j \right]_q }}} = \sum\limits_{j = 0}^\infty {\frac{{\left( {{\textstyle{x \over {1 - q}}}} \right)^j }}{{\left[ j \right]_q !}} = {\rm{e}}_q^{{\textstyle{x \over {1 - q}}}} }, \end{align*} or we write \begin{align*} {{\rm{e}}_q^x } =\frac{1}{\left( {1 - \left(1-q\right)x} \right)_q^\infty}.
\end{align*} Similarly, if we use \eqref{eq1.1}, the companion $q$-exponential function is defined to be \begin{align*} \left( {1 + x} \right)_q^\infty =\sum\limits_{j = 0}^\infty {\frac{{q^{j\left( {j - 1} \right)/2} \left( {{\textstyle{x \over {1 - q}}}} \right)^j }}{{\left( {{\textstyle{{1 - q} \over {1 - q}}}} \right)\left( {{\textstyle{{1 - q^2 } \over {1 - q}}}} \right) \cdots \left( {{\textstyle{{1 - q^j } \over {1 - q}}}} \right)}}} = \sum\limits_{j = 0}^\infty {\frac{{q^{j\left( {j - 1} \right)/2} \left( {{\textstyle{x \over {1 - q}}}} \right)^j }}{{\left[ 1 \right]_q \left[ 2 \right]_q \cdots \left[ j \right]_q }}} = \sum\limits_{j = 0}^\infty {\frac{{q^{j\left( {j - 1} \right)/2} \left( {{\textstyle{x \over {1 - q}}}} \right)^j }}{{\left[ j \right]_q !}} = {\rm{E}}_q^{{\textstyle{x \over {1 - q}}}} } \end{align*} or we write \begin{align*} {{\rm{E}}_q^x } = \left( {1 + \left(1-q\right)x} \right)_q^\infty. \end{align*} The derivatives of the above two $q$-exponential functions are given by \begin{align*} D_q{{\rm{E}}_q^x } ={{\rm{E}}_q^{qx} },\qquad D_q {{\rm{e}}_q^x }={{\rm{e}}_q^x }. \end{align*} We note that the additive property of the exponentials does not hold in general, i.e., in general \begin{align*} {\rm{e}}_q^x {\rm{e}}_q^y \ne {\rm{e}}_q^{x + y}. \end{align*} However, if $x$ and $y$ satisfy the commutation relation $yx=qxy$, then the additive property holds. The two functions ${\rm{E}}_q^x$ and ${\rm{e}}_q^x$ are connected to each other by the relations \begin{align*} {\rm{E}}_q^{ - x} {\rm{e}}_q^x = 1 ,\qquad {\rm{e}}_{1/q}^x = {\rm{E}}_q^x. \end{align*} Naturally, it is important to know the relation between these $q$-quantities. One of the most effective methods is to use inequalities. Among others, one of the most famous and applicable inequalities used in mathematics is the Bernoulli inequality, which is well known as \begin{align} \left( {1 + x} \right)^n \ge 1 + n x \label{eq1.3} \end{align} for every $x>-1$ and every positive integer $n\ge1$.
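Before turning to inequalities, the $q$-exponential identities above lend themselves to a quick numeric check with truncated series. The sketch below is ours; the truncation depth of 80 terms is an arbitrary choice, more than sufficient at the sample point:

```python
# Truncated-series sketch (ours) of the two q-exponential functions.

def q_factorial(n, q):
    result = 1.0
    for k in range(1, n + 1):
        result *= (q**k - 1) / (q - 1)   # [k]_q
    return result

def e_q(x, q, terms=80):
    """Small q-exponential: sum_j x**j / [j]_q!."""
    return sum(x**j / q_factorial(j, q) for j in range(terms))

def E_q(x, q, terms=80):
    """Big q-exponential: sum_j q**(j(j-1)/2) x**j / [j]_q!."""
    return sum(q**(j * (j - 1) // 2) * x**j / q_factorial(j, q)
               for j in range(terms))

def D_q(f, x, q):
    """q-derivative (f(qx) - f(x)) / ((q - 1) x)."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

q, x = 0.5, 0.8
identity = E_q(-x, q) * e_q(x, q)            # should be 1
deriv_e  = D_q(lambda t: e_q(t, q), x, q)    # should equal e_q(x, q)
deriv_E  = D_q(lambda t: E_q(t, q), x, q)    # should equal E_q(q*x, q)
```

All three quantities match the stated identities to within floating-point truncation error.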
The Bernoulli inequality was extended to more general forms \cite{MPF}: \begin{align} \left( {1 + x} \right)^\alpha \ge 1 +\alpha x, \qquad \alpha \ge 1\label{eq1.4} \end{align} and \begin{align} \left( {1 + x} \right)^\alpha \le 1 + \alpha x, \qquad 0<\alpha < 1.\label{eq1.5} \end{align} This inequality has important applications in proving some classical theorems in analysis and statistics. Due to this important role, in this work we prove a $q$-analogue of the Bernoulli inequality and give some other related inequalities. \section{ $q$-Bernoulli inequality} Let us begin with the following version of the $q$-Bernoulli inequality for integers. \begin{theorem}\label{thm1} Let $q\in (0,1)$. If $x>-1$, then the $q$-Bernoulli inequality \begin{align} \left( {1 + x} \right)_q^n \ge 1 + \left[ n \right]_q x,\label{eq2.1} \end{align} is valid for every positive integer $n\ge1$. \end{theorem} \begin{proof} We proceed by induction. Define the statement \begin{align} {\rm{P}}\left(n\right): \qquad\left( {1 + x} \right)_q^n \ge 1 + \left[ n \right]_q x.\label{eq2.2} \end{align} If $x=0$, then we get equality for all $n$, and thus \eqref{eq2.1} holds. Now let $x>-1$. Then \begin{align*} {\rm{P}}\left(1\right): \qquad\left( {1 + x} \right)^1_q= \left( {1 + x} \right) \ge 1 + x=1 + \left[ 1 \right]_q x. \end{align*} Assume \eqref{eq2.2} holds for $n=k$, i.e., \begin{align*} {\rm{P}}\left(k\right): \qquad\left( {1 + x} \right)_q^k \ge 1 + \left[ k \right]_q x \qquad\text{is true}. \end{align*} We need to show that \begin{align*} {\rm{P}}\left(k+1\right): \qquad\left( {1 + x} \right)_q^{k+1} \ge 1 + \left[ k+1 \right]_q x \end{align*} is true.
Starting with the left-hand side, \begin{align*} \left( {1 + x} \right)_q^{k+1}&=\left( {1 + x} \right)_q^k\left( {1 + q^kx} \right) \\&\ge \left(1 + \left[ k \right]_q x\right)\left( {1 + q^kx} \right)\qquad \qquad\qquad\text{(follows by assumption \eqref{eq2.2} for $n=k$)} \\ &=1 + \left[ k \right]_q x + q^k x + q^k \left[ k \right]_q x^2 \\ &\ge 1 + \frac{1}{{q }}q \left[ k \right]_q x + \frac{q}{q}q^k x \qquad\qquad\text{(since $ q^k \left[ k \right]_q x^2 \ge 0 $)} \\ &= 1 + \frac{1}{{q }}\left( {\left[ {k + 1} \right]_q - 1} \right)x + \frac{1}{q}\left( {q^{k + 1} x} \right) \qquad \qquad\text{(since $ \left[ {k + 1} \right]_q = q \left[ k \right]_q + 1$)} \\ &= 1 + \frac{1}{q}\left( {\left[ {k + 1} \right]_q + q^{k + 1} - 1} \right)x \\ &= 1 + \frac{1}{q}\left( {\left[ {k + 1} \right]_q + \left( {q - 1} \right)\left[ {k + 1} \right]_q } \right)x\qquad\qquad \text{(since $ q^{k + 1} - 1 = \left( {q - 1} \right)\left[ {k + 1} \right]_q$)} \\ &= 1 + \left[ {k + 1} \right]_q x, \end{align*} which means that the statement ${\rm{P}}\left(k+1\right)$ is true; thus, by mathematical induction, the inequality \eqref{eq2.1} holds for every $n\in \mathbb{N}$ and $x>-1$. \end{proof} \begin{remark} \label{rem1}As $q\to 1^-$, the $q$-Bernoulli inequality \eqref{eq2.1} reduces to the original version of the Bernoulli inequality \eqref{eq1.3} for the integer case. \end{remark} \begin{remark} For the case $-1<x<0$, we prefer to write \eqref{eq2.1} in the form \begin{align*} \left( {1 - y} \right)_q^n \ge 1 - \left[n\right]_qy \end{align*} for every $0< y<1$ and $n \ge 1$. \end{remark} \begin{corollary}\label{cor1} Let $q\in (0,1)$. If $x>-1$, then the generalized $q$-Bernoulli inequality \begin{align*} \left( {1 + x} \right)_q^{m + n} \ge \left(1 + \left[ m \right]_q x\right) \left( {1 + q^m x} \right)_q^n, \end{align*} is valid for every $m\in \mathbb{N}$ and $n\in \mathbb{Z}$.
\end{corollary} \begin{proof} The result is an immediate consequence of Theorem \ref{thm1}, by substituting $\left( {1 + x} \right)_q^m = \frac{{\left( {1 + x} \right)_q^{m + n} }}{{\left( {1 + q^m x} \right)_q^n }}$ in \eqref{eq2.1}. The result then follows for every $m\in \mathbb{N}$ and $n\in \mathbb{Z}$. \end{proof} The following generalization of \eqref{eq2.1} is valid for any real number $\alpha\ge0$. \begin{theorem}\label{thm2} Let $q\in (0,1)$. If $x\ge0$, then the $q$-Bernoulli inequalities \begin{align} \left( {1 + x} \right)_q^\alpha \ge 1 + \left[ \alpha \right]_q x, \qquad \alpha \ge 1 \label{eq2.3} \end{align} and \begin{align} \left( {1 + x} \right)_q^\alpha \le 1 + \left[ \alpha \right]_q x, \qquad 0<\alpha < 1 \label{eq2.4} \end{align} are valid. \end{theorem} \begin{proof} Let us recall \cite{G} that, for $0<a<b$ (or $0>a>b$), a function $f(x)$ is said to be $q$-increasing (respectively, $q$-decreasing) on $[a,b]$ if $f(qx)\le f(x)$ (respectively, $f(qx)\ge f(x)$) whenever $x\in [a,b]$ and $qx \in [a,b]$. As a direct consequence, $f(x)$ is $q$-increasing (respectively, $q$-decreasing) on $[a,b]$ iff $D_qf(x)\ge 0$ (respectively, $D_qf(x)\le 0$), whenever $x\in [a,b]$ and $qx \in [a,b]$. Let $f\left( x \right) = \left( {1 + x} \right)_q^\alpha - \left[ \alpha \right]_q x - 1$, $x\ge0$. Since $ \left( {1 + x} \right)_q^\alpha = \frac{{\left( {1 + x} \right)_q^\infty }}{{\left( {1 + q^\alpha x} \right)_q^\infty }}$, inserting $qx$ in place of $x$ and replacing $\alpha$ by $\alpha-1$, we get $ \left( {1 + qx} \right)_q^{\alpha - 1} = \frac{{\left( {1 + qx} \right)_q^\infty }}{{\left( {1 + q^\alpha x} \right)_q^\infty }} $.
Therefore, we have \begin{align*} D_q f\left( x \right) &= \left[ \alpha \right]_q \left( {1 + qx} \right)_q^{\alpha - 1} - \left[ \alpha \right]_q\\ &= \left[ \alpha \right]_q \frac{{\left( {1 + qx} \right)_q^\infty }}{{\left( {1 + q^\alpha x} \right)_q^\infty }} - \left[ \alpha \right]_q\\ &= \left[ \alpha \right]_q \frac{{\sum\limits_{j = 0}^\infty {{\textstyle{{q^{j\left( {j - 1} \right)/2} } \over {\prod\limits_{k = 1}^j {\left( {1 - q^k } \right)} }}}q^j x^j } }}{{\sum\limits_{j = 0}^\infty {{\textstyle{{q^{j\left( {j - 1} \right)/2} } \over {\prod\limits_{k = 1}^j {\left( {1 - q^k } \right)} }}}q^{j\alpha } x^j } }} - \left[ \alpha \right]_q\qquad (\text{with the convention } \prod\limits_{k = 1}^0 {\left( {1 - q^k } \right)} = 1) \\ &= \left[ \alpha \right]_q \frac{{1 + \sum\limits_{j = 1}^\infty {{\textstyle{{q^{j\left( {j - 1} \right)/2} } \over {\prod\limits_{k = 1}^j {\left( {1 - q^k } \right)} }}}q^j x^j } }}{{1 + \sum\limits_{j = 1}^\infty {{\textstyle{{q^{j\left( {j - 1} \right)/2} } \over {\prod\limits_{k = 1}^j {\left( {1 - q^k } \right)} }}}q^{j\alpha } x^j } }} - \left[ \alpha \right]_q\\ &= \left[ \alpha \right]_q \frac{{\sum\limits_{j = 1}^\infty {{\textstyle{{q^{j\left( {j - 1} \right)/2} } \over {\prod\limits_{k = 1}^j {\left( {1 - q^k } \right)} }}}\left( {q^j - q^{j\alpha } } \right)x^j } }}{{1 + \sum\limits_{j = 1}^\infty {{\textstyle{{q^{j\left( {j - 1} \right)/2} } \over {\prod\limits_{k = 1}^j {\left( {1 - q^k } \right)} }}}q^{j\alpha } x^j } }} \ge 0, \end{align*} since $q\in (0,1)$ and $\alpha \ge 1$ imply $ q^j - q^{j\alpha } \ge 0 $ for all $j\ge1$. This shows that $D_q f\left( x \right) \ge 0$ for all $x\ge0$, which means that $f$ is $q$-increasing; since $f(0)=0$, the inequality \eqref{eq2.3} is proved. The inequality \eqref{eq2.4} is deduced from the above proof by noting that $\left( {q^j - q^{j\alpha } } \right)<0$ for all $0<\alpha< 1$.
\end{proof} \begin{remark}\label{rem2} Setting $\alpha =n\in \mathbb{N}$ in \eqref{eq2.3}, the inequality reduces to the $q$-Bernoulli inequality \eqref{eq2.1} for the integer case, but only for $x\ge0$. Moreover, as $q\to 1^-$, \eqref{eq2.3} and \eqref{eq2.4} reduce to the classical versions \eqref{eq1.4} and \eqref{eq1.5}, respectively. \end{remark} Testing the validity of \eqref{eq2.3} and \eqref{eq2.4} for $-1<x<0$, we find that these inequalities can be extended, but with an additional restriction on $q\in (0,1)$, as given in the following result. \begin{theorem} \label{thm3}There exists $\widehat{q}\in (0,1)$ such that the inequalities \eqref{eq2.3} and \eqref{eq2.4} hold for every $q\in (\widehat{q},1)$ and every $x>-1$. \end{theorem} \begin{proof} Firstly, we need to recall the $q$-Mean Value Theorem ($q$-MVT) given in \cite{RSM}, which states the following: for a continuous function $g$ defined on $[a,b]$ $(0<a<b)$, there exist $\eta\in (a,b)$ and $\widehat{q}\in (0,1)$ such that \begin{align} \label{q-MVT}g\left( {b} \right) - g\left( a \right) = D_q g\left( \eta \right)\left( {b - a} \right) \end{align} for all $q\in \left(\widehat{q},1\right) $.\\ \noindent\textbf{Case I.} If $x\ge0$. We consider the function $f\left(t\right)=\left(1+t\right)^\alpha_q$ defined for $t\ge0$. Clearly $f$ is continuous for $t\in [0,x]\subset [0,\infty)$, and $D_q f\left( t \right) = \left[ \alpha \right]_q \left( {1 + qt} \right)_q^{\alpha - 1}$. Applying \eqref{q-MVT} with $a=0$ and $b=x$, there exist $\eta\in (0,x)$ and $\widehat{q}\in (0,1)$ such that \begin{align*} \left(1+x\right)^\alpha_q-1=\left[ \alpha \right]_q\left( {1 + q\eta} \right)_q^{\alpha - 1} \left(x-0\right) \ge \left[ \alpha \right]_qx\qquad \forall q\in \left(\widehat{q},1\right).
\end{align*} This yields that \begin{align*} \left(1+x\right)^\alpha_q \ge 1+ \left[ \alpha \right]_qx \end{align*} $\forall q\in \left(\widehat{q},1\right)$, and this proves \eqref{eq2.3}.\\ \noindent\textbf{Case II.} If $-1<x<0$. Let us rewrite \eqref{eq2.3} as follows: \begin{align} 1 - \left[ \alpha \right]_q y \le \left( {1 - y} \right)_q^\alpha, \qquad 0< y < 1.\label{eq2.6} \end{align} Consider the function $f\left(t\right)=\left(1-t\right)^\alpha_q$ defined for $0\le t\le 1$. Clearly $f$ is continuous for $t\in [0,y]\subset [0,1]$, and $D_q f\left( t \right) = -\left[ \alpha \right]_q \left( {1 - qt} \right)_q^{\alpha - 1}$. Applying \eqref{q-MVT} with $a=0$ and $b=y$, there exist $\eta\in (0,y)$ and $\widehat{q}\in (0,1)$ such that \begin{align} \left(1-y\right)^\alpha_q-1=-\left[ \alpha \right]_q\left( {1 - q\eta} \right)_q^{\alpha - 1} \left(y-0\right) \ge -\left[ \alpha \right]_qy\qquad \forall q\in \left(\widehat{q},1\right).\label{eq2.7} \end{align} This yields that \begin{align*} \left(1-y\right)^\alpha_q \ge 1- \left[ \alpha \right]_qy \end{align*} $\forall q\in \left(\widehat{q},1\right)$ with $0<y<1$, and this proves the inequality. The reverse inequality in \eqref{eq2.6} holds since the inequality in \eqref{eq2.7} is reversed for $0<\alpha<1$, which proves \eqref{eq2.4}. \end{proof} A generalization of \eqref{eq2.3} and \eqref{eq2.4} is given as follows: \begin{proposition}\label{prp1} Let $\beta\in \mathbb{R}$. There exists $\widehat{q}\in (0,1)$ such that for every $x>-1$ the inequalities \begin{align} \left( {1 + x} \right)_q^{\alpha + \beta } \ge \left( {1 + \left[ \alpha \right]_q x} \right)\left( {1 + q^\alpha x} \right)_q^\beta \qquad \alpha \ge 1\label{eq2.8} \end{align} and \begin{align} \left( {1 + x} \right)_q^{\alpha + \beta } \le \left( {1 + \left[ \alpha \right]_q x} \right)\left( {1 + q^\alpha x} \right)_q^\beta \qquad 0<\alpha < 1\label{eq2.9} \end{align} hold for every $q\in (\widehat{q},1)$.
\end{proposition} \begin{proof} From Lemma \ref{lem1} we have $\left( {1 + x} \right)_q^\alpha = \frac{{\left( {1 + x} \right)_q^{\alpha + \beta } }}{{\left( {1 + q^\alpha x} \right)_q^\beta }}$. Substituting this in \eqref{eq2.3} and \eqref{eq2.4}, we get the required result. \end{proof} \begin{remark}\label{rem3} Setting $\beta=0$ in \eqref{eq2.8} and \eqref{eq2.9}, we recapture \eqref{eq2.3} and \eqref{eq2.4}, respectively. \end{remark} \begin{corollary}\label{cor6} Let $\beta\in \mathbb{R}$. There exists $\widehat{q}\in (0,1)$ such that for every $x>-1$ the inequalities \begin{align} \left( {1 + x} \right)_q^{\infty } \ge \left( {1 + \left[ \alpha \right]_q x} \right)\left( {1 + q^\alpha x} \right)_q^\beta\left( {1 + q^{\alpha+\beta} x} \right)_q^\infty \qquad \alpha \ge 1\label{eq2.10} \end{align} and \begin{align} \left( {1 + x} \right)_q^{\infty } \le \left( {1 + \left[ \alpha \right]_q x} \right)\left( {1 + q^\alpha x} \right)_q^\beta\left( {1 + q^{\alpha+\beta} x} \right)_q^\infty \qquad 0<\alpha < 1\label{eq2.11} \end{align} hold for every $q\in (\widehat{q},1)$. \end{corollary} \begin{proof} Substituting $\left( {1 + x} \right)_q^{\alpha+\beta} =\frac{{\left( {1 + x} \right)_q^\infty }}{{\left( {1 + q^{\alpha+\beta} x} \right)_q^\infty }}$ in \eqref{eq2.8} and \eqref{eq2.9}, respectively, we get the required result. \end{proof} \begin{remark} Replacing $x$ by $(1-q)x$ in \eqref{eq2.10} and \eqref{eq2.11}, we get inequalities for the exponential function ${\rm{E}}_q^x$ for all $x>\frac{-1}{1-q}$. Similar inequalities hold for ${\rm{e}}_q^x$, with slight changes in the substitution.
\end{remark} \begin{corollary} There exists $\widehat{q}\in (0,1)$ such that for every $x>-1$ the inequalities \begin{align} \left( {1 + x} \right)_q^{\infty } \ge \left( {1 + \left[ \alpha \right]_q x} \right) \left( {1 + q^{\alpha} x} \right)_q^\infty \qquad \alpha \ge 1\label{eq2.12} \end{align} and \begin{align} \left( {1 + x} \right)_q^{\infty } \le \left( {1 + \left[ \alpha \right]_q x} \right)\left( {1 + q^{\alpha} x} \right)_q^\infty \qquad 0<\alpha < 1\label{eq2.13} \end{align} hold for every $q\in (\widehat{q},1)$. \end{corollary} \begin{proof} Setting $\beta =0$ in \eqref{eq2.10} and \eqref{eq2.11}, respectively, we get the required result. \end{proof}
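To close, the inequalities above admit a direct numeric sanity check. The following Python sketch is ours and not part of the proofs; it tests the integer-case inequality \eqref{eq2.1} on a grid of $x>-1$, and \eqref{eq2.3}--\eqref{eq2.4} for $x\ge0$, with the infinite products truncated at an arbitrary 300 factors:

```python
# Numeric sanity check (ours) of the q-Bernoulli inequalities.

def q_number(alpha, q):
    """[alpha]_q = (q**alpha - 1)/(q - 1), for real alpha."""
    return (q**alpha - 1) / (q - 1)

def q_pochhammer_1px(x, q, n):
    """(1 + x)_q^n = prod_{j=0}^{n-1} (1 + q**j x), integer n."""
    result = 1.0
    for j in range(n):
        result *= 1 + q**j * x
    return result

def q_power_alpha(x, q, alpha, terms=300):
    """(1 + x)_q^alpha = (1 + x)_q^inf / (1 + q**alpha x)_q^inf,
    truncated at `terms` factors."""
    num = den = 1.0
    for j in range(terms):
        num *= 1 + q**j * x
        den *= 1 + q**(j + alpha) * x
    return num / den

violations = 0
for q in (0.2, 0.5, 0.9):
    # Integer case: (1+x)_q^n >= 1 + [n]_q x for all x > -1.
    for n in (1, 2, 5, 10):
        for k in range(-99, 300):
            x = k / 100.0
            if q_pochhammer_1px(x, q, n) < 1 + q_number(n, q) * x - 1e-9:
                violations += 1
    # Real exponent, x >= 0: >= for alpha >= 1, <= for 0 < alpha < 1.
    for k in range(0, 300):
        x = k / 100.0
        if q_power_alpha(x, q, 2.5) < 1 + q_number(2.5, q) * x - 1e-9:
            violations += 1
        if q_power_alpha(x, q, 0.4) > 1 + q_number(0.4, q) * x + 1e-9:
            violations += 1
```

No violations occur on this grid, consistent with Theorems 1 and 2.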
\section{Introduction} How do we infer what people believe and desire from their observable behavior? One way to address this problem is to invoke some version of the \emph{principle of charity}: We attribute beliefs and desires to an agent that \emph{rationalize} the behavior we observe \cite{Davidson1973, Lewis1974}. What does it mean to be rational? One way to articulate what it means to be rational is the framework of \emph{decision theory}, a mathematical theory which defines optimal decisions given a subjective probability function and a utility function \cite{Savage1954, Jeffrey1990}. Ramsey \cite{Ramsey1926} shows how we can use the assumption that people follow one particular decision theoretic principle, viz. expected utility maximization, to infer their subjective probability function and utility function from their observable betting behavior. In this paper, I focus on Ramsey's method to infer subjective probabilities from betting behavior after the utility function is already determined. Ramsey advises us to think of subjective probabilities as betting odds. However, when agents are sensitive to risk, subjective probabilities and betting odds can come apart, so Ramsey's method no longer works. I show that we can extend Ramsey's method, or something very similar to it, to agents who follow a weaker decision theoretic principle: risk-weighted expected utility maximization, defended by Buchak \cite{Buchak2013}. \section{Measuring Subjective Probability} Before explaining Ramsey's method for measuring subjective probability, let me briefly explain expected utility theory in the framework of Savage \cite{Savage1954}. \subsection{Expected Utility Theory} We start with a set of \emph{states} $\mathscr{S}$, which contains all the maximally specific ways the world might be. Subsets of $\mathscr{S}$ are called \emph{events} and form the event space $\mathscr{E}$, which is a $\sigma$-algebra on $\mathscr{S}$.
We further have a set $\mathscr{O}$ of all possible \emph{outcomes}, which are the bearers of value for our agent. Following Savage, we are assuming \emph{state-independent utilities}, which means that the value of outcomes does not depend on the state of the world. We model \emph{gambles} as functions from states to outcomes, that is, as functions from $\mathscr{S}$ to $\mathscr{O}$. (Savage calls these `acts'.) We assume that gambles assign different outcomes to only finitely many events. This means that we can write a gamble $g$ in the following form: $\{o_1, E_1;...;o_n, E_n\}$. This gamble yields outcome $o_1$ if event $E_1$ obtains, outcome $o_2$ if event $E_2$ obtains and so on. Further, $E_1, E_2,... E_n$ form a \emph{partition} of our state space $\mathscr{S}$, which means that they are mutually exclusive and collectively exhaustive. We assume that our agent comes with a \emph{subjective probability function} $p$ on the event space $\mathscr{E}$, which models our agent's credences. Further, our agent comes with a \emph{utility function} $u$ on the outcome space $\mathscr{O}$, which models our agent's preferences over outcomes. Given this setup, the \emph{expected utility} (EU) of gamble $g$ is defined as follows: \begin{equation*} EU(g) = \sum_{i =1}^{n} p(E_i)u(o_i). \end{equation*} Our agent has preferences over gambles, which are reflected in their observable choice behavior. We write $f \preceq g$ to mean that our agent weakly prefers gamble $g$ to gamble $f$, and $f \sim g$ to mean that our agent is indifferent between gambles $f$ and $g$.\footnote{As usual, we define $f \sim g$ as $f \preceq g \land g \preceq f$ and $f \prec g$ as $f \preceq g \land g \not \preceq f$.} Our agent's preference ordering over gambles is \emph{representable} as EU maximization just in case, for any gambles $f$ and $g$, we have \begin{equation*} f \preceq g \iff EU(f) \leq EU(g) \end{equation*} for some subjective probability function $p$ and utility function $u$.
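The EU definition above is straightforward to operationalize. Here is a minimal sketch in Python; the coin-flip events and dollar outcomes are illustrative examples, not taken from the text.

```python
# Expected utility of a finite gamble {o_1, E_1; ...; o_n, E_n},
# computed as EU(g) = sum_i p(E_i) * u(o_i).

def expected_utility(gamble, p, u):
    """gamble: list of (outcome, event) pairs over a partition of events;
    p: subjective probabilities of events; u: utilities of outcomes."""
    return sum(p[event] * u[outcome] for outcome, event in gamble)

# Illustrative fair-coin gamble: win $1 on heads, nothing on tails.
p = {"heads": 0.5, "tails": 0.5}
u = {"$1": 1.0, "$0": 0.0}
g = [("$1", "heads"), ("$0", "tails")]

print(expected_utility(g, p, u))  # 0.5
```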
There are various \emph{representation theorems} for EU theory, which show that any preference ordering among gambles which satisfies certain conditions is representable as EU maximization relative to a unique probability function $p$ and a utility function $u$ unique up to positive linear transformation \cite{Savage1954}. One common way to interpret Ramsey's posthumously published essay \emph{Truth and Probability} is as an (early and perhaps incomplete) representation theorem for EU theory \cite{Fishburn1981, Elliott2017}. I want to focus on a different aspect of Ramsey's essay, which comes out in the following passage: \begin{quote} ``The subject of our inquiry is the logic of partial belief, and I do not think we can carry it far unless we have at least an approximate notion of what partial belief is, and how, if at all, it can be measured [...] We must therefore try to develop a purely psychological method of measuring belief.'' \cite[p. 166]{Ramsey1926} \end{quote} Ramsey is ultimately interested in \emph{measuring} the partial beliefs, or subjective probabilities, of an agent. To achieve this end, Ramsey suggests assuming that the agent is an EU maximizer: \begin{quote} ``I suggest that we introduce as a law of psychology that his behaviour is governed by what is called the mathematical expectation [...]'' \cite[p. 174]{Ramsey1926} \end{quote} Ramsey goes on to list various conditions which an agent's preferences over gambles must satisfy in order to be representable as EU maximization. However, this representation theorem is merely a means to the end of measuring subjective probability.
\subsection{Ramsey's Method} Let me now explain the method for measuring subjective probability sketched by Ramsey \cite{Ramsey1926}.\footnote{Also see \cite{Savage1971, Bradley2004, Bermudez2011}.} Assuming \begin{enumerate} \item an agent's preference ranking among gambles is representable as EU maximization, \item we have already determined our agent's utility function $u$ (up to positive linear transformation), \end{enumerate} Ramsey gives us a way to measure the subjective probability our agent assigns to an arbitrary event by observing our agent's preferences over gambles.\footnote{Note that Ramsey also sketches a method for measuring the utilities of our agent, but the details of this will not concern us here. The basic idea is that Ramsey tells us under which condition the utility difference between outcome $a$ and $b$ equals the utility difference between outcome $c$ and $d$, which fixes the utility function up to positive linear transformation \cite{Jeffrey1990, Parmigiani2009}.} Ramsey's basic idea is that subjective probabilities are betting odds: \begin{quote} ``This amounts roughly to defining the degree of belief in $p$ by the odds at which the subject would bet on $p$ [...]'' \cite[pp. 179-80]{Ramsey1926} \end{quote} Let me now explain how Ramsey's method works in more detail. We want to find out which subjective probability our agent ascribes to event $E$. First, we have to find three outcomes $b,m,w$ which satisfy what I will call \emph{Ramsey's conditions}: \begin{enumerate}[i.] \item our agent strictly prefers $b$ over $w$: $w \prec b$,\footnote{Note that every outcome $o$ corresponds to a \emph{constant act}: a gamble which always yields outcome $o$ no matter what. To simplify notation, I will denote the constant act yielding $o$ by $o$.} \item our agent is indifferent between getting $m$ for certain and a gamble which yields $b$ if $E$ happens and $w$ otherwise: $m \sim \{b, E; w, \textrm{not-}E\}$.
\end{enumerate} Note that we have to assume that our agent's outcome space is sufficiently rich to contain such outcomes for any event $E$ whose probability we want to measure. By our first assumption, our agent maximizes EU, so \begin{equation*} u(m) = u(w) + p(E)(u(b) - u(w)), \end{equation*} which means that $p(E) = \frac{u(m) - u(w)}{u(b) - u(w)}$. Now, we can use our second assumption, that we have already determined our agent's utility function $u$, to compute $p(E)$. Since $E$ was an arbitrary event, Ramsey has shown us how to measure the subjective probability our agent assigns to arbitrary events. To illustrate Ramsey's method, consider Alice, who only cares about money and values money linearly, so we have $u(\$x) = x$. Ramsey tells us to measure the subjective probability Alice ascribes to event $E$ by finding out for which price $x$ between zero and one dollar Alice is indifferent between getting $\$x$ for certain and a bet which pays one dollar if $E$ happens and nothing otherwise. Now we can use the assumption that Alice maximizes EU to infer that $p(E) = x$.\footnote{We have $p(E) = \frac{u(x) - u(0)}{u(1) - u(0)}$, so by the assumption that Alice values money linearly, $p(E) = \frac{x}{1} = x$.} Note that Ramsey's method falls short of an \emph{algorithm} for determining the subjective probability an agent assigns to arbitrary events. This is because Ramsey hasn't given us a general method for finding outcomes $b,m,w$ which satisfy Ramsey's conditions. Still, if we can always find these outcomes, we have a completely general way to determine the subjective probabilities of our agent by their observable betting behavior. \section{Beyond Expected Utility} In this paper, I show that we can generalize Ramsey's method to agents who do not maximize expected utility. In particular, I generalize Ramsey's method to \emph{risk-weighted expected utility maximizers}, following Buchak \cite{Buchak2013}.
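Before moving beyond EU, Ramsey's computation of $p(E)$ from an observed indifference point can be sketched in a few lines of Python. The 30-cent indifference point below is a hypothetical observation about Alice, not a figure from the text.

```python
def ramsey_probability(u_b, u_m, u_w):
    """For an EU maximizer with m ~ {b if E, w if not-E}, solve
    u(m) = u(w) + p(E)(u(b) - u(w)) for p(E)."""
    return (u_m - u_w) / (u_b - u_w)

# Alice values money linearly (u($x) = x). Suppose she is indifferent
# between $0.30 for certain and a bet paying $1 if E, nothing otherwise.
print(ramsey_probability(u_b=1.0, u_m=0.3, u_w=0.0))  # 0.3, i.e. p(E) = 0.3
```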
\subsection{Limits of Expected Utility Theory} It is well known that people sometimes have preferences which are difficult to reconcile with expected utility theory. Here is an example first discussed by Allais \cite{Allais1953}: \begin{quote} \textbf{Allais paradox.} You get to choose between the following two gambles: \begin{enumerate} \item One million dollars for certain.\label{L1} \item 89 \% chance of winning one million dollars, 10 \% chance of winning five million dollars, 1 \% chance of winning nothing.\label{L2} \setcounter{first-gamble}{\value{enumi}} \end{enumerate} \noindent Now, you get to choose between the following two gambles: \begin{enumerate} \setcounter{enumi}{\value{first-gamble}} \item 89 \% chance of winning nothing, 11 \% chance of winning one million dollars.\label{L3} \item 90 \% chance of winning nothing, 10 \% chance of winning five million dollars.\label{L4} \end{enumerate} \end{quote} If you picked (\ref{L1}) and (\ref{L4}), you might be surprised to learn that these preferences are inconsistent with EU maximization. There is no way to assign utility values to dollar amounts that makes the expected utility of (\ref{L1}) higher than the expected utility of (\ref{L2}), while also making the expected utility of (\ref{L4}) higher than the expected utility of (\ref{L3}). Now as it turns out, many subjects exhibit just these preferences: they prefer (\ref{L1}) over (\ref{L2}) and (\ref{L4}) over (\ref{L3}) \cite{Machina1987, Oliver2009}. Call these the \emph{Allais-preferences}. There is a substantive question about whether it is \emph{rational} to have Allais-preferences. However, we do not need to decide this question here. What matters for our purposes is that real-life agents sometimes \emph{have} Allais-preferences, whether they are rational or not. This means that real-life agents are not always EU maximizers. Therefore, Ramsey's method is not always applicable to real-life agents. 
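The inconsistency claim can be checked directly: for any assignment of utilities to the three prizes, the EU gap between gambles (1) and (2) is algebraically identical to the gap between (3) and (4), namely $0.11\,u(1M) - 0.10\,u(5M) - 0.01\,u(0)$. The sketch below verifies this for an arbitrary, made-up utility assignment.

```python
def eu(lottery, u):
    # lottery: list of (probability, prize) pairs
    return sum(p * u[prize] for p, prize in lottery)

g1 = [(1.00, "1M")]
g2 = [(0.89, "1M"), (0.10, "5M"), (0.01, "0")]
g3 = [(0.89, "0"), (0.11, "1M")]
g4 = [(0.90, "0"), (0.10, "5M")]

# Whatever utilities we pick, EU(g1) - EU(g2) equals EU(g3) - EU(g4),
# so preferring (1) over (2) while preferring (4) over (3) cannot be
# rationalized by any utility function.
u = {"0": 0.0, "1M": 10.0, "5M": 11.0}  # arbitrary utilities
gap12 = eu(g1, u) - eu(g2, u)
gap34 = eu(g3, u) - eu(g4, u)
print(abs(gap12 - gap34) < 1e-9)  # True
```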
\subsection{Risk-Weighted Expected Utility Theory} There are alternative decision theories which are designed to accommodate Allais-preferences, such as \emph{risk-weighted expected utility theory} defended by Buchak \cite{Buchak2013}.\footnote{See \cite{Buchak2017} for an accessible introduction. REU theory builds on earlier work on rank-dependent utility theory, in particular \cite{Quiggin1983, machina_more_1992, KW2003}.} In this theory, agents are characterized by a subjective probability function, a utility function and a \emph{risk function} $r$, which measures our agent's attitude towards risk. They do not maximize expected utility, but \emph{risk-weighted expected utility}. Given subjective probability function $p$, utility function $u$ and risk function $r$, the \emph{risk-weighted expected utility (REU)} of gamble $g = \{o_1,E_1;...;o_n, E_n\}$ with $u(o_1) \leq ... \leq u(o_n)$ is defined as follows:\footnote{We stipulate that $u(o_0) = 0$.} \begin{equation*} REU(g) = \sum\limits_{j=1}^n \left( r \left(\sum\limits_{i=j}^n p(E_i)\right) \left( u(o_j) - u(o_{j-1}) \right) \right). \end{equation*} To understand the difference between EU theory and REU theory, it is best to look at an example. Meet Alice and Bob. Both Alice and Bob value money linearly, so we have $u(\$x) = x$ for both Alice and Bob. Now suppose I have a fair coin and offer both Alice and Bob a gamble which yields one dollar if the coin comes up heads and nothing otherwise. Since both believe that the coin is fair, their subjective probability that it will come up heads is $\frac{1}{2}$. However, depending on their attitude towards risk, Alice and Bob might give more or less \emph{weight} to this event when deciding how much to pay for my gamble. Alice is indifferent towards risk, so Alice is willing to pay up to 50 cent for my gamble. In contrast, Bob prefers 50 cents for certain over my gamble. In fact, Bob is only willing to pay up to 10 cents for my gamble. 
Therefore, Bob can't be an EU maximizer. Rather, Bob is \emph{risk-averse}: for Bob, worse outcomes play a larger role in determining the value of a gamble than better ones.\footnote{Note that we assumed that Bob's utility function is a linear function of money. We could try to model risk averse agents within EU theory by concave utility functions. However, this approach faces several problems. First, there are risk-averse preferences which can't be captured in EU theory, such as the Allais paradox discussed above. Second, concave utility functions for small stakes predict extreme risk aversion for bigger stakes, see \cite{Rabin2000}. Further see \cite{Okasha2007} for an evolutionary explanation of why real-world agents are risk-averse in ways which are incompatible with EU theory.} We can model the disagreement between Alice and Bob in the framework of REU theory. According to REU theory, two agents with the same probability and utility function can disagree about the value of a gamble if they have different attitudes towards risk. We can write my gamble from above as $f =\{\$1, H ; \$0,T\}$, which yields one dollar if the coin lands heads and zero dollars otherwise. The risk-weighted expected utility of $f$ is \begin{equation*} REU(f) = u(\$0) + r(p(H))(u(\$1) - u(\$0)) = r(p(H)). \end{equation*} Intuitively, $r$ measures how much \emph{weight} our agent puts on the event that the coin lands heads. An agent who is indifferent towards risk, such as Alice, weights each event with its subjective probability. (This means that EU maximization is the special case of REU maximization where $r$ is the identity function.) A risk averse agent, such as Bob, weights an event with \emph{less} than its subjective probability if the event is not certain. A risk loving agent weights an event with \emph{more} than its subjective probability if the event is not certain.
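Buchak's REU formula can be sketched as follows. The quadratic risk function used below is a hypothetical risk-averse choice for illustration; it does not reproduce Bob's exact 10-cent valuation, which would require $r(1/2) = 0.1$.

```python
# REU(g) = sum_j r(sum_{i>=j} p(E_i)) * (u(o_j) - u(o_{j-1})), with u(o_0) = 0
# and outcomes ordered so that u(o_1) <= ... <= u(o_n).

def reu(gamble, p, u, r):
    """gamble: list of (outcome, event) pairs; r: risk function on [0,1]."""
    pairs = sorted(gamble, key=lambda oe: u[oe[0]])  # order by utility
    total, prev_u = 0.0, 0.0
    for j, (outcome, _) in enumerate(pairs):
        tail_p = sum(p[e] for _, e in pairs[j:])  # prob. of doing at least this well
        total += r(tail_p) * (u[outcome] - prev_u)
        prev_u = u[outcome]
    return total

p = {"H": 0.5, "T": 0.5}
u = {"$1": 1.0, "$0": 0.0}
f = [("$0", "T"), ("$1", "H")]

print(reu(f, p, u, r=lambda x: x))       # 0.5: identity r recovers EU (Alice)
print(reu(f, p, u, r=lambda x: x ** 2))  # 0.25: a risk-averse r discounts the gamble
```

With the identity risk function the computation collapses to expected utility, which is why EU maximization is the special case mentioned above.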
Thus, the risk function maps subjective probability to \emph{decision weight}, where the decision weight of an event is the contribution which the probability of the event makes to the value of a gamble in which this event yields the best outcome. Following Buchak \cite{Buchak2013}, I assume that the risk function $r$ satisfies the following constraints: \begin{enumerate}[i.] \item $r: [0,1] \to [0,1]$, \item $r(1) = 1$ and $r(0) = 0$, \item $r$ is strictly increasing: $a < b$ implies $r(a) < r(b)$, \item $r$ is continuous, \end{enumerate} all of which correspond to natural constraints on the relationship between subjective probability and decision weight. Both subjective probability and decision weights are numbers between zero and one, the decision weight of a certain event should be maximal and the decision weight of an impossible event minimal, more probable events should get higher decision weights and the decision weight of an event should vary continuously with its subjective probability. If you think that it is rational to be sensitive towards risk in the way modeled by REU theory, this is a good reason to think that REU theory, and not EU theory, is the correct normative theory of decision making. However, I will not defend this claim here. I am merely committed to the weaker claim that REU theory is more adequate than EU theory when it comes to \emph{describing} and \emph{interpreting} the behavior of real-world agents.\footnote{See \cite{Buchak2016} for a discussion of the difference between normative, descriptive and interpretative uses of decision theory.} Like Ramsey, I am interested in REU theory as a `general psychological theory'. \subsection{Limits of Ramsey's Method} Given that the goal of Ramsey is to measure the beliefs of real-world agents, it would be great if we could use Ramsey's method to measure the subjective probabilities of REU maximizers. However, as it stands, Ramsey's method does not work for REU maximizers.
Our example above shows that while Ramsey's method works for agents like Alice, who are indifferent towards risk, it does not work for risk averse agents like Bob. According to Ramsey's method, the subjective probability that Bob ascribes to the event that the coin comes up heads is .1, but that is not the right prediction. From the perspective of REU theory, the problem is that Ramsey's method conflates subjective probabilities and betting odds, which are the \emph{joint product} of subjective probability and attitude towards risk. If you are sensitive to risk, then your betting odds are not identical to your subjective probabilities, so measuring your subjective probabilities by looking at your betting odds is a bad idea. We can make this point a bit more precise. Suppose we want to use Ramsey's method to measure which subjective probability a REU maximizer ascribes to event $E$. We first find outcomes $b,m,w$ which satisfy Ramsey's conditions. Then, we use the assumption that our agent maximizes REU to infer that \begin{equation*} u(m) = u(w) + r(p(E))(u(b) - u(w)), \end{equation*} so $r(p(E)) = \frac{u(m) - u(w)}{u(b) - u(w)}$. However, now we are stuck. We know $r(p(E))$, the decision weight of event $E$, which is the joint product of subjective probability and attitude towards risk. Note that $r(p(E))$ is not equal to $p(E)$, unless we are considering an agent who is indifferent towards risk. So the question is: \emph{How can we separate the distinct contributions of risk attitude and subjective probability?} In the rest of this paper, I sketch an answer to this question. The core idea of my proposal is a method to measure the risk attitude of a REU maximizer by their observable preferences among gambles, similar to Ramsey's method to measure the subjective probabilities of an EU maximizer by their observable preferences among gambles. Note that my project is different from a representation theorem for REU theory. 
Buchak \cite{Buchak2013} proves a representation theorem for REU theory, which shows that any preference ranking on gambles which satisfies certain conditions can be represented as REU maximization relative to a unique probability function $p$, a unique risk function $r$ and a utility function $u$ which is unique up to positive linear transformation. However, this representation theorem does not tell us how to \emph{construct} the risk function and probability function of a REU maximizer from their observable behavior. This is my goal in this paper. I assume that an agent is a REU maximizer and then use this fact to `reverse engineer' their risk function and probability function from their observable behavior. \section{Measuring Risk Attitudes} I now turn to explain how we can measure the risk function of an REU maximizer. I end by explaining how we can leverage this method to measure the subjective probabilities of a REU maximizer. \subsection{Ramsey Meets Risk} My goal is the following. Assuming \begin{enumerate} \item an agent's preference ranking among gambles is representable as REU maximization, \item we already determined our agent's utility function $u$ (up to positive linear transformation),\footnote{Note that we can determine the utility function of a REU maximizer in a similar way to Ramsey's approach to determine the utility function of an EU maximizer. We can write down a certain condition on gambles involving the outcomes $a,b,c,d$, viz. comonotonic tradeoff consistency, which ensures that $u(a) - u(b) = u(c) - u(d)$. See \cite[p. 103]{Buchak2013}.} \end{enumerate} I describe a method to measure our agent's attitude towards risk by uniquely determining their risk function $r$. As explained above, the risk function maps subjective probabilities to decision weights. Now we already have a method for measuring the decision weight of an arbitrary event. As explained above, this is what Ramsey's original method does in the context of REU theory.
So to measure our agent's risk function, we need some independent way to find events with known subjective probabilities. Then, we can use Ramsey's method to measure the decision weight of these events, and so infer the mapping from subjective probabilities to decision weights. It is important to observe that to determine our agent's risk function $r$ uniquely, it suffices to determine the decision weights of all events with \emph{rational-valued} probabilities, that is, to determine $r(a)$ for all $a \in [0,1]\cap \mathbb{Q}$. This is because the risk function $r$ is continuous, and any real-valued continuous function is uniquely determined by its value on all rational points.\footnote{This, in turn, is because the rationals are dense in the reals: There is a rational number between every two distinct real numbers \cite[p. 20]{Abott2001}.} This reduces our problem to the following: We need an independent way to find events with known rational-valued subjective probabilities. Then, we can measure the decision weights of those events by Ramsey's method, and so determine $r(a)$ for all $a \in [0,1]\cap \mathbb{Q}$, and so determine $r$ uniquely. \subsection{The Method of Fair Lotteries} Our goal is to find events with known rational-valued subjective probability. To achieve this goal, we look back at Ramsey, who shows us how to find an event with subjective probability $\frac{1}{2}$.\footnote{See \cite[p. 19]{Ramsey1926}. This is \emph{en route} to Ramsey's derivation of utilities, which we don't focus on here.} First, we find two outcomes $b$ and $w$ such that our agent strictly prefers $b$ over $w$. Then, we find an event $E$ which satisfies the following condition: \begin{equation*} \{E, b ; \textrm{not-}E, w\} \sim \{E, w ; \textrm{not-}E, b\}, \end{equation*} so our agent is indifferent between a gamble which yields $b$ if $E$ happens and $w$ otherwise and another gamble which yields $b$ if $E$ doesn't happen and $w$ otherwise. 
This means that our agent doesn't care whether the `good prize' is on $E$ or not-$E$. Ramsey then uses the assumption that the agent is an EU maximizer to infer that $p(E) = \frac{1}{2}$. For our purposes, the crucial observation is that the assumption of EU maximization is not essential to this argument. It also goes through with the weaker assumption that the agent is a REU maximizer. In fact, the assumption required to make the argument work is even weaker: \begin{quote} \textbf{Better Prize Condition.} If an agent strictly prefers $b$ to $w$ and is indifferent between the gamble $\{E, b ; \textrm{not-}E, w\}$ and the gamble $\{E, w; \textrm{not-}E, b\}$, then our agent thinks that $E$ and not-$E$ are equally probable: $p(E) = p(\textrm{not-}E)$. \end{quote} This assumption is entailed by both EU maximization and REU maximization. The important observation is that while decision weights are not the same as probabilities, the fact that two events have \emph{equal} decision weight implies that they have \emph{equal} probability.\footnote{Here is the argument: Since $\{E, b ; \textrm{not-}E, w\} \sim \{E, w; \textrm{not-}E, b \}$, we have $u(w) +r(p(E))(u(b) - u(w)) = u(w) +r(p(\textrm{not-}E))(u(b) - u(w))$, so $r(p(E)) = r(p(\textrm{not-}E))$. Since $r$ is strictly increasing, and so injective, $p(E) = p(\textrm{not-}E)$.} Therefore, if an agent does not care whether the `good prize' is on event $E$ or not-$E$, our agent must think that the event $E$ and the event not-$E$ are equally probable, even if our agent is sensitive to risk. Intuitively, this is because the choice is between two gambles -- there are no certainties to be had. So whether or not you are sensitive to risk, you should prefer the gamble which has the better prize on the more probable event. Further, the only way for $E$ and not-$E$ to be equally probable is if both events have probability $\frac{1}{2}$.
Therefore, we can use Ramsey's idea to find an event with subjective probability $\frac{1}{2}$. Now the crucial point is the following: We can generalize Ramsey's idea to all rational numbers, using what I call the \emph{method of fair lotteries}. Pick any rational number $a$ in the $[0,1]$ interval, so $a \in [0,1]\cap \mathbb{Q}$. We want to find some event $E$ with $p(E) = a$. Because $a$ is a rational number, we have $a = \frac{k}{n}$ for some natural numbers $k$ and $n$, where $k \leq n$. We can think of an event with probability $\frac{k}{n}$ in the following way: It is the event that one of the first $k$ tickets in a fair lottery with $n$ tickets in total wins. So to solve our problem, we have to find a lottery with $n$ tickets which our agent considers to be fair. We can model a fair lottery with $n$ tickets as a partition $\{E_1, ... ,E_n\}$ of our state space $\mathscr{S}$ into $n$ equiprobable events. So, given a partition with $n$ events, how do we find out whether our agent considers all the events in the partition to be equally probable? Here is an answer to this question, generalizing Ramsey's idea for how to find an event with subjective probability $\frac{1}{2}$. First, we find outcomes $b$ and $w$ such that our agent strictly prefers $b$ over $w$. Now, we check whether our agent is indifferent between a gamble which yields $b$ if $E_i$ happens and $w$ otherwise, and a gamble which yields $b$ if $E_j$ happens and $w$ otherwise, for all distinct events $E_i$ and $E_j$ in our partition. This means that our agent does not care whether the `good prize' is on the first event in the partition, the second event in the partition and so on. 
Now, we assume the following: \begin{quote} \textbf{Generalized Better Prize Condition.} If an agent strictly prefers $b$ to $w$ and is indifferent between the gambles $\{E_1, b ; \textrm{not-}E_1, w\}$ and $\{E_2, b; \textrm{not-}E_2, w\}$, then our agent thinks that $E_1$ and $E_2$ are equally probable: $p(E_1) = p(E_2)$. \end{quote} Again, this assumption is entailed by both EU maximization and REU maximization.\footnote{Here is the argument: Since $\{E_1, b ; \textrm{not-}E_1, w\} \sim \{E_2, b; \textrm{not-}E_2, w \}$, we have $r(p(E_1)) = r(p(E_2))$. Since $r$ is strictly increasing, and so injective, $p(E_1) = p(E_2)$.} Then, we use this assumption to infer that our agent must consider all events to be equally probable, that is, $p(E_i) = p(E_j)$ for all events $E_i$ and $E_j$ in our partition. Again, this is quite intuitive: If our agent doesn't care whether the good prize is on event $E_1$ or $E_2$ or ... or $E_n$, our agent must consider all events in the partition to be equally probable, even if our agent is sensitive to risk. Now that we have our fair lottery $\{E_1, ... ,E_n\}$, we also have an event with probability $\frac{k}{n}$ with $k \leq n$. This is simply the event that one of the first $k$ tickets wins: $\bigcup_{i=1}^k E_i = E_1 \cup ... \cup E_k$. Since all events in the partition are equiprobable, we have $p(E_i) = \frac{1}{n}$ for all $i \leq n$. Further, all events in our lottery are disjoint, so \begin{equation*} p(\bigcup_{i=1}^k E_i) = \sum_{i=1}^k p(E_i) = \frac{k}{n}. \end{equation*} Therefore, we have found our desired event $E$ with probability $p(E) = \frac{k}{n} = a$. At this point, we can use Ramsey's method to determine $r(a)$. First, we find outcomes $b,m,w$ which satisfy Ramsey's condition, so \begin{enumerate}[i.] 
\item our agent strictly prefers $b$ over $w$: $w \prec b$, \item our agent is indifferent between getting $m$ for certain and a gamble which yields $b$ if $E$ happens and $w$ otherwise: $m \sim \{b, E; w, \textrm{not-}E\}$. \end{enumerate} Then, we infer that $r(p(E)) = \frac{u(m) - u(w)}{u(b) - u(w)}$. Therefore, we know that $r(a) = \frac{u(m) - u(w)}{u(b) - u(w)}$. Given the assumption that we have already determined our agent's utility function, we can compute $r(a)$. This is the decision weight our agent ascribes to events with probability $a = \frac{k}{n}$. Thus, we have a method to find decision weights for events with any rational-valued probability. As explained earlier, this gives us enough information to determine our agent's risk function $r$ uniquely. Therefore, we can measure the risk function of a REU maximizer by their observable preferences among gambles. I have described a method to measure our agent's attitude towards risk. This method relies on `fair lotteries': partitions of our event space into $n$ equiprobable events. Two remarks are in order. First, this method is not an algorithm for finding events with any rational-valued probability. This is because I have merely described a method for checking when our agent considers a partition to be a fair lottery, and no general method for finding such partitions. However, Ramsey does not provide an algorithm for finding outcomes which satisfy Ramsey's conditions either. Therefore, the lack of an algorithm does not make our method worse than Ramsey's original method. Second, our method assumes that there are partitions of our agent's event space into arbitrarily many equiprobable events. Note that this goes beyond Ramsey's original assumptions. However, partitions of our agent's event space into arbitrarily many equiprobable events have a natural interpretation: they correspond to fair lotteries with arbitrarily many tickets.
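The procedure above can be sketched numerically. Step 1 reads off $r$ at rational points from indifference data over fair-lottery events; step 2 inverts $r$ by bisection, which is possible because $r$ is continuous and strictly increasing. All numerical observations and the quadratic risk function below are hypothetical, chosen only for illustration.

```python
# Step 1: r(k/n) = (u(m) - u(w)) / (u(b) - u(w)), where m is the certain
# amount the agent considers as good as {b if E, w otherwise} and E is the
# fair-lottery event "one of the first k of n tickets wins", so p(E) = k/n.

def risk_weight(u_b, u_m, u_w):
    return (u_m - u_w) / (u_b - u_w)

# Hypothetical indifference points for a linear-utility agent, b = $1, w = $0.
observed_m = {(1, 4): 0.0625, (1, 2): 0.25, (3, 4): 0.5625}
r_points = {k / n: risk_weight(1.0, m, 0.0) for (k, n), m in observed_m.items()}
# These points happen to fit the hypothetical risk function r(p) = p**2.

# Step 2: map a measured decision weight w back to a probability by bisection,
# exploiting that r is continuous and strictly increasing on [0,1].
def invert_r(r, w, tol=1e-10):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if r(mid) < w:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = lambda q: q ** 2
print(invert_r(r, 0.25))  # ~0.5: a decision weight of 0.25 came from p(E) = 0.5
```

Once $r$ is pinned down on the rationals, and hence by continuity everywhere, `invert_r` realizes the map $r^{-1}$ needed to disentangle subjective probability from risk attitude.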
To find such partitions, we just have to find lotteries with arbitrarily many tickets which our agent considers to be fair. I admit that assuming the existence of such lotteries is still a considerable idealization. But in the serious business of decision theory, we have to make some idealizations. The important point is that, when it comes to idealizations in decision theory, assuming the existence of fair lotteries is a pretty mild one.\footnote{Due to a result by \cite{WS1922}, we know that such partitions exist if our agent's subjective probability function is \emph{countably additive} and \emph{non-atomic}. Also see \cite[p. 36]{Billingsley1995}. Relatedly, note that the existence of fair lotteries (or something stronger) has been quite frequently assumed to get from a qualitative ordering of events by their comparative probability to a unique probability measure which represents this ranking. See, for example, \cite{deFinetti1937, Koopman1940} and in particular the discussion in \cite[p. 38-39]{Savage1954}.} \subsection{From Risk Attitude to Subjective Probability} I close by explaining two different ways in which we can leverage the method of fair lotteries to measure the subjective probabilities of an REU maximizer. First observe that the method of fair lotteries already gives us a straightforward way to measure all \emph{rational-valued} subjective probabilities. If our agent ascribes some rational probability value $\frac{k}{n}$ to event $E$, we can discover this by observing that our agent considers the event $E$ to be equally probable as the event that one of the first $k$ tickets in a fair lottery with $n$ tickets wins. Note, however, that this doesn't quite work for irrational-valued subjective probabilities. If our agent ascribes an irrational probability value to event $E$, such as $\frac{1}{\pi}$, then $E$ is not equally probable to any fair lottery event. 
Yet we can still use the method of fair lotteries to find pairs of events which are more and less probable than $E$, and then `squeeze' these two events closer and closer to each other. We obtain the probability of $E$ as the limit point of this process. Thus, we can use the method of fair lotteries to measure all subjective probabilities of an REU maximizer, although in a less constructive way than Ramsey's original method. Second, once we have measured our agent's attitude towards risk, we can use this information to recover subjective probabilities from decision weights. Suppose we want to find out which subjective probability our agent ascribes to event $E$. We find outcomes $b,m,w$ which satisfy Ramsey's condition. Like Ramsey, we have to assume that our agent's outcome space is sufficiently rich to contain such outcomes for any event $E$ whose probability we want to measure. Then we use Ramsey's method, together with the assumption that our agent maximizes REU, to infer that $r(p(E)) = \frac{u(m) - u(w)}{u(b) - u(w)}$. Now since we know the risk function $r$, we can disentangle subjective probabilities and attitudes towards risk: \begin{equation*} p(E) = r^{-1}(r(p(E))) = r^{-1}\left(\frac{u(m) - u(w)}{u(b) - u(w)}\right). \end{equation*} Thus, we have a method to measure the subjective probabilities of an REU maximizer. The catch is that we first have to determine their risk function, which will take infinitely many steps. For this reason, this second method is also less constructive than Ramsey's original method. \section{Conclusion} Ramsey shows how we can measure the subjective probabilities of an agent by their observable preferences among gambles, assuming that the agent is an EU maximizer and we have enough information about their utility function. In this paper, I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: REU maximizers.
In particular, I show how we can measure the risk attitudes of an agent by their observable preferences among gambles, assuming that the agent is an REU maximizer and we have enough information about their utility function. I further explain how we can leverage this method to measure the subjective probabilities of an REU maximizer. Since EU theory is a special case of REU theory, our method also works for EU maximizers. Thus, we have a method for measuring both risk attitudes and subjective probability which is strictly more general than Ramsey's original method. In a nutshell, the upshot is this: When we are using decision theory as a tool of hermeneutics, a method of making sense of each other, moving to a more permissive decision theory allows us to make sense of more kinds of agents. Ramsey's original insight was that we can use decision theory to get from people's overt behavior to an interpretation of their mental states, their beliefs and desires. I hope to have demonstrated that we can extend this great insight beyond the narrow confines of orthodox expected utility theory. This doesn't show that being sensitive to risk is \emph{rational} -- but it shows that we can make sense of it. The scope of `decision theoretic hermeneutics' includes agents which are sensitive to risk. Let me close with two open questions which I hope to address in future work. First, our method only works if the utility function of the agent is already known. But we would also like to measure beliefs and risk attitudes if we have \emph{no information} about an agent's utility function. Thus, it would be great to get rid of the assumption that the utility function is known. Second, our method only works if the agent's outcome space is sufficiently rich -- for any event $E$ whose probability we want to measure, we need to find outcomes which satisfy Ramsey's condition. However, it would be great to extend our method to agents who do not make such fine-grained distinctions between outcomes.
Note that both of these limitations also apply to Ramsey's original method. Thus, we need to generalize Ramsey even further than I have done here. \bigskip \textbf{Acknowledgments.} Thanks to audiences at the 7th CSLI Workshop on Logic, Rationality \& Intelligent Interaction at Stanford in 2018 and the 2019 Formal Epistemology Workshop in Turin, where earlier versions of this material were presented. Special thanks to Lara Buchak, Mikayla Kelley, Krzysztof Mierzewski and Yifeng Ding for helpful comments and discussion. \bibliographystyle{eptcs}
\section{Introduction}\label{sec:introduction} Recent years have shown a growing interest in the field of autonomous vessels. This is due to the variety of challenging applications where autonomous systems can be advantageous compared to human-operated vessels, but also because of the complex problems that arise, involving environmental disturbances and nonlinear vessel dynamics, see \cite{Streng2020}. Along with classical path-following scenarios using PID controllers as shown in \cite{Barslett2018}, more advanced approaches such as Lyapunov-based methods involving, e.g., passivity and backstepping techniques were applied in \cite{Fossen2002a,Breivik2004,Do2006,Do2009a,Fossen2011}. Furthermore, exact feedback linearization and differential flatness were exploited in \cite{Agrawal2004a,DeAquinoLimaverdeFilho2013,Paliotta2018}. In general, the mentioned approaches are not able to handle input and state constraints. To deal with such issues, a third branch has emerged which utilizes optimization-based techniques, see, e.g., \cite{Bitar2018,Bitar2019,Lekkas2016}. Essentially, optimization-based methods aim to minimize a cost functional depending on the control inputs subject to the system dynamics and additional equality and inequality constraints. Methods to solve this optimal control problem (OCP) can be characterized as indirect or direct, where the former leads to a two-point boundary value problem and the latter directly minimizes the cost functional by suitable discretization. While nonlinear and optimization-based techniques constitute independent methods, their combination can lead to increased performance and reduced complexity. This combined approach goes back to \cite{Agrawal1998} and was further extended to the class of differentially flat systems, e.g. in \cite{Milam2000}. Herein, the so-called flat outputs are parameterized with B-spline functions to obtain an OCP, where the constraint imposed by the system dynamics is implicitly fulfilled.
Therefore, this constraint can be omitted in the problem setup. Subsequent discretization in time transfers the OCP to a static optimization problem (direct method). This approach has already been used for fuel optimization in hybrid electric drives and trajectory generation for quadrocopters, see \cite{Abel2015} and \cite{Abel2016}, respectively. In this contribution, the combined flatness and optimization approach is extended and applied to an underactuated surface vessel model. In \cite{Agrawal2004a} the flatness of the considered model is verified under restrictive assumptions on the model parameters. Moreover, the resulting flat state and input parameterizations contain several singularities, which severely restrict their applicability. To address this, the so-called defect elimination method is used in the following, as suggested, e.g., in \cite{Oldenburg2002}. Utilizing this approach, the underactuated dynamics is recovered from the singularity-free flat parameterization obtained for a fully actuated vessel model. This comes at the cost of an additional equality constraint that must be imposed on the parameterized, non-controllable input. For practical reasons, however, this equality constraint is replaced by two inequality constraints. This approach is evaluated for driving and mooring maneuvers in confined environments based on closed-loop MPC subject to wind-induced disturbances. Herein, the maneuver is separated into two phases. The first phase will be referred to as the driving phase, in which the ship geometry is not explicitly considered when evaluating obstacle collisions. Subsequently, the second phase will be referred to as the mooring phase, where the ship geometry is approximated to ensure obstacle avoidance for the entire ship hull. The paper is organized as follows. The vessel model is introduced in Section \ref{sec:surface_vessel_model} together with its flat state and input parameterization.
Section \ref{sec:optimal_control} describes the general form of an OCP and introduces the approach used for obstacle modeling with constructive solid geometry (CSG) functions. Additionally, the flatness-based direct solution method is described by briefly introducing the main properties of B-spline functions and formulating their connection to flat outputs. To account for wind-induced disturbances, the extension to MPC is proposed in Section \ref{sec:model_predictive_control}. Subsequently, a two-phase MPC is presented together with short remarks on the disturbance model used, which is assumed to be unknown to the MPC. Finally, Section \ref{sec:simulation_results} shows simulation results and the paper closes with some conclusions in Section \ref{sec:conclusion}. \section{Surface vessel model}\label{sec:surface_vessel_model} \begin{figure}[t] \begin{center} \includegraphics{figures/pdf/vessel_schematic.pdf} \caption{Vessel position and orientation in NED frame and velocities in body-fixed frame for 3DOF surface vessel.} \label{fig:vessel_schematic} \end{center} \end{figure} Assuming that the vessel operates in calm sea conditions, e.g., in harbor areas or near-shore shipping applications, roll, pitch and heave velocities can be neglected. This results in a three degrees of freedom (3DOF) description of a surface vessel for which two sets of coordinates are required. The first set $\veta=\transpose{[x\ y\ \psi]}$ describes the vessel location and pose in the North-East-Down (NED) frame with origin $0_r$, where $x$ corresponds to the north and $y$ to the east coordinate. The third component $\psi$ describes the vessel orientation w.r.t. the north axis. This set of coordinates is a reference frame for the second set of coordinates $\vnu=\transpose{[u\ v\ r]}$ which represents the vessel's surge and sway velocities as well as its yaw rate in a body-fixed coordinate frame, respectively. These relations can be observed in Fig. \ref{fig:vessel_schematic}.
\subsection{Vessel dynamics} By applying Newton's second law, the equations of motion for a surface vessel can be described using matrix-vector notation, see \cite{Fossen2002a}, in the form \begin{subequations} \label{eq:dxdt_robot} \begin{align} \label{eq:rpsi} \vetad &= R(\psi)\vnu\\ \label{eq:dnudt} M\vnud &= -\big(\cnu+\dnu\big)\vnu + B_{\tau}\vtau_{\textit{c}} + \vtau_{\text{w}} \end{align} \end{subequations} where \begin{align} \rpsi = \rpsimat \end{align} is the rotation matrix and \begin{align} M \!=\!\! \begin{bmatrix} m_{11} & 0 & 0 \\ 0 & m_{22} & m_{23}\\ 0 & m_{32} & m_{33} \end{bmatrix} \!\!=\!\! \begin{bmatrix} m\!-\!X_{\dot{u}} &0 & 0\\ 0 & m\!-\!Y_{\dot{v}} & mx_g \!-\! Y_{\dot{r}}\\ 0 & mx_g \!-\!N_{\dot{v}} & I_{zz} \!-\! N_{\dot{r}}\\ \end{bmatrix} \end{align} describes the mass matrix with vessel mass $m$, hydrodynamic derivatives in SNAME notation $X_{\dot{u}},Y_{\dot{v}},Y_{\dot{r}},N_{\dot{v}},N_{\dot{r}}$, distance $x_g$ of the origin $0_b$ to the center of gravity on the $x_b$-axis, and moment of inertia $I_{zz}$. Coriolis and centripetal effects are included in the matrix \begin{align} \cnu = -\transpose{\cnu}=\begin{bmatrix} 0 & 0 & c_{13}\\ 0 & 0 & c_{23}\\ -c_{13} & -c_{23}&0 \end{bmatrix}, \end{align} where \begin{align*} c_{13} = -m_{22}v-\frac{m_{23}+m_{32}}{2}r,\quad c_{23} = m_{11}u. \end{align*} The damping matrix \begin{align} \dnu = -\begin{bmatrix} X_u\!+\!X_{\abs{u}u}\abs{u} & 0 & 0\\ 0& Y_v\!+\!Y_{\abs{v}v}\abs{v} & Y_r\\ 0& N_v&N_r\!+\!N_{\abs{r}r}\abs{r}\\ \end{bmatrix} \end{align} combines linear damping terms $X_u,Y_v,Y_r,N_v,N_r$ and nonlinear second-order modulus model terms $X_{\abs{u}u}$, $Y_{\abs{v}v}$, $N_{\abs{r}r}$. For an underactuated surface vessel, the control input $\vtau_{\textit{c}}=\transpose{[\tau_u\ \tau_r]}$ enters via the actuator configuration matrix \begin{align} B_{\tau}=\begin{bmatrix} 1 & 0\\ 0&0\\0&1 \end{bmatrix}.
\end{align} The vector $\vec{\tau}_{\text{w}}$ describes wind-induced disturbances. For a compact notation, the state vector $\vx = \transpose{[\transpose{\veta}\ \transpose{\vnu}]}\inRn{n}$, where $n=6$ is the number of states, and input vector $\vu=\vtau_{\text{c}}\inRn{m}$, where $m=2$ is the number of inputs, are defined such that \eqref{eq:dxdt_robot} can be rewritten in nonlinear input-affine form \begin{subequations} \begin{align} \label{eq:dxdt} \vxd = \vfvx + B\vu + \overline{\vtau}_{\text{w}}, \qquad t>0,\quad \vxnn=\vxn \end{align} where \begin{align} \vfvx &= \begin{bmatrix} \rpsi\vnu\\ -M^{-1}\big(\cnu+\dnu\big)\vnu \end{bmatrix},\\ B &= \begin{bmatrix} \vec{0}^{(3\times m)}\\ M^{-1}B_{\tau} \end{bmatrix},\\ \overline{\vtau}_{\text{w}} &= \begin{bmatrix} \vec{0}^{(3\times 1)}\\ M^{-1}\vtau_{\text{w}} \end{bmatrix}. \end{align} \end{subequations} \subsection{Differential flatness} In the following, the differential flatness of the vessel model is shown. Theoretical background concerning differential flatness is provided in, e.g., \cite{Fliess1995,Rothfuss1997,Fliess1999}. The flat parameterization of the underactuated vessel model shows several singularities, see \cite{Agrawal2004a}. Therefore, a fully actuated model with $\vu=\vtau^\prime_{\textit{c}}=\transpose{[\tau_u\ \tau_v\ \tau_r]}$ and $B = \transpose{[\vec{0}^{(m\times 3)}\ \transpose{(M^{-1}B^{\prime}_{\tau})}]}$ where $B^\prime_{\tau}=I^{(3\times 3)}=\text{diag}\{1,1,1\}$ is assumed. Furthermore, the disturbance term in \eqref{eq:dxdt} is neglected so that $\overline{\vec{\tau}}_{\textit{w}}=0$. Choosing the flat output $\vz=\veta=\transpose{[x\ y\ \psi]}$, the states and inputs can be differentially parametrized in the form \begin{subequations} \label{eq:flatness_ship_theta_x_theta_u} \begin{align} \label{eq:flatness_ship_theta_x} \vx &\!=\! \vec{\theta}_{\vx}\big(\vz,\vzd,\hdots,\vec{z}^{(\vec{\beta}-\vec{1})}\big) \!=\! 
\begin{bmatrix} z_1\\ z_2\\ z_3\\ \sin(z_3)\dz_2+\cos(z_3)\dz_1\\ \cos(z_3)\dz_2-\sin(z_3)\dz_1 \\ \dz_3 \end{bmatrix}\\ \label{eq:flatness_ship_theta_u} \vu &\!=\! \vec{\theta}_{\vu}\big(\vz,\vzd,\hdots,\vec{z}^{(\vec{\beta})}\big) \!=\! \begin{bmatrix} \theta_{\tau_u}\\ \theta_{\tau_v}\\ \theta_{\tau_r} \end{bmatrix}, \end{align} \end{subequations} with $\vec{\beta}=(2 \ 2 \ 2)$. The terms $\theta_{\tau_u}$, $\theta_{\tau_v}$, and $\theta_{\tau_r}$ are provided in Appendix \ref{app:inputparam}. It becomes apparent that no singularities arise in \eqref{eq:flatness_ship_theta_x_theta_u}. To recover the original underactuated vessel dynamics from the flat parameterization of the fully actuated vessel, it is necessary to impose the constraint \begin{align} \label{eq:flatness_constraint} \theta_{\tau_v}=0, \end{align} which induces an ODE in the components of $\vz$. In principle, this ODE can be interpreted as the internal dynamics, see, e.g., the analysis in \cite{Rothfuss1996}. For the considered OCP, \eqref{eq:flatness_constraint} is included by means of two inequality constraints to be fulfilled in terms of the decision variables. \subsection{Model parameters}\label{sec:model_parameter} The vessel parameters are taken from \cite{Do2006} for a model ship and are summarized in Tab. \ref{tab:vessel_param}. Therein, $L_S$ and $W_S$ are the vessel length and width, respectively. The inputs are constrained according to \begin{subequations} \begin{align} -\SI{5}{\newton}&\leq\tau_u\leq\SI{5}{\newton},\\ -\SI{0.2}{\newton\meter}&\leq\tau_r\leq\SI{0.2}{\newton\meter}.
\end{align} \end{subequations} \begin{table}[ht] \begin{center} \captionsetup{width=\textwidth} \caption{Vessel parameters}\label{tab:vessel_param} \ra{1.2} \begin{tabular}{l S[table-format=2.2, table-column-width=0.35cm] l S[table-format=2.1,table-number-alignment = center,table-column-width=0.35cm] l S[table-format=1.1,table-number-alignment = center,table-column-width=0.3cm] l S[table-format=2.2,table-number-alignment = center,table-column-width=0.6cm,table-align-text-post = false] l S[table-format=1.1,table-number-alignment = center,table-align-text-post = false]} \toprule \multicolumn{2}{l}{\multirow{2}{*}{Mass matrix}} & \multicolumn{4}{l}{Damping matrix} & \multicolumn{2}{l}{\multirow{2}{*}{Vessel}}\\ & & \multicolumn{2}{l}{linear} & \multicolumn{2}{l}{nonlinear} & & \\ \midrule $M_{11}$&25.80&$X_u$ & -12.0&$X_{|u|u}$&-2.1&$L_S$ &1.20\ \si{\meter}&\\ $M_{22}$&33.80&$Y_{v}$&-17.0 &$Y_{|v|v}$&-4.5&$W_S$&0.35\ \si{\meter}&\\ $M_{23}$&6.20 &$Y_{r}$ &-0.2 &$N_{|r|r}$ &-0.1&$m$ &17.00\ \si{\kilogram}\\ $M_{32}$&6.20 &$N_{v}$&-0.5 & & & &\\ $M_{33}$&2.76&$N_{r}$ &-0.5 & & & &\\ \bottomrule \end{tabular} \end{center} \end{table} \section{Flatness-based optimal control}\label{sec:optimal_control} The aim of the proposed approach is to generate trajectories while also considering actuator constraints. In other words, a combined trajectory generation and motion control of the vessel is required while also taking into account confined environments for mooring maneuvers. In the following, CSG functions are discussed which can represent arbitrary shapes. These can be included in an OCP formulation. Furthermore, a flatness-based solution method for the OCP using B-splines is discussed. \subsection{Obstacle modeling} For obstacles of arbitrary shapes, CSG functions are used, see \cite{Ricci1972a}. These are based on geometric primitive functions $f^{\text{pr}}(\vx)$ such as ellipsoids, lines, and triangles.
In order to describe the surface $S$ of a shape mathematically, a function of the form \begin{align} \label{eq:csg_inequality} f^{{S}}(\vx)\leq 1 \end{align} can be formulated which combines several primitive shapes using the maximum operator, i.e. \begin{align} f^{{S}}(\vx) = \max\left\{f^{\text{pr}}_{1}(\vx),\hdots,f^{\text{pr}}_l(\vx)\right\}, \end{align} where $l$ is the number of primitive functions used to define the shape. Since the gradient of the maximum operator is not smooth, the approximation \begin{align} \begin{split} \max\{f^{\text{pr}}_1(\vx),\hdots,f^{\text{pr}}_l(\vx)\} \approx \big[&(f^{\text{pr}}_1(\vx))^p+\hdots\\ &+(f^{\text{pr}}_l(\vx))^p \big]^{\frac{1}{p}} \end{split} \end{align} is used, where the approximation quality increases with increasing $p\in\mathbb{N}$. In the following scenarios, rectangles are used to reflect confined areas. A rectangle can be constructed from two shifted and rotated parabolas, so that \begin{align} \label{eq:csg_rectangle} \begin{split} f_{\text{rect}}^{{S}}(\vx \vert \vec{r}) &= \Bigg[\bigg(\frac{\cos(\alpha)(x-\tilde{x}_{0}) + \sin(\alpha)(y-\tilde{y}_{0})}{d_x} \bigg)^{2p}\\ &+\bigg(\frac{-\sin(\alpha)(x-\tilde{x}_{0})+\cos(\alpha)(y-\tilde{y}_{0})}{d_y} \bigg)^{2p}\Bigg]^{\frac{1}{p}}, \end{split} \end{align} where the elements of $\vec{r}=\transpose{[\tilde{x}_0\ \tilde{y}_0\ d_x \ d_y \ \alpha \ p]}$ describe the center position, length, width, orientation, and approximation quality parameter in the reference frame.
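A direct transcription of the rectangle function \eqref{eq:csg_rectangle} as a minimal Python sketch; the parameter values are illustrative and not taken from the later scenario.

```python
import math

def f_rect(x, y, r):
    """CSG rectangle function of eq. (csg_rectangle); points with
    f_rect <= 1 lie inside the smoothly approximated rectangle.
    r = (x0, y0, dx, dy, alpha, p) as in the parameter vector r above."""
    x0, y0, dx, dy, alpha, p = r
    ca, sa = math.cos(alpha), math.sin(alpha)
    a = ( ca * (x - x0) + sa * (y - y0)) / dx   # rotated/shifted x-term
    b = (-sa * (x - x0) + ca * (y - y0)) / dy   # rotated/shifted y-term
    return (a ** (2 * p) + b ** (2 * p)) ** (1.0 / p)

# Axis-aligned rectangle centred at the origin, half-lengths dx=2, dy=0.5:
rect = (0.0, 0.0, 2.0, 0.5, 0.0, 12)
assert f_rect(0.0, 0.0, rect) <= 1.0   # centre lies inside
assert f_rect(3.0, 0.0, rect) > 1.0    # point beyond dx lies outside
```

Increasing $p$ sharpens the corners of the approximated rectangle at the cost of steeper gradients.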
\subsection{Problem formulation} In the following, the OCP for the considered system is expressed with \begin{subequations} \label{eq:ocp} \begin{align} \label{eq:ocp_J} &\min_{\vu} \ J(\vu) = \varphi(\tf,\vx(\tf))\\ \notag\text{s.t.}\\ \label{eq:ocp_dynamic_constraint} &\vxd = \vfvx + B\vu, \quad t>0, \quad \vx(0) = \vxn\\ \label{eq:ocp_final_constraint} &\vec{g}\big(\tf,\vx(\tf)\big)=\vec{0}\\ \label{eq:ocp_h} &\vec{h}(\vx)\leq\vec{0} \\ \label{eq:ocp_input_constraints} &\vu^{-} \leq \vu \leq \vu^{+}, \end{align} \end{subequations} where $J(\vu)$ represents the cost functional in Mayer form that is to be minimized, $\tf$ is the final time, and \eqref{eq:ocp_dynamic_constraint} denotes the ODE constraint imposed by the system dynamics with initial condition $\vxnn=\vxn$. Furthermore, terminal path constraints are included with \eqref{eq:ocp_final_constraint}, and state constraints imposed by obstacles are formulated with \eqref{eq:ocp_h}. Herein, $\vh(\vx)$ is obtained by rearranging \eqref{eq:csg_inequality} and including \eqref{eq:csg_rectangle} which yields $h_i(\vx)=1-f_{\text{rect},i}^S(\vx), i=1,\hdots,q$, where $q$ is the number of rectangular obstacles. Input constraints are expressed using \eqref{eq:ocp_input_constraints}, where $\vu^-$ and $\vu^+$ denote the lower and upper input bounds, respectively. \subsection{Flatness-based solution using B-splines} The ODE constraint \eqref{eq:ocp_dynamic_constraint} is implicitly fulfilled by the flat parameterization \eqref{eq:flatness_ship_theta_x_theta_u} of the system. Therefore, the differential flatness of the vessel system can be exploited when the OCP is formulated in flat coordinates, thereby eliminating the ODE constraint. Since the problem is still infinite-dimensional, it is convenient to parameterize the flat outputs using B-spline functions, which are unions of curve segments.
For this, consider the expansion \begin{align} \label{eq:z_approx_bsplines} z_j(t) \approx \hat{z}_j(t,\vec{p}_j) = \sum_{i=0}^{N_j}B_{i,D_j}(t)p_{i,j},\ \ \begin{split} t&\in[0,\tf],\\ j&=1,\hdots,m \end{split} \end{align} for the $j$th component of the flat output $\vz$. Herein, $B_{i,D_j}(t)$ are basis functions of order $D_j$ and the vector {$\vec{p}_j = \transpose{[p_{0,j}\ \hdots \ p_{N_j,j}]}$} summarizes the individual weights. In general, the ability to approximate complex function behavior is improved as $N_j$ is increased. For B-spline functions, the basis functions can be calculated recursively with the Cox-de Boor scheme, see \cite{Piegl2013}, i.e. \begin{subequations} \begin{align} \label{eq:bsplines_coxdeboor_Bi0} B_{i,0}(t) &= \begin{cases} 1, \qquad \text{for } t \in [u_i,u_{i+1})\\ 0, \qquad \text{else} \end{cases},\\ \begin{split} B_{i,j}(t) &= \frac{t - u_i}{u_{i+j} - u_i}B_{i,j-1}(t) \\ &\quad+ \frac{u_{i+j+1}-t}{u_{i+j+1}-u_{i+j}}B_{i+1,j-1}(t). \end{split} \end{align} \end{subequations} In the recursion formula it can be seen that the time horizon $t\in[0,\tf]$ is separated using a so-called knot vector \begin{equation} \begin{split} \hat{\vec{u}}_j&=\transpose{[u_{0,j}\ \hdots \ u_{M_j,j}]}\quad j=1,\hdots,m. \end{split} \end{equation} At the knot points, the curve segments are joined to form the B-spline function. As can be seen from the recursion, the $i$th basis function $B_{i,D_j}(t)$ that is weighted with $p_{i,j}$ for the $j$th flat output is nonzero on the interval $t\in[u_{i,j},u_{i+D_j+1,j})$. Thus, choosing \begin{align} \hat{\vec{u}}_j= \transpose{[\underbrace{0\ \hdots \ 0}_{D_j}\ 0 \hdots\ \tf \ \underbrace{\tf\ \hdots \ \tf}_{D_j}]}, \end{align} results in $B_{k,0}(0)=0$ for $k<D_j$ and only $B_{D_j,0}(0)=1$, so that \begin{align} \hat{z}_j(0,\vec{p}_j) &= p_{0,j}. \end{align} Similarly, this choice of the knot vector yields $\hat{z}_j(\tf,\vec{p}_j)= p_{N_j,j}$.
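The recursion and the endpoint property of the clamped knot vector can be checked with a small sketch (plain Python; degree and knot values are illustrative, and the usual $0/0 := 0$ convention handles repeated knots).

```python
def bspline_basis(i, d, t, knots):
    """Cox-de Boor recursion for the B-spline basis function B_{i,d}(t).
    `knots` is the (clamped) knot vector u_0, ..., u_M."""
    if d == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    den1 = knots[i + d] - knots[i]
    if den1 > 0.0:  # 0/0 := 0 convention for repeated knots
        out += (t - knots[i]) / den1 * bspline_basis(i, d - 1, t, knots)
    den2 = knots[i + d + 1] - knots[i + 1]
    if den2 > 0.0:
        out += (knots[i + d + 1] - t) / den2 * bspline_basis(i + 1, d - 1, t, knots)
    return out

# Clamped knot vector on [0, tf] with degree D = 3 and N + 1 = 5 basis functions
tf, D, N = 10.0, 3, 4
knots = [0.0] * D + [0.0, 5.0, tf] + [tf] * D  # repeated end knots (clamping)

# With a clamped knot vector the curve starts at the first control point:
basis_at_0 = [bspline_basis(i, D, 0.0, knots) for i in range(N + 1)]
assert abs(basis_at_0[0] - 1.0) < 1e-12
assert all(abs(b) < 1e-12 for b in basis_at_0[1:])
```

The basis functions also form a partition of unity on the interior of the horizon, which is what makes the control points behave like local weights.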
In this way, initial and final values (of the flat outputs) are parameterized using the control points $p_{0,j}$ and $p_{N_j,j}$, respectively. The parameter $M_j$ in $\hat{\vec{u}}$ can be determined with $M_j= D_j+N_j+1$. The flat parameterization requires derivatives of the flat outputs up to order $\vec{\beta}$. The $k$th order derivative of a B-spline function is given by \begin{align} \label{eq:z_dot_approx_bsplines} \hat{z}_j^{(k)}(t,\vec{p}_j) &=\sum_{i=0}^{N_j}B_{i,D_j}^{(k)}(t)p_{i,j},\ \ \begin{split} t&\in[0,\tf], \\ j&=1,\hdots,m, \end{split} \end{align} where \begin{equation} \begin{split} &B_{i,l}^{(k)}(t) = \frac{l}{u_{i+l}-u_i}B_{i,l-1}^{(k-1)}(t) \\ &-\frac{l}{u_{i+l+1}-u_{i+1}}B^{(k-1)}_{i+1,l-1}(t), \quad \begin{split}k&=1,\hdots,D_j-1\\l&=0,\hdots,D_j\end{split}. \end{split} \end{equation} This means that the derivative of a B-spline function is again a B-spline function but of lower degree. Each B-spline function is $D_j-2$ times continuously differentiable. To avoid numerical difficulties, $D_j$ should be chosen as small as possible, i.e. $D_j = \beta_j + 2$. For further properties of B-spline functions, see \cite{Piegl2013}. Substituting \eqref{eq:z_approx_bsplines}, \eqref{eq:z_dot_approx_bsplines} together with \eqref{eq:flatness_ship_theta_x_theta_u} into the OCP formulation \eqref{eq:ocp} yields an equivalent problem with the new (constant) decision variables \begin{align} \overline{\vec{p}} = \transpose{\begin{bmatrix} \transpose{\vec{p}}_1 & \hdots & \transpose{\vec{p}}_m \end{bmatrix}} \inRn{n_p}, \end{align} where $n_p=\sum_{j=1}^{m}N_j$ is the number of decision variables. Feasibility w.r.t. obstacle and input constraints \eqref{eq:ocp_h} and \eqref{eq:ocp_input_constraints}, respectively, is checked at collocation points, $t_k = kh,\ k=0,\hdots,N$, where $N+1$ is the number of collocation points and $t_0=0,\ t_N=\tf$. Consequently, a NLP is obtained. 
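Once the flat outputs and their derivatives are available from the B-spline expansion, the state trajectory follows pointwise from the parameterization \eqref{eq:flatness_ship_theta_x}. A minimal sketch of this evaluation at a single time instant (the function name is illustrative):

```python
import math

def theta_x(z, zdot):
    """State parameterization x = theta_x(z, z') for the flat output
    z = (x, y, psi) of the fully actuated vessel model: the body-fixed
    velocities are obtained by rotating the NED velocities."""
    z1, z2, z3 = z
    dz1, dz2, dz3 = zdot
    u = math.sin(z3) * dz2 + math.cos(z3) * dz1   # surge velocity
    v = math.cos(z3) * dz2 - math.sin(z3) * dz1   # sway velocity
    return [z1, z2, z3, u, v, dz3]                # full state (eta, nu)

# Heading north (psi = 0) while moving north (xdot = 1): pure surge.
x_state = theta_x((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
assert abs(x_state[3] - 1.0) < 1e-12 and abs(x_state[4]) < 1e-12

# Heading east (psi = pi/2) while moving east (ydot = 1): pure surge as well.
x_state = theta_x((0.0, 0.0, math.pi / 2), (0.0, 1.0, 0.0))
assert abs(x_state[3] - 1.0) < 1e-12 and abs(x_state[4]) < 1e-9
```

Evaluating this map at all collocation points, together with the input parameterization $\vec{\theta}_{\vu}$, turns the constraints of the OCP into algebraic conditions on the B-spline control points.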
\section{Model predictive control}\label{sec:model_predictive_control} In the following, the flatness-based OCP approach is extended to an MPC to compensate for wind-induced disturbances. This is done by repeatedly solving OCPs at discrete points in time with a step time of \mbox{$\Delta t=t_{\text{MPC}}=\text{const.}$} As a scenario, a combined driving and mooring maneuver is considered, with each phase resulting in a different OCP formulation. \subsection{Driving phase} In the first phase, the distance to a desired terminal position $(x_{\text{f}},y_{\text{f}})$ is minimized within the fixed MPC time horizon $\tf=t_{\text{hor}}$, i.e., \begin{align} J(\vu) = \varphi(\tf,\vx(\tf)) = (x(t_{\text{f}})-x_{\text{f}})^2 + (y(t_{\text{f}})-y_{\text{f}})^2, \end{align} with \begin{align} \vec{g}(\tf,\vx(\tf))={\emptyset}, \end{align} such that no terminal condition is imposed on the problem. In this way, the {closest point} w.r.t. the terminal position is the solution to the OCP. It can be assumed that no confined areas are passed by the vessel while driving, so that it is sufficient to use the origin $0_b$ of the body-fixed frame, i.e. $(x,y)$, to evaluate the obstacle functions \eqref{eq:ocp_h}. \subsection{Mooring phase} If the vessel origin is within a defined radius $R_{\text{s}}$ (switching point) of the desired terminal position after an arbitrary iteration, the cost functional is altered to minimize the transition time, i.e. \begin{align} J(\vu) = \varphi(\tf,\vx(\tf)) =\tf. \end{align} This requires the formulation of a terminal condition \begin{align} \vec{g}(\tf,\vx(\tf))=\vx(\tf)-\vx_{\text{f}}, \end{align} where $\vx_{\text{f}}$ is the arbitrary but fixed final state. In this phase, the vessel geometry is approximated as a rectangle and feasibility w.r.t. obstacles is ensured using four edge points of the rectangle.
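The resulting two-phase logic amounts to a simple switching rule. The following sketch uses the scenario values of Section \ref{sec:simulation_results} ($t_{\text{hor}}=\SI{15}{\second}$, $u_{\text{max}}=\SI[per-mode=symbol]{0.38}{\meter\per\second}$, $v_{\text{max}}\approx 0$); the function name is illustrative.

```python
import math

# Switching radius R_s = t_hor * sqrt(u_max^2 + v_max^2) with the scenario data
R_S = 15.0 * math.hypot(0.38, 0.0)   # = 5.7 m

def mpc_phase(x, y, x_f, y_f, r_s=R_S):
    """Select the OCP formulation: switch to the time-optimal mooring
    problem once the vessel origin is within the switching radius of the
    target; otherwise keep the distance-minimizing driving problem."""
    return "mooring" if math.hypot(x - x_f, y - y_f) <= r_s else "driving"

assert mpc_phase(3.5, 2.0, 2.4, 18.0) == "driving"   # initial position
assert mpc_phase(2.6, 17.0, 2.4, 18.0) == "mooring"  # close to the berth
```

The choice of $R_{\text{s}}$ guarantees that the target is reachable within one horizon once the switch occurs.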
\subsection{Wind-induced disturbances} The disturbances induced by wind $\vec{\tau}_{\text{w}}$ or $\overline{\vec{\tau}}_{\text{w}}$, respectively, are calculated according to \cite{Fossen2011} using a normally distributed wind direction $\beta_w \sim \mathcal{N}(\mu_{\beta},\sigma_{\beta})$ and an absolute wind velocity $V_{w,\text{abs}}\sim\mathcal{W}(k_{V}, \lambda_{V})$, where $k_V$ and $\lambda_V$ are shape and scale parameters of the Weibull distribution. With this, the forces and torque applied to the vessel can be calculated with \begin{align} \vec{\tau}_{\text{w}} = \frac{1}{2}\rho \big(V_{w,\,\text{rel}}\big)^2 \begin{bmatrix} C_XA_f \\ C_Y A_l\\ C_NA_lL_S \end{bmatrix}, \end{align} where $\rho$ is the air density, $V_{w,\text{rel}}$ is the relative wind velocity which, together with the coefficients $C_X,C_Y$, and $C_N$, depends on the absolute wind direction $\beta_w$ and speed $V_{w,\text{abs}}$. The parameters $L_S,A_f$ and $A_l$ are vessel length, projected frontal and lateral areas, respectively. \section{Simulation results}\label{sec:simulation_results} Simulation results are generated in \texttt{MATLAB}\ using \texttt{CasADi}\ with \texttt{IPOPT}\ as NLP solver, see \cite{Andersson2018} and \cite{Waechter2002}, respectively. The underactuated vessel dynamics using the flat parameterization of the fully actuated system is retained by taking into account \eqref{eq:flatness_constraint} which for numerical purposes is approximated by \begin{align} -\epsilon \leq \theta_{\tau_v} \leq \epsilon, \end{align} for $\epsilon\ll 1$. For the simulation, only the solutions of $\theta_{\tau_u}$, and $\theta_{\tau_r}$ are applied to the underactuated model. \begin{rem} Setting $\epsilon=0$ would result in $N+1$ equality constraints which reduces the number of free decision variables in the NLP potentially rendering it unsolvable. Choosing $\epsilon>0$ avoids this issue. 
\end{rem} Initial and terminal (desired) states are chosen to be \begin{subequations} \begin{align} \vxn &= \transpose{\left[3.5\ 2\ \frac{\pi}{2}\ 0 \ 0 \ 0 \right]},\\ \vx_{\text{f}} &= \transpose{\left[2.4\ 18\ 0\ 0 \ 0 \ 0 \right]}. \end{align} \end{subequations} Further, the switching point is chosen to be \begin{align} R_{\text{s}}=t_{\text{hor}}\sqrt{u_{\text{max}}^2+v_{\text{max}}^2}, \end{align} where $u_{\text{max}}=\SI[per-mode=symbol]{0.38}{\meter\per\second},v_{\text{max}}\approx\SI[per-mode=symbol]{0}{\meter\per\second}$ describe the maximum surge and sway velocity of the vessel, respectively. The fixed time horizon is set to $t_{\text{hor}}=\SI{15}{\second}$ in the driving phase. The MPC horizon is shifted by $t_{\text{MPC}}=\SI{1}{\second}$ in each iteration. Additionally, four obstacles are considered, where $h_i(\vx),i=1,2,3$ are relevant for the mooring maneuver and $h_4(\vx)$ affects the driving maneuver. Feasibility w.r.t. constraints is ensured at $N+1=200$ collocation points. Additional scenario parameters are summarized in Tab. \ref{tab:mpc_scenario_parameter}. The top view of the path, orientation, initial and final position, as well as the switching point are shown in Fig. \ref{fig:mpc_results_xy}. It can be seen that there is no collision with any obstacle. Figure \ref{fig:mpc_results_taus} shows the inputs with constraints marked by dashed red lines; the constraints are satisfied at all times. The remaining states are shown in Fig. \ref{fig:mpc_results_states} together with the switching time $t_\textit{s}=\SI{31}{\second}$. Sudden changes in the inputs can be explained by numerical issues and disturbances, which could push the vessel into the obstacles, resulting in feasibility issues for the NLP solver. This could be avoided using soft constraints as described in \cite{Scokaert1999a}.
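For reference, the wind disturbance model can be sketched as follows. The aerodynamic coefficients $C_X$, $C_Y$, $C_N$ are set to constant toy values here, whereas in \cite{Fossen2011} they depend on the relative wind angle; the distribution parameters are those of Tab. \ref{tab:mpc_scenario_parameter}.

```python
import random

RHO, A_F, A_L, L_S = 1.205, 0.35, 1.2, 1.2   # air density and vessel data

def sample_wind(mu_b=0.0, sigma_b=0.06, lam_v=0.194, k_v=2.0):
    """One wind draw: direction beta_w ~ N(mu, sigma) [rad] and absolute
    speed V_abs ~ Weibull(scale lambda_V, shape k_V) [m/s]."""
    return random.gauss(mu_b, sigma_b), random.weibullvariate(lam_v, k_v)

def tau_wind(v_rel, c_x, c_y, c_n):
    """Wind force/torque tau_w = 0.5*rho*V_rel^2*[C_X*A_f, C_Y*A_l, C_N*A_l*L_S].
    Constant coefficients are purely illustrative."""
    q = 0.5 * RHO * v_rel ** 2
    return [q * c_x * A_F, q * c_y * A_L, q * c_n * A_L * L_S]

random.seed(1)
beta_w, v_abs = sample_wind()
tau = tau_wind(v_abs, c_x=-0.5, c_y=0.7, c_n=0.05)  # toy coefficient values
assert v_abs >= 0.0 and len(tau) == 3
```

In the closed-loop simulation these samples enter the plant model only; the MPC itself is unaware of the disturbance.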
\begin{table}[tb] \begin{center} \captionsetup{width=.5\textwidth} \caption{Obstacle and wind parameters.} \label{tab:mpc_scenario_parameter} \ra{1.2} \begin{tabularx}{.48\textwidth}{l p{.5cm} p{.5cm} p{.5cm} p{.5cm} X l X l} \toprule \multicolumn{5}{c}{Obstacles} &\multicolumn{4}{c}{\multirow{2}{*}{Wind}}\\ &$\vec{r}_1$&$\vec{r}_2$ &$\vec{r}_3$ &$\vec{r}_4$&&&&\\ \midrule $\tilde{x}_0$ & 2 & 2 & 0.5 &3&$A_f$&\SI{0.35}{\square\meter}&$\mu_{\beta}$&\SI{0}{\radian}\\ $\tilde{y}_0$ & 17.575 & 18.575&16.325& 10 &$A_l$&\SI{1.2}{\square\meter}&$\sigma_{\beta}$&\SI{0.06}{\radian}\\ $d_x$ & 2 & 2 & 1 &1.5 & $L_S$ & \SI{1.2}{\meter} & $\lambda_V$&{0.194}\\ $d_y$ & 0.5 & 0.5 & 6 &1.5&&& $k_V$&2\\ $\alpha$ & 0 & 0 & 0 &{$\frac{\pi}{4}$}&&&$\rho$&\SI[per-mode=fraction]{1.205}{\kilogram\per\cubic\meter}\\ $p$ & 12 & 12 & 12&12 & &&&\\ \bottomrule \end{tabularx} \end{center} \end{table} \begin{figure}[tb] \begin{subfigure}{.5\textwidth} \includegraphics{figures/pdf/mpc_xy.pdf} \captionsetup{width=.885\textwidth} \subcaption{Simulated path with wind direction $\beta_w$, absolute wind speed $V_{w,\text{abs}}$, switching radius $R_{\text{d}}$, initial and final positions, and obstacles $h_i(\vx),i=1,\hdots,4$, as well as edge point paths in driving phase (red).} \label{fig:mpc_results_xy} \end{subfigure} \begin{subfigure}{.5\textwidth} \begin{minipage}{.5\textwidth} \includegraphics{figures/pdf/mpc_tauu.pdf} \end{minipage}% \begin{minipage}{.5\textwidth} \includegraphics{figures/pdf/mpc_taur.pdf} \end{minipage} \captionsetup{width=.885\textwidth} \subcaption{Inputs surge force and yaw torque with constraints (dashed red). 
The input $\tau_v$ is not explicitly shown here because it is forced to zero.}\label{fig:mpc_results_taus} \end{subfigure} \begin{subfigure}{.5\textwidth} \begin{minipage}{.5\textwidth} \includegraphics{figures/pdf/mpc_psi.pdf}\\ \includegraphics{figures/pdf/mpc_r.pdf} \end{minipage}% \begin{minipage}{.5\textwidth} \includegraphics{figures/pdf/mpc_u.pdf}\\ \includegraphics{figures/pdf/mpc_v.pdf} \end{minipage} \captionsetup{width=.885\textwidth} \subcaption{Orientation and velocities of the vessel.}\label{fig:mpc_results_states} \end{subfigure} \caption{Simulation results with optimal path (top), inputs (middle), and states (bottom) each with (blue) and without (black dotted) disturbances considering four rectangular obstacles.} \label{fig:mpc_results} \end{figure} \section{Conclusion}\label{sec:conclusion} In this paper, a flatness-based MPC for an underactuated nonlinear surface vessel model is introduced. The fully actuated system is shown to be differentially flat, so that the ODE constraint in the OCP can be removed. The flat outputs are parameterized using B-spline functions. A discretization in time of the OCP in flat coordinates allows the formulation of an NLP which can be solved numerically. The underactuated vessel dynamics are retained using inequality constraints imposed on the non-controllable input, and obstacles are included in the OCP using CSG functions, which can approximate arbitrary shapes. The concept is evaluated in a two-phase simulation scenario resulting in different OCP formulations. Future work focuses on real-time feasibility, which can be achieved by approximating the highest-order derivative of each flat output and subsequent integration, thus avoiding the recursive computation of basis functions as shown in \cite{Oldenburg2002}. Further work also focuses on soft constraints and on extending the concept to include collision avoidance regulations (COLREGS).
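To illustrate the B-spline parameterization of the flat outputs mentioned above, here is a minimal pure-Python sketch of the Cox--de Boor recursion; the degree, knot vector, and coefficient values are illustrative and not taken from the paper:

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        val += (t - knots[i]) / denom * bspline_basis(i, p - 1, t, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        val += (knots[i + p + 1] - t) / denom * bspline_basis(i + 1, p - 1, t, knots)
    return val

def flat_output(t, coeffs, p, knots):
    """Flat output y(t) as a B-spline curve with the given control coefficients."""
    return sum(c * bspline_basis(i, p, t, knots) for i, c in enumerate(coeffs))

# Clamped cubic spline on [0, 3] with six control coefficients
# (e.g. hypothetical x-coordinate waypoints).
p = 3
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
coeffs = [3.5, 3.3, 3.0, 2.8, 2.5, 2.4]
y = flat_output(1.5, coeffs, p, knots)
```

Because the basis functions form a partition of unity, the curve stays inside the convex hull of its coefficients, which is convenient when imposing box constraints on the flat outputs.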
\section{Introduction} \label{sec_intro} The solar dynamo consists of poloidal magnetic field being wound up to generate toroidal magnetic field, while a process involving the Coriolis force creates poloidal field from the toroidal field \citep[see reviews by, e.g.][]{Ossendrijver:2003, Charbonneau:2010, Charbonneau:2014, Cameron:etal:2017, Brun:Browning:2017}. The oscillatory nature of the solar cycle together with Hale's polarity rules \citep{Hathaway:2015} imply a polarity reversal of the toroidal flux system within the convection zone during each 11-year cycle. Therefore, the question arises as to how the `old-polarity' toroidal flux is disposed of before it is replaced by the `new-polarity' flux. In principle, four different mechanisms could contribute: \begin{enumerate} \item{`Unwinding' by the action of differential rotation on the new (reversed) poloidal field,} \item{cancellation of opposite-polarity magnetic flux at the equatorial plane due to latitudinal transport of toroidal flux by meridional flow, turbulent diffusion/pumping, or dynamo wave propagation \citep[e.g.,][]{Cameron:Schuessler:2016},} \item{O-type neutral point dissipation along the dipole axis,} \item{loss through the surface due to flux emergence.} \end{enumerate} The first two possibilities are discussed in \cite{Wang:Sheeley:1991}, as is the fourth possibility, which they discount for the same reasons as put forward by \citet{Parker:1984} and by \citet{Vainshtein:Rosner:1991}. These authors pointed out that (nearly) perfect flux freezing implies the necessity of detaching the magnetic field lines from their mass load in order to be able to escape from the solar interior. Flux emergence in the form of loops could provide a path to such escape through a well-organized sequence of reconnection events between adjacent (`sea-serpent') loops. Such a situation, however, is considered to be rather artificial and is in fact not supported by observations.
The last process has also been considered as a nonlinearity limiting the amplitude of the dynamo process \citep[e.g.,][]{Leighton:1969, Schmitt:Schuessler:1989}. In this paper, we consider the problem of toroidal flux loss by flux emergence from a somewhat different perspective. We consider the net toroidal flux integrated over a hemispheric meridional section, $\int \langle B_\phi(r,\theta)\rangle\,dS$, where $\langle B_\phi\rangle$ is the azimuthally averaged magnetic field \citep{Cameron:Schuessler:2015}. For a reduction of $\langle B_\phi\rangle$, and thus of the net toroidal flux, it is not required that toroidal field lines completely detach from the solar interior: each individual emergence of a loop reduces $\langle B_\phi\rangle$ in proportion to the width and the flux of the loop. We show that the contribution of each emerged loop to the reduction of the net toroidal flux remains constant during the subsequent evolution of the emerged flux. Therefore, the total amount of flux loss can be estimated by simply adding up the contributions of all flux emergence events. Eventually, the corresponding amount of toroidal flux is carried away from the Sun by the solar wind and coronal mass ejections \citep{Bieber:Rust:1995}. The paper is organized as follows. In Sec.~\ref{sec_contr} we consider the evolution of the net toroidal flux using the procedure developed by \citet{Cameron:Schuessler:2015}. In Sec.~\ref{sec_loss} we discuss the effect of loop emergence and the subsequent surface evolution of flux on the net hemispheric toroidal flux. Quantitative estimates for the resulting loss of net toroidal flux on the basis of observed emergence rates are determined and compared with a simple estimate based on turbulent diffusion. Sec.~\ref{sec_concl} contains our conclusions.
\section{Evolution of the net toroidal flux} \label{sec_contr} Hale's polarity rules and the observation that the azimuthal field at the solar surface shows a latitude-independent east-west orientation in each hemisphere during the periods of maximum activity \citep{2018A&A...609A..56C} suggest that the relevant quantity for the large-scale solar dynamo is the net hemispheric toroidal flux in the convection zone. \citet{Cameron:Schuessler:2015} have shown that the evolution equation for the net toroidal flux in the (say) northern hemisphere, $\Phi(t)$, is obtained in terms of a contour integral by integrating the hydromagnetic induction equation (neglecting the molecular diffusivity) over a hemispheric meridional section of the convection zone and applying Stokes' theorem (see Fig.~\ref{fig:contour}), viz. \begin{equation} \frac{{\mathrm{d}} \Phi}{{\mathrm{d}} t} =\oint ({\bf{U}}\times {\bf{B}}) \cdot \mathrm{d}{\bf l} = \oint (\langle{\bf{U}}\rangle\times \langle{\bf B}\rangle + \langle {\bf{U'}}\times {\bf B'}\rangle) \cdot \mathrm{d}{\bf l}\,. \label{eq:stokes} \end{equation} Here ${\bf U}$ is the velocity field and ${\bf B}$ is the magnetic field. Quantities in angular brackets, $\langle\dots\rangle$, are azimuthal averages and primed quantities represent fluctuations with respect to the average. This equation describes both the generation of net flux by differential rotation and the loss of flux. In particular, flux loss by transport through the surface is included in the surface part of the contour integral, \begin{eqnarray} \left(\frac{\mathrm{d} \Phi}{\mathrm{d} t}\right)_{\rm surf} =\int^{\pi/2}_0 && \left( \langle U_\phi \rangle \langle B_r \rangle - \langle U_r \rangle \langle B_\phi \rangle \right. \nonumber \\ &&+ \left. \langle U'_\phi B'_r \rangle - \langle U_r' B'_\phi \rangle \right)\vert_{R_\odot} R_{\odot} \mathrm{d}\theta\,, \label{eq:surf} \end{eqnarray} in spherical polar coordinates. 
The first term of the integrand, $\langle U_\phi \rangle \langle B_r \rangle$, represents the effect of winding/unwinding by latitudinal differential rotation. The second term, $\langle U_r \rangle \langle B_\phi \rangle$, vanishes since there is no mean radial flow at the solar surface. The third term, $\langle U'_\phi B'_r \rangle$, is negligible except during flux emergence events. This is because the evolution of the surface flux after emergence is well represented by passive transport independent of magnetic polarity, as demonstrated by the success of surface flux transport simulations in reproducing the observations \citep{Wang:etal:1989,Whitbread:etal:2017,Jiang:etal:2014,Jiang:etal:2015}. The fourth term, $\langle U_r' B'_\phi \rangle$, represents flux emergence and submergence. The latter process takes place when magnetic features of opposite polarities meet and cancel after reconnection. \begin{figure} \begin{center} \includegraphics[scale=0.35]{contour.eps} \caption{Contour relevant for determining the evolution of the net toroidal flux in the northern hemisphere. The dashed line represents the base of the convection zone.} \label{fig:contour} \end{center} \end{figure} \section{Flux loss by flux emergence} \label{sec_loss} As we have seen above, flux emergence and submergence change the net toroidal magnetic flux in the convection zone. The evolution of bipolar regions after emergence is well described as passive flux transport by horizontal flows (differential rotation, meridional circulation, and convective flows described in terms of turbulent diffusion). Submergence is represented by diffusive flux cancellation at locations where opposite polarities meet, and one might therefore expect that the amount of toroidal flux loss changes in the course of the evolution of a bipolar region.
However, we show in this section that this is not the case and, furthermore, that the total amount of flux loss can be quantitatively estimated by simply adding the contributions of all bipolar magnetic regions. \begin{figure} \begin{center} \includegraphics[scale=0.6]{loop.eps} \caption{Idealized sketch of an emerged loop of toroidal magnetic flux. The flux tube (in blue) has a magnetic flux of $\Phi_L$; the longitudinal extent of the emerged part is $D$. The photosphere is indicated by the red line.} \label{fig:loop} \end{center} \end{figure} Consider a single emerged loop of toroidal magnetic flux as sketched in Fig.~\ref{fig:loop}. The resulting decrease of the azimuthal average, $\langle B_\phi \rangle$, corresponds to a reduction, $\Delta\Phi_{\rm tor}$, of the subsurface net toroidal flux, given by \begin{equation} \Delta\Phi_{\rm tor} = \frac{D\, \Phi_{\rm L}}{2\pi R_\odot\cos\lambda}\,, \label{eq:loop} \end{equation} where $\Phi_{\rm L}$ is the amount of flux contained in the loop, $D$ is the longitudinal extension of the loop after emergence, and $\lambda$ is its latitude. Equation~(\ref{eq:loop}) follows from the fact that the azimuthally averaged change of the subsurface toroidal flux due to the emergence is equal to the flux of the loop multiplied by the ratio of the longitudinal separation of the two polarities at the surface to the circumference of the Sun at the latitude of emergence. Put more simply, the change in the longitudinally averaged subsurface toroidal flux due to an emergence is the flux of the emerging flux tube multiplied by the fraction of the tube's longitudinal extent that has moved across the photosphere.
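The per-loop reduction is easy to evaluate numerically. The following sketch (cgs units; the input values are illustrative, chosen to represent a typical ephemeral region) also includes a limiting-case sanity check:

```python
import math

R_SUN = 6.96e10  # solar radius [cm]

def delta_phi_tor(D_cm, phi_loop, lat_rad):
    """Reduction of the hemispheric net toroidal flux by one emerged loop:
    D * Phi_L / (2 pi R_sun cos(lambda))."""
    return D_cm * phi_loop / (2.0 * math.pi * R_SUN * math.cos(lat_rad))

# Typical ephemeral region: 1e20 Mx, ~9 Mm polarity separation, latitude 10 deg.
dphi = delta_phi_tor(9.0e8, 1.0e20, math.radians(10.0))  # ~2e17 Mx

# Sanity check: a loop spanning the full equatorial circumference removes
# exactly its own flux from the hemispheric net toroidal flux.
full_wrap = delta_phi_tor(2.0 * math.pi * R_SUN, 1.0e20, 0.0)  # = 1e20 Mx
```

Each individual ephemeral region thus removes only a tiny fraction (of order $10^{-3}$) of its own flux from the net toroidal flux; the large total effect estimated below comes from the sheer number of emergences.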
Various processes can, in principle, affect the subsequent evolution of the emerged flux contained in the corresponding bipolar magnetic region (BMR): \begin{enumerate} \item{Transport by horizontal convective flows, which can be described as turbulent diffusion \citep[random walk, see][]{Leighton:1964},} \item{latitudinal differential rotation acting on tilted BMRs,} \item{meridional flow, and} \item{longitudinal drift of the two polarities in opposite directions caused by magnetic tension in the subsurface part of the loop \citep{van_Ballegooijen:1982}.} \end{enumerate} Surface flux transport simulations have repeatedly demonstrated that the surface magnetic flux is passively transported by the surface flows, so that the fourth (dynamic) process seems irrelevant for the evolution of the net toroidal magnetic flux. While Eq.~(\ref{eq:surf}) implies that meridional flow does not affect the net toroidal magnetic flux, latitudinal differential rotation leads to the `unwinding' and eventual reversal of the net toroidal flux in the course of the dynamo process. The question remains to what extent the first process, horizontal turbulent diffusion, which causes cancellation, dispersal, and reconnection of the emerged surface flux, leads to a temporal change of the amount of flux loss given by Eq.~(\ref{eq:loop}). The relevant properties of the process in this regard are that (1) diffusion is symmetric (independent of polarity), i.e., it affects both polarities of the loop flux in the same way, and (2) diffusion is a linear process, so that the effects of many BMRs can be determined by simply adding together the contributions of the individual BMRs, thus automatically taking account of the permanent reorganisation of the surface field by reconnection and cancellation of magnetic flux. It therefore suffices to consider the evolution of a single loop.
In the course of the diffusive evolution, both opposite-polarity patches of the vertical loop flux spread in all horizontal directions. While Eq.~(\ref{eq:surf}) shows that expansion in latitude does not affect the subsurface net toroidal flux, spreading in the longitudinal direction potentially could. Part of the emerged flux cancels at the neutral line between the polarities and thus `heals' the subsurface toroidal flux. Another part of the flux expands longitudinally away from the neutral line and eventually diffuses all around the Sun, thus finally removing the corresponding amount of toroidal flux. While the cancellation at the neutral line reduces the amount of flux loss, the expanding part increases the flux loss by effectively enlarging the polarity separation, $D$. Owing to flux conservation and the symmetry of the diffusion process, it turns out that both contributions exactly balance each other, so that the loss of net toroidal flux, $\Delta\Phi_{\rm tor}$, remains time-independent at its initial value given by Eq.~(\ref{eq:loop}). This can be seen formally by the following illustrative calculation. Assume, for simplicity, one-dimensional Cartesian geometry with a purely vertical field, $B(x,t)$, that depends on the horizontal coordinate, $x$ (representing the longitudinal direction), and time, $t$, in an infinite domain. Consider the diffusive evolution of a bipolar region of vertical flux that is centred at $x=0$ with the two polarities centred at $x=\pm x_0$. The evolution of both polarities can be described by the analytical solution for the diffusive spread of an initial delta function in terms of Gaussian profiles, viz. \begin{equation} B(x,t) = B_0 \sqrt{\frac{a}{\pi}} \left[ e^{-a(x-x_0)^2} - e^{-a(x+x_0)^2} \right] \,, \label{eq:gauss} \end{equation} with $a=(4\eta t)^{-1}$ and diffusivity $\eta$.
The centre of gravity of the field distribution for $x\geq 0$ is given by \begin{equation} {{\overline{x}}}_+ = \frac{\int_0^\infty Bx\,{\mathrm d}x}{\int_0^\infty B\, {\mathrm d}x} \,. \label{eq:cog} \end{equation} The centre of gravity, ${{\overline{x}}}_-$ for $x\leq 0$ is defined analogously. The symmetry of the configuration entails ${{\overline{x}}}_- = -{{\overline{x}}}_+$. The relevant quantity for the reduction of the subsurface horizontal flux, corresponding to $D\,\Phi_{\rm L}$ in Eq.~(\ref{eq:loop}), is given by \begin{equation} R(t) = {\overline{x}}_+ \int_0^\infty B\, {\mathrm d}x + {\overline{x}}_- \int_{-\infty}^0 B\, {\mathrm d}x = 2 \int_0^\infty Bx\, {\mathrm d}x \,, \label{eq:red} \end{equation} again owing to symmetry. Using Eq.~(\ref{eq:gauss}) we obtain \begin{equation} \int_0^\infty Bx\, {\mathrm d}x = B_0\sqrt{\frac{a}{\pi}}\, (I_+ - I_-) \label{eq:int12a} \end{equation} with \begin{eqnarray} I_+ &=& \int_0^\infty x e^{-a(x-x_0)^2} {\mathrm d}x \nonumber \\ I_- &=& \int_0^\infty x e^{-a(x+x_0)^2} {\mathrm d}x \,. \label{eq:int12b} \end{eqnarray} After some elementary algebra we obtain \begin{equation} (I_+ - I_-) = 2 x_0 \int_{x_0}^\infty e^{-a(x-x_0)^2} {\mathrm d}x = x_0 \sqrt{\frac{\pi}{a}} \,, \label{eq:int12c} \end{equation} so that with Eq.~(\ref{eq:red}) we have \begin{equation} R(t) = 2B_0\sqrt{\frac{a}{\pi}}\cdot x_0 \sqrt{\frac{\pi}{a}} = 2B_0 x_0\,, \label{eq:red2} \end{equation} which is independent of time. That means that the diffusive evolution of a bipolar magnetic region does not change the reduction of the net toroidal flux due to its emergence, which is given by Eq.~(\ref{eq:loop}). The increase of flux loss by the outward spreading of the magnetic flux at the surface is exactly balanced by flux cancellation. In fact, this result does not depend on the special assumption of Gaussian profiles but is valid for any symmetric profile that is uniformly stretched while keeping the integral constant. 
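The time independence of $R(t)$ derived above can also be verified numerically. The sketch below integrates the two-Gaussian profile with a simple midpoint rule for two widely separated times (all parameters are illustrative, in arbitrary units):

```python
import math

def R_of_t(t, B0=1.0, x0=1.0, eta=1.0, xmax=40.0, n=100000):
    """Numerically evaluate R(t) = 2 * int_0^inf B(x,t) x dx for the
    two-Gaussian bipolar profile, using the midpoint rule."""
    a = 1.0 / (4.0 * eta * t)
    pref = B0 * math.sqrt(a / math.pi)
    dx = xmax / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * dx
        B = pref * (math.exp(-a * (x - x0)**2) - math.exp(-a * (x + x0)**2))
        total += B * x * dx
    return 2.0 * total

# R(t) stays at 2*B0*x0 = 2 while the profile width grows by sqrt(20):
early, late = R_of_t(0.1), R_of_t(2.0)
```

Both evaluations agree with the analytical value $2B_0x_0$ to within the quadrature error, illustrating that longitudinal spreading and cancellation at the neutral line balance exactly.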
Since the flux loss, $\Delta\Phi_{\rm tor}$, associated with an individual bipolar region is time-independent and diffusion is a linear process, we can estimate the mean rate of flux loss during a time interval $\Delta t$ by simply adding the individual contributions given by Eq.~(\ref{eq:loop}) of the bipolar regions emerging within that time, viz. \begin{equation} \frac{{\rm d}\Phi_{\rm tor}}{{\rm d}t} = \frac{\gamma \sum_i \left({D\,\Phi_{\rm L}}{(\cos\lambda)^{-1}}\right)_i} {2\pi R_\odot \Delta t}\,. \label{eq:dphidt} \end{equation} The factor $\gamma$ is the fraction of the emerged flux that is not balanced by emergences with the opposite polarity orientation, i.e., $\gamma=(\Phi_{\rm Hale}-\Phi_{\rm non-Hale})/(\Phi_{\rm Hale}+\Phi_{\rm non-Hale})$, where $\Phi_{\rm Hale}$ and $\Phi_{\rm non-Hale}$, respectively, are the amounts of flux that emerge obeying Hale's law and not obeying it. We first consider the contribution due to ephemeral regions, small bipolar regions carrying a magnetic flux of the order of $10^{20}\,$Mx that emerge ubiquitously at the solar surface. \citet{2001ApJ...555..448H} determined a value of $5 \times 10^{23}$~Mx per day for the emergence rate of unsigned flux in ephemeral regions over the entire solar surface. About 60\% of these were found to obey Hale's polarity laws (i.e., a surplus of 20\%), so that $\gamma=0.2$ in this case. For a rough estimate of the corresponding loss of toroidal flux we assume that polarity separation, $D$, loop flux, $\Phi_{\rm L}$, and emergence latitude are all uncorrelated. Since the contribution of each emerging loop to the total unsigned surface flux equals $2\Phi_{\rm L}$, we have $\sum_i \Phi_{\rm L}=2.5 \times 10^{23}$~Mx per day. The average polarity separation for ephemeral regions is about $9\,$Mm \citep{2001ApJ...555..448H}.
Since $D$ is the longitudinal separation, we have $D_i=9 \cos(\alpha_i)$~Mm, where $\alpha_i$ is the tilt angle of the axis of the ephemeral region with respect to the east-west direction. For a given longitudinal polarity orientation (Hale or anti-Hale), these angles are likely to be uniformly distributed between $\pm 90^{\circ}$, so that on average we expect $\langle D_i \rangle \approx 9\times 0.64=5.76$~Mm, where $0.64 \approx 2/\pi$ is the average value of $\cos\alpha$ between $-90^{\circ}$ and $90^{\circ}$. We assume that the emergences occur uniformly over the surface, so that the weighted average of $(\cos\lambda)^{-1}$ over the emergences is \begin{eqnarray} \langle (\cos\lambda)^{-1} \rangle_i &=& \frac{\int^{90^{\circ}}_0 (\cos\lambda)^{-1} \cos\lambda\, {\mathrm d}\lambda} {\int^{90^{\circ}}_0 \cos\lambda\, {\mathrm d}\lambda} \nonumber \\ &=&\pi/2\,, \end{eqnarray} where the weighting factor $\cos\lambda$ accounts for the fact that the length of the circumference at constant latitude is proportional to $\cos\lambda$. We thus obtain for the loss rate of toroidal flux per hemisphere due to the emergence of ephemeral regions a value of \begin{equation} \frac{{\rm d}\Phi_{\rm tor, hem}^{\rm ER}}{{\rm d}t} \approx 5.9 \times10^{14}\mbox{ Mx s}^{-1}\,. \label{eq:dphidthER} \end{equation} The results of \citet{2001ApJ...555..448H} are based on data from October 1997, i.e., under solar minimum conditions. Since the emergence rate of ephemeral regions varies roughly by a factor of 2--3 during the solar cycle \citep{Harvey:etal:1975, Martin:Harvey:1979,2003ApJ...584.1107H}, we expect the loss rate during solar maxima to be correspondingly higher, so that \begin{equation} \frac{{\rm d}\Phi_{\rm tor, hem}^{\rm ER}}{{\rm d}t} \vert_{\rm maximum} \approx 2\times 5.9 \times10^{14}\mbox{ Mx s}^{-1}\,.
\label{eq:dphidthER_max} \end{equation} We note that a few years before activity minima, ephemeral regions from the current and the next cycle are both present on the surface, and the change in the net hemispheric subsurface toroidal flux then reflects the difference between the contributions of the old-cycle and new-cycle ephemeral regions. For active regions exceeding 3.5 square degrees in size, \citet{1994SoPh..150....1S} report emergence rates over the entire solar surface of $7.4 \times 10^{20}$~Mx per day during activity minimum and $6.2 \times 10^{21}$~Mx per day during maximum. For simplicity, we assume an average polarity separation of $40\,$Mm, east-west alignment, and emergence close to the equator ($\cos\lambda=1$). We then obtain \begin{equation} \frac{{\rm d}\Phi_{\rm tor, hem}^{\rm AR}}{{\rm d}t} \approx 2.0\times10^{13}\mbox{ Mx s}^{-1} \label{eq:dphidthARmin} \end{equation} during activity minimum and \begin{equation} \frac{{\rm d}\Phi_{\rm tor, hem}^{\rm AR}}{{\rm d}t} \approx 1.6\times10^{14}\mbox{ Mx s}^{-1} \label{eq:dphidthARmax} \end{equation} during maximum. Assuming a factor of 2 variation of the emergence rate of ephemeral regions between minimum and maximum, these regions therefore contribute about 90\% of the total loss rate of toroidal flux during solar maxima and about 97\% during minima. The total flux loss rate per hemisphere during minimum is then \begin{equation} \left(\frac{{\rm d}\Phi_{\rm tor, hem}^{\rm ER}}{{\rm d}t}+ \frac{{\rm d}\Phi_{\rm tor, hem}^{\rm AR}}{{\rm d}t}\right)\Big\vert_{\mathrm{minimum}} =6.1\times10^{14} \mbox{~Mx~s}^{-1}, \end{equation} and $1.3\times10^{15}$~Mx~s$^{-1}$ during maximum. With the total loss rate of $1.3\times10^{15}$ Mx s$^{-1}$ around maxima and a total amount of subsurface toroidal flux of $5\times 10^{23}$~Mx per hemisphere \citep{Cameron:Schuessler:2015}, we obtain a characteristic decay time of 12.2 years.
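The estimates of this section can be reproduced with a few lines. The sketch below follows the assumptions stated in the text (hemispheric halving of the quoted whole-surface rates, $\langle D\rangle = 9\,$Mm$\,\times 2/\pi$, $\langle(\cos\lambda)^{-1}\rangle=\pi/2$, $\gamma\approx 1$ for active regions) and also evaluates the turbulent-diffusion timescale used for comparison later in this section; small deviations from the quoted numbers reflect rounding:

```python
import math

R_SUN = 6.96e10   # solar radius [cm]
DAY = 86400.0     # [s]
YEAR = 3.156e7    # [s]

def loss_rate(gamma, loop_flux_per_day_hem, D_cm, inv_cos_lat):
    """Hemispheric toroidal-flux loss rate [Mx/s] from the summed
    per-loop contributions."""
    return (gamma * D_cm * loop_flux_per_day_hem * inv_cos_lat
            / (2.0 * math.pi * R_SUN * DAY))

# Ephemeral regions at minimum: 5e23 Mx/day unsigned over the whole surface
# -> 1.25e23 Mx/day of loop flux per hemisphere; <D> = 9 Mm * (2/pi).
rate_er = loss_rate(0.2, 1.25e23, 9.0e8 * 2.0 / math.pi, math.pi / 2.0)

# Active regions at maximum: 6.2e21 Mx/day unsigned -> 1.55e21 per
# hemisphere; D = 40 Mm, east-west aligned near the equator, gamma ~ 1.
rate_ar_max = loss_rate(1.0, 1.55e21, 4.0e9, 1.0)

rate_total_max = 2.0 * rate_er + rate_ar_max    # ~1.3e15 Mx/s
tau_emergence = 5.0e23 / rate_total_max / YEAR  # ~12 years
tau_diffusion = (2.0e10)**2 / 1.0e12 / YEAR     # ~12.7 years
```

The decay time from flux emergence comes out close to 12 years, consistent with the simple turbulent-diffusion estimate $L^2/\eta_{\mathrm{t}}$.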
Consequently, flux loss through the photosphere associated with flux emergence is an important factor for the evolution of the subsurface toroidal flux on solar-cycle timescales. Roughly approximating the cycle-averaged loss rate by the mean of its maximum and minimum values, i.e., $9.6\times10^{14}$ Mx s$^{-1}$, we obtain a total loss of toroidal flux by flux emergence over 11~years of $3.3\times10^{23}\,$Mx. \citet{Bieber:Rust:1995} estimated the total loss of toroidal flux into interplanetary space as $10^{24}\,$Mx per 11-year cycle, i.e., $5\times10^{23}$~Mx per hemisphere and cycle. This is roughly consistent with our result of $3.3\times10^{23}\,$Mx per hemisphere and cycle, given the considerable uncertainties and simplifications entering both estimates. The rate at which BMRs appear on the solar surface, as a function of the amount of flux which emerges, is described by a single power law extending over five orders of magnitude, from small ephemeral regions to large active regions \citep[see, for example,][]{2003ApJ...584.1107H, 2011SoPh..269...13T}. Unlike active regions, which emerge only at latitudes below about 40$^{\circ}$, ephemeral regions emerge all over the solar surface. However, ephemeral regions emerging in the butterfly wings have a tendency to obey Hale's law, with the same east-west orientation as the active regions of the same cycle \citep{1979SoPh...64...93M}. The tendency of ephemeral regions to emerge obeying Hale's law extends the butterfly wings to earlier times and higher latitudes \citep{1979SoPh...64...93M, 1988Natur.333..748W}. Which size range of BMRs is most important for the loss of toroidal field through the surface is mainly determined by the competition between the number of emergences and their tendency to obey Hale's law. The ephemeral regions dominate the flux loss at all phases of the solar cycle because ephemeral-region emergence is much more common than active-region emergence.
The larger ephemeral regions, in the range of $10^{18}$~Mx and above, are presumably more important than the smaller emergences because the tendency to obey Hale's law decreases rapidly with decreasing flux of the BMR \citep{2003ApJ...584.1107H}. We can also compare our result with simple estimates in terms of turbulent diffusion. Instead of regarding individual emergence events, this approach considers the transport of toroidal magnetic field by turbulent motions throughout the convection zone and across the photosphere. Ignoring turbulent pumping, one can parameterize this by an effective turbulent diffusivity, $\eta_{\mathrm{t}}$, the value of which can be estimated using mixing-length theory \citep[e.g.][]{2011ApJ...727L..23M}, from numerical simulations \citep[e.g.][]{2018A&A...609A..51W}, or inferred from observations \citep[e.g.][]{Cameron:Schuessler:2016}. Near-surface values of $\eta_{\mathrm{t}}$ are typically around $10^{12}\,$cm$^2\,$s$^{-1}$. Using the depth of the convection zone, $L=200$~Mm, as a typical length scale, this leads to a diffusive decay time of $\tau=L^2/\eta_{\mathrm{t}} \simeq 12.7\,$years, which is consistent with the above value of 12.2 years from flux emergence. \section{Conclusions} \label{sec_concl} Our results show that the loss of net toroidal flux from the solar interior due to flux emergence can be faithfully estimated by adding the time-independent contributions of the individual bipolar regions to the reduction of the longitudinally averaged azimuthal field. Using the observed emergence rates of ephemeral and active regions leads to a characteristic decay time of the toroidal flux of about 12 years, with ephemeral regions contributing most of the effect. The decay rate of toroidal flux by flux emergence is also consistent with simple estimates based on turbulent diffusion. Consequently, flux emergence represents a relevant loss mechanism for the interior toroidal flux.
The decay of toroidal flux is further enhanced by cancellation across the equator, dissipation along the dipole axis, and `unwinding' by differential rotation. However, these processes presumably are not dominant because the toroidal flux loss through the photosphere already accounts for most of what needs to be removed. The 12-year timescale for toroidal flux loss due to flux emergence is close to the 11-year solar cycle period. This means that the flux loss is very important for the subsurface flux evolution, and the 12-year timescale is an important constraint for models of the solar dynamo. \begin{acknowledgements} RHC acknowledges partial support from ERC Synergy grant WHOLE SUN 810218. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} In relativistic heavy-ion collisions carried out at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), one aims at searching for a new form of matter --- the Quark-Gluon Plasma (QGP)~\cite{PBM_QGP} --- and studying its properties in the laboratory. The production of J/$\psi$ and dileptons in heavy-ion collisions are key measurements to probe the formation of the QGP. Due to the color screening of the quark-antiquark potential in the deconfined medium, the production of J/$\psi$ would be significantly suppressed, which was proposed as a direct signature of QGP formation~\cite{MATSUI1986416}. After decades of experimental and theoretical efforts, it is recognized that other mechanisms, such as the recombination of deconfined charm quarks in the QGP and cold nuclear matter (CNM) effects, also modify J/$\psi$ production significantly in heavy-ion collisions. Currently, the interplay of these effects can qualitatively explain the J/$\psi$ yields measured so far at the SPS, RHIC, and the LHC~\cite{STAR_Jpsi_AuAu_BES}. Dileptons have been proposed as ``penetrating probes'' of the hot and dense medium~\cite{SHURYAK198071}, because they are not subject to the violent strong interactions in the medium. Various dilepton measurements have been performed in heavy-ion collisions. A clear enhancement in the low mass region ($M_{ll} < M_{\phi}$) has been observed, which is consistent with in-medium broadening of the $\rho$ mass spectrum~\cite{PhysRevLett.79.1229,KOHLER2014665}, while the excess in the intermediate mass region ($M_{\phi} < M_{ll} < M_{J/\psi}$) is believed to originate from QGP thermal radiation~\cite{PhysRevC.63.054907}. J/$\psi$ mesons and dileptons can also be generated by the intense electromagnetic fields accompanying the relativistic heavy ions~\cite{UPCreview}. The intense electromagnetic field can be viewed as a spectrum of equivalent photons via the equivalent photon approximation~\cite{KRAUSS1997503}.
The quasi-real photon emitted by one nucleus can fluctuate into a $c\bar{c}$ pair, scatter off the other nucleus, and emerge as a real J/$\psi$. The virtual photon from one nucleus can also interact with a photon from the other, resulting in the production of dileptons, which can be represented as $\gamma + \gamma \rightarrow l^{+} + l^{-}$. The coherent nature of these interactions gives the processes distinctive characteristics: the final products consist of a J/$\psi$ (or dilepton pair) with very low transverse momentum, two intact nuclei, and nothing else. Conventionally, these reactions are only visible and studied in Ultra-Peripheral Collisions (UPC), in which the impact parameter ($b$) is larger than twice the nuclear radius ($R_{A}$), so that hadronic interactions are avoided. Can the coherent photon products also exist in Hadronic Heavy-Ion Collisions (HHIC, $b < 2R_{A}$), where violent strong interactions occur in the overlap region? The story starts with the measurements from ALICE: significant excesses of the J/$\psi$ yield at very low $p_{T}$ ($< 0.3$ GeV/c) have been observed in peripheral Pb+Pb collisions at $\sqrt{s_{\rm{NN}}} =$ 2.76 TeV~\cite{LOW_ALICE}, which cannot be explained by hadronic J$/\psi$ production with the known cold and hot medium effects. STAR performed the same measurements in Au+Au collisions at $\sqrt{s_{\rm{NN}}} =$ 200 GeV~\cite{1742-6596-779-1-012039}, and also observed a significant enhancement at very low $p_{T}$ in peripheral collisions. The observed anomalous excesses possess the characteristics of coherent photoproduction and can be quantitatively described by theoretical calculations based on the coherent photon-nucleus production mechanism~\cite{PhysRevC.93.044912,PhysRevC.97.044910,SHI2018399}, which points to evidence of coherent photon-nucleus reactions in HHIC.
If coherent photonuclear production is the underlying mechanism for the observed J/$\psi$ excess, coherent photon-photon production should also be present and contribute to dilepton pair production in HHIC. Based on this reasoning, STAR measured the dielectron spectrum at very low $p_{T}$ in peripheral collisions, and indeed significant excesses were observed~\cite{PhysRevLett.121.132301}, which could be reasonably described by the coherent photon-photon production mechanism~\cite{ZHA2018182,PhysRevC.97.054903}. The isobaric collision experiment, recently completed in the 2018 run at RHIC ($^{96}_{44}\rm{Ru} + ^{96}_{44}\rm{Ru}$ and $^{96}_{40}\rm{Zr} + ^{96}_{40}\rm{Zr}$), provides a unique opportunity to further address this issue. The idea is the following: the production yield originating from coherent photon-nucleus interactions should be proportional to $Z^{2}$, while the production rate of coherent photon-photon interactions is proportional to $Z^{4}$; accordingly, the excesses of J$/\psi$ and dielectrons would differ significantly between isobaric collisions and Au+Au collisions for centralities with the same hadronic background. In this letter, we report calculations for the coherent production of J/$\psi$ and dielectrons in isobaric collisions to provide a theoretical baseline for further experimental tests. The centrality dependence of the coherent products is presented and compared to that in Au+Au collisions. The difference in the $t$ distributions of J/$\psi$ between isobaric collisions and Au+Au collisions is also discussed within the current framework. \section{Theoretical formalism} According to the equivalent photon approximation, the coherent photon-nucleus and photon-photon interactions in heavy-ion collisions can be factorized into a semiclassical and a quantum part. The semiclassical part deals with the distribution of quasi-real photons induced by the colliding ions, while the quantum part handles the interactions of photon-Pomeron or photon-photon type.
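The naive charge scaling invoked above can be made explicit; the ratios below are illustrative arithmetic only, ignoring form-factor and kinematic differences between the collision systems:

```python
# Charge numbers: Au, Ru, Zr
Z_AU, Z_RU, Z_ZR = 79, 44, 40

# Coherent photon-nucleus yield scales as Z^2, photon-photon as Z^4:
ratio_gA = (Z_RU / Z_ZR) ** 2  # Ru/Zr, photonuclear: (44/40)^2 = 1.21
ratio_gg = (Z_RU / Z_ZR) ** 4  # Ru/Zr, two-photon: (44/40)^4 ~ 1.46

# Naive suppression of the two-photon yield relative to Au+Au:
au_over_ru_gg = (Z_AU / Z_RU) ** 4  # ~ 10.4
```

The roughly order-of-magnitude difference in the two-photon rate between Au+Au and the isobars, at comparable hadronic backgrounds, is what makes the isobaric data a discriminating test.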
The cross section for J/$\psi$ from coherent photon-nucleus interactions can be written as~\cite{UPC_JPSI_PRC,UPC_JPSI_PRL}: \begin{equation} \label{equation1} \sigma({A + A} \rightarrow {A + A} + \text{J}/\psi) = \int d\omega n(\omega)\sigma(\gamma A \rightarrow \text{J}/\psi A), \end{equation} where $\omega$ is the photon energy, $n(\omega)$ is the photon flux at energy $\omega$, and $\sigma(\gamma A \rightarrow \text{J}/\psi A)$ is the photonuclear interaction cross-section for J/$\psi$. Similarly, the dielectron production from coherent photon-photon reactions can be calculated via~\cite{Klein:2016yzr}: \begin{equation} \begin{aligned} &\sigma (A + A \rightarrow A + A + e^{+}e^{-}) \\ & =\int d\omega_{1}d\omega_{2} \frac{n(\omega_{1})}{\omega_{1}}\frac{n(\omega_{2})}{\omega_{2}}\sigma(\gamma \gamma \rightarrow e^{+}e^{-}), \label{equation2} \end{aligned} \end{equation} where $\omega_{1}$ and $\omega_{2}$ are the photon energies from the two colliding beams, and $\sigma(\gamma \gamma \rightarrow e^{+}e^{-})$ is the photon-photon reaction cross-section for dielectrons. The photon flux induced by the heavy ions can be modelled using the Weizs\"acker-Williams method~\cite{KRAUSS1997503}: \begin{equation} \label{equation3} \begin{aligned} & n(\omega,r) = \frac{4Z^{2}\alpha}{\omega} \bigg | \int \frac{d^{2}q_{\bot}}{(2\pi)^{2}}q_{\bot} \frac{F(q)}{q^{2}} e^{iq_{\bot} \cdot r} \bigg |^{2} \\ & q = (q_{\bot},\frac{\omega}{\gamma}) \end{aligned} \end{equation} where $n(\omega,r)$ is the flux of photons with energy $\omega$ at distance $r$ from the center of the nucleus, $\alpha$ is the electromagnetic coupling constant, $\gamma$ is the Lorentz factor, and the form factor $F(q)$ is the Fourier transform of the charge distribution in the nucleus.
In the calculations, we employ the Woods-Saxon form to model the nucleon distribution of the nucleus in spherical coordinates: \begin{equation} \rho_{A}(r,\theta)=\frac{\rho_{0}}{1+\exp[(r-R_{\rm{WS}}-\beta_{2}R_{\rm{WS}}Y_{2}^{0}(\theta))/d]}, \label{equation4} \end{equation} where $\rho_{0} = 0.16 \rm{\ fm}^{-3}$, $R_{\rm{WS}}$ and $d$ are the ``radius'' and the surface diffuseness parameter, respectively, and $\beta_{2}$ is the deformation parameter of the nucleus. The deformation parameter $\beta_{2}$ for Ru and Zr is ambiguous and important for bulk correlation physics~\cite{PhysRevC.97.044901}; however, it has only a minor effect on our calculations and is therefore set to 0 for simplicity. The ``radius'' $R_{\rm{WS}}$ (Au: 6.38 fm, Ru: 5.02 fm, Zr: 5.02 fm) and the surface diffuseness parameter $d$ (Au: 0.535 fm, Ru: 0.46 fm, Zr: 0.46 fm) are based on fits to electron scattering data~\cite{0031-9112-29-7-028}. Fig.~\ref{figure1} shows the two-dimensional distributions of the photon flux induced in isobaric collisions at $\sqrt{s_{\rm{NN}}} =$ 200 GeV as a function of the distance $r$ and the photon energy $\omega$ for Ru + Ru (left panel) and Zr + Zr (right panel).
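As a quick sanity check of Eq.~\ref{equation4} with $\beta_{2}=0$, a minimal implementation (the parameter values are those quoted in the text; the function name is our own):

```python
import math

def woods_saxon(r, R_ws, d, rho0=0.16):
    """Spherical Woods-Saxon nucleon density (beta_2 = 0), in fm^-3."""
    return rho0 / (1.0 + math.exp((r - R_ws) / d))

# "Radius" and surface diffuseness parameters (fm) quoted in the text.
PARAMS = {"Au": (6.38, 0.535), "Ru": (5.02, 0.46), "Zr": (5.02, 0.46)}

for name, (R_ws, d) in PARAMS.items():
    # At r = R_ws the density falls to exactly half its central value rho0.
    print(f"{name}: rho(R_ws) = {woods_saxon(R_ws, R_ws, d):.3f} fm^-3")
```

The half-density condition $\rho(R_{\rm{WS}}) = \rho_{0}/2$ provides a simple built-in check of the implementation.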
\renewcommand{\floatpagefraction}{0.75} \begin{figure*}[htbp] \includegraphics[keepaspectratio,width=0.45\textwidth]{flux_distribution_Ru.pdf} \includegraphics[keepaspectratio,width=0.45\textwidth]{flux_distribution_Zr.pdf} \caption{Two-dimensional distributions of the photon flux as a function of the distance $r$ and the photon energy $\omega$ for Ru + Ru (left panel) and Zr + Zr (right panel) collisions at $\sqrt{s_{\rm{NN}}} =$ 200 GeV.} \label{figure1} \end{figure*} The cross-section for the $\gamma A \rightarrow \text{J}/\psi A$ reaction can be derived from a quantum Glauber approach coupled with the parameterized forward scattering cross section $\frac{d\sigma(\gamma p \rightarrow \text{J}/\psi p)}{dt}|_{t=0}$ as input~\cite{PhysRevC.93.044912,UPC_JPSI_PRC,PhysRevC.97.044910}: \begin{equation} \label{equation3_1} \begin{split} &\sigma(\gamma A \rightarrow \text{J}/\psi A)=\frac{d\sigma(\gamma A \rightarrow \text{J}/\psi A)}{dt}\bigg|_{t=0} \times\\ &\int|F_{P}(\vec{k}_{P})|^{2}d^{2}{\vec{k}_{P\bot}} \ \ \ \ \ \ \vec{k}_{P}=(\vec{k}_{P\bot},\frac{ \omega_{P}}{\gamma_{c}})\\ & \omega_{P} = \frac{1}{2}M_{\text{J}/\psi} e^{\pm y} = \frac{M_{\text{J}/\psi}^{2}}{4\omega_{\gamma}} \end{split} \end{equation} \begin{equation} \label{equation3_2} \frac{d\sigma(\gamma A \rightarrow \text{J}/\psi A)}{dt}\bigg|_{t=0}=C^{2}\frac{\alpha \sigma_{tot}^{2}(\text{J}/\psi A)}{4f_{\text{J}/\psi}^{2}} \end{equation} \begin{equation} \label{equation3_3} \sigma_{tot}(\text{J}/\psi A)=2\int(1-\exp(-\frac{1}{2}\sigma_{tot}(\text{J}/\psi p)T_{A}(x_{\bot})))d^{2}x_{\bot} \end{equation} \begin{equation} \label{equation3_4} \sigma_{tot}^{2}(\text{J}/\psi p)=16\pi\frac{d\sigma(\text{J}/\psi p \rightarrow \text{J}/\psi p)}{dt}\bigg|_{t=0} \end{equation} \begin{equation} \label{equation3_5} \frac{d\sigma(\text{J}/\psi p \rightarrow \text{J}/\psi p)}{dt}\bigg|_{t=0}=\frac{f_{\text{J}/\psi}^{2}}{4\pi \alpha C^{2}}\frac{d\sigma(\gamma p \rightarrow \text{J}/\psi p)}{dt}\bigg|_{t=0} \end{equation} where
$T_{A}(x_{\bot})$ is the nuclear thickness function, $-t$ is the squared four-momentum transfer, and $f_{\text{J}/\psi}$ is the J/$\psi$-photon coupling. Eqs.~\ref{equation3_2} and \ref{equation3_5} are relations from the vector meson dominance model~\cite{RevModPhys.50.261}, and the correction factor $C$ is adopted to account for the non-diagonal coupling through higher-mass vector mesons~\cite{HUFNER1998154}, as implemented in the generalized vector dominance model~\cite{PhysRevC.57.2648}. Eq.~\ref{equation3_4} is the optical-theorem relation, and the parametrization of the forward cross section $\frac{d\sigma(\gamma p \rightarrow \text{J}/\psi p)}{dt}|_{t=0}$ in Eq.~\ref{equation3_5} is taken from~\cite{Klein:2016yzr}. The elementary cross-section for producing an electron-positron pair with electron mass $m$ and pair invariant mass $W$ is given by the Breit-Wheeler formula~\cite{PhysRevD.4.1532} \begin{equation} \label{equation5} \begin{aligned} & \sigma (\gamma \gamma \rightarrow l^{+}l^{-}) = \\ &\frac{4\pi \alpha^{2}}{W^{2}} [(2+\frac{8m^{2}}{W^{2}} - \frac{16m^{4}}{W^{4}})\text{ln}(\frac{W+\sqrt{W^{2}-4m^{2}}}{2m}) \\ & -\sqrt{1-\frac{4m^{2}}{W^{2}}}(1+\frac{4m^{2}}{W^{2}})]. \end{aligned} \end{equation} The angular distribution of these electron-positron pairs is given by \begin{equation} G(\theta) = 2 + 4(1-\frac{4m^{2}}{W^{2}})\frac{(1-\frac{4m^{2}}{W^{2}})\text{sin}^{2}(\theta)\text{cos}^{2}(\theta)+\frac{4m^{2}}{W^{2}}}{(1-(1-\frac{4m^{2}}{W^{2}})\text{cos}^{2}(\theta))^{2}}, \label{equation6} \end{equation} where $\theta$ is the angle between the beam direction and one of the electrons in the electron-positron center-of-mass frame. The approaches employed in this calculation are well established in UPC studies, where they quantitatively describe the experimental measurements~\cite{SHI2018399,PhysRevC.70.031902,DYNDAL2017281,Abbas2013,2017489}. However, the energetic strong interactions in HHIC could have a significant impact on the coherent production.
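Equations~\ref{equation5} and \ref{equation6} translate directly into code. The following minimal sketch works in natural units (GeV-based); the constants and function names are our own illustrative choices:

```python
import math

ALPHA = 1 / 137.036   # fine-structure constant
M_E = 0.000511        # electron mass in GeV

def sigma_gg_to_ee(W, m=M_E, alpha=ALPHA):
    """Breit-Wheeler cross section sigma(gamma gamma -> e+ e-) in GeV^-2 (Eq. 5)."""
    if W <= 2 * m:
        return 0.0  # below the pair-production threshold
    x = 4 * m * m / (W * W)  # shorthand: 2 + 2x - x^2 equals 2 + 8m^2/W^2 - 16m^4/W^4
    return (4 * math.pi * alpha**2 / W**2) * (
        (2 + 2 * x - x * x) * math.log((W + math.sqrt(W * W - 4 * m * m)) / (2 * m))
        - math.sqrt(1 - x) * (1 + x)
    )

def angular_weight(theta, W, m=M_E):
    """Angular distribution G(theta) of the pair in its center-of-mass frame (Eq. 6)."""
    x = 4 * m * m / (W * W)
    c2 = math.cos(theta) ** 2
    s2 = math.sin(theta) ** 2
    return 2 + 4 * (1 - x) * ((1 - x) * s2 * c2 + x) / (1 - (1 - x) * c2) ** 2
```

The cross section vanishes at threshold ($W = 2m$), falls off at large $W$, and for $W \gg m$ the angular weight at $\theta = \pi/2$ approaches 2, all of which follow from the formulas above.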
The possible disruptive effects can be factorized into two distinct sub-processes: photon emission and external disturbance in the overlap region. The equivalent photon field is highly contracted into the transverse plane and travels along with the colliding nuclei in the laboratory frame. Therefore the coherent photon-nucleus and photon-photon interactions occur at almost the same time as the violent hadronic collisions. Due to the retarded potential, the quasi-real photons are likely to be emitted about $\Delta t = \gamma R/c$ before the hadronic collision, where $\gamma$ is the Lorentz factor and $R$ is the transverse distance from the colliding nuclei. Hence, the photon emission should be unaffected by the hadronic collisions. In the overlap region of the collisions, the photon products could be affected by the violent hadronic interactions, leading to a loss of coherence. For the coherent photon-photon interactions, because the final product is an electron-positron pair, which is not subject to the strong interaction, the disruptive effect from the overlap region is small enough to be neglected in the calculations. However, the J/$\psi$'s from coherent photon-nucleus interactions are sensitive to the hadronic interactions; thus their production in the overlap region is prohibited in our approach. Furthermore, as described in Ref.~\cite{PhysRevC.97.044910}, the interference effect is included in the calculations of the coherent photon-nucleus process.
\section{Results} \renewcommand{\floatpagefraction}{0.75} \begin{figure}[htbp] \includegraphics[keepaspectratio,width=0.45\textwidth]{drawyield.pdf} \caption{Yields of coherent J/$\psi$ production as a function of $N_{\rm{part}}$ at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au (solid line), Ru+Ru (dotted line), and Zr+Zr (dashed line) collisions.} \label{figure2} \end{figure} \renewcommand{\floatpagefraction}{0.75} \begin{figure}[htbp] \includegraphics[keepaspectratio,width=0.45\textwidth]{drawt.pdf} \caption{The $t$ distribution of coherently produced J/$\psi$ at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au collisions for the $60-80\%$ centrality class (solid line), Ru+Ru collisions for the $47-75\%$ centrality class (dotted line), and Zr+Zr collisions for the $47-75\%$ centrality class (dashed line).} \label{figure3} \end{figure} \renewcommand{\floatpagefraction}{0.75} \begin{figure}[htbp] \includegraphics[keepaspectratio,width=0.45\textwidth]{compair_Au_Zr_Ru.pdf} \caption{The invariant mass spectrum of electron-positron pairs at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au collisions for the $60-80\%$ centrality class (solid line), Ru+Ru collisions for the $47-75\%$ centrality class (dotted line), and Zr+Zr collisions for the $47-75\%$ centrality class (dashed line). The experimental measurements~\cite{PhysRevLett.121.132301} in the $60-80\%$ centrality class from STAR are also plotted for comparison. The results are filtered to match the fiducial acceptance described in the text.} \label{figure4} \end{figure} Figure~\ref{figure2} shows the coherent J/$\psi$ yields, including the interference effect, as a function of the number of participants ($N_{\rm{part}}$) at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au (solid line), Ru+Ru (dotted line), and Zr+Zr (dashed line) collisions. The predictions in Au+Au and isobaric collisions all follow a trend of rise and then decline towards central collisions.
The increase of the cross section from peripheral to semi-peripheral collisions results from the larger photon flux at smaller impact parameters. However, the subsequent inversion of the trend originates from the destructive interference and the external disturbance from the overlap region, which prevail over the increasing photon flux towards central collisions. The turning point of the trend in Au+Au collisions is at a higher $N_{\rm{part}}$ value than those of the isobaric collisions, due to the differences in collision geometry and nuclear profile. The production rate in Ru+Ru collisions is 1.2 times that in Zr+Zr collisions, following the exact $Z^{2}$ scaling. The yields in isobaric collisions are dramatically smaller than those in Au+Au collisions, driven by the large $Z$ difference combined with the different nuclear profiles and collision geometries. The significant differences in production rate among the three collision systems lead to dramatically different enhancements with respect to the hadronic background, which provides a sensitive probe to test coherent photoproduction in HHIC. The differential distributions of coherently produced J/$\psi$ in the three collision systems are also studied in this paper. Fig.~\ref{figure3} shows the $t$ distributions of coherently produced J/$\psi$ at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au collisions for the $60-80\%$ centrality class (solid line), Ru+Ru collisions for the $47-75\%$ centrality class (dotted line), and Zr+Zr collisions for the $47-75\%$ centrality class (dashed line). The Mandelstam variable $t$ is expressed as $t = t_{\parallel} +t_{\perp}$, with $t_{\parallel} = -M_{\text{J}/\psi}^{2}/(\gamma^{2}e^{\pm y})$ and $t_{\perp} = - p_{T}^{2}$. At top RHIC energies, $t_{\parallel}$ is very small and is neglected here ($t \simeq -p_{T}^{2}$). The specified centrality classes are chosen to guarantee the same hadronic backgrounds in the three collision systems.
The rapid drops towards $p_{T} \rightarrow 0$ in the $t$ distributions come from the destructive interference between the two colliding nuclei, while the falling trends at relatively higher $p_{T}$ reveal a diffraction pattern, which is determined by the nuclear density distributions. The $t$ value at the peak position in Au+Au collisions is smaller than those in isobaric collisions, due to the larger impact parameter in Au+Au collisions for the selected centrality class. The slope of the falling trend at relatively higher $p_{T}$ in Au+Au collisions is steeper than those in isobaric collisions, owing to the larger nuclear profile of the Au nucleus. There is no difference in the $t$ distributions between the two isobaric systems, since we use the same nuclear density distributions for Zr+Zr and Ru+Ru. In comparison to coherent photon-nucleus interactions, the $Z^{4}$ dependence of the coherent photon-photon process makes it an even more significant signal to be tested in HHIC. The cross-section of photon-photon interactions is heavily concentrated in near-threshold pairs, which are not visible to existing detectors. Therefore, calculations of the total cross-section are not particularly useful; instead, we calculate the cross-section and kinematic distributions within acceptances that match those used by STAR. Fig.~\ref{figure4} shows the invariant mass spectrum of electron-positron pairs at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au collisions for the $60-80\%$ centrality class (solid line), Ru+Ru collisions for the $47-75\%$ centrality class (dotted line), and Zr+Zr collisions for the $47-75\%$ centrality class (dashed line). The results are filtered to match the fiducial acceptance at STAR: daughter track transverse momentum $p_{T} >$ 0.2 GeV/c, track pseudo-rapidity $|\eta| <$ 1, and pair rapidity $|y| <$ 1. The experimental measurements~\cite{PhysRevLett.121.132301} in the $60-80\%$ centrality class from STAR are also shown for comparison and are reasonably described by our calculation.
The mass distribution shapes for the three collision systems are almost the same, while the relative yield ratios are 7.9 : 1.5 : 1.0 for Au+Au, Ru+Ru, and Zr+Zr collisions, respectively. These large differences provide excellent experimental feasibility for testing the production mechanism in isobaric collisions. \section{Summary} In summary, we have performed calculations of J/$\psi$ production from coherent photon-nucleus interactions and electron-positron pair production from coherent photon-photon interactions in hadronic isobaric collisions. We show that the production rates of the coherent photon products at the top RHIC energy differ significantly between the isobaric systems, and the differences become even larger in comparison with Au+Au collisions. The differential $t$ distributions of coherently produced J/$\psi$ in isobaric collisions are also studied; they possess different shapes compared with that in Au+Au collisions due to the different nuclear profiles and collision geometries. The predictions for isobaric collisions presented in this paper provide a theoretical baseline for further experimental tests, which could be performed in the near future. \section{Acknowledgement} We thank Dr. Spencer Klein and Prof. Pengfei Zhuang for useful discussions. This work was funded by the National Natural Science Foundation of China under Grant Nos. 11775213, 11505180 and 11375172, the U.S. DOE Office of Science under contract No. DE-SC0012704, and MOST under Grant No. 2014CB845400. \nocite{*} \bibliographystyle{aipnum4-1}
\section{Introduction to Fracking} Fracking, the common term for hydraulic fracturing, dates back to the late 1940's with the first commercial applications in 1949. The original process was a secondary recovery method designed to enhance production in reservoirs where primary recovery had decreased to the point where production was no longer economical. By injecting a high viscosity fluid at high pressures into the reservoir rock, one or two large fractures were created that extended from the borehole. Also injected were large quantities of sand which ``propped'' the generated fractures open and allowed oil and gas to flow through the fractures to the borehole. We refer to this process as low-volume or traditional fracking. Traditional fracking is applied to conventional reservoirs with high permeability, typically sandstone reservoirs. Because of the high permeability, the oil and gas can readily migrate to the generated fractures and flow to the borehole where it can be extracted. Two developments in the 1980's allowed fracking to extract oil and gas from tight shale reservoirs where the natural formation permeability is too low for economic extraction using traditional methods. The first development was horizontal drilling. Because many formations are relatively thin and lie nearly horizontally, a vertical well can only access a limited volume of reservoir rock. By turning the well bore horizontal, using directional drilling, a single well can access a much larger volume. This reduces the cost of well drilling making the extraction more economical. The second development was ``slickwater.'' In many reservoirs the low natural matrix permeability prevents the flow of oil and gas to a well. In these reservoirs, significant flow can only occur along fractures. By injecting a low viscosity fluid at high pressure, a distributed network of fractures is generated. 
These fractures increase the permeability in the rock surrounding the borehole, allowing oil and gas to flow to the borehole. The combination of horizontal drilling with ``slickwater'' has changed the nature of fracking to a method that can be used to extract oil and gas from tight shales. Shales are important source rocks for oil and gas \citep{Tourtelot1979, Arthur1994}. It is estimated that a large fraction of gas and oil has been formed in black shales during anoxic periods \citep{Ulmishek1990, Klemme1991, Trabucho-Alexandre2012}. As oil and gas develop in a shale, they generate pressures sufficient to fracture the rock \citep{Olson2009}. Typical shales have extensive fracture networks and joint sets. If a shale is relatively old, there is a greater chance that the natural fractures have been sealed by the deposition of silica or carbonates \citep{Gale2007a}. Fracking seems to be effective only in tight reservoirs where the natural fractures have been sealed \citep{King2012}; examples are the Barnett Shale in Texas and the Bakken Shale in North Dakota. Large quantities of natural gas are now being extracted from the Barnett Shale and large quantities of oil are being extracted from the Bakken Shale. Fracking appears to be ineffective in increasing production from shales where the natural fractures are open; examples are the Antrim Shale in Michigan and the Monterey Shale in California. In both cases production of oil and gas continues to decrease. The fracture permeability of shales allows the migration of oil and gas into overlying strata, which typically have a higher permeability. In the overlying strata, the oil and gas flow into structural or stratigraphic traps. These traps are the traditional reservoirs from which the majority of oil and gas recovery has occurred. In order to understand the process and to optimize recovery, fracking injections are often monitored using sensitive seismometers \citep{Warpinski2013}.
In addition to recovery boreholes, one or two monitoring boreholes are often drilled. Sensitive seismometer arrays are placed along their lengths. The recorded microseismic data is used to determine the locations and magnitudes of the microseismic events that occur during a fracking treatment. This information can be used in real time to control the pressure, rate, and composition of the injected fluid. Because of the great depths, analysis of microseismic data is one of the primary methods used to understand the fracking process, yet only 3\% of the fracking treatments performed in 2009 were microseismically monitored \citep{Zoback2010}. An example of microseismic data recorded during a four-stage frack is shown in Figure \ref{fig:microseismicity}. Anisotropy plays a large role in fracking, with stress anisotropy being common. In a reservoir, the principal stresses are often not equal. Fractures grow perpendicular to the minimum principal stress and tend to be confined to the horizontal plane because the maximum principal stress is generally vertical. The distribution of microseismicity in Fig. \ref{fig:microseismicity} is clearly anisotropic. \begin{figure} \centering \includegraphics[width=3.3in]{microseismicity} \caption{Map of the epicenters of microseismicity associated with four fracks of the Barnett Shale \citep{Maxwell2011}. The colors correspond to the four injections and the axes are distances in meters from the monitoring well.} \label{fig:microseismicity} \end{figure} As fracking has spread to more populated areas, such as the Marcellus Shale in Pennsylvania, there has been an increase in public concern over the safety of the process. The most common concerns are the potential to generate large earthquakes and the potential to contaminate drinking water. The largest recorded earthquake from a fracking treatment had a magnitude around three \citep{Ellsworth2013a}, much too small to cause any significant damage.
However, the waste water generated during a fracking treatment is often re-injected into deep saline aquifers. There is increasing evidence that this re-injection causes larger earthquakes \citep{Keranen2013}. Additionally, because the microseismicity follows a power-law (Gutenberg-Richter) frequency-magnitude distribution \citep{Maxwell2011}, it is impossible to rule out the possibility of large events. It has been observed that drinking wells near fracking wells contain elevated levels of methane \citep{Osborn2011}. This contamination is not likely to be the result of the fracture network extending from the borehole at depth to the near-surface freshwater aquifer. Shale layers are typically three to five kilometers deep whereas freshwater aquifers are on the order of a hundred meters deep. Fractures running several kilometers would have seismic energy releases much greater than those observed during fracking treatments. These large fractures would also be undesirable from an engineering perspective. If the fracture network extends beyond the shale layer into the overlying strata, which often have a much lower permeability, significant leak-off can occur, which reduces the effectiveness of the frack. The observed contamination is likely due to poor quality or damaged cement well casings. Despite the limited understanding of the processes and the increased concern over the safety of fracking, there is very little publicly available research on the subject. This paper is part of an attempt to increase the availability of fracking research that can be used to understand the processes and risks associated with fracking. \section{Introduction to Percolation} Percolation theory has been used to study everything from conductivity \citep{Seager1974} to economics \citep{Cont2000}. Within geophysics, percolation theory has been used to study rock transport properties \citep{Gueguen1989}, earthquakes \citep{Otsuka1971, Sahimi1993}, and oil production \citep{King1999}.
At its core, percolation is the study of connectivity. The classical random percolation problem is as follows: given a random lattice of sites or bonds, what fraction of those bonds must be occupied for a cluster (a group of connected sites) to span from one side of the lattice to the other? This cluster is called the spanning cluster. The minimum occupation probability for which a spanning cluster exists on an infinite lattice (the percolation threshold) has been shown to be a classical critical point. Near the percolation threshold, the statistics of the clusters are governed by power-laws analogous to the critical point of the liquid-gas phase transition. For a more complete introduction to percolation theory see \cite{Stauffer1994}. There have been many variants of this initial model \citep{Sahimi1994}. The variant closest to our model is called invasion percolation \citep{Wilkinson1983} and has been used to study water flooding for oil recovery. Water flooding is a secondary recovery method in which water is injected into a reservoir to drive oil and gas to the production well. In practice, several injection wells are drilled along one edge of an oil field and several production wells are drilled on the opposite edge. Water is injected from the injection wells and drives oil or gas to the production wells. In their model, the sites in a lattice are assigned random numbers on the interval $[0,1)$. The sites along one edge are added to the perimeter of a growing cluster. The site on the perimeter with the smallest random number is invaded, and all sites adjacent to the invaded site are added to the perimeter. At each time step the site on the perimeter with the smallest random number is added to the single connected cluster. A later study involved injection from a single site with the cluster growing radially \citep{Wilkinson1984}.
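The classical spanning question can be illustrated with a short Monte-Carlo sketch (the lattice size, trial count, and function names are our own illustrative choices): on the square lattice, the left-to-right crossing probability rises sharply near the bond-percolation threshold $p_{c}=1/2$.

```python
import random

def spans(L, p, rng):
    """True if open bonds (each present with probability p) connect the left
    and right edges of an L x L site grid (union-find cluster labelling)."""
    parent = list(range(L * L))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for y in range(L):
        for x in range(L):
            i = y * L + x
            if x + 1 < L and rng.random() < p:  # horizontal bond
                parent[find(i)] = find(i + 1)
            if y + 1 < L and rng.random() < p:  # vertical bond
                parent[find(i)] = find(i + L)

    left = {find(y * L) for y in range(L)}
    right = {find(y * L + L - 1) for y in range(L)}
    return bool(left & right)

rng = random.Random(1)
for p in (0.3, 0.5, 0.7):
    frac = sum(spans(32, p, rng) for _ in range(200)) / 200
    print(f"p = {p:.1f}: crossing fraction = {frac:.2f}")
```

Well below threshold the crossing fraction is essentially zero and well above it essentially one; only near $p_{c}$ does it take intermediate values on a finite lattice.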
There are two major variants of the model, trapping and non-trapping, depending on whether the defending fluid is incompressible (trapping) or compressible (non-trapping) \citep{Knackstedt2002}. Non-trapping invasion percolation has been shown to belong to the same universality class as random percolation \citep{Knackstedt2002}. One of the properties of invasion percolation is that it displays self-organized criticality, i.e., the dynamics take the system to a critical state. For a review of invasion percolation see \cite{Ebrahimi2010}. Despite its relative simplicity, there are many aspects of percolation theory which are still unknown or not well studied. One of those aspects, addressed in this paper, is the role of anisotropy. Anisotropy is commonly introduced by occupying bonds in the horizontal direction with one probability $p_h$ and bonds in the vertical direction with another probability $p_v$. The critical line for 2D bond percolation on a square lattice was determined by \cite{Sykes1963}. Since that time, renormalization approaches have been used to explore anisotropic percolation both on and away from criticality \citep{Ikeda1979, Lobb1981, Kim1992}. The primary experimental work has been done in the field of material conductivity \citep{Smith1979, Mendelson1980, Balberg1983}. Initially, experimental results suggested that the introduction of anisotropy would cause changes to previously universal critical exponents \citep{Balberg1987}. More recent work suggests that anisotropic percolation should share the universal isotropic exponents \citep{Han1991, Celzard2003}. The applicability of these results may be limited, as our model provides a method for exploring the cross-over between one- and two-dimensional percolation. This cross-over requires a change in the critical exponents from their values in 2D. Similar cross-overs in percolation models have been studied previously \citep{Chame1984, Sotta2003}.
\cite{Herrmann1993a} studied anisotropic fracture propagation using a lattice of elastic beams subject to tensional failure. Fractal branching structures were obtained, but the density of fractures was much lower than in percolation models. \section{Model} Our model is an extension of the radial invasion model first studied by \cite{Wilkinson1984}. The isotropic version of our model has been given previously \citep{Norris2014}. The reservoir rock is assumed to have a network of natural fractures that have been sealed by deposition. We assume a point injection of a low viscosity fluid that breaks the seals as the fluid flows from the point of injection. We neglect the viscous pressure drop during flow and assume the fluid breaks the weakest seals as it flows through the matrix of preexisting fractures. The sealed fractures are represented by a lattice of bonds. Each bond is assumed to have an effective strength ($s$), which we represent with a random number. For simplicity we use a 2D square lattice of bonds. Our justification for the applicability of the 2D model is our interest in layered sedimentary deposits that have remained nearly horizontal. In many cases the target reservoir strata (the black shale) is relatively thin (say 100$\;$m) and the horizontal well is drilled within this strata. We hypothesize that fractures are confined to this target strata. The anisotropy we model is due to the anisotropy of the stress field in the layer. We assume the least principal stress is in the $y$-direction and the intermediate principal stress is in the $x$-direction. Thus the induced fractures will tend to propagate in the $x$-direction. Fractures tend to be oriented perpendicular to the direction of the least principal stress, the $y$-direction, and for this reason horizontal wells are drilled in this direction.
In order to model the preferential fracture orientation due to the existing stress field in the rock, we assign random numbers (effective strengths) to bonds oriented in the $x$-direction on the interval $[0,1)$ and bonds oriented in the $y$-direction on the interval $[0,a)$ with $a>1$. When $a=1$ the model is isotropic. When $a=\infty$ the model becomes one-dimensional, with propagation only in the $x$-direction. Additionally, the tuning of $a$ provides a simple way of exploring the cross-over from 2D ($a=1$) to 1D ($a \rightarrow \infty$) percolation. The variable $a$ is a measure of the anisotropy and gives the relative likelihood of a fracture propagating in a given direction. Thus in a simulation with $a=2$, the fracture network is twice as likely to grow in the $x$-direction as in the $y$-direction. We relate this choice of anisotropy to the stresses in the rock: \begin{equation} \sigma_{xx} = a \sigma_{yy} \end{equation} We can then interpret $a$ as the ratio of the two principal horizontal stresses in the rock. Fluid is injected from a single site and the fracture network can grow in one of four directions, as illustrated in Figure \ref{fig:model}a. These bonds are assigned anisotropic effective strengths as explained previously. The weakest bond (smallest $s$) fails and the fluid-filled fracture network grows in that direction, as illustrated in Figure \ref{fig:model}b. The bond fails and fluid flows into the opened crack due to the pressure difference between the injected fluid and the surrounding rock. The new nearest-neighbor bonds are assigned effective strengths and the process repeats. At each time step, the weakest bond on the perimeter of the growing fracture network fails, the fluid-filled fracture network grows in that direction, and new perimeter bonds are given effective strengths. Although the effective bond strength is not assigned until the bond joins the perimeter, the bond strength does not change once assigned (quenched disorder).
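A minimal sketch of this growth rule in code (our own site-based bookkeeping, not the authors' implementation; bonds whose far end is already part of the cluster are discarded when they reach the top of the heap, which mirrors the internal-bond removal discussed below):

```python
import heapq
import random

def grow_cluster(M, a, seed=0):
    """Invade M bonds from a point source on the square lattice.  Bonds in the
    x-direction draw strengths from [0, 1), bonds in the y-direction from
    [0, a); the weakest perimeter bond fails at each step (quenched disorder:
    each bond's strength is drawn once, when it joins the perimeter)."""
    rng = random.Random(seed)
    invaded = {(0, 0)}
    heap = []  # entries: (strength, far-end site of the perimeter bond)

    def add_perimeter_bonds(site):
        x, y = site
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (x + dx, y + dy)
            if nbr not in invaded:
                scale = a if dx == 0 else 1.0  # y-bonds are stronger on average
                heapq.heappush(heap, (rng.random() * scale, nbr))

    add_perimeter_bonds((0, 0))
    bonds = 0
    while bonds < M:
        s, site = heapq.heappop(heap)
        if site in invaded:
            continue  # internal bond: both ends already in the cluster, drop it
        invaded.add(site)
        add_perimeter_bonds(site)
        bonds += 1
    return invaded

cluster = grow_cluster(2000, a=4)
xs = [x for x, _ in cluster]
ys = [y for _, y in cluster]
print("x-extent:", max(xs) - min(xs), "  y-extent:", max(ys) - min(ys))
```

With $a>1$ the printed x-extent is typically noticeably larger than the y-extent, reproducing the elongation of the anisotropic clusters.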
If at any time the two ends of a bond belong to the growing fracture network, as shown in Figure \ref{fig:model}c, the bond is removed from the simulation, as shown in Figure \ref{fig:model}d. Because the pressure differences within the fluid-filled fracture network are much smaller than the pressure difference between the fluid and the surrounding rock, these bonds are much less likely to open and can be removed from the simulation. This bond-removal step leads to a non-intersecting (loopless) fracture network. In our simulations we do not include an external boundary, and the fracture network can grow indefinitely. Typically, we grow a cluster until a specified number of bonds have been added to the cluster. This can be thought of as limiting the volume of fluid injected during the fracking treatment. We refer to the number of bonds in a cluster as the mass ($M$) of the cluster. \begin{figure} \centering \includegraphics[width = 3.3in]{model.pdf} \caption{Illustration of our model. (a) Fluid is injected at the site shown. The four bonds to adjacent sites are also shown; the weakest bond (smallest $s$) is a solid line. (b) Three bonds to adjacent sites are added; the weakest of the six available bonds is a solid line. (c) Step b is repeated. (d) Step b is repeated, but the internal bond is removed.} \label{fig:model} \end{figure} In our and other invasion percolation models, growth occurs in bursts: the failure of a relatively strong bond is followed by the failure of a series of relatively weak bonds. We previously \citep{Norris2014} introduced a definition of a burst involving a waterlevel. A waterlevel is chosen. A burst begins when the strength of a failed bond falls below the chosen waterlevel. The burst continues until a failed bond's strength is greater than the chosen waterlevel. We refer to the number of bonds in a burst as the mass ($m_b$) of the burst. This definition is illustrated in Figure \ref{fig:burst}.
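The waterlevel definition translates directly into code (a sketch; the function name and the toy sequence are our own):

```python
def burst_masses(strengths, waterlevel):
    """Split a time-ordered sequence of failed-bond strengths into bursts.
    A burst is a maximal run of consecutive strengths below the waterlevel;
    its mass m_b is the number of bonds in the run."""
    masses, run = [], 0
    for s in strengths:
        if s < waterlevel:
            run += 1            # bond joins the current burst
        elif run:
            masses.append(run)  # a strength above the waterlevel ends the burst
            run = 0
    if run:
        masses.append(run)      # sequence ended mid-burst
    return masses

# Toy sequence of failed-bond strengths: two bursts of mass 3 and 1.
sequence = [0.2, 0.1, 0.4, 0.9, 0.7, 0.3, 0.8]
print(burst_masses(sequence, waterlevel=0.5))  # -> [3, 1]
```

Applied to the failure sequence of a simulated cluster, the returned masses play the role of the microseismic event sizes.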
By choosing a waterlevel just below the strength of the strongest failed bond in the fracture network, we obtain a power-law distribution of bursts \citep{Norris2014}. In our and other percolation models the strength of the strongest failed bond lies just below the percolation threshold of the lattice. This makes sense because, in the absence of external constraints (stopping growth at a specified mass), the fracture network would grow to infinite size, percolating the infinite lattice. The power-law distribution of bursts lets us interpret bursts as the microseismic events generated during fracking treatments. \begin{figure}[] \centering \includegraphics[]{burst_definition} \caption{Illustration of our definition of a burst. A typical sequence of 25 opened bond strengths is given. A burst begins when an opened bond strength is below the waterlevel and ends when the strength is above the waterlevel. Three bursts with masses $m_b=4$, 1, and 13 are illustrated.} \label{fig:burst} \end{figure} \section{Results} We have performed simulations for several different values of the anisotropy ($a$). These simulations take seconds on a desktop computer, even for the largest runs, making our model ideal for exploring parameter space. Our simulations are currently memory limited due to the large number of perimeter bonds that must be stored. One of the goals of this paper is to understand how the simulated microseismic events (bursts) are related to the underlying fracture network and the role anisotropy plays in the structure of both. We will first examine the statistics of grown clusters (simulated fracture networks) and then examine the statistics of bursts. \subsection{Cluster Statistics} In our model, the clusters represent the connected fracture network generated during a fracking treatment. It is important to understand the properties of this network to minimize risk and optimize production.
For this paper, we are primarily interested in determining how anisotropy affects cluster properties. \subsubsection{Images of Clusters} To get an idea of the general geometry of and variations between cluster realizations, we have generated three relatively small ($M=10,000$) clusters for each of two anisotropies ($a=1$ and $a=4$). These clusters are shown in Figure \ref{fig:images_of_clusters}. The four largest bursts in each cluster have been colored, while the black bonds are smaller bursts and non-bursting bonds. \begin{figure*}[] \centering \includegraphics[]{cluster_plot.png} \caption{Three examples of clusters with burst structures for $a=1$ and for $a=4$. In each realization $M=10,000$. The four largest bursts are shown in color. Smaller bursts and non-burst bonds are shown in black. The injection site for each cluster is shown as a star.} \label{fig:images_of_clusters} \end{figure*} The clusters become more elongated with increasing anisotropy. This is expected, as large anisotropies lead to weaker bonds in the $x$-direction. These weaker bonds provide the most likely paths for growth. As with other percolation models, there are many regions within a cluster that are completely surrounded by the cluster. The bonds on the boundaries of these cutoff regions are relatively strong, with strengths greater than approximately $s=0.5$, and prevent further expansion of the cluster into the cutoff region. Bonds within the cutoff regions that are not on the boundaries can have any value of $s$, $0<s<1$. These cutoff regions are most easily observed in simulations where $a=1$. This is similar to the prevention of loops in self-avoiding random walks. Cluster growth is often asymmetrical about the point of injection despite a homogeneous distribution of strengths. In a few cases, growth occurs nearly in a single direction. This shows that while the distribution may be symmetrical, a single realization may not exhibit that symmetry.
It also shows that even small degrees of heterogeneity in an otherwise homogeneous material can result in large-scale inhomogeneous structures. \subsubsection{Occupation Probability} One property of interest is the distribution of bond strengths. In the isotropic case, we found that the frequency density ($f=\frac{\mathrm{d}N}{\mathrm{d}s}$) of bond strengths in the cluster shows a sharp cutoff near the critical point of the lattice \citep{Norris2014}. We have generated a single cluster of mass $M=10^7$ for each of six anisotropies $a=1,2,4,8,16,100$ and calculated the frequency density of bond strengths as shown in Fig. \ref{fig:bond_strength_frequency_density}. \begin{figure*}[] \centering \includegraphics[]{hist_combined} \caption{Frequency densities of open bond strengths $f\left(s\right)$ are given as a function of bond strength $s$ for several values of $a$.} \label{fig:bond_strength_frequency_density} \end{figure*} As in the isotropic case, the distributions of bond strengths for all the anisotropies considered show a sharp cutoff. The cutoff moves closer to 1 as the anisotropy increases. Additionally, as the anisotropy increases, the distribution becomes more step-like. The critical line for 2D anisotropic bond percolation has been given previously from graph theory \citep{Sykes1963} and renormalization \citep{Arovas1983, Chaves1979}. The critical line in terms of $p_x$ and $p_y$, the occupation probabilities for bonds oriented in the $x$ and $y$ directions, is \begin{equation} p_x + p_y = 1 \label{eq:critical_line} \end{equation} In the isotropic case this equation gives $p_c = p_x = p_y = 0.5$. In our case we do not have two different occupation probabilities, but rather a largest opened-bond strength, which in the isotropic case was equal to the critical occupation probability. To compare the largest strengths in the anisotropic case we rewrite Eq.
\eqref{eq:critical_line} in terms of our anisotropy parameter $a$: \begin{equation} p + \frac{p}{a} = 1 \Rightarrow p =\frac{a}{a+1} \label{eq:critical_curve} \end{equation} To determine whether the sharp cutoffs observed in Fig. \ref{fig:bond_strength_frequency_density} are near the critical value given in Eq. \eqref{eq:critical_line}, we look for the largest strength in a cluster of mass $M=10^7$. Because small clusters often contain strengths above the cutoff, we determine the largest bond strength after an initial transient of 10,000 bonds. The largest bond strength as a function of inverse anisotropy is shown in Fig. \ref{fig:strength_cutoff}, along with the critical curve predicted by Eq. \eqref{eq:critical_curve}. We find excellent agreement between the cutoffs and the critical values predicted by Eq. \eqref{eq:critical_curve}, indicating that the largest bond strength is near the critical occupation probability for the lattice, even with the introduction of anisotropy. \begin{figure}[] \centering \includegraphics[]{critical_curve} \caption{Bond strength cutoff as a function of 1/anisotropy ($\frac{1}{a}$). Solid circles are from the data given in Fig. \ref{fig:bond_strength_frequency_density} and the solid line is the critical point prediction from Eq. \eqref{eq:critical_curve}.} \label{fig:strength_cutoff} \end{figure} \subsubsection{Fractal Dimension} One common measure used to distinguish between clusters of different types is the fractal dimension. To our knowledge the fractal dimension of anisotropic percolation clusters has not been measured previously. We follow the convention presented by \cite{Bunde2012} and define the fractal dimension ($D$) as the scaling exponent relating the mass $M$ of the cluster to the radius $r$ from its center \begin{equation} M\left(r\right) \sim r^D \end{equation} Because we are interested in how well the reservoir is connected to the borehole, we measure the distance $r$ from the injection site (the borehole).
In general, the borehole is not located at the center of mass of the cluster, so different results may be obtained if distances are measured from the center of mass. To obtain good statistics, we generate 1000 clusters of mass $M=10^7$ for each value of the anisotropy. For each cluster we center circles of varying radii $\{r_1, r_2,\ldots,r_i\}$ on the injection site. For each circle we determine the mass of the cluster (number of bonds) contained within it, $\{M_1, M_2,\ldots,M_i\}$. We then take the logarithm of the radius and mass data and do a least-squares fit of the aggregate log-log data to \begin{equation} \log{M} = D \log{r} + C \label{eq:fractal_dimension} \end{equation} The average cluster mass as a function of radius, along with the fit, is shown for several different anisotropies on a log-log plot in Fig. \ref{fig:fractal_dimension}. \begin{figure*}[] \centering \includegraphics[]{fractal_dimension} \caption{Dependence of the number of opened bonds $M$ contained within a circle of radius $r$ centered on the injection site on the radius $r$ for several values of the anisotropy parameter $a$. The best fits of Eq. \eqref{eq:fractal_dimension} to the data over the region between the vertical lines gives the fractal dimension $D$.} \label{fig:fractal_dimension} \end{figure*} For large values of $r$ the cluster mass flattens out, as entire clusters are contained within a circle of radius $r$ and the cluster begins to look more point-like. For small values of $r$ the discrete nature of the lattice causes variations in the masses. Because of these two factors we look for a linear region free of both types of variation. The region used for the fit is shown in Fig. \ref{fig:fractal_dimension}. Because the choice of this region is somewhat arbitrary, the uncertainty given is the uncertainty in the fit. We see that the fractal dimension depends only weakly on the anisotropy parameter; it differs by only $0.07$ between $a=1$ and $a=100$.
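The fit of Eq. \eqref{eq:fractal_dimension} reduces to an ordinary least-squares slope on log-log data. A minimal sketch is given below, checked on a compact (non-fractal) filled lattice disk, for which $D=2$; the function name and the test cluster are illustrative and not the paper's analysis code:

```python
import math

def fractal_dimension(points, radii):
    """Least-squares estimate of D from M(r) ~ r^D, with distances
    measured from the injection site, taken here to be the origin."""
    logs = []
    for r in radii:
        m = sum(1 for (x, y) in points if math.hypot(x, y) <= r)
        if m > 0:
            logs.append((math.log(r), math.log(m)))
    n = len(logs)
    sx = sum(x for x, _ in logs)
    sy = sum(y for _, y in logs)
    sxx = sum(x * x for x, _ in logs)
    sxy = sum(x * y for x, y in logs)
    # slope of the log M vs log r least-squares line
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# sanity check: a filled disk of lattice sites is two-dimensional, D = 2
disk = [(x, y) for x in range(-60, 61) for y in range(-60, 61)
        if x * x + y * y <= 60 * 60]
D = fractal_dimension(disk, radii=[5, 10, 20, 40])
```

The choice of fitting radii plays the role of the linear region marked in Fig. \ref{fig:fractal_dimension}.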
Initially we thought that we could explore the cross-over to one-dimensional percolation; however, we see that anisotropies orders of magnitude greater than those observed in reservoirs are required to significantly alter the fractal dimension. \subsection{Burst Statistics} Having quantified the anisotropic clusters generated by our model, we now turn our attention to bursts. It is important to understand how the properties of the bursts are related to the properties of the underlying cluster. In fracking, this translates into understanding how the properties of the microseismic data are related to the underlying reactivated fracture network. Our definition of a burst requires the specification of a waterlevel just below the largest bond strength in the cluster. In this paper, we have shown that the largest bond strength in the cluster is near the critical occupation probability for the lattice. How close the largest bond strength is to the critical occupation probability depends on the size of the cluster. We have found that as clusters grow larger, the largest bond strength becomes asymptotically close to the critical occupation probability. For the six clusters shown in Fig. \ref{fig:images_of_clusters} we have determined the locations and sizes of the bursts using waterlevels of $0.47$ and $0.77$ for $a=1$ and $a=4$, respectively. We have plotted these bursts in Fig. \ref{fig:burst_images}. The burst markers are colored and scaled according to the size $m_b$ of the burst. \subsubsection{Images of Bursts} \begin{figure*}[] \centering \includegraphics[]{burst_plot.pdf} \caption{Epicenter locations of the bursts in the six realizations given in Fig. \ref{fig:images_of_clusters}. Different size circles correspond to different ranges of burst masses $m_b$.} \label{fig:burst_images} \end{figure*} We see that the bursts and clusters cover roughly the same area and have a similar outline. This indicates that bursts are useful in determining the extent of cluster growth.
This suggests that bursts could be used in real time to monitor the location and direction of cluster growth. We see that bursts of different sizes are more or less evenly distributed over the cluster, rather than larger bursts clustering in one location and smaller bursts in another. The bursts shown are just a small sample and are meant to give a qualitative illustration of the spatial distribution of bursts. To provide a more quantitative description of bursts, we have determined the bursts for the same set of 1000 clusters of mass $M=10^7$ used to calculate the fractal dimensions shown in Fig. \ref{fig:fractal_dimension}. Because these are relatively large clusters, a waterlevel very close to the critical occupation probability must be chosen. The critical occupation probabilities and chosen waterlevels for the six anisotropies considered are given in Table \ref{table:waterlevels}. \begin{table} \centering \begin{tabular}{l c c} Anisotropy & $p_c$ & Waterlevel \\ \hline \\ [-1.5ex] $a=1$ & 0.50000 & 0.49950 \\ $a=2$ & 0.66667 & 0.66610 \\ $a=4$ & 0.80000 & 0.79950 \\ $a=8$ & 0.88889 & 0.88830 \\ $a=16$ & 0.94118 & 0.94060 \\ $a=100$ & 0.99010 & 0.98995 \\ \end{tabular} \caption{For the anisotropy values $a$ we consider, we give the critical probabilities from Eq. \eqref{eq:critical_curve} and our waterlevel values.} \label{table:waterlevels} \end{table} Using these values, we determine the non-cumulative burst frequency-size distribution for each realization. For each anisotropy considered, we aggregate the data. For larger bursts the aggregate data are sparse, with zero or one bursts of a given mass. In this case, the standard treatment is to bin the data \citep{Malamud2004}. For each anisotropy considered, we bin the data and do a linear least-squares fit of the log-log data to the power-law distribution \begin{equation} N_b \sim m_b^{-\tau} \label{eq:power_law} \end{equation} For larger anisotropies there is a rollover for small bursts.
This region becomes significant for $a=16$ and $a=100$ and has been excluded from the fit. In Fig. \ref{fig:burst_size_distribution}, we give the binned data and fit for each value of anisotropy considered. \subsubsection{Frequency-Size Distributions} \begin{figure*}[] \centering \includegraphics[]{burst_size_dist} \caption{Dependence of the number of bursts $N_b$ with mass $m_b$ on $m_b$. These data are aggregated from 1000 runs with $M=10^7$ for each run. The best fit correlation with Eq. \eqref{eq:power_law} is shown for each value of $a$.} \label{fig:burst_size_distribution} \end{figure*} For each anisotropy value considered, we find excellent agreement with Eq. \eqref{eq:power_law}. The exponent takes values near $1.46$, suggesting that the slope is unchanged by the introduction of anisotropy. The rollover observed for $a=16$ and $a=100$ appears to grow with anisotropy; as the anisotropy becomes very large this rollover may dominate the distribution. Because this is a non-cumulative distribution, the b-value for bursts generated by our model is $b=\tau-1$, giving $b \approx 0.46$ for all anisotropies considered. This is significantly lower than the $b=2$ typically reported for microseismic data \cite{Maxwell2011}. More recent work \cite{Tafti2013} on injection into geothermal reservoirs has found $b \approx 1.3$. We note that while the b-value of our model is significantly lower than those observed during fracking treatments, our simulation is only two-dimensional, and moving to three dimensions might significantly change the b-value. \subsubsection{Burst Fractal Dimension} To obtain a comparison between clusters and bursts, we calculate the fractal dimension of the bursts generated by our model.
To obtain the fractal dimension we use the correlation function \citep{Hirata1987} \begin{equation} C\left(r\right) = \frac{2}{N\left(N-1\right)}N_r\left(R < r\right), \label{eq:correlation} \end{equation} where $N$ is the total number of events and $N_r\left(R < r\right)$ is the number of pairs of events whose separation is less than $r$. If the burst distribution is fractal, the correlation function should follow a power law \begin{equation} C\left(r\right) \sim r^D. \label{eq:correlation_dimension} \end{equation} The fractal dimension $D$ obtained in this way is sometimes called the correlation dimension and has been used previously to compare the fractal dimension of earthquakes to the fractal dimension of percolation clusters \cite{Tafti2013}. Using the same realizations as before, we calculate the correlation function defined in Eq. \eqref{eq:correlation} for the burst hypocenters. For each realization we fit the linear region to Eq. \eqref{eq:correlation_dimension}. If a realization does not have a large enough linear region, or has two distinct linear regions, it is discarded. For the six anisotropies considered, we have calculated the mean and standard deviation of the fractal dimensions of the remaining realizations. These are given in Table \ref{table:fractal_dimension}. \begin{table} \centering \begin{tabular}{l c} Anisotropy & Burst Fractal Dimension (D) \\ \hline \\ [-1.5ex] $a=1$ & $1.84 \pm 0.11$ \\ $a=2$ & $1.82 \pm 0.15$ \\ $a=4$ & $1.81 \pm 0.17$ \\ $a=8$ & $1.79 \pm 0.16$ \\ $a=16$ & $1.77 \pm 0.17$ \\ $a=100$ & $1.49 \pm 0.25$ \\ \end{tabular} \caption{The mean and standard deviations of the burst fractal dimensions $D$ are given for the anisotropy values $a$ that we consider.} \label{table:fractal_dimension} \end{table} In all cases, the average fractal dimension of the bursts is less than the fractal dimension of the clusters.
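The pair counting of Eq. \eqref{eq:correlation} is straightforward to implement; a minimal sketch (the function name is ours, and the quadratic pair loop is fine for the event counts involved here):

```python
import math

def correlation(events, r):
    """Two-point correlation C(r): twice the number of event pairs
    separated by less than r, divided by N(N-1)."""
    n = len(events)
    pairs = 0
    for i in range(n):
        xi, yi = events[i]
        for j in range(i + 1, n):
            xj, yj = events[j]
            if math.hypot(xi - xj, yi - yj) < r:
                pairs += 1
    return 2.0 * pairs / (n * (n - 1))
```

The correlation dimension is then the slope of $\log C(r)$ versus $\log r$ over the linear region, obtained by the same least-squares fit used for the cluster fractal dimension.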
We also find that the difference increases with increasing anisotropy. However, for relatively small anisotropies the variation in fractal dimension is small and might not be significant in practical applications. \subsubsection{Burst Anisotropy} We also want to determine how the bursts are related to the underlying anisotropy in the bond strength distribution. As a simple measure of burst anisotropy, we calculate the standard deviations $\mathrm{s}_x$ and $\mathrm{s}_y$ of the $x$ and $y$ locations of the burst hypocenters relative to the injection point. This measure gives the aspect ratio of the spatial distribution of bursts. Using the same 1000 realizations of mass $M=10^7$, we aggregate the burst hypocenters and compute the means and standard deviations. In performing the averages, we found that the results did not change significantly if the bursts were weighted by their size. For simplicity, the results presented here are not weighted by burst size. Additionally, we calculated the anisotropy using a method introduced by \cite{Family1985}. This method utilizes the ratio of the eigenvalues ($\lambda_1$ and $\lambda_2$) of the gyration tensor and gives an anisotropy in the range 0 (completely anisotropic) to 1 (fully isotropic). In order to compare this measure of anisotropy to our results, we take the square root of the inverse of the anisotropy defined by \cite{Family1985}. For the simulations discussed above, we have determined the mean values of the ratio of standard deviations $\frac{\mathrm{s}_x}{\mathrm{s}_y}$ and the mean values of the square root of the ratio of eigenvalues $\sqrt{\frac{\lambda_2}{\lambda_1}}$. These results are given in Table \ref{table:anisotropies} for several values of the anisotropy parameter $a$. The two methods give similar values, which are somewhat less than the anisotropy $a$ of the strength distribution. In Fig. \ref{fig:anisotropies} we give the dependence of $\frac{\mathrm{s}_x}{\mathrm{s}_y}$ on $a$.
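Both anisotropy measures can be computed directly from the hypocenter coordinates. A minimal sketch follows, taking the injection point as the origin; the function name is ours, the gyration tensor is taken about the center of mass, and the returned eigenvalue ratio is the larger eigenvalue over the smaller, matching the $\sqrt{\lambda_2/\lambda_1}$ convention of Table \ref{table:anisotropies}:

```python
import math

def anisotropy_measures(points):
    """Return (s_x/s_y, sqrt(lambda_2/lambda_1)) for 2D event locations.

    s_x and s_y are standard deviations about the injection point (origin);
    lambda_2 >= lambda_1 are the gyration-tensor eigenvalues.
    """
    n = len(points)
    sx = math.sqrt(sum(x * x for x, _ in points) / n)
    sy = math.sqrt(sum(y * y for _, y in points) / n)

    # gyration tensor about the center of mass
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    txx = sum((x - cx) ** 2 for x, _ in points) / n
    tyy = sum((y - cy) ** 2 for _, y in points) / n
    txy = sum((x - cx) * (y - cy) for x, y in points) / n

    # eigenvalues of the symmetric 2x2 tensor (assumes a non-degenerate cloud)
    tr = txx + tyy
    det = txx * tyy - txy * txy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam_big, lam_small = tr / 2 + disc, tr / 2 - disc
    return sx / sy, math.sqrt(lam_big / lam_small)
```

On a point cloud stretched by a factor of 4 in $x$, both measures return 4, as expected.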
We find a good fit to a linear dependence. \begin{table} \centering \begin{tabular}{l c c} Anisotropy & $\dfrac{\mathrm{s}_x}{\mathrm{s}_y}$ & $\sqrt{\dfrac{\lambda_2}{\lambda_1}}$ \\ \hline \\ [-1.5ex] $a=1$ & 0.987 & 0.980 \\ $a=2$ & 1.843 & 1.851 \\ $a=4$ & 3.526 & 3.527 \\ $a=8$ & 6.589 & 6.589 \\ $a=16$ & 12.73 & 12.74 \\ $a=100$ & 78.29 & 78.94 \\ \end{tabular} \caption{The mean values of the ratio of the standard deviations $\frac{\mathrm{s}_x}{\mathrm{s}_y}$ and the mean values of the square root of the ratio of eigenvalues $\sqrt{\frac{\lambda_2}{\lambda_1}}$ are given for several values of the anisotropy parameter $a$.} \label{table:anisotropies} \end{table} \begin{figure} \centering \includegraphics[]{anisotropies} \caption{Dependence of the burst anisotropy $\frac{\mathrm{s}_x}{\mathrm{s}_y}$ on the bond strength anisotropy $a$. A linear correlation of the data is also shown.} \label{fig:anisotropies} \end{figure} \section{Discussion} High volume fracking allows the extraction of oil and gas from tightly sealed shale reservoirs. During the initial formation of the reservoirs, natural hydraulic fractures were generated by the high pressures associated with gas and oil generation. With time, these natural hydraulic fractures were sealed by chemical deposition. The injection of a high-pressure, low-viscosity fluid penetrates the formation, reopening the sealed fractures. This allows the migration of oil and gas to the horizontal injection/production wells. In order to model the injection process, we utilized invasion percolation. We assume a 2D square lattice of bonds. These bonds represent the preexisting array of natural fractures. Each bond is assigned a random strength and the weakest bond breaks at each time step. This represents the migration of the injected fluid through the sealed network of natural fractures. The migration occurs in bursts as the fluid enters regions of weak bonds.
We associate these bursts with the microseismicity that occurs during fracking injections. A primary focus of this paper is the role of anisotropic strengths in injection patterns. The examples of microseismicity associated with four fracking injections into the Barnett Shale, illustrated in Fig. \ref{fig:microseismicity}, clearly show strong anisotropy. It is of interest to compare this microseismicity with the modeled microseismicity given in Fig. \ref{fig:burst_images}. The stage 1 injection in Fig. \ref{fig:microseismicity} is similar to the $a=4$ injections illustrated in Fig. \ref{fig:burst_images}. Both the modeled and the observed microseismicity exhibit Gutenberg-Richter frequency-magnitude statistics; however, the b-values differ. Our model certainly involves a number of serious approximations. It utilizes a two-dimensional square grid. Actual fluid injections are clearly three-dimensional, but seismic observations indicate that the flow tends to be confined to a relatively narrow horizontal layer. The preexisting natural fractures that the injection reopens tend to have spacings in the range 0.1 to 1 meter, but are only approximated by a square grid. Our model neglects the pressure drops associated with the fluid flow. This is probably a good approximation between ``bursts'' (microseismic events), but significant pressure drops may occur during a ``burst.'' Our model also assumes that the assigned bond strengths are uncorrelated in space; some spatial correlations may be expected in actual reservoirs. Despite these assumptions, the geometries of the invading cluster and the associated modeled microseismicity are certainly qualitatively similar to the patterns of injection indicated by observed microseismicity. As discussed in our introduction, high volume fracking is successful only if the natural fractures are largely sealed. Unsealed fractures allow the injected fluid to flow through them without producing the distributed damage required for production.
An interesting future extension of this model would be to include some open fractures prior to injection, in order to quantify the problems associated with fluid leakage through these fractures. \begin{acknowledgements} The research of JQN and JBR has been supported by a grant from the US Department of Energy to the University of California, Davis (\#DE-FG02-04ER15568). \end{acknowledgements}
\section{Introduction} There is an extensive literature on two-stage and multistage voting. Although some of this study exists within economics, multistage elections and runoffs have been greatly influential in computational social choice during the past decade, due to such work as that of Elkind and Lipmaa~\shortcite{elk-lip:c:hybrid-manipulation} and Conitzer and Sandholm~\shortcite{con-san:c:nonexistence}. Particularly interesting recent work in this line has been done by Narodytska and Walsh~\shortcite{nar-wal:c:two-stage}. They focus on manipulation of election systems of the form \xtheny{$X$}{$Y$}, i.e., an initial-round election under voting rule $X$, after which if there are multiple winners just those winners go on to a runoff election under voting rule $Y$, with the initial votes now restricted to the remaining candidates. The question at issue is whether a given manipulative coalition can vote in such a way as to make a distinguished candidate win (namely, win in the initial round if there is a unique winner in the initial round, or if not, then be a winner of the runoff). Narodytska and Walsh~\shortcite{nar-wal:c:two-stage} study the computational complexity of this question. They strongly address the issue of how the manipulation complexity of $X$ and $Y$ affect the manipulation complexity of \xtheny{$X$}{$Y$}. Viewing P as being easy and NP-hardness as being hard, they show that every possible combination of these manipulation complexities can be achieved for $X$, $Y$, and \xtheny{$X$}{$Y$}. The present paper focuses on the complexity of \xthenx{$X$}. That is, we are focused on the case where $X$ is so valued as an election system that if $X$ selects a unique winner, our election is over and we have our winner. However, if $X$ in the initial round has tied winners, then we take just those winners and subject them to a runoff election, again using system $X$. 
(Votes in this second election will be over only the candidates who made it to the second round.) We are interested in the case in which the second-round votes are simply the initial-round votes restricted to the remaining candidates, and the case in which revoting is allowed in the second round. Real-world examples exist of such same-system runoff elections. In general elections in North Carolina and many districts of California, election law specifies that if there are two or more candidates tied for being the winner in the initial plurality election, a plurality runoff election is held among just those candidates~(\cite{north-carolina:law:ties,california:law:ties-in-elections}). So (\xthenx{Plurality})-with-revoting is being used. Although Narodytska and Walsh~\shortcite{nar-wal:c:two-stage} for \xtheny{$X$}{$Y$} elections showed that all combinations of P and NP-hardness for $X$, $Y$, and \xtheny{$X$}{$Y$} can be realized, their examples achieving that almost all have $X \neq Y$. Thus their broad results do not address the issue of whether all possibilities can be achieved if one seeks to use the same system for both the initial and the runoff election. We show that every possibility can be achieved, even when the runoff is the same system as the initial election. Indeed, even in the three-way comparison of the complexity of $X$, the complexity of $X$ with runoff (under $X$), and the complexity of $X$ with a runoff (under $X$) with revoting, we show that every possibility of setting some or all of those to P or to NP-complete manipulation complexities can be realized. And we show that that can even be done while ensuring that the winner problem for $X$ (i.e., determining whether a given candidate is a winner of a given election under $X$) remains in P, and can also be done both for the weighted and the unweighted cases. 
For example, there are election systems $X$---having ${\rm P}$ winner problems---such that manipulation of $X$ is NP-complete, manipulation of \xthenx{$X$} is NP-complete, but manipulation of \xthenx{$X$} with revoting is in ${\rm P}$. And there are election systems $X$---having ${\rm P}$ winner problems---such that manipulation of $X$ is in P, manipulation of \xthenx{$X$} is NP-complete, but manipulation of \xthenx{$X$} with revoting is in ${\rm P}$. Briefly put, there is no inherent connection between these three complexities. For the most important systems, however, it is very important to see what the effects of runoffs, and revoting runoffs are. For example, weighted plurality is easily seen to remain easy in all of our cases, e.g., manipulation of elections with runoffs, or with revoting runoffs, remains in ${\rm P}$. However, that result itself is something of a fluke. We show that for every (so-called) scoring protocol that is not Triviality, Plurality, or a disguised version of one of those, manipulation of elections with runoffs and manipulation of elections with revoting runoffs are NP-complete. Although manipulation of unweighted veto is in P, we show that manipulation of unweighted veto elections with runoffs and manipulation of unweighted veto elections with revoting runoffs are NP-complete. For unweighted HalfApproval (the scoring protocol where each voter gives one point to his or her $\lceil \|C\|/2 \rceil$ top candidates and zero points to the rest), we prove that for both elections with runoffs and elections with revoting runoffs, the manipulation complexity, even when restricted to having at most one manipulator, is NP-complete. This contrasts with the nonrunoff manipulation complexity here when there is one manipulator, which clearly is ${\rm P}$. 
For the case of one manipulator, a standard way of seeking to manipulate unweighted or weighted scoring protocols---pioneered for the unweighted case by Bartholdi, Tovey, and Trick~\shortcite{bar-tov-tri:j:manipulating}, and extended in many papers since---is to use the natural greedy algorithm. However, we prove that for some scoring protocols $X$, the greedy approach fails on \xthenx{$X$}. \section{Related Work} There are quite a few papers whose focus is close to ours. Yet each differs in some important way. Centrally underpinning our study and framing is the creative, direction-opening work of Narodytska and Walsh~\shortcite{nar-wal:c:two-stage} on manipulating \xtheny{$X$}{$Y$} elections. (Indeed, their paper even proposes the study of revoting in the second round, and---although in contrast with the present paper they do not seek complexity results regarding revoting---they give a convincing example of why that can make a difference in what can be manipulated.) In a very real sense, our paper is merely about their diagonal---the case when one uses the same election in the original election and the runoff. However, since they were not specifically exploring the diagonals, their existence results in general don't address that case. However, we must mention an important exception. They show that for STV$'$, a particular decisive form of STV, that STV$'$ and \xthenx{STV$'$} are both NP-hard.\footnote{To avoid confusing the literature's terminology, it is important for us to mention that there is a very slight, but arguably philosophically interesting, difference between the $\mbox{\sc Then}$ we defined in the Introduction and the $\mbox{\sc Then}$ operator as defined by Narodytska and Walsh~\shortcite{nar-wal:c:two-stage}. Our and their definitions of \xtheny{$X$}{$Y$} can differ in outcome only on what happens if there is exactly one winner of the initial election. 
In our use of \mbox{\sc Then}\ (as given in this paper), in that decisive case the election is over. In their case, that one winner goes on to a one-person election under system $Y$. Their approach opens the door to having system $Y$ in some cases kill off a single candidate who won the initial round. However, we stress that in their paper they absolutely never use that possibility, and so every result in their paper, including each one mentioned in this paper, holds equally well in both models. Indeed, for any election system that always has at least one winner when there is at least one candidate, the two models coincide, and almost all natural election systems have this property. Nonetheless, to avoid causing any confusion as to terminology, we will henceforward avoid using the term \mbox{\sc Then}, and will generally speak of elections ``with runoff'' or ``with revoting runoff,'' to refer to the cases we here are considering.} Our constructions, which must work within a single system for both rounds, are quite different from theirs. In contrast, the work of Elkind and Lipmaa~\shortcite{elk-lip:c:hybrid-manipulation} has a section on using the same system in each round, which is our focus also. However, their model (unlike Narodytska and Walsh and unlike our paper, which pass forward just the winners) is based on removing only the \emph{least} successful candidate after a round. In particular, their model is of one or more initial rounds, that use a ``prune off the last successful candidate'' (although in one case they prune off half the candidates) rule inspired by some election system $X$, after which there is a final round using some election system $Y$. So their section on using the same system is about having one or more rounds using (a variant of) $X$ to cut off the least popular candidate, and then a final round also using $X$. 
Other recent work on removing weakest candidates, usually sequentially, includes that of Bag, Sabourian, and Winter~\shortcite{bag-sab-win:j:sequential-elimination} and Davies, Narodytska, and Walsh~\shortcite{dav-nar-wal:c:weakest-link}. Related to the Elkind--Lipmaa work is the ``universal tweaks'' work of Conitzer and Sandholm~\shortcite{con-san:c:nonexistence}, which shows that adding one pairwise (so-called) CUP-like ``preround,'' which cuts out about half the candidates, can tremendously boost a system's manipulation complexity over a broad range of systems. Speaking more broadly, the problem that Narodytska and Walsh~\shortcite{nar-wal:c:two-stage} and this paper are studying, for the case of runoffs and runoffs with revoting, is the manipulation problem. This asks whether a coalition of manipulators can ensure that a particular candidate is a winner of the overall election. The seminal work on the computational complexity of manipulation was that of Bartholdi, Tovey, and Trick~\shortcite{bar-tov-tri:j:manipulating} and Bartholdi and Orlin~\shortcite{bar-oli:j:polsci:strategic-voting}, and there have been many papers since studying manipulation algorithms for, and hardness results for, a variety of election systems; see, e.g., the survey~\cite{fal-hem-hem-rot:b-too-short:richer}. This entire stream exists within the area known as computational social choice~\cite{che-end-lan-mau:c:polsci-intro}. Finally, we mention that there is an interesting line of work of \ourcite{jen-mei-pol-ros:c:iterative-plurality} and Lev and Rosenschein~\shortcite{lev-ros:c:iterative-voting} studying, in a fully game-theoretic setting, iterated voting, in the sense of seeing whether a Nash equilibrium is reached. This work does not remove candidates between rounds of voting, and so is different in flavor and goal from our work. \section{Preliminaries} Each election instance will have a finite set, $C$, of candidates, e.g., a particular election might have Obama and Romney as its candidates. 
Elections also have a finite collection of votes, which we will assume are input as a list of ballots, one per voter. Although social choice theory sometimes allows voters to have names, in this paper we study the most natural case---the one where votes come in nameless, and the election system's outcome depends on just what the multiset of votes is. We will refer to the collection of votes as $V$. The type of each vote will depend on the election system. Most systems require a tie-free linear ordering of the candidates, and that will be the case for all systems discussed in this paper. So-called scoring protocols such as Plurality, Veto, Borda, and so on will for us have votes cast as linear orders. And then from those orders we will assign points to each candidate based on the rules of that scoring system. For example, in a veto election, each voter casts zero points for his or her least favorite candidate, and one point for each other candidate. In a plurality election, each voter casts one point for his or her favorite candidate, and zero points for each other candidate. In HalfApproval, if there are $m$ candidates, each voter gives one point to each of the $\lceil m/2 \rceil$ top candidates in his or her linear order, and gives zero points to each other candidate. In any scoring system, all points for each candidate are added up, and the candidate(s) who have the maximum score achieved by any candidate are the winner(s). (When we speak of scoring protocols in the abstract, each scoring protocol must have a fixed number of candidates. However, when we say Plurality or HalfApproval or so on, we usually are referring to the protocol that on $m$-candidate inputs uses the $m$-candidate Plurality or HalfApproval or so on scoring protocol mentioned above.) An election system, $\ensuremath{X}$, is a mapping that given $C$ and $V$ outputs a member of the power set of $C$; the member(s) of that output set are said to be the winner(s). 
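To make the point-assignment rules above concrete, here is a minimal Python sketch (the function and variable names are ours, purely illustrative, not from the paper) of a generic scoring protocol, instantiated for Plurality, Veto, and HalfApproval as just described; it maps a candidate set and a collection of linear-order votes to the winner set, in line with the election-system mapping just defined.

```python
import math

def scoring_winners(candidates, votes, score_vector):
    """Winners under a scoring protocol: each vote is a linear order
    (most preferred first); score_vector[j] is the number of points a
    candidate earns for being ranked in position j."""
    totals = {c: 0 for c in candidates}
    for vote in votes:
        for position, cand in enumerate(vote):
            totals[cand] += score_vector[position]
    best = max(totals.values())
    return {c for c in candidates if totals[c] == best}

# Score vectors for an m-candidate election, following the rules above.
def plurality(m):
    return [1] + [0] * (m - 1)          # one point for the favorite only

def veto(m):
    return [1] * (m - 1) + [0]          # zero points for the least favorite

def half_approval(m):
    top = math.ceil(m / 2)              # one point for each of the top half
    return [1] * top + [0] * (m - top)
```

For instance, with candidates $\{a,b,c,d\}$ and the three votes $a>b>c>d$, $a>c>b>d$, and $b>a>d>c$, Plurality and HalfApproval each elect $a$ alone, whereas Veto elects $\{a,b\}$; this already illustrates that the different scoring rules can produce different winner sets on the same votes.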
This is precisely the definition of a social choice correspondence, as given in Shoham and Leyton-Brown~\shortcite{ley-sho:b:multiagent-systems}.\footnote{That definition and this paper allow, as do many papers in computational social choice theory, the case in which an election has no winners. We find that natural for symmetry with the case in which everyone wins. Also, there are real-world cases in which having no winner is natural, e.g., the system for electing players to the Baseball Hall of Fame is set up so that if the crop of candidates in a given year is weak no one will win. That has happened four times, most recently in the January 2013 vote, in which none of the 37 candidates were elected to the Hall.} The winner problem for an election system $\ensuremath{X}$ is the language that contains exactly those triples $C$, $V$, and $p\in C$ such that $p$ is a winner in the $\ensuremath{X}$ election on $C$ and $V$. Although some well-known election systems exist whose winner problems are not in P~\cite{bar-tov-tri:j:who-won}, all the systems we study in this paper have ${\rm P}$ winner problems. We now define the classic unweighted and weighted election manipulation problems, respectively due to Bartholdi, Tovey, and Trick~\shortcite{bar-tov-tri:j:manipulating} and Conitzer, Sandholm, and Lang~\shortcite{con-lan-san:j:when-hard-to-manipulate}. The unweighted version, called Constructive Unweighted Coalitional Manipulation (CUCM), is defined as follows for any given election system $\ensuremath{X}$. \begin{description} \item[Name:] $\cucm{\ensuremath{X}}$. \item[Given:] A set $C$ of candidates, a collection $V_1$ of the nonmanipulative votes (each specified by a tie-free linear ordering over the candidates), a set $V_2$ of manipulative voters (since our voters do not have names, these are specified by a nonnegative integer input in unary giving the number of manipulative voters), and a distinguished candidate $p \in C$. 
\item[Question:] Is there a way to set the votes of the manipulators, $V_2$, so that under the election system~$\ensuremath{X}$, $p$ is a winner of the election over candidate set $C$ with the vote set being the ballots of the manipulators and the nonmanipulators? \end{description} The analogous weighted version, \cwcm{\ensuremath{X}}, is the same except each member of $V_1$ has both a weight and a tie-free linear order, and $V_2$ is specified as a list giving the weight of each manipulator. The allowed range of weights is the positive integers. Our interest here is in runoff elections. So in addition to the above classic versions, let us define versions with runoffs and with revoting runoffs. The ``runoff'' problems \cucmrunoff{\ensuremath{X}} and \cwcmrunoff{\ensuremath{X}}\ are the same as the above problems, except if after the $\ensuremath{X}$ election there are two or more winners, a runoff election is conducted under $\ensuremath{X}$, with the candidates being just the winners of the initial election, and the votes of all voters (both manipulators and nonmanipulators) being their initial-election's preference-order vote, restricted to the remaining set of candidates. The ``revoting runoff'' (or for short, ``revoting'') problems \cucmrevoting{\ensuremath{X}} and \cwcmrevoting{\ensuremath{X}}\ are the same as the above runoff problems, except if there is a runoff election, the manipulators may change their votes. And the question is, of course, whether in this setting there is a set of initial-round and, if needed, second-round manipulator votes that makes $p$ a winner of the overall election. Note that all of these problems are defined as language problems, as is standard in the area. Typical complexities that they might take on are membership in P and NP-completeness. Those two cases are the focus of this paper and of most papers in this area. However, we mention in passing three related issues. 
First, it has recently been pointed out that at least in some artificial cases, election decision problems can be in P even when their related search problems are NP-hard~\cite{hem-hem-men:ctoappear:search-versus-decision}. This worry does not infect any of this paper's results. Every result where we make a polynomial-time claim in this paper has the property that in polynomial time one can even produce the action(s) that achieve the desired outcome (such as making the given candidate win), i.e., our polynomial-time results are essentially what is sometimes called ``certifiable,'' see Hemaspaandra, Hemaspaandra, and Rothe~\shortcite{hem-hem-rot:j:destructive-control}.\footnote{For the case of revoting runoffs, the natural model here, in terms of seeking polynomial-time certificate- (i.e., action-) yielding algorithms, is to allow the manipulative coalition, before the runoff election, a full view of all the initial votes and candidates, and of the outcome of that election, and to require that they set their votes in polynomial time, and of course to also require that their initial-election vote-setting be done in polynomial time. However, since all the election systems in this paper have polynomial-time winner problems, after a given set of initial-round votes the manipulators can themselves compute who the initial-round winner(s) are, and so for problems with p-time winner algorithms, one can w.l.o.g.\ require the manipulative coalition to fork over at the same time both of its rounds of votes.} Second, and on the other end of the complexity range, there has been much worry, and there are some empirical studies suggesting, that perhaps even NP-complete sets can often be easy. 
Only during the past half decade has computer science obtained the following remarkably strong result showing that this cannot happen: If even one NP-complete set has a (deterministic) polynomial-time heuristic algorithm whose asymptotic error frequency is subexponential, then the polynomial hierarchy collapses. See the expository article of Hemaspaandra and Williams~\shortcite{hem-wil:j:heuristic-algorithms-correctness-frequency} for a discussion of that result and an attempt to reconcile it with the good empirical results observed for hard problems. (Even for election problems, heuristics often seem to do very well, as shown in a number of papers by Walsh and his collaborators, see, e.g., \cite{wal:c:where-hard-veto}.) Our view is that the issue of proving rigorous results about the performance of heuristics on election problems is a highly difficult, highly important direction, but that NP-completeness results for a given problem are unquestionably an excellent indication that p-time algorithms, and even p-time heuristics with subexponential error rates, cannot be reasonably expected. Third, we mention that in our model, as is standard in this area, the manipulators are given access to the votes of the nonmanipulators. This is a strong though standard assumption, and admittedly is a model for study rather than a perfect image of the real world. The model actually makes the NP-hardness results stronger (since they say that even with full information the problem remains intractable), and most of our results are NP-hardness results. \section{Results} We now turn to our results regarding the complexity of the manipulation problem for elections, for elections with runoffs, and for elections with revoting runoffs. Our results are of two basic sorts. First, we are interested in what \emph{can} happen. That is, for those three manipulation complexities, what is the relationship between them? Is there any connection at all? 
We show that there is no connection that holds globally. Even when limiting ourselves just to election systems with P winner problems, we prove that every possible case of P-or-NP-complete can simultaneously hold for these three complexities: Each of the 8 weighted and 8 unweighted possibilities can be realized. The reason we want to know what \emph{can} happen is that it is important to know the universe of behaviors that one may face. Note that since our runoff and revoting problems must have the same system used in the initial and runoff rounds, the result we mention does not follow from the important work of Narodytska and Walsh~\shortcite{nar-wal:c:two-stage} realizing all possibilities for \xtheny{$X$}{$Y$}; also, they did not look at the \emph{complexity} of revoting, although as mentioned earlier they did identify and commend revoting as an important area for study. Our second type of result regards what \emph{does} happen for the most famous, important, natural systems. For example, although we show that, perhaps counterintuitively, runoffs and revoting runoffs can sometimes lower complexity and can have other bizarre relative complexities, for none of the natural, concrete systems we have looked at do we find this behavior to occur. For each concrete, natural system we have studied, runoffs and revoting runoffs either leave the manipulation complexity unchanged or increase the manipulation complexity. Of course, our results on what \emph{does} happen for concrete systems prove some of the cases of our claims regarding what \emph{can} happen. The following theorem states our result about what can happen, namely, that regarding P and NP-completeness, any possible triple of complexities can occur. 
\begin{theorem}\label{t:eight-part} Let $\ensuremath{{\rm NPC}}$ denote ``${\rm NP}$-complete.'' Let $W =\allowbreak \{ ({\rm P},{\rm P},{\rm P}),\allowbreak ({\rm P},{\rm P},\ensuremath{{\rm NPC}}),\allowbreak ({\rm P},\ensuremath{{\rm NPC}},{\rm P}),\allowbreak ({\rm P},\ensuremath{{\rm NPC}},\ensuremath{{\rm NPC}}),\allowbreak (\ensuremath{{\rm NPC}},{\rm P},{\rm P}),\allowbreak (\ensuremath{{\rm NPC}},{\rm P},\ensuremath{{\rm NPC}}),\allowbreak (\ensuremath{{\rm NPC}},\ensuremath{{\rm NPC}},{\rm P}),\allowbreak (\ensuremath{{\rm NPC}},\ensuremath{{\rm NPC}},\ensuremath{{\rm NPC}})\}$. \begin{enumerate} \item For each element $w$ of $W$, there exists an election system $\ensuremath{X}$, whose winner problem is in ${\rm P}$, such that the complexity of \cucm{$\ensuremath{X}$}, \cucmrunoff{$\ensuremath{X}$}, and \cucmrevoting{$\ensuremath{X}$} is, respectively, the three fields of $w$. \item The analogous result holds for the weighted case (where the three fields will capture the complexity of \cwcm{$\ensuremath{X}$}, \cwcmrunoff{$\ensuremath{X}$}, and \cwcmrevoting{$\ensuremath{X}$}, respectively). \end{enumerate} \end{theorem} In the rest of this report we will present our results about concrete systems, along with some proofs and proof sketches for those results. (However, we mention---and an extended version of this report will give complete details on all the constructions---that some of the more counterintuitive cases within the above theorem involve novel proof approaches. The two most interesting unweighted cases are the ones realizing $(\ensuremath{{\rm NPC}},{\rm P}, \ensuremath{{\rm NPC}})$ and, especially, $(\ensuremath{{\rm NPC}},\ensuremath{{\rm NPC}},{\rm P})$. The key twist in these is that both create a setting in which an election system can in effect pass messages to its own second-round self through the winner set and with the help of the manipulators. 
In particular, in a certain set of circumstances, the election system can be made to, in effect, know that ``If the input I'm seeing is taking place in a second round (although I cannot myself tell whether or not it is), then we are utterly certainly in a model in which revoting is allowed and indeed in which one of the manipulators has changed his or her vote since the initial round.'') The following result provides an unweighted case where the classic manipulation problem is simple but the runoff and revoting runoff versions are hard. To support this contrast, we must mention that it is well-known that \cucm{Veto} is in ${\rm P}$. \begin{theorem}\label{t:veto} \cucmrunoff{Veto} and \cucmrevoting{Veto} are each $\ensuremath{{{\rm NP}\hbox{-}complete}}$. \end{theorem} \begin{proofs} We will reduce from the well-known NP-complete Exact Cover by 3-Sets Problem (X3C): Given a set $B = \{b_1, \ldots, b_{3k}\}$, and a collection ${\cal S} = \{S_1, \ldots, S_n\}$ of 3-element subsets of $B$, we ask if ${\cal S}$ has an exact cover for $B$, i.e., if there exists a subcollection ${\cal S'}$ of ${\cal S}$ such that every element of $B$ occurs in exactly one member of ${\cal S'}$. Without loss of generality, we assume that $n \geq 3$. We will denote which elements of $B$ are in a given $S_i$ by some new $b_{i_j}$ variables: $S_i = \{b_{i_1}, b_{i_2}, b_{i_3}\}$. Since \cucm{Veto} is in P (simply greedily veto all candidates that score higher than $p$), the only place where hardness can come in is in the selection of the set of winners in the initial round. Our election has the following candidates: $p$ (the preferred candidate), $b_1, \ldots, b_{3k}$ and $s_1, \ldots, s_n$ (candidates corresponding to the X3C instance), $r_1, \ldots, r_k$ (candidates that will be vetoed in the runoff), $d$ (a buffer candidate), and $\ell$ (a candidate that always loses in the initial round). We have $k$ manipulators. 
We have the following nonmanipulators: \begin{itemize} \item For every $i, 1 \leq i \leq n$, one nonmanipulator voting \\$\cdots > p > b_{i_1} > s_i$\\ ($\cdots$ denotes that the remaining candidates are in arbitrary order). \item For every $i, 1 \leq i \leq n$, one nonmanipulator voting \\$\cdots > p > b_{i_2} > s_i$. \item For every $i, 1 \leq i \leq n$, one nonmanipulator voting \\$\cdots > p > b_{i_3} > s_i$. \item Three nonmanipulators voting $\cdots > p$. \item For every $c \in B \cup \{r_1, \ldots, r_k\} \cup \{d\}$, three nonmanipulators voting $\cdots > p > c$. \item One nonmanipulator voting $\cdots > p > \ell$. \item For every $i, 1 \leq i \leq n$, one nonmanipulator voting \\$\cdots > p > d > s_i > \ell$. \end{itemize} Note that every candidate other than $\ell$ receives 3 vetoes from the nonmanipulators in the initial round. Let ${\cal S'} = \{S_{j_1}, \ldots, S_{j_k}\}$ be an exact cover for $B$. For $1 \leq i \leq k$, let the $i$th manipulator vote $\cdots > r_i > s_{j_i}$. We claim that $p$ is a winner of the overall election (even without revoting). It is immediate that the winner set of the initial round is $C - \{\ell\} - \{s_{j} \ | \ S_j \in {\cal S'}\}$. Since $\ell$ does not participate in the runoff, $p$ gains one veto from the nonmanipulator voting $\cdots > p > \ell$ and each $s_i$ that participates in the runoff gains one veto from the nonmanipulator voting $\cdots > p > d > s_i > \ell$. $d$ gains $k$ vetoes from the nonmanipulators voting $\cdots > p > d > s_i > \ell$ such that $S_i \in {\cal S'}$ and every $b \in B$ gains one veto from the nonmanipulator voting $\cdots > p > b > s_i$ such that $b \in S_i$ and $S_i \in {\cal S'}$. Every candidate $r_i$ gains a veto from the manipulator voting $\cdots > r_i > s_{j_i}$. It follows that $p$ is a winner of the runoff. For the converse, we will show that the manipulations described above are the only way to make $p$ a winner. 
Suppose the manipulators can vote (in the initial round and the runoff) in such a way that $p$ becomes a winner of the overall election. Recall that in the initial round, every candidate other than $\ell$ receives 3 vetoes from the nonmanipulators and that $\ell$ receives $n+1$ vetoes. Since $n \geq 3$ and there are $k$ manipulators, $\ell$ does not participate in the second round and at most $k$ other candidates (the ones vetoed by a manipulator) do not participate in the second round. Since $\ell$ does not participate in the second round, $p$ gains one veto from the nonmanipulator voting $\cdots > p > \ell$. Suppose there is a candidate $c \in B \cup \{r_1, \ldots, r_k\} \cup \{d\}$ that does not participate in the second round of the election. Then $p$ gains 3 vetoes from the nonmanipulators voting $\cdots > p > c$, and thus $p$ receives at least $7$ vetoes in the second round. There are at least $2k$ candidates from $B$ that participate in the second round and each of these candidates is vetoed 3 times in the initial round and does not gain any vetoes from deleting $\ell$. Since $p$ receives at least 7 vetoes in the second round, each candidate in $B$ that participates in the second round needs to gain at least 4 vetoes, so these candidates need to gain a total of at least $8k$ vetoes. But the most vetoes that these candidates can gain is 3 vetoes for each candidate $s_i$ that does not participate in the second round plus $k$ vetoes from the manipulators. Since fewer than $k$ $s_i$ candidates do not participate in the runoff, the $B$ candidates that participate in the runoff gain a total of at most $4k$ vetoes, which is not enough. It follows that the only candidates other than $\ell$ that do not participate in the second round are $s_i$ candidates. Note that candidates in $\{r_1, \ldots, r_k\}$ will not gain vetoes from the nonmanipulators, and so each manipulator needs to veto exactly one $r_i$. 
To make sure that every candidate $b \in B$ gains at least one veto, we need to delete a set of $s_i$ candidates corresponding to a cover. Since we can delete at most $k$ such candidates, these candidates will correspond to an exact cover. \end{proofs} It is easy to argue, in contrast with the result of Theorem~\ref{t:veto} regarding Plurality's close cousin Veto, that Plurality is easy, even in the weighted case, since throwing all one's votes to $p$ is always optimal. \begin{theorem}\label{t:plurality} \cwcmrunoff{Plurality} and \cwcmrevoting{Plurality} are each in ${\rm P}$. \end{theorem} We mention the following result, which holds because by brute-force partitioning of the integer $\|V\|$ into at most $(\|C\|!)^2$ named buckets (one for each pair of possible votes, though a second-round decrease in candidates could make the numbers even smaller than this), one can solve even the revoting runoff manipulation question (and of course the same holds for plain runoffs). \begin{theorem}\label{t:brute} For any election system $\ensuremath{X}$ having a ${\rm P}$ winner problem, and for any integer $k$, \cucmrevoting{$\ensuremath{X}$} restricted to $k$ candidates is in ${\rm P}$. \end{theorem} The following claim transfers to our two problems the dichotomy result for scoring protocols known for the nonrunoff case. \begin{theorem} For every weighted scoring protocol $\ensuremath{X}$, \cwcmrunoff{$\ensuremath{X}$} and \cwcmrevoting{$\ensuremath{X}$} are in ${\rm P}$ if $\ensuremath{X}$ is Plurality or Triviality (or a direct transform of one of those, in a sense that can be made formal, see \cite{hem-hem:j:dichotomy}), and otherwise are $\ensuremath{{{\rm NP}\hbox{-}complete}}$. \end{theorem} \begin{proofs} For Plurality, this follows from Theorem~\ref{t:plurality}, and for Triviality, this is trivial. 
For every other weighted scoring protocol $\ensuremath{X}$, Hemaspaandra and Hemaspaandra~\shortcite{hem-hem:j:dichotomy} give a reduction $f$ from the NP-complete problem Partition to \cwcm{$\ensuremath{X}$} with the property that for all $x$, if $x \in $ Partition, then $p$ can be made the unique winner in $f(x)$, and if $x \not \in $ Partition, then $p$ cannot be made a winner in $f(x)$. So, if $x \in $ Partition, then $p$ can be made the unique winner of the initial round, and thus the unique winner of the overall election. And if $x \not \in $ Partition, then $p$ will never make it to the final round. \end{proofs} The case of just one manipulator is a natural and important case. It also can often be surprisingly well handled, thanks to the lovely result---initially due for the unweighted case to the seminal work of Bartholdi, Tovey, and Trick~\shortcite{bar-tov-tri:j:manipulating} and since then much extended---that the natural (p-time) greedy manipulation algorithm (giving one's highest point value to $p$ and then giving, in turn, the highest remaining value to the candidate who has the lowest point total among those not yet assigned points by the manipulative voter) is optimal (i.e., finds a successful manipulation when one exists) for both weighted and unweighted scoring protocols, for the case when there is just one manipulator. The following theorem states that that result does not carry over to runoff elections. \begin{theorem}\label{t:one-evil} The standard 1-manipulator p-time greedy algorithm for scoring protocols is not optimal for \cucmrunoff{$\ensuremath{X}$} or \cucmrevoting{$\ensuremath{X}$}, restricted to at most one manipulator, where $\ensuremath{X}$ is the family of scoring protocols $(2, 1, 0, \ldots, 0)$. \end{theorem} \begin{proofs} Consider the election with candidate set $\{p,a,b,c\}$, two nonmanipulators voting $a > p > c > b$ and $b > a > c > p$, and one manipulator. 
The scores of $p, a, b, c$ from the nonmanipulators are $1, 3, 2, 0$. The greedy algorithm would give the following vote for the manipulator: $p > c > b > a$. Then $p$ and $a$ are the winners of the initial round, and there is no way for $p$ to win the second round. However, if the manipulator votes $p > b > c > a$, then $p$, $a$, and $b$ are the winners of the initial round, and $p$ is a winner of the runoff (even without revoting). \end{proofs} Theorem~\ref{t:veto} gave a case where a simple-to-manipulate unweighted scoring protocol became hard for runoffs, with or without revoting. The following result gives a new example of runoffs increasing complexity, this time for the one-manipulator case. It is natural to wonder whether the following theorem itself implies the cousin of Theorem~\ref{t:one-evil} in which the protocol $(2,1,0,\ldots,0)$ is replaced by HalfApproval. The answer is that the following theorem does not imply that, but it does imply something a bit weaker than that cousin, namely, it says that that cousin holds unless ${\rm P} = {\rm NP}$. (Of course, Theorem~\ref{t:one-evil} holds absolutely; it doesn't require a ${\rm P} \neq {\rm NP}$ hypothesis.) \cucm{HalfApproval} for one manipulator is clearly in ${\rm P}$---for example by the greedy algorithm we mentioned above---and so the following result does express a raising of complexity. \begin{theorem}\label{t:half} \cucmrunoff{HalfApproval} and \cucmrevoting{HalfApproval} are each $\ensuremath{{{\rm NP}\hbox{-}complete}}$, even when restricted to having at most one manipulator. \end{theorem} \begin{proofsketch} We pad the construction from Theorem~\ref{t:veto}. Note that we have fewer candidates in the second round than in the initial round, and so in contrast to the proof of Theorem~\ref{t:veto}, the manipulators have fewer vetoes to contribute in the final round. Without loss of generality, we assume that $k$ is even. 
We use the election from the proof of Theorem~\ref{t:veto}, with the following modifications. We delete candidates $r_{\frac{k}{2}+1}, \ldots, r_k$ (but keep $r_1, \ldots, r_{\frac{k}{2}}$). We add candidates $\hat{r}_1, \ldots, \hat{r}_{n + \frac{3k}{2} + 3}$ and for each $i$, $1 \leq i \leq n + \frac{3k}{2} + 3$, we add \begin{itemize} \item two nonmanipulators voting $\cdots > p > \hat{r}_i$ and \item one nonmanipulator voting $\cdots > \hat{r}_i > \ell$. \end{itemize} And we have only one manipulator. Note that we have a total of $2(n+\frac{5k}{2}+3)$ candidates. It is easy to see that if ${\cal S'} = \{S_{j_1}, \ldots, S_{j_k}\}$ is an exact cover for $B$, then letting the manipulator vote \[\cdots > r_1 > \cdots > r_{\frac{k}{2}} > \hat{r}_1 > \cdots > \hat{r}_{n + \frac{3k}{2} + 3} > s_{j_1} > \cdots > s_{j_k}\] will make $p$ a winner of the overall election. For the converse, if the manipulator can vote (in the initial round and the runoff) such that $p$ becomes a winner, note that every $\hat{r}_i$ candidate must be vetoed by the manipulator in the initial round and in the second round. This leaves $k$ vetoes for the other candidates in the initial round, and, much as in the proof of Theorem~\ref{t:veto}, it can be shown that those candidates must be $s_i$ candidates that correspond to a cover. \end{proofsketch} \section{Conclusions and Open Problems} This paper has explored the relative manipulation complexity of runoff elections, with and without revoting. We have seen that there is no general relation between the manipulation complexity of either of those with each other or with the manipulation complexity of the underlying election system. For example, revoting can sometimes even lower complexity. Yet for the natural, concrete systems we studied, runoffs and revoting runoffs never lowered complexity and sometimes raised complexity. 
Important open directions include the study of runoffs and revoting runoffs for the case of bribery rather than manipulation, for which we have some preliminary results, and the study of what role heuristics, especially in light of Theorems~\ref{t:one-evil} and~\ref{t:half}, can play. \bibliographystyle{plainnat}
\section{Introduction} Colloidal particles, trapped at fluid interfaces by adsorption energies much larger than the thermal energy, can form effectively two-dimensional colloidal monolayers \cite{Pie80}. During the last two decades these systems have received significant attention both in basic research and in applied sciences. On one hand, these monolayers serve as model systems for studying effective interactions, phase behaviors, structures, and the dynamics of condensed matter in reduced dimensionality \cite{Joa01,Din02,Lou05,Che11,Wan12,Ers13,Mao13}. On the other hand, self-assembled colloidal monolayers find applications in optical devices, molecular electronics, emulsion stabilization processes, and as templates in the fabrication of new micro- and nanostructured materials. Therefore, a reliable description of the lateral inter-particle interaction at all distances $r$, which governs the structure formation of colloids at fluid interfaces, is of primary importance. In his pioneering work Pieranski \cite{Pie80} showed that the electrostatic {\it repulsion} of charged colloids at such interfaces is dominated by a long-ranged dipole-dipole interaction, due to an asymmetric counterion distribution in the two adjacent media, in addition to the screened Coulomb interaction also present in bulk systems. Later, both the power-law and the exponential contributions were calculated within the framework of linearized Poisson-Boltzmann theory assuming point-like particles \cite{Hur85}. It turned out that, whereas the interaction energy for charged particles always decays asymptotically $\propto 1/r^3$, the prefactor depends on whether the interaction originates from charges on the polar \cite{Pie80,Par08} or on the apolar \cite{Ave00,Ave02} side of the fluid interface. In addition, there are experimental indications of an {\it attractive} long-ranged lateral interaction which cannot be interpreted in terms of a van der Waals force \cite{Sta00,Nik02}. 
Attempts were made to explain it in terms of a deformation-induced capillary interaction, but a complete and final picture has not yet been reached \cite{For04,Oet051,Oet052,Wue05}. Here, we focus on the electrostatic contribution to the interaction. Whereas Pieranski's work has been extended in numerous directions, almost all subsequent studies have discussed exclusively the case of colloidal particles being far away from each other. In this asymptotic limit the superposition approximation, according to which one approximates the actual electrostatic potential (or interfacial deformation) for a pair of particles by the sum of the potentials (or deformations) of the two single particles, has been assumed to be reliable. However, for a dense system or during aggregation, particles can come close to each other such that this superposition approximation is no longer justified. For the deformation-induced attractive part of the interaction, the validity of this approximation has been discussed for both large \cite{Oet051,Wue05,Dom07} and small \cite{He13} separations. But so far, no investigations of small-distance deviations from the superposition approximation have been reported for the repulsive electrostatic interaction, although a systematic multipole expansion of the electrostatic potential around a single inhomogeneously charged particle trapped at an interface is available \cite{Dom08}. Here, we assess the quality of the superposition approximation for the electrostatic interaction between two colloidal particles floating close to each other at an electrolyte interface by considering a simplified problem (see Fig.~\ref{fig:1}) which offers the possibility to obtain exact analytic expressions. Accordingly, first, the interface is assumed to be planar, i.e., no deformations of the fluid interface are considered; such deformations are typically of the order of nanometers for micron-sized particles \cite{Sta00,Nik02,Law13}. 
Second, due to the small particle-particle distances to be studied, the curvature of the colloidal particles is ignored in the spirit of a Derjaguin approximation \cite{Rus89} by considering the effective interaction between two charged, planar, and parallel walls. Third, a liquid-particle contact angle of $90^\circ$ is assumed; this value is encountered for actual systems \cite{Mas10}. We have derived an exact analytic expression for the electrostatic potential of this model within linearized Poisson-Boltzmann theory, which is then used to calculate the surface interaction energies per total surface area and the line interaction energy per total length of the two three-phase contact lines (Fig.~\ref{fig:1}). The main result is the observation of significant deviations between the exact values of these quantities and those obtained within the superposition approximation, both at small and even at large distances (see Fig.~\ref{fig:2}). \begin{figure}[!t] \includegraphics[width=8cm]{Fig1.eps} \caption{(a) Cross section of two identical spherical particles trapped at a fluid interface (horizontal blue line) close to each other and with contact angle $90^\circ$. (b) Magnified view of the boxed region in (a). The two adjacent fluids (``1'', located at $x>0$, and ``2'', located at $x<0$) forming the interface have permittivities $\varepsilon_1$, $\varepsilon_2$ and inverse Debye lengths $\kappa_1$, $\kappa_2$, respectively. Since the surface-to-surface distance between the particles is small compared to their radii, the particle surfaces can be approximated by planes located at $z=\pm L$ which carry charge densities $\sigma_1$ and $\sigma_2$ at the surfaces in contact with fluid ``1'' and ``2'', respectively. According to the model the fluid structures vary steplike at the surfaces and at the interface. 
} \label{fig:1} \end{figure} \section{Electrostatic potential} Consider a three-dimensional Cartesian coordinate system such that the two charged planar walls, which mimic the colloidal particles, are located at $z=\pm L$ and the fluid interface is at $x=0$ (Fig.~\ref{fig:1}(b)). The electrolyte solution present at $x>0$ ($x<0$) is denoted as medium ``1'' (``2''). For simplicity here we consider binary monovalent electrolytes only, i.e., there are only two ionic species of opposite sign like $\text{Na}^+$ and $\text{Cl}^-$. Generically the ions and the molecules are coupled such that the molecular and ion number densities vary on the scale of the bulk correlation length which is much smaller than the Debye length which sets the length scale for the variation of the charge density \cite{Bie12}. Thus the number densities in both media vary only close to the walls or to the fluid interface at distances of the order of the bulk correlation length, which, away from critical points, is of the order of the size of the fluid molecules and of the ions and falls below the length scale to be considered here. Accordingly, the permittivity $\varepsilon_1$ ($\varepsilon_2$) and the inverse Debye length $\kappa_1$ ($\kappa_2$) in medium ``1'' (``2'') are uniform where $\kappa_i=(2I_ie^2/(\varepsilon_ik_BT))^{1/2}$, $i\in\{1,2\},$ with bulk ionic strength $I_i$ (which is the bulk number density of each ionic species in medium $i$), Boltzmann constant $k_B$, temperature $T$, and elementary charge $e>0$. The two walls are assumed to be chemically identical such that the surface charge densities at both half-planes in contact with medium ``1'' (``2'') are given by $\sigma_1$ ($\sigma_2$). The local charge density of the ions is \emph{not} uniform in media ``1'' or ``2'' because this quantity varies on the scale of the Debye lengths, which are typically much larger than molecular sizes. 
Since the slab formed by the two walls at $z=\pm L$ is a model of the space in between two colloidal particles trapped at the fluid interface, it is appropriate to describe the ions within a grand canonical ensemble, the reservoirs of which are given by the bulk electrolyte solutions far away from the fluid interface. Within a simple density functional theory, which (i) considers uniform solvents in the upper and the lower half space, (ii) assumes low ionic strength in the bulk (which facilitates the description of the ions as point-like particles), and (iii) describes deviations of the ion densities from the bulk ionic strengths only up to quadratic order, one derives the linearized Poisson-Boltzmann (PB) equation $(\Delta-\kappa_i^2)\Phi_i=0$ to be fulfilled by the electrostatic potential $\Phi_i(x,z)$ in medium $i\in\{1,2\}$. The corresponding boundary conditions are: (i) the electrostatic potential should remain finite for $x\rightarrow\pm\infty$, (ii) the electrostatic potential and the normal component of the electric displacement field at the fluid interface should be continuous, i.e., $\Phi_1(x=0^+,z)=\Phi_2(x=0^-,z)$ and $\varepsilon_1\partial_x\Phi_1(x=0^+,z) = \varepsilon_2\partial_x\Phi_2(x=0^-,z)$, and (iii) due to global charge neutrality the normal component of the electric displacement field at the walls corresponds to the surface charge densities, i.e., $\varepsilon_i\partial_z\Phi_i(x,z=\pm L)=\pm\sigma_i$. It is important to note that in our model the fluids are confined to the space between the two walls such that outside the fluid slab the electric field vanishes. 
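Far away from the fluid interface this boundary value problem reduces to the familiar two-wall slab problem with the closed-form profile $\Phi(z)=\sigma\cosh(\kappa z)/(\varepsilon\kappa\sinh(\kappa L))$. The following is a minimal numerical sketch of that reduced problem (ours, not part of the paper's derivation; all parameter values are arbitrary illustrative choices): a finite-difference solution of $\Phi''-\kappa^2\Phi=0$ with the surface-charge boundary conditions reproduces the closed-form profile.

```python
import numpy as np

# Sketch (our illustration): solve Phi'' - kappa^2 Phi = 0 between walls at
# z = +/- L with the charge boundary condition eps * Phi'(+/- L) = +/- sigma,
# and compare with the closed-form two-wall slab profile
#   Phi(z) = sigma * cosh(kappa z) / (eps * kappa * sinh(kappa L)).
# All parameter values are arbitrary illustrative choices.
kappa, eps, sigma, L = 1.0, 2.0, 0.5, 1.5
N = 801
z = np.linspace(-L, L, N)
h = z[1] - z[0]

A = np.zeros((N, N))
b = np.zeros(N)
for m in range(1, N - 1):          # interior rows: Phi'' - kappa^2 Phi = 0
    A[m, m - 1] = 1.0 / h**2
    A[m, m] = -2.0 / h**2 - kappa**2
    A[m, m + 1] = 1.0 / h**2
# second-order one-sided stencils encode the Neumann (surface charge) conditions
A[0, 0:3] = np.array([-3.0, 4.0, -1.0]) / (2.0 * h)
b[0] = -sigma / eps                # eps * Phi'(-L) = -sigma
A[-1, -3:] = np.array([1.0, -4.0, 3.0]) / (2.0 * h)
b[-1] = sigma / eps                # eps * Phi'(+L) = +sigma

phi_num = np.linalg.solve(A, b)
phi_ref = sigma * np.cosh(kappa * z) / (eps * kappa * np.sinh(kappa * L))
err = np.max(np.abs(phi_num - phi_ref))
print(f"max deviation from the analytic slab profile: {err:.2e}")
assert err < 1e-3
```

The one-sided boundary stencils keep the discretization at the same second order as the interior scheme, so the numerical profile matches the analytic one to high accuracy on this grid.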
In order to determine the electrostatic potential we first split the whole problem into three sub-problems (see appendix~\ref{app:A}): (i) only the fluid interface is present in the absence of any walls, (ii) two charged walls with homogeneous surface charge densities $\sigma_1$ and the uniform medium ``1'' in between, and (iii) two charged walls with homogeneous surface charge densities $\sigma_2$ and the uniform medium ``2'' in between. By adding the solution of problem (ii) and the solution of problem (i) for the upper half-space and by adding the solution of problem (iii) and the solution of problem (i) for the lower half-space, one obtains potentials in the two media which satisfy all the boundary conditions listed above except the continuity of the potential at the interface. In order to fulfill also the latter one, we construct a correction function which (i) is a solution of the linearized PB equation, (ii) keeps all boundary conditions unchanged which are already satisfied, and (iii) leads to continuity of the potential at the interface. This can be achieved by means of 2D Fourier transform or Fourier series expansions \cite{Sti61}. The final expression for the {\it{e}}xact electrostatic potential (denoted by superscript ``e'') reads \begin{align} \Phi_i^e&(x,z)=\!\Phi_{bi}\!+\sum\limits_{j\in\{1,2\}}^{j\neq i} \frac{(-1)^j\kappa_j\varepsilon_j\Phi_D}{\kappa_1\varepsilon_1+\kappa_2\varepsilon_2} e^{-\kappa_{i}\lvert x\rvert}\notag\\ &+\Phi_i^{(0)}\frac{\cosh(\kappa_iz)}{\sinh(\kappa_iL)}\! +\!\!\sum\limits_{j\in\{1,2\}}^{j\neq i}\frac{C_{ij}^{(0)}(L)e^{-a_i^{(0)}(L)\lvert x\rvert}}{2}\notag\\ &+\sum\limits_{j\in\{1,2\}}^{j\neq i}\sum\limits_{n=1}^\infty C_{ij}^{(n)}(L)e^{-a_i^{(n)}(L)\lvert x\rvert}\cos\left(\frac{n\pi z}{L}\right), \label{eq:m1} \end{align} where the explicit dependences of $\Phi_i^{(0)}$, $a_i^{(n)}(L)$, and $C_{ij}^{(n)}(L)$ on $n$, $L$, and the type of media $i$ and $j$ are given in appendix~\ref{app:A}. 
The electrostatic bulk potential $\Phi_{bi}$ is defined as $\Phi_{b1}=0$ and $\Phi_{b2}=\Phi_D$, with the Donnan potential (or Galvani potential difference \cite{Bag06}) $\Phi_D$ between medium ``2'' and medium ``1'', which originates from the differences of the solubilities of the ions in the two media \cite{Bie08}. The first two terms on the right-hand side of Eq.~(\ref{eq:m1}) together represent the effect of the fluid interface in the absence of walls (sub-problem (i)) which corresponds to the limit $L\rightarrow\infty$ at any fixed position $z$. The third term describes the electrostatic potential of two uniformly and equally charged walls in the presence of a uniform electrolyte solution in between (sub-problem (ii) or (iii)). According to Eq.~\Eq{m1}, up to the constant $\Phi_{bi}$, $\Phi_i^e(x,z)$ reduces to the third term in the limit $\lvert x\rvert\rightarrow\infty$, i.e., far away from the fluid interface. The fourth and the fifth term in Eq.~(\ref{eq:m1}) correspond to the correction function which describes the contact of the walls with the fluid interface. Due to the symmetry of the problem, $\Phi_i(x,z)$ has to be an even function of $z$, and $\Phi_2(-\infty,z)-\Phi_1(\infty,z)=\Phi_D$ for any fixed position $z$ in the limit of large wall separations $L\rightarrow\infty$. $\Phi_i^e(x,z)$ exhibits these properties. By adding the electrostatic potentials of two single walls, each in contact with the fluid interface in a semi-infinite geometry with respect to $z$, one obtains the superposition approximation (denoted by superscript ``$s$'') \begin{align} &\Phi_i^s(x,z) = \!2\Phi_{bi}\!+\!\!\sum\limits_{j\in\{1,2\}}^{j\neq i} \frac{2(-1)^j\kappa_j\varepsilon_j\Phi_D}{\kappa_1\varepsilon_1+\kappa_2\varepsilon_2} e^{-\kappa_{i}\lvert x\rvert}\notag\\ &+\ 2\Phi_i^{(0)} \cosh(\kappa_i z)e^{-\kappa_i L}\!\notag\\ &+\!\!\sum\limits_{j\in\{1,2\}}^{j\neq i}\int\displaylimits_0^{\infty} dq~C_{ij}^s(q) \cos(qL) \cos(qz) e^{-\sqrt{q^2+\kappa_i^2}\lvert x\rvert}. 
\label{eq:m2} \end{align} The explicit expression for $C_{ij}^s(q)$ is given in appendix~\ref{app:A}. A comparison between the exact electrostatic potential $\Phi_i^e(x,z)$ and the superposition approximation $\Phi_i^s(x,z)$ at the plane of interface ($x=0$) is displayed in Fig.~\ref{fig:3si} in the appendix. Moreover, $\Phi_i^s(x,z)$ does not satisfy the boundary condition which relates the electric displacement field at the walls to the surface charge densities and $\Phi_2^s(-\infty,z)-\Phi_1^s(\infty,z)\neq\Phi_D$ for any fixed position $z$ in the limit of large wall separations $L\rightarrow\infty$. \section{Surface and line interactions} With the electrostatic potential given, the corresponding grand canonical potential can also be determined both exactly as well as within the superposition approximation. After subtracting the bulk free energy, the surface and interfacial tensions, and the line tension contributions from the grand potential one obtains the $L$-dependent part of the grand potential, \begin{align} \Delta\Omega(L)= A_1\omega_{\gamma,1}(L) + A_2\omega_{\gamma,2}(L) + \ell\omega_\tau(L), \label{eq:m3} \end{align} for the walls being a distance $2L$ apart, where $A_1$ and $A_2$ are the total areas of the two walls in contact with medium ``1'' and ``2'', respectively, and $\ell$ is the total length of the three-phase contact lines formed by medium ``1'', medium ``2'', and the walls; by construction $\Delta\Omega(L\rightarrow\infty)\rightarrow0$. The surface interaction energy per total surface area $A_i$ ($\omega_{\gamma,i}$) in contact with medium $i\in\{1,2\}$ is exactly (superscript ``e'') given by \begin{align} \omega^e_{\gamma,i}(L) = \frac{\sigma_i^2}{2\kappa_i\varepsilon_i}\left(\coth(\kappa_iL)-1\right), \label{eq:m4} \end{align} and within the superposition approximation (superscript ``s'') by \begin{align} \omega^s_{\gamma,i}(L) = \frac{\sigma_i^2}{2\kappa_i\varepsilon_i}\left(2e^{-\kappa_iL}\cosh(\kappa_iL)-1\right). 
\label{eq:m5} \end{align} \begin{figure}[!t] \includegraphics[width=7cm]{Fig2.eps} \caption{(a) Comparison between the exact expression (superscript ``e'', black solid lines, see Eq.~\Eq{m4}) and the corresponding superposition approximation (superscript ``s'', red dashed lines, see Eq.~\Eq{m5}) of the surface interaction energy $\omega_{\gamma,2}(L)$ per total surface area of contact between the walls and medium ``2'' in units of $\omega^{(0)}_\gamma=\sigma_1^2/(\kappa_1\varepsilon_1)$ as a function of $\hat{L}=\kappa_1L$. Typical experimental values for the parameter ratios $\kappa=\kappa_2/\kappa_1=0.025$, $\varepsilon=\varepsilon_2/\varepsilon_1=0.025$, and $\sigma=\sigma_2/\sigma_1=0.1$ have been chosen for the plots \cite{Sta00, Nik02, Dan04, Che09, Kes93}. Obviously $\omega^e_{\gamma,2}(L)$ and $\omega^s_{\gamma,2}(L)$ differ significantly at small distances, but even in the limit of large wall separations the superposition approximation is too small by a factor of $2$ (see the offset between the two curves in the inset). A similar deviation is obtained for $\omega_{\gamma,1}(L)$, but due to its very small magnitude ($\approx10^{-10}\times\omega_{\gamma,2}(L)$ for the above parameter choices) it is not shown here (see Fig.~\ref{fig:4si} in the appendix). (b) Comparison of the exact expression (superscript ``e'', black solid lines) and the superposition approximation (superscript ``s'', red dashed lines) of the effective line interaction energy $\omega_\tau(L)$ per total length of the three-phase contact lines between media ``1'' and ``2'' and the walls in units of $\omega_\tau^{(0)}=\sigma_1^2/(\kappa_1^2\varepsilon_1)$ as a function of $\hat{L}$ (see appendix~\ref{app:C} for explicit expressions). In addition to the same parameters $\sigma$, $\varepsilon$, and $\kappa$ as in panel (a) the Donnan potential (Galvani potential difference) $\Phi_D/\Phi_1^{(0)}=1.3$ is used. 
As for the surface interaction potential in panel (a), the superposition approximation of the line interaction potential deviates qualitatively from the exact result at small wall separations and its absolute value at large distances is too small by a factor of $2$.} \label{fig:2} \end{figure} According to Eqs.~\Eq{m4} and \Eq{m5}, varying $\sigma_i$ and $\varepsilon_i$ influences only the amplitude of $\omega_{\gamma,i}$ whereas its decay rate is solely determined by $\kappa_i$. For large wall separations one has $\displaystyle\omega_{\gamma,i}^e(\kappa_iL\gg1)\simeq\frac{\sigma_i^2}{\kappa_i\varepsilon_i}e^{-2\kappa_iL}$ and $\displaystyle\omega_{\gamma,i}^s(\kappa_iL\gg1)\simeq\frac{\sigma_i^2}{2\kappa_i\varepsilon_i}e^{-2\kappa_iL}$, i.e., the superposition approximation correctly predicts the exponential decay in the large distance limit but, in contrast to common expectations, the corresponding prefactor is too small by a factor of $2$. Moreover, the superposition approximation is qualitatively wrong for small wall separations (but still large on the molecular scale), because the exact surface interaction potential diverges in this limit as $\displaystyle\omega_{\gamma,i}^e(\kappa_iL\ll1)=\frac{\sigma_i^2}{2\kappa_i\varepsilon_i} \left[\frac{1}{\kappa_iL}-1+\frac{\kappa_iL}{3}+\mathcal{O}((\kappa_iL)^3)\right]$, whereas the superposition approximation stays finite: $\displaystyle\omega_{\gamma,i}^s(\kappa_iL\ll1)=\frac{\sigma_i^2}{2\kappa_i\varepsilon_i}\left[1-2\kappa_iL+\mathcal{O}((\kappa_iL)^2)\right]$. Thus the superposition approximation underestimates $\omega_{\gamma,i}$ for all $L$. 
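These limiting behaviors, as well as the order-of-magnitude estimate for dilute aqueous electrolytes discussed next, can be verified with a few lines of numerics. The following sketch is ours and uses only illustrative values (standard physical constants; $\varepsilon_r=78.5$ for water near room temperature is an assumption for the estimate).

```python
import numpy as np

# Dimensionless check (our illustration) of Eqs. (4) and (5): with the
# common prefactor sigma_i^2/(2 kappa_i eps_i) scaled out and x = kappa_i L,
#   exact:          w_e(x) = coth(x) - 1
#   superposition:  w_s(x) = 2 exp(-x) cosh(x) - 1  (which equals exp(-2x))

def w_e(x):
    return 1.0 / np.tanh(x) - 1.0

def w_s(x):
    return 2.0 * np.exp(-x) * np.cosh(x) - 1.0

x = np.linspace(0.01, 10.0, 2000)
assert np.all(w_e(x) > w_s(x))     # superposition underestimates for all L
print(f"w_e/w_s at x = 10:   {w_e(10.0) / w_s(10.0):.6f}")   # ratio -> 2
print(f"w_e/w_s at x = 0.01: {w_e(0.01) / w_s(0.01):.1f}")   # ~ 1/x divergence

# Physical orientation (hedged: standard constants, eps_r = 78.5 for water
# near 298 K): Debye length of a 1 mM aqueous monovalent electrolyte and
# the resulting ratio at a separation parameter L = 1 nm.
e, kB, eps0, NA = 1.602176634e-19, 1.380649e-23, 8.8541878128e-12, 6.02214076e23
n = NA * 1.0                       # 1 mM = 1 mol/m^3 of each ionic species
kappa = np.sqrt(2.0 * n * e**2 / (78.5 * eps0 * kB * 298.0))  # in 1/m
print(f"Debye length at 1 mM: {1e9 / kappa:.1f} nm")          # ~ 10 nm
xL = kappa * 1e-9                  # x = kappa * L for L = 1 nm
print(f"w_e/w_s at L = 1 nm:  {w_e(xL) / w_s(xL):.1f}")       # >~ 10
```

The dimensionless part confirms the factor-of-2 offset at large separations and the $1/(\kappa L)$ growth of the ratio at small separations; the last two lines reproduce the $1/\kappa_i\gtrsim10\,\mathrm{nm}$ and $\omega^e/\omega^s\gtrsim10$ estimates quoted for dilute aqueous solutions.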
Since for dilute aqueous electrolyte solutions of, e.g., $1\,\mathrm{mM}$ ($\approx0.0006\,\mathrm{nm^{-3}}$) ionic strength the Debye length ($1/\kappa_i\gtrsim10\,\mathrm{nm}$) is much larger than typical molecular sizes (e.g., $L=1\,\mathrm{nm}$), the exact surface interaction $\omega_{\gamma,i}^e(L)$ and the corresponding superposition approximation $\omega_{\gamma,i}^s(L)$ differ by at least one order of magnitude: $\omega_{\gamma,i}^e(L)/\omega_{\gamma,i}^s(L)\simeq1/(\kappa_iL)\gtrsim10$. Figure~\ref{fig:2}(a) displays a comparison between the exact result (black solid lines) and the superposition approximation (red dashed lines) for a set of typical experimental values for the ratios $\sigma=\sigma_2/\sigma_1$, $\kappa=\kappa_2/\kappa_1$, and $\varepsilon=\varepsilon_2/\varepsilon_1$. The line interaction potential $\omega_\tau(L)$ per total length of the three-phase contact line between media ``1'' and ``2'' and the walls has been calculated from Eqs.~\Eq{m1} and \Eq{m2} (see appendix~\ref{app:C} for explicit expressions). A comparison between the exact result $\omega_\tau^e(L)$ and the corresponding superposition approximation $\omega_\tau^s(L)$ is displayed in Fig.~\ref{fig:2}(b). Similar to the surface interaction potentials, $\omega_\tau^s(L)$ differs significantly from the exact result $\omega_\tau^e(L)$ at small wall separations $2L$. For large values of $L$, its absolute value is too small by a factor of $2$, like the surface contribution. \section{Discussion} By considering a slab geometry, we have investigated the electrostatic interaction between two colloidal particles at close proximity trapped at the interface of two immiscible electrolyte solutions. In our calculations, we have considered the charge density at the surface of the colloids to be constant, which enters as a boundary condition. However, in actual systems the situation is slightly different. When two particles approach each other the electrostatic potential becomes deeper in the region between the particles. 
As a consequence, certain charged molecular surface groups recombine in order to adjust the electrostatic potential. Such a process is better described by a charge regulation model \cite{Rus89}. Keeping in mind the actual complexity of the system considered here, we briefly discuss the implications of charge regulation by focusing on a simpler system which consists of an electrolyte between two charged walls without a liquid-liquid interface in between. For such a system, the electrostatic potential with a surface charge density $\sigma_{wi}(L)$ at the two walls (which is constant for any fixed $L$) is given by $\Phi_{wi}^e=\frac{\sigma_{wi}^e(L)}{\kappa_{wi}\varepsilon_{wi}} \frac{\cosh(\kappa_{wi}z)}{\sinh(\kappa_{wi}L)}$ for the {\it{e}}xact calculation (see Eqs.~\Eq{7} and \Eq{8} in the appendix) and by $\Phi_{wi}^s=\frac{2\sigma_{wi}^s(L)}{\kappa_{wi}\varepsilon_{wi}}e^{-\kappa_{wi}L}\cosh(\kappa_{wi}z)$ within the {\it{s}}uperposition approximation (see the first terms in Eqs.~\Eq{36} and \Eq{37} in the appendix). Here the subscript ``$wi$'' stands for the system {\it{w}}ithout {\it{i}}nterface and the quantities $\sigma_{wi}$, $\kappa_{wi}$, and $\varepsilon_{wi}$ indicate, respectively, the surface charge density at the walls, the inverse Debye length, and the permittivity of the medium between the two planes in the absence of the horizontal interface. The dependence of the surface charge densities $\sigma_{wi}^e(L)$ and $\sigma_{wi}^s(L)$ on $L$ originates from the charge regulation (see appendix~\ref{app:E}). 
Inserting these expressions for the electrostatic potential into Eq.~\Eq{39p} in the appendix and using the fact that $D_x(\mathbf{r})$ vanishes in the absence of a liquid-liquid interface, as is the case here, leads to the following surface interaction energies per total surface area of both walls: \begin{align} \omega^e_{\gamma,wi}(L) = \frac{\left(\sigma_{wi}^e(L)\right)^2}{2\kappa_{wi}\varepsilon_{wi}}\left(\coth(\kappa_{wi}L)-1\right) \label{eq:m6} \end{align} and \begin{align} \omega^s_{\gamma,wi}(L) = \frac{\left(\sigma_{wi}^s(L)\right)^2}{2\kappa_{wi}\varepsilon_{wi}}\left(2e^{-\kappa_{wi}L}\cosh(\kappa_{wi}L)-1\right). \label{eq:m7} \end{align} We note that Eqs.~\Eq{m6} and \Eq{m7} are identical to Eqs.~\Eq{m4} and \Eq{m5}, respectively, except that here the surface charge density varies with the thickness $L$ of the slab. We discuss the two limiting cases of small and large $L$ separately. In the limit $\kappa_{wi}L\ll1$ one has $\sigma_{wi}^e(L)\simeq-\text{sign}(q)e\sqrt{2nKL}$ for the exact calculation (Eq.~\Eq{50} in the appendix) and $\sigma_{wi}^s(L)$ is constant for the superposition approximation (see appendix~\ref{app:E}). $K$ (with units 1/volume) is the equilibrium constant for the association-dissociation reaction of the surface groups, $n$ denotes the total number of surface sites per cross-sectional area where a dissociation reaction can take place, and $q$ is the valency of the solvated ions due to the dissociation reaction at the wall surface (appendix~\ref{app:E}). This implies $\displaystyle\omega_{\gamma,wi}^e(L\rightarrow0)=\frac{e^2nKL}{\kappa_{wi}\varepsilon_{wi}} \left[\frac{1}{\kappa_{wi}L}-1+\frac{\kappa_{wi}L}{3}+\mathcal{O}((\kappa_{wi}L)^3)\right]$, which remains nonzero in the limit $L\rightarrow0$. 
On the other hand, the nonzero and finite limiting value $\sigma_{wi}^s(L\rightarrow0)\neq0$ within the superposition approximation is clearly unphysical because the charge density is expected to decrease upon decreasing the inter-particle separation distance $L$. If, by fiat, in order to avoid this unphysical feature, we replace $\sigma_{wi}^s(L)$ by $\sigma_{wi}^e(L)$ in Eq.~\Eq{m7}, then in the limit of small $L$ one finds $\displaystyle\omega_{\gamma,wi}^s(L\rightarrow0)=\frac{e^2nKL}{\kappa_{wi}\varepsilon_{wi}}\left[1-2\kappa_{wi}L+\mathcal{O}((\kappa_{wi}L)^2)\right]$, which vanishes for $L\rightarrow0$. In the opposite limit, i.e., for $\kappa_{wi}L\gg1$, one finds $\displaystyle\omega_{\gamma,wi}^e\simeq\frac{\left(\sigma_{wi}^e(L)\right)^2}{\kappa_{wi}\varepsilon_{wi}}e^{-2\kappa_{wi}L}$ and, by using the same replacement as above, $\displaystyle\omega_{\gamma,wi}^s\simeq\frac{\left(\sigma_{wi}^e(L)\right)^2}{2\kappa_{wi}\varepsilon_{wi}}e^{-2\kappa_{wi}L}=\frac{\omega_{\gamma,wi}^e}{2}$ with $\sigma_{wi}^e(L)$ given by Eq.~\Eq{49} in the appendix. Thus for the simple slab system without a liquid-liquid interface, but with charge regulation, the exact calculation and the superposition approximation are also in disagreement by a factor of 2 in the large separation limit and they differ qualitatively in the small separation limit. For the more complicated system with a liquid-liquid interface, we can expect these discrepancies to persist. \section{Conclusion} Within a continuum model of two parallel plates with two different electrolyte solutions in between forming a liquid-liquid interface, we have derived exact expressions for the electrostatic potential as well as for the effective surface and the line interaction potentials. 
The comparison between the exact results and the corresponding expressions within the superposition approximation reveals that the latter underestimates these quantities qualitatively at short distances and quantitatively even at large distances. Depending on the specific experimental system, the difference at small distances can be significant. The issue whether the deviations at large distances persist for a spherical geometry is left for future investigations. We expect our results to improve the description of the effective interaction between colloidal particles trapped at fluid interfaces, which plays an important role, e.g., in the formation of two-dimensional colloidal aggregates. \begin{acknowledgments} Helpful discussions with Alois W\"{u}rger are gratefully acknowledged. \end{acknowledgments}
\section{Introduction.} \subsection{} Let $B$ be a definite quaternion algebra over $\Q$ with the discriminant $d_B$~(cf.~Section 1.1) and $D$ be a divisor of $d_B$. We recall that Arakawa lifting is a theta lifting to a cusp form on the quaternion unitary group $GSp(1,1)_{\A_{\Q}}$ from a pair consisting of an elliptic cusp form $f$ of level $D$ and an automorphic form $f'$ on $B^{\times}_{\A_{\Q}}$ ``with the same weight''~(for the definitions of the automorphic forms, see Section 1.1 and [28,~Sections 3.1 and 3.2]). At the archimedean place this theta lift $\cL(f,f')$ from $(f,f')$ generates a quaternionic discrete series representation in the sense of Gross-Wallach~[10]~(cf.~Section 1.1). Throughout the introduction we suppose that $(f,f')$ are non-zero Hecke eigenforms. The Fourier expansion of a cusp form on $GSp(1,1)_{\A_{\Q}}$ is indexed by $\xi\in B^{-}\setminus\{0\}$ and a unitary character $\chi$ of $\A_{\Q}^{\times}\Q(\xi)^{\times}\backslash\A_{\Q(\xi)}^{\times}$~(cf.~Section 1.2), where $B^{-}$~(respectively $\Q(\xi)$) stands for the set of pure quaternions in $B$~(respectively the imaginary quadratic field generated by $\xi$ over $\Q$). We let $\cL(f,f')_{\xi}^{\chi}$ be the Fourier coefficient of $\cL(f,f')$ indexed by $\xi\in B^{-}\setminus\{0\}$ and $\chi$~(which is also called a Bessel period). As an application of our previous work [28] we study an explicit relation between the square norm of the Fourier coefficient $\cL(f,f')_{\xi}^{\chi}$ and the product of central values of the two quadratic base change lift $L$-functions for $(f,f')$ twisted by $\chi^{-1}$. We can furthermore relate the square norm with the central value of some automorphic $L$-function of convolution type attached to $\cL(f,f')$ and $\chi^{-1}$. Our method of the study here also yields an existence theorem of $\cL(f,f')$'s with strictly positive central values of the $L$-functions just mentioned. 
\subsection{} Let $P_{\chi}(f;h)$ and $P_{\chi}(f';h')$ with $(h,h')\in GL_2(\A_{\Q})\times B^{\times}_{\A_{\Q}}$ be the toral integrals of $f$ and $f'$ defined by $\chi$ respectively~(cf.~Section 1.5). The main result [28,~Theorem 5.2.1] of our previous paper says that the Fourier coefficient $\cL(f,f')_{\xi}^{\chi}$ for a primitive $\xi\in B^{-}\setminus\{0\}$~(for the definition, see Section 1.3) is written as \[ \cL(f,f')_{\xi}^{\chi}(g_{0})=C_0(f,f',\xi,\chi)\overline{P_{\chi}(f;\gamma_0)}P_{\chi}(f';\gamma'_0) \] with some $(g_0,\gamma_0,\gamma'_0)\in GSp(1,1)_{\A_{\Q}}\times GL_2(\A_{\Q})\times B_{\A_f}^{\times}$. Here the constant $C_0(f,f',\xi,\chi)$ depending on $f$, $f'$, $\chi$ and $\xi$ is explicitly determined~(see [28,~Theorem 5.2.1] or Theorem 1.3). Let $\pi(f)$~(respectively~$\pi(f')$) be the automorphic representation generated by $f$~(respectively~$f'$) and let $\JL(\pi(f'))$ be the Jacquet-Langlands-Shimizu lift of $\pi(f')$~(cf.~[19],~[35,~Theorem 1]). We furthermore let $\Pi$~(respectively~$\Pi'$) be the base change lift of $\pi(f)$~(respectively~$\JL(\pi(f'))$) to $GL_2(\A_{\Q(\xi)})$. Denote the square norm of $f$~(respectively~$f'$) by $\langle f,f\rangle$~(respectively~$\langle f',f'\rangle$). Assuming that $f$ is primitive~(for the definition, see [25,~Section 4.6]), we now recall that, due to Waldspurger [38,~Proposition 7], there are constants $C(f,\chi)$ and $C(f',\chi)$ such that \[ \frac{||P_{\chi}(f;\gamma_0)||^2}{\langle f,f\rangle}=C(f,\chi)L(\Pi,\chi^{-1},\frac{1}{2}),\quad\frac{||P_{\chi}(f';\gamma'_0)||^2}{\langle f',f'\rangle}=C(f',\chi)L(\Pi',\chi^{-1},\frac{1}{2}), \] where $L(\Pi,\chi^{-1},s)$~(respectively~$L(\Pi',\chi^{-1},s)$) denotes the $L$-function of $\Pi$~(respectively~$\Pi'$) with $\chi^{-1}$-twist. 
The results of Waldspurger [38,~Proposition 7] and ours~[28,~Theorem 5.2.1] thus imply that \[ \frac{||\cL(f,f')_{\xi}^{\chi}(g_0)||^2}{\langle f,f\rangle\langle f',f'\rangle}=C(f,f',\xi,\chi)L(\Pi,\chi^{-1},\frac{1}{2})L(\Pi',\chi^{-1},\frac{1}{2}), \] where $C(f,f',\xi,\chi)$ is a constant depending only on $(f,f',\xi,\chi)$. We define some global spinor $L$-function for $GSp(1,1)_{\A_{\Q}}$ modifying Sugano's definition~(cf.~[36,~(3-4)]). More precisely, we define non-archimedean local factors of the $L$-function by the formula for the formal Hecke series and complete the global $L$-function with a suitable archimedean factor~(cf.~Section 2.6). This leads us to define the global $L$-function $L(\cL(f,f'),\chi^{-1},s)$ of convolution type for $\cL(f,f')$ and $\chi^{-1}$~(cf.~Section 2.6), whose local factors at unramified places are of degree eight. We see that this decomposes into \[ L(\cL(f,f'),\chi^{-1},s)=L(\Pi,\chi^{-1},s)L(\Pi',\chi^{-1},s) \] (cf.~Proposition 2.10). In this paper we explicitly determine the constant $C(f,f',\xi,\chi)$~(cf.~Theorem 2.8) and deduce the following formula: \begin{thm}[Theorem 2.11] We have \[ \frac{||\cL(f,f')_{\xi}^{\chi}(g_0)||^2}{\langle f,f\rangle\langle f',f'\rangle}=C(f,f',\xi,\chi)L(\cL(f,f'),\chi^{-1},\frac{1}{2}). \] \end{thm} \subsection{} Let us make several remarks on this theorem. We should first remark that Furusawa-Martin-Shalika [7]~[6] conjectured that the quantity $||\cL(f,f')_{\xi}^{\chi}(g_0)||^2$ is proportional to a similar $L$-function for the split symplectic group $GSp(2)$ of degree two with similitudes. Their conjecture is inspired by B{\"o}cherer's work~[2]. We note that the group $GSp(1,1)$ is an inner form of $GSp(2)$ and that the Langlands principle of functoriality~(cf.~[21]) suggests that an automorphic $L$-function for $GSp(1,1)$ should coincide with some automorphic $L$-function for $GSp(2)$. 
For any given divisor $D$ of $d_B$ we have a global maximal compact subgroup $K_f^D$~(cf.~Section 1.1). When $f$ is of level $D$, $\cL(f,f')$ is right $K_f^D$-invariant. In the forthcoming paper [30] the spinor $L$-function of $\cL(f,f')$ is proved to be that of a paramodular cusp form on $GSp(2)_{\A_{\Q}}$ of level $d_BD$ given by some theta lift; see Roberts and Schmidt [34,~Theorem 7.5.3,~Theorem 7.5.9,~Section A.6] for the non-archimedean local spinor $L$-functions of paramodular forms. For the case of $D=1$ this is essentially predicted by Ibukiyama~(cf.~[13], [14]). The $L$-function $L(\cL(f,f'),\chi^{-1},s)$ then turns out to be the $L$-function of convolution type attached to the above paramodular cusp form and $\chi^{-1}$. For the determination of $C(f,f',\xi,\chi)$ we need explicit formulas for the two constants $C(f,\chi)$ and $C(f',\chi)$. There are many contributors to the study of $C(f,\chi)$ and $C(f',\chi)$, e.g. Gross [9], Hida [12], Martin-Whitehouse [24], Murase [26], Prasanna [32], Waldspurger [38], Xue [40] [41] and Zhang [42]. The works [9], [40] and [42] study the toral integrals from a geometric point of view. The method of [12], [26], [32], [38] and [41] is the theta correspondence, while that of [24] is the relative trace formula by Jacquet [17] and Jacquet-Chen [18,~Theorem 2]. We quote an explicit formula for $C(f,\chi)$ by [26]~(cf.~Proposition 2.6). To know $C(f',\chi)$ explicitly we recall that Waldspurger~[38,~Proposition 7] expresses it as a product of some local constants over places of $\Q$. More specifically, the local constants are written as products of some local integrals and some ratios of local $L$-functions. To obtain an explicit form of $C(f',\chi)$ we have to evaluate the local integrals involved in $C(f',\chi)$. At a finite place not dividing the discriminant $d_B$ of $B$, we evaluate the integral by using Macdonald's explicit formula for zonal spherical functions~(cf.~[23,~Chap.V,~Section 3,~(3.4)]). 
The local integrals at other places are evaluated by a direct calculation. Our formula for $C(f',\chi)$ is stated as Proposition 2.7. We should remark that Martin and Whitehouse [24,~Theorem 4.1,~Theorem 4.2] already obtained similar formulas for $C(f,\chi)$ and $C(f',\chi)$. However, we note that the archimedean component of $f'$ has to be a highest weight vector~(which is not always true) in order to directly apply the formulas of [24] to $f'$. In addition, as far as we know, our method of the proof for Proposition 2.7 seems different from those of the known results. \subsection{} Let us specify the quaternion algebra as $B=\Q+\Q i+\Q j+\Q k$ with $i^2=j^2=-1$ and $ij=-ji=k$. As a primitive $\xi\in B^-\setminus\{0\}$ we take $\xi=i/2$. The level $D$ of $f$ is one or two since $d_B=2$ for this $B$. When $\chi$ is a unitary character of $\A_{\Q}^{\times}\Q(\xi)^{\times}\backslash\A_{\Q(\xi)}^{\times}$ unramified at every finite prime, we have verified the existence of simultaneously non-vanishing toral integrals for $(f,f')$~(cf.~[28,~Section 14] or Proposition 2.12). This and our explicit formulas for the toral integrals show the existence of simultaneously strictly positive central $L$-values \[ L(\Pi,\chi^{-1},\frac{1}{2})>0,\quad L(\Pi',\chi^{-1},\frac{1}{2})>0 \] (cf.~Theorem 2.13). For this result we remark that there are several results on the non-negativity of central values of these $L$-functions~(cf.~Guo [11],~Jacquet-Chen [18]). We furthermore see the existence of $\cL(f,f')$'s with the strictly positive central $L$-value as follows: \begin{thm}[Proposition 2.12,~Theorem 2.14] Let $B$ and $\chi$ be as above. When $D=1$~(respectively~$D=2$) let $\kappa\ge 12$~(respectively $8$) be an integer divisible by $4$~(respectively $8$). There exists $(f,f')$ of the same weight $\kappa$ such that \[ \cL(f,f')\not\equiv 0~\text{and}~L(\cL(f,f'),\chi^{-1},\frac{1}{2})>0. 
\] \end{thm} We note that this $L$-function should be a Rankin-Selberg convolution $L$-function for $GSp(2)\times GL(2)$. As a related work we cite Lapid [22]. It deals with the non-negativity of central values of Rankin-Selberg $L$-functions for $SO(2m+1)\times GL(n)$, assuming that cuspidal automorphic representations of $GL(n)$~(respectively~$SO(2m+1)$) are ``orthogonal''~(respectively~generic). For this we note that $PGSp(2)\simeq SO(2,3)$. \subsection{} The outline of this paper is as follows. In Section 1 we review the main result of our previous paper [28]. In Section 2 we deal with our theorems: an explicit relation between $||\cL(f,f')_{\xi}^{\chi}(g_0)||^2$ and the central $L$-values, and an existence theorem of $\cL(f,f')$'s with strictly positive central $L$-values. More precisely, after introducing basic notations of automorphic $L$-functions for $GL(2)$ in Sections 2.1 and 2.2, we give the explicit formula for $C(f,\chi)$ in Section 2.3 following [26]. In Section 2.4 we state our formula for $C(f',\chi)$. We then have our theorem relating $||\cL(f,f')_{\xi}^{\chi}(g_0)||^2$ to the product $L(\Pi,\chi^{-1},\frac{1}{2})L(\Pi',\chi^{-1},\frac{1}{2})$ of the central $L$-values in Section 2.5. In Section 2.6 we introduce the global $L$-function $L(\cL(f,f'),\chi^{-1},s)$ of convolution type attached to $\cL(f,f')$ and $\chi^{-1}$. In Theorem 2.11~(cf.~Section 2.6) we relate $||\cL(f,f')_{\xi}^{\chi}(g_0)||^2$ to $L(\cL(f,f'),\chi^{-1},\frac{1}{2})$. In Section 2.7 we show the existence of $\cL(f,f')$'s satisfying the strict positivity of the central values of the $L$-functions in question. We are then left with the proof of the formula for $C(f',\chi)$, which is considered in Section 3. After reviewing Waldspurger's formula [38,~Proposition 7] and Macdonald's formula [23,~Chap.V,~Section 3,~(3.4)] in Section 3.1 we evaluate each local component of $C(f',\chi)$.
In Section 3.2~(respectively~Section 3.3) we deal with the local component at a finite prime $p\nmid d_B$~(respectively~$p|d_B$). In Section 3.4 we calculate the archimedean component of $C(f',\chi)$. \subsection{} (1)~In the introduction of our previous paper [28], we cited Jacquet [16] as a paper dealing with another proof of the formula of Waldspurger [37]~(not [38]). In fact, one can find in [16] an approach to this formula via a relative trace formula. However, we remark that the proof of Waldspurger's formula [37] by the relative trace formula is not completed in [16]. We note that, for instance, Baruch-Mao [1] carry out the relative trace formula approach to the formula of Waldspurger [37].\\ (2)~In [31] we generalize our results without restricting ourselves to the case where the weights of $(f,f')$ are the same. However, we remark that this paper and our previous paper [28] include the essential part of the study at the non-archimedean places necessary to obtain such generalized results in [31]. \subsection*{Notation.} The ring $\A_{K}$ denotes the adele ring of a number field $K$ and $\A_f$ the ring of finite adeles in $\A_{\Q}$. For an algebraic group ${\cal G}$ over a field $F$ and an $F$-algebra $R$, ${\cal G}_R$ stands for the group of $R$-valued points. When $F=\Q$ and $R=\Q_p$ for a place $p$ of $\Q$ we sometimes denote ${\cal G}_{\Q_p}$ simply by ${\cal G}_p$. When $F$ is a number field or its completion at a finite place, $\mO_F$ denotes the ring of integers in $F$. For a finite set $S$, $|S|$ means the cardinality of $S$. Given a condition $C$, we put $\delta(C)= \begin{cases} 1 & (\text{$C$ holds})\\ 0 & (\text{otherwise}) \end{cases}$. For a measurable set $M$, $\vol(M)$ denotes its volume. \section{Review of Arakawa lifting and its Fourier expansion.} \subsection{} In this section we basically use the notation in [28]. Let $B$ be a definite quaternion algebra over $\Q$.
The discriminant $d_B$ of $B$ is defined as the product of the finite primes $p$ such that $B_p=B\otimes_{\Q}\Q_p$ is a division algebra. Let $B\ni x\mapsto \bar{x}\in B$ be the main involution of $B$ and denote by $n$ and $\tr$ the reduced norm and the reduced trace of $B$ respectively. Namely \[ n(x)=x\bar{x},\quad\tr(x)=x+\bar{x} \] for $x\in B$. For $X= \begin{pmatrix} a & b\\ c & d \end{pmatrix}\in M_2(B)$ we put $\bar{X}= \begin{pmatrix} \bar{a} & \bar{b}\\ \bar{c} & \bar{d} \end{pmatrix}$. Let $G=GSp(1,1)$ be the $\Q$-algebraic group defined by \[ G_{\Q}:=\{g\in M_2(B)\mid {}^t\bar{g}Qg=\nu(g)Q,~\nu(g)\in\Q^{\times}\}, \] where $Q:= \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}$. By $Z_G$ we denote the center of $G$. We put $G_{\infty}^1:=\{g\in M_2(\Hl)\mid {}^t\bar{g}Qg=Q\}$, where $\Hl:=B{\otimes}_{\Q}\R$ is the Hamilton quaternion algebra. Then \[ K_{\infty}:=\{ \begin{pmatrix} a & b\\ b & a \end{pmatrix}\in M_2(\Hl)\mid a\pm b\in\Hl^{1}\} \] forms a maximal compact subgroup of $G_{\infty}^1$, where $\Hl^1:=\{u\in\Hl\mid n(u)=1\}$. For a non-negative integer $\kappa$ we let $(\sigma'_{\kappa},V_{\kappa})$ be the $\kappa$-th symmetric tensor representation of $GL_2(\C)$ and $\sigma_{\kappa}$ the pull-back of $\sigma'_{\kappa}$ to $\Hl^{\times}$ via the standard embedding $\Hl\subset M_2(\C)$~(cf.~[28,~(1.4)]). This induces an irreducible representation $(\tk,V_{\kappa})$ of $K_{\infty}$ by \[ \tk( \begin{pmatrix} a & b\\ b & a \end{pmatrix}):=\sigma_{\kappa}(a+b)~( \begin{pmatrix} a & b\\ b & a \end{pmatrix}\in K_{\infty}). \] Let $H$ and $H'$ be $\Q$-algebraic groups defined by \[ H_{\Q}=GL_2(\Q),~H'_{\Q}:=B^{\times} \] respectively. Fix a maximal order $\fO$ of $B$ and a divisor $D$ of $d_B$. For $p|d_B$ let ${\idP}_p$ be the maximal ideal of the $p$-adic completion $\fO_p$ of $\fO$ and let \[ L_p:= \begin{cases} {}^t(\fO_p\oplus\fO_p)&(\text{$p\nmid d_B$ or $p|D$}),\\ {}^t(\fO_p\oplus\idP_p^{-1})&(p|\frac{d_B}{D}).
\end{cases} \] We put $K_p:=\{k\in G_p\mid kL_p=L_p\}$ for each finite prime $p$ and $K_f^D:=\prod_{p<\infty}K_p$. The group $K_p$ forms a maximal compact subgroup of $G_p$ for each $p$ as is remarked in [27,~Section 2]. Hereafter $\kappa$ denotes an even integer. For $\kappa>4$ we then introduce the space $\cS_{\kappa}^D$ of $V_{\kappa}$-valued cusp forms $F$ on $G_{\A_{\Q}}$ satisfying the following: \begin{enumerate} \item $F(z\gamma gk_fk_{\infty})=\tk(k_{\infty})^{-1}F(g)$ for $(z,\gamma,g,k_f,k_{\infty})\in Z_{G,\A_{\Q}}\times G_{\Q}\times G_{\A_{\Q}}\times K_f^D\times K_{\infty}$, \item for each fixed $g_f\in G_{\A_f}$, the right translations of the coefficients of $F|_{G_{\infty}^1}(g_f*)$ by elements in $G_{\infty}^1$ generate, as a $({\frak g},K_{\infty})$-module, the quaternionic discrete series representation~(cf.~[10]) with minimal $K_{\infty}$-type $\tk$~(if $F$ is non-zero). Here ${\frak g}$ denotes the Lie algebra of $G_{\infty}^1$. \end{enumerate} We remark that in [27,~Definition 2.1] and [28,~Definition 1.4.1] the second condition is replaced by the recurrence condition with respect to some reproducing kernel function. The equivalence of the two conditions is verified in [29,~Theorem 8.7,~Section 9]. For a positive integer $\kappa$ we let $S_{\kappa}(D)$ be the space of elliptic cusp forms of weight $\kappa$ with level $D$~(cf.~[28,~Section 3.1]) and ${\cal A}_{\kappa}$ be the space of automorphic forms of weight $\sigma_{\kappa}$ with respect to $\prod_{p<\infty}\fO_p^{\times}$~(cf.~[28,~Section 3.2]), where $\fO_p^{\times}$ denotes the unit group of $\fO_p$. Now we can review the definition of Arakawa lifting. By using a metaplectic representation of $G_{\A_{\Q}}\times H_{\A_{\Q}}\times H'_{\A_{\Q}}$ we define in [27,~Section 4] the $\End(V_{\kappa})$-valued theta function $\theta^{\kappa}(g,h,h')$ with some specified $\End(V_{\kappa})$-valued Schwartz-Bruhat function on $B_{\A_{\Q}}^{\oplus 2}\times\A_{\Q}^{\times}$. 
Then, for $\kappa>4$, we have \[ S_{\kappa}(D)\times {\cal A}_{\kappa}\ni (f,f')\mapsto {\cal L}(f,f')(g)\in\cS_{\kappa}^D \] (cf.~[28,~Theorem 3.3.2]) with \[ {\cal L}(f,f')(g):= \int_{(\R_+^{\times})^2(H\times H')_{\Q}\backslash (H\times H')_{\A_{\Q}}}\overline{f(h)}\theta^{\kappa}(g,h,h')f'(h')dhdh'. \] Here $(\R_+^{\times})^2$ means the connected component of the identity for the archimedean part of the center of $(H\times H')_{\A_{\Q}}$. \subsection{} We now review the Fourier expansion of ${\cal L}(f,f')$ described in [28,~Section 1.3]. We let $B^-:=\{x\in B\mid \tr(x)=0\}$ and have \[ {\cal L}(f,f')(g)=\sum_{\xi\in B^-\setminus\{0\}}{\cal L}(f,f')_{\xi}(g), \] where \[ {\cal L}(f,f')_{\xi}(g):=\int_{B^-\backslash B^-_{\A_{\Q}}}{\cal L}(f,f')( \begin{pmatrix} 1 & x\\ 0 & 1 \end{pmatrix}g)\psi(-\tr(\xi x))dx \] with the standard additive character $\psi:=\otimes_{p\le\infty}\psi_p$ of $\A_{\Q}/\Q$, which satisfies $\psi_{\infty}(x_{\infty})=\exp(2\pi\sqrt{-1}x_{\infty})$ for $x_{\infty}\in\R$. Here we normalize the measure $dx$ so that the volume of $B^-\backslash B^-_{\A_{\Q}}$ is one. For $\xi\in B^-\setminus\{0\}$ we let $X_{\xi}$ be the set of unitary characters on $\A_{\Q}^{\times}\Q(\xi)^{\times}\backslash \A_{\Q(\xi)}^{\times}$, which we call Hecke characters. With this $X_{\xi}$ the Fourier expansion is refined as follows: \[ {\cL}(f,f')(g)=\sum_{\xi\in B^-\setminus\{0\}}\sum_{\chi\in X_{\xi}}{\cal L}(f,f')_{\xi}^{\chi}(g), \] where \[ {\cal L}(f,f')_{\xi}^{\chi}(g):=\vol(\R_+^{\times}\A_{\Q}^{\times}\backslash\A_{\Q(\xi)}^{\times})^{-1}\int_{\R^{\times}_+\Q(\xi)^{\times}\backslash\A_{\Q(\xi)}^{\times}}{\cal L}(f,f')_{\xi}(s1_2\cdot g)\chi(s)^{-1}ds. 
\] \subsection{} To review our explicit formula for ${\cL}(f,f')_{\xi}^{\chi}$, we let $(f,f')\in S_{\kappa}(D)\times{\cal A}_{\kappa}$ and assume the following two conditions:\\ (1)~The two forms $f$ and $f'$ are Hecke eigenforms and have the same eigenvalue~(or signature) for the ``Atkin-Lehner involutions''. More precisely, for each $p|D$, let $\epsilon_p$~(respectively~$\epsilon'_p$) be the eigenvalue for the involutive action of $ \begin{pmatrix} 0 & 1\\ -p & 0 \end{pmatrix} $~(respectively~a prime element $\varpi_{B,p}\in B_p$) on $f$~(respectively~$f'$). Then \[ \epsilon_p=\epsilon'_p. \] Otherwise $\cL(f,f')\equiv 0$~(cf.~[27,~Remark 5.2 (ii)]).\\ (2)~We assume that $\xi\in B^-\setminus\{0\}$ is primitive~(cf.~[28,~Section 4.1]). Namely, for each finite prime $p$, we let \[ \la_p:= \begin{cases} \fO_p&(\text{$p\nmid d_B$ or $p|D$})\\ \idP_p&(p|\frac{d_B}{D}) \end{cases},~ (\la_p^-)^*:=\{z\in B^-_{p}\mid\tr(\bar{z}w)\in\Z_p,~\text{for any $w\in\la_p\cap B_p^-$}\} \] and assume that \[ \xi\in(\la_p^-)^*\setminus p(\la_p^-)^*. \] For this second assumption we note that, in general, the Fourier coefficient $F_{\xi}$ of an automorphic form $F$ on $G_{\A_{\Q}}$ satisfies \[ F_{\xi}(g)=F_{\xi}( \begin{pmatrix} t & 0\\ 0 & 1 \end{pmatrix}g)=F_{t\xi}(g)\quad(t\in\Q^{\times}). \] We then see that the problem of determining $F_{\xi}$ is reduced to the case where $\xi$ is primitive.
\] Put $\theta:=r^{-1}(\xi-\frac{a}{2})$ with $r=\frac{2\sqrt{-n(\xi)}}{\sqrt{d_{\xi}}}\in\Q^{\times}$. Then $\{1,\theta\}$ forms a $\Z$-basis of the integer ring $\mO_E$ of $E$. We can rewrite $\iota_{\xi}$ as \[ \iota_{\xi}(x+y\theta)= \begin{pmatrix} x & -rN_{E/\Q}(\theta)y\\ r^{-1}y & x+\Tr_{E/\Q}(\theta)y \end{pmatrix}\quad(x,y\in\Q). \] The completion $E\oo$ of $E$ at $\infty$ is identified with $\C$ by \[ \delta_{\xi}:E\oo\ni x+y\xi\mapsto x+y\sqrt{-n(\xi)}\in\C~(x,y\in \R). \] For a Hecke character $\chi=\prod_{p\leq\infty}\chi_p$ of $\R^{\times}_+E^{\times}\backslash\A_{E}^{\times}$, we let $w\oo(\chi)\in\Z$ be such that \[ \chi\oo(u)=(\delta_{\xi}(u)/|\delta_{\xi}(u)|)^{w\oo(\chi)}~(u\in E\oo^{\times}). \] Furthermore, for each prime $p<\infty$, we let \[ i_p(\chi):=\min\{i\ge 0\mid \chi|_{\mO_{E_p,i}^{\times}}\equiv 1\} \] with $\mO_{E_p,i}:=\Z_p+p^i\mO_{E_p}$ and let \[ \mu_p:=\frac{\ord_p(2\xi)^2-\ord_p(d_{\xi})}{2}, \] which coincides with $\ord_p(r)$. We now state the following~(cf.~[28,~Theorem 5.1.1]): \begin{prop} $\cL(f,f')_{\xi}^{\chi}\equiv 0$ unless $i_p(\chi)=0$ for any $p|d_B$ and $w_{\infty}(\chi)=-\kappa$. \end{prop} \subsection{} In what follows, we assume the following condition on $\chi$: \begin{assm} $i_p(\chi)=0$ for any $p|d_B$ and $w_{\infty}(\chi)=-\kappa$. \end{assm} We need further notations to recall our formula for $\cL(f,f')_{\xi}^{\chi}$. 
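As an informal sanity check (an illustration added here; it plays no role in the arguments), the matrix formula for $\iota_{\xi}$ is multiplicative: writing $T:=\Tr_{E/\Q}(\theta)$ and $N:=N_{E/\Q}(\theta)$, multiplication in $E$ is governed by $\theta^2=T\theta-N$, and the displayed matrices multiply compatibly for arbitrary rational $r$, $T$, $N$. A minimal sketch in exact rational arithmetic (the sample values of $r$, $T$, $N$ below are hypothetical and not tied to a particular $\xi$):

```python
from fractions import Fraction as Fr

def iota(x, y, r, T, N):
    # Matrix of x + y*theta under iota_xi: [[x, -r*N*y], [y/r, x + T*y]]
    return ((x, -r * N * y), (y / r, x + T * y))

def mat_mul(A, B):
    # 2x2 matrix product with exact rational entries
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def field_mul(a, b, T, N):
    # (x1 + y1*theta)(x2 + y2*theta) with theta^2 = T*theta - N
    x1, y1 = a
    x2, y2 = b
    return (x1 * x2 - N * y1 * y2, x1 * y2 + x2 * y1 + T * y1 * y2)

# arbitrary sample data (hypothetical, for illustration only)
r, T, N = Fr(1, 2), Fr(2), Fr(2)
a, b = (Fr(3), Fr(-1, 2)), (Fr(-1), Fr(5, 3))

lhs = iota(*field_mul(a, b, T, N), r, T, N)
rhs = mat_mul(iota(*a, r, T, N), iota(*b, r, T, N))
assert lhs == rhs  # iota_xi(ab) = iota_xi(a) iota_xi(b)
print("multiplicativity holds:", lhs == rhs)
```

Since the identity is polynomial in the entries, $r$, $T$ and $N$, expanding the matrix product by hand confirms it in general; the sample points merely witness it.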
We define $\gamma_0=(\gamma_{0,p})_{p\leq\infty}\in H_{\A_{\Q}}$ and $\gamma'_0=(\gamma'_{0,p})_{p<\infty}\in H'_{\A_f}$ as follows: \begin{align*} \gamma_{0,p}&:= \begin{cases} \begin{pmatrix} 1 & 0 \\ 0 & p^{-\mu_p+i_p(\chi)} \end{pmatrix}&(p\nmid D),\\ 1_2 &\text{($p|D$ and $p$ is inert in $E$)},\\ \begin{pmatrix} 0 & 1\\ -p & 0 \end{pmatrix}&\text{($p|D$ and $p$ ramifies in $E$)},\\ \begin{pmatrix} 1 & a/2\\ 0 & 1 \end{pmatrix} \begin{pmatrix} n(\xi)^{1/4} & 0\\ 0 & n(\xi)^{-1/4} \end{pmatrix}&(p=\infty), \end{cases}\\ \gamma'_{0,p}&:= \begin{cases} \begin{pmatrix} 1 & 0\\ 0 & p^{-\mu_p+i_p(\chi)} \end{pmatrix}&(p\nmid d_B),\\ \varpi_{B,p}^{-1}&(p|d_B). \end{cases} \end{align*} Here recall that $\varpi_{B,p}$ denotes a prime element of $B_p$~(cf.~Section 1.3). In addition, we introduce the following local constants: \[ C_p(f,\xi,\chi):= \begin{cases} p^{2\mu_p-i_p(\chi)}(1-\delta(i_p(\chi)>0) e_p(E)p^{-1})&(p\nmid d_B),\\ 1&(p|\frac{d_B}{D}),\\ 2\epsilon_p&\text{($p|D$ and $p$ is inert in $E$)},\\ (p+1)^{-1}&\text{($p|D$ and $p$ ramifies in $E$)}, \end{cases} \] where \[ e_p(E)= \begin{cases} -1&\text{($p$ is inert in $E$)},\\ 0&\text{($p$ ramifies in $E$)},\\ 1&\text{($p$ splits in $E$)}. \end{cases} \] As in [28,~Section 2.4] we normalize the measure $ds=\prod_{p\le\infty}ds_p$ of $\A^{\times}_{E}$ so that \[ \int_{\mO_{E_p}^{\times}}ds_p=1~\text{for any $p<\infty$},\quad\int_{E_{\infty}^1}ds_{\infty}=1, \] where $E_{\infty}^1:=\{s\in E_{\infty}^{\times}\mid n(s)=1\}$. In addition, we choose the measure of $\A_{\Q}^{\times}$ so that \[ \vol(\Z_p^{\times})=1~\text{for any $p<\infty$}. \] For $(f,f')\in S_{\kappa}(D)\times\cA_{\kappa}$ we introduce their toral integrals \[ P_{\chi}(f;h):=\int_{\R_+^{\times}E^{\times}\backslash \A_{E}^{\times}}f(\iota_{\xi}(s)h)\chi(s)^{-1}ds,~P_{\chi}(f';h'):=\int_{\R_+^{\times}E^{\times}\backslash \A_{E}^{\times}}f'(sh')\chi(s)^{-1}ds \] with respect to $\chi$, where $(h,h')\in GL_2(\A_{\Q})\times B^{\times}_{\A_{\Q}}$.
For these integrals we note that $E_p^{\times}$ is embedded into $B_p^{\times}\simeq GL_2(\Q_p)$ by $\iota_{\xi}$ when $p\nmid d_B$. We denote by ${\mathbf h}(E)$ and ${\mathbf w}(E)$ the class number of $E$ and the number of the roots of unity in $E$ respectively. Then we are able to state our formula for $\cL(f,f')_{\xi}^{\chi}$~(cf.~[28,~Theorem 5.2.1]). \begin{thm} Let $(f,f')$ be Hecke eigenforms with the same signatures for the Atkin-Lehner involutions and $\xi\in B^-\setminus\{0\}$ be primitive~(cf.~Section 1.3 (1),~(2)), and recall Assumption 1.2 on $\chi$. We then have the following formula: \begin{align*} &\cL(f,f')_{\xi}^{\chi}(g_{0,f} \begin{pmatrix} \sqrt{\eta\oo} & 0\\ 0 & \sqrt{\eta\oo}^{-1} \end{pmatrix})\\ &=2^{\kappa-1}n(\xi)^{\kappa/4}\frac{\mathbf{w}(E)}{\mathbf{h}(E)}\cdot \left(\prod_{p<\infty}C_p(f,\xi,\chi)\right) \eta\oo^{\kappa/2+1}\exp(-4\pi\sqrt{n(\xi)}\eta\oo) \,\overline{P_{\chi}(f;\gamma_0)}P_{\chi}(f';\gamma'_0). \end{align*} Here $\eta\oo\in \R^{\times}_+$ and $g_{0,f}=(g_{0,p})_{p<\infty}\in G_{\A_f}$ is given by \[ g_{0,p}:= \begin{cases} \diag(p^{i_p(\chi)-\mu_p},p^{2(i_p(\chi)-\mu_p)},1,p^{i_p(\chi)-\mu_p})&(p\nmid d_B),\\ 1_2&(p|d_B), \end{cases} \] where ``$\diag$'' denotes the diagonal matrix with the indicated entries. \end{thm} \begin{rem} According to Sugano [36,~Theorem 2-1], the Fourier coefficient ${\cal L}(f,f')_{\xi}^{\chi}$ is determined by its value at $g_{0,f} \begin{pmatrix} \sqrt{\eta\oo} & 0\\ 0 & \sqrt{\eta\oo}^{-1} \end{pmatrix}$. \end{rem}
Let $\pi(f)$~(respectively~$\pi(f')$) be the irreducible automorphic representation generated by $f$ (respectively~$f'$), and let $\JL(\pi(f'))$ be the Jacquet-Langlands-Shimizu lift of $\pi(f')$~(cf.~[19],~[35,~Theorem 1]). We remark that the classical prototype of the Jacquet-Langlands-Shimizu lift just mentioned is the Hecke-equivariant isomorphism between ${\cal A}_{\kappa}$ and the space spanned by primitive forms in $S_{\kappa+2}(d_B)$, which is due to Eichler~(cf.~[4],~[5],~[35,~Section 6]). We denote by $\JL(f')$ the primitive form corresponding to $f'$ via this isomorphism. It is known that $\pi(f)$ and $\JL(\pi(f'))$ (respectively~$\pi(f')$) decompose into restricted tensor products, over the places $p\leq\infty$ of $\Q$, of irreducible admissible representations of $GL_2(\Q_p)$~(respectively~$B_p^{\times}$). By $\pi_p$, $\pi'_p$ and $\pi''_p$ we denote the $p$-component of $\pi(f)$, $\JL(\pi(f'))$ and $\pi(f')$ respectively. According to such decompositions of $\pi(f)$ and $\pi(f')$, $f$ and $f'$ admit decompositions into pure tensor products \[ \rho(f)=\prod_{p\leq\infty}f_p,~\rho'(f')=\prod_{p\leq\infty}f'_p, \] where we fix an isomorphism $\rho$~(respectively~$\rho'$) between $\pi(f)$ and the restricted tensor product $\otimes'_{p\le\infty}\pi_p$~(respectively~$\pi(f')$ and $\otimes'_{p\le\infty}\pi''_p$). We denote by $\Pi$~(respectively~$\Pi'$) the quadratic base change lift of $\pi(f)$~(respectively~$\JL(\pi(f'))$) to $GL_2(\A_{E})$. These $\Pi$ and $\Pi'$ also decompose into the restricted tensor products $\otimes'_{p\le\infty}\Pi_p$ and $\otimes'_{p\le\infty}\Pi'_p$ respectively, where $\Pi_p$ and $\Pi'_p$ are the local base change lifts of $\pi_p$ and $\pi'_p$ at each place $p$. We remark that each of the local and global representations just introduced has trivial central character, since $f$ and $f'$ do, and is thus self-dual~(cf.~[19,~Theorem 2.18 (i)],~[3,~Theorem 3.3.5]).
\subsection{Review of the adjoint $L$-functions and the $L$-functions of base change lifts for $GL_2$.} Let $L(\pi,s)$ be the standard $L$-function for an automorphic representation $\pi$ of $GL_2(\A_{\Q})$. We denote by $L(\Pi,\chi^{-1},s)$~(respectively~$L(\Pi',\chi^{-1},s)$) the $L$-function of $\Pi$~(respectively~$\Pi'$) with $\chi^{-1}$-twist, and let $L(\pi(f),\Ad,s)$~(respectively~$L(\JL(\pi(f')),\Ad,s)$) be the adjoint $L$-function of $\pi(f)$~(respectively~$\JL(\pi(f'))$). We describe the local factors of $L(\Pi,\chi^{-1},s)$ and $L(\Pi',\chi^{-1},s)$~(respectively~$L(\pi(f),\Ad,s)$ and $L(\JL(\pi(f')),\Ad,s)$), following Jacquet~[15]~(respectively~Gelbart-Jacquet~[8]). We note that $\pi_p$~(respectively~$\pi'_p\simeq\pi''_p$) is a unitary unramified principal series representation of $GL_2(\Q_p)$ for each finite prime $p\nmid D$~(respectively~$p\nmid d_B$). This is due to the Ramanujan conjecture for holomorphic cusp forms on $GL_2$. In addition, we remark that, for $p|d_B$, $\pi''_p$ is written as \[ B_p^{\times}\ni b\mapsto \delta_p\circ n(b)\in\{\pm 1\}, \] with the unramified character $\delta_p$ of $\Q_p^{\times}$ of order at most two determined by $\delta_p(p)=\epsilon'_p$~(for $\epsilon'_p$ see Section 1.3). For $p|d_B$, $\pi'_p$ is thus the special representation of $GL_2(\Q_p)$ given by the irreducible subrepresentation~(or irreducible subquotient) of the normalized induced representation induced from the character of the Borel subgroup defined by two quasi-characters $\delta_p\cdot|*|_p^{\frac{1}{2}}$ and $\delta_p\cdot|*|_p^{-\frac{1}{2}}$~(cf.~[19,~Theorem 4.2 (iii)]), where $|*|_p$ is the $p$-adic absolute value. We further note that $\pi_p$ is also a special representation when $p|D$.
The induced representation giving rise to $\pi_p$ is associated with the unramified character $\delta'_p$ of order at most two determined by $\delta'_p(p)=-\epsilon_p$ in place of $\delta_p$, since the signature $\epsilon_p(=\epsilon'_p)$ of the Atkin-Lehner involution of $f$ at $p|D$ differs from that of the corresponding automorphic form on $B^{\times}_{\A}$~(cf.~[19,~Lemma 15.7]). We remark that, when $p$ is inert in $E$ or when $p$ is ramified in $E$ and $\chi_p$ is unramified, $\chi_p$ can be written as \[ \chi_p=\omega_p\circ n_{E_p/\Q_p} \] with a character $\omega_p$ of $\Q_p^{\times}$ of order at most two and the norm $n_{E_p/\Q_p}$ of $E_p$. In fact, $\omega_p\equiv1$ when $p$ is inert. In addition, at the archimedean place, $\pi_{\infty}$ and $\pi'_{\infty}$ are the discrete series representations with weight $\kappa$ and $\kappa+2$~(for the definition see [3,~Section 2.5]) respectively. For this fact see~[35,~Section 6]. We let $\pi_{\ur}$ be a unitary unramified principal series representation of $GL_2(\Q_p)$ with Satake parameter $(\alpha_p,\alpha_p^{-1})$ and the trivial central character, and let $\pi_{\spc}(+)$~(respectively $\pi_{\spc}(-)$) be the special representation $\pi'_p$ for $p|d_B$~(respectively $\pi_p$ for $p|D$). We denote by $\Pi_{\ur}$~(respectively~$\Pi_{\spc}(\pm)$) the base change lift of $\pi_{\ur}$~(respectively~$\pi_{\spc}(\pm)$) to $GL_2(E_p)$. We first deal with the local $L$-functions of $\pi_{\ur}$ and $\Pi_{\ur}$. The following lemma is well-known. \begin{lem} (1) \[ L_p(\pi_{\ur},s)=(1-\alpha_pp^{-s})^{-1}(1-\alpha_p^{-1}p^{-s})^{-1}. \] (2) \[ L_p(\pi_{\ur},\Ad,s)=(1-p^{-s})^{-1}(1-\alpha_p^2p^{-s})^{-1}(1-\alpha_p^{-2}p^{-s})^{-1}.
\] (3) \begin{align*} &L_p(\Pi_{\ur},\chi_p^{-1},s)=\\ &\begin{cases} \prod_{i=1,2}(1-\alpha_p\chi_p(\varpi_{p,i})^{-1}p^{-s})^{-1}(1-\alpha_p^{-1}\chi_p(\varpi_{p,i})^{-1}p^{-s})^{-1}&(\text{$p$:split},~i_p(\chi)=0),\\ (1-\alpha_p^2p^{-2s})^{-1}(1-\alpha_p^{-2}p^{-2s})^{-1}&(\text{$p$:inert},~i_p(\chi)=0),\\ (1-\alpha_p\chi_p(\varpi_p)^{-1}p^{-s})^{-1}(1-\alpha_p^{-1}\chi_p(\varpi_p)^{-1}p^{-s})^{-1}&(\text{$p$:ramified},~i_p(\chi)=0),\\ 1&(i_p(\chi)>0), \end{cases} \end{align*} where $\varpi_{p,i}\in E_p$ with $i=1,2$~(respectively~$\varpi_p\in E_p$) denote prime elements of two distinct prime ideals dividing $p$~(respectively~a prime element dividing $p$) when $p$ is split~(respectively~$p$ is ramified). \end{lem} We next deal with the case of $\pi_{\spc}(\pm)$ and $\Pi_{\spc}(\pm)$, where note that $\pi_{\spc}(+)$ and $\Pi_{\spc}(+)$~(respectively~$\pi_{\spc}(-)$ and $\Pi_{\spc}(-)$) are defined for $p|d_B$~(respectively~$p|D$). \begin{lem} We have \begin{align*} L_p(\pi_{\spc}(\pm),s)&=(1\mp\delta_p(p)p^{-(s+\frac{1}{2})})^{-1},\\ L_p(\pi_{\spc}(\pm),\Ad,s)&=(1-p^{-(s+1)})^{-1},\\ L_p(\Pi_{\spc}(\pm),\chi_p^{-1},s)&= \begin{cases} (1-p^{-(2s+1)})^{-1}&(\text{$p$:inert}),\\ (1\mp\delta_p(p)\omega_p(p)p^{-(s+\frac{1}{2})})^{-1}&(\text{$p$:ramified}). \end{cases} \end{align*} \end{lem} \begin{pf} For the first formula see [19,~Proposition 3.6]. To verify the other two we need the local Rankin-Selberg convolution $L$-function $L_p(\pi_1\times\pi_2,s)$ of two irreducible admissible representations $\pi_1$ and $\pi_2$ of $GL_2(\Q_p)$~(cf.~[15]). According to [8,~Proposition (1.4), (1.4.3)], we have \[ L_p(\sigma_1\times\sigma_2,s)=L_p(\mu_1\mu_2,s)L_p(\nu_1\mu_2,s) \] for two special representations $\sigma_1$ and $\sigma_2$, where, for $i=1$ or $2$, $\sigma_i$ is attached to two quasi-characters $\mu_i$ and $\nu_i$ of $\Q_p^{\times}$ with $\frac{\mu_i}{\nu_i}=|*|_p$.
Then the second formula follows from this and the definition of the adjoint $L$-function in [8,~p.485], where we note that $\pi_{\spc}(\pm)$ is self-dual~(cf.~Section 2.1). From [15,~Definition 20.1] we recall that \[ L_p(\Pi_{\spc}(\pm),\chi_p^{-1},s)=L_p(\pi_{\spc}(\pm)\times\pi(\chi_p^{-1}),s), \] where see [19,~Theorem 4.6] for $\pi(\chi_p^{-1})$~(the dihedral representation associated with $\chi_p^{-1}$). From [15,~(20.3)] we can deduce the last formula.\qed \end{pf} For a positive integer $k\ge 2$ we let $\pi_{k}$ be the discrete series representation of $GL_2(\R)$ with weight $k$~(cf.~[3,~Section 2.5]) and $\Pi_k$ denote its base change lift. For $l\in\frac{1}{2}\Z$ we put $\chi_l$ to be the character of $\C^{\times}$ given by \[ \chi_l(z):=\left(\frac{z}{\bar{z}}\right)^l,\quad z\in\C^{\times}. \] With \[ \Gamma_{\R}(s):=\pi^{-\frac{s}{2}}\Gamma(\frac{s}{2}),\quad\Gamma_{\C}(s):=2(2\pi)^{-s}\Gamma(s) \] we state the following: \begin{lem} We have \begin{align*} L_{\infty}(\pi_k,s)&=\Gamma_{\C}(s+\frac{k-1}{2}),\\ L_{\infty}(\pi_k,\Ad,s)&=\Gamma_{\R}(s+1)\Gamma_{\C}(s+k-1),\\ L_{\infty}(\Pi_k,\chi_l,s)&= \begin{cases} \Gamma_{\C}(s+\frac{k-1}{2}+|l|)\Gamma_{\C}(s-\frac{k-1}{2}+|l|)&(|l|\geq\frac{k-1}{2})\\ \Gamma_{\C}(s+\frac{k-1}{2}+|l|)\Gamma_{\C}(s+\frac{k-1}{2}-|l|)&(|l|\leq\frac{k-1}{2}) \end{cases}, \end{align*} where $l\in\frac{1}{2}\Z$. \end{lem} \begin{pf} We recall that the discrete series representation $\pi_k$, which is called a special representation in [19], corresponds to the representation of the Weil group $W_{\R}$ of $\R$ induced from the character $\chi_{\frac{k-1}{2}}$ of $\C^{\times}$~(cf.~[19,~Sections 5 and 12]), where note that $\C^{\times}$ is the Weil group of $\C$~(for the definition of Weil groups, see [39] and [19,~Section 12]). For the first formula see [19,~p.181,~p.194,~p.195]. 
For the other two we need to consider the tensor product of two representations of $W_{\R}$ induced from characters of $\C^{\times}$, for which we refer to [15,~Case 17.6]. For the second formula recall the definition of the local adjoint $L$-function~(cf.~[8,~p.485]). We then see that \[ L_{\infty}(\pi_{k},\Ad,s)=\Gamma_{\R}(s+1)\Gamma_{\C}(s+k-1). \] As for the last one, following the definition [15,~Definition 20.1] of the local $L$-factor of the lifting to $GL_2$ over a quadratic extension, we have $L_{\infty}(\Pi_{k},\chi_l,s)$ as above.\qed \end{pf} \subsection{Relation between $P_{\chi}(f;\gamma_0)$ and $L(\Pi,\chi^{-1},\frac{1}{2})$.} By $\eta=\prod_{p\le\infty}\eta_p$ we denote the quadratic character attached to the quadratic extension $E/\Q$. We let $L(\eta,s)$ be the $L$-function defined by $\eta$ and $L_p(\eta_p,s)$ the local factor of $L(\eta,s)$ at a place $p$. We introduce the subsets $S_1$ and $S_2^{\pm}(f,\chi)$ of $d(D):=\{p<\infty\mid p|D\}$ which are denoted by ``$S_1(\xi)$'' and ``$S_2^{\pm}(f,\xi)$'' in [26,~Section 1.2] respectively. In view of [26,~Section 3.3] they are given as \begin{align*} S_1&:=\{p<\infty\mid \text{$p|D$, $p$ is inert in $E$}\},\\ S_2^{\pm}(f,\chi)&:=\{p<\infty\mid \text{$p|D$, $p$ ramifies in $E$, $\chi_p(\varpi_p)=\mp \epsilon_p$}\}, \end{align*} where recall that $\epsilon_p$ denotes the eigenvalue of the Atkin-Lehner involution for $f$~(cf.~Section 1.3) and that $\varpi_p$ is a prime element of $E_p$~(cf.~Lemma 2.1). Note that, unlike in [26,~Section 1.2], $i_p(\chi)=0$ is already assumed for $p|D$~(cf.~Assumption 1.2), and that $S_1\cup S_2^+(f,\chi)\cup S_2^-(f,\chi)$ coincides with $d(D)$. We furthermore put $A(\chi):=\prod_{p<\infty}p^{i_p(\chi)}$. We first quote [26,~Theorem 1.1], with a modification, for our situation.
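With the quadratic character $\eta$ now in place, we record an informal consistency check of the inert, unramified case of Lemma 2.1 (3) (our own illustration, not used in the proofs): for $p$ inert with $i_p(\chi)=0$, the base-change factor should equal $L_p(\pi_{\ur},s)$ times the corresponding factor of the $\eta_p$-twist, where $\eta_p(p)=-1$; in the variable $X=p^{-s}$ this amounts to the polynomial identity $(1-\alpha^2X^2)(1-\alpha^{-2}X^2)=(1-\alpha X)(1-\alpha^{-1}X)(1+\alpha X)(1+\alpha^{-1}X)$. A quick verification at exact rational sample points:

```python
from fractions import Fraction as Fr

def inert_bc_denominator(alpha, X):
    # reciprocal of the factor in Lemma 2.1 (3), inert case, i_p(chi) = 0
    return (1 - alpha**2 * X**2) * (1 - alpha**(-2) * X**2)

def std_times_twist_denominator(alpha, X):
    # reciprocal of L_p(pi_ur, s) times the factor of its eta_p-twist,
    # where eta_p(p) = -1 negates both Satake parameters
    std   = (1 - alpha * X) * (1 - alpha**(-1) * X)
    twist = (1 + alpha * X) * (1 + alpha**(-1) * X)
    return std * twist

# arbitrary nonzero rational sample points (the identity is polynomial,
# so unitarity of alpha is irrelevant to the check)
for alpha in (Fr(2), Fr(3, 5), Fr(-7, 4)):
    for X in (Fr(1, 2), Fr(1, 7), Fr(3)):
        assert inert_bc_denominator(alpha, X) == std_times_twist_denominator(alpha, X)
print("inert base-change factorization verified at sample points")
```

This is the familiar fact that, for an inert prime, the base-change $L$-factor is the product of the standard factor and its twist by the local quadratic character.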
To consider the toral integral in [26,~(1.6)] we need another embedding $\iota'_{\xi}:E\hookrightarrow GL_2(\Q)$: \[ \iota'_{\xi}(x+y\theta')= \begin{pmatrix} x & n(\theta')y\\ -y & x+\Tr(\theta')y \end{pmatrix}. \] Here we put $\theta'=-\bar{\theta}$, for which we need to check that $\theta'$ satisfies the condition in [26,~Section 1.2]~(for this see also [28,~Lemma 4.3.1 (iii) and (4.9)]). Our embedding $\iota_{\xi}$ is related to $\iota'_{\xi}$ by \[ \iota_{\xi}(\overline{x+y\theta})= \begin{pmatrix} r & 0\\ 0 & 1 \end{pmatrix}\iota'_{\xi}(x+y\theta) \begin{pmatrix} r^{-1} & 0\\ 0 & 1 \end{pmatrix}\quad(x+y\theta\in E), \] where recall that $r=\frac{2\sqrt{-n(\xi)}}{\sqrt{d_{\xi}}}$~(cf.~Section 1.4). Furthermore we note that our normalized measure of $\A_{E}^{\times}$ is the multiple of that of [26,~Section 2.4] by $\displaystyle\frac{\sqrt{|d_{\xi}|}}{4\pi}$. In fact, the volume of $\R^{\times}_+E^{\times}\backslash\A_E^{\times}$ with respect to our measure is equal to $\displaystyle\frac{{\bf h}(E)}{{\bf w}(E)}$ while that with respect to the measure of [26,~Section 2.4] is $2\prod_{p<\infty}L_p(\eta_p,1)=\displaystyle\frac{4\pi{\bf h}(E)}{\sqrt{|d_{\xi}|}{\bf w}(E)}$, where we note the normalization of the measure of $\A_{\Q}^{\times}$~(cf.~Section 1.5) to calculate this volume. We can then reformulate [26,~Theorem 1.1] as follows: \begin{prop} Let $L^{(\infty)}(\Pi,\chi^{-1},s)$ be the partial $\chi^{-1}$-twisted $L$-function of $\Pi$ without the archimedean factor. Under Assumption 1.2 we obtain \[ |P_{\chi}(f;\gamma_0)|^2= \begin{cases} C'(f,\chi) L^{(\infty)}(\Pi,\chi^{-1},\frac{1}{2})&(S_1=S_2^+(f,\chi)=\emptyset),\\ 0&(\text{otherwise}), \end{cases} \] where \[ C'(f,\chi):= |d_{\xi}|(4\pi)^{-1-\kappa}(\kappa-1)!D^{-\frac{1}{2}}A(\chi)^{-1}2^{|d(D)|}\prod_{p|A(\chi)}L_p(\eta_p,1)^2. \] \end{prop} Let $Z$ denote the center of $GL_2$.
With the invariant measure $dg$ of $GL_2(\A_{\Q})$ as in [26,~Section 2.4], we introduce the Petersson norm \[ \langle f,f\rangle:=\int_{Z_{\A_{\Q}}GL_2(\Q)\backslash GL_2(\A_{\Q})}|f(g)|^2dg \] of $f$. Rankin [33,~Theorem 3 (iii)] relates $\langle f,f\rangle$ to $L(\pi(f),\Ad,1)$ by taking the residue of the Rankin-Selberg integral of $f\times\bar{f}$ against an Eisenstein series. We can modify Rankin's argument adelically in a standard way to obtain the following: \begin{prop} \[ \langle f,f\rangle=2^{-\kappa-1}D\cdot L(\pi(f),\Ad,1). \] \end{prop} In addition, we note that the archimedean factor of $L(\Pi,\chi^{-1},\frac{1}{2})$ is equal to \[ \Gamma_{\C}(\kappa)\Gamma_{\C}(1), \] with $\Gamma_{\C}(s):=2(2\pi)^{-s}\Gamma(s)$~(cf.~Lemma 2.3). Then we can restate Proposition 2.4 as follows: \begin{prop} Under Assumption 1.2, for a non-zero primitive form $f$, we have \[ \frac{|P_{\chi}(f;\gamma_0)|^2}{\langle f,f\rangle}= \begin{cases} C(f,\chi)L(\Pi,\chi^{-1},\frac{1}{2})&(S_1=S_2^+(f,\chi)=\emptyset),\\ 0&(\text{otherwise}) \end{cases} \] with \[ C(f,\chi):= \frac{2^{|d(D)|-2}|d_{\xi}|\prod_{p|A(\chi)}L_p(\eta_p,1)^2}{D^{\frac{3}{2}}A(\chi)L(\pi(f),\Ad,1)}. \] \end{prop} \subsection{Relation between $P_{\chi}(f';\gamma'_0)$ and $L(\Pi',\chi^{-1},\frac{1}{2})$.} Let $(*,*)_{\kappa}$ be a unitary inner product of $(\sigma_{\kappa}|_{\Hl^1},V_{\kappa})$~(for $\sigma_{\kappa}$ see Section 1.1) and $||v||$ denote the norm of $v\in V_{\kappa}$ induced by this inner product. We next provide an explicit relation between $||P_{\chi}(f';\gamma'_0)||^2$ and the central $L$-value $L(\Pi',\chi^{-1},\frac{1}{2})$. We postpone the proof until Section 3. To write down the relation we need several notations. We denote by $r_p$ the ramification index of $p$ for the quadratic extension $E/\Q$, i.e. $r_p:= \begin{cases} 1&(\text{$p$:split or inert})\\ 2&(\text{$p$:ramified}) \end{cases}$. Let $Z'$ denote the center of $B^{\times}$.
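For orientation we add a concrete illustration (assuming the specialization $E=\Q(i)$, the field arising from $\xi=i/2$ in the introduction): an odd prime $p$ splits in $\Q(i)$ exactly when $p\equiv 1\pmod 4$, is inert exactly when $p\equiv 3\pmod 4$, and $p=2$ ramifies, so the local data $e_p(E)$ and $r_p$ can be tabulated mechanically:

```python
def splitting_in_Qi(p):
    """Splitting type of a rational prime p in Q(i) (discriminant -4)."""
    if p == 2:
        return "ramified"
    return "split" if p % 4 == 1 else "inert"

def e_p(p):
    # e_p(E) as in the text: -1 if inert, 0 if ramified, 1 if split
    return {"inert": -1, "ramified": 0, "split": 1}[splitting_in_Qi(p)]

def r_p(p):
    # ramification index of p in E/Q: 2 if ramified, 1 otherwise
    return 2 if splitting_in_Qi(p) == "ramified" else 1

for p in (2, 3, 5, 7, 13):
    print(p, splitting_in_Qi(p), e_p(p), r_p(p))
```

In particular, for this $E$ the product $\prod_{p|d_B}r_pp^{-1}$ in the formulas below involves only $p=2$, which ramifies in $\Q(i)$.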
We normalize the invariant measure $db:=\prod_{p\le\infty}db_p$ of $B_{\A_{\Q}}^{\times}$ so that \[ \int_{\mO_p^{\times}}db_p=1~(p<\infty),\quad\int_{\Hl^{1}}db_{\infty}=1. \] We set \[ \langle f',f'\rangle:=\int_{Z'_{\A_{\Q}}B^{\times}\backslash B^{\times}_{\A_{\Q}}}(f'(b),f'(b))_{\kappa}db. \] We now note that the archimedean component $\pi''_{\infty}$ of $\pi(f')$ can be identified with an irreducible representation $(\sigma_{\kappa}|_{\Hl^1/\{\pm 1\}},V_{\kappa})$ of $B_{\infty}^{\times}/Z'_{\infty}\simeq\Hl^1/\{\pm 1\}$, where note that $\kappa$ is even~(cf.~Section 1.1) and $\sigma_{\kappa}$ is thus trivial on $\{\pm 1\}$. Let $v_{\kappa,\xi}$ be a highest weight vector of $V_{\kappa}$ with respect to $\sigma_{\kappa}(\R(\xi)^{\times})$-action, and let $v_{\kappa,\xi}^*\in V_{\kappa}$ be the dual of $v_{\kappa,\xi}$ with respect to $(*,*)_{\kappa}$. We set $f'_{\infty,\kappa}:=(f'_{\infty},v_{\kappa,\xi}^*)_{\kappa}v_{\kappa,\xi}$, where note that $f'_{\infty}$~(cf.~Section 2.1) is viewed as an element in $V_{\kappa}$. Then we have the following: \begin{prop} Under Assumption 1.2, for a non-zero Hecke eigenform $f'$, we have \[ \frac{||P_{\chi}(f';\gamma'_0)||^2}{\langle f',f'\rangle}= \begin{cases} C(f',\chi)L(\Pi',\chi^{-1},\frac{1}{2})&(\text{$\pi''_p|_{E_p^{\times}}=\chi_p$ when $p$ divides $d_B$ and is ramified in $E$}),\\ 0&(\text{otherwise}), \end{cases} \] where \[ C(f',\chi):=\frac{\sqrt{|d_{\xi}|}(\kappa+1)}{4A(\chi)L(\JL(\pi(f')),\Ad,1)}\prod_{p|A(\chi)}L_p(\eta_p,1)^2\cdot\prod_{p|d_B}r_pp^{-1}\cdot\frac{(f'_{\infty,\kappa},f'_{\infty,\kappa})_{\kappa}}{(f'_{\infty},f'_{\infty})_{\kappa}}. 
\] \end{prop} \subsection{Main result~(the first form).} Theorem 1.3, Proposition 2.6 and Proposition 2.7 imply the following theorem: \begin{thm} Under the assumption in Theorem 1.3 we have \[ \frac{||\cL(f,f')_{\xi}^{\chi}(g_0)||^2}{\langle f,f\rangle\langle f',f'\rangle}=C(f,f',\xi,\chi)L(\Pi,\chi^{-1},\frac{1}{2})L(\Pi',\chi^{-1},\frac{1}{2}), \] where, if $\pi''_p|_{E_p^{\times}}=\chi_p$ for $p|d_B$ ramified in $E$ and $S_1=S_2^+(f,\chi)=\emptyset$, \begin{align*} &C(f,f',\xi,\chi)=\frac{2^{2\kappa+|d(D)|-6}(\kappa+1) n(\xi)^{\frac{\kappa}{2}}|d_{\xi}|^{\frac{3}{2}}\mathbf{w}(E)^2}{\mathbf{h}(E)^2A(\chi)^4D^{\frac{3}{2}}L(\pi(f),\Ad,1)L(\JL(\pi(f')),\Ad,1)}\\ &\prod_{p\nmid d_B}p^{4\mu_p}(1-\delta(i_p(\chi)>0)e_p(E)p^{-1})^2\prod_{p|d_B}r_pp^{-1}\prod_{p|D}(p+1)^{-2}\cdot e^{-8\pi\sqrt{n(\xi)}}\cdot\frac{(f'_{\infty,\kappa},f'_{\infty,\kappa})_{\kappa}}{(f'_{\infty},f'_{\infty})_{\kappa}}, \end{align*} and $C(f,f',\xi,\chi)=0$ otherwise. \end{thm} \subsection{Main result~(the second form).} We introduce the global $L$-function of convolution type attached to $\cL(f,f')$ and $\chi^{-1}$, and relate its central value to the square norm $||\cL(f,f')_{\xi}^{\chi}(g_0)||^2$. We now recall that $\cL(f,f')$ belongs to $\cS_{\kappa}^D$~(cf.~Section 1.1). Before introducing the $L$-function of convolution type, we define the spinor $L$-function for a Hecke eigenform $F\in \cS_{\kappa}^D$. In [27,~Section 5.1] we introduced three Hecke operators $\cT_p^i$ with $0\le i\le 2$ for $p\nmid d_B$. Let $\Lambda_p^i$ be the Hecke eigenvalue of $\cT_p^i$ for $F$ with $0\le i\le 2$. For $p\nmid d_B$ we put \[ Q_{F,p}(t):=1-p^{-\frac{3}{2}}\Lambda_p^1t+p^{-2}(\Lambda_p^2+p^2+1)t^2-p^{-\frac{3}{2}}\Lambda_p^1t^3+t^4. \] For this we note that $Q_{F,p}(p^{-s})^{-1}$ coincides with the local spinor $L$-function for an unramified principal series representation of the group of $\Q_p$-rational points of the split symplectic similitude $\Q$-group $GSp(2)$ of degree two.
This comes from the denominator of the formal Hecke series for $GSp(1,1)_{\Q_p}\simeq GSp(2)_{\Q_p}$. Here recall that $GSp(2)$ is defined by \[ GSp(2)_{\Q}:=\left\{g\in GL(4)_{\Q} \left|~{}^tg \begin{pmatrix} 0_2 & 1_2\\ -1_2 & 0_2 \end{pmatrix}g=\nu(g)\begin{pmatrix} 0_2 & 1_2\\ -1_2 & 0_2 \end{pmatrix},~\nu(g)\in\Q^{\times}\right.\right\}. \] On the other hand, in [27,~Section 5.2], we introduced two Hecke operators $\cT_p^i$ with $0\le i\le 1$ for $p|d_B$. Let ${\Lambda'}_p^i$ be the Hecke eigenvalue of $\cT_p^i$ for $F$ with $0\le i\le 1$. For $p|d_B$ we put \[ Q_{F,p}(t):= \begin{cases} (1-p^{-\frac{3}{2}}({\Lambda'}_p^1-(p-1){\Lambda'}_p^0)t+t^2)(1-{\Lambda'}_p^0p^{-\frac{1}{2}}t)&(p|\frac{d_B}{D}),\\ (1+{\Lambda'}_p^0p^{-\frac{1}{2}}t)(1-{\Lambda'}_p^0p^{-\frac{1}{2}}t)&(p|D). \end{cases} \] The first one is due to Sugano [36,~(3-4)]. The first factor for the case $p| D$ comes from the numerator of the formal Hecke series for $GSp(1,1)_{\Q_p}$. We define the spinor $L$-function $L(F,\operatorname{spin},s)$ of $F$ by \[ L(F,\operatorname{spin},s):=\prod_{p\le\infty}L_p(F,\operatorname{spin},s), \] where \[ L_p(F,\operatorname{spin},s):= \begin{cases} Q_{F,p}(p^{-s})^{-1}&(p<\infty),\\ \Gamma_{\C}(s+\frac{\kappa-1}{2})\Gamma_{\C}(s+\frac{\kappa+1}{2})&(p=\infty). \end{cases} \] This is a modification of the definition in [27,~Section 5.3]. We can then reformulate [27,~Corollary 5.3] as follows: \begin{prop} The spinor $L$-function for $\cL(f,f')$ decomposes into \[ L(\cL(f,f'),\operatorname{spin},s)=L(\pi(f),s)L(\JL(\pi(f')),s). \] \end{prop} \begin{pf} This is deduced from Lemma 2.1, Lemma 2.2 and Lemma 2.3 and the commutation relation of Hecke operators satisfied by Arakawa lifting~(cf.~[27,~Theorem 5.1]). 
Here we note that $(1-{\Lambda'}_p^0p^{-\frac{1}{2}-s})^{-1}=(1-\pi''_p(\varpi_{B,p})p^{-\frac{1}{2}-s})^{-1}=(1-\delta_p(p)p^{-\frac{1}{2}-s})^{-1}$~(respectively~$(1+{\Lambda'}_p^0p^{-\frac{1}{2}-s})^{-1}=(1+\delta_p(p)p^{-\frac{1}{2}-s})^{-1}$) holds for $p|d_B$~(respectively~$p|D$), which follows from [27,~Theorem 5.1 (ii)]. This is the local $L$-factor of $\JL(\pi(f'))$ at $p|d_B$~(respectively~$\pi(f)$ at $p|D$), which is a local $L$-function of a special representation~(cf.~Lemma 2.2). \qed \end{pf} \noindent Of course, we see that $L(\cL(f,f'),\operatorname{spin},s)$ has the meromorphic continuation and satisfies the functional equation between $s$ and $1-s$ since so do $L(\pi(f),s)$ and $L(\JL(\pi(f')),s)$. We now introduce the global $L$-function \[ L(F,\chi^{-1},s):=\prod_{p\le\infty}L_p(F,\chi^{-1},s) \] of convolution type for a Hecke eigenform $F\in\cS_{\kappa}^D$ and $\chi^{-1}$. Here the local factors $L_p(F,\chi^{-1},s)$ are given as \[ L_p(F,\chi^{-1},s):= \begin{cases} Q_{F,p}(\alpha_p^{\chi}p^{-s})^{-1}Q_{F,p}(\beta_p^{\chi}p^{-s})^{-1}&(\text{$\chi$ is unramified at $p<\infty$}),\\ 1&(\text{$\chi$ is ramified at $p<\infty$}),\\ \Gamma_{\C}(s+\kappa-\frac{1}{2})\Gamma_{\C}(s+\frac{1}{2})\Gamma_{\C}(s+\kappa+\frac{1}{2})\Gamma_{\C}(s+\frac{1}{2})&(p=\infty), \end{cases} \] where \[ (\alpha_p^{\chi},\beta_p^{\chi}):= \begin{cases} (\chi_p(\varpi_{p,1})^{-1},\chi_p(\varpi_{p,2})^{-1})&(\text{$p$:~split}),\\ (\chi_p(p)^{-1},-\chi_p(p)^{-1})=(1,-1)&(\text{$p$:~inert}),\\ (\chi_p(\varpi_p)^{-1},0)&(\text{$p$:~ramified}) \end{cases} \] with prime elements $\varpi_{p,1},~\varpi_{p,2}$ and $\varpi_p$ of $E_p$ introduced in Lemma 2.1 (3). \begin{prop} We have \[ L(\cL(f,f'),\chi^{-1},s)=L(\Pi,\chi^{-1},s)L(\Pi',\chi^{-1},s). 
\] This is an entire function of $s$ and satisfies the functional equation \[ L(\cL(f,f'),\chi^{-1},s)=\epsilon(\Pi,\chi^{-1})\epsilon(\Pi',\chi^{-1})L(\cL(f,f'),\chi^{-1},1-s), \] where $\epsilon(\Pi,\chi^{-1})$~(respectively~$\epsilon(\Pi',\chi^{-1})$) denotes the $\epsilon$-factor of $L(\Pi,\chi^{-1},s)$~(respectively~$L(\Pi',\chi^{-1},s)$). \end{prop} \begin{pf} The first statement follows from Lemma 2.1, Lemma 2.2, Lemma 2.3 and Proposition 2.9. According to [15,~Corollary 19.15], $L(\Pi,\chi^{-1},s)$ and $L(\Pi',\chi^{-1},s)$ are entire functions of $s$ and satisfy \[ L(\Pi,\chi^{-1},s)=\epsilon(\Pi,\chi^{-1})L(\Pi,\chi,1-s),\quad L(\Pi',\chi^{-1},s)=\epsilon(\Pi',\chi^{-1})L(\Pi',\chi,1-s), \] where we note that $\Pi$ and $\Pi'$ are self-dual~(cf.~Section 2.1). In view of Lemma 2.1, Lemma 2.2 and Lemma 2.3, we see that $L(\Pi,\chi,s)=L(\Pi,\chi^{-1},s)$ and $L(\Pi',\chi,s)=L(\Pi',\chi^{-1},s)$, which implies the second assertion.\qed \end{pf} Proposition 2.10 obviously implies that the function $L(\cL(f,f'),\chi^{-1},s)$ is regular at $s=\frac{1}{2}$. We are now able to reformulate Theorem 2.8 as follows: \begin{thm} Let the assumption and the notation be as in Theorem 1.3. We have \[ \frac{||\cL(f,f')_{\xi}^{\chi}(g_0)||^2}{\langle f,f\rangle\langle f',f'\rangle}=C(f,f',\xi,\chi)L(\cL(f,f'),\chi^{-1},\frac{1}{2}). \] \end{thm} \subsection{Strictly positive central L-values.} As an application of our main results we show the existence of strictly positive central values for the $L$-functions under consideration. In this subsection we fix a quaternion algebra $B$ and a maximal order $\fO$ of $B$ as \begin{align*} &B=\Q+\Q\cdot i+\Q\cdot j+\Q\cdot k\quad(i^2=j^2=-1,~ij=-ji=k),\\ &\fO=\Z\cdot\frac{1+i+j+k}{2}+\Z\cdot i+\Z\cdot j+\Z\cdot k. \end{align*} We note that $d_B=2$ for this $B$.
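As a mechanical sanity check, one can verify with exact rational arithmetic that the lattice $\fO$ above (the Hurwitz order) is closed under quaternion multiplication, so that it is indeed an order. The sketch below uses the conventions $i^2=j^2=-1$, $ij=-ji=k$ of the text; the membership test encodes the standard description of $\fO$ as the quaternions whose four coordinates are either all integers or all half-odd-integers.

```python
from fractions import Fraction as F

# Quaternions as 4-tuples (a, b, c, d) = a + b*i + c*j + d*k with
# i^2 = j^2 = -1 and ij = -ji = k, as in the text.
def qmul(x, y):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# Z-basis of the Hurwitz order: (1+i+j+k)/2, i, j, k.
half = F(1, 2)
basis = [(half, half, half, half),
         (F(0), F(1), F(0), F(0)),
         (F(0), F(0), F(1), F(0)),
         (F(0), F(0), F(0), F(1))]

def in_order(q):
    """q lies in the order iff 2q has integer coordinates of equal parity
    (all even: integral coordinates; all odd: half-odd-integer coordinates)."""
    doubled = [2 * t for t in q]
    if any(t.denominator != 1 for t in doubled):
        return False
    return len({t.numerator % 2 for t in doubled}) == 1

closed = all(in_order(qmul(e, f)) for e in basis for f in basis)
print("closed under multiplication:", closed)
```

Since products of basis elements land back in the lattice and multiplication is $\Z$-bilinear, closure for the whole lattice follows.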
In [28,~Section 14] we have shown the following: \begin{prop} Suppose that $\xi=i/2$, which is primitive~(cf.~[28,~Section 14.2]), and $\chi$ is unramified at every finite prime, i.e. $i_p(\chi)=0$ for any finite prime $p$. Let $D\in\{1,2\}$ and $\kappa\ge 12$~(respectively~$\kappa\ge 8$) be divisible by $4$~(respectively~by $8$) when $D=1$~(respectively~$D=2$). Then there exist a primitive form $f\in S_{\kappa}(D)$ and a Hecke eigenform $f'\in\cA_{\kappa}$ such that $P_{\chi}(f;\gamma_0)P_{\chi}(f';\gamma'_0)\not=0$~(which implies $\cL(f,f')_{\xi}^{\chi}\not\equiv 0$). \end{prop} For this proposition we remark that $f$ is not assumed to be primitive in [28,~Section 14]. However, when $D=2$ and $\chi$ is unramified at every finite prime, we can find in [28,~Section 14] an $f$ satisfying $P_{\chi}(f;\gamma_0)\not=0$ and the sign condition in Section 1.3 (1). There such $f$'s are given by the powers of the primitive form of level 2 and weight 8. This implies that the statement remains valid even if we assume that $f$ is primitive. From Proposition 2.6, Proposition 2.7 and Proposition 2.12 we deduce the following: \begin{thm} Under the assumption in Proposition 2.12 there are Hecke eigenforms $(f,f')\in S_{\kappa}(D)\times\cA_{\kappa}$ such that \[ L(\Pi,\chi^{-1},\frac{1}{2})>0,\quad L(\Pi',\chi^{-1},\frac{1}{2})>0. \] \end{thm} \begin{pf} Due to the isomorphism between $\cA_{\kappa}$ and the space spanned by primitive forms in $S_{\kappa+2}(d_{B})$~(cf.~Section 2.1) we have $\JL(f')\not=0$ for a non-zero $f'$. Furthermore we note that Proposition 2.5 implies that $L(\pi(f),\Ad,1)$ and $L(\JL(\pi(f')),\Ad,1)$, which appear in $C(f,\chi)$ and $C(f',\chi)$, are positive for a non-zero primitive form $f$ and a non-zero Hecke eigenform $f'$. Proposition 2.6 and Proposition 2.7 imply that $C(f,\chi)>0$ and $C(f',\chi)>0$ if $C(f,\chi)\not=0$ and $C(f',\chi)\not=0$. Let $(f,f')$ be Hecke eigenforms as in Proposition 2.12.
In view of the explicit formulas in Proposition 2.6 and Proposition 2.7, we see $C(f,\chi)\not=0$ and $C(f',\chi)\not=0$ and thus $C(f,\chi)>0$ and $C(f',\chi)>0$ are satisfied. We then verify the simultaneous positivity of the two central $L$-values in the assertion. \qed \end{pf} As an immediate consequence of this and Proposition 2.10 we have obtained the following theorem: \begin{thm} Under the same assumption as in Proposition 2.12 there exist Hecke eigenforms $(f,f')\in S_{\kappa}(D)\times\cA_{\kappa}$ such that \[ L(\cL(f,f'),\chi^{-1},\frac{1}{2})>0. \] \end{thm} \section{Proof of Proposition 2.7.} \subsection{} In order to verify Theorem 2.8, which leads to Theorem 2.11, it remains to show Proposition 2.7. For the proof we need two formulas. The first one~(respectively~the second one) is due to Waldspurger~[38,~Proposition 7]~(respectively~Macdonald~[23,~Chap.V,~Section 3,~(3.4)]). To state the first formula we remark that every $\pi''_p$ is unitary and thus equipped with a unitary inner product. When $p\nmid d_B$ we embed $E_p^{\times}$ into $B_p^{\times}=GL_2(\Q_p)$ by $\iota_{\xi}$~(cf.~Section 1.4). Recall that, for the quadratic character $\eta$ attached to the quadratic extension $E/\Q$, we have let $L(\eta,s)$ be the $L$-function defined by $\eta$ and $L_p(\eta_p,s)$ the local factor of $L(\eta,s)$ at a place $p$~(cf.~Section 2.3). For the toral integral $P_{\chi}(f';b)~(b\in B_{\A_{\Q}}^{\times})$ we note that our normalized measure $ds$ of $\A_{E}^{\times}$ is the multiple of that of [38] by $\displaystyle\frac{\sqrt{|d_{\xi}|}}{4}$. In fact, the volume of $\R^{\times}_+ E^{\times}\backslash\A_{E}^{\times}$ with respect to our measure~(respectively~the measure of [38]) is $\displaystyle\frac{{\bf h}(E)}{{\bf w}(E)}$~(respectively~$2L(\eta,1)=\displaystyle\frac{4{\bf h}(E)}{\sqrt{|d_{\xi}|}{\bf w}(E)}$). For the calculation of the volume by the measure of [38], we take the normalized measure of $\A_{\Q}^{\times}$~(cf.~Section 1.5) into account. 
For the norm $\langle f',f'\rangle$ of $f'$ we remark that our normalized measure $db$ of $B_{\A_{\Q}}^{\times}$~(cf.~Section 2.4) is the $2^{-4}3^{-1}\cdot\prod_{p|d_B}(p-1)$-multiple of that of [38], which chooses the Tamagawa measure of $B^{\times}/Z'\simeq SO(3)$. Actually, from the fact that the Tamagawa number of $SO(3)$ is two, we deduce that the volume of $\prod_{p<\infty}\mO_p^{\times}/\Z_p^{\times}\times\Hl^1/\{\pm1\}$ with respect to the measure of [38] is equal to $2^{3}\cdot3\cdot\prod_{p|d_B}(p-1)^{-1}$~(cf.~[20,~Section 3,~3,~(4)]), where note that $2^{-3}3^{-1}\prod_{p|d_B}(p-1)$ is the familiar number appearing in the well-known mass formula for $B$. On the other hand, we see that the volume of $\prod_{p<\infty}\mO_p^{\times}/\Z_p^{\times}\times\Hl^1/\{\pm1\}$ is $\displaystyle\frac{1}{2}$ by our measure. These computations justify our remark on the measure $db$. Let us recall that $f'_p$ denotes the $p$-component of $f'$ for each prime $p$~(cf.~Section 2.1). With the normalized measures $ds$ and $db$ we quote [38,~Proposition 7] as follows: \begin{prop}[Waldspurger] For $h'=(h'_p)_{p\leq\infty}\in B^{\times}_{\A_{\Q}}$, \[ \frac{||P_{\chi}(f';h')||^2}{\langle f',f'\rangle}=\frac{\pi\sqrt{|d_{\xi}|}}{\prod_{p|d_B}(p-1)}\frac{L(\Pi',\chi^{-1},\frac{1}{2})}{L(\JL(\pi(f')),\Ad,1)}\prod_{p\leq\infty}\alpha_p(f'_p,\chi_p,h'_p), \] where \[ \alpha_p(f'_p,\chi_p,h'_p):=\frac{L_p(\eta_p,1)L_p(\pi'_p,\Ad,1)}{\zeta_p(2)L_p(\Pi'_p,\chi_p^{-1},\frac{1}{2})}\int_{\Q_p^{\times}\backslash E_p^{\times}}\frac{\langle\pi''_p(s_ph'_p)f'_p,\pi''_p(h'_p)f'_p\rangle_p}{\langle f'_p,f'_p\rangle_p}\chi_p(s_p)^{-1}ds_p \] with an inner product $\langle *,*\rangle_p$ of $\pi''_p$ for each place $p$. \end{prop} We put \[ \phi_p(g):=\frac{\langle\pi''_p(g)f'_p,f'_p\rangle_p}{\langle f'_p,f'_p\rangle_p}\quad(g\in B_p^{\times}) \] for each place $p$.
We note that, at a finite prime $p$ not dividing $d_B$, $\pi''_p$ is isomorphic to $\pi'_p$ and is a unitary unramified principal series representation of $GL_2(\Q_p)$~(cf.~Section 2.2). The Satake parameter of $\pi''_p$ is thus of the form $(\alpha_p,\alpha_p^{-1})\in(\C^{\times})^2$ with $|\alpha_p|=1$. The representation $\pi''_p$ has $f'_p$ as a spherical vector. We remark that $\phi_p(g)$ is therefore the zonal spherical function on $GL_2(\Q_p)$ with $\phi_p(1)=1$ for $p\nmid d_B$~(for the definition see [23,~p.162]). The following proposition is the second formula we need~(cf.~[23,~Chap.V,~Section 3,~(3.4)]): \begin{prop}[Macdonald] Let $p$ be a finite prime not dividing $d_B$. For $a_m:= \begin{pmatrix} p^m & 0\\ 0 & 1 \end{pmatrix}$ with a non-negative integer $m$ we have \[ \phi_p(a_m)=\frac{p^{-\frac{m}{2}}}{1+p^{-1}}(\alpha_p^m\frac{1-p^{-1}\alpha_p^{-2}}{1-\alpha_p^{-2}}+\alpha_p^{-m}\frac{1-p^{-1}\alpha_p^2}{1-\alpha_p^2}). \] \end{prop} \subsection{Calculation at $p\nmid d_B$.} In this subsection we assume that $p\nmid d_B$. In what follows, we put $\mO_{E_p,i}:=\Z_p+p^i\mO_{E_p}$ for a non-negative integer $i$, and $\lambda_p:=p^{\frac{1}{2}}(\alpha_p+\alpha_p^{-1})$, which is the eigenvalue of the Hecke operator defined by the double coset $GL_2(\Z_p) \begin{pmatrix} p & 0\\ 0 & 1 \end{pmatrix}GL_2(\Z_p)$~(cf.~[3,~Proposition 4.6.6]). The aim of this subsection is to prove the following proposition. \begin{prop} For $p\nmid d_B$, \[ \alpha_p(f'_p,\chi_p,\gamma'_{0,p})= \begin{cases} 1&(i_p(\chi)=0),\\ p^{-i_p(\chi)}L_p(\eta_p,1)^2&(i_p(\chi)>0), \end{cases} \] where see Section 1.5 for $\gamma'_{0,p}$. \end{prop} \begin{pf} \subsection*{(1)~The case of a split prime $p$.} For the proof we do some preparation. For this case we note that there is an isomorphism $E_p\simeq \Q_p\times\Q_p$. We may thus assume that $\theta$~(cf.~Section 1.4) corresponds to $(1,0)$ via this isomorphism. 
Namely $\theta$ satisfies \[ N(\theta)=0,~\Tr(\theta)=1,~\theta^2=\theta. \] We see that \[ {\gamma'}_{0,p}^{-1}\iota_{\xi}(x+y\theta)\gamma'_{0,p}= \begin{pmatrix} x & 0\\ p^{-i_p(\chi)}y & x+y \end{pmatrix}. \] Recall that $\varpi_{p,1}$ and $\varpi_{p,2}$ denote the two distinct prime elements of $E_p$ as in Lemma 2.1 (3). We may assume that $\varpi_{p,1}$ and $\varpi_{p,2}$ correspond to $(p,1)$ and $(1,p)$ respectively. Therefore \[ \varpi_{p,1}=1+(p-1)\theta,\quad\varpi_{p,2}=1+(p-1)\bar{\theta}. \] For $i>0$ we have \[ \Q_p^{\times}\backslash E_p^{\times}/\mO_{E_p,i}^{\times}\simeq\bigsqcup_{n\in\Z}\varpi_{p,1}^n(\Z_p^{\times}\backslash\mO_{E_p}^{\times}/\mO_{E_p,i}^{\times}) \] and \[ \Z_p^{\times}\backslash\mO_{E_p}^{\times}/\mO_{E_p,i}^{\times}\simeq\{1+b\theta\mid b\in\Z_p/p^{i}\Z_p,~1+b\in\Z_p^{\times}\}. \] Noting that the conductor of $\chi_p$ is $p^{i_p(\chi)}$, we can show the following lemma. \begin{lem} Let $p$ be split and $i_p(\chi)>0$.\\ (1)~For $k\le i_p(\chi)$, \[ \sum_{\scriptstyle b\in p^k\Z_p/p^{i_p(\chi)}\Z_p \atop \scriptstyle 1+b\in\Z_p^{\times}}\chi_p(1+b\theta)^{-1}= \begin{cases} 0&(k\le i_p(\chi)-1),\\ 1 & (k=i_p(\chi)). \end{cases} \] (2)~When $i_p(\chi)>1$, \[ \sum_{\scriptstyle b\in\Z_p/p^{i_p(\chi)}\Z_p \atop \scriptstyle 1+b\in\Z_p^{\times},~\ord_p(b)=k} \chi_p(1+b\theta)^{-1}= \begin{cases} 0&(k<i_p(\chi)-1),\\ -1 & (k=i_p(\chi)-1),\\ 1 & (k=i_p(\chi)). \end{cases} \] When $i_p(\chi)=1$, \[ \sum_{\scriptstyle b\in\Z_p/p\Z_p \atop \scriptstyle 1+b\in\Z_p^{\times},~\ord_p(b)=k}\chi_p(1+b\theta)^{-1}= \begin{cases} -1&(k=0),\\ 1&(k=1). \end{cases} \] \end{lem} The next lemma is verified in view of the normalization of our measures of $E_p^{\times}$ and $\Z_p^{\times}$~(cf.~Section 1.5). \begin{lem} For a split prime $p$ we have \[ \vol(\Z_p^{\times}\backslash\mO_{E_p,i}^{\times})= \begin{cases} 1&(i=0),\\ p^{-i}L_p(\eta_p,1)&(i>0). \end{cases} \] \end{lem} We first carry out the proof for the case of $i_p(\chi)=0$. 
It suffices to show that \[ \int_{\Q_p^{\times}\backslash E_p^{\times}}\frac{\langle\pi''_p(\iota_{\xi}(s_p)\gamma'_{0,p})f'_p,\pi''_p(\gamma'_{0,p})f'_p\rangle_p}{\langle f'_p,f'_p\rangle_p}\chi_p(s_p)^{-1}ds_p=\frac{\zeta_p(2)L_p(\Pi'_p\otimes\chi_p^{-1},\frac{1}{2})}{L_p(\eta_p,1)L_p(\pi'_p,\Ad,1)}. \] Note that we can regard $\chi_p$ as a character of $(\Q_p^{\times})^2\simeq E_p^{\times}$, thus we may write $\chi_p$ as \[ \chi_p((t_1,t_2))=\omega_1(t_1)\omega_2(t_2)=\omega_1(t_1t_2^{-1})\quad((t_1,t_2)\in(\Q_p^{\times})^2) \] with two unramified characters $\omega_1$ and $\omega_2$ of $\Q_p^{\times}$ such that $\bar{\omega_1}=\omega_2$. The integral is then equal to \[ \int_{\Q_p^{\times}\backslash E_p^{\times}}\phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(s_p)\gamma'_{0,p})\chi_p(s_p)^{-1}ds_p=\int_{\Q_p^{\times}}\phi_p( \begin{pmatrix} t_1 & 0\\ 0 & 1 \end{pmatrix})\omega_1(t_1)^{-1}dt_1, \] where we note that ${\gamma'}_{0,p}^{-1}\iota_{\xi}(s_p)\gamma'_{0,p}$ can be replaced by $\begin{pmatrix} t_1 & 0\\ 0 & t_2 \end{pmatrix}$ with $(t_1,t_2)\in(\Q_p^{\times})^2$ in view of the elementary divisor theorem. Now, using Proposition 3.2, we verify that this is equal to \begin{align*} &\frac{1}{1+p^{-1}}\sum_{m\in\Z}((p^{-\frac{1}{2}}\alpha_p)^{|m|}\frac{1-p^{-1}\alpha_p^{-2}}{1-\alpha_p^{-2}}\overline{\omega_1(p)}^m+(p^{-\frac{1}{2}}\alpha_p^{-1})^{|m|}\frac{1-p^{-1}\alpha_p^2}{1-\alpha_p^2}\overline{\omega_1(p)}^m)\\ &=\frac{1}{1+p^{-1}}\{(-1+(1-\alpha_pp^{-\frac{1}{2}}\overline{\omega_1(p)})^{-1}+(1-\alpha_pp^{-\frac{1}{2}}\overline{\omega_2(p)})^{-1})\frac{1-p^{-1}\alpha_p^{-2}}{1-\alpha_p^{-2}}\\ &+(-1+(1-\alpha_p^{-1}p^{-\frac{1}{2}}\overline{\omega_1(p)})^{-1}+(1-\alpha_p^{-1}p^{-\frac{1}{2}}\overline{\omega_2(p)})^{-1})\frac{1-p^{-1}\alpha_p^{2}}{1-\alpha_p^{2}}\}. \end{align*} By a direct calculation we prove that this coincides with the ratio of the local L-functions on the right hand side~(cf.~Lemma 2.1). We thus have $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})=1$. 
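The ``direct calculation'' above is, at bottom, the two-sided geometric series $\sum_{m\in\Z}r^{|m|}z^m=-1+(1-rz)^{-1}+(1-rz^{-1})^{-1}$ (valid for $|r|<1$, $|z|=1$) applied with $r=p^{-\frac{1}{2}}\alpha_p^{\pm 1}$ and $z=\overline{\omega_1(p)}$. The following numerical sketch checks this summation step only; the values of $p$, $\alpha_p$ and $\omega_1(p)$ are illustrative choices of ours, not data from the text.

```python
import cmath

# Illustrative values (not taken from the text): an odd prime p, a unitary
# Satake parameter alpha_p, and omega_1(p) on the unit circle.
p = 5
alpha = cmath.exp(0.7j)
z = cmath.exp(-0.3j)          # stands for the conjugate of omega_1(p)

def phi(m):
    """Macdonald's formula (Proposition 3.2) for phi_p(a_m), m >= 0."""
    a2 = alpha * alpha
    c_plus = (1 - 1 / (p * a2)) / (1 - 1 / a2)
    c_minus = (1 - a2 / p) / (1 - a2)
    return p ** (-m / 2) / (1 + 1 / p) * (alpha ** m * c_plus + alpha ** (-m) * c_minus)

# Consistency checks: phi_p(a_0) = 1 (normalization phi_p(1) = 1) and
# phi_p(a_1) = lambda_p / (p + 1), with lambda_p = p^(1/2)(alpha + 1/alpha).
lam = p ** 0.5 * (alpha + 1 / alpha)
assert abs(phi(0) - 1) < 1e-12
assert abs(phi(1) - lam / (p + 1)) < 1e-12

# Two-sided sum against z^m versus the closed form displayed in the text,
# i.e. sum_m r^{|m|} z^m = -1 + 1/(1 - r*z) + 1/(1 - r/z).
lhs = sum(phi(abs(m)) * z ** m for m in range(-200, 201))

def geom(r):
    return -1 + 1 / (1 - r * z) + 1 / (1 - r / z)

a2 = alpha * alpha
rhs = (geom(alpha / p ** 0.5) * (1 - 1 / (p * a2)) / (1 - 1 / a2)
       + geom(1 / (alpha * p ** 0.5)) * (1 - a2 / p) / (1 - a2)) / (1 + 1 / p)
print(abs(lhs - rhs))   # agreement up to truncation and rounding error
```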
We next consider the case of $i_p(\chi)>0$. Since $\phi_p$ is bi-invariant by $GL_2(\Z_p)$ and trivial on the center we see that, for $1+b\theta\in \mO_{E_p}^{\times}$ and $n\in\Z\setminus\{0\}$, \begin{align*} \phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(1+b\theta)\gamma'_{0,p})&= \phi_p( \begin{pmatrix} p^{2(i_p(\chi)-\ord_p(b))} & 0\\ 0 & 1 \end{pmatrix}),\\ \phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(\varpi_{p,1}^n(1+b\theta))\gamma'_{0,p})&=\phi_p( \begin{pmatrix} p^{2i_p(\chi)+|n|} & 0\\ 0 & 1 \end{pmatrix}). \end{align*} By Proposition 3.2, Lemma 3.4 and Lemma 3.5 the local integral $\displaystyle\int_{\Q_p^{\times}\backslash E_p^{\times}}\phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(s_p)\gamma'_{0,p})\chi_p(s_p)^{-1}ds_p$ involved in $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})$ is \begin{align*} &\vol(\mO_{E_p,i_p(\chi)}^{\times}/\Z_p^{\times})\{\sum_{n\in\Z\setminus\{0\}}\phi_p( \begin{pmatrix} p^{2i_p(\chi)+|n|} & 0\\ 0 & 1 \end{pmatrix})\chi_p(\varpi_{p,1}^n)^{-1}\sum_{\scriptstyle b\in\Z_p/p^{i_p(\chi)}\Z_p \atop \scriptstyle 1+b\in\Z_p^{\times}}\chi_p(1+b\theta)^{-1}\\ &+\sum_{\scriptstyle b\in\Z_p/p^{i_p(\chi)}\Z_p \atop \scriptstyle 1+b\in\Z_p^{\times}}\phi_p( \begin{pmatrix} p^{2(i_p(\chi)-\ord_p(b))} & 0\\ 0 & 1 \end{pmatrix})\chi_p(1+b\theta)^{-1}\}\\ =&p^{-i_p(\chi)}L_p(\eta_p,1)(1-\phi_p( \begin{pmatrix} p^2 & 0\\ 0 & 1 \end{pmatrix}))=\frac{L_p(\eta_p,1)((p+1)^2-\lambda_p^2)}{p^{i_p(\chi)}\cdot p(p+1)}. \end{align*} On the other hand, Lemma 2.1 implies that the ratio of the local $L$-functions in $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})$ is equal to \[ \frac{L_p(\eta_p,1)\cdot p(p+1)}{(p+1)^2-\lambda_p^2}. \] As a result we deduce that \[ \alpha_p(f'_p,\chi_p,\gamma'_{0,p})=p^{-i_p(\chi)}L_p(\eta_p,1)^2.
\] \subsection*{(2)~The case of an inert prime $p$.} A complete set of representatives for $\Q_p^{\times}\backslash E_p^{\times}/\mO_{E_p,i}^{\times}\simeq\Z_p^{\times}\backslash \mO_{E_p}^{\times}/\mO_{E_p,i}^{\times}$ is given by \[ \{1+b\theta\mid b\in\Z_p/p^{i}\Z_p\}\sqcup\{a+\theta\mid a\in p\Z_p/p^{i}\Z_p\} \] for $i>0$. To verify the proposition for this case we need the following two lemmas, whose proofs are similar to those of Lemma 3.4 and Lemma 3.5. \begin{lem} Let $p$ be inert and $i_p(\chi)>0$.\\ (1)~We have \[ \sum_{b\in\Z_p/p^{i_p(\chi)}\Z_p}\chi_p(1+b\theta)^{-1}=-\sum_{a\in p\Z_p/p^{i_p(\chi)}\Z_p}\chi_p(a+\theta)^{-1}= \begin{cases} -\chi_p(\theta)^{-1}&(i_p(\chi)=1),\\ 0&(i_p(\chi)>1). \end{cases} \] (2)~When $i_p(\chi)>1$, \[ \sum_{\scriptstyle b\in\Z_p/p^{i_p(\chi)}\Z_p \atop \scriptstyle \ord_p(b)=k}\chi_p(1+b\theta)^{-1}= \begin{cases} 0&(k<i_p(\chi)-1),\\ -1&(k=i_p(\chi)-1),\\ 1&(k=i_p(\chi)). \end{cases} \] \end{lem} \begin{lem} For an inert prime $p$ we have \[ \vol(\Z_p^{\times}\backslash\mO_{E_p,i}^{\times})= \begin{cases} 1&(i=0),\\ p^{-i}L_p(\eta_p,1)&(i>0). \end{cases} \] \end{lem} The local integral involved in $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})$ is expressed as \[ \vol(\Z_p^{\times}\backslash\mO_{E_p,i_p(\chi)}^{\times})\int_{\Z_p^{\times}\backslash\mO_{E_p}^{\times}/\mO_{E_p,i_p(\chi)}^{\times}}\phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(s_p)\gamma'_{0,p})\chi_p(s_p)^{-1}ds_p. \] First let $i_p(\chi)=0$. This integral is evaluated to be $\vol(\Z_p^{\times}\backslash \mO_{E_p}^{\times})=1$. Let us calculate the ratio of the local L-functions occurring in $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})$.
With the Satake parameter $(\alpha_p,\alpha_p^{-1})$ of $\pi'_p$ we have \begin{align*} &\frac{L_p(\eta_p,1)L_p(\pi'_p,\Ad,1)}{\zeta_p(2)L_p(\Pi'_p,\chi_p^{-1},\frac{1}{2})}=\frac{(1+p^{-1})^{-1}(1-p^{-1})^{-1}(1-\alpha_p^2p^{-1})^{-1}(1-\alpha_p^{-2}p^{-1})^{-1}}{(1-p^{-2})^{-1}(1-\alpha_pp^{-\frac{1}{2}})^{-1}(1-\alpha_p^{-1}p^{-\frac{1}{2}})^{-1}(1+\alpha_pp^{-\frac{1}{2}})^{-1}(1+\alpha_p^{-1}p^{-\frac{1}{2}})^{-1}}\\ &=1 \end{align*} (cf.~Lemma 2.1), which implies the assertion for this case. Next let $i_p(\chi)>0$. In view of the bi-invariance by $GL_2(\Z_p)$ and the triviality on the center of $\phi_p$ we see that \[ \phi_p({\gamma'_{0,p}}^{-1}\iota_{\xi}(1+b\theta)\gamma'_{0,p})=\phi_p( \begin{pmatrix} p^{2(i_p(\chi)-\ord_p(b))} & 0\\ 0 & 1 \end{pmatrix}),\quad \phi_p({\gamma'_{0,p}}^{-1}\iota_{\xi}(a+\theta)\gamma'_{0,p})=\phi_p( \begin{pmatrix} p^{2i_p(\chi)} & 0\\ 0 & 1 \end{pmatrix}) \] for $a\in p\Z_p/p^{i_p(\chi)}\Z_p$ and $b\in\Z_p/p^{i_p(\chi)}\Z_p$. By Proposition 3.2, Lemma 3.6 and Lemma 3.7 we therefore evaluate the local integral to be $\displaystyle\frac{L_p(\eta_p,1)^2((p+1)^2-\lambda_p^2)}{p^{i_p(\chi)}\cdot p^2}$. On the other hand, recalling Lemma 2.1, we see that the ratio of the local L-functions in $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})$ is $p^{2}((p+1)^2-\lambda_p^2)^{-1}$. We then obtain the evaluation of $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})$ in the assertion. \subsection*{(3)~The case of a ramified prime $p$.} We give a complete set of representatives for $\Q_p^{\times}\backslash E_p^{\times}/\mO_{E_p,i}^{\times}$ as follows: \[ \{1+b\theta\mid b\in\Z_p/p^{i}\Z_p\}\sqcup\{ap+\theta\mid a\in \Z_p/p^{i}\Z_p\}, \] where $i\ge0$. We state the following two lemmas, whose proofs are similar to those of the corresponding two lemmas for the two cases above. \begin{lem} Let $p$ be ramified and $i_p(\chi)>0$.\\ (1)~We have \[ \sum_{b\in\Z_p/p^{i_p(\chi)}\Z_p}\chi_p(1+b\theta)^{-1}=-\sum_{a\in \Z_p/p^{i_p(\chi)}\Z_p}\chi_p(ap+\theta)^{-1}=0.
\] (2)~When $i_p(\chi)>1$, \[ \sum_{\scriptstyle b\in\Z_p/p^{i_p(\chi)}\Z_p \atop \scriptstyle \ord_p(b)=k}\chi_p(1+b\theta)^{-1}= \begin{cases} 0&(k<i_p(\chi)-1),\\ -1&(k=i_p(\chi)-1),\\ 1&(k=i_p(\chi)). \end{cases} \] \end{lem} \begin{lem} For a ramified prime $p$ we have \[ \vol(\Z_p^{\times}\backslash\mO_{E_p,i}^{\times})= \begin{cases} 1&(i=0),\\ p^{-i}=p^{-i}L_p(\eta_p,1)&(i>0). \end{cases} \] \end{lem} Suppose first that $i_p(\chi)=0$. The local integral is \[ \int_{\Z_p^{\times}\backslash\mO_{E_p}^{\times}}\phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(s_p)\gamma'_{0,p})\chi_p(s_p)^{-1}ds_p+\int_{\Z_p^{\times}\backslash\varpi_{p}\mO_{E_p}^{\times}}\phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(s_p)\gamma'_{0,p})\chi_p(s_p)^{-1}ds_p, \] where recall that $\varpi_p$ denotes a prime element of $E_p$~(cf.~Lemma 2.1 (3)). The first integral is proved to be $1$. By virtue of the elementary divisor theorem we see that the second integral is reduced to $\phi_p( \begin{pmatrix} p & 0\\ 0 & 1 \end{pmatrix})\chi_p(\varpi_{p})^{-1}$, which is equal to \[ \frac{p^{-\frac{1}{2}}}{1+p^{-1}}(\alpha_p\frac{1-p^{-1}\alpha_p^{-2}}{1-\alpha_p^{-2}}+\alpha_p^{-1}\frac{1-p^{-1}\alpha_p^2}{1-\alpha_p^2})\chi_p(\varpi_{p})^{-1}=\frac{\lambda_p\omega_p(p)}{p+1}. \] Here see Section 2.2 for $\omega_p$. Thus we have \[ \int_{\Q_p^{\times}\backslash E_p^{\times}}\phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(s_p)\gamma'_{0,p})\chi_p(s_p)^{-1}ds_p=\frac{p+1+\lambda_p\omega_p(p)}{p+1}. \] On the other hand, the ratio of the local L-functions in $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})$ is verified to be \[ \frac{(p+1)(p+1-\lambda_p\omega_p(p))}{(p+1)^2-\lambda_p^2}. \] The assertion is now obvious. Next let $i_p(\chi)>0$.
We have \[ \phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(1+b\theta)\gamma'_{0,p})=\phi_p( \begin{pmatrix} p^{2(i_p(\chi)-\ord_p(b))} & 0\\ 0 & 1 \end{pmatrix}),\quad \phi_p({\gamma'}_{0,p}^{-1}\iota_{\xi}(ap+\theta)\gamma'_{0,p})=\phi_p( \begin{pmatrix} p^{2i_p(\chi)+1} & 0\\ 0 & 1 \end{pmatrix}) \] for $a\in \Z_p/p^{i_p(\chi)}\Z_p$ and $b\in\Z_p/p^{i_p(\chi)}\Z_p$. By Proposition 3.2, Lemma 3.8 and Lemma 3.9 we evaluate the local integral to be $\displaystyle\frac{(p+1)^2-\lambda_p^2}{p^{i_p(\chi)}\cdot p(p+1)}$. The ratio of the local $L$-functions is $\displaystyle\frac{L_p(\pi'_p,\Ad,1)}{\zeta_p(2)}=\displaystyle\frac{p(p+1)}{(p+1)^2-\lambda_p^2}$. The formula for this case then follows immediately. As a result we have completed the proof of the proposition.\qed \end{pf} \subsection{Calculation at $p|d_B$.} Throughout this subsection we assume that $p|d_B$. Then $p$ is never split in $E/\Q$. Recall that we have assumed that $\chi_p$ is unramified at such $p$~(cf.~Assumption 1.2). We now show the following: \begin{prop} Recall that $r_p$ denotes the ramification index of $p$ for the quadratic extension $E/\Q$~(cf.~Section 2.4). We have \[ \alpha_p(f'_p,\chi_p,\gamma'_{0,p})= \begin{cases} r_p(1-p^{-1})&(\text{$p$ is inert, or $p$ is ramified and $\pi''_p|_{E_p^{\times}}=\chi_p$}),\\ 0&(\text{$p$ is ramified and $\pi''_p|_{E_p^{\times}}\not=\chi_p$}), \end{cases} \] where we note that $\gamma'_{0,p}=\varpi_{B,p}$~(cf.~Section 1.5). \end{prop} Our proof of this needs the following: \begin{lem} (1) \[ L_p(\pi'_p,\Ad,1)=(1-p^{-2})^{-1}. \] (2) \[ L_p(\Pi'_p,\chi_p^{-1},\frac{1}{2})= \begin{cases} (1-p^{-2})^{-1}&(\text{$p$:inert})\\ (1-\delta_p(p)\omega_p(p)p^{-1})^{-1}&(\text{$p$:ramified}) \end{cases}, \] where see Section 2.2 for $\delta_p$ and $\omega_p$. \end{lem} \begin{pf} This is a direct consequence of Lemma 2.2.\qed \end{pf} Now we are able to calculate $\alpha_p(f'_p,\chi_p,\gamma'_{0,p})$. 
We first note that we can replace $\gamma'_{0,p}$ by $1$ since $\pi''_p(\varpi_{B,p})\in\{\pm1\}$. By a direct calculation we verify \[ \int_{\Q_p^{\times}\backslash E_p^{\times}}\frac{\langle\pi''_p(s_p\gamma'_{0,p})f'_p,\pi''_p(\gamma'_{0,p})f'_p\rangle_p}{\langle f'_p,f'_p\rangle_p}\chi_p(s_p)^{-1}ds_p= \begin{cases} r_p&(\text{$p$ is inert, or $p$ is ramified and $\pi''_p|_{E_p^{\times}}=\chi_p$}),\\ 0&(\text{$p$ is ramified and $\pi''_p|_{E_p^{\times}}\not=\chi_p$}). \end{cases} \] Now note that $\pi''_p|_{E_p^{\times}}=\chi_p$ means $\delta_p=\omega_p$. As for the ratio of the local L-functions, Lemma 3.11 yields \[ \frac{L_p(\eta_p,1)L_p(\pi'_p,\Ad,1)}{\zeta_p(2)L_p(\Pi'_p,\chi_p^{-1},\frac{1}{2})}=1-p^{-1} \] unless $p$ is ramified and satisfies $\pi''_p|_{E_p^{\times}}\not=\chi_p$. These imply Proposition 3.10. \subsection{Calculation at $\infty$.} To complete the proof of Proposition 2.7 we are left with the calculation of $\alpha_{\infty}(f'_{\infty},\chi_{\infty}^{-1},\gamma'_{0,\infty})$, where $\gamma'_{0,\infty}=1$. We note that the inner product $\langle *,*\rangle_{\infty}$ is taken as $(*,*)_{\kappa}$~(cf.~Section 2.4). \begin{prop} \[ \alpha_{\infty}(f'_{\infty},\chi_{\infty}^{-1},\gamma'_{0,\infty})=\frac{\kappa+1}{4\pi}\frac{(f'_{\infty,\kappa},f'_{\infty,\kappa})_{\kappa}}{(f'_{\infty},f'_{\infty})_{\kappa}}, \] where see Section 2.4 for $f'_{\infty,\kappa}$. \end{prop} This proposition follows from two lemmas. The first one below is settled by a direct calculation: \begin{lem} \[ \int_{E_{\infty}^{\times}/\Q_{\infty}^{\times}}\phi_{\infty}(s_{\infty})\chi_{\infty}(s_{\infty})^{-1}ds_{\infty}=\frac{(f'_{\infty,\kappa},f'_{\infty,\kappa})_{\kappa}}{2(f'_{\infty},f'_{\infty})_{\kappa}}. \] \end{lem} To state the second lemma we note that the archimedean component $\pi'_{\infty}$ of $\pi'$ is the discrete series of weight $\kappa+2$~(cf.~[35,~Section 6]). 
We have the following: \begin{lem} \begin{align*} &L_{\infty}(\pi'_{\infty},\Ad,1)=2^{-(\kappa+1)}\pi^{-(\kappa+3)}(\kappa+1)!,\\ &L_{\infty}(\Pi'_{\infty},\chi_{\infty}^{-1},\frac{1}{2})=2^{-\kappa}\pi^{-(\kappa+2)}\kappa!,\\ &\zeta_{\infty}(2)=L_{\infty}(\eta_{\infty},1)=\pi^{-1}. \end{align*} \end{lem} \noindent The first two follow from Lemma 2.3 and the last one is well-known. As a result of Proposition 3.1, Proposition 3.3, Proposition 3.10 and Proposition 3.12, we have proved Proposition 2.7. \subsection*{Acknowledgement.} We are very grateful to Kimball Martin for his comments on the explicit formulas for the toral integrals in terms of the central $L$-values for $GL(2)$. Our deep gratitude is due to Ralf Schmidt for a fruitful discussion.
\section{Introduction} The unique interacting binary system AR Scorpii (AR~Sco hereafter) has been called a white dwarf ``pulsar'' because its bright electromagnetic flashes appear to be generated by the spin-down energy of its degenerate primary stellar component \citep{marsh16, stiller18, gaibor20}. The binary consists of a low-mass red dwarf star and a rapidly spinning magnetized white dwarf (WD) orbiting over a 3.56-hour period. The spectacular pulsed emission is seen over a broad range of wavelengths from radio to soft X-rays \citep{marsh16, takata18, marcote17, stanway} and it is related to the 1.95-minute spin period of the WD and its magnetic interaction with the red dwarf secondary. The pulsed emission does not appear to be dominated by accretion onto the WD \citep{marsh16} as is seen in intermediate polar-type (IP) cataclysmic variable stars (CVs). A study of the polarized emission by \citet{buckley17} showed that the pulses are consistent with synchrotron radiation coming from near the WD, although where and how the electrons are accelerated remains an interesting question. There have been several models proposed to explain the observed synchrotron pulses. For example, a direct interaction between the WD magnetic field and the secondary star or its wind has been suggested \citep{geng16, katz17}. \citet{garnavich19} identified slingshot prominences from the red dwarf star and speculated that magnetic reconnection events between the WD and secondary star fields are the source of the energetic electrons. These models require extremely high WD field strengths of more than 200~MG to generate sufficient interaction energy near the surface of the red dwarf. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{lc_example_trough.pdf} \caption{A sample of the $COS/HST$ light curve of AR~Sco showing five individual exposures (obtained over two binary orbits).
The dashed line indicates where the division between pulsed photons and trough photons was made in re-sampling the spectra.} \label{lc} \end{figure} Recently, \citet{lyutikov20} has argued that the surface magnetic field of the WD in AR~Sco must be only about 10~MG to have been able to spin-up during a period of rapid mass accretion. This field strength is similar to what is seen in IPs, suggesting that AR~Sco may be an IP in a propeller mode where the spinning WD field drives away the gas donated by the red dwarf. AE~Aqr is the only confirmed case of an IP in a propeller state, but it does not generate synchrotron pulses at the spin period of its WD. The \citet{lyutikov20} model places the site of the interaction well inside the WD magnetosphere by allowing mostly neutral gas from the secondary to fall toward the WD before it is ionized and swept away by the spinning magnetic field. They estimate that the temperature of the WD needed to photoionize the gas is at least 12000$^\circ$K. \citet{marsh16} could not directly see evidence of the WD in the AR~Sco spectra, but placed a rough limit of 9750$^\circ$K on the WD surface temperature. Here, we will attempt to test the \citet{lyutikov20} model by better constraining the WD surface temperature. We analyze the archival AR~Sco spectra obtained in 2016 by the Hubble Space Telescope's ($HST$) Cosmic Origins Spectrograph ($COS$) with the goal of separating the pulsed emission from the inter-pulse light. Reducing the synchrotron background by isolating the emission between pulses may reveal the presence of the WD. \begin{figure} \centering \includegraphics[width=\columnwidth]{pulse_trough.pdf} \caption{Comparison between the $COS/HST$ spectra obtained during the pulses and the light curve minima (troughs). The pulsed spectrum shows a blue continuum while the trough spectrum is rather red.
The emission lines are thought to arise from the irradiated face of the red dwarf and are similar in strength for both subsets of data.} \label{pulse} \end{figure} \section{Data} The far-ultraviolet (FUV) region of the spectrum is a good place to search for emission from the WD surface in cataclysmic variable stars \citep[e.g.][]{gaensicke99}. The FUV ranges between 1100~\AA\ and about 2000~\AA\ and contains the Lyman-$\alpha$ absorption feature that can be easy to spot in DA type WDs. In AR~Sco, however, there are other sources of emission that may interfere with characterizing its WD. For example, the $COS/HST$ FUV photometry published by \citet{marsh16} shows strong synchrotron pulses, just as observed in the optical (see Figure~\ref{lc}), as well as a brightness modulation over an orbit. To reduce the contribution of the pulses to the FUV spectrum, we have reanalyzed the $COS/HST$ data for AR~Sco by selecting time-tagged photons that arrived between pulses. We call the period around the minimum between pulses the ``trough''. Using the routine {\bf splittag}\footnote{https://justincely.github.io/AAS224/splittag\_tutorial.html}, we divide the 16 {\bf corrtag} exposures in the dataset into pulse-dominated phases and trough-dominated ones with the division occurring at approximately 2$\times 10^{-15}$ ${\rm erg\; cm^{-2}\; s^{-1}\; \AA^{-1}}$\ as shown in Figure~\ref{lc}. It is not possible to set a single flux value for the division between pulse and trough because, as in the optical, there is an orbital modulation in the FUV light curve. The orbital modulation \citep[e.g.,][]{gaibor20} has an amplitude of more than a magnitude at optical and FUV wavelengths and it peaks around orbital phases\footnote{Zero orbital phase, $\phi$, is defined as inferior conjunction when the red dwarf is between the Earth and the WD} of 0.4 to 0.5. The time-tagged photons were divided into 172 pulses (both primary and secondary pulses were included) and 169 troughs.
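The pulse/trough segmentation of the time-tagged light curve can be sketched with a simple flux-threshold cut; the light curve below is synthetic, and the 117~s modulation, sampling, and threshold are placeholders rather than the actual $COS$ data:

```python
import numpy as np

def split_pulse_trough(t, flux, threshold):
    """Label each light-curve sample as pulse (flux >= threshold) or trough
    (flux < threshold) and return contiguous (label, t_start, t_stop)
    intervals, e.g. for feeding to a time-filtering tool such as splittag."""
    is_pulse = flux >= threshold
    # Indices where the pulse/trough state flips
    edges = np.flatnonzero(np.diff(is_pulse.astype(int))) + 1
    bounds = np.concatenate(([0], edges, [len(t)]))
    intervals = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        label = "pulse" if is_pulse[lo] else "trough"
        intervals.append((label, t[lo], t[hi - 1]))
    return intervals

# Synthetic light curve with a 117 s pulse-like modulation, sampled at 1 s
t = np.arange(0.0, 600.0, 1.0)
flux = 2.0 + 1.5 * np.sin(2.0 * np.pi * t / 117.0)   # arbitrary flux units
intervals = split_pulse_trough(t, flux, threshold=2.0)
```

This mirrors the procedure only in spirit; in the real analysis the threshold also had to track the orbital modulation.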
These sub-exposures are typically between 20 seconds and 40 seconds in length. Each {\bf corrtag} section was converted to an extracted spectrum using the {\bf x1dcorr} routine. The individual pulsed spectra were combined using the {\bf splice}\footnote{https://www.stsci.edu/itt/review/dhb\_2013/COS/ch5\_cos\_analysis4.html} routine and the same was done for all the trough spectral pieces. The resulting average spectra are shown in Figure~\ref{pulse}. The emission lines are thought to come from the heated face of the secondary star and are of similar strength in both spectra. The pulsed spectrum has a blue continuum while the trough spectrum increases toward longer wavelengths. \section{Analysis} \subsection{The Trough Spectrum} The spectrum of the troughs shown in Figure~\ref{pulse} should provide some limits on the WD properties. However, the orbital light curve modulation seen in the optical also contributes to the FUV light. The orbital variation is likely linked to the heated face of the secondary star. At inferior conjunction the heated face is partly obscured by the secondary itself, while a half orbit later the emission is seen unimpeded. To determine how large this contribution is in the FUV, we further sub-divided the trough spectra by orbital phase. The peak of the orbital modulation occurs between orbital phase $0.4< \phi <0.5$. We re-binned the trough spectra into orbital phases with $-0.25 < \phi <0.25$ (centered on inferior conjunction) and phases $0.25 < \phi <0.75$ (centered on superior conjunction). The resulting spectra are shown in Figure~\ref{orbit}. \begin{figure} \centering \includegraphics[width=\columnwidth]{trough_orbit.pdf} \caption{Comparison between trough spectra taken around inferior and superior conjunction. For the spectra taken near superior conjunction there is a red continuum peaking at approximately 1900~\AA , corresponding to a black body temperature of 15000$^\circ$K.
The emission lines are also significantly brighter implying they originate near the irradiated face of the secondary. The spectrum around inferior conjunction is relatively flat. } \label{orbit} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{temperature.pdf} \caption{The trough spectrum around inferior conjunction compared with hydrogen-rich WD spectra from \citet{koester10} ranging in surface temperature from 11000$^\circ$K to 14000$^\circ$K. All the models assume $\log g=8.5$ and a mass for the WD of 0.8~M$_\odot$. The models have been scaled to the distance of AR~Sco. The lack of strong features in the observed spectrum shortward of 1700~\AA\ suggests that a single temperature hydrogen-rich WD is not a good match to the data. Additional sources of emission are required. But DA models with temperatures greater than 13000$^\circ$K are excluded by this analysis.} \label{temperature} \end{figure} The trough spectrum obtained around superior conjunction clearly shows more flux at the longer wavelengths than the spectrum around inferior conjunction. We attribute this emission to our better view of the irradiated face of the secondary star during superior conjunction. The spectrum appears to peak around 1900~\AA\ corresponding to a black body temperature of approximately 15000$^\circ$K. The trough spectrum around inferior conjunction contains the minimum contamination from the synchrotron pulses and heated face of the secondary star, and therefore it should provide the best constraint on the properties of the WD in the system. We compare the spectrum with \citet{koester10} models assuming a WD mass of 0.8~M$_\odot$ and $\log g=8.5$, scaled to the distance of AR~Sco. The choice of $\log g=8.5$ for this WD mass is consistent with the recent study by \citet{chandra20}. Different choices for the mass or gravity are possible as these are not well constrained.
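The black-body temperature quoted above for the $\sim$1900~\AA\ peak can be checked with Wien's displacement law; this is only a rough estimate, since the peak is measured in $F_\lambda$ of a composite spectrum:

```python
# Wien's displacement law: lambda_max * T = b
b_wien = 0.28977719        # Wien displacement constant in cm K
lam_peak = 1900e-8         # 1900 Angstroms expressed in cm
T_bb = b_wien / lam_peak   # ~1.5e4 K, consistent with the ~15000 K quoted
```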
Further, a significant surface magnetic field will reduce the apparent strength of absorption features due to Zeeman splitting. For example, the Lyman-$\alpha$ feature for a DA WD with surface magnetic fields over 100~MG can divide into multiple absorption lines that may be difficult to detect \citep[e.g.][]{burleigh99}. For now, we will assume the surface magnetic field is sufficiently low that component splitting is small compared with the WD line width. Therefore, the Lyman absorption equivalent widths should not be strongly impacted by Zeeman splitting \citep[e.g.][]{schmidt03}. Even at low field strengths of several MG, Zeeman splitting will tend to slightly broaden the line widths and result in underestimates of the surface temperatures based on observations of Lyman-$\alpha$. As seen in Figure~\ref{temperature}, the model spectra alone are a poor match to the observed trough spectrum. The observed continuum is rather flat with no strong features while the DA models have a deep Lyman-$\alpha$ absorption. At surface temperatures of 13000$^\circ$K and hotter, the predicted WD model fluxes exceed the trough spectrum continuum and must be ruled out for our assumed mass and surface gravity. \begin{figure} \centering \includegraphics[width=\columnwidth]{dilute_wd.pdf} \caption{The trough spectrum of AR~Sco (light grey line) compared to WD models diluted by the pulsed spectrum (green dotted line). The pulsed spectrum has been scaled by a factor of 0.2 to match the observed spectrum at 1216~\AA . Diluted WD models with temperatures of 11000$^\circ$K, 11500$^\circ$K, and 12000$^\circ$K are shown as solid lines plotted over the observed spectrum. The weak dip observed at 1600~\AA\ is well matched by the models, but the models fall below the data shortward of 1200~\AA .
(see \citet{lyutikov20}) } \label{dilute} \end{figure} The trough spectrum does show a shallow minimum around Lyman-$\alpha$, but the depth clearly does not match uncontaminated hydrogen-rich (DA) WD models. There is also a weak step in the continuum between 1600 and 1700~\AA, where there is a dip in the model spectra at temperatures less than 14000$^\circ$K. This feature results from the quasi-molecular absorption by H$_2$ \citep{koester85}, and its depth is sensitive to temperature in this regime. In fact, the presence of the 1600~\AA\ feature in the observed spectrum implies that the WD must have a temperature of less than 14000$^\circ$K as the quasi-molecular absorption is too weak to detect above this temperature. A narrow dip in the model spectra at 1400~\AA\ is due to quasi-molecular absorption by H$^+_2$. No dip is seen in the trough spectrum, but strong Si~IV emission interferes with the continuum at that wavelength. The lack of a deep Lyman-$\alpha$ feature in the trough spectrum suggests, despite our attempts to isolate it, that the light from the WD in AR~Sco remains diluted by another source. In creating the trough spectra, we were forced to include some of the fading and rising sections of the pulsed emission centered around the minima, so we expect that the contamination is partly due to residual pulsed light. We can take the spectral shape of the pulses, add them to the WD models and attempt to match the trough spectrum. The WD models nearly go to zero flux at the bottom of the Lyman-$\alpha$ line, and this provides the scale to fix the pulse contamination. The sum of the scaled pulse spectrum and WD models around 11500$^\circ$K is compared with the observed trough spectrum in Figure~\ref{dilute}. The results are encouraging and suggest that the WD temperature is 11500$^\circ\pm 500^\circ$K. This is somewhat dependent on the assumed slope of the contaminating spectrum.
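The dilution procedure can be sketched as follows: because the DA models are nearly black in the Lyman-$\alpha$ core, the observed core flux fixes the pulse-light scale directly. The spectra below are synthetic stand-ins, not the $COS$ data:

```python
import numpy as np

wave = np.linspace(1150.0, 2000.0, 400)   # wavelength grid in Angstroms

# Toy stand-ins (arbitrary units): a blue pulse continuum, and a WD model
# whose flux nearly vanishes in the Lyman-alpha core at 1216 Angstroms
pulse = 5.0 - 2.0 * (wave - 1150.0) / 850.0
wd_model = 3.0 * (1.0 - np.exp(-((wave - 1216.0) / 40.0) ** 2))

trough_obs = 0.2 * pulse + wd_model       # synthetic "observed" trough

# In the line core the WD contributes ~zero flux, so the core flux is
# attributed entirely to residual pulse light
i_core = np.argmin(np.abs(wave - 1216.0))
scale = trough_obs[i_core] / pulse[i_core]   # recovers ~0.2
diluted = scale * pulse + wd_model           # model to compare with the data
```

Here the recovered scale of 0.2 matches the pulse scaling quoted in the Figure~\ref{dilute} caption by construction; with real data the WD temperature would then be varied until `diluted` tracks the observed continuum.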
Still, the observed dip around 1600~\AA\ is well matched by the sum of a WD model around 11500$^\circ$K added to an additional source with a blue continuum. \subsection{The Pulsed Spectrum} The difference between the average pulse and trough spectra should contain pure synchrotron light with a minimum of contamination from stellar components. The difference spectrum is shown in the top panel of Figure~\ref{diff} and a strong blue continuum is apparent. The slope in $\nu F_\nu$ is relatively flat as modelled by \citet{singh20} for the low-magnetic field synchrotron source (their synchrotron-1 region). Surprisingly, there appears to be a dip in the continuum at the wavelength of Lyman-$\alpha$, when we would expect the synchrotron emission to be featureless. The broad absorption resembles the Lyman-$\alpha$ feature from the WD that we expected to find in the trough spectrum. Taking the difference between the pulsed and trough spectra should have removed the stellar components if their contributions were constant over a WD spin period. The pulsed spectrum without subtracting the trough (lower panel of Figure~\ref{diff}) also shows an obvious broad Lyman-$\alpha$ absorption around the geocoronal emission. This is puzzling since the trough spectrum is four times fainter than the pulse around 1200~\AA\ and the trough spectrum itself shows little evidence of Lyman-$\alpha$ from the WD. The strength of the Lyman-$\alpha$ feature during the pulses implies that the WD thermal emission contributes about 20\%\ to the average pulse flux around 1200~\AA . In these data, the WD must be significantly brighter during the pulses than between pulses. For \citet{koester10} models with temperatures of 18000$^\circ$K$< T_{eff}< 30000^\circ$K, the full-width at half minimum (FWHM) of the Lyman-$\alpha$ absorption is approximately linear with surface temperature for a fixed surface gravity. For this range of temperatures, we normalized the model continua and fit the absorption with a Gaussian.
This provides the relation: $$T_{eff}=37170-221.7\times W \;\;\; ,$$ where $W$ is the measured FWHM in \AA . This linear relation reproduces the model temperatures with a standard deviation of 360$^\circ$K. We then measured the FWHM of the Lyman-$\alpha$ absorption in the pulsed spectra. The fit of the Gaussian from the $HST$ data is poorly constrained due to the geocoronal emission (and some Lyman-$\alpha$ emission from AR~Sco) that prevents a clear determination of the minimum, so uncertainties are large. For the pulse spectrum $W=55\pm 4$~\AA\ corresponding to a temperature of 25000$\pm 1000^\circ$K. The difference spectrum gives a somewhat lower FWHM of $W=39\pm 5$~\AA\ corresponding to a temperature of 28500$\pm 1100^\circ$K. The width of the Lyman-$\alpha$ absorption line suggests that the WD temperature during a pulse is a factor of two larger than seen during a trough. No WD spectral features other than Lyman-$\alpha$ are seen during the pulses, supporting the assertion that surface temperatures are in excess of 15000$^\circ$K where the quasi-molecular hydrogen features are no longer significant. We also compared the \citet{koester10} WD models directly with the pulsed spectrum as shown in Figure~\ref{diff}. We assume the flux below the Lyman-$\alpha$ minimum comes from a non-thermal power-law which is added to a WD spectral model. The best results are for WD temperatures of 20000$^\circ$K (pulse) and 23000$^\circ$K (pulse minus trough). These are slightly lower than the temperatures estimated using the absorption width alone, which may mean the assumption of a Gaussian line shape was not ideal. In these models, the WD fluxes need to be reduced by a factor of 16 (pulse) and 40 (pulse minus trough). The dilution suggests that the high temperatures do not cover the full WD surface area and may be localized hot regions or hot magnetic polar caps. Because we do not have a good picture of their temperature structure or extent, we will refer to these regions as ``hotspots''.
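The linear relation and the two measured widths give the quoted temperatures directly:

```python
def teff_from_fwhm(width_A):
    """Lyman-alpha FWHM (Angstroms) to T_eff (K), using the linear fit
    to the 18000-30000 K DA model grid derived above."""
    return 37170.0 - 221.7 * width_A

T_pulse = teff_from_fwhm(55.0)   # pulse spectrum: ~25000 K
T_diff = teff_from_fwhm(39.0)    # pulse-minus-trough spectrum: ~28500 K
```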
The presence of a clear, single, Lyman-$\alpha$ absorption that appears to be symmetric about 1215~\AA\ implies that the surface magnetic field on the WD is $\lesssim$100~MG. Zeeman splitting becomes quite apparent above this field strength as seen in AR~UMa \citep{gaensicke01}. This is a fairly conservative limit, but the geocoronal emission in the AR~Sco data could hide subtle effects of Zeeman splitting at lower field strengths. \begin{figure} \centering \includegraphics[width=\columnwidth]{wd_fit.pdf} \caption{{\bf top:} The difference between the average pulse and trough spectra. The emission lines and geocoronal features are not completely subtracted leaving the spikes. The broad absorption at Lyman-$\alpha$ is surprising because taking the difference between the pulse and trough spectra should eliminate the stellar contributions and leave only the spectrum of the pulsed emission. The solid red line is a model made from the sum of a power-law and a 23000$^\circ$K WD model with $\log g=8.5$ reduced in luminosity by a factor of 0.05. The dashed line shows the power-law added to the WD model to match the overall continuum. {\bf bottom:} The pulse spectrum with emission lines extracted. The thick vertical band indicates where the geocoronal Lyman-$\alpha$ dominates the flux. The solid red line is a model as described above, but using a 20000$^\circ$K WD model. } \label{diff} \end{figure} \subsection{The Visibility of the Hotspots} The apparent hotspots on the WD inferred from these data are seen primarily during the beat pulses and are not prominent during the inter-pulse periods. However, we expect the temperature variations on the WD to be seen at its spin period, while the pulses are observed at the beat period. The combination of beat and spin pulses was modelled by \citet{stiller18}, who showed how the interference of the beat and spin modulations shapes some of the properties of the optical light curve.
They note that constructive interference between these two signals occurs at orbital phase $0.25 < \phi < 0.35$ and at half an orbit later. Destructive interference is greatest near $\phi\sim 0.05$ and $\phi\sim 0.55$. Similar orbital phasing between these signals is seen in the FUV light curve. As a result, we would expect the hotspots to be contributing to the trough light at certain orbital phases, while during other parts of the orbit, they would coincide with the synchrotron pulses. Yet evidence for the $>$20000$^\circ$K temperatures is seen in the pulse spectrum and not in the trough spectrum. There are two reasons for the hotspots to have been predominantly visible during the pulses. First, the binary orbital sampling by $HST$ was not uniform. The observation consists of five spacecraft orbits each taking 95 minutes, with gaps caused by Earth occultations that last 45 minutes. This resulted in the data skipping binary orbital phases near $\phi\approx 0.1$ and $\phi\approx 0.6$ while phases around $\phi\approx 0.35$ and $\phi\approx 0.8$ were covered twice. So the phases when the spin and beat destructively interfere were poorly sampled and the constructive sections were overly represented in the data. By chance, this meant the beat pulses tended to contain spin peaks caused by the hotspots. Conversely, the cool regions of the WD tended to be visible during the troughs. The second bias against including the hotspots in the inter-pulse regions is that when the spin was out of phase with the beat, the troughs tended to be too bright to include in the average. As discussed in Section~2, troughs were defined as periods with fluxes fainter than about 2$\times 10^{-15}$ ${\rm erg\; cm^{-2}\; s^{-1}\; \AA^{-1}}$ , while the peaks of the spin modulation would push the light curve in the inter-pulse regions above that limit. Thus, the trough spectra were biased against including the hotspots.
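The sampling bias can be illustrated with a toy visibility calculation: five 95-minute spacecraft orbits, each with a 45-minute occultation gap, folded on the 3.56-hour binary period. The start epoch here is arbitrary, so the gap phases differ from those of the real data, but two uncovered windows in binary phase emerge, as described above:

```python
import numpy as np

P_orb = 3.56 * 60.0   # binary orbital period in minutes
P_hst = 95.0          # spacecraft orbital period in minutes
t_vis = 50.0          # visibility per spacecraft orbit (95 - 45 min occulted)

t = np.arange(0.0, 5.0 * P_hst, 0.5)   # five spacecraft orbits, 0.5 min steps
observed = (t % P_hst) < t_vis         # drop the Earth-occultation gaps
phase = (t / P_orb) % 1.0              # binary orbital phase (toy epoch)

# Mark which of 20 orbital-phase bins receive any exposure
covered = np.zeros(20, dtype=bool)
covered[(phase[observed] * 20).astype(int)] = True
```

With these numbers, 18 of the 20 phase bins are sampled and two are skipped entirely, mirroring the missed phases near $\phi\approx 0.1$ and $\phi\approx 0.6$ in the actual observation.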
The Lomb-Scargle power spectrum of the FUV light curve \citep{marsh16} shows that there is almost no power at the fundamental WD spin frequency, but the very strong peak at twice the spin frequency implies that the temperature of the two hotspots is very similar. From the power spectrum, the average amplitude of the spin peaks is about 20\%\ to 30\%\ of the primary beat pulse amplitudes. This is consistent with what we see in the spectra where the Lyman-$\alpha$ depth suggests the hotspots add 20\%\ to the pulse flux. In the optical, there is little contribution of the hotspots to the observed spin modulation. Extrapolating a 23000$^\circ$K WD model at the distance of AR~Sco and diluted by a factor of 40 implies a $SDSS-r$-band brightness for the hotspot contribution of only $\approx 21$~mag. The $SDSS-r$ brightness of AR~Sco ranges between 14~mag and 16~mag \citep{gaibor20}. Thus, the amplitude of the hotspot modulation in the optical would amount to a few percent of the total flux and be lost in the other variations in the system. Instead, the linear polarization at the spin frequency observed by \citet{potter18} implies some non-thermal source of emission is important to the spin modulation at optical wavelengths. \subsection{Spin Color Variability} The spectral changes of the WD over the expected temperature range are particularly dramatic in the FUV. For the relatively cool 11500$^\circ$K WD, the quasi-molecular absorption severely decreases the flux short-ward of 1600~\AA\ (see Figure~\ref{temperature}). In contrast, the $>$20000$^\circ$K spectrum is quite blue and unaffected by quasi-molecules. Splitting the FUV spectra into two bands divided at 1600~\AA, we predict a large color modulation as the WD rotates. For the AR~Sco FUV data, we construct a blue bandpass running between 1150~\AA\ and 1500~\AA\ and a red bandpass between 1600~\AA\ and 2000~\AA .
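Forming a band color from mean fluxes is straightforward to sketch; the two toy continua below (one blue, pulse-like, one red, trough-like) are placeholders for the real spectra:

```python
import numpy as np

def band_mag(wave, flux, lo, hi):
    """Instrumental magnitude from the mean flux inside a bandpass."""
    in_band = (wave >= lo) & (wave <= hi)
    return -2.5 * np.log10(flux[in_band].mean())

def fuv_color(wave, flux):
    """Blue (1150-1500 A) magnitude minus red (1600-2000 A) magnitude."""
    return (band_mag(wave, flux, 1150.0, 1500.0)
            - band_mag(wave, flux, 1600.0, 2000.0))

wave = np.linspace(1150.0, 2000.0, 500)
blue_spec = 1.0 + (2000.0 - wave) / 850.0   # flux rising to short wavelengths
red_spec = 1.0 + (wave - 1150.0) / 850.0    # flux rising to long wavelengths

c_blue = fuv_color(wave, blue_spec)   # more negative color = bluer
c_red = fuv_color(wave, red_spec)
```

The periodogram of such a color time series can then be computed exactly as for the flux light curve.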
We convert the average flux in each bandpass to a magnitude and construct a color in the standard convention by subtracting the red magnitude from the blue. The resulting color curve shows a large modulation at a period of around one minute. Using the \citet{koester10} WD models scaled to the distance of AR~Sco, we can estimate the expected color modulation from the 11500$^\circ$K rotating WD with hotspots around 23000$^\circ$K. The median flux level for the $HST/COS$ light curve is 3.0$\times 10^{-15}$ ${\rm erg\; cm^{-2}\; s^{-1}\; \AA^{-1}}$\ which we add to the model spectra before calculating the colors using the same prescription as for the real data. The difference in color between the uniform WD and the WD with hotspots amounts to 0.23~mag, so we expect the color modulation of the spinning WD to have approximately this peak-to-peak amplitude. To separate the beat and spin color modulations, we calculate the color Lomb-Scargle power spectrum in Figure~\ref{color_pow} and compare it to the flux power spectrum. Twice the spin frequency, $2\omega$, shows the highest peak and this corresponds to a color amplitude of 0.30$\pm 0.02$~mag. This is fairly consistent with what we expect from the hotspot model. The very low power at the native spin frequency suggests that the double peaks viewed each cycle are nearly equal in amplitude. Double the beat frequency, $2(\omega -\Omega)$, also has significant power corresponding to an amplitude of 0.20$\pm 0.02$~mag. This color modulation may correspond to variations in the FUV power-law slope of the synchrotron emission during both the primary and secondary beat pulses. \section{Discussion} \subsection{Comparison to AE Aqr} Our analysis suggests that during the pulsed emission, the WD in AR~Sco has a surface temperature above 20000$^\circ$K, while during the inter-pulse segments the surface temperature is around 11500$^\circ$K.
This can be explained with a hotspot near the magnetic poles combined with a cooler equatorial band. Accreting magnetic WDs such as in the prototype AM~Her have been observed to have hotspots at their magnetic poles \citep{gaensicke06}. Hotspots also explain the fast brightness variations in the propeller-state IP AE~Aqr \citep{eracleous94}. The 33~s UV variability in AE~Aqr is the consequence of temperature variations on the surface of the spinning WD in the system. The spectrum of the pulse peaks shows a broad Lyman-$\alpha$ absorption and a continuum consistent with a WD temperature of 26000$^\circ$K. \citet{eracleous94} found an excellent fit to the continuum variability by assuming a minimum temperature of 14000$^\circ$K. These are very close to the range of temperatures we have estimated for AR~Sco. In AE~Aqr, the UV light curve on short time-scales is completely dominated by the temperature variations on the WD. For AR~Sco, the UV variations result from the combination of the WD surface hotspots and non-thermal pulsed emission. The WD temperature dipole variations contribute 20\%\ to the pulses around 1200~\AA\ and a decreasing fraction at longer wavelengths. As discussed in the previous section, there are times when the hotspots are out of phase with the pulses, but the gaps in the existing $COS/HST$ data and the way we extracted the troughs tended to avoid times when the poles would be more cleanly separable from the pulses. The temperature structure on the WD in AE~Aqr is similar to what we expect in AR~Sco. This color change on the spin period should be directly visible in AE~Aqr since it does not have a large beat pulse confusing the spin variations. Indeed, \citet{eracleous94} divided the FUV spectra into several wavelength bins and found the highest amplitude for the spin modulation in the FUV was at the 1340~\AA\ and 1450~\AA\ bins and lowest at 2000~\AA . From their plots we find a color amplitude of between 0.2 and 0.4~mag.
In the previous section we showed a color amplitude of 0.30~mag for AR~Sco, in excellent agreement with that for AE~Aqr. For AE~Aqr, the FUV flux modulation at the spin period shows two unequal amplitude peaks suggesting that the magnetic axis and viewing angle conspire to dim our view of one of the hotspots \citep{eracleous94}. In contrast, the FUV flux power spectrum for AR~Sco implies that the spin modulation is very symmetric so that the magnetic axis is nearly perpendicular to the spin axis and that the hotspots in AR~Sco are very close to the WD equator. \begin{figure} \centering \includegraphics[width=\columnwidth]{color_pow.pdf} \caption{{\bf top:} The power spectrum of the color curve created by dividing the FUV spectra into a short wavelength band (1150-1500~\AA ) and a long wavelength band (1600-2000~\AA ). The strongest peak corresponds to twice the spin frequency and it has an amplitude of 0.30~mag. {\bf bottom:} The power spectrum of the total FUV flux. The weak peaks near each of the major peaks result from the way $HST$ sampled the light curve. } \label{color_pow} \end{figure} \subsection{Origin of the Hotspots} Some isolated magnetic WDs show low-amplitude oscillations at their spin period due to spots on their surface \citep[e.g.][]{hoard18}. It has been suggested that brightness modulations can be caused by strong magnetic fields or localized abundance enhancements that result in surface temperature variations \citep{holberg11}. While these modulations are only a few percent in the optical, the temperature variations may have a larger impact in the FUV. In mass-transfer binary systems, such as AE~Aqr, direct accretion on to the magnetic poles is also a possible cause of WD hotspots. \citet{marsh16} constrained the WD accretion rate in AR~Sco to be less than $1.3\times 10^{-11}$~M$_\odot$~yr$^{-1}$ and pointed out that accretion near that rate would result in broad emission lines distinct from the narrow lines observed on the secondary star. 
Accretion at a lower rate has not been ruled out and could be a source of some heating. Because of the unique nature of AR~Sco, we present two new models to explain the apparent temperature gradients on its WD. \subsubsection{Hot spots on the WD due to synchrotron heating?} \label{sc:sh} \citet{potter18} described a model in which relativistic electrons following the field lines produce synchrotron radiation beamed in the direction of the WD. The opening angle for the beam was quite large, so that only a small fraction of the energy would irradiate the surface of a WD at its magnetic poles. Assuming equilibrium, the thermal energy emitted by the hotspots should match the synchrotron energy absorbed at the poles. In the previous section, we estimated that at 23000$^\circ$K, a hotspot would cover 2.5\%\ of the WD surface. This hotspot is above the typical surface temperature of 11500$^\circ$K and generates an excess luminosity of 1.7$\times 10^{30}$ erg~s$^{-1}$ that must be replenished through heating. \citet{marsh16} estimated that the synchrotron production in AR~Sco is $L_{syn}=1.3\times 10^{32}\;$~erg~s$^{-1}$, meaning that only 1\%\ of the generated energy needs to be absorbed at a magnetic pole to maintain the hotspot temperature. \citet{bl2020} has modelled the particle motions in the WD magnetosphere and included effects of radiative damping and adiabatic mirroring. These results suggest that synchrotron emission from energetic electrons approaching the magnetic poles is unlikely to produce a localized heated region on the WD. As non-thermal electrons move from their acceleration zone along magnetic field lines towards the poles, they increase their pitch angle $\alpha$ (see \citet{kpa15}). As a result, at the mirror point, where the emissivity is maximal, the emissivity diagram is very wide, predominantly in the direction normal to the magnetic field lines. Thus, this thin cone of synchrotron emission will mostly miss the WD surface.
For some combinations of initial pitch angles and lepton Lorentz factors, the emission cone will touch the WD surface, but it would irradiate nearly the entire WD hemisphere. Thus, emission from energetic electrons trapped in the WD magnetosphere is unlikely to be the source of localized heating on the WD surface. \subsubsection{Baryonic Bombardment} \label{sc:hb} Another possible source of heating of the polar caps may be the leakage of protons trapped on closed field lines of the WD's magnetosphere. Trapping occurs through turbulent scattering, also known as radial diffusion in the case of the Earth's magnetosphere. One of the most important processes will be magnetic reflection due to the conservation of the first adiabatic invariant. As a result, only a small fraction of baryons can reach the WD surface, $\delta \approx 1/8 (B_{WD}/B_{A})^{-1} \approx 1/8 (R_{WD}/R_{A})^3 \approx 10^{-6}$, for each cycle of magnetospheric oscillations. In the model suggested by \citet{lyutikov20} the matter flow from the red dwarf secondary interacts with the WD magnetosphere at a radius of about $R_A\approx2\times10^{10}$~cm. At this point electrons and baryons are accelerated in magnetic reconnection events. As electrons propagate within the magnetosphere of the WD, they experience strong radiative cooling, losing their energy to synchrotron emission. As baryons do not emit efficiently, their energy is conserved. The captured baryons will oscillate on the magnetic field lines between the magnetic poles, forming analogs of Earth's van Allen radiation belts. Scattering on turbulent fluctuations will lead to a changing pitch angle. After approximately $N\sim 10^3$ oscillations they will be scattered into the loss cone and will hit the WD.
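Order-of-magnitude checks on these numbers can be made with a few lines, taking an assumed WD radius of $7\times10^{8}$~cm (appropriate for $\sim$0.8~M$_\odot$ but not stated in the text); both the hotspot excess luminosity of the synchrotron-heating discussion and the per-cycle loss-cone fraction come out within a factor of a few of the quoted values:

```python
import math

sigma_sb = 5.670e-5   # Stefan-Boltzmann constant, erg s^-1 cm^-2 K^-4
R_wd = 7.0e8          # assumed WD radius in cm (~0.8 Msun white dwarf)

# Excess luminosity of a 23000 K hotspot covering 2.5% of the surface,
# relative to the 11500 K base temperature
f_spot = 0.025
L_excess = (f_spot * 4.0 * math.pi * R_wd**2 * sigma_sb
            * (23000.0**4 - 11500.0**4))   # ~2e30 erg/s

# Fraction of the synchrotron luminosity needed to sustain the hotspot
L_syn = 1.3e32                  # erg/s, from Marsh et al.
frac = L_excess / L_syn         # ~1-2%

# Per-cycle loss-cone fraction for trapped baryons:
# delta ~ (1/8)(R_wd / R_A)^3 for a dipole field
R_A = 2.0e10                    # interaction radius in cm
delta = (R_wd / R_A) ** 3 / 8.0   # a few times 1e-6
```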
Another process limiting the baryons' lifetime in the WD magnetosphere is the escape from the magnetosphere at a characteristic time of $t_{es}\sim 2\pi \eta/\omega$, where $\omega$ is the WD angular velocity and $\eta\sim 1$ is a factor which accounts for the probability that a non-thermal (NT) particle escapes\footnote{Even for $10^3$ oscillations the synchrotron cooling time is negligible for baryons with energy below 1~TeV.}, so the number of oscillations per baryon is $N\sim t_{es} c/\pi R_A$. The polar cap heating rate can be estimated as $L_{NT}\delta N \sim \mbox{few}\times10^{30}$~erg/s, where $L_{NT}\gtrsim L_{syn}$ is the proton acceleration rate. This heating rate is close to the observed hotspot luminosity. We note that the randomization from many oscillations through the magnetosphere will result in approximately equal amounts of energy deposited on the two poles. Indeed, an appealing feature of this mechanism is that it naturally explains how the observed hotspots have such similar temperatures (see Figure~\ref{color_pow}). \subsection{Implications for theoretical models of AR Sco} \citet{lyutikov20} have proposed a mechanism to explain the observed characteristics of AR~Sco that does not require accretion on to the WD. They predict that neutral gas leaving the secondary will reach into the WD magnetosphere before being ionized and ejected through the propeller mechanism. To do this, they estimate that the WD requires a surface temperature of at least 12000$^\circ$K for sufficient ionization of the secondary's gas. Here, we find that during the time between pulses the WD surface temperature is consistent with their prediction within our range of uncertainty. However, there are several results here that may complicate the \citet{lyutikov20} interpretation. For example, the magnetic poles of the WD appear to have temperatures in excess of 20000$^\circ$K, which would ionize the gas coming from the secondary before it reaches the inner magnetosphere of the WD.
Our analysis of the trough spectrum shows that the inner face of the secondary may already be at a temperature of $\approx$15000$^\circ$K and likely to have a high ionization fraction before heading toward the WD. Indeed, optical spectra have already found HeII 4686~\AA\ emission located on the secondary star's inner face \citep{garnavich19} implying heating through irradiation or magnetic interaction. From these spectra, we cannot directly estimate the magnetic field strength on the WD surface. All we can do is note that there is no clear Zeeman splitting of the Lyman-$\alpha$ absorption which suggests that the fields are less than about 100~MG \citep{gaensicke01}. While rather high, this limit puts pressure on models where the WD magnetic field is in the 200~G range at the red dwarf. Such models require WD surface fields of $>200$~MG \citep[e.g.][]{takata18, garnavich19}. Likewise, our constraint challenges a scenario proposed in \citet{buckley17} wherein the WD field strength could be as high\footnote{The original estimate in \citet{buckley17} was $\sim$500~MG, but per their Eq.~1, this value needs to be doubled due to improved measurements of the spin-down power from subsequent studies \citep{stiller18, gaibor20}.} as $\sim$1000~MG. \section{Conclusion} In order to constrain the temperature of the WD in AR~Sco, we reanalyzed the FUV spectral data obtained with $COS$ on $HST$ in 2016. The average spectrum obtained over two binary orbits does not show clear evidence of emission from the WD in the system. We divide up the observations in time to build up an average ``trough'' spectrum from the minima between pulses. We further sub-divide the trough spectra by orbital phase to minimize emission sources that contaminate the WD signal. From the trough spectra averaged around inferior conjunction, we find plausible evidence for a quasi-molecular hydrogen absorption band and flux constraints that imply a WD surface temperature of 11500$\pm 500^\circ$K. 
The trough spectra obtained around superior conjunction show broad excess emission peaking around 1900~\AA . This emission corresponds to the peak of the orbital modulation and is likely coming from the irradiated face of the secondary star, which is best viewed around orbital phase 0.5. The spectral peak implies a hot region on the irradiated face of the secondary at about 15000$^\circ$K. An average spectrum of the pulsed emission shows a strong Lyman-$\alpha$ absorption that is not obvious between pulses. The pulsed spectrum is well fit by a power-law pulse continuum added to a WD spectrum with a temperature of 23000$\pm 3000^\circ$K. The WD flux contributes 20\%\ to the total pulsed light around 1200~\AA . We conclude that hotspots are present on the WD and visible primarily during the pulses. Magnetic WDs in polars often have hotspots due to direct accretion of gas lost by their companion stars. Some direct accretion on to the WD in AR~Sco has not been ruled out, and may be a source of the observed localized heating. Another possible heating source comes from the \citet{potter18} model, which has the synchrotron beams approximately aimed in the direction of the WD magnetic poles, suggesting that the hotspots result from absorption of a fraction of the synchrotron energy. More detailed modelling of electron emission in the WD magnetosphere \citep{bl2020}, described in Section \ref{sc:sh}, suggests that the synchrotron beams are unable to provide the desired localized heating. Here, we present a new scenario where non-thermal proton bombardment deposits energy at the WD magnetic poles. These trapped protons make many oscillations between the poles, which naturally explains the similar temperatures of the two observed hotspots. The FUV spectra do not strongly constrain the surface magnetic field on the WD. Geocoronal emission makes it difficult to detect details in the profile of the Lyman-$\alpha$ line.
From these data we can only conclude that there is no strong Zeeman splitting visible, and constrain the field to less than 100~MG. Still, this means the WD field at the surface of the red dwarf is less than about 50~G. Our results support the \citet{lyutikov20} model in that the WD temperature is sufficient to ionize gas from the red dwarf as it approaches the inner magnetosphere. In fact, the temperature of the WD hotspots may be so high that gas is significantly ionized close to the secondary. We see evidence for 15000$^\circ$K gas on the irradiated face of the secondary, implying a fairly high ionization fraction near the $L_1$ point. While this does not rule out the \citet{lyutikov20} model, it does suggest that the interaction mechanism between these two stars requires further investigation. \begin{acknowledgements} This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. These observations are associated with program GO14470, PI: Gaensicke. This work has been supported by DoE grant DE-SC0016369, NASA grant 80NSSC17K0757 and NSF grants {1903332 and 1908590}. ML would like to thank organizers and participants of the conference ``Compact White Dwarf Binaries'' for enlightening discussions. \end{acknowledgements}
\section{Introduction and motivation} The dynamics of QCD generates a mass for its fundamental quanta dynamically. For the quarks, the running mass at small momenta largely exceeds the contribution due to the Higgs mechanism. Indeed, recent lattice simulations at zero temperature of the Landau gauge quark propagator~\cite{Oliveira:2016muq,Oliveira:2018lln}, taking into account only QCD, show that the running quark mass takes values of $\sim 320$ -- $340$ MeV at zero momentum for current quark masses below 10 MeV. However, in a medium where the fundamental fields feel the effects of temperature and density, as occurs in heavy ion collisions or inside dense stars, the properties observed at zero temperature are changed. At sufficiently high temperature quarks and gluons become deconfined and a chiral phase transition is also expected. For example, lattice simulations of the pure SU(3) gauge theory show that the longitudinal component of the Landau gauge gluon propagator at finite temperature is sensitive to the deconfinement phase transition~\cite{Silva:2013maa}. Moreover, due to the breaking of the center symmetry, the propagators in the various $Z_3$ sectors differ above $T_c$~\cite{Silva:2016onh}, and the differences can be used to identify the transition to the deconfined regime. The sign problem prevents lattice simulations from covering the full range of realistic chemical potentials $\mu$ and, therefore, the quark propagator properties are less well studied there. The two-point correlation functions of the gauge sector of two-color QCD at finite density were investigated in~\cite{Hajizadeh:2017ewa}, and some low-momentum gluon screening at high densities was observed. The properties of quarks and, in particular, their propagators change with $T$ and $\mu$.
Herein, we report preliminary results of a study of the Landau gauge quark propagator in momentum space at finite temperature, for temperatures above and below the deconfinement phase transition, using quenched lattice simulations. After gauge fixing, we take into account only those configurations that belong to the $Z_3$ sector where the phase of the associated Polyakov loop vanishes. There are similar studies where the mass function was measured using Wilson fermions~\cite{Hamada:2006ra} and non-perturbatively improved Clover fermions~\cite{Hamada:2010zz}, and where the quark spectral functions were investigated~\cite{Karsch:2009tp,Kaczmarek:2012mb} relying on particular ans\"atze. Our study is performed on much larger lattice volumes and also examines the bare-quark-mass dependence of the quark propagator form factors. \section{Quark Propagator at Finite Temperature} Finite-temperature simulations break rotational invariance and, therefore, in momentum space the continuum quark propagator is described by three form factors, namely, \begin{eqnarray} S(p_t , \vec{p} ) & = &\frac{1}{ i \gamma_t \, p_t ~ \omega (p_t, \vec{p}) + i \vec{\gamma} \cdot \vec{p} ~ Z(p_t , \vec{p} ) + \sigma (p_t , \vec{p} ) } \nonumber \\ & = &\frac{ - i \gamma_t \, p_t ~ \omega (p_t, \vec{p}) - i \vec{\gamma} \cdot \vec{p} ~ Z(p_t , \vec{p} ) + \sigma (p_t , \vec{p} ) } { p^2_t ~ \omega^2 (p_t, \vec{p}) + \left( \vec{p}\cdot\vec{p} \right) ~ Z^2(p_t , \vec{p} ) + \sigma^2 (p_t , \vec{p} ) }. \label{Eq:ContQuarkProp} \end{eqnarray} In the results reported below, we use non-perturbatively improved Clover fermions~\cite{Sheikholeslami:1985ij} and consider rotated sources~\cite{Heatlie:1990kg}. Furthermore, in the analysis of the lattice data we assume that the simulations are close to the continuum and compute the $\omega (p_t, \vec{p})$, $Z (p_t, \vec{p})$ and $\sigma (p_t, \vec{p})$ form factors relying on Eq. (\ref{Eq:ContQuarkProp}).
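The rationalisation in Eq. (\ref{Eq:ContQuarkProp}) can be verified numerically with an explicit set of Euclidean gamma matrices; the sketch below (an illustration only) keeps a single spatial direction and uses arbitrary test values for $p_t\,\omega$, $|\vec{p}|\,Z$ and $\sigma$.

```python
import numpy as np

# Check that the second line of the propagator equation is the inverse of
# the operator in the first line: since gamma_1 and gamma_4 square to the
# identity and anticommute, the cross terms cancel in the product.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)        # Pauli sigma_1
g1 = np.block([[np.zeros((2, 2)), -1j * s1],          # Euclidean gamma_1
               [1j * s1, np.zeros((2, 2))]])
g4 = np.diag([1, 1, -1, -1]).astype(complex)          # Euclidean gamma_4
I = np.eye(4)

a, b, c = 0.7, 0.3, 0.4     # stand-ins for pt*omega, |p|*Z, sigma
num = -1j * g4 * a - 1j * g1 * b + c * I
den = a**2 + b**2 + c**2
S = num / den

# S must invert (i g4 a + i g1 b + c), as Eq. (1) asserts
assert np.allclose(S @ (1j * g4 * a + 1j * g1 * b + c * I), I)
print("propagator identity verified")
```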
On the lattice, given that the boundary conditions for fermions are periodic in the spatial directions and anti-periodic in time, the momenta $\vec{p}$ and $p_t$ take the discrete values \begin{equation} p_t = \frac{ 2 \, \pi}{L_t} \, \left( n + \frac{1}{2} \right) \qquad\mbox{ and }\qquad p_i = \frac{ 2 \, \pi}{L_s} \, n \ , \qquad\mbox{where }\qquad n = 0, \, 1, \, 2, \, \dots \end{equation} Further, the ``continuum'' form factor is given by $Z_c(p_t, \vec{p}) = Z(p_t, \vec{p}) / \omega (p_t, \vec{p})$ and the running quark mass by $M (p_t, \vec{p}) = \sigma(p_t, \vec{p}) / \omega(p_t, \vec{p})$. In the current work, these form factors are computed for temperatures around the critical temperature, taken to be the temperature above which the gluons become deconfined in the pure Yang-Mills theory, i.e. $T_c \approx 270$ MeV. The lattice setup used in the simulations is described in Tab.~\ref{Tab:LatSet}; each ensemble has one hundred gauge configurations. The quark propagator is computed by inverting the fermionic matrix for two point sources. The values used for $c_{sw}$ and the critical hopping parameter $\kappa_{c}$ are taken from~\cite{Luscher:1996ug}. In Tab.~\ref{Tab:LatSet} the bare quark mass is given by $m_{bare} = ( 1/ \kappa - 1 / \kappa_c ) / (2 a)$, where $a$ is taken from~\cite{Silva:2013maa}, and our simulations consider $m_{bare} \approx 10$ MeV or 50 MeV.
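The formulas above translate directly into numbers. The sketch below does this for the $\beta=6.0$ ensemble; the inverse lattice spacing $a^{-1}\approx 2$~GeV and $\kappa_c\approx 0.13520$ are assumed illustrative values (the paper takes both from the cited references), and with them the two bare masses of Tab.~\ref{Tab:LatSet} are approximately reproduced.

```python
import math

# Illustrative evaluation for the beta = 6.0 ensemble; a_inv and kappa_c
# are assumed values, not taken from this paper's tables.
a_inv = 2000.0       # MeV, assumed inverse lattice spacing at beta = 6.0
kappa_c = 0.13520    # assumed critical hopping parameter
L_t = 8              # temporal lattice extent

def matsubara(n, L_t=L_t):
    """Fermionic lattice momentum p_t = (2 pi / L_t)(n + 1/2), in MeV."""
    return (2 * math.pi / L_t) * (n + 0.5) * a_inv

def m_bare(kappa):
    """Bare quark mass m = (1/kappa - 1/kappa_c) / (2a), in MeV."""
    return (1 / kappa - 1 / kappa_c) / 2 * a_inv

print(f"lowest p_t     ~ {matsubara(0):.0f} MeV")    # = pi*T, T = a_inv/L_t
print(f"m_bare(0.1350) ~ {m_bare(0.1350):.0f} MeV")  # of order 10 MeV
print(f"m_bare(0.1342) ~ {m_bare(0.1342):.0f} MeV")  # of order 50 MeV
```

Note that the lowest fermionic Matsubara momentum equals $\pi T$, which for $L_t=8$ at this lattice spacing is already close to 800~MeV.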
\begin{table} \begin{center} \begin{tabular}{cccccc} \hline T(MeV) & $\beta$ & $L_s^3 \times L_t$ & $\kappa$ & $m_{bare}$ (MeV) & $c_{sw}$ \\ \hline 243 & 6.0000 & $64^3\times8$ & 0.1350 & 10 & 1.769\\ & & & 0.1342 & 53 & \\ \hline 260 & 6.0347 & $68^3\times8$ & 0.1351 & 11 & 1.734\\ & & & 0.1344 & 51 & \\ \hline 275 & 6.0684 & $72^3\times8$ & 0.1352 & 12 & 1.704\\ & & & 0.1345 & 54 & \\ \hline \end{tabular} \end{center} \caption{Lattice setup.} \label{Tab:LatSet} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{omega_T.pdf} \caption{Bare $\omega$, in lattice units, for the various ensembles and for the various Matsubara frequencies.} \label{fig:omega} \end{figure} \section{Results and Conclusions} \begin{figure}[t] \centering \includegraphics[width=3.5in]{Z_64x3x8_b6p0_T243MeV.pdf} \\ \vspace{-0.7cm} \includegraphics[width=3.5in]{Z_68x3x8_b6p0347_T260MeV.pdf} \\ \vspace{-0.7cm} \includegraphics[width=3.5in]{Z_72x3x8_b6p0684_T275MeV.pdf} \caption{Bare $Z_c(p_t, \vec{p})$ for the various simulations. } \label{fig:Zc} \end{figure} In order to suppress lattice effects we have applied momentum cuts as described in~\cite{Aouane:2011fv,Silva:2013maa}. For those quantities defined by ratios of lattice functions, such as $Z_c$ and $M$, we have discarded all points whose statistical error was larger than 50\%. In Fig.~\ref{fig:omega} we report the bare $\omega ( p_t , \vec{p} )$ for all Matsubara frequencies for the heaviest mass considered. For the lightest quark mass only the lowest Matsubara frequency is shown. The plot shows a general trend observed in all simulations: the function associated with the lightest bare quark mass has large fluctuations and, therefore, large statistical errors below the deconfinement phase transition, while above the deconfinement phase transition the statistical fluctuations become similar for both quark masses.
Furthermore, at sufficiently high momenta no significant difference between the functions associated with the various Matsubara frequencies is seen. In Fig.~\ref{fig:Zc} the bare $Z_c(p_t , \vec{p})$ is reported. Note that no lattice artefacts are subtracted from the data. Below $T_c$, we observe significant statistical fluctuations associated with the lightest quark mass. Above the deconfinement temperature the lattice data for the lightest quark mass also shows oscillations that deserve further study. The function $Z_c(p_t , \vec{p})$ seems to approach a constant for sufficiently large momenta. The not so good agreement between the results for the various Matsubara frequencies at high momenta is a possible indication of the breaking of the O(4) symmetry, of the finite lattice spacing and/or of finite volume effects. \begin{figure}[t] \centering \includegraphics[width=3.5in]{M_64x3x8_b6p0_T243MeV.pdf} \\ \vspace{-0.7cm} \includegraphics[width=3.5in]{M_68x3x8_b6p0347_T260MeV.pdf} \\ \vspace{-0.7cm} \includegraphics[width=3.5in]{M_72x3x8_b6p0684_T275MeV.pdf} \caption{Running quark mass $M(p_t, \vec{p})$ for the various simulations. } \label{fig:M} \end{figure} \begin{figure}[t] \centering \includegraphics[width=4in]{massas_low.pdf} \caption{$M(p_t , \vec{p}=0)$ for the smallest $p_t$ for the various ensembles. } \label{fig:massas} \end{figure} In Fig.~\ref{fig:M} we show the running mass for the various simulations. Note that no removal of the lattice artefacts is performed, which is why the data is shown only up to $p = 1.5$ GeV. For higher momenta the corrections due to the finite lattice spacing are rather large, making the raw lattice data for the running mass meaningless - see e.g. the discussion and references in~\cite{Oliveira:2018lln}. The data shows an $M(p_t, \vec{p})$ that decreases with the temperature. This effect is illustrated in Fig.~\ref{fig:massas}, where we report $M(p_t , \vec{p}=0)$ for the smallest value of $p_t$ for each of the simulations.
As we cross the deconfinement temperature, marked by the blue vertical line on the plot, the value of $M$ drops to about half of its value below $T_c$. Note, however, that $M$ is always much larger than the bare quark masses and, although the data suggest a decreasing $M(T)$, this cannot be viewed as an indication of chiral symmetry restoration. \section*{Acknowledgements} P.J.S. acknowledges the generous sponsorship by the TUM/LMU "Universe Cluster". P.J.S. also acknowledges support by Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (FCT) under contracts SFRH/BPD/40998/2007 and SFRH/BPD/109971/2015. The authors acknowledge financial support from FCT under contract with reference UID/FIS/04564/2016. The authors also acknowledge the Laboratory for Advanced Computing at University of Coimbra (http://www.uc.pt/lca) for providing access to the HPC resource Navigator. The SU(3) lattice simulations were done using the Chroma \cite{Edwards2005} and PFFT \cite{Pippig2013} libraries.
\section{Introduction}\label{A} Jan Paseka [1999, 2002, 2003] introduced the notion of {\em Hilbert module} on an involutive quantale: it is a module equipped with an {\em inner product}. This provides for an order-theoretic notion of ``inner product space'', originally intended as a generalisation of complete lattices with a duality. Recently, Pedro Resende and Elias Rodrigues [2008] applied this definition to a locale $X$ and further defined what it means for a Hilbert $X$-module to have a {\em Hilbert basis}. These Hilbert $X$-modules with Hilbert basis describe, in a module-theoretic way, the sheaves on $X$. At the same time, the present authors defined the notion of {\em (locally) principally generated module} on a quantaloid [Heymans and Stubbe, 2009]. Our aim too was to describe ``sheaves as modules'', albeit sheaves on quantaloids in the sense of [Stubbe, 2005b]. In this formulation the ordinary sheaves on a locale $X$ are described as locally principally generated $X$-modules whose locally principal elements satisfy an extra ``openness'' condition. Whereas Hilbert locale modules easily generalise to modules on involutive quantales, the principally generated quantaloid modules straightforwardly specialise to modules on involutive quantales. Thus we have two module-theoretic approaches to sheaves on involutive quantales: in this note we explain the precise relationship between them. This work can be summarised as follows: After some preliminary definitions we show in Section \ref{B} that any principally generated module on an involutive quantale comes with a {\em canonical (pre-)inner product}. In Section \ref{C} we first present the notion of Hilbert basis for modules on an involutive quantale [Resende, 2008].
After introducing a suitable notion of symmetry for such modules, termed {\em principal symmetry}, we prove that a module is principally generated and principally symmetric if and only if it admits a canonical Hilbert structure (= canonical inner product plus canonical Hilbert basis). When working over a {\em modular quantal frame} it is a fact, as we prove in Section \ref{D}, that a module bears a Hilbert structure if and only if it is principally generated and principally symmetric, in which case the given inner product is necessarily the canonical one (admitting the canonical Hilbert basis). That is to say, in this case the only possible (and thus the only relevant) Hilbert structure is the canonical one. We illustrate all this module-theory with many examples. In the final Section \ref{E} we draw some conclusions from our work. We explain all new results in this paper in a self-contained manner in the language of quantale modules, focussing on the purely order-theoretic aspects. However, in some examples, particularly those concerned with sheaf theory in one way or another, we freely use material from the references without recalling much of the details. Thus, the reader who is mainly interested in order theory can safely skip those examples; but the reader who is also interested in the applications to sheaf theory will most likely have to have a quick look at the cited papers too, insofar as the notions involved are not already familiar to her or him. \section{Canonical inner product}\label{B} We begin by recalling some definitions. Throughout this paper, $Q=(Q,\bigvee,\circ,1)$ stands for a {\em quantale}, i.e.\ a monoid in the monoidal category ${\sf Sup}$ of complete lattices and maps that preserve arbitrary suprema. 
Explicitly, a quantale $Q$ consists of a complete lattice $(Q,\bigvee)$ equipped with a binary operation $Q\times Q\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q\:(f,g)\mapsto f\circ g$ and a constant $1\in Q$ such that $$f\circ(g\circ h)=(f\circ g)\circ h\mbox{, \quad}1\circ f=f=f\circ 1\mbox{\quad and \quad}(\bigvee_{i\in I}f_i)\circ(\bigvee_{j\in J}g_j)=\bigvee_{i\in I}\bigvee_{j\in J}(f_i\circ g_j)$$ for all $f,g,h,f_i,g_j\in Q$. (Some call this a {\em unital quantale}, but since we shall not encounter ``non-unital quantales'' in this work we drop that adjective.) \begin{definition}\label{1} A map $Q\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q\:f\mapsto f^{\sf o}$ is an {\em involution}, and the pair $(Q,(-)^{\sf o})$ forms an {\em involutive quantale}, if it is order-preserving ($f\leq g\Rightarrow f^{\sf o}\leq g^{\sf o}$), involutive ($f^{\sf oo}=f$) and multiplication-reversing ($(f\circ g)^{\sf o}=g^{\sf o}\circ f^{\sf o}$). \end{definition} It follows that an involution is an isomorphism of complete lattices, and also unit-preserving ($1^{\sf o}=1$). Most often we shall simply speak of ``an involutive quantale $Q$'' and leave it understood that the involution is written as $f\mapsto f^{\sf o}$. \begin{definition}\label{1.0} An element $f\in Q$ of an involutive quantale is {\em symmetric} if $f^{\sf o}=f$. \end{definition} A symmetric idempotent element of $Q$ ($f^{\sf o}=f=f\circ f$) is sometimes called a {\em projection}. \begin{example}\label{1.1} Among the many examples of involutive quantales, we point out some of particular interest. \begin{enumerate} \item\label{1.1.1} A quantale $Q$ is commutative if and only if the identity map $1_Q\:Q\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q\:q\mapsto q$ is an involution. In particular, every {\em locale} (also called {\em frame}) $X=(X,\bigvee,\wedge,\top)$ is an involutive quantale for this trivial involution.
\item\label{1.1.2} Let $S$ be a complete lattice with a {\em duality}, i.e.\ a supremum-preserving map $d\:S\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} S^{\sf op}$ such that $d(x)=d^*(x)$ and $d(d(x))=x$ for all $x\in S$, where $d^*$ is the right adjoint to $d$ (abbreviated as $d\dashv d^*$) in the category ${\sf Ord}$ of ordered sets and order-preserving maps, explicitly, $d^*\:S^{\sf op}\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} S\:y\mapsto\bigvee\{x\in S\mid d(x)\leq^{\sf op} y\}$. The quantale $Q(S):=({\sf Sup}(S,S),\bigvee,\circ,1_S)$ has a natural involution [Mulvey and Pelletier, 1992]: for $f\in Q(S)$ put $f^{\sf o}:=d^{\sf op}\circ (f^*)^{\sf op}\circ d$ (where $f\dashv f^*$ in ${\sf Ord}$). When putting $f_{\sf o}:=d^{\sf op}\circ f^{\sf op}\circ d$, we have $f^{\sf o}\dashv f_{\sf o}$ in ${\sf Ord}$. \item\label{1.1.3} A {\em modular quantale} $Q$ is an involutive one which satisfies Freyd's modular law [Freyd and Scedrov, 1990]: $(p\circ q\wedge r)\leq p\circ(q\wedge p^{\sf o}\circ r)$ for all $p,q,r\in Q$. We follow [Resende, 2007] in speaking of a {\em quantal frame} when we mean a quantale whose underlying lattice is a frame (= locale); the term {\em modular quantal frame} then speaks for itself. It is a matter of fact that modular quantal frames are precisely the one-object locally complete distributive allegories of P. Freyd and A. Scedrov [1990]. Allegories are closely related to toposes; below we shall see that modular quantal frames in particular appear in the study of sheaves (cf.\ Theorem \ref{14} and Example \ref{20}). \item\label{1.1.4} An {\em inverse quantal frame} is a modular quantal frame $Q$ in which every element is the join of so-called {\em partial units} ($p\in Q$ is a partial unit if $p^{\sf o} p\vee pp^{\sf o}\leq 1_Q$); it suffices that the top of $Q$ is such a join. 
This definition is equivalent to the original one given in [Resende, 2007] because it is proved in that reference that inverse quantal frames arise as quotients (as frames {\em and} as involutive quantales) of quantal frames that are evidently modular. There is a correspondence up to isomorphism between inverse quantal frames and \'etale groupoids [Resende, 2007], providing a context to consider \'etendues in terms of quantales. \item\label{1.1.5} (In this and the next example we use notions that, for lack of space, we cannot recall; but we do include ample references.) A {\em quantaloid} is a ${\sf Sup}$-enriched category. If $A$ is an object of a quantaloid ${\cal Q}$, then ${\cal Q}(A,A)$ is a quantale; in particular, a quantaloid with only one object is precisely a quantale. A quantaloid ${\cal Q}$ has a direct-sum completion, which can be described as ${\sf Matr}({\cal Q})$, the quantaloid of {\em matrices with elements in ${\cal Q}$}. All definitions above can straightforwardly be generalised from quantales to quantaloids. For details, see e.g.\ [Freyd and Scedrov, 1990; Rosenthal, 1996; Stubbe, 2005a]. A small quantaloid ${\cal Q}$ is {\em Morita equivalent} to the quantale $Q:={\sf Matr}({\cal Q})({\cal Q}_0,{\cal Q}_0)$ [Mesablishvili, 2004], and it is easily seen that several properties of ${\cal Q}$ are carried over to its Morita-equivalent quantale $Q$: for example, if ${\cal Q}$ is involutive then so is $Q$. \item\label{1.1.6} For a small site $({\cal C},J)$, i.e.\ ${\cal C}$ a small category and $J$ a Grothendieck topology on ${\cal C}$, the $J$-closed relations between the representables in ${\sf Set}^{{\cal C}^{\sf op}}$ form a locally complete distributive allegory, i.e.\ a modular quantaloid ${\cal Q}$ whose hom-objects are frames [Walters, 1982; Betti and Carboni, 1983].
It is easy to verify that this small quantaloid's Morita-equivalent quantale $Q$ is a modular quantal frame, and that ${\cal Q}$ can be identified with a subquantaloid of the universal splitting of the symmetric idempotents of $Q$. \end{enumerate} \end{example} When we speak of a {\em (right) $Q$-module $M$} we mean so in the obvious way in ${\sf Sup}$. That is to say, $(M,\bigvee)$ is a complete lattice on which $Q$ acts by means of a function $M\times Q\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\:(m,f)\mapsto m\cdot f$ satisfying $$m\cdot(f\circ g)=(m\cdot f)\cdot g\mbox{, \quad}m\cdot 1=m\mbox{\quad and \quad}(\bigvee_{i\in I}m_i)\cdot(\bigvee_{j\in J}f_j)=\bigvee_{i\in I}\bigvee_{j\in J}(m_i\cdot f_j)$$ for all $m,m_i\in M$ and $f,g,f_j\in Q$. Accordingly, a function $\phi\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} N$ between two $Q$-modules is a {\em $Q$-module morphism} if $$\phi(m\cdot f)=\phi(m)\cdot f\mbox{\quad and \quad}\phi(\bigvee_{i\in I}m_i)=\bigvee_{i\in I}\phi(m_i)$$ for all $m,m_i\in M$ and $f\in Q$. We shall write ${\sf Mod}(Q)$ for the category of $Q$-modules and module morphisms. Of course $Q$ itself is a $Q$-module, with action given by multiplication in $Q$.
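The axioms above can be checked mechanically on a toy example. The following sketch (an illustration only, not part of the development) verifies the quantale laws and the trivial involution of Example \ref{1.1}(\ref{1.1.1}) on the four-element locale of subsets of a two-point set.

```python
from itertools import product

# The locale X = powerset({0,1}) as a commutative quantale:
# multiplication is meet, the unit is the top, the involution is trivial.
elems = [frozenset(s) for s in ([], [0], [1], [0, 1])]
unit = frozenset([0, 1])                 # top of the locale
mul = lambda f, g: f & g                 # f o g := f /\ g
sup = lambda fs: frozenset().union(*fs) if fs else frozenset()
inv = lambda f: f                        # trivial involution

# associativity and unit laws
assert all(mul(f, mul(g, h)) == mul(mul(f, g), h)
           for f, g, h in product(elems, repeat=3))
assert all(mul(unit, f) == f == mul(f, unit) for f in elems)
# multiplication preserves (binary) suprema in each variable
assert all(mul(sup([f, g]), h) == sup([mul(f, h), mul(g, h)])
           for f, g, h in product(elems, repeat=3))
# involution laws: involutive and multiplication-reversing
assert all(inv(inv(f)) == f for f in elems)
assert all(inv(mul(f, g)) == mul(inv(g), inv(f))
           for f, g in product(elems, repeat=2))
print("quantale axioms verified")
```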
\newcounter{saveenum} \setcounter{saveenum}{\value{enumi}} \end{enumerate} It is an {\em inner product} if it moreover satisfies \begin{enumerate} \setcounter{enumi}{\value{saveenum}} \item\label{2.3} $\<-,m\>=\<-,n\>$ implies $m=n$ \setcounter{saveenum}{\value{enumi}} \end{enumerate} and it is said to be {\em strict} if \begin{enumerate} \setcounter{enumi}{\value{saveenum}} \item\label{2.4} $\<m,m\>=0$ implies $m=0$. \end{enumerate} \end{definition} Now we shall recall some definitions from [Heymans and Stubbe, 2009], where they are given for quantaloids but which we apply here to quantales. Let $e\in Q$ be an idempotent. The fixpoints of $e\circ -\:Q\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q$ form a $Q$-module which we shall write as $Q^e$: the action of $Q$ on $Q^e$ is given by multiplication, so the inclusion $$\iota_e\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q\:f\mapsto f$$ is a module morphism. Further, if $M$ is any $Q$-module then for any $m\in M$ the map $$\tau_m\:Q\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\:f\mapsto m\cdot f$$ is a module morphism. Thus also the composite $$\xymatrix{ Q^e\ar[rd]_{\iota_e}\ar@{.>}[rr]^{\zeta_m} & & M \\ & Q\ar[ru]_{\tau_m}}$$ is a module morphism. Essentially as an application of the Yoneda Lemma for enriched categories [Kelly, 1982] we find the following characterisation. \begin{proposition}\label{3.0} Let $Q$ be a quantale, $e\in Q$ an idempotent, and $M$ a $Q$-module. There is a one-one correspondence between the fixpoints of $-\cdot e\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ and the module morphisms from $Q^e$ to $M$. \end{proposition} \noindent {\em Proof\ }: If $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ is any module morphism, then $m_{\zeta}:=\zeta(e)\in M$ satisfies $m_{\zeta}\cdot e=m_{\zeta}$; conversely, if $m\in M$ satisfies $m\cdot e=m$, then $\zeta_m\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\:f\mapsto m\cdot f$ is a module morphism. 
This is easily seen to set up a one-one correspondence. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent In particular, such a map $\zeta_m\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ between complete lattices preserves suprema; therefore it has an infima-preserving right adjoint in the category of ordered sets and order-preserving maps. However, in general the order-preserving right adjoint need not be a module morphism, i.e.\ it need not be right adjoint to $\zeta_m$ in the category ${\sf Mod}(Q)$ of $Q$-modules. \begin{definition}[Heymans and Stubbe, 2009]\label{3} Let $Q$ be a quantale and $M$ a $Q$-module. An element $m\in M$ is said to be {\em locally principal at an idempotent $e\in Q$} if $m\cdot e=m$ and $$\zeta_m\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\:f\mapsto m\cdot f$$ has a right adjoint in ${\sf Mod}(Q)$. \end{definition} \begin{proposition}\label{4} Let $Q$ be a quantale, $e\in Q$ an idempotent, and $M$ a $Q$-module. There is a one-one correspondence between $M$'s locally principal elements at $e$ and left adjoint module morphisms from $Q^e$ to $M$. \end{proposition} In what follows we shall always write $\zeta^*\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q^e$ for the right adjoint to a given module morphism $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$, whenever we know or assume it exists. Now we come to a trivial but crucial observation. \begin{proposition}\label{6.1} Let $Q$ be an involutive quantale, ${\cal E}\subseteq Q$ the set of symmetric idempotents, and $M$ a $Q$-module. The formula $$\<m,n\>_{\sf can}:=\bigvee\left\{(\zeta^*(m))^{\sf o}\circ\zeta^*(n)\ \Big|\ e\in{\cal E},\ \zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in }{\sf Mod}(Q)\right\}$$ defines a pre-inner product, called the {\em canonical pre-inner product}, on $M$. 
\end{proposition} \noindent {\em Proof\ }: For any $e\in{\cal E}$ and any left adjoint $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$, the pointwise multiplication of the composite module morphism $$\xymatrix{M\ar[r]^{\zeta^*} & Q^e\ar[r]^{\iota_e} & Q}$$ with the element $(\zeta^*(m))^{\sf o}\in Q$ gives a module morphism $$(\zeta^*(m))^{\sf o}\circ\zeta^*(-)\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q.$$ As any pointwise supremum of parallel module morphisms is again a module morphism, we find that $$\<m,-\>_{\sf can}=\bigvee\left\{(\zeta^*(m))^{\sf o}\circ\zeta^*(-)\ \Big|\ e\in{\cal E},\ \zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in }{\sf Mod}(Q)\right\}$$ is a module morphism from $M$ to $Q$. It is a triviality that the function $\<-,-\>_{\sf can}\:M\times M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q$ is symmetric. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent \begin{example}\label{6.2} We shall compute some more explicit examples in the next section, but we already include the following here. \begin{enumerate} \item\label{6.2.1} Every involutive quantale $Q$, regarded as a module over itself, has a natural inner product [Paseka, 1999]: for $f,g\in Q$ let $\<f,g\>:=f^{\sf o}\circ g$. And the canonical pre-inner product on a $Q$-module $M$ is expressed as a supremum of values of the natural inner product on $Q$. \item\label{6.2.2} Particularly for a complete lattice $S$ with duality $d\:S\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} S^{\sf op}$ we can consider the natural inner product on the involutive quantale $Q(S)$: we have for $s,t\in S$ and $f,g\in Q(S)$ that $$\<f,g\>(s)\leq t\iff g(s)\leq f_{\sf o}(t).$$ From this it is easy to verify that $\<f,g\>=0$ if and only if $f$ and $g$ are {\em disjoint}: $f(s)\leq d(g(t))$ for all $s,t\in S$. 
\end{enumerate} \end{example} \section{Canonical Hilbert basis}\label{C} We start by recalling another definition from [Heymans and Stubbe, 2009] (where it was actually stated more generally for modules on quantaloids). \begin{definition}\label{6} Let $Q$ be a quantale, ${\cal E}\subseteq Q$ any set of idempotent elements containing the unit $1$, and $M$ a $Q$-module. If, for all $m\in M$, $$m=\bigvee\left\{\zeta(\zeta^*(m))\ \Big|\ e\in{\cal E},\ \zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in } {\sf Mod}(Q)\right\}$$ then $M$ is {\em ${\cal E}$-principally generated} (which is short for: {\em generated by its elements which are locally principal at some $e\in{\cal E}$}). \end{definition} This Definition \ref{6} resembles the following notion, which was originally defined by Resende and Rodrigues [2008] for pre-Hilbert modules on a locale, but which can straightforwardly be extrapolated to pre-Hilbert modules on an involutive quantale, as [Resende, 2008] does: \begin{definition}\label{7} Let $Q$ be an involutive quantale, and $M$ a $Q$-module with pre-inner product $\<-,-\>$. If a subset $\Gamma\subseteq M$ satisfies, for all $m\in M$, $$m=\bigvee_{s\in\Gamma}s\cdot\<s,m\>$$ then it is a {\em Hilbert basis}\footnote{As also remarked in [Resende and Rodrigues, 2008], the word ``basis'' is quite deceiving: since there is no freeness condition, it would be more appropriate to speak of {\em Hilbert generators}. However, for the sake of clarity we shall adopt the terminology that was introduced in the cited references.} for $M$. \end{definition} If a $Q$-module $M$ bears a pre-inner product admitting a Hilbert basis, we speak of its {\em Hilbert structure}; unless explicitly stated otherwise we shall always write $\<-,-\>$ for the pre-inner product and $\Gamma$ for the Hilbert basis. 
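Definition \ref{7} can likewise be illustrated on a toy example: for the locale $Q$ of subsets of $\{0,1\}$, viewed as a module over itself with the natural inner product $\<f,g\>=f^{\sf o}\circ g=f\wedge g$ of Example \ref{6.2}(\ref{6.2.1}), the two atoms form a Hilbert basis. The sketch below (an illustration only) checks the Hermitian symmetry and module-morphism conditions of Definition \ref{2} together with the basis identity $m=\bigvee_{s\in\Gamma}s\cdot\<s,m\>$.

```python
from itertools import product

# Q = powerset({0,1}) with trivial involution, acting on itself;
# inner(m, n) = m /\ n is the natural inner product, and the atoms
# {0}, {1} form a Hilbert basis Gamma.
elems = [frozenset(s) for s in ([], [0], [1], [0, 1])]
mul = lambda f, g: f & g                 # multiplication = meet = action
sup = lambda fs: frozenset().union(*fs) if fs else frozenset()
inner = lambda m, n: mul(m, n)           # <m,n> = m^o o n, with m^o = m

# Hermitian symmetry: <m,n>^o = <n,m>
assert all(inner(m, n) == inner(n, m) for m, n in product(elems, repeat=2))
# <m,-> is a module morphism: compatible with the action and with suprema
assert all(inner(m, mul(n, f)) == mul(inner(m, n), f)
           for m, n, f in product(elems, repeat=3))
assert all(inner(m, sup([n1, n2])) == sup([inner(m, n1), inner(m, n2)])
           for m, n1, n2 in product(elems, repeat=3))
# the atoms form a Hilbert basis: m = \/_{s in Gamma} s . <s,m>
Gamma = [frozenset([0]), frozenset([1])]
assert all(sup([mul(s, inner(s, m)) for s in Gamma]) == m for m in elems)
print("Hilbert basis verified")
```

In line with Proposition \ref{7.1}, this inner product is non-degenerate: here $\<\top,m\>=m$, so $\<-,m\>=\<-,n\>$ forces $m=n$.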
As also pointed out in [Resende, 2008], it is trivial to check that: \begin{proposition}\label{7.1} If $Q$ is an involutive quantale, and $M$ a $Q$-module with a pre-inner product $\<-,-\>$ admitting a Hilbert basis $\Gamma$, then $\<-,-\>$ is in fact an inner product. \end{proposition} \noindent {\em Proof\ }: Given $m,n\in M$ such that $\<-,m\>=\<-,n\>$, we certainly have $\<s,m\>=\<s,n\>$ for all $s\in\Gamma$. The formula in Definition \ref{7} then allows us to compute that $$m=\bigvee_{s\in\Gamma}s\cdot\<s,m\>=\bigvee_{s\in\Gamma}s\cdot\<s,n\>=n$$ and we are done. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent Both Definitions \ref{6} and \ref{7} speak of a ``generating set'' for $Q$-modules... But already in the localic case these two definitions are different! \begin{example}\label{9} Let $X$ be a locale, view it as a quantale $(X,\bigvee,\wedge,\top)$ with identity involution. The set ${\cal E}$ of symmetric idempotents in $X$ coincides with $X$, and it is shown in [Stubbe, 2005b; Heymans and Stubbe, 2009] that an ${\cal E}$-principally generated $X$-module is the same thing as an {\em ordered sheaf} on $X$, i.e.\ an ordered object in ${\sf Sh}(X)$. On the other hand, as proved in [Resende and Rodrigues, 2008], a pre-Hilbert $X$-module with Hilbert basis is the same thing as a {\em sheaf} on $X$. \end{example} This example hints at the importance of the intrinsic {\em symmetry} in the notion of ``pre-Hilbert $Q$-module with Hilbert basis'', i.e.\ the symmetry of the involved pre-inner product. Indeed notice that Definitions \ref{2} and \ref{7} ask for a module on an {\em involutive} quantale -- without which it would simply be impossible to coherently speak of {\em symmetry} -- whereas Definition \ref{6} has no such requirement at all. To systematically explain the relation between the two definitions we must therefore develop a suitable notion of symmetry in the context of ${\cal E}$-principally generated $Q$-modules. 
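The symmetry at issue can be made tangible in a finite example. As an illustrative aside, the following Python sketch takes the involutive quantale of all binary relations on a two-element set (join = union, multiplication = relational composition, involution = converse; an assumed choice, made only for the sake of illustration) and verifies exhaustively that the natural inner product $\<f,g\>=f^{\sf o}\circ g$ of Example \ref{6.2} satisfies $\<f,g\>^{\sf o}=\<g,f\>$ and preserves suprema in its second variable.

```python
from itertools import combinations, product

# Illustrative involutive quantale: all 16 binary relations on {0,1},
# with union as join, relational composition as multiplication,
# and the converse relation as involution.
pairs = [(a, b) for a in (0, 1) for b in (0, 1)]
Q = [frozenset(c) for k in range(len(pairs) + 1) for c in combinations(pairs, k)]

def comp(r, s):                     # relational composition r ; s
    return frozenset((a, c) for (a, b) in r for (b2, c) in s if b == b2)

def conv(r):                        # involution: converse relation
    return frozenset((b, a) for (a, b) in r)

def inner(f, g):                    # natural inner product <f,g> = f^o o g
    return comp(conv(f), g)

# Symmetry: <f,g>^o = <g,f> ...
for f, g in product(Q, repeat=2):
    assert conv(inner(f, g)) == inner(g, f)
# ... and preservation of (binary) joins in the second variable.
for f, g, h in product(Q, repeat=3):
    assert inner(f, g | h) == inner(f, g) | inner(f, h)
print("symmetry and linearity verified on", len(Q), "relations")
```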
\begin{proposition}\label{10} Let $Q$ be an involutive quantale, ${\cal E}\subseteq Q$ the set of symmetric idempotents, and $M$ a $Q$-module. The following statements are equivalent: \begin{enumerate} \item for any $e\in{\cal E}$, any left adjoint $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ and any $m\in M$: $\zeta^*(m)=\<\zeta(e),m\>_{\sf can}$, \item for any $e,f\in{\cal E}$ and any left adjoints $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$, $\eta\:Q^f\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$: $\zeta^*(\eta(f))=\<\zeta(e),\eta(f)\>_{\sf can}$, \item for any $e,f\in{\cal E}$ and any left adjoints $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$, $\eta\:Q^f\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$: $\zeta^*(\eta(f))=(\eta^*(\zeta(e)))^{\sf o}$. \end{enumerate} In this case we say that $M$ is {\em ${\cal E}$-principally symmetric}. \end{proposition} \noindent {\em Proof\ }: The only non-trivial implication is ($3\Rightarrow 1$). In fact, the ``$\leq$'' in statement 1 always holds: because $$\zeta^*(m)=e\circ\zeta^*(m)=e^{\sf o}\circ\zeta^*(m)\leq(\zeta^*(\zeta(e)))^{\sf o}\circ\zeta^*(m)\leq\<\zeta(e),m\>_{\sf can}$$ where we used respectively: $\zeta^*(m)\in Q^e$; $e=e^{\sf o}$; the unit of the adjunction $\zeta\dashv\zeta^*$ to get $e\leq\zeta^*(\zeta(e))$ from which $e^{\sf o}\leq(\zeta^*(\zeta(e)))^{\sf o}$ because the involution preserves order; and finally the definition of the canonical pre-inner product. Thus, assuming statement 3 we must show that the ``$\geq$'' in statement 1 holds. 
But we can compute that, for any $f\in{\cal E}$ and any left adjoint $\eta\:Q^f\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$, $$(\eta^*(\zeta(e)))^{\sf o}\circ\eta^*(m)=\zeta^*(\eta(f))\circ\eta^*(m)=\zeta^*\circ\eta\circ\eta^*(m)\leq\zeta^*(m)$$ using respectively: the assumption; the fact that $\zeta^*(\eta(f))$ is the representing element for the $Q$-module morphism $\zeta^*\circ\eta\:Q^f\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q^e$ (cf.\ Proposition \ref{3.0}); and the counit of the adjunction $\eta\dashv\eta^*$. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent Remark that $(1\Rightarrow 2\Rightarrow 3)$ in Proposition \ref{10} holds {\em for any pre-inner product} on $M$ (but $(3\Rightarrow 1)$ does not!): that is to say, if one can prove the first or the second condition for a given pre-inner product on $M$ (not necessarily the canonical one), then it follows that $M$ is ${\cal E}$-principally symmetric. This shall be useful in the proof of Lemma \ref{18}. We can now prove a first ``comparison'' between Definitions \ref{6} and \ref{7}. \begin{theorem}\label{11} Let $Q$ be an involutive quantale, ${\cal E}\subseteq Q$ the set of symmetric idempotents, and $M$ a $Q$-module. The following are equivalent: \begin{enumerate} \item $M$ is ${\cal E}$-principally generated and ${\cal E}$-principally symmetric, \item the set $$\Gamma_{\sf can}:=\{\mbox{all elements of $M$ which are locally principal at some }e\in{\cal E}\}$$ is a Hilbert basis for the canonical pre-inner product on $M$, called the {\em canonical Hilbert basis}. \end{enumerate} In this case, it follows by Proposition \ref{7.1} that the canonical pre-inner product is an inner product; we speak of the {\em canonical Hilbert structure on $M$}. 
\end{theorem} \noindent {\em Proof\ }: (1$\Rightarrow$2) Assuming that $M$ is ${\cal E}$-principally generated we have by definition that, for any $m\in M$, $$m=\bigvee\left\{\zeta(\zeta^*(m))\ \Big|\ e\in{\cal E},\ \zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in } {\sf Mod}(Q)\right\}.$$ Assuming moreover that $M$ is ${\cal E}$-principally symmetric we can compute $$\zeta(\zeta^*(m))=\zeta(e\circ\zeta^*(m))=\zeta(e\circ\<\zeta(e),m\>_{\sf can})=\zeta(e)\cdot\<\zeta(e),m\>_{\sf can}$$ using respectively: $\zeta^*(m)\in Q^e$; the first statement in Proposition \ref{10}; and the fact that $\zeta$ is a module morphism. Replacing this in the right hand side of the first expression, we obtain $$m=\bigvee\left\{\zeta(e)\cdot\<\zeta(e),m\>_{\sf can}\ \Big|\ e\in{\cal E},\ \zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in } {\sf Mod}(Q)\right\}$$ so that, if we put $$\Gamma_{\sf can}:=\left\{\zeta(e)\ \Big|\ e\in{\cal E},\ \zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in } {\sf Mod}(Q)\right\},$$ which we know by Proposition \ref{4} indeed corresponds to the set of elements of $M$ which are locally principal at some $e\in{\cal E}$, we find precisely what we claimed. (2$\Rightarrow$1) For any $e\in{\cal E}$ and left adjoint $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$, there certainly is a module morphism $\<\zeta(e),-\>_{\sf can}\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q$. But we can compute that, for any $m\in M$, $$e\circ\<\zeta(e),m\>_{\sf can}=\<\zeta(e)\cdot e^{\sf o},m\>_{\sf can}=\<\zeta(e\circ e^{\sf o}),m\>_{\sf can}=\<\zeta(e),m\>_{\sf can}$$ using: the ``conjugate-linearity'' of $\<-,m\>_{\sf can}$; the module morphism $\zeta$; the fact that $e$ is a symmetric idempotent. Therefore, this module morphism corestricts to $\<\zeta(e),-\>_{\sf can}\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q^e$. 
We claim that it is right adjoint to $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$. Indeed, if for $q\in Q^e$ and $m\in M$ we assume that $q\leq\<\zeta(e),m\>_{\sf can}$ then we can compute $$\zeta(q)=\zeta(e\circ q)=\zeta(e)\cdot q\leq\zeta(e)\cdot\<\zeta(e),m\>_{\sf can}\leq m$$ using: $q\in Q^e$, i.e.\ $e\circ q=q$; $\zeta$ is a module morphism; the assumed inequality which is preserved by $\zeta(e)\cdot -$; and finally the hypothesis that $M$ has Hilbert basis $\Gamma_{\sf can}$. Assuming conversely that $\zeta(q)\leq m$, we can compute $$\begin{array}{rl} q & =e\circ q\leq\<\zeta(e),\zeta(e)\>_{\sf can}\circ q=\<\zeta(e),\zeta(e)\cdot q\>_{\sf can} \\[2ex] & =\<\zeta(e),\zeta(e\circ q)\>_{\sf can}=\<\zeta(e),\zeta(q)\>_{\sf can}\leq\<\zeta(e),m\>_{\sf can} \end{array}$$ using: $q\in Q^e$; the unit of $\zeta\dashv\zeta^*$ in $e=e^{\sf o}\circ e\leq(\zeta^*(\zeta(e)))^{\sf o}\circ\zeta^*(\zeta(e))\leq\<\zeta(e),\zeta(e)\>_{\sf can}$; the module morphism $\<\zeta(e),-\>_{\sf can}$; the module morphism $\zeta$; again $q\in Q^e$; and finally the assumed inequality which is preserved by the module morphism $\<\zeta(e),-\>_{\sf can}$. Hence, for any $q\in Q^e$ and $m\in M$, $$q\leq\<\zeta(e),m\>_{\sf can}\iff \zeta(q)\leq m.$$ Adjoints are unique and so we obtain that $\zeta^*(m)=\<\zeta(e),m\>_{\sf can}$ for all $m\in M$. By Proposition \ref{10} this exactly means that $M$ is ${\cal E}$-principally symmetric. 
Since we assume that $\Gamma$ is a Hilbert basis, we have that $$m=\bigvee\left\{\zeta(e)\cdot\<\zeta(e),m\>_{\sf can}\ \Big|\ e\in{\cal E},\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in }{\sf Mod}(Q)\right\}.$$ But the previous computation allows us to write $$\zeta(e)\cdot\<\zeta(e),m\>_{\sf can}=\zeta(e)\cdot\zeta^*(m)=\zeta(e\circ\zeta^*(m))=\zeta(\zeta^*(m))$$ hence we find that $$m=\bigvee\left\{\zeta(\zeta^*(m))\ \Big|\ e\in{\cal E},\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in }{\sf Mod}(Q)\right\}$$ as wanted. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent \begin{example}\label{12} We shall give some examples of $Q$-modules with Hilbert structure, and then make a comment on the {\em category} of $Q$-modules with Hilbert structure. \begin{enumerate} \item\label{12.1} Cf.\ Example \ref{6.2}--\ref{6.2.1}, $\Gamma:=\{1_Q\}$ is a Hilbert basis for the natural inner product on $Q$. More generally, if $e\in Q$ is an idempotent, then $Q^e$ is a $Q$-module with inner product $\<f,g\>:=f^{\sf o}\circ g$ admitting $\Gamma:=\{e\}$ as Hilbert basis. \item\label{12.2} Let $Q$ be the 2-element chain ${\bf 2}=\{0<1\}$ (with $\wedge$ as multiplication, trivial involution, etc.); both its elements are symmetric idempotents. Let $(A,\leq)$ be an ordered set and consider ${\sf Dwn}(A,\subseteq)$, the downclosed subsets of $A$ ordered by inclusion. This is the typical example of an ${\cal E}$-principally generated ${\bf 2}$-module [Heymans and Stubbe, 2009] and is also one of the fundamental constructions in [Resende and Rodrigues, 2008]. If $D\in{\sf Dwn}(A, \subseteq)$ is a locally principal element, then it is either the empty downset $D=\emptyset$ (locally principal at $0\in{\bf 2}$) or a principal downset $D=\down x$ for some $x\in A$ (locally principal at $1\in{\bf 2}$). 
For any $D,E\in{\sf Dwn}(A,\subseteq)$, their canonical inner product is $$\<D,E\>_{\sf can}=\left\{\begin{array}{l}1\mbox{ if }D\cap E\neq \emptyset \\ 0\mbox{ otherwise}\end{array}\right.$$ To say that ${\sf Dwn}(A,\subseteq)$ is ${\cal E}$-principally symmetric is to require that for any $x,y\in A$: $$\down x\subseteq \down y\iff \down y\subseteq\down x.$$ This makes the order $(A,\leq)$ in reality an equivalence relation $(A,\approx)$. \item\label{12.3} The localic case: Let $X$ be any locale and $S$ any set. Then $X^S$ is an $X$-module, with pointwise suprema and $(f\cdot x)(s)=f(s)\wedge x$, for any $f\in X^S$, $x\in X$ and $s\in S$. Take now an $X$-matrix $\Sigma\:S\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} S$ (= a family $(\Sigma(y,x))_{(x,y)\in S\times S}$ of elements of $X$) satisfying $$\Sigma(z,y)\wedge\Sigma(y,x)\leq\Sigma(z,x)\mbox{ \ \ and \ \ }\Sigma(x,x)\wedge\Sigma(x,y)=\Sigma(x,y)=\Sigma(x,y)\wedge\Sigma(y,y)$$ and consider the $X$-submodule ${\cal R}(\Sigma)$ of $X^S$ consisting of those functions $f\:S\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} X$ satisfying $$f(s)=\bigvee_{x\in S}\Sigma(s,x)\wedge f(x).$$ In the terminology of [Stubbe, 2005b], $\Sigma$ is a totally regular $X$-semicategory and ${\cal R}(\Sigma)$ is (up to the identification of $X$-modules with cocomplete $X$-categories [Stubbe, 2006]) the cocomplete $X$-category of (totally) regular presheaves on $\Sigma$. This is the typical example of a {\em locally principally generated $X$-module} [Heymans and Stubbe, 2009] and is one of the fundamental constructions of [Resende and Rodrigues, 2008] too. It is not too difficult to show by direct calculations, but it also follows from our further results, that ${\cal R}(\Sigma)$ is ${\cal E}$-principally symmetric if and only if $\Sigma$ is a symmetric $X$-matrix. 
Moreover, for a symmetric $X$-matrix $\Sigma$ to satisfy the above conditions is equivalent to it being an idempotent, hence the module ${\cal R}(\Sigma)$ is ${\cal E}$-principally generated and ${\cal E}$-principally symmetric if and only if $\Sigma$ is a so-called {\em projection matrix} (with elements in $X$). Our upcoming Theorem \ref{14} says that such structures coincide in turn with $X$-modules with (necessarily canonical) Hilbert structure. \item\label{12.4} The previous example is an instance of a more general situation. We write ${\sf Hilb}(Q)$ for the quantaloid whose objects are $Q$-modules with Hilbert structure and whose morphisms are module morphisms. And we write ${\sf Matr}(Q)$ for the quantaloid whose objects are sets and whose morphisms are {\em matrices with elements in $Q$}: such a matrix $\Lambda\:S\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} T$ is an indexed family $(\Lambda(y,x))_{(x,y)\in S\times T}$ of elements of $Q$. Matrices compose straightforwardly with a ``linear algebra formula'', and the identity matrix on a set $S$ has all $1$'s on the diagonal and $0$'s elsewhere. This matrix construction makes sense for any quantale (and even quantaloid), and whenever $Q$ is involutive then so is ${\sf Matr}(Q)$: the involute of a matrix is computed elementwise. Now there is an equivalence of quantaloids\footnote{[Resende, 2008] also notes the object correspondence, but not the morphism correspondence, and thus not the equivalence of these quantaloids.} $${\sf Hilb}(Q)\simeq{\sf Proj}(Q),$$ where the latter is the quantaloid obtained by splitting the symmetric idempotents in ${\sf Matr}(Q)$, i.e.\ the quantaloid of so-called {\em projection matrices} with elements in $Q$. 
Explicitly, if $\Sigma\:S\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} S$ is such a projection matrix, then $${\cal R}(\Sigma):=\{f\:S\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q\mid \forall s\in S:f(s)=\bigvee_{x\in S}\Sigma(s,x)\circ f(x)\}$$ is a $Q$-module with inner product and Hilbert basis respectively $$\<f,g\>:=\bigvee_{s\in S}(f(s))^{\sf o}\circ g(s)\mbox{ and }\Gamma:=\{f_s\:S\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q\:x\mapsto\Sigma(x,s)\mid s\in S\}.$$ This object correspondence $\Sigma\mapsto{\cal R}(\Sigma)$ extends to a ${\sf Sup}$-functor from ${\sf Proj}(Q)$ to ${\sf Hilb}(Q)$: it is the restriction to symmetric idempotent matrices of the embedding of the Cauchy completion of $Q$ qua one-object ${\sf Sup}$-category -- i.e.\ the quantaloid obtained by splitting {\em all} idempotents of ${\sf Matr}(Q)$ -- into ${\sf Mod}(Q)$. Conversely, a module $M$ with inner product $\<-,-\>$ and Hilbert basis $\Gamma$ obviously determines a projection matrix $\Sigma\:\Gamma\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$}\Gamma$ with elements $\Sigma(s,t):=\<s,t\>$; this easily extends to a ${\sf Sup}$-functor from ${\sf Hilb}(Q)$ to ${\sf Proj}(Q)$. These two functors set up the equivalence. \item\label{12.5} A notable consequence of the previous example is the existence of an involution on ${\sf Hilb}(Q)$, induced by the obvious involution on ${\sf Proj}(Q)$: the involute of a morphism $\phi\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} N$ in ${\sf Hilb}(Q)$ is the unique module morphism $\phi^{\sf o}\:N\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ characterised by $$\<\phi(s),t\>=\<s,\phi^{\sf o}(t)\>$$ for all basis elements $s$ of $M$ and $t$ of $N$. \end{enumerate} \end{example} In the localic case (cf.\ the example above) we can moreover prove an alternative formulation of the symmetry condition in Proposition \ref{10}: an ``openness'' condition formulated on the principal elements. In the next example we recall and explain this. \begin{example}\label{13} Let $X$ be a locale. 
Every element $u\in X$ is a symmetric idempotent, and the open sublocale $\down u\subseteq X$ is precisely the $X$-module of fixpoints of $u\wedge-\:X\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} X$. If $M$ is an $X$-module for which each left adjoint $\zeta\:\down u\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ is {\em open}, in the sense that $$\mbox{for all $x\leq u$ and $m\in M$: }\zeta(x\wedge\zeta^*(m))=\zeta(x)\wedge m,$$ then it is ${\cal E}$-principally symmetric. The converse also holds, provided that $M$ is ${\cal E}$-principally generated, in which case $M$ is an {\em \'etale $X$-module} in the terminology of [Heymans and Stubbe, 2009]. \end{example} \noindent {\em Proof\ }: Let $\zeta\:\down u\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ and $\eta\:\down v\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ be left adjoints, suppose that $\zeta$ is open: with $x:=u$ and $m:=\eta(v)$ in the above formula, it follows that $$\zeta(\zeta^*(\eta(v)))=\zeta(u)\wedge\eta(v).$$ Applying $\eta^*$ (which preserves infima) gives $$(\eta^*\circ\zeta)(\zeta^*(\eta(v)))=\eta^*(\zeta(u))\wedge\eta^*(\eta(v)).$$ The right hand side equals $\eta^*(\zeta(u))$ because $\eta^*(\eta(v))=v$ (the adjunction $\eta\dashv\eta^*$ splits). The left hand side equals $\eta^*(\zeta(u))\wedge\zeta^*(\eta(v))$, because the $X$-module morphism $\eta^*\circ\zeta\:\down u\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$}\down v$ is represented by $\eta^*(\zeta(u))\leq u\wedge v$. Thus we get $$\eta^*(\zeta(u))\wedge\zeta^*(\eta(v))=\eta^*(\zeta(u))\mbox{, or in other words }\eta^*(\zeta(u))\leq \zeta^*(\eta(v)).$$ Going through the same argument but exchanging $\zeta$ and $\eta$ proves that $M$ is ${\cal E}$-principally symmetric. \par To prove the converse, we assume that $M$ is ${\cal E}$-principally generated. 
We showed in [Heymans and Stubbe, 2009, Prop.\ 8.2] that then necessarily $M$ is a locale and that there is a locale morphism\footnote{That locale morphism satisfies some further particular properties, which made us call it a {\em skew local homeomorphism} in that reference.} $f\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} X$ such that $m\cdot x=m\wedge f^*(x)$ for all $m\in M$ and $x\in X$. It follows easily from this characterisation that, for all $s\in\Gamma_{\sf can}$, $$M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\:m\mapsto s\wedge m$$ is an $X$-module morphism. But under the hypothesis that $M$ is ${\cal E}$-principally symmetric, we can compose the left adjoint module morphism $\down\<s,s\>_{\sf can}\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\:x\mapsto s\cdot x$ with its right adjoint $M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$}\down\<s,s\>_{\sf can}\:m\mapsto \<s,m\>_{\sf can}$ to obtain $$M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\:m\mapsto s\cdot\<s,m\>_{\sf can}.$$ We claim that these module morphisms are equal: we shall show that they coincide on elements of $\Gamma_{\sf can}$, which suffices because $\Gamma_{\sf can}$ is a Hilbert basis. Indeed, for $r,t\in\Gamma_{\sf can}$ we can compute that $$\<r,s\wedge t\>_{\sf can}=\<r,s\>_{\sf can}\wedge\<r,t\>_{\sf can}=\<r,s\>_{\sf can}\wedge\<s,t\>_{\sf can}=\<r,s\cdot\<s,t\>_{\sf can}\>_{\sf can}.$$ (The first equality holds because $\<r,-\>_{\sf can}$ is a right adjoint, and the second equality holds because $\<r,s\>_{\sf can}\wedge\<r,t\>_{\sf can}=\<s,r\>_{\sf can}\wedge\<r,t\>_{\sf can}=\<s,r\cdot\<r,t\>_{\sf can}\>_{\sf can}\leq\<s,t\>_{\sf can}$ and similarly $\<r,s\>_{\sf can}\wedge\<s,t\>_{\sf can}\leq\<r,t\>_{\sf can}$.) Taking the supremum over all $r\in\Gamma_{\sf can}$ proves that $$s\wedge t=\bigvee_{r\in\Gamma_{\sf can}}r\cdot\<r,s\wedge t\>_{\sf can}=\bigvee_{r\in\Gamma_{\sf can}}r\cdot\<r,s\cdot\<s,t\>_{\sf can}\>_{\sf can}=s\cdot\<s,t\>_{\sf can}$$ as claimed. 
For any left adjoint $\zeta\:\down u\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ in ${\sf Mod}(X)$ we can apply the above to $s:=\zeta(u)\in\Gamma_{\sf can}$, to find that $$\zeta(\zeta^*(m))=\zeta(u)\wedge m$$ for any $m\in M$. This allows us to verify in turn that for any $x\leq u$, $$\zeta(x\wedge\zeta^*(m))=\zeta(\zeta^*(m))\cdot x=(\zeta(u)\wedge m)\cdot x=(\zeta(u)\cdot x)\wedge m=\zeta(x)\wedge m$$ as wanted. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent The above direct argument relies on elementary order theory. There is a shorter alternative, using results in the literature: an \'etale $X$-module is the same thing as a local homeomorphism into $X$ [Heymans and Stubbe, 2009, Theorem 7.12], which is the same thing as a Hilbert $X$-module [Resende and Rodrigues, 2008, Theorem 3.15], which is the same thing as an ${\cal E}$-principally generated and ${\cal E}$-principally symmetric $X$-module (by our upcoming Theorem \ref{14}). \section{(Sometimes) all Hilbert structure is canonical}\label{D} The previous section was concerned with the {\em canonical} Hilbert structure on a $Q$-module $M$: we showed that there is a {\em canonical} Hilbert basis for the {\em canonical} (pre-)inner product on $M$ if and only if $M$ is ${\cal E}$-principally generated and ${\cal E}$-principally symmetric, two natural notions based on the behaviour of certain adjunctions in ${\sf Mod}(Q)$. This section is devoted to the perhaps surprising fact that, for a certain class of quantales (containing many cases of interest), the {\em only possible} Hilbert structure is the canonical one. \begin{theorem}\label{14} Let $Q$ be a modular quantal frame, ${\cal E}\subseteq Q$ the set of symmetric idempotents, and $M$ a $Q$-module. If $M$ bears a Hilbert structure, then necessarily $M$ is ${\cal E}$-principally generated and ${\cal E}$-principally symmetric, and the involved inner product is the canonical one, which moreover is strict (and, by Theorem \ref{11}, admits the canonical Hilbert basis). 
\end{theorem} The proof of the theorem shall be given as a series of lemmas. The first one straightforwardly extrapolates a result known to [Resende and Rodrigues, 2008] in the case of modules on a locale, and appears in [Resende, 2008]. We recalled the construction of the quantaloid ${\sf Matr}(Q)$ of matrices with entries in $Q$ in Example \ref{12}--\ref{12.4}, and remarked that whenever $Q$ is involutive then so is ${\sf Matr}(Q)$. \begin{lemma}\label{15} If $Q$ is an involutive quantale and $M$ is a $Q$-module with an inner product $\<-,-\>$ admitting a Hilbert basis $\Gamma$, then the following holds for all $m,n\in M$: $$\bigvee_{s\in\Gamma}\<m,s\>\circ\<s,n\>=\<m,n\>.$$ In particular, $(\Gamma,\<-,-\>)$ is a so-called {\em projection matrix}: a symmetric idempotent in the involutive quantaloid ${\sf Matr}(Q)$. \end{lemma} \noindent {\em Proof\ }: In $\<m,n\>$, use $n=\bigvee_{s\in\Gamma}s\cdot\<s,n\>$ and apply the linearity of $\<m,-\>$. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent The following lemma refers to the notion of {\em total regularity}, which we here state in a bare-bones matrix-form, but which actually has deep connections with sheaf theory; it was introduced in the context of quantaloid-enriched categorical structures by Stubbe [2005b] and played a crucial role in [Heymans and Stubbe, 2009] too. \begin{lemma}\label{16} Let $Q$ be an involutive quantale, and $M$ a $Q$-module with an inner product $\<-,-\>$ admitting a Hilbert basis $\Gamma$. 
The following statements are equivalent: \begin{enumerate} \item\label{16.0} for all $s\in\Gamma$: $s=s\cdot\<s,s\>$, \item\label{16.1} for all $s\in\Gamma$: $s\leq s\cdot\<s,s\>$, \item\label{16.2} the projection matrix $(\Gamma,\<-,-\>)$ is {\em totally regular}, i.e.\ for all $s,t\in\Gamma$: $$\<s,t\>\circ\<t,t\>=\<s,t\>=\<s,s\>\circ\<s,t\>.$$ \end{enumerate} If $Q$ moreover satisfies $q\leq q\circ q^{\sf o}\circ q$ for every $q\in Q$, then these equivalent conditions always hold\footnote{More generally, under this condition it is true that any projection matrix with entries in $Q$, i.e.\ any symmetric idempotent in ${\sf Matr}(Q)$, is totally regular.}. \end{lemma} \noindent {\em Proof\ }: Due to the Hilbert basis, $s\cdot\<s,s\>\leq\bigvee_{t\in\Gamma}t\cdot\<t,s\>=s$ for any $s\in\Gamma$ and thus $(\ref{16.1}\Rightarrow\ref{16.0})$. To see that ($\ref{16.1}\Rightarrow\ref{16.2}$), compute for $s,t\in\Gamma$ that $\<s,t\>=\<s,t\cdot\<t,t\>\>=\<s,t\>\circ\<t,t\>$. Conversely, ($\ref{16.2}\Rightarrow\ref{16.1}$) because, fixing a $t\in\Gamma$ we have for all $s\in\Gamma$ that $\<s,t\>=\<s,t\>\circ\<t,t\>=\<s,t\cdot\<t,t\>\>$; but therefore $$t=\bigvee_{s\in\Gamma}s\cdot\<s,t\>=\bigvee_{s\in\Gamma}s\cdot\<s,t\cdot\<t,t\>\>=t\cdot\<t,t\>.$$ Now if moreover every element $q\in Q$ satisfies $q\leq q\circ q^{\sf o}\circ q$ then we can compute for $s,t\in\Gamma$ that $$\<s,t\>\leq\<s,t\>\circ\<s,t\>^{\sf o}\circ\<s,t\>=\<s,t\>\circ\<t,s\>\circ\<s,t\> \left\{\begin{array}{c}\leq\<s,s\>\circ\<s,t\>\leq\<s,t\> \\ \mbox{ } \\ \leq\<s,t\>\circ\<t,t\>\leq\<s,t\> \end{array}\right. $$ precisely as wanted in the second condition. (We used that $\<r,s\>\circ\<s,t\>\leq\<r,t\>$ for any $r,s,t\in\Gamma$, as follows trivially from the formula in Lemma \ref{15}.) 
$\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent Particularly for a modular quantale $Q$ the above result is interesting: because $q\leq q\circ q^{\sf o}\circ q$ holds as a consequence of the modular law, it follows that for every $Q$-module with Hilbert structure its Hilbert basis is totally regular. Next are two lemmas which contain the important (and less straightforward) matter. \begin{lemma}\label{17} If $Q$ is an involutive quantale, and $M$ a $Q$-module with an inner product $\<-,-\>$ admitting a Hilbert basis $\Gamma$ satisfying the equivalent conditions in Lemma \ref{16}, then for any $s\in\Gamma$ there is an adjunction $$Q^{\<s,s\>}\xymatrix@=15mm{\ar@{}[r]|{\bot}\ar@/^2mm/@<1mm>[r]^{s\cdot-} & \ar@/^2mm/@<1mm>[l]^{\<s,-\>}}M$$ in ${\sf Mod}(Q)$. Writing ${\cal E}\subseteq Q$ for the set of symmetric idempotents, such an $M$ is always ${\cal E}$-principally generated; and if $M$ is moreover ${\cal E}$-principally symmetric then $\<-,-\>$ coincides with the canonical (pre-)inner product $\<-,-\>_{\sf can}$. \end{lemma} \noindent {\em Proof\ }: For $s\in\Gamma$, compose the inclusion $Q^{\<s,s\>}\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q$ with the module morphism $s\cdot-\:Q\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ to obtain a module morphism $$\zeta_s\,\:Q^{\<s,s\>}\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M:q\mapsto s\cdot q.$$ Because we assume $s=s\cdot\<s,s\>$ it follows that $\<s,s\>\circ\<s,m\>=\<s\cdot\<s,s\>,m\>=\<s,m\>$ for any $m\in M$, and therefore the obvious module morphism $\<s,-\>\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q$ corestricts to a module morphism $$\zeta'_s\,\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} Q^{\<s,s\>}:m\mapsto\<s,m\>.$$ We shall show that $\zeta_s\dashv\zeta'_s$ in ${\sf Mod}(Q)$; in fact, it suffices to prove that this adjunction holds in the category of ordered sets and order-preserving maps. 
Thus, consider $q\in Q^{\<s,s\>}$ and $m\in M$: if $s\cdot q\leq m$ then $q=\<s,s\>\circ q=\<s,s\cdot q\>\leq\<s,m\>$; conversely, if $q\leq\<s,m\>$ then $s\cdot q\leq s\cdot\<s,m\>\leq\bigvee_{t\in\Gamma}t\cdot\<t,m\>=m$. The module $M$ is ${\cal E}$-principally generated because for any $m\in M$ we have \begin{eqnarray*} m & = & \bigvee_{s\in\Gamma}s\cdot\<s,m\> \\ & = & \bigvee_{s\in\Gamma}\zeta_s\zeta^*_s(m) \\ & \leq & \bigvee\left\{\zeta(\zeta^*(m))\ \Big|\ e\in{\cal E},\ \zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in }{\sf Mod}(Q)\right\} \\ & \leq & m. \end{eqnarray*} It follows directly from Lemma \ref{15} that $$\<m,n\>=\bigvee_{s\in\Gamma}(\zeta_s^*(m))^{\sf o}\circ\zeta_s^*(n),$$ and from the above it is clear that this is smaller than $$\<m,n\>_{\sf can}=\bigvee\left\{(\zeta^*(m))^{\sf o}\circ\zeta^*(n)\ \Big|\ e\in{\cal E}, \zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M\mbox{ left adjoint in }{\sf Mod}(Q)\right\}.$$ Now suppose that $M$ is ${\cal E}$-principally symmetric. Fixing a left adjoint $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ in ${\sf Mod}(Q)$, with $e\in{\cal E}$, we can compute for any $s\in\Gamma$ that $$\<\zeta(e),s\>=\<s,\zeta(e)\>^{\sf o}=(\zeta^*_s(\zeta(e)))^{\sf o}=\zeta^*(\zeta_s(\<s,s\>))=\zeta^*(s\cdot\<s,s\>)=\zeta^*(s);$$ the symmetry was crucially used in the third equality, and the assumption that the equivalent conditions in Lemma \ref{16} hold in the last one. But morphisms in ${\sf Mod}(Q)$ with domain $M$ are equal if they coincide on the Hilbert basis $\Gamma$, so for all $m\in M$ we have $\<\zeta(e),m\>=\zeta^*(m)$. Therefore we find that \begin{eqnarray*} (\zeta^*(m))^{\sf o}\circ\zeta^*(n) & = & \<m,\zeta(e)\>\circ\zeta^*(n)\\ & = & \<m,\zeta(e)\cdot\zeta^*(n)\>\\ & = & \<m,\zeta(\zeta^*(n))\>\\ & \leq & \<m,n\>. \end{eqnarray*} To pass from the third to the fourth line we use the counit of the adjunction $\zeta\dashv\zeta^*$. 
All this means that $\<m,n\>_{\sf can}\leq\<m,n\>$, and we are done. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent It is only in the next statement that we require $Q$ to be a modular quantal frame. \begin{lemma}\label{18} If $Q$ is a modular quantal frame, and $M$ a $Q$-module with an inner product $\<-,-\>$ admitting a Hilbert basis $\Gamma$, then $M$ is ${\cal E}$-principally symmetric (for ${\cal E}\subseteq Q$ the set of symmetric idempotents). \end{lemma} \noindent {\em Proof\ }: We shall prove that the first of the equivalent conditions in Proposition \ref{10} holds {\em for the given inner product} on $M$; as we have remarked right after the proof of that Proposition, this suffices to infer the ${\cal E}$-principal symmetry of $M$. Because here we assume $M$ to have a Hilbert basis $\Gamma$, it in fact suffices to show that, for any $e\in{\cal E}$, any left adjoint $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$ and any $s\in\Gamma$: $\zeta^*(s)=\<\zeta(e),s\>$. First remark that, with these notations, $$\zeta(e)\cdot\zeta^*(s)=\zeta(\zeta^*(s))\leq s$$ trivially holds. 
On the other hand, using all assumptions we can compute that $$\begin{array}{llll} e & = & e\wedge\zeta^*(\zeta(e)) & \mbox{(unit of $\zeta\dashv\zeta^*$)} \\[4pt] & = & \displaystyle{e\wedge\zeta^*\Big(\bigvee_{s\in\Gamma}s\cdot\<s,\zeta(e)\>\Big)} & \mbox{($\Gamma$ is a Hilbert basis)} \\[4pt] & = & \displaystyle{e\wedge\bigvee_{s\in\Gamma}\Big(\zeta^*\left(s\right)\circ\<s,\zeta(e)\>\Big)} & \mbox{($\zeta^*$ is a module morphism)} \\[4pt] & = & \displaystyle{\bigvee_{s\in\Gamma}\Big(e\wedge\zeta^*\left(s\right)\circ\<s,\zeta(e)\>\Big)} & \mbox{($Q$ is a frame)}\\[4pt] & \leq & \displaystyle{\bigvee_{s\in\Gamma}\Big(e\circ\<s,\zeta(e)\>^{\sf o}\wedge\zeta^*\left(s\right)\Big)\circ\<s,\zeta(e)\>} & \mbox{(by the modular law)}\\[4pt] & = & \displaystyle{\bigvee_{s\in\Gamma}\Big(\<\zeta(e),s\>\wedge\zeta^*\left(s\right)\Big)\circ\<s,\zeta(e)\>} & \mbox{(symmetry)} \\[4pt] & \leq & \displaystyle{\bigvee_{s\in\Gamma}\<\zeta(e),s\>\circ\<s,\zeta(e)\>} & \mbox{(trivially)} \\[4pt] & = & \displaystyle{\<\zeta(e),\zeta(e)\>}. & \mbox{(by Lemma \ref{15})} \end{array}$$ Hence, combining both the previous inequalities, $$\zeta^*(s)=e\circ\zeta^*(s)\leq\<\zeta(e),\zeta(e)\>\circ\zeta^*(s)=\<\zeta(e),\zeta(e)\cdot\zeta^*(s)\>\leq\<\zeta(e),s\>$$ and we have the ``$\leq$'' of the required equality. To see that also ``$\geq$'' holds, we first apply the modularity of $Q$ again to compute $$\begin{array}{llll} e & = & \displaystyle{\bigvee_{s\in\Gamma}\Big(e\wedge\zeta^*\left(s\right)\circ\<s,\zeta(e)\>\Big)} & \mbox{(as above)} \\[4pt] & \leq & \displaystyle{\bigvee_{s\in\Gamma}\zeta^*\left(s\right)\circ\Big(\left(\zeta^*\left(s\right)\right)^{\sf o}\circ e\wedge\<s,\zeta(e)\>\Big)} & \mbox{(by the modular law)}\\[4pt] & \leq & \displaystyle{\bigvee_{s\in\Gamma}\zeta^*\left(s\right)\circ\left(\zeta^*\left(s\right)\right)^{\sf o}}. & \mbox{(trivial)} \end{array}$$ Now we combine this with the first inequality that we proved to obtain $$\begin{array}{llll} \<\zeta(e),s\> & = & e\circ\<\zeta(e),s\> & \mbox{(trivial)} \\[4pt] & \leq & \displaystyle{\bigvee_{t\in\Gamma}\zeta^*\left(t\right)\circ\left(\zeta^*\left(t\right)\right)^{\sf o}\circ\<\zeta(e),s\>} & \mbox{(by the above)} \\[4pt] & = & \displaystyle{\bigvee_{t\in\Gamma}\zeta^*\left(t\right)\circ\<\zeta(e)\cdot\zeta^*\left(t\right),s\>} & \mbox{(``conjugate-linearity'' of inner product)} \\[4pt] & \leq & \displaystyle{\bigvee_{t\in\Gamma}\zeta^*\left(t\right)\circ\<t,s\>} & \mbox{(because $\zeta(e)\cdot\zeta^*(t)\leq t$)} \\[4pt] & = & \displaystyle{\zeta^*\Big(\bigvee_{t\in\Gamma}t\cdot\<t,s\>\Big)} & \mbox{($\zeta^*$ is a module morphism)} \\[4pt] & = & \displaystyle{\zeta^*(s)}. & \mbox{($\Gamma$ is a Hilbert basis)} \end{array}$$ and we are done. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent Relying on categorical machinery, the previous Lemma can alternatively be proved as follows: Bearing in mind Example \ref{12}--\ref{12.5}, the requirement that $\zeta^*(s)=\<\zeta(e),s\>$ at the end of the first paragraph of the proof above is equivalent to asking for $\zeta^*=\zeta^{\sf o}$ in the category ${\sf Hilb}(Q)$. But $Q$ being a modular quantal frame is equivalent to the matrix quantaloid ${\sf Matr}(Q)$ being modular, which in turn implies that the quantaloid ${\sf Proj}(Q)$ of projection matrices, obtained by splitting the symmetric idempotents in ${\sf Matr}(Q)$, is modular too. In any modular quantaloid, the right adjoint of a morphism, should it exist, is necessarily its involute: this thus holds in ${\sf Proj}(Q)$, and also in its equivalent ${\sf Hilb}(Q)$. Therefore in particular $\zeta^*=\zeta^{\sf o}$ for any left adjoint $\zeta\:Q^e\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} M$, as wanted. Lastly we have a simple lemma asserting the strictness of inner products in certain cases. 
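Before stating it, note that the condition $q\leq q\circ q^{\sf o}\circ q$ appearing in Lemma \ref{16} and again below, as well as the modular law itself, can be checked exhaustively in a small modular quantal frame. The following Python sketch does so for the quantale of binary relations on a two-element set; this choice of quantale, and the diagrammatic order of composition, are illustrative assumptions.

```python
from itertools import combinations, product

# All 16 binary relations on {0,1}: an illustrative modular quantal frame
# under union, relational composition and converse.
pairs = [(a, b) for a in (0, 1) for b in (0, 1)]
Q = [frozenset(c) for k in range(len(pairs) + 1) for c in combinations(pairs, k)]

def comp(r, s):                     # relational composition r ; s
    return frozenset((a, c) for (a, b) in r for (b2, c) in s if b == b2)

def conv(r):                        # involution: converse relation
    return frozenset((b, a) for (a, b) in r)

# The consequence q <= q o q^o o q of the modular law, used in Lemmas 16 and 19:
for q in Q:
    assert q <= comp(comp(q, conv(q)), q)

# The modular (Dedekind) law itself: x /\ (y o z) <= (x o z^o /\ y) o z,
# the shape in which it is applied in the proof of Lemma 18.
for x, y, z in product(Q, repeat=3):
    assert (x & comp(y, z)) <= comp((comp(x, conv(z)) & y), z)
print("modular law checked on all", len(Q), "relations")
```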
\begin{lemma}\label{19} Let $Q$ be an involutive quantale in which $q\leq q\circ q^{\sf o}\circ q$ holds for any $q\in Q$. If $M$ is a $Q$-module with an inner product $\<-,-\>$ admitting a Hilbert basis $\Gamma$, then this inner product is strict. \end{lemma} \noindent {\em Proof\ }: Let $q\in Q$: if $q^{\sf o}\circ q=0$ then $q\leq q\circ q^{\sf o}\circ q=q\circ0=0$ hence $q=0$. Now suppose that $\<m,m\>=0$ for an $m\in M$. The formula in Lemma \ref{15} implies that $$\<s,m\>^{\sf o}\circ\<s,m\>=\<m,s\>\circ\<s,m\>\leq\bigvee_{t\in\Gamma}\<m,t\>\circ\<t,m\>=0$$ for all $s\in\Gamma$, whence $\<s,m\>=0$ for all $s\in\Gamma$. But then $m=\bigvee_{s\in\Gamma}s\cdot\<s,m\>=0$ as required. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent Having all these lemmas, we assemble the proof of the statement at the beginning of this section. \par\medskip\noindent {\em Proof of Theorem \ref{14}\ }: Because $Q$ is by hypothesis a modular quantal frame we have by Lemma \ref{18} that $M$ is ${\cal E}$-principally symmetric. It follows from $Q$'s modularity and Lemma \ref{16} that Lemma \ref{17} applies, showing that $M$ is ${\cal E}$-principally generated. Together with the fact that $M$ is ${\cal E}$-principally symmetric this moreover entails the equality of the given inner product with the canonical one. Finally, the strictness of the (canonical) inner product is a consequence of Lemma \ref{19}. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent \begin{example}\label{20} We end with examples that refer to the category ${\sf Hilb}(Q)$ of Hilbert modules, and particularly to applications in sheaf theory. \begin{enumerate} \item\label{20.1} As in Example \ref{12}--\ref{12.4} we write ${\sf Hilb}(Q)$ for the quantaloid of $Q$-modules with Hilbert structure. For a modular quantal frame $Q$, Theorem \ref{14} allows us to consider ${\sf Hilb}(Q)$ as a full subquantaloid of ${\sf Mod}(Q)$: there is only one relevant Hilbert structure on a $Q$-module. 
Moreover, Lemma \ref{18} implies that, whenever $\phi\:M\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} N$ is a left adjoint in ${\sf Hilb}(Q)$, then $\phi\dashv\phi^{\sf o}$ (compare with Example \ref{12}--\ref{12.5}). Because in this case every symmetric idempotent in ${\sf Matr}(Q)$ is totally regular, we therefore get the equivalences of quantaloids $${\sf Hilb}(Q)\simeq{\sf Proj}(Q)\simeq{\sf Dist}^{\sf o}(Q_{\cal E})$$ where $Q_{\cal E}$ denotes the quantaloid obtained as universal splitting of the symmetric idempotents of $Q$, and ${\sf Dist}^{\sf o}(Q_{\cal E})$ is the full subquantaloid of ${\sf Dist}(Q_{\cal E})$ (= the quantaloid of $Q_{\cal E}$-enriched categories and distributors [Stubbe, 2005a]) determined by the {\em symmetric $Q_{\cal E}$-categories}. \item\label{20.2} Sheaves on sites: For a small site $({\cal C},J)$ and $Q$ the associated modular quantal frame as in Example \ref{1.1}--\ref{1.1.6}, the category ${\sf Sh}({\cal C},J)$ is equivalent to the category of $Q$-modules with canonical Hilbert structure and the left adjoint module morphisms between them: $${\sf Sh}({\cal C},J)\simeq{\sf Map}({\sf Hilb}(Q)).$$ With a bit more work this can be rephrased as equivalent quantaloids: $${\sf Rel}({\sf Sh}({\cal C},J))\simeq{\sf Dist}^{\sf o}({\cal Q})\simeq{\sf Hilb}(Q).$$ \par\noindent\textit{Sketch of the proof:} Let ${\cal Q}$ be as in Example \ref{1.1}--\ref{1.1.6}. Walters [1982] proved that ${\sf Sh}({\cal C},J)$ is equivalent to ${\sf Map}({\sf Dist}^{\sf o}({\cal Q}))$, the full subcategory of ${\sf Map}({\sf Dist}({\cal Q}))$ determined by the \textit{symmetric} ${\cal Q}$-categories. In the previous example we indicated that ${\sf Hilb}(Q)\simeq\mathsf{Proj}(Q)\simeq{\sf Dist}^{\sf o}(Q_{\cal E})$, hence it suffices to prove that ${\sf Dist}({\cal Q})\simeq{\sf Dist}(Q_{\cal E})$. 
But this follows from the fact that ${\cal Q}$, regarded as a subquantaloid of $Q_{\cal E}$, is {\em dense} in $Q_{\cal E}$: for any $X\in Q_{\cal E}$, ${\sf id}_X=\bigvee_{i\in I}f_i\circ f_i^*$ with each $f_i\:X_i\mbox{$\xymatrix@1@C=5mm{\ar@{->}[r]&}$} X$ a left adjoint with $X_i\in{\cal Q}_0$. This property is due to the modularity of $Q$ and the coreflexive idempotents ($e\leq{\sf id}_{{\sf dom}(e)}$) of ${\cal Q}$ (corresponding to closed sieves on ${\cal C}$) being suprema of the form above. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent \item\label{20.3} Sheaves on an \'etale groupoid: We understand that Resende [2008] defines a ``sheaf'' on an involutive quantale $Q$ to be a $Q$-module $M$ with Hilbert structure {\em satisfying moreover} $\bigvee\Gamma=\top_M$, and proves -- via the correspondence between \'etale groupoids and inverse quantal frames from [Resende, 2007] -- that, for an \'etale groupoid ${\cal G}$, the topos of ${\cal G}$-sheaves is equivalent to the category with as objects those ``sheaves'' on an inverse quantal frame ${\cal O}({\cal G})$ and as morphisms the left adjoint ${\cal O}({\cal G})$-module morphisms that have their involute as right adjoint (which he describes as ``direct image homomorphisms''). This may appear to be in contradiction with the examples above: sheaves (on a site) can be described as modules (on a modular quantal frame) with Hilbert structure {\em without} any further conditions. However it turns out that, when $Q$ is an inverse quantal frame (as in the main example of [Resende, 2008]), then the extra condition is anyway a {\em consequence} of the features of $Q$ (see the proof below). We further repeat from Example \ref{20}--\ref{20.1} that, because ${\cal O}({\cal G})$ is a modular quantal frame, any left adjoint in ${\sf Hilb}({\cal O}({\cal G}))$ has its involute as right adjoint (but this need not be so for involutive quantales in general, where we think this is an important extra condition). 
Consequently, by Theorem \ref{14} the topos of sheaves on an \'etale groupoid ${\cal G}$ is equivalent to ${\sf Map}({\sf Hilb}({\cal O}({\cal G})))$. \par\noindent\textit{Proof:} If $Q$ is an inverse quantal frame and $M$ a $Q$-module with inner product $\<-,-\>$ admitting a Hilbert basis $\Gamma$, we may assume without loss of generality that $\Gamma$ is {\em maximal} in the following way: $$\Gamma=\left\{s\in M\ \Big|\ \forall m\in M:s\cdot\<s,m\>\leq m\right\}.$$ If $p\in Q$ is a partial unit and $s\in\Gamma$ then $s\cdot p\in\Gamma$: because for all $m\in M$, $$(s\cdot p)\cdot\<s\cdot p,m\>=(s\cdot pp^{\sf o})\cdot\<s,m\>\leq s\cdot\<s,m\>\leq m;$$ hence certainly $s\cdot p\leq\bigvee\Gamma$. Since $\top_Q$ is by assumption the join of all partial units, this implies $s\cdot\top_Q\leq\bigvee\Gamma$, whence $$\top_M=\bigvee_{s\in\Gamma}s\cdot\<s,\top_M\>\leq\bigvee_{s\in\Gamma}s\cdot\top_Q\leq\bigvee\Gamma$$ which proves the claim. $\mbox{ }\hfill\Box$\par\vspace{1.8mm}\noindent \end{enumerate} \end{example} \section{Concluding remarks}\label{E} In this paper we proved the following results in the theory of quantale modules: (1) every module on an involutive quantale $Q$ bears a canonical (pre-)inner product; (2) that canonical (pre-)inner product admits the canonical Hilbert basis if and only if the module is principally generated and principally symmetric; and (3) if $Q$ is a modular quantal frame then the only possible Hilbert structure (= inner product plus Hilbert basis) on a $Q$-module is the canonical one. In the examples we explained the use of these results in sheaf theory: we argued in particular that the category of sheaves on a site $({\cal C},J)$ is equivalent to a category of quantale modules with (canonical) Hilbert structure. These results are a natural continuation of our previous work. 
Whereas Stubbe [2005b] described ordered sheaves on a quantaloid ${\cal Q}$ as particular ${\cal Q}$-enriched categorical structures, Heymans and Stubbe [2009] reformulated this -- via the correspondence between cocomplete ${\cal Q}$-categories and ${\cal Q}$-modules, and the particular role of ${\cal Q}$-modules in the theory of ordered sheaves on ${\cal Q}$ [Stubbe, 2006, 2007] -- in a module-theoretic language: ordered sheaves on ${\cal Q}$ are the same thing as principally generated ${\cal Q}$-modules. The material in this paper suggests that the ``symmetrically ordered'' sheaves (i.e.\ sheaves {\it tout court}) on an involutive quantale $Q$ are those principally generated $Q$-modules which are moreover principally symmetric. The latter in turn coincide with modules bearing a canonical Hilbert structure (which, for modules on a modular quantal frame, is the only possible Hilbert structure). Our current research is concerned with a further elaboration of that novel notion, ``principal symmetry'': we extend it from quantale modules to quantaloid modules, and even to quantaloid-enriched categories. A future paper shall in particular contain all remaining details from Examples \ref{1.1}--\ref{1.1.6} and \ref{20}--\ref{20.2}.
\section{Introduction}\label{sec:introduction} Music generation is an important component of computational and AI creativity, leading to many potential applications including automatic background music generation for video, music improvisation in human-computer music performance and customizing stylistic music for individual music therapy, to name a few. While works such as MelNet~\cite{vasquez2019melnet} and JukeBox~\cite{dhariwal2020jukebox} have demonstrated a degree of success in generating music in the audio domain, the majority of the work is in the symbolic domain, i.e., the score, as this is the most fundamental representation of music composition. Research has tackled this question from many angles, including monophonic melody generation \cite{medeot2018structurenet}, polyphonic performance generation~\cite{huang2018music} and drum pattern generation~\cite{wei2019generating}. This paper focuses on melody generation, a crucial component in music writing practice. Recently, deep learning has demonstrated success in capturing implicit rules about music from the data, compared to conventional rule-based and statistical methods~\cite{huang2018music, huang2020pop, hakimi2020bebopnet}. However, there are three problems that are difficult to address: (1) modeling larger scale music structure and multiple levels of repetition as seen in popular songs, (2) controllability to match music to video or to create desired tempos, styles, and moods, and (3) scarcity of training data due to limited curated and machine-readable compositions, especially in a given style. Since humans can imitate music styles with just a few samples, there is reason to believe there exists a solution that enables music generation with few samples as well. We aim to explore automatic melody generation with multiple levels of structure awareness and controllability. 
Our focus is on (1) addressing structural consistency inside a phrase and on the global scale, and (2) giving explicit control to users to manipulate melody contour and rhythm structure directly. Our solution, \textit{MusicFrameworks}, is based on the design of hierarchical music representations we call \textit{music frameworks}, inspired by Hiller and Ames \cite{10.2307/832731}. A music framework is an abstract hierarchical description of a song, including high-level music structure such as repeated sections and phrases, and lower-level representations such as rhythm structure and melodic contour. The idea is to represent a piece of music by music frameworks, and then learn to generate melodies from music frameworks. Controllability is achieved by editing the music frameworks at any level (song, section and phrase); we also present methods that generate these representations from scratch. \textit{MusicFrameworks} can create long-term music structures, including repetition, by factoring music generation into sub-problems, allowing simpler models and requiring less data. Evaluations of the \textit{MusicFrameworks} approach include objective measures to verify expected behavior, as well as subjective assessments. We compare human-composed melodies and melodies generated under various conditions to study the effectiveness of music frameworks. 
We summarize our contributions as follows: (1) devising a hierarchical music structure representation and approach called {\it MusicFrameworks} capable of capturing repetitive structure at multiple levels, (2) enabling controllability at multiple levels of abstraction through music frameworks, (3) a set of methods that analyze a song to derive music frameworks that can be used in music imitation and subsequent deep learning processes, (4) a set of neural networks that generate a song using the \textit{MusicFrameworks} approach, (5) useful musical features and encodings to introduce musical inductive biases into deep learning, (6) comparison of different deep learning architectures for relatively small amounts of training data and a sizable listening test evaluating the musicality of our method against human-composed music. \section{Related Work}\label{sec:relatedwork} Automation of music composition with computers can be traced back to 1957 \cite{Hiller}. Long before representation learning, musicians looked for models that explain the generative process of music~\cite{10.2307/832731}. Early music generation systems often relied on generative rules or constraint satisfaction~\cite{10.2307/832731, cope1991computers, copealgo, copecom}. Subsequent approaches replaced human learning of rules with machine learning, such as statistical models \cite{schulze2011music} and connectionist approaches \cite{boulanger2012modeling}. Now, deep learning has emerged as one of the most powerful tools to encode implicit rules from data \cite{briot2019deep, liang2016bachbot, hadjeres2017deepbach, huang2019counterpoint, huang2018music}. One challenge of music modeling is capturing repetitive patterns and long-term dependencies. There are a few models using rule-based and statistical methods to construct long-term repetitive structure in classical music \cite{collins2017computer} and pop music \cite{elowsson2012algorithmic, dai2021personalized}. 
Machine learning models with memory and the ability to associate context have also been popular in this area and include LSTMs and Transformers~\cite{vaswani2017attention, huang2018music, musenet, huang2020pop}, which operate by generating music one or a few notes at a time, based on information from previously generated notes. These models enable free generation and motif continuation, but it is difficult to control the generated content. StructureNet \cite{medeot2018structurenet}, PopMNet \cite{popmnet2020} and Racchmaninof \cite{collins2017computer} are more closely related to our work in that they introduce explicit models for music structure. Another thread of work enables a degree of controllability by modeling the distribution of music via an intermediate representation (embedding). One such approach is to use Generative Adversarial Networks (GANs) to model the distribution of music ~\cite{Goodfellow2014GenerativeAN, yu2017seqgan, Dong_Hsiao_Yang_Yang_2018}. GANs learn a mapping from a point $z$ sampled from a prior distribution to an instance of generated music $x$ and hence represent the distribution of music with $z$. Another method is the Autoencoder, consisting of an encoder transforming music $x$ into embedding $z$ and a decoder that reconstructs music $x$ again from embedding $z$. The most popular models are Variational Auto-Encoders (VAE) and their variants \cite{kingma2013auto, 47078, yang2019deep, Tan2020MusicFC, kawaiattributes, Wang2020LearningIR}. These models can be controlled by manipulating the embedding, for example, mix-and-matching embeddings of different pitch contours and rhythms \cite{47078, yang2019deep, chen2020music}. However, a high-dimensional continuous vector has limited interpretability and thus is difficult for a user to control; it is also difficult to model full-length music with a simple fixed-length representation. 
In contrast, our approach uses a hierarchical music representation (i.e., {\it music framework}) as an ``embedding'' of music that encodes long-term dependency in a form that is both interpretable and controllable. \section{Method}\label{sec:method} We describe a controllable melody generation system that uses hierarchical music representation to generate full-length pop song melodies with multi-level repetition and structure. As shown in Figure \ref{fig:architecture}, a song is input in MIDI format. We analyze its \textit{music framework}, which is an abstracted description of the music ideas of the input song. Then we use the music framework to generate a new song with deep learning models. We work with pop music because its structures are relatively simple and listeners are generally familiar with the style and thus able to evaluate compositions. We use a Chinese pop song dataset, POP909 \cite{pop909-ismir2020}, and use its cleaned version \cite{dai2020automatic} (with additional labeling and corrections) for training and testing in this paper. We further transpose all the major songs' key signatures into C major. We use integers 1--15 to represent scale degrees in C3--C5, and 0 to represent a rest. For rhythm, we use the $16^{th}$ note as the minimum unit. A note in the melody is represented as $(p, d)$, where $p$ is the pitch number from 0 to 15, and $d$ is the duration in sixteenths. For chord progressions, we use integers 1--7 to represent the seven scale degree chords in the major mode. (We currently work only with triads, and convert seventh chords into the corresponding triads.) 
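To make this encoding concrete, the following is a minimal sketch; the helper name and the MIDI-number mapping layout are our own illustration, not taken from the paper's codebase:

```python
# Encode a melody note as (p, d) following the conventions above:
# p = 0 for a rest, p = 1..15 for the C-major scale degrees spanning C3..C5,
# d = duration counted in 16th-note units.

# MIDI numbers of the C-major scale tones from C3 (48) up to C5 (72).
C_MAJOR_C3_TO_C5 = [48, 50, 52, 53, 55, 57, 59,   # C3..B3
                    60, 62, 64, 65, 67, 69, 71,   # C4..B4
                    72]                            # C5

def encode_note(midi_pitch, beats):
    """Map a MIDI note to (p, d). Raises ValueError for out-of-scale
    pitches, which the cleaned dataset is assumed not to contain."""
    if midi_pitch is None:                 # a rest
        p = 0
    else:
        p = C_MAJOR_C3_TO_C5.index(midi_pitch) + 1
    d = int(round(beats * 4))              # 4 sixteenths per beat
    return (p, d)

# C4 quarter note -> scale degree 8, duration 4 sixteenths
assert encode_note(60, 1.0) == (8, 4)
# half-note rest -> (0, 8)
assert encode_note(None, 2.0) == (0, 8)
```

Note that the 15 pitch values cover exactly two octaves of the C-major scale plus the upper tonic, matching the 1--15 range above.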
\begin{figure}[tb] \centerline{ \includegraphics[width=0.98\columnwidth]{figs/architecture.png}} \caption{Architecture of \textit{MusicFrameworks}.} \label{fig:architecture} \end{figure} \subsection{Music Frameworks Analysis}\label{sec:analysis} \begin{figure}[tb] \centerline{ \includegraphics[width=0.95\columnwidth]{figs/musicframework.png}} \caption{An example music framework.} \label{fig:musicframework} \end{figure} \begin{figure}[tb] \centerline{ \includegraphics[width=0.95\columnwidth]{figs/basic_components.png}} \caption{An example melody and its basic melody. The basic rhythm form also appears below the original melody as ``a,0.38 b,0.06, ...'' indicating similarity and complexity. } \label{fig:basic_components} \end{figure} As shown in Figure \ref{fig:musicframework}, a music framework contains two parts: (1) section and phrase-level structure analysis results; (2) basic melody and basic rhythm form within each phrase. A phrase is a small-scale segment that usually ranges from 4 to 16 measures. Phrases can be repeated as in AABBB shown in Figure \ref{fig:musicframework}. Sections contain multiple phrases, e.g., the illustrated song has an intro, a main theme section (phrase A as verse and phrase B as chorus), a bridge section followed by a repeat of the theme, and an outro section, which is a typical pop song structure. We extract the section and phrase structure based on finding approximate repetitions following the work of \cite{dai2020automatic}. Within each phrase, the \textit{basic melody} is an abstraction of melody and contour. Basic melody is a sequence of half notes representing the most common pitch in each 2-beat segment of the original phrase (see Figure \ref{fig:basic_components}). 
The \textit{basic rhythm form} consists of a per-measure descriptor with two components: a pattern label based on a rhythm similarity measure \cite{dai2020automatic} (measures with matching labels are similar) and a numerical rhythmic complexity, which is simply the number of notes divided by 16. With the analysis algorithm, we can process a music dataset such as POP909 for subsequent machine learning and music generation. Music frameworks enable controllability via examples: a user can mix and match music frameworks from multiple songs. For example, a new song can be generated using the structure from song A, basic melody from song B, and basic rhythm form from song C. Users can also edit or directly create a music framework for even greater control. Alternatively, we also created generative algorithms to create new music frameworks without any user intervention as described in subsequent sections. \begin{figure}[tb] \centerline{ \includegraphics[width=0.95\columnwidth]{figs/generation.png}} \caption{Generation process from music frameworks within each phrase.} \label{fig:generation} \vspace{-1em} \end{figure} \subsection{Generation Using Music Frameworks} At the top level, section and phrase structure can be provided by a user or simply selected from a library of already analyzed data. We considered several options for imitation at this top level: (1) copy the first several measures of melody from the previously generated phrase (teacher forcing mode) and then complete the current phrase; (2) use the same or similar basic melody from the previous phrase to generate an altered melody with a similar melodic contour; (3) use the same or similar basic rhythm form of the previous phrase to generate a similar rhythm. These options leave room for users to customize their personal preferences. In this study, we forgo all human control by randomly choosing between the first and second option. 
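The per-phrase analysis described in the previous subsection can be sketched as follows. This is a simplified sketch: it interprets ``most common pitch'' as duration-weighted within each 2-beat segment, assumes 4/4 time, and omits the rhythm-similarity labeling of \cite{dai2020automatic}:

```python
from collections import defaultdict

SIXTEENTHS_PER_SEGMENT = 8    # one 2-beat segment
SIXTEENTHS_PER_MEASURE = 16   # 4/4 time assumed

def basic_melody(notes):
    """notes: list of (pitch, duration-in-16ths) pairs.
    Returns one pitch per 2-beat segment: the pitch sounding longest
    in that segment (tie-breaking is our own choice)."""
    weights = defaultdict(lambda: defaultdict(int))  # segment -> pitch -> 16ths
    t = 0
    for p, d in notes:
        for _ in range(d):                  # walk sixteenth by sixteenth
            weights[t // SIXTEENTHS_PER_SEGMENT][p] += 1
            t += 1
    return [max(weights[s], key=weights[s].get) for s in sorted(weights)]

def rhythmic_complexity(notes):
    """Per-measure complexity: number of note onsets divided by 16."""
    counts = defaultdict(int)
    t = 0
    for p, d in notes:
        if p != 0:                          # rests carry no onset
            counts[t // SIXTEENTHS_PER_MEASURE] += 1
        t += d
    n_measures = (t + SIXTEENTHS_PER_MEASURE - 1) // SIXTEENTHS_PER_MEASURE
    return [counts[m] / 16 for m in range(n_measures)]

# one 2-beat segment: dotted-quarter C4 (degree 8) then an 8th D4 (degree 9)
assert basic_melody([(8, 6), (9, 2)]) == [8]
assert rhythmic_complexity([(8, 6), (9, 2), (10, 8)]) == [3 / 16]
```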
At the phrase level, as shown in Figure \ref{fig:generation}, we first generate a basic melody (or a human provides one). Next, we generate rhythm using the basic rhythm form. Finally, we generate a new melody given the basic melody, generated rhythm, and chords copied from the input song. \subsubsection{Basic Melody Generation}\label{sec:basicmelody} We use an auto-regressive approach to generate basic melodies. The input $x_i = (\text{pos}_i, c_i, ...)$ ($i \in \{1,...,n\}$) is a set of features that guides basic melody generation, where $\text{pos}_i$ is the positional feature of the $i^{th}$ note and $c_i$ represents contextual chord information (neighboring chords). We denote the pitch of the $i^{th}$ note by $p_i$. Here we fix the duration of each note in the basic melody to the half note, as in the analysis algorithm described in Section \ref{sec:analysis}. $c_i$ contains the previous, current and next chords and their lengths for the $i^{th}$ note. $\text{pos}_i$ includes the position of the $i^{th}$ note in the current phrase and a long-term positional feature indicating whether the current phrase is at the end of a section or not. \subsubsection{Network Architecture} We use an auto-regressive model based on Transformer and LSTM. The architecture (Figure \ref{fig:transformer-lstm}) consists of an encoder and a decoder. The encoder has two layers of transformers that learn a feature representation of the inputs (e.g., positional encodings and chords). The decoder concatenates the encoded representation and the last predicted note as input and passes them through one unidirectional LSTM followed by two layers of 1D convolutions of kernel size 1. Both the input and the last predicted notes to the decoder are passed through a projection layer (i.e., a dense layer) respectively before they are processed by the network. The final output is the next note predicted by the decoder via the categorical distribution $Pr(p_i | X, p_1,...,p_{i-1})$. 
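A sketch of how the input features $x_i = (\text{pos}_i, c_i)$ for basic melody generation might be assembled; the flat tuple layout, the zero-padding at phrase boundaries, and the position normalization are our own assumptions, not details specified by the paper:

```python
def basic_melody_features(i, chords, phrase_len, is_section_end):
    """Assemble x_i = (pos_i, c_i) for the i-th basic-melody note.
    chords: one (scale_degree_chord, length_in_half_notes) entry per
    half-note position (hypothetical layout)."""
    prev_c = chords[i - 1] if i > 0 else (0, 0)          # boundary padding
    next_c = chords[i + 1] if i + 1 < len(chords) else (0, 0)
    # pos_i: position within the phrase + long-term end-of-section flag
    pos = (i / phrase_len, int(is_section_end))
    # c_i: previous, current, next chord and their lengths
    return pos + prev_c + chords[i] + next_c

feats = basic_melody_features(1, [(1, 2), (4, 2), (5, 2)], 3, False)
assert feats == (1 / 3, 0, 1, 2, 4, 2, 5, 2)
```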
We also tried using other deep neural network architectures such as a pre-trained full Transformer with random masking (described in Section \ref{sec:objexperiment}) for comparison. \subsubsection{Sampling with Dynamic Time Warping Control} In the sampling stage, we tried three ways to autoregressively generate the basic melody sequence: (1) randomly sample from the estimated posterior distribution of $p_i$ at each step; (2) randomly sample 100 generated sequences and pick the one with the highest overall estimated probability; (3) beam search sampling according to the estimated probability. Apart from the above three sampling methods, we also want to control the basic melody contour shape in order to generate similar or repeated phrases. We use a melody contour rating function (based on Dynamic Time Warping) \cite{dai2021personalized} to estimate the contour similarity between two basic melodies. When we want to generate a repeated phrase whose basic melody is similar to that of a previous phrase, we estimate the contour similarity rating between the generated basic melody and the reference basic melody. We only accept basic melodies with a similarity above a threshold of $0.7$. \begin{figure}[tb] \centerline{ \includegraphics[width=1.0\columnwidth]{figs/networks.png}} \caption{Transformer-LSTM architecture for melody, basic melody and rhythmic pattern generation.} \label{fig:transformer-lstm} \vspace{-1em} \end{figure} \subsubsection{Realized Rhythm Generation} We now turn to the lower level of music generation that transforms music frameworks into realized songs. The first step is to determine the note onsets, namely the rhythm. Instead of generating note onset times one by one, we generate 2-beat rhythm patterns, which more readily encode rhythmic patterns and metrical structure. It is also easier to model (and apparently to learn) similarity using rhythm patterns than with sequences of individual durations. 
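The contour-similarity acceptance test used when sampling basic melodies can be sketched with a plain DTW alignment cost mapped into a $(0,1]$ score; the actual rating function of \cite{dai2021personalized} differs in its details, so the scoring and normalization below are our own assumptions:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping alignment cost between two
    pitch sequences, with |x - y| as the local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def contour_similarity(a, b):
    """Map the DTW cost to a (0, 1] similarity score (our normalization)."""
    return 1.0 / (1.0 + dtw_distance(a, b) / max(len(a), len(b)))

def accept(candidate, reference, threshold=0.7):
    """Rejection step: keep only candidates similar enough to the reference."""
    return contour_similarity(candidate, reference) >= threshold

# identical contours score 1.0 and are accepted
assert accept([8, 9, 10, 8], [8, 9, 10, 8])
# a very different contour falls below the 0.7 threshold
assert not accept([1, 15, 1, 15], [8, 8, 8, 8])
```

In the generation loop, candidate basic melodies would be drawn repeatedly from the model and passed through `accept` until one clears the threshold.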
We generate 2-beat patterns sequentially under the control of a basic rhythm form. There are 256 possible rhythm patterns with a 2-beat length using our smallest subdivision of sixteenths. For each rhythm pattern $r_i$, the input of the rhythm generation model is $x_i = (r_{i-1}, \text{brf}_{i}, \text{pos}_{i})$, where $r_{i-1}$ is the previously generated rhythm pattern; $\text{brf}_{i}$ encodes the basic rhythm form, i.e., the index of the first earlier measure with a similar rhythm (or the current measure itself if there is no such reference) together with the current measure's rhythmic complexity; and $\text{pos}_{i}$ contains three positional components: (1) the position of the $i^{th}$ pattern in the current phrase; (2) a long-term positional feature indicating whether the current phrase is at the end of a section or not; (3) whether the $i^{th}$ rhythm pattern starts at the barline or not. We also use a Transformer-LSTM architecture (Figure \ref{fig:transformer-lstm}), but with different model settings (size). In the sampling stage, we use beam search. \subsubsection{Realized Melody Generation} We generate melody from a basic melody, a rhythm and a chord progression using another Transformer-LSTM architecture, similar to the one used for basic melody generation in Figure \ref{fig:transformer-lstm}. In this case, the index $i$ represents the $i^{th}$ note determined by the rhythm. The input feature $x_i$ also includes the current note's duration, the current chord, the basic melody pitch, and three positional features for multiple-level structure guidance: the two positional features for basic melody generation (Section \ref{sec:basicmelody}) and the offset of the current note within the current measure. We also experimented with other deep neural network architectures described in Section \ref{sec:objexperiment} for comparison. To sample a good-sounding melody, we randomly generate 100 sequences by sampling the autoregressive model. We pick the one with the highest overall estimated probability. 
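The 2-beat rhythm patterns described above can be indexed as 8-bit onset masks over the eight sixteenth-note slots, which yields exactly the $2^8 = 256$ possible patterns; whether the paper enumerates patterns in this particular way is not specified, so treat this as one concrete realization:

```python
SLOTS = 8   # a 2-beat pattern spans 8 sixteenth-note slots

def pattern_index(onsets):
    """Encode a 2-beat rhythm pattern (list of onset slots in 0..7)
    as an 8-bit mask, giving 2**8 = 256 possible patterns."""
    index = 0
    for s in onsets:
        assert 0 <= s < SLOTS
        index |= 1 << s
    return index

def pattern_onsets(index):
    """Decode an index back to its sorted onset slots."""
    return [s for s in range(SLOTS) if index & (1 << s)]

# two quarter notes: onsets on slots 0 and 4
idx = pattern_index([0, 4])
assert idx == 0b00010001 == 17
assert pattern_onsets(idx) == [0, 4]
assert pattern_index(range(SLOTS)) == 255   # the densest pattern
```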
More details about the network are in Section \ref{sec:objexperiment}. \section{Experiment and Evaluation}\label{sec:experiment} \subsection{Model Evaluation and Comparison}\label{sec:objexperiment} As a model-selection study, we compared the ability of different deep neural network architectures implementing {\it MusicFrameworks} to predict the next element in the sequence. \textit{Basic Melody Accuracy} is the percent of correct predictions of the next pitch of the basic melody (half notes). \textit{Rhythm Accuracy} is the percent of correctly predicted 2-beat rhythm patterns. \textit{Melody Accuracy} is the accuracy of next pitch prediction. We used 4188 phrases from 528 songs in major mode from the POP909 dataset, using 90\% of them as training data and the other 10\% for validation. The first line in Table \ref{tab:accuracy} represents the Transformer-LSTM models introduced in Section \ref{sec:method}. In all three networks, the projection size and feed forward channels are 128; there are 8 heads in the multi-head encoder attention layer; the LSTM hidden size is 64; the dropout rate for basic melody and realized melody generation is 0.2, and the dropout rate for rhythm generation is 0.1; the decoder input projection size is 8 for rhythm generation and 17 for the others. For the learning rate, we used the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.99$, $\epsilon = 10^{-6}$, and the formula from \cite{vaswani2017attention} to vary the learning rate over the course of training, with 2000 warmup steps. We compared this model with several alternatives: the second model is a bi-directional LSTM followed by a uni-directional LSTM (the model size is 64 in both). The third model is a Transformer with two layers of encoder and two layers of decoder (with the same parameter settings as the Transformer-LSTM), and we first pre-trained the encoder with 10\% random masking of the input (similar to training in BERT \cite{Devlin2019BERTPO}), and then trained the encoder and decoder together. 
``No music frameworks'' (the fourth line) means generating without a basic melody or basic rhythm form, using a Transformer-LSTM model. The results in Table \ref{tab:accuracy} show that the Transformer-LSTM achieved the best accuracy. The full Transformer model performed poorly on this relatively small dataset due to overfitting. Also, in both rhythm and melody generation, the \textit{MusicFrameworks} approach significantly improves the model accuracy. \begin{table} \begin{center} \begin{tabular}{|l|l|l|l|} \hline & Basic Melody & Rhythm & Melody \\ \hline Trans-LSTM & 38.7\% & \textbf{50.1\%} & \textbf{55.2\%} \\ \hline LSTM & \textbf{39.8\%} & 43.6\% & 51.2\% \\ \hline Transformer & 30.9\% & 25.8\% & 39.3\% \\ \hline No M.F. & NA & 33.1\% & 37.4\%\\ \hline \end{tabular} \end{center} \vspace{-0.1in} \caption{Validation accuracy of different model architectures. ``No M.F.'' means no music frameworks were used.} \label{tab:accuracy} \vspace{-1em} \end{table} \subsection{Objective Evaluation} We use the Transformer-LSTM model for all further evaluations. First, we examine whether music frameworks promote controllability. We aim to show that given a basic melody and rhythm form as guidance, the model can generate a new melody that follows the contour of the basic melody, and has a similar rhythm form (Figures \ref{fig:generated_bm} and \ref{fig:generated_rhythm}). For this ``sanity check,'' we randomly picked 20 test songs and generated 500 alternative basic melodies and rhythm forms. After generating and analyzing 500 phrases, we found the analyzed basic melodies match the target (input) basic melodies with an accuracy of 92.27\%; the accuracy of rhythm similarity labels is 94.63\%; the rhythmic complexity matches the target (within $0.2$) 81.79\% of the time. Thus, these aspects of melody are easily controllable. 
\begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{figs/generated_bm.png}} \vspace{-0.1in} \caption{A melody generated by our system (yellow piano roll) following the input basic melody (blue frame piano roll).} \label{fig:generated_bm} \end{figure} \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{figs/generated_rhythm.png}} \vspace{-0.1in} \caption{A generated rhythm from our system given the input basic rhythm form. The analyzed basic rhythm form of the output is very similar to the input.} \label{fig:generated_rhythm} \vspace{-1em} \end{figure} Previous work \cite{dai2020} has shown that pitch and rhythm distributions are related to different levels of long-term structure. We confirmed that our generation exhibits structure-related distributions similar to those of the POP909 dataset. For example, the probability of a generated tonic at the end of a phrase is 48.28\%, and at the end of a section is 87.63\%, while in the training data the probabilities are 49.01\% (phrase-end) and 86.57\% (section-end). \subsection{Subjective Evaluation} \subsubsection{Design of the listening evaluation} We conducted a listening test to evaluate the generated songs. To avoid listening fatigue, we presented sections lasting about 1 minute and containing at least 3 phrases. We randomly selected 6 sections from different songs in the validation set as seeds and then generated melodies based on conditions 1--6 presented in Table \ref{tab:evaluation}. To render audio, each melody is mixed with the original chords played as simple block triads via a piano synthesizer. For each section and each condition, we generated at least 2 versions, with 105 generated sections in total. In each rating session, a listener first enters information about their music background and then provides ratings for six pairs of songs. Each pair is generated from the same input seed song using different generation conditions (see Table \ref{tab:evaluation}). 
For each pair, the listener answers: (1) whether they heard the songs before the survey (yes or no); (2) how much they like the melody of the two songs (integer from 1 to 5); and (3) how similar the two songs' melodies are (integer from 1 to 5). We also embedded one validation test in which a human-composed song and a randomized song are provided to help filter out careless ratings. \subsubsection{Results and discussion} We distributed the survey on Chinese social media and collected 274 listener reports. We removed invalid answers following the validation test and a few other criteria. We ended up with 1212 complete pairs of ratings from 196 unique listeners. The demographic information about the listeners is as follows: \\ \hspace*{0.2in}\textit{Gender} male: 120, female: 75, other: 1;\\ \hspace*{0.2in}\textit{Age distribution} 0-10: 0, 11-20: 17, 21-30: 149, 31-40: 28, 41-50: 0, 51-60: 2, $>$60: 0;\\ \hspace*{0.2in}\textit{Music proficiency levels} lowest (listen to music $<$ 1 hour/week): 16, low (listen to music 1--15 hours/week): 62, medium (listen to music $>$ 15 hours/week): 21, high (studied music for 1--5 years): 52, expert ($>$ 5 years of music practice): 44;\\ \hspace*{0.2in}\textit{Nationality} Chinese: 180, Others: 16 (note that the POP909 dataset is primarily Chinese pop songs, and listeners who are more familiar with this style are likely to be more reliable and discriminating raters.) Figure~\ref{fig:rating} shows a distribution of ratings for the seven paired conditions in Table \ref{tab:evaluation}. In each pair, we show two bar plots with mean and standard deviation overlaid: the left half shows the distribution of ratings in the first condition and the right half shows those in the second condition. The first three pairs compare music generation with and without a music framework as an intermediate representation.
The first two pairs at the bottom compare music with an automatically generated basic melody and rhythm to music using the basic melody and rhythm from a human-composed song. The last two pairs show the ratings of our method compared to music in the POP909 dataset. We also conducted a paired t-test against the null hypothesis that the first condition is not preferred over the second condition; the resulting p-values are shown under the distribution plots. In addition, we derived listener preference based on the relative ratings, summarized in Figure~\ref{fig:pref}. This visualization provides a different view from ratings as it shows how frequently one condition is preferred over the other or there is no preference (equal ratings). Based on these two plots, we point out the following observations: \vspace{0.2em} \begin{itemize}[leftmargin=*,noitemsep, nolistsep] \item Basic melody and basic rhythm form improve the quality of generated melody. As indicated by low p-values and strong preferences in ``1 vs 3'', ``2 vs 3'' and ``4 vs 5,'' generating basic melody and basic rhythm before melody generation leads to higher ratings than generating melody without these music framework representations. \item Melody generation conditioned on generated basic melody and basic rhythm has similar ratings to melody generated from a human-composed song's basic melody and basic rhythm form. This observation can be derived from the similar rating distributions and near-random preference distributions in ``1 vs 2'' and ``1 vs 4,'' indicating that preferences for the generated basic melody and rhythm form are close to those for music in our dataset.
\item Although both distribution tests suggest that human-composed music has higher ratings than generated music in test pairs ``0 vs 1'' and ``0 vs 6'' (and this is statistically significant), the preference test suggests that around half of the time the generated music is as good as or better than human-composed music, indicating the usability of the \textit{MusicFrameworks} approach. \end{itemize} \vspace{0.2em} \begin{table} \begin{center} \begin{tabular}{|l|l|l|l|} \hline & R.Melody & Basic Melody & Rhythm \\ \hline 0 & copy & copy & copy \\ \hline 1 & gen & copy & copy \\ \hline 2 & gen & gen & copy \\ \hline 3 & gen & without & copy \\ \hline 4 & gen & copy & gen with BRF \\ \hline 5 & gen & copy & gen without BRF \\ \hline 6 & gen & gen & gen with BRF \\ \hline \end{tabular} \end{center} \vspace{-1em} \caption{Seven evaluation conditions. Group 0 is human-composed. R.Melody: realized melody; gen: generated from our system; BRF: Basic Rhythm Form; copy: directly copying that part from the human-composed song; without: not using music frameworks. } \label{tab:evaluation} \end{table} \begin{figure}[tb] \centerline{ \includegraphics[width=0.95\columnwidth]{figs/ratings.png}} \vspace{-0.1in} \caption{Rating distribution comparison for each pair of conditions. *For conditions 0 vs 1 and 0 vs 6, we removed the cases where the listeners indicated having heard the song before.} \label{fig:rating} \vspace{-1em} \end{figure} \begin{figure}[tb] \centerline{ \includegraphics[width=0.95\columnwidth]{figs/preference.pdf}} \vspace{-0.1in} \caption{Preference distribution for each pair of conditions.
*For conditions 0 vs 1 and 0 vs 6, we removed the cases where the listeners indicated having heard the song before.} \label{fig:pref} \vspace{-1em} \end{figure} To understand the gap between our generated music and human-composed music, we look into the comments written by listeners and summarize our findings below: \vspace{0.2em} \begin{itemize}[leftmargin=*,noitemsep,nolistsep] \item Since sampling is used in the generative process, there is a non-zero chance that a poor note choice may be generated. Though this does not affect the posterior probability significantly, it degrades the subjective rating. Repeated notes also have an adverse effect on musicality with a lesser influence on posterior probability. \item \textit{MusicFrameworks} uses basic melody and rhythm form to control long-term dependency, i.e., phrases that are repetitions or imitations share the same or similar music framework; however, the generated melody has a chance to sound different due to the sampling process. A human listener can distinguish a human-composed song from a machine-generated song by listening for exact repetition. \item Basic melody provides more benefit for longer phrases. For short phrases (4--6 bars), generating melodies from scratch is competitive with generating via basic melody. \item The human-composed songs used in this study are from the most popular ones in Chinese pop history. Even though raters may think they do not recognize the song, there is a chance that they have heard it. A large portion of the comments suggests that a lot of the test music sounds great and that it was an enjoyable experience working on these surveys. However, some listeners point out that concentrating on relatively long excerpts was not a natural listening experience.
\end{itemize} \vspace{0.2em} \vspace{-0.5em} \section{Conclusion}\label{sec:conclusion} \textit{MusicFrameworks} is a deep melody generation system using hierarchical music structure representations to enable a multi-level generative process. The key idea is to adopt an abstract representation, \textit{music frameworks}, including long-term repetitive structures, phrase-level \textit{basic melodies} and \textit{basic rhythm forms}. We introduced analysis algorithms to obtain music frameworks from songs. We created a neural network that generates basic melody and additional networks to generate melodies. We also designed musical features and encodings to better introduce musical inductive bias into deep learning models. Both objective and subjective evaluations show the importance of having music frameworks. About 50\% of the generated songs are rated as good as or better than human-composed songs. Another important feature of the \textit{MusicFrameworks} approach is \textit{controllability} through manipulation of music frameworks, which can be freely edited and combined to guide compositions. In the future, we hope to develop more intelligent ways to analyze music and music frameworks, supporting a richer musical vocabulary, harmony generation, and polyphonic generation. We believe that hierarchical and structured representations offer a way to capture and imitate musical style, offering interesting new research opportunities. \vspace{-0.5em} \section{Acknowledgments} We would like to thank Dr. Stephen Neely for his insights on musical rhythm.
\section{Introduction\label{sec:Introduction}} Active matter is an important field of research that considers particle systems whose microscopic components are characterized by systematic persistent dynamic rules and by various types of mutual interactions. On a microscopic scale, the persistent dynamics breaks the detailed-balance condition (underlying all of equilibrium physics) and defines active matter as out-of-equilibrium systems. Many different models of active matter have been proposed. They feature a wide range of self-propelled dynamics and of mutual interactions. A great many theoretical studies employ either Langevin or molecular dynamics\cite{CatesReview2012,KineticModel,FilyMarchetti2012,CahnHilliard,ActiveCrystalizationLoewen2012,Dumbbell1} in order to model persistent motion. Recently, a kinetic Monte Carlo (MC) approach was proposed\cite{BerthierHardSpheres,KlKaKr2018} as a minimal model for active matter in two dimensions. Although the primary interest in theoretical models of active matter comes from their non-equilibrium nature, their properties can often be connected to their equilibrium counterparts that are realized in the zero-persistence limit. This limit is of particular interest in two dimensions, where equilibrium particle systems with short-range interactions cannot crystallize\cite{Mermin}. Nevertheless, it was established that, from the high-temperature (low-density) liquid regime towards the low-temperature (high-density) solid regime, two-dimensional equilibrium particle systems normally undergo two phase transitions\cite{HalperinNelson1978,NelsonHalperin1979,Young1979}. These transitions describe the passage into and out of a hexatic phase that is sandwiched between the liquid and the solid phase (see \tab{tab:PhaseCharacteristics}).
In this work, we are concerned with two-dimensional models with repulsive inverse-power-law interactions, for which the two-step melting scenario in equilibrium is firmly established\cite{BernardKrauth2011,KapferKrauth2015}. Here, we extend our earlier findings for a special case\cite{KlKaKr2018} and show that kinetic MC generically reproduces motility-induced phase separation (MIPS) in two-dimensional active-particle systems with a wide range of inverse-power-law interactions including the hard-sphere limit. We also confirm the stability of the hexatic phase up to considerable values of the activity, and conjecture that it is indeed stable at any value of the activity. We finally confirm that MIPS is generically a liquid--gas transition under the kinetic MC dynamics. Moreover, it is decoupled from the melting transitions. This separation can be understood in the limit of infinitesimal MC steps from the scaling behavior of the MIPS phase transition and the melting transitions. This work is organized as follows. In \secti{sec:Model}, the essential elements of the kinetic MC algorithm are described, the interplay of persistence with interactions is illustrated in the simple case of two particles in one dimension (1D), and possible anisotropy effects in two dimensions (2D) are analyzed. In \secti{sec:Results}, we discuss the effect of the stiffness of the interparticle potential on the full quantitative phase diagram of two-dimensional particle systems on the activity--density plane. The continuous-time limit of the kinetic MC dynamics is discussed in \secti{sec:RelevantParameters}, with a focus on the number of relevant parameters.
\begin{table} \caption{\label{tab:PhaseCharacteristics} Decay of correlation functions in the liquid, hexatic, and solid phases in two-dimensional particle systems.} \begin{ruledtabular} \begin{tabular}{llcr} Order & Liquid phase & Hexatic phase& Solid phase\\ \hline \begin{tabular}[c]{@{}l@{}}Positional\end{tabular} & short-range\footnote{$\propto \exp{(-r/\xi)}$\label{foot:exp}} & short-range\textsuperscript{\ref{foot:exp}} & quasi-long-range\footnote{$\propto r^{-\alpha}$\label{foot:alg}} \\ \begin{tabular}[c]{@{}l@{}}Orientational\end{tabular} & short-range\textsuperscript{\ref{foot:exp}} & quasi-long-range\textsuperscript{\ref{foot:alg}} & long-range\footnote{$\propto$ constant} \end{tabular} \end{ruledtabular} \end{table} \section{Model: kinetic MC\label{sec:Model}} \begin{figure} \centering \includegraphics[width=.45\textwidth]{MeanAbsDisp.pdf} \caption{Mean absolute displacement \emph{vs.} time $t$ for a single particle. \subcap{a} The crossover from ballistic to diffusive motion shifts to larger $t$ with increasing activity, measured in terms of the persistence length $\lambda$ (see \eq{EQ:lambda}). \subcap{b} Data collapse illustrating the crossover from $\propto t$ to $\propto \sqrt{t}$ at around $(1,1)$ expressed using \eqtwo{EQ:lambda}{EQ:tau}.} \label{F:MeanAbsDisp} \end{figure} The kinetic MC algorithm with which we model active dynamics consists of the standard Metropolis filter combined with a memory term for the proposed moves. The memory term is characterized by a time scale $\tau$, which allows for a smooth tuning from a passive motion (described by equilibrium statistical mechanics) to a self-propelled/persistent particle motion, where a single particle moves ballistically (mean square displacement $\propto t^2$) for times $t \ll \tau$ and moves diffusively (mean square displacement $\propto t$) for times $t \gg \tau$ (see \subfig{F:MeanAbsDisp}{a}). 
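The effect of this memory term can be made concrete with a minimal single-particle sketch (parameter values chosen for illustration only): each increment component performs a Gaussian random walk that is reflected back into $[-\delta,\delta]$, as specified below, and the mean absolute displacement crosses over from near-ballistic to near-diffusive growth around $t \approx \tau$.

```python
import math
import random

def reflect(v, delta):
    # Fold v back into [-delta, delta] by reflection at the boundaries.
    while abs(v) > delta:
        v = 2 * delta - v if v > delta else -2 * delta - v
    return v

def persistent_walk(steps, delta, sigma, rng):
    # Each increment component performs a Gaussian random walk
    # (mean = previous increment, std = sigma), confined by reflection.
    ex, ey = rng.uniform(-delta, delta), rng.uniform(-delta, delta)
    x = y = 0.0
    traj = [(x, y)]
    for _ in range(steps):
        ex = reflect(rng.gauss(ex, sigma), delta)
        ey = reflect(rng.gauss(ey, sigma), delta)
        x, y = x + ex, y + ey
        traj.append((x, y))
    return traj

# For delta = 0.1, sigma = 0.01 the persistence time is
# tau = 8 delta^2 / (pi^2 sigma^2) ~ 81 steps: the mean absolute
# displacement roughly doubles from t = 10 to t = 20 (ballistic) but
# grows only by ~sqrt(2) from t = 800 to t = 1600 (diffusive).
rng = random.Random(1)
walks = [persistent_walk(1600, 0.1, 0.01, rng) for _ in range(400)]
mad = lambda t: sum(math.hypot(*w[t]) for w in walks) / len(walks)
ballistic_ratio = mad(20) / mad(10)      # close to 2
diffusive_ratio = mad(1600) / mad(800)   # close to sqrt(2)
```

This is only a sketch of the single-particle limit; interactions enter through the Metropolis filter described next in the text.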
In contrast to active Brownian particles \cite{ReviewActiveBrownian}, the velocity amplitude fluctuates in the MC dynamics. The dynamics is comparable with the active Ornstein--Uhlenbeck process (see \textit{e.g.} Ref.~\onlinecite{HowFarOrnsteinUhlenbeck}), where the velocity performs a random walk in a harmonic potential. Similarly, in the discrete-time kinetic MC approach the increment performs a random walk in a box with reflecting boundary conditions. For a single particle in two dimensions, the $x$- and $y$-components of the increment $(\epsilon_x(t), \epsilon_y(t))$ at time $t$ are sampled from two independent Gaussian distributions of standard deviation $\sigma$, with means given by the previously sampled increments ($\epsilon_x(t-1)$ and $\epsilon_y(t-1)$, respectively). This walk is confined in $x$ and $y$ by reflecting boundaries at $x = \pm \delta$ and $y = \pm \delta$. In two dimensions, the average distance a single particle covers before changing direction is given by\cite{KlKaKr2018} the persistence length \begin{equation} \lambda \simeq 0.62 \frac{\delta^3}{\sigma^2}\,, \label{EQ:lambda} \end{equation} and the persistence time is given by \begin{equation} \tau = \frac{8}{\pi^2}\frac{\delta^2}{\sigma^2}\,. \label{EQ:tau} \end{equation} These characteristic length and time scales are confirmed in numerical simulations. For example, the crossover from ballistic to diffusive motion in \subfig{F:MeanAbsDisp}{b} appears in the rescaled time-dependence of the mean absolute displacement around the point $(t/\tau = 1,\langle r(t)\rangle/\lambda = 1)$. In the many-particle case, the increment for each particle performs its proper random $\vec\epsilon$-walk, independent of the other particles. Interactions between particles are introduced by the Metropolis filter. In one kinetic MC step, a particle $i$ is chosen at random.
The change of its position ($\vec{r}_i(t+1) = \vec{r}_i(t) + \vec{\epsilon}_i(t)$) is accepted with probability \begin{equation} P(E' \to E) = \min\left[1,e^{-\beta\Delta E}\right]\,, \label{EqMetropolisFilter} \end{equation} where $\Delta E = E - E'$ is the total energy change caused by the particle displacement $\vec{r}_i(t) \rightarrow \vec{r}_i(t+1)$. The parameter $1/\beta = k_\text{B}T$ is now an energy scale rather than a temperature. The random $\vec\epsilon$-walk persists whether the resulting displacement is accepted or not. For a detailed description of the kinetic MC approach, see Ref.~\onlinecite{KlKaKr2018}. \subsection{1D Model for Persistence\label{sec:EffectOfPersistence}} Kinetic MC, \emph{via} the memory present in its displacements, generates persistence in a manner that differs from equilibrium systems. This has far-ranging consequences for many-particle systems, but the effects of a memory term in the equations of motion can already be studied for $N=1$ or $N=2$ particles. Here we study the case of two particles on a ring (a line of length $L$ with periodic boundary conditions), interacting with an inverse-power-law pair potential that we will use throughout this work \begin{equation} U(r) = u_0 \left( \frac{\gamma}{r} \right)^n\,, \label{eq:Potential} \end{equation} where $\gamma$ reduces to the particle diameter in the hard-disk limit $n\to \infty$. The 1D inter-particle distance $r$ in \eq{eq:Potential} is easily generalized to higher dimensions. \begin{figure} \centering \includegraphics[width=.45\textwidth]{1ParticleFight.pdf} \caption{Persistence for two 1D particles on a ring (line of length $L$ with periodic boundary conditions). \subcap{a} and \subcap{b} indicate the two states of the system that lead to the local maxima of the pair-correlation function. White arrows indicate the sign of the displacements $\epsilon_i$. \subcap{c} Pair correlation function $P(r)$ for different $\lambda$.
Maximum jump length $\delta = L/40$; $\lambda$ is varied by changing $\sigma$. Data for $n = 6$, $u_0\beta = 1$. The unit length scale is set by $\gamma$ in \eq{eq:Potential}.} \label{F:TwoParticlesOnRing} \end{figure} \fig{F:TwoParticlesOnRing} shows the probability distribution $P(r)$ of the inter-particle distance. The maximum jump length $\delta$ is kept constant and the persistence length $\lambda$ is varied by changing $\sigma$. As the interaction is repulsive, the two particles repel each other for small $\lambda$, and the Boltzmann weight is maximal at $r= L/2$ for $\lambda = 0$ (see \subfig{F:TwoParticlesOnRing}{c}). For increased $\lambda$, a peak appears at small $r$, and its position decreases with increasing $\lambda$. This means that the particles are more likely to be found close to each other, which arises from arrangements in which the particles attempt to move against each other (see \subfig{F:TwoParticlesOnRing}{a}). The larger the persistence length, the harder the particles push against the interparticle potential barrier, and the further the peak position therefore shifts. This shows that the self-propulsion force increases with $\lambda$ or, equivalently, with the persistence time $\tau$; the shift results from the larger number of attempts in the Metropolis filter to increase the total energy. At finite $\lambda$, the original (Boltzmann) peak at $r = L/2$ shifts to smaller $r$ and takes on a position that is independent of $\lambda$. This peak appears due to arrangements where one particle \quot{hunts} the other, circling around the ring with $|\epsilon_\text{hunter}| > |\epsilon_\text{hunted}|$ (see \subfig{F:TwoParticlesOnRing}{b}). For this to have a non-negligible probability, the roles of the slower and the faster particle have to remain stable for a sufficient time span, which explains why the peak appears only above a certain threshold in $\lambda$. That the peak position is independent of $\lambda$ is understood from the following argument.
Moves of the slower particle are always accepted by the Metropolis filter, as they decrease the total energy. In contrast, moves of the faster particle are often rejected, as they increase the total energy. This leads to a competition in which the slower particle increases $r$ in every attempt, while the faster particle tries to decrease $r$ but does not succeed in every attempt, thereby leading to an average \quot{hunting distance} $r$ that is independent of $\lambda$ (see \subfig{F:TwoParticlesOnRing}{c}). The bimodal shape of $P(r)$ (see \fig{F:TwoParticlesOnRing}) is a consequence of the non-constant velocity amplitude and is thus in contrast to, \textit{e.g.}, active Brownian particles. The result clearly shows that the self-propulsion force in the kinetic MC dynamics depends on the persistence time $\tau$. This is not the case in the Langevin approaches of active Brownian particles or the active Ornstein--Uhlenbeck process. However, as discussed in \secti{sec:RelevantParameters}, the persistence length $\lambda$ is the relevant measure of activity in this dynamics. \subsection{Anisotropic effects} \begin{figure} \centering \includegraphics[width=0.475\textwidth]{{2DAnisotropie}.pdf} \caption{Kinetic-MC displacement distribution $P_{\Delta t}(|\Delta x|,|\Delta y|)$, in time $\Delta t$, for a single particle (infinite system). \subcap{a}: Anisotropic total displacement for small times ($\Delta t = \tfrac{3}{4}\tau$). \subcap{b}: Isotropic total displacement for large times ($\Delta t = 8\tau$). Inset in \subcap{b}: Displacement distribution for a circular reflecting boundary for small times ($\Delta t = \tfrac{3}{4}\tau$). (Linear color code with zero at purple.)} \label{F2DAnisotropie} \end{figure} In our kinetic MC algorithm, displacements $\boldsymbol{\varepsilon} = (\epsilon_x, \epsilon_y) \in [-\delta,\delta]^2$ are confined to a square box rather than being sampled from an isotropic distribution (such as a circle of radius $\delta$).
(The 2D Gaussian distribution of the Ornstein--Uhlenbeck process is also isotropic.) Although the square box is chosen for simplicity only, it is useful to check that it does not induce anisotropies. This trivially follows in the passive limit for the steady-state probability distribution because of the detailed-balance condition. At small times ($t \ll \tau$), the reflecting boundary conditions for the sampling scheme of the displacements introduce some degree of anisotropy in the two-dimensional single-particle dynamics (see \subfig{F2DAnisotropie}{a}). The probability distribution $P_{\Delta t}(|\Delta x|,|\Delta y|)$ of the absolute particle displacement $\Delta x$ (or $\Delta y$) in the $x$ (or $y$) component in a time $\Delta t$ is anisotropic for $\Delta t < \tau$ (see \subfig{F2DAnisotropie}{a}) whereas, without the square box, this displacement (which is constructed from identically distributed, independent Gaussians in both dimensions) would be isotropic. However, the isotropy is restored for $t \gg \tau$ (see \subfig{F2DAnisotropie}{b}). As $\tau = 0$ in the passive limit, the particle dynamics is then isotropic for all times\cite{BookKrauth}. The anisotropy for $t < \tau$ could also be avoided by choosing a circular sampling box of radius $\delta$ with reflecting boundary conditions for the displacements (see \subfig{F2DAnisotropie}{b}). However, such a choice would be more costly to implement. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{{2DAcceptedDispl}.pdf} \caption{ Two-dimensional probability distribution of the accepted displacements $P(\epsilon_x, \epsilon_y | \text{accepted})$. \subcap{a} Many-body system in the MIPS region. \subcap{b} In the solid near the solid--hexatic transition at a density far above the equilibrium melting lines ($n = 6$, $N= 10976$).} \label{F2DAcceptedDispl} \end{figure} We conjecture that the square box in fact does not render the long-time dynamics anisotropic, either for $N=1$ or in the many-body case.
In the dilute case (where $\lambda$ is much smaller than the mean free path) the kinetic MC dynamics effectively reverts to the detailed-balance dynamics as interactions between particles happen at the diffusive time scale. At higher densities, anisotropy in the many-body properties might arise if the probability distribution of the accepted displacements is itself anisotropic. However, this is not the case (see \fig{F2DAcceptedDispl}). At higher densities, all large proposed displacements have a vanishing probability to be accepted by the Metropolis filter, thus leading to an effectively isotropic dynamics. Therefore, the Metropolis filter effectively realizes a circular reflecting $\boldsymbol{\varepsilon}$-sampling box without additional computational cost. Pair-correlation functions are also found to be perfectly isotropic at high density, both in the motility-induced liquid phase (at a density deep inside the equilibrium solid phase) and in the MIPS region (see \fig{F: 2DPairCorrAnisoSolHexLiq}). \begin{figure} \centering \includegraphics[width=0.475\textwidth]{{2DAnisotropyPairCorr}.pdf} \caption{Effective isotropic dynamics ($n = 6$, $N= 43904$). \subcap{a} and \subcap{c}: pair-correlation function $g(r,\theta)$ in polar coordinates averaged over $100$ configurations. \subcap{b} and \subcap{d}: difference between $g(r,\theta)$ and its angular average $g(r)$. The red arrows indicate $\pi/4, 3\pi/4, 5\pi/4$ and $7\pi/4$. Inset in \subcap{d}: snapshot of configuration showing MIPS corresponding to \subcap{c} and \subcap{d}.} \label{F: 2DPairCorrAnisoSolHexLiq} \end{figure} \section{Two-dimensional simulation results\label{sec:Results}} Our simulations are performed in an ensemble of $N$ particles confined to a rectangular\cite{KlKaKr2018} box of volume $V$ with periodic boundary conditions. The density $\phi = \gamma^2 N/V$ (with $\gamma$ from \eq{eq:Potential}) is varied by changing $V$. 
In the simulations, $\delta$ is kept constant and the activity is varied by changing $\sigma$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FullPhaseDiagram.pdf} \caption{Phase diagram as a function of density $\phi$ and persistence length $\lambda$. \subcap{a}: Soft-disk potential with $n = 16$. \subcap{b}: Comparison of phase boundaries \cite{KlKaKr2018} for $n=6$ with the steeper $n=16$ case ($\delta = 0.1$, $\gamma = 1$, $u_0\beta = 1$). MIPS is always separated from the melting transitions. } \label{F:FullPhaseDiagram} \end{figure} \subsection{Full phase diagram and the effect of stiffness\label{sec:PhaseDiagram}} \subfig{F:FullPhaseDiagram}{a} shows the full phase diagram for the potential in \eq{eq:Potential} with $n = 16$ on the $\phi$--$\lambda$ plane. At all $\lambda$, the equilibrium two-step melting transition\cite{KapferKrauth2015} is recovered. For increasing $\lambda$, the melting lines shift to higher densities. Remarkably, the hexatic phase separating the liquid and solid phases is stable even far from equilibrium. This non-equilibrium two-step melting can be induced either by reducing the density (just as in equilibrium) or by increasing the persistence length. In addition to these melting transitions, at low $\phi$ but high $\lambda$, a motility-induced liquid--gas coexistence region opens up. It is separated from the melting transitions by a disordered fluid phase. This generalizes the phase diagram found previously\cite{KlKaKr2018} under the same dynamics for $n = 6$. A change of $n$ (see \eq{eq:Potential}) only shifts the positions of the phase boundaries (see \subfig{F:FullPhaseDiagram}{b}). As for the equilibrium melting transitions \cite{KapferKrauth2015}, both the liquid--hexatic and the hexatic--solid phase boundaries shift at constant persistence length $\lambda$ to smaller densities with increasing $n$. Increasing $\lambda$ shifts the melting transitions to higher densities.
However, the shift is smaller for larger $n$, resulting in steeper transition lines (see \subfig{F:FullPhaseDiagram}{b}). At the same time, the onset of the motility-induced liquid--gas coexistence shifts to smaller $\phi$ and $\lambda$ and the coexistence region shrinks. This ensures that the liquid--gas coexistence and the melting transitions remain disjoint. In \secti{sec:MIPS}, we argue that there is no singular change in the phase diagram even in the hard-disk limit $n\to \infty$. \subsection{Non-equilibrium two-step melting \label{sec:Melting}} In equilibrium, the Mermin--Wagner theorem forbids long-range translational (\emph{i.e.} crystalline) order in a 2D particle system with short-range interactions \cite{MerminWagner,Mermin1968}. However, at large densities, particles can arrange in locally hexagonal configurations. This can lead to two different high-density phases\cite{NelsonHalperin1979,HalperinNelson1978,Young1979}, which are characterized by different degrees of orientational and positional order (see \tab{tab:PhaseCharacteristics}). In this section, we define these measures of order and use them to quantify the two-step melting far from equilibrium. The local bond-orientational order parameter $\psi_6(\bm{r}_i)$ measures the six-fold orientation near a particle $i$. It is defined as \begin{equation*} \psi_6(\bm{r}_i) = \frac{1}{\text{number of neighbors $j$ of $i$ } } \sum_j \exp(6 \imath\theta_{ij})\,, \end{equation*} where $\imath$ is the imaginary unit and $\theta_{ij}$ is the angle enclosed by the $x$-axis and the connection line between particle $i$ and its neighbor $j$. Here, we use the Voronoi construction to identify neighbors and $\psi_6(\bm{r}_i)$ is calculated with Voronoi weights\cite{VoronoiWeights}.
Then, the correlation function \begin{equation} g_6(r)\propto \left\langle \sum_{i,j}^N \psi^\star_6(\bm{r}_i)\psi_6(\bm{r}_j) \delta(r - r_{ij}) \right\rangle \end{equation} is a measure of the correlation of the local six-fold orientational order at distance $r$, and its decay is used to quantify the degree of orientational order in the system (see \tab{tab:PhaseCharacteristics}). The direction-dependent pair-correlation function $g(x,y)$ provides a measure for the positional order. This two-dimensional histogram is averaged over different configurations $C$ after re-aligning \cite{BernardKrauth2011} $g_C(x,y)$ such that the $x$-axis points in the direction of the global orientation parameter $\Psi_6(C) = \sum_i^N \psi_6(\bm{r}_i)$ of $C$. Then the decay of, \textit{e.g.}, $g(x,0)$ determines the degree of positional order. The correlation functions $g(x,0)$ and $g_6(r)$ allow one to identify the two-dimensional phases by the properties summarized in \tab{tab:PhaseCharacteristics}. In equilibrium, the Kosterlitz--Thouless--Halperin--Nelson--Young theory provides an additional selection criterion\cite{NelsonHalperin1979,HalperinNelson1978,Young1979}, where the exponents (defined in \tab{tab:PhaseCharacteristics}), $\alpha \leq 1/4$ for the orientational order and $\alpha \leq 1/3$ for the positional order, provide theoretical bounds for the hexatic phase. However, these bounds have not been shown to apply outside equilibrium. We thus identify the phases by the characteristic decay of $g(x,0)$ and $g_6(r)$. Orientational and positional correlation functions change as the system melts from solid to liquid passing through the hexatic phase (see \fig{F:Melting_n16}, at a density far above the equilibrium melting point).
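As an illustration, the local orientational order parameter can be sketched as follows (with equal-weight neighbors from a precomputed neighbor list, rather than the Voronoi-weighted construction used in this work); $g_6(r)$ is then obtained by correlating $\psi_6^\star\psi_6$ over particle pairs binned in distance.

```python
import cmath
import math

def psi6(r_i, neighbors):
    # Average of exp(6 i theta_ij) over the bonds from particle i to its
    # neighbors; theta_ij is the bond angle with the x-axis.
    # Equal weights here (a simplification of the Voronoi-weighted scheme).
    acc = 0j
    for xj, yj in neighbors:
        theta = math.atan2(yj - r_i[1], xj - r_i[0])
        acc += cmath.exp(6j * theta)
    return acc / len(neighbors)

# A particle whose six neighbors sit at perfect hexagonal angles has
# |psi6| = 1; displacing a bond angle reduces the modulus.
hexagon = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
perturbed = hexagon[:5] + [(math.cos(math.pi / 6), math.sin(math.pi / 6))]
print(abs(psi6((0.0, 0.0), hexagon)))    # ~1.0
print(abs(psi6((0.0, 0.0), perturbed)))  # < 1
```

This sketch only illustrates the local quantity; the phase classification in the text rests on the decay of the correlation functions, not on single-particle values.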
Our simulations clearly identify solid state points with power-law decay in $g(x,0)$ and constant $g_6(r)$, hexatic state points with exponential decay of $g(x,0)$ and quasi-long-range order in $g_6(r)$, and liquid state points, where both correlations decay exponentially. Snapshots illustrate these different phases (see \fig{F:Melting_n16} and \tab{tab:PhaseCharacteristics}). The power-law exponent in $g_6(r)$ grows when approaching the liquid--hexatic transition, thereby weakening the order in the hexatic phase. This behavior of $g_6(r)$ is in agreement with equilibrium studies\cite{KapferKrauth2015} and with our earlier results\cite{KlKaKr2018} obtained with the same dynamics for $n = 6$. We check that our simulations indeed reach the steady state (as in earlier work\cite{KlKaKr2018}) by verifying the convergence of the spatial correlation functions to the same steady state starting from a crystalline and a liquid initial particle arrangement. These very time-consuming computations ensure that the defining decays of the correlation functions obtained in the hexatic phase reflect the physical system and not a bias introduced by the initial condition. \begin{figure} \includegraphics[width = 0.48 \textwidth]{{Melting1.2_n16}.pdf} \caption{\label{F:Melting_n16} Two-step melting for $n=16$ at high density. \subcap{a}: Positional correlation $g(x,y=0)$. \subcap{b}: Orientational correlation $g_6(r)$. Particles in \subcap{c} are color-coded according to local orientational order $\psi_6$. \quot{A} and \quot{B} are solid (algebraic $g(x,0)$, long-range $g_6(r)$). \quot{C} and \quot{D} are hexatic (exponential $g(x,0)$, algebraic $g_6(r)$). \quot{E} is liquid (both decays exponential).
($d = (\pi N /V)^{-1/2}$, $\phi = 1.2$, $N = 43904$, $\delta = 0.1$, $u_0\beta = 1$, \quot{A}: $\lambda = 0.5$, \quot{B}: $\lambda = 0.7$, \quot{C}: $\lambda = 1.0$, \quot{D}: $\lambda = 1.1$, \quot{E}: $\lambda = 1.4$)} \end{figure} \subsection{Motility-induced phase separation\label{sec:MIPS}} At sufficiently low densities and high activities, a liquid--gas coexistence region opens up with a roughly U-shaped phase boundary (see \subfig{F:FullPhaseDiagram}{a}). Following an analysis of local densities applied earlier\cite{KlKaKr2018} for $n = 6$, we confirm that also for $n = 16$ the densities ($\phi_\text{Liquid}$ and $\phi_\text{Gas}$) of the two coexisting phases depend on $\lambda$ but not on the global density $\phi$. Therefore, the low-density boundary of the MIPS region in \subfig{F:FullPhaseDiagram}{a} is given by\cite{KlKaKr2018} $\phi=\phi_\text{Gas}(\lambda)$ and the high-density boundary by $\phi = \phi_\text{Liquid}(\lambda)$. A much-discussed question\cite{FilyMarchetti2012,KineticModel,JanusParticleSpeck2013,Dumbbell1, MeltingABP} concerns the nature of the two phases at coexistence. We can clearly identify the high-density phase as liquid, and MIPS as a liquid--gas coexistence. The particles in the snapshots in \fig{F:MIPS-HardSoft} are $\psi_6$-color-coded (see \fig{F:Melting_n16} for the definition), illustrating the short-ranged orientational order in the liquid phase. We do not observe that the orientational correlation in the liquid phase increases with increasing $\lambda$. Even at very high activities (e.g. at $\lambda = 4 \times 10^3$ in \subfig{F:MIPS-HardSoft}{a}), the local orientational order varies on length scales of the order of the interparticle distance (compare also with point E in \subfig{F:Melting_n16}{c}). We do not observe any quantitative difference from the orientational order in MIPS previously observed\cite{KlKaKr2018} for $n=6$ with the same kinetic MC dynamics.
Furthermore, we also recover MIPS in the hard-disk system in the form of a liquid--gas coexistence (see \subfig{F:MIPS-HardSoft}{b}). We conjecture from these findings that the separation of the MIPS region from the melting transitions is a generic feature of self-propelled particles, at least within kinetic MC dynamics. \begin{figure} \includegraphics[width = 0.475 \textwidth]{MIPS_Soft_Hard.pdf} \caption{\label{F:MIPS-HardSoft} Short-range order in MIPS for $n = 16$ and for hard disks. \subcap{a}: Snapshots for $n = 16$ at $\phi = 0.4$. \subcap{b}: Snapshots for hard disks ($n \to \infty$) at $\phi = 0.2$. The liquid phase is clearly identified by a short-ranged orientational correlation, illustrated by the $\psi_6$ color code defined in \fig{F:Melting_n16}. Data for $N = 10976$, $\delta = 0.1$, $\beta u_0 = 1$.} \end{figure} \section{Relevant parameters\label{sec:RelevantParameters}} So far (as in our previous work\cite{KlKaKr2018}), we have considered the phase diagram as a function of the persistence length and the density, keeping the maximum step size $\delta$ constant (see \fig{F:FullPhaseDiagram}). However, $\delta$ has a profound influence on the phase boundaries (see \fig{F:PhaseDiagramDeltaDependence}). Keeping $\phi$ constant, the melting lines shift to smaller activities as $\delta$ is decreased, while the MIPS phase boundary, in contrast, shifts to larger activities. In this section, we study this $\delta$-dependence of melting and of MIPS. The kinetic MC dynamics depends on three parameters ($\delta$, $\lambda$, $\phi$). We now show that MIPS (seen at high $\lambda$) and the melting close to equilibrium are described by different reduced sets of relevant parameters in the $\delta \to 0$ limit. The single relevant parameter\cite{KapferKrauth2015} that describes the melting transitions for inverse-power-law potentials is not commensurate with the reduced parameters for MIPS, as it does not capture the critical melting density in the passive limit.
Therefore, there are separate descriptions of MIPS and of the melting transitions. Here we introduce the Master equation as a stochastic description of our kinetic MC dynamics, in addition to a Langevin description and the associated Fokker--Planck equation. We also compare the dimensional reduction of the relevant parameters with that in other stochastic models of active matter. \begin{figure} \includegraphics[width = 0.45 \textwidth]{ScalingPhaseDiagramDeltaScaling.jpg} \caption{\label{F:PhaseDiagramDeltaDependence} Phase boundaries (MIPS and melting transitions) for different step sizes $\delta$. Data for $n = 6$, $u_0\beta = 1.0$.} \end{figure} \subsection{A simple argument\label{SecApproximateDynamics}} We first address the question of relevant parameters with a simple argument for a single particle. (A more detailed analysis is presented in the following \secti{S: exact analysis of MC} and \secti{Multi-particle conti limit KMC}.) We consider the small-$\delta$ limit, which is justified for the choice of parameters used in the simulations. For a single particle in a 1D confining potential $U(x)$, the kinetic MC rule is approximated by the following discrete-time ($k = 0,1,2,\dots$) dynamics: \begin{subequations} \begin{eqnarray} \epsilon_{k+1} &=\epsilon_k+r_k+R\left(\frac{\epsilon_k}{\delta}\right)\,,\\ x_{k+1} &= x_k+\epsilon_k\,f(x_k,\epsilon_k)\,, \label{EQ_ApproxDynam} \end{eqnarray} \label{EQ_ApproxDynam_set} \end{subequations} where $r_k$ is a Gaussian random number with $\langle r_k\rangle=0$, $\langle r_k r_{k'}\rangle=\sigma^2\delta_{k,k'}$ and \begin{equation} f(x,\epsilon)=\min\left\{1,\exp\left(-\frac{U(x+\epsilon)-U(x)}{k_\text{B}T} \right)\right\} \label{Eq:Meto1} \end{equation} is the acceptance probability of the Metropolis filter in \eq{EqMetropolisFilter}. The reflecting boundary at $\epsilon=\pm \delta$ is denoted by $R$ (without specifying it rigorously), and $\delta_{k,k'}$ is the Kronecker delta.
Defining a set of rescaled coordinates, \begin{equation} t=k\, \delta, \qquad v(t)=\frac{\epsilon_k}{\delta}, \qquad \xi(t)=\frac{r_k}{\delta^2}, \qquad x(t)=x_k\,, \label{Eq:BallisticScaling} \end{equation} and taking the small-$\delta$ limit, we get \begin{align} \dot{v}(t)&=\xi(t)+R(v(t))+\mathcal{O}(\delta)\\ \dot{x}(t)&= v(t)\,f\big(x(t),v(t)\delta\big)+\mathcal{O}(\delta) \end{align} where $\langle \xi(t)\rangle=0$, $\langle \xi(t)\xi(t') \rangle=\lambda^{-1}\delta(t-t')+\mathcal{O}(\delta/\lambda)$, with\footnote{The persistence length used for the numerical results \eq{EQ:lambda} differs by a numerical constant in both one and two dimensions.} $\lambda=\delta^3/\sigma^2$, and we use a continuous limit of the Kronecker delta, $\delta_{k,k'} \simeq \delta\,\delta (t-t')$, with $\delta(x)$ being the Dirac delta function. Moreover, in the small-$\delta$ limit, a Taylor expansion gives \begin{equation*} \frac{U(x+v\delta)-U(x)}{k_\text{B}T} = \Gamma_1 v U'(x)\left[ 1 + \frac{\delta v}{2}\frac{ U''(x)}{U'(x)}+\cdots\right]\,, \end{equation*} where $\Gamma_1=\delta/k_\text{B}T$. As long as $\delta U''(x)/U'(x) \ll 1$, from \eq{Eq:Meto1}, we get $f[x(t),v(t)\delta]\simeq h[x(t),v(t)]$, with \begin{equation} h[x(t),v(t)] = \min\left\{1,\exp\left(-\Gamma_1 v U'(x)\right) \right\}\,. \label{eq:h def 0} \end{equation} This yields the continuous dynamics for $\delta \to 0$: \begin{subequations} \begin{align} \dot{v}(t)&=\xi(t)+ R(v(t))+\mathcal{O}(\delta)\,, \label{eq:Langevin cont v} \\ \dot{x}(t)&= v(t)\,h\big(x(t),v(t)\big)+\mathcal{O}(\delta)+\mathcal{O}\left(\delta\frac{ U''(x)}{U'(x)}\right)\,. \label{eq:Langevin cont x} \end{align} \label{eq:contiLangevin1} \end{subequations} Clearly, this rescaled dynamics depends on only two relevant parameters, namely \begin{equation} \Gamma_1 = \frac{\delta}{k_\text{B}T}\,, \text{ and } \lambda = \frac{\delta^3}{\sigma^2}\,.
\label{Eq:reducedParameters1} \end{equation} However, for this description to be valid, the subleading terms have to be negligible, leading to the following range of validity for the scaling in \eq{Eq:reducedParameters1}: \begin{itemize} \item[a)] $\delta\ll \lambda$ (from $\langle\xi(t)\xi(t')\rangle$), \item [b)] $\delta\ll \lambda^{-1/2}$ (from \eq{eq:Langevin cont v}), and \item[c)] $\delta\ll 1$ and $\delta \ll U'(x)/U''(x)$ (from \eq{eq:Langevin cont x}). \end{itemize} Therefore, the scaling expressed in the two-parameter reduction in \eq{Eq:reducedParameters1} breaks down a) in the passive limit, b) at very high persistence lengths (for constant $\phi$), and c) at high density (for constant $\lambda$). \subsection{Stochastic description of the MC dynamics\label{S: exact analysis of MC}} We begin the rigorous analysis by considering a single particle in a 1D confining potential $U(x)$. The kinetic MC dynamics is Markovian in the $(x,\epsilon)$ space. The discrete kinetic MC time is denoted by $k=0,1,2,\dots$. The conditional probability for a transition $(y,\epsilon')\rightarrow (x,\epsilon)$ in one time step is given by the Markov matrix \begin{equation} M(x,\epsilon | y,\epsilon') = g(\epsilon,\epsilon')W_{\epsilon}(x,y)\,, \label{eq:Master cond prob} \end{equation} where $\epsilon$ is sampled with probability $g(\epsilon,\epsilon')$ and $W_{\epsilon}(x,y)$ is due to the Metropolis filter.
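The approximate discrete-time dynamics of \eq{EQ_ApproxDynam_set} with the Metropolis filter of \eq{Eq:Meto1} can be sketched in a few lines of Python. This is a minimal illustration; the function name, the default parameter values, and the explicit folding rule for the reflecting boundary $R$ are our choices, not the production code behind the data in this work.

```python
import numpy as np

def kinetic_mc_1d(U, x0, n_steps, delta=0.1, sigma=0.05, beta=1.0, seed=None):
    """Sketch of the approximate single-particle dynamics: the position is
    updated with the current proposal eps through the Metropolis filter
    min{1, exp(-beta [U(x+eps)-U(x)])}, then eps performs a Gaussian
    random-walk step (variance sigma^2) reflected at +/- delta."""
    rng = np.random.default_rng(seed)
    x, eps = float(x0), 0.0
    xs = np.empty(n_steps)
    for k in range(n_steps):
        du = U(x + eps) - U(x)
        if du <= 0.0 or rng.random() < np.exp(-beta * du):
            x += eps                              # Metropolis filter
        xs[k] = x
        eps += rng.normal(0.0, sigma)
        while abs(eps) > delta:                   # reflecting boundary R
            eps = np.sign(eps) * 2.0 * delta - eps
    return xs
```

Since $|\epsilon_k|\le\delta$ at every step, single moves never exceed the maximum step size, as in the kinetic MC rule.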
It can be shown\cite{KlamserThesis} that \begin{align} g(\epsilon,\epsilon') =\frac{1}{2\delta}+&\frac{1}{\delta}\sum_{m=1}^{\infty}\exp\left[-\frac{\pi^2 \sigma^2 m^2}{8\delta^2}\right]\cr \cos&\left(\frac{m \pi}{2\delta}(\epsilon+\delta)\right)\cos\left(\frac{m \pi}{2\delta}(\epsilon'+\delta)\right), \label{DisplacementDistribution} \end{align} whereas the Metropolis filter in \eq{EqMetropolisFilter} yields \begin{equation} W_{\epsilon}(x, y)=f(y,\epsilon)\,\delta(x-y-\epsilon)+[1-f(y,\epsilon)]\,\delta(x-y), \label{CondiPropPos} \end{equation} with $f(x,\epsilon)$ as defined in \eq{Eq:Meto1}. Using this in the corresponding discrete-time Master equation \begin{equation*} P_{k+1}(x,\epsilon) = \int dy \int_{-\delta}^\delta d\epsilon' \, M(x,\epsilon | y, \epsilon') P_k(y,\epsilon') \end{equation*} gives \begin{align} P_{k+1}(x,\epsilon) =\int d\epsilon' g(\epsilon , \epsilon')P_k(x,\epsilon') +\int d\epsilon' g(\epsilon , \epsilon')\cr \left\{f(x-\epsilon,\epsilon)P_k(x-\epsilon,\epsilon')-f(x,\epsilon) P_k(x,\epsilon')\right\}\,. \label{eq:Master discrete} \end{align} This describes the exact time evolution of the probability $P_k(x,\epsilon)$ in the kinetic MC dynamics\footnote{The approximate dynamics in \eq{EQ_ApproxDynam_set} has a different Master equation. Nevertheless, both describe the same dynamics in the small-$\delta$ limit.}. In the passive limit, this satisfies the standard detailed-balance condition with respect to the Boltzmann distribution (see \app{S:PassiveDiffusive}). \subsubsection*{Rescaled coordinates} To determine the relevant number of control parameters, we use the scaled coordinates defined in \eq{Eq:BallisticScaling} in the Master equation (\eq{eq:Master discrete}): \begin{align} \tilde{P}_{t+\delta}(x,v) = &\int dv' \tilde{g}(v , v')\tilde{P}_t(x,v')\cr & +\int dv' \tilde{g}(v , v')\left\{f(x-\delta v,\delta v)\tilde{P}_t(x-\delta v, v')\right.\cr &\left.
-f(x,\delta v) \tilde{P}_t(x, v')\right\}\,, \label{EQ_rescaled_Master_1} \end{align} with $P_k(x,\epsilon) = \tilde{P}_t(x,v) / \delta$ and $g(\epsilon,\epsilon') = \tilde{g}(v,v') / \delta$. We have shown in \secti{SecApproximateDynamics} that the effective number of control parameters can be reduced by taking the small-$\delta$ limit. In this limit, $f(x,\delta v) \simeq h(x,v)$ (see \eq{eq:h def 0}). Using a Taylor expansion, this leads to \begin{align*} \frac{\partial}{\partial t}\tilde{P}_{t}(x,v) = &a_1(v)\partial_v \tilde{P}_t(x,v)+\frac{a_2(v)}{2}\partial_v^2 \tilde{P}_t(x,v) \cr &-\int dv' \tilde{g}(v , v')v \frac{\partial}{\partial x}\left\{h(x, v)\tilde{P}_t(x, v')\right\}+\cdots\,, \end{align*} where we used $\int dv'\, \tilde{g}(v,v')=1$ and \begin{equation*} a_m(v)=\frac{1}{\delta}\int dv' (v'-v)^m \tilde{g}(v,v')\,. \end{equation*} The $a_m(v)$ can be computed using \eq{DisplacementDistribution}. Alternatively, we can use the free case \begin{equation} \tilde{g}(v,v') =\frac{1}{\sqrt{2\pi \delta / \lambda } } \exp\left[-\frac{\left(v-v' \right)^2}{2 \delta / \lambda}\right] \label{DisplacementDistribution free} \end{equation} in combination with a zero-current condition on $\tilde{P}_t(x,v)$ for the reflecting boundary. This simplifies the calculation of the $a_m(v)$, giving $a_1(v)=0$ and $a_2(v)=1/\lambda$, which leads to the Fokker--Planck equation at small $\delta$, \begin{subequations} \begin{equation} \frac{\partial}{\partial t}\tilde{P}_{t}(x,v) = \frac{1}{2\lambda}\partial_v^2 \tilde{P}_t(x,v) - \frac{\partial}{\partial x}\left\{v h(x, v)\tilde{P}_t(x, v)\right\} \label{eq:cont active delta small 1} \end{equation} with the reflecting boundary condition \begin{equation} \frac{\partial}{\partial v}\tilde{P}_t(x,v)=0\qquad \textrm{for $v=\pm 1$}.
\end{equation} \label{EQ:FokkerPlanckActiveSinglePart} \end{subequations} This Fokker--Planck equation is equivalent to the coupled Langevin equations in \eq{eq:contiLangevin1} of the approximate dynamics. Consistent with the analysis in \secti{SecApproximateDynamics}, this Fokker--Planck equation depends on the two parameters ($\lambda$ and $\Gamma_1$) given in \eq{Eq:reducedParameters1}. However, as noted earlier, this is only true in a certain range of parameters where the subleading terms are negligible. In particular, the Fokker--Planck equation in \eq{EQ:FokkerPlanckActiveSinglePart} fails to describe the passive limit ($\lambda = 0$), where the relevant parameter differs\cite{KapferKrauth2015} from $\Gamma_1$. To describe the passive limit, a diffusive scaling with $\delta$ is required: \begin{equation} t=k\, \delta^2, \qquad v=\frac{\epsilon}{\delta}, \qquad x=x\,. \label{eq:diffusive scaling} \end{equation} Starting from \eq{eq:Master discrete} and following a procedure similar to the derivation of \eq{eq:cont active delta small 1} leads to the well-known Fokker--Planck equation for a passive particle in a potential, \begin{equation} \frac{\partial \tilde{P}_{t}(x)}{\partial t} =\frac{1}{k_\text{B}T}\frac{\partial}{\partial x}[U'(x)\tilde{P}_t(x)]+\frac{\partial^2}{\partial x^2}[\tilde{P}_t(x)]\,. \label{eq:FP passive single} \end{equation} A detailed derivation\cite{KlamserThesis} recovers the result that, in the passive limit, the inverse-power-law potential is described by a single control parameter, as is well known\cite{KapferKrauth2015}. In conclusion, the number of relevant parameters can be reduced when $\delta$ is small: the persistent limit has two relevant parameters ($\Gamma_1$ and $\lambda$), and the passive limit ($\lambda = 0$) has a single parameter, $\Gamma_0$ (see \eq{eq:Gamma0}). However, $\Gamma_1$ does not converge to $\Gamma_0$ for $\lambda \to 0$. Therefore, the order of the limits $\delta \to 0$ and $\lambda \to 0$ cannot be exchanged.
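As a numerical sanity check of the transition density in \eq{DisplacementDistribution}, the following sketch evaluates the truncated cosine series (the truncation level and the tolerances are illustrative choices). The $m\geq 1$ terms integrate to zero over $[-\delta,\delta]$, so the kernel is normalized, and it is manifestly symmetric under $\epsilon \leftrightarrow \epsilon'$.

```python
import numpy as np

def g_series(eps, eps_p, delta, sigma, m_max=100):
    """Truncated eigenfunction expansion of the transition density
    g(eps, eps'): a Gaussian step of variance sigma^2 reflected at
    +/- delta (the truncation m_max is an illustrative choice)."""
    m = np.arange(1, m_max + 1)[:, None]
    e = np.atleast_1d(np.asarray(eps, dtype=float))[None, :]
    terms = (np.exp(-np.pi**2 * sigma**2 * m**2 / (8.0 * delta**2))
             * np.cos(m * np.pi * (e + delta) / (2.0 * delta))
             * np.cos(m * np.pi * (eps_p + delta) / (2.0 * delta)))
    return (0.5 + terms.sum(axis=0)) / delta
```

For $\sigma \ll \delta$ the kernel approaches the free Gaussian of \eq{DisplacementDistribution free}; for $\sigma \gtrsim \delta$ it flattens towards the uniform density $1/(2\delta)$.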
\subsection{Multi-particle case\label{Multi-particle conti limit KMC}} The discussion in \secti{S: exact analysis of MC} can be generalized to the multi-particle case. The single-particle Fokker--Planck equation in \eq{eq:cont active delta small 1} generalizes to the $N$-particle Fokker--Planck equation \begin{equation} \frac{\partial}{\partial t}P_{t}[\textbf{x,v}] = \frac{1}{2\lambda}\sum_i\frac{\partial^2}{ \partial v_i^2} P_{t}[\textbf{x,v}] - \sum_i\frac{\partial}{\partial x_i}\left\{v_i h_i(\textbf{x}, v_i)P_{t}[\textbf{x,v}]\right\}\,, \label{eq:FP mutiparticle active} \end{equation} where particles interact via the inter-particle potential $U(\VEC{x})$, with $ \VEC{x} =\left\{ x_1,\cdots,x_N\right\}$ and \begin{equation*} h_i(\textbf{x}, v_i)=\min\left\{1,\exp(\Gamma_1 v_i F_i[\textbf{x}]) \right\}\,, \end{equation*} with the force on particle $i$ \begin{equation} F_i[\textbf{x}]=-\frac{\partial U[\textbf{x}]}{\partial x_i}\,. \end{equation} The reflecting boundary condition in the velocity space is \begin{equation} \frac{\partial}{\partial v_i}P_{t}[\textbf{x,v}]=0\qquad \textrm{for $v_i=\pm 1$.} \end{equation} The Fokker--Planck equation \eqref{eq:FP passive single} in the passive limit has a very similar multi-particle generalization. \subsubsection*{Power-law interaction potential} The numerical studies presented in this work are for the inverse-power-law potential in \eq{eq:Potential}. The force on particle $i$ is \begin{equation} F_i[\textbf{x}]=n u_0\gamma^n\sum_{j\ne i}\frac{\text{sgn}(x_i-x_j)}{ |x_i-x_j|^{n+1}}\,. \label{eq:ForcePoserLaw} \end{equation} For this specific choice, two dimensionless parameters characterize the persistent many-particle behavior.
These can be obtained from the mean inter-particle distance \begin{equation} d = \frac{L}{N} = \frac{\gamma}{\phi}\,, \qquad \text{for 1D} \end{equation} with $L$ the system size, or, equivalently, from the dimensionless density \begin{equation} \phi = \frac{\gamma N}{L}\,, \qquad \text{for 1D.} \end{equation} Then, using the scaled coordinates \begin{equation} \tilde{t}=\frac{t}{d}, \qquad \tilde{v}(\tilde{t})=v(t), \qquad \tilde{x}(\tilde{t})=\frac{x(t)}{d}, \end{equation} in \eq{eq:FP mutiparticle active}, we obtain the rescaled Fokker--Planck equation \begin{equation} \frac{\partial}{\partial \tilde{t}}\tilde{P}_{\tilde{t}}[\tilde{\textbf{x}},\tilde{\mathbf{v}}] = \frac{1}{2\tilde{\lambda}}\sum_i\frac{\partial^2}{ \partial \tilde{v}_i^2} \tilde{P}_{\tilde{t}}[\tilde{\textbf{x}},\tilde{\textbf{v}}] - \sum_i\frac{\partial}{\partial \tilde{x}_i}\left\{\tilde{v}_i \tilde{h}_i(\tilde{\textbf{x}}, \tilde{v}_i)\tilde{P}_{\tilde{t}}[\tilde{\textbf{x}},\tilde{\mathbf{v}} ]\right\}\,, \label{eq:FP mutiparticle active d scaled} \end{equation} with \begin{equation} \tilde{h}_i(\tilde{\VEC{x}}, \tilde{v}_i)=\min\left\{1,\exp\left(\Gamma \tilde{v}_i \sum_{j\ne i}\frac{\text{sgn}(\tilde{x}_i-\tilde{x}_j)}{| \tilde{x}_i-\tilde{x}_j|^{n+1}}\right)\right\}\,, \end{equation} and \begin{equation*} \tilde{\lambda}=\frac{\lambda}{d}\,,\quad \text{and} \quad \Gamma=\frac{n u_0 \gamma^n \delta}{k_\text{B}Td^{n+1}}\,. \label{eq: active gamma} \end{equation*} These two parameters ($\tilde{\lambda}$ and $\Gamma$) govern the scaled many-particle probability distribution. It is useful to express these parameters in terms of $\phi$, which leads to \begin{equation} \tilde{\lambda}=\frac{\delta^3}{\sigma^2\gamma}\phi\,\,\quad \text{and}\quad \Gamma=\frac{u_0}{k_\text{B}T}\frac{n\delta}{\gamma}\phi^{n+1}\,. \label{EQ:scale1} \end{equation} The generalization to higher dimensions is straightforward.
For example, in 2D, where $\phi = N\gamma^2/V$ and $d = \sqrt{V/(N\pi)} = \gamma/\sqrt{\pi \phi}$, the two relevant parameters are \begin{equation} \tilde{\lambda}=\frac{\delta^3}{\sigma^2\gamma}\sqrt{\pi \phi}\,\,\quad \text{and}\quad \Gamma=\frac{u_0}{k_\text{B}T}\frac{n\delta}{\gamma}(\pi \phi)^{(n+1)/2}\,. \label{EQ:scale2222} \end{equation} In this form, the dimensionality enters primarily through the $\phi$-dependence of these two parameters (compare with \eq{EQ:scale1}). The validity of these reduced parameter sets is confirmed by numerical simulations (see \fig{F_ScalingActiveScaling}). \begin{figure} \centering \includegraphics[width = 0.4 \textwidth]{ScalingActiveScaling.pdf} \caption{Numerical verification of the reduced set of relevant parameters in the active limit (finite $\lambda$). \subcap{a}: 1D single-particle case. Histogram of the particle position in a confining potential $U(x) = u_0 \gamma^6 ((L+x)^{-6} + (L-x)^{-6})$. Data for different $\delta$, $\lambda$ and $\beta$, but constant $\lambda/L = 2.5$ and $\beta u_0 \delta \gamma^6/L^7 = 6.209 \times 10^{-6}$. \subcap{b}:~Two-dimensional many-particle case. Pair-correlation function $g(r)$ for $N = 16$ and $n = 6$. Data for different $\delta$, $\lambda$ and $\beta$, at constant $\lambda/d = 11.1$ and $\beta u_0 \delta \gamma^6/d^7 = 0.209$. (All data for $\gamma = 1$, $u_0 = 1$.) } \label{F_ScalingActiveScaling} \end{figure} As discussed earlier in the single-particle case, this parameter reduction does not extend to the passive regime.
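The invariance verified in \fig{F_ScalingActiveScaling} can also be checked directly on the formulas: in \eq{EQ:scale2222}, rescaling $\delta$ at fixed $\phi$ leaves $(\tilde\lambda, \Gamma)$ unchanged provided $\sigma^2\propto\delta^3$ and $\beta u_0 \propto 1/\delta$. A minimal sketch (the function name is ours; the formulas are those of the 2D rescaled description):

```python
import math

def reduced_parameters_2d(delta, sigma, phi, n, beta_u0, gamma=1.0):
    """Reduced parameter pair (lambda_tilde, Gamma) in two dimensions:
    lambda_tilde = delta^3/(sigma^2 gamma) * sqrt(pi phi) and
    Gamma = beta u_0 * n delta/gamma * (pi phi)^((n+1)/2)."""
    lam_tilde = delta**3 / (sigma**2 * gamma) * math.sqrt(math.pi * phi)
    Gamma = beta_u0 * n * delta / gamma * (math.pi * phi) ** ((n + 1) / 2)
    return lam_tilde, Gamma
```

Halving $\delta$ while scaling $\sigma\propto\delta^{3/2}$ and $\beta u_0\propto 1/\delta$ therefore leaves the steady state invariant, which is the type of collapse shown in the figure.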
In this case, a diffusive scaling is required, \begin{equation} \tilde{t}=\frac{t}{d^2}, \qquad \tilde{x}(\tilde{t})=\frac{x(t)}{d}\,, \end{equation} in terms of which the Fokker--Planck equation becomes\cite{KlamserThesis} \begin{equation} \frac{\partial}{\partial \tilde{t}}\tilde{P}_{\tilde{t}}[\tilde{\textbf{x}}] = -\frac{1}{6}\Gamma_0\sum_i\frac{\partial}{\partial \tilde{x}_i}\left\{\sum_{j\ne i}\frac{1}{( \tilde{x}_i-\tilde{x}_j)^7} P_{\tilde{t}}[\tilde{\textbf{x}}]\right\}+\frac{1}{6}\sum_i\frac{\partial^2}{ \partial \tilde{x}_i^2} P_{\tilde{t}}[\tilde{\textbf{x}}] \,, \label{eq:FP mutiparticle passive double scaled} \end{equation} with \begin{equation} \Gamma_0= \begin{cases} \frac{n u_0 }{k_\text{B} T} \phi^n \,, \text{ in 1D} \\ \frac{nu_0}{k_\text{B} T}(\pi \phi)^{n/2}\,, \text{ in 2D} \,. \end{cases} \label{eq:Gamma0} \end{equation} Therefore, the probability only depends\cite{KapferKrauth2015} on $\Gamma_0$, which differs from $\Gamma$. In summary, in the small-$\delta$ limit, the systems described in this work are governed by two relevant parameters in the active regime and by a single relevant parameter in the passive regime. There is no smooth transition from one regime to the other when $\delta$ is small, because the limits $\delta \to 0$ and $\lambda \to 0$ cannot be exchanged. Therefore, a phase diagram covering both the passive and the active regimes is not possible in a two-parameter space. A complete phase diagram is three-dimensional, spanned by the rescaled parameters $\delta/d$, $\lambda/d$, and $\Gamma_0$ of \eq{eq:Gamma0}.
Expressed in terms of $\phi$, these parameters become \begin{align*} \frac{\delta}{\gamma}\phi\,, \frac{\lambda}{\gamma}\phi\,, &\text{ and } \Gamma_0\,, \quad \text{for 1D}\\ \frac{\delta}{\gamma}\sqrt{\pi\phi}\,, \frac{\lambda}{\gamma}\sqrt{\pi\phi}\,, &\text{ and } \Gamma_0\,, \quad \text{for 2D.} \end{align*} \subsubsection*{Continuous-time description} In the analysis presented so far, the Fokker--Planck description was obtained in the small-$\delta$ limit. However, it is possible\cite{KlamserThesis} to define a continuous-time description of the MC dynamics for arbitrary values of $\delta$. This can be obtained using the Kramers--Moyal expansion\cite{StochasticProcessesVanKampen} of the Master equation \eq{eq:Master discrete}, which leads to the Fokker--Planck equation \begin{align} \partial_t P_t(x,\epsilon) = &\frac{\sigma^2}{2} \frac{\partial^2}{\partial\epsilon^2}P_t(x,\epsilon) + f(x-\epsilon,\epsilon)P_t(x-\epsilon,\epsilon)\cr &-f(x,\epsilon)P_t(x,\epsilon)\,, \label{eq:Fokker--Planck general} \end{align} with the reflecting boundary at $\epsilon = \pm\delta$ imposed as a zero-current condition, \begin{equation} \frac{\partial P_t(x,\epsilon)}{\partial\epsilon}\bigg \vert_{\epsilon=\pm\delta} = 0\,. \label{eq:Fokker--Planck general_boundary} \end{equation} Taking the small-$\delta$ limit of \eq{eq:Fokker--Planck general}, we recover the scaling of \eq{EQ:scale2222} that was obtained from the discrete-time Master equation \eq{eq:Master discrete}. \subsection{Active random acceleration process} An intriguing aspect of the dimensional reduction of the phase space is the singular nature of the passive case on a two-parameter plane for $\delta \to 0$. This singular behavior of the rescaled parameters is not due to the discrete nature of the MC dynamics, as it also appears in the continuous-time description discussed above. For the kinetic MC, the singularity appears in the small-$\delta$ limit.
However, it can also appear in other stochastic models of active matter, even without taking a vanishing limit of a parameter similar to $\delta$. To illustrate this point, we define a continuous model which closely resembles the kinetic MC dynamics. This model, which we refer to as the active random acceleration process\cite{RandomAccelerationProcess}, is defined in continuous time by the coupled Langevin equations \begin{subequations} \begin{align} \dot{\epsilon}_t &= r_t + R\left(\frac{\epsilon_t}{\delta}\right)\,,\label{eq: eps Langevin ARAP}\\ \dot{x}_t &= v_0\epsilon_t + \frac{1}{k_\text{B}T} \, F(x_t) + s_t\,, \label{eq: x Langevin ARAP} \end{align} \label{Eq:ARAP} \end{subequations} where $x_t$ is the position of the self-propelled particle at time $t$. The two noise terms, $r_t$ and $s_t$, are Gaussian white noises with zero mean and covariances $\langle r_t r_{t'}\rangle = \sigma^2 \delta(t-t')$ and $\langle s_t s_{t'}\rangle = 2D \delta(t-t')$, respectively. The term $R$ denotes the reflecting boundary condition at $\pm \delta$. The parameter $v_0$ characterizes the strength of the self-propulsion\footnote{Putting $1/k_\text{B}T$ in the force term of \eq{eq: x Langevin ARAP} is motivated by the aim of comparing with the kinetic MC results.}. As for the kinetic MC dynamics, in the scaled coordinates $t\to t d / \delta$, $\epsilon \to v \delta$, $x \to x d$, the dynamics \eq{Eq:ARAP} is described by the Fokker--Planck equation \begin{align} \label{eq: rescaled ARAP - dimensionless FP} \frac{\partial}{\partial t}P_t(x,v) = \frac{d}{2\lambda} \frac{\partial^2}{\partial v^2} P_t(x,v) + \frac{D}{\delta d} \frac{\partial^2}{\partial x^2} P_t(x,v)\cr - \frac{1}{\delta k_\text{B}T} \frac{\partial}{\partial x} \left[ F(xd) P_t(x,v)\right] - v_0\frac{\partial}{\partial x} [v P_t(x,v)]\,. \end{align} Its multi-particle generalization is straightforward.
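A straightforward way to simulate \eq{Eq:ARAP} is an Euler--Maruyama discretization. The sketch below shows a single integration step (an illustrative scheme with our own function name and argument conventions, not the scheme behind any published data); the reflecting boundary $R$ is implemented by folding $\epsilon$ back into $[-\delta,\delta]$.

```python
import numpy as np

def arap_step(x, eps, dt, delta, sigma, v0, D, beta, force, rng):
    """One Euler--Maruyama step of the active random acceleration process:
    eps performs Brownian motion (strength sigma) reflected at +/- delta,
    and x is advected by v0*eps plus the force term beta*F(x) with
    translational noise of strength sqrt(2 D).  Here beta = 1/k_B T."""
    eps = eps + sigma * np.sqrt(dt) * rng.normal()
    while abs(eps) > delta:                    # reflecting boundary R
        eps = np.sign(eps) * 2.0 * delta - eps
    x = x + (v0 * eps + beta * force(x)) * dt \
          + np.sqrt(2.0 * D * dt) * rng.normal()
    return x, eps
```

Setting $v_0 = 0$ decouples the two coordinates, and the $x$-dynamics reduces to overdamped passive Langevin dynamics, in line with the discussion of the passive limit below.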
For the inter-particle force in \eq{eq:ForcePoserLaw}, the steady-state probability distribution is controlled by three parameters, \begin{align} A= \frac{D}{\delta d} \frac{\lambda}{d}\,,\, B = v_0 \frac{\lambda}{d}\,, \text{ and }\,C = \frac{n u_0 \gamma^n}{k_{\text{B}}T \delta d^{n+1}} \frac{\lambda}{d}\,. \end{align} The passive limit corresponds to $v_0 = 0$, where the $x$ and $v$ coordinates decouple and the probability factorizes, $P(x,v) = P(x)P(v)$. In this case, $P(x)$ follows the usual equilibrium Fokker--Planck equation, and the steady-state $P(x)$ is determined by a single control parameter, $C/A$, which coincides with $\Gamma_0$ for $D = 1$. Importantly, $P(x)\propto \exp(- C u(x)/A)$, where $u(x)$ is the rescaled potential; setting $D = 0$ therefore makes the probability distribution uniform and independent of the inter-particle interaction, which corresponds to an infinite temperature. In the active limit, there are three control parameters, $A$, $B$, and $C$. The only way to reduce the number of parameters, while keeping the particles active and interacting, is to set $D = 0$ and thereby $A = 0$. However, this would make the finite-temperature passive limit inaccessible, as discussed above. This shows that the singular behavior encountered for the MC dynamics when reducing the number of control parameters to two also occurs in the active random acceleration process. A similar singular behavior is present in other models: we have checked\cite{KlamserThesis} that the same statement applies to the active Ornstein--Uhlenbeck process. From this analysis, we conclude that in these classes of active dynamics, at least three independent relevant parameters are required to describe the full phase diagram including the passive regime. \section{Conclusions\label{Conclusions}} In this work, we presented a kinetic Monte Carlo perspective on two-dimensional active matter.
Within this approach, we established (extending our earlier work\cite{KlKaKr2018}) the presence of the liquid, hexatic, and solid active-matter phases from the characteristic decay laws of positional and orientational order (see \tab{tab:PhaseCharacteristics}). We also ascertained the continuity of the active-matter phases in the passive limit and recovered the phases of the corresponding equilibrium system. We have not tested very soft repulsive inverse-power-law potentials ($n < 6$ in \eq{eq:Potential}), but expect on the basis of our findings that for sufficiently steep repulsive inverse-power-law potentials the two-step melting behavior of the equilibrium system is maintained up to high activities, and possibly up to infinite persistence lengths. The stability of the intermediate hexatic phase in the high-activity regime---far above the linear-response regime---is intriguing. It may be due to an underlying symmetry that is yet to be understood. We have not addressed here the nature of the melting transitions in the active region, that is, the question of whether the liquid--hexatic phase transition is of first order or of Kosterlitz--Thouless type\cite{KT1,KosterlitzThouless1973} (as in the equilibrium system\cite{KapferKrauth2015}), and whether this theory continues to apply at all to the hexatic--solid transition in active systems. Besides the melting phase transitions, we have established the existence of the MIPS phase and identified it as a coexistence between a liquid and a gas: two phases with exponential decays of both correlation functions, but with typically very different densities. The coexistence region was identified for a wide range of interaction potentials, including the hard-disk limit. It remains disjoint from the melting lines for all values of $n$ (possibly excluding very soft potentials).
The analogy of MIPS with the liquid--gas coexistence in equilibrium indicates the existence\cite{SpeckCritical} of a critical point, although we have not studied it in detail. We also discussed the dimensionality of the phase diagram. We showed that the qualitative phase behavior is robust against changes of the maximum step size $\delta$. The steady state reached by the kinetic MC dynamics is fully described by the density, the persistence length, and the maximum step size $\delta$, although a two-parameter scaling describes the MIPS phase and the melting transitions at high density $\phi$ for small $\delta$. We argued in this direction using the stochastic description of the kinetic MC dynamics and its formulation in terms of a Langevin dynamics. The common scaling with $\delta$ of the melting transitions and of MIPS breaks down in the vicinity of the equilibrium phase-transition point; this breakdown is necessary to recover the one-parameter scaling (for an inverse-power-law potential) in the passive limit. The detailed phase diagrams found in the present work once more illustrate the rich collective properties in non-equilibrium physics, and the current limits of our understanding of these models and, more generally, of the physics of active matter. \begin{acknowledgments} We thank H. Löwen and L. Berthier for helpful discussions. W.K. acknowledges support from the Alexander von Humboldt Foundation. \end{acknowledgments}
\section{Introduction} The generation and the growth of water waves by wind is an old problem in geophysical fluid dynamics, with a wide range of applications, and challenges that have occupied the community for at least 150 years \citep{helmholtz1868, kelvin1871}. \citet{jeffreys25} suggested that wind waves grow because the pressure on the windward face of a crest is greater than the pressure on the leeward face of that crest, an ansatz he called the `sheltering hypothesis'. The modern foundations of a theory were laid down by \citet{phillips57} and \citet{miles57}, summaries of which can be found in the books of \citet{phillipsbook} and \citet{janssenbook}. We consider a layer of water of infinite depth over which a turbulent wind blows (Fig.~\ref{schema}). The air pressure fluctuations generate ripples on the water surface \citep{phillips57}. However, because the generation time scale is much smaller than the ripple period, we average the turbulent fluctuations over the longest period and model the mean wind field as a parallel inviscid steady flow, $\boldsymbol{U} = U(z)\ \boldsymbol{\hat{x}}$, where $U$ is a continuous and monotonic function of the vertical coordinate, $z$, and $\boldsymbol{\hat{x}}$ is a horizontal unit vector. Following \citet{miles57}, we study the linear stability of the wind field under perturbations induced by the tiny waves -- which we call ripples -- generated by turbulent fluctuations on the water surface, including gravity, $g$, and surface tension, $\sigma$. The shear is efficiently dissipated in the water, so that $U(z\le 0)=0$. We restrict our analysis to two-dimensional incompressible perturbations, assuming that Squire's theorem holds (we check it a posteriori in Appendix \ref{Appsquire}). The amplitude of a wave-induced perturbation as a function of the vertical variable, $z$, is determined by the Rayleigh equation, which expresses the conservation of vorticity along the streamlines \citep{drazin-reid}. 
Key quantities to determine are the Fourier components of the aerodynamic pressure, which \citet{miles57} wrote as \begin{equation} \hat{p}_0(0^+)\equiv \dair V^2 (\alpha + i \beta) k \hat{\eta}_0,\quad\text{with}\quad\alpha,\beta =O(1), \label{alphabeta} \end{equation} where $\dair$ is the density of air, $V$ is a characteristic wind speed and $\hat{\eta}_0$ is the amplitude of a Fourier mode -- characterized by the wavenumber $k$ -- of the displacement of the water surface; the subscript $0$ indicates that these are leading-order quantities in the expansion in powers of the air/water density ratio, \begin{equation} {\frac{\dair}{\dwat}\equiv\epsilon\ll1.} \end{equation} We emphasize that equation (\ref{alphabeta}) is a generalization of the Jeffreys sheltering hypothesis, which states that the aerodynamic pressure is in phase with the wave slope, and thus $\alpha =0$. The calculation of $\alpha$ and $\beta$ involves the solution of the Rayleigh equation, which exhibits singular behaviour at the critical level $z=\zc$, where the wind velocity, $U(z)$, equals the phase speed of water waves. The problem has been studied extensively over the last 60 years with a focus on $\beta$, because it is proportional to the growth rate of the wave. \citet{conte-miles}, \citet{hughes-reid} and \citet{beji-nadaoka} solved the Rayleigh equation numerically for various wind profiles, but an exact analytical solution exists only for an exponential profile -- a crude approximation of the mean turbulent wind. Moreover, it involves a hypergeometric function from which it is difficult to extract the maximum growth rate \citep{young-wolfe}. \citet{miles93} revisited his original work using the logarithmic wind profile and including the effects of turbulence. Using a variational method, confirmed by matched asymptotic expansions, he found an approximate formula for $\beta$ and fitted a subset of the experimental growth rates collated by \citet{plant82}. 
The coefficient $\alpha$ has generally been assumed to be negative and neglected, apparently having been computed only by \citet{conte-miles} and \citet{milespart2}. However, we have been unable to demonstrate that $\alpha<0$, and our analysis shows that such an assumption is false. \citet{lighthill62} showed that the energy transfer from the wind to the waves occurs at the critical level, which has been observed, for example, by \citet{hristov-nature} in the range $16<c / u_{\star}<40$. Furthermore, in order to approximate the growth rate of the Miles instability, \citet{carpenter-et-al17} modelled the air--water interface and the critical level as interacting vortex sheets. However, their minimal model does not yield the dependence of the maximum growth rate on the physical parameters. In \S \ref{model}, we describe the normal modes of the air--water interface in the presence of a wind field. We then recover the results of Miles' theory perturbatively using a small air--water density ratio, $\epsilon\ll1$. In \S \ref{longwaves}, we use asymptotic methods to solve the Rayleigh equation for waves whose wavelength is much larger than the characteristic length scale of a given wind profile. We note that, in an Appendix to \citet{morland-saffman}, Miles used such a long wave approximation to simplify the exact solution for an exponential wind profile, but because we work directly with the Rayleigh equation, our approach is more general. We check the accuracy of our asymptotic expressions numerically, using a variant of the method proposed by \citet{hughes-reid}. In \S \ref{appli}, we obtain explicit expressions for $\alpha$ and $\beta$, and show that $\alpha$ can be non-negative. Next, we determine the growth rate of the Miles instability, and fit the entire range of the data compiled by \citet{plant82} using the logarithmic profile.
In \S \ref{strongwind}, we study the strong wind limit introduced by \citet{young-wolfe}. We find that the fastest growing wave is characterized by $\alpha=0$ and is therefore accompanied by an aerodynamic pressure that is proportional to the wave slope, consistent with the Jeffreys sheltering hypothesis. We note that this result also holds approximately for moderate wind. We conclude in \S \ref{ccl}. \section{Wind-wave model} \label{model} Ripples on the water surface create small perturbations in the wind field. The perturbed velocity field is $\boldsymbol{U} + \boldsymbol{u}$, with $\boldsymbol{u}=u(x,z,t)\ \boldsymbol{\hat{x}} + w(x,z,t)\ \boldsymbol{\hat{z}}$, where $t$ is time. The incompressibility condition, $\boldsymbol{\nabla}\cdot\boldsymbol{u}=0$, allows us to introduce the streamfunction, $\psi(x,z,t)$, such that $u =\partial_z\psi$ and $w =-\partial_x\psi$. \subsection{Normal modes} We consider a surface displacement field of the form $\eta(x,t) = \Re\big\{\hat{\eta}\ e^{ik(x-c t)}\big\}$, where $c$ is a complex phase speed to be determined. The $x$-average over a wavelength, $2\pi/k$, is denoted by an overbar. Thus, since $\overline{\eta(x,t)}=0$, the unperturbed water surface, $z=0$, corresponds to the mean water level. Following \citet{young-wolfe}, we define the wave energy, $\E\equiv\K+\V$, as the sum of the mean kinetic energy per unit area, $\K$, and the mean potential energy per unit area, $\V$, which are given by \begin{equation} \K(t)\equiv\ \int_{-\infty}^{0^-} dz\ \frac{\dwat \overline{|\boldsymbol{u}|^2}}{2} + \int_{0^+}^{+\infty} dz\ \frac{\dair \overline{|\boldsymbol{u}|^2}}{2} \end{equation} and \begin{equation} \V(t)\equiv\frac{1}{2}\overline{\Big\{ (\dwat -\dair)g\ \eta^2 + \sigma\ (\partial_x\eta)^2\Big\}}, \end{equation} where $\dwat$ is the density of water. 
\textcolor{black}{The rate of change of wave energy is \begin{equation} \frac{d \E}{dt} = \int_{0^+}^{+\infty} dz\ \tau(z,t) U'(z), \label{NRJconserv} \end{equation} and $\tau\equiv-\dair\ \overline{uw}$ is the wave-induced Reynolds stress.} Following the canonical procedure \citep[e.g.,][]{drazin-reid}, we write the streamfunction in terms of normal modes as $\psi(x,z,t) = \Re\big\{\hat{\psi}(z)\ e^{ik(x-c t)}\big\}$. This leads to the Rayleigh equation \begin{equation} \lin\hat{\psi} =0,\quad\text{with}\quad \lin(z,c)=\big[U(z)-c\big]\bigg[\frac{d^2}{dz^2}- k^2\bigg] - U''(z),\label{Rayleigh} \end{equation} where the prime denotes differentiation with respect to $z$. The solution of equation (\ref{Rayleigh}) in the water, where there is no shear, is $\hat{\psi}(z\le 0) = \hat{\psi}(z=0)\ e^{kz} $, which we use to derive the boundary condition at $z=0^+$ and obtain \citep{morland-saffman} \begin{equation} \Big(kc^2-g - \frac{\sigma}{\dwat} k^2\Big)\hat{\psi}(0)=\epsilon\Big\{ c^2 \hat{\psi}' + (cU'-g)\hat{\psi}\Big\}\Big|_{0^+}.\label{BC} \end{equation} \textcolor{black}{\subsection{Perturbative resolution of the eigenvalue problem}} \label{perturbationtheory} Following \citet{janssenbook} and \citet{young-wolfe}, we expand the eigenvalue and the eigenfunction in the air in a power series in $\epsilon\ll 1$ as \refstepcounter{equation} $$ c=c_0+\epsilon\ c_1+ ...\qquad\text{and}\qquad\hat{\psi}^{\rm{a}}=\hat{\psi}_0+\epsilon\ \hat{\psi}_1+ ...\ , \label{expansions} \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ where `$\rm{a}$' denotes `air'. Similarly, the amplitude of the surface displacement, $\hat{\eta}$, and the amplitude of the perturbation pressure in the air, $\hat{p}^{\rm{a}}=\hat{p}^{\rm{a}}(z)$, are \refstepcounter{equation} $$ \hat{\eta}=\hat{\eta}_0+\epsilon\ \hat{\eta}_1+ ...\qquad\text{and}\qquad\hat{p}^{\rm{a}}=\hat{p}_0+\epsilon\ \hat{p}_1+ ...\ . 
\eqno{(\theequation{\mathit{a},\mathit{b}})} $$ To leading order the ripples are not affected by the wind, but they induce a neutral perturbation on the air flow, determined by \begin{equation} \lin(z,c_0)\ \hat{\psi}_0(z) =0,\qquad z\ge 0. \label{L_0} \end{equation} The leading-order eigenvalue, $c_0$, is by definition the phase speed of water waves. Imposition of the boundary condition (\ref{BC}) yields the dispersion relation \begin{equation} c_0(k)=\frac{\cm}{\sqrt{2}}\sqrt{\frac{\kc}{k}+\frac{k}{\kc}}\ , \label{surf1} \end{equation} where \refstepcounter{equation} $$ \cm\equiv \bigg[\frac{4\sigma g}{\dwat}\bigg]^{\frac{1}{4}}\qquad\text{and}\qquad \kc\equiv\sqrt{\frac{\dwat g}{\sigma}}. \eqno{(\theequation{\mathit{a},\mathit{b}})}\label{surf2} $$ The minimum phase speed, $\cm$, arises from the competition between surface tension and gravitational forces and occurs when $k=\kc$; the capillary wavenumber. Following \citet{phillipsbook}, the leading-order amplitude of the aerodynamic pressure (cf. Eq. \ref{alphabeta}) is \begin{equation} \hat{p}_0(0^+)= \dwat c_0^2 (\mu+ i\gamma) k \hat{\eta}_0,\quad\text{with}\quad\mu, \gamma =O(\epsilon). \label{phillips} \end{equation} The phase difference between the aerodynamic pressure and the wave slope is proportional to $\mu$, which can be considered as the deviation from Jeffreys' sheltering hypothesis. Because \begin{equation} \hat{p}_0 = \dair\ \mathrsfso{W}(\hat{\psi}_0, U-c_0), \end{equation} where $\mathrsfso{W}$ is the Wronskian, the eigenvalue at the next order -- determined by the boundary condition (\ref{BC}) -- can be written as \begin{equation} \epsilon\ c_1 = \frac{c_0}{2}\Bigg( \mu +i \gamma -\frac{\epsilon}{1+\big[\frac{k}{\kc}\big]^2}\Bigg). \label{c1} \end{equation} Hence, $\mu$ is twice the wind-dependent relative change of the phase speed of water waves due to the coupling with the air, and $\gamma$ is the energy growth rate, normalized by the angular frequency of water waves. 
The last term in equation (\ref{c1}), which did not appear in \citet{miles57}, is the difference between the phase speed of interfacial waves and the phase speed of surface waves. Moreover, if we expand the wave energy as \begin{equation} \E = \E_0+\epsilon\ \E_1 + ...\ , \end{equation} we find \begin{equation} \gamma= \frac{1}{k c_0} \frac{\epsilon}{\E_0}\frac{d\E_1}{dt}\bigg|_{t=0}. \label{gamma1} \end{equation} Now, comparing equation (\ref{gamma1}) with (\ref{NRJconserv}), we retrieve the result of \citet{janssenbook}: \begin{equation} \gamma=\frac{\hat{\tau}_0(z=0^+)}{k\E_0},\quad\text{where}\quad \hat{\tau}_0(z) = - \dair \ \frac{k}{2}\ \Im\big\{ \hat{\psi}_0(z) \hat{\psi}'^{*}_0(z)\big\} \label{tau0} \end{equation} is the leading-order amplitude of $\tau(z,t)$, in which the star denotes complex conjugation, and $\E_0$ is the energy (per unit area) of water waves. This demonstrates that water waves extract energy from the wind through the work of the wave-induced Reynolds stress.
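The dispersion relation (\ref{surf1})--(\ref{surf2}) is straightforward to evaluate numerically. The short sketch below is illustrative only; the values $g=9.81$ m s$^{-2}$, $\dwat=1000$ kg m$^{-3}$ and $\sigma=0.072$ N m$^{-1}$ are assumed standard air--water parameters, not values taken from the text.

```python
import math

G_ACC, RHO_W, SIGMA = 9.81, 1000.0, 0.072   # assumed air--water values (SI units)

def c0(k):
    """Phase speed of capillary-gravity waves, Eqs. (surf1)-(surf2)."""
    c_m = (4.0 * SIGMA * G_ACC / RHO_W) ** 0.25   # minimum phase speed c_m
    k_c = math.sqrt(RHO_W * G_ACC / SIGMA)        # capillary wavenumber k_c
    return (c_m / math.sqrt(2.0)) * math.sqrt(k_c / k + k / k_c)
```

Squaring (\ref{surf1}) gives the familiar form $c_0^2 = g/k + \sigma k/\dwat$, which provides an independent check, and $c_0$ attains its minimum $\cm\approx0.23$ m s$^{-1}$ at $k=\kc\approx369$ m$^{-1}$ for these values.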
\subsection{Boundary-value problems} \begin{table} \begin{center} \resizebox{\columnwidth}{!}{\begin{tabular} {c c c c} \hline \raisebox{1.5ex}{\textcolor{white}{l}}&Gravity waves&Capillary waves& Capillary-gravity waves\\[0.5ex] \hline \raisebox{2ex}{\textcolor{white}{l}}Control parameters&$Fr\equiv \frac{V}{\sqrt{gL}}$ &$We\equiv \frac{\dwat V^2 L}{\sigma}$&$\cmd\equiv \frac{\cm}{V}$ and $\kcd\equiv \kc L$ \\ [1ex] \hline \raisebox{2ex}{\textcolor{white}{l}}$\C(\kd)$&$\frac{1}{Fr\sqrt{\kd}}$ &$ \sqrt{\frac{\kd}{We}}$& $\frac{\cmd}{\sqrt{2}}\sqrt{\frac{\kcd}{\kd}+\frac{\kd}{\kcd}}$ \\ [1ex] \hline \raisebox{1.5ex}{\textcolor{white}{l}}$m$& $\frac{1}{Fr^2}$ &$ \frac{1}{We}$& $\frac{\cmd^2}{2}$ \\ [0.5ex] \hline \raisebox{1.5ex}{\textcolor{white}{l}}$q$&$\frac{2}{3}$ &$2$& $1$ \\ [0.5ex] \hline \end{tabular}} \end{center} \caption{The first row shows the control parameters of the three kinds of waves considered here: the Froude number, $Fr$, and the Weber number, $We$, describe the competition between the shear in the air and the relevant restoring force; $\cmd$ and $\kcd$ are a dimensionless minimum phase speed and a dimensionless capillary wavenumber, respectively. The second row gives the corresponding dimensionless dispersion relations. The third row gives the small parameter, $m\ll1$, defining the strong wind limit for each, and the last row gives the exponents $q$ characterizing the associated asymptotic states in the case of the exponential wind profile. } \label{tablewave} \end{table} The wind profile has velocity scale $V$, and length scale $L$, giving the dimensionless variables \refstepcounter{equation} $$ \z= \frac{z}{L},\quad\kd = kL, \quad\U= \frac{U}{V}, \quad\text{and}\quad \C = \frac{c_0}{V}. \eqno{(\theequation{\mathit{a},\mathit{b},\mathit{c},\mathit{d}})} $$ We consider two standard wind profiles, shown in Figure \ref{schema}. 
For the exponential profile, $V=U_{\infty}$ and $L=d$, and for the logarithmic profile, $V=u_{\star}$ and $L=z_0$, where all symbols are defined in the legend of Figure \ref{schema}. Their dimensionless forms are \refstepcounter{equation} $$ \U(\z)=1-e^{-\z}\qquad\text{and}\qquad \U(\z)= \ln(1+\z)/\kappa, \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ respectively. We stress that typical values of $z_0$ are of the order of millimetres \citep{wu75} while the wavelengths of capillary--gravity waves range from millimetres to hundreds of metres. Thus, most wind waves are long in the sense that their wavelengths are much greater than the characteristic length scale of the wind profile, and $\kd= kz_0$ is a natural small parameter. For the three dispersion relations given in Table \ref{tablewave}, we solve the following boundary-value problem: \begin{widetext} \refstepcounter{equation} $$ \chi''(\z) -\bigg[\kd^2 + \frac{ \U''(\z)}{\U(\z)-\C(\kd)}\bigg]\chi(\z)=0,\qquad \chi(0) = 1, \qquad\chi'(\z) +\kd\ \chi(\z) \underset{\z\to+\infty}{\longrightarrow} 0 \label{solved} \eqno{(\theequation{\mathit{a},\mathit{b},\mathit{c}})}, $$ \end{widetext} where $\chi\equiv \hat{\psi}_0/\hat{\psi}_0(0)$ is the leading-order normalized streamfunction amplitude. We emphasize that physically relevant wind profiles satisfy \begin{equation} \lim\limits_{\z\to+\infty}\frac{ \U''(\z)}{\U(\z)-\C(\kd)} = 0, \label{eq:just} \end{equation} which justifies the far field condition (\ref{solved}\textit{c}). 
In practice, the coefficients defined in equation (\ref{phillips}), which are more physical than the $\alpha$ and $\beta$ introduced by \citet{miles57}, are calculated as follows: \refstepcounter{equation} $$ \mu =\epsilon \ \bigg(\frac{\U'}{\kd\C} +\frac{ \Re\{\chi'\}}{\kd}\bigg) \bigg|_{0^+}\quad\text{and}\quad \gamma = \frac{\epsilon}{\kd}\ \Im\big\{\chi'(0^+)\big\}.\label{coeffs} \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ Note that $\alpha=\C^2 \mu/\epsilon$ and $\beta=\C^2 \gamma/\epsilon$. The Miles formula states that \citep{janssenbook} \begin{equation} \gamma = -\epsilon\ \frac{\pi}{\kd}\ \frac{\Uppc}{\Upc}\ |\crit|^2, \label{miles} \end{equation} where the subscript `c' denotes evaluation at the critical level $\zcd = \zcd (\kd)$, defined by \begin{equation} \U(\zcd)=\C. \end{equation} The expression (\ref{miles}) originates from the global property of the solution of the boundary-value problem (\ref{solved}), \begin{equation} \Im\big\{\chi'(0^+)\big\} =-\pi\ \frac{\Uppc}{\Upc}\ |\crit|^2,\label{property} \end{equation} which we use to assess the accuracy of our numerical solutions. We evaluate the accuracy of the asymptotic methods developed here using the asymptotic suction boundary layer profile, $\U(\z) =1-e^{-\z}$, for which an exact solution of the Rayleigh equation exists \citep{young-wolfe}. However, for comparison with experimental data we shall use the more common mean turbulent boundary layer profile, $\U(\z) =\ln(1+\z)/\kappa$. \section{Long wave asymptotics} \label{longwaves} Long waves are characterized by $\kd\ll1$. \textcolor{black}{The following analysis is valid for the three functions $\C(\kd)$ given in Table \ref{tablewave}. For capillary--gravity waves, the wavelength is of the order of the capillary wavelength, $\lc\equiv 2\pi/ \kc$, so $\lc$ must be large compared with $L$, namely \begin{equation} \kcd\ll1.
\label{longcapgrav} \end{equation} \subsection{General procedure}} Setting $\kd=0$ in equation (\ref{solved}\textit{a}), we find two linearly independent solutions \citep{drazin-reid}, \refstepcounter{equation} $$ \chi_1(\z) \equiv \U(\z)-\C \quad\text{and}\quad \chi_2(\z) \equiv \chi_1(\z) \int^\z \frac{d\tilde{z}}{\chi_1(\tilde{z})^2}\ . \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ We call the outer solution the linear combination of $\chi_1$ and $\chi_2$, namely \begin{equation} \out(\z)\equiv E\ \chi_1(\z) + F\ \chi_2(\z),\quad\text{with}\quad E,F\in \mathbb{C}. \end{equation} \textcolor{black}{We consider wind profiles such that $\U'>0$, $\U''<0$ and $\U'''>0$, so that there exists a unique position} $\zs$ between the critical level, $\zcd$, and infinity at which \begin{equation} Q(\kd,\zs) = 0,\quad\text{where}\quad Q(\kd,\z)\equiv\kd^2+\frac{\U''(\z)}{\U(\z)-\C(\kd)}\ . \label{condi} \end{equation} \textcolor{black}{Then the outer solution holds for $\z\ll\zs$. Eq. \eqref{eq:just} together with the far field condition (\ref{solved}\textit{c}) imply that $\chi(\z)\sim \solinf(\z)$ for $\z\gg\zs$, where \begin{equation} \solinf(\z)\simeq G\ e^{-\kd\z},\qquad\text{with}\qquad G \in\mathbb{C}. \label{farfield} \end{equation} We stress that $\zs=\zs(\kd)$. For standard wind profiles, we show in Appendix \ref{Appinflex} that \begin{equation} \lim\limits_{\kd\to0} \zs =+\infty,\qquad\text{but}\qquad \lim\limits_{\kd\to0} \kd\zs =0. \end{equation} In order to match the outer solution and the far field solution within an intermediate layer centred at $\z=\zs$, we introduce the rescaled variable $\xi\equiv \kd \z$. Then, we determine the constants $E$, $F$ and $G$ using the matching condition \begin{equation} \lim\limits_{\z\to+\infty} \out(\z) = \lim\limits_{\xi\to0} \solinf(\xi). \label{match} \end{equation}} Clearly, the asymptotic behaviour of $\out$ depends on the choice of $\U=\U(\z)$, whereas \begin{equation} \solinf(\xi)\sim G(1- \xi),\qquad \xi\to0. 
\end{equation} Hence, there are profiles, such as the logarithmic profile, for which matching is not possible. However, we note that the solution of the Rayleigh equation has an inflexion point at $\z=\zs$, and thus its behaviour is approximately linear within the intermediate layer. Therefore, we anticipate that patching, rather than rigorous matching, of $\out$ and $\solinf$ at $\z=\zs$ will still give reasonable results. \textcolor{black}{Rigorous matching in all cases would require a more detailed treatment around the point $\z=\zs$, but we provide numerical evidence that the present approach faithfully reproduces the behaviour in this region.} \textcolor{black}{In practice, we work with a transformed variable $\Z = \Z(\z,\zcd)$, such that the function $Q$ introduced in equation (\ref{condi}) becomes independent of $\C(\kd)$. Using this transformation, the domain $[0,+\infty[$ becomes $[\Z_{\rm{inf}},+\infty[$, where $\Z_{\rm{inf}}= \Z_{\rm{inf}}(\zcd)$ depends on the wind profile and can be negative. In all cases considered here, we check that $\Z_{\rm{inf}} \le 1$, even in the limit $\kd\to 0$. \subsection{Matching for the exponential profile: $\U(\z) =1-e^{-\z}$} For this profile, we use the variables \refstepcounter{equation} $$ \Z \equiv \z-\zcd \qquad\text{and}\qquad \X(\Z)\equiv\chi(\z), \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ in terms of which the boundary-value problem (\ref{solved}) becomes \begin{equation} \X''(\Z)- \Bigg[\kd^2+ \frac{1} { 1-e^{\Z} } \Bigg] \ \X(\Z)=0, \end{equation} \refstepcounter{equation} $$ \X(-\zcd)=1,\qquad\X'(\Z) +\kd\ \X(\Z) \underset{\Z\to+\infty}{\longrightarrow} 0.
\label{BCexp} \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ Here, the outer solution is \begin{equation} \X_{\rm{out}}(\Z) =E (1-e^{-\Z}) + F \big(1-e^{-\Z} \big) \bigg\{\frac{1}{1-e^\Z}+\Log\big(e^\Z-1\big)\bigg\}, \end{equation} where $\Log$ denotes a continuation of the natural logarithm to the negative real numbers \begin{equation} \Log(\z-\zcd)\equiv \ln|\z-\zcd|- i\pi\qquad \text{for }\z<\zcd. \label{Log} \end{equation} The choice of the branch cut, which is just above the negative real axis as $\Upc>0$, follows from \citet{linbook}. The matching condition (\ref{match}) gives $E=G$ and $F=-\kd\ G$, with the remaining constant, $G$, being determined by the boundary condition (\ref{BCexp}\textit{a}).} \textcolor{black}{We construct a uniformly valid composite solution using the Van Dyke additive rule \citep[see e.g.,][]{B-O} as} \begin{widetext} \begin{align} \Xu(\Z)&= \Xout(\Z)+\Xinf(\Z)-(E+F\ \Z)\nonumber \\ &= G(\kd,\pp)\ \bigg\{ 1- e^{-\Z} - \kd \bigg[ \frac{1-e^{-\Z}}{1-e^{\Z}} + \big(1-e^{-\Z}\big)\Log\big( e^{\Z} -1\big) \bigg] + e^{-\kd\Z} - (1-\kd\Z) \bigg\}, \label{unifexp} \end{align} \end{widetext} where \begin{equation} G(\kd,\pp) = \frac{1-\pp}{1-\kd \pp + \kd\ln(\pp) + i\kd\pi}\ , \quad\text{and}\quad \pp\equiv\C^{-1}. \label{Gexp} \end{equation} Note that $\pp = \pp(\kd)$ because of the dispersion relation, $\C = \C(\kd)$. In Appendix \ref{Appexp}, we retrieve the expression for $\crit$ when $\zcd \ll1$ that Miles derived in an Appendix to \citet{morland-saffman}.
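As a concrete check of this construction, the sketch below implements the composite solution (\ref{unifexp}) with the amplitude (\ref{Gexp}), taking the branch (\ref{Log}) for the logarithm; the parameter values are illustrative. Evaluating it at $\Z=-\zcd$ recovers the boundary condition (\ref{BCexp}\textit{a}), $\X(-\zcd)=1$, up to the small residual of the overlap term, which is $O\big((\kd\zcd)^2\big)$.

```python
import math

def Log(x):
    """Continuation of ln to the negative reals, Eq. (Log): ln|x| - i*pi for x < 0."""
    return math.log(abs(x)) - (1j * math.pi if x < 0.0 else 0.0)

def X_unif(Z, kd, C):
    """Composite solution (unifexp) for U(z) = 1 - e^{-z}; valid for Z != 0,
    where Z = z - z_c (at the critical level Z = 0 the limit is kd*G)."""
    p = 1.0 / C
    G = (1.0 - p) / (1.0 - kd * p + kd * math.log(p) + 1j * kd * math.pi)  # Eq. (Gexp)
    bracket = ((1.0 - math.exp(-Z)) / (1.0 - math.exp(Z))
               + (1.0 - math.exp(-Z)) * Log(math.exp(Z) - 1.0))
    return G * (1.0 - math.exp(-Z) - kd * bracket + math.exp(-kd * Z) - (1.0 - kd * Z))
```

For $\C=0.25$, so that $\zcd=\ln(4/3)$, and $\kd=0.01$, we find $|\X_{\rm{unif}}(-\zcd)-1|$ of order $10^{-5}$.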
\subsection{Patching for the logarithmic profile: $\U(\z) =\ln(1+\z)/\kappa$} We perform the coordinate transformation \refstepcounter{equation} $$ \Z \equiv \frac{1+\z}{1+\zcd} \qquad\text{and}\qquad \X(\Z)\equiv\chi(\z), \label{newvar'} \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ and hence the boundary-value problem (\ref{solved}) becomes \begin{widetext} \refstepcounter{equation} $$ \X''(\Z)- \Bigg[\Long^2- \frac{1} { \Z^2 \ln(\Z) } \Bigg] \ \X(\Z)=0,\qquad \X\bigg(\frac{1}{1+\zcd}\bigg)=1,\qquad\X'(\Z) +\Long\ \X(\Z) \underset{\Z\to+\infty}{\longrightarrow} 0, \eqno{(\theequation{\mathit{a},\mathit{b},\mathit{c}})} $$ where we have introduced $\Long\equiv \kd(1+\zcd)$. We use $\Long$ instead of $\kd$ as a small parameter. After a patching at $\Zs \equiv (1+\zs)/(1+\zcd)$, we find the following outer solution: \begin{equation} \Xout(\Z) = \begin{cases} \Big( J(\Zs) \ln(\Z)-H(\Zs)\big[\ln(\Z)\li(\Z)-\Z\big]\Big) \frac{G(\zcd,\Zs)}{\Zs }\ e^{-\Long \Zs} \qquad\text{if }\Z>1,\\ \\ \Big( J(\Zs) \ln(\Z)-H(\Zs)\big[\ln(\Z)\big[\li(\Z) -i \pi\big]-\Z\big]\Big) \frac{G(\zcd,\Zs)}{\Zs }\ e^{-\Long \Zs} \qquad\text{if }\Z<1, \end{cases} \label{outerlog} \end{equation} \end{widetext} where \begin{equation} \li(\Z)\equiv \cauchy\int_0^\Z \frac{d\tilde{z}}{\ln(\tilde{z})} \end{equation} is the logarithmic integral function, in which $\cauchy$ denotes the Cauchy principal value. The amplitude of the far field solution is \begin{equation} G(\zcd,\Zs) = \frac{\Zs(1+\zcd)e^{\Long \Zs}}{H(\Zs) g(\zcd)-J(\Zs) f(\zcd)-i\pi H(\Zs) f(\zcd)}\ , \label{Glog} \end{equation} where \begin{align} H(\Zs) &\equiv \frac{1}{\Long \Zs}+1, \\ J(\Zs)&\equiv H(\Zs)\li(\Zs)-\Long \Zs^2\ ,\\ f(\zcd)&\equiv(1+\zcd)\ln(1+\zcd),\qquad\text{and} \\ g(\zcd)&\equiv1+f(\zcd)\li\bigg(\frac{1}{1+\zcd}\bigg). \end{align} Clearly, for a given dispersion relation, the above parameters are all functions of $\kd$. 
\textcolor{black}{Matching is not possible here because the behaviour of $\Xout(\Z)$ at large $\Z$ is not linear. \subsection{Discussion} \label{discussion} \begin{figure*}[htbp!] (a)\includegraphics[trim = 0 0 0 0, clip, width = 0.47\textwidth]{longexp_a} (b)\includegraphics[trim = 0 0 0 0, clip, width = 0.45\textwidth]{longexp_b} \caption{Comparison of the uniformly valid composite solution (\ref{unifexp}) with the numerical solution of the Rayleigh equation for the exponential wind profile, for two values of $\kd$ and $\C$=0.25. The dots and the stars depict the real and imaginary parts of the numerical solution, respectively. The continuous line shows the real part of (\ref{unifexp}) and the dashed line the imaginary part.} \label{longexp} \end{figure*} \begin{figure*}[htbp!] (a)\includegraphics[trim = 0 0 0 0, clip, width = 0.46\textwidth]{longlog_a} (b)\includegraphics[trim = 0 0 0 0, clip, width = 0.47\textwidth]{longlog_b} \caption{Comparison of the outer and far field solutions patched at the inflexion point, $\z=\zs$, with the numerical solution of the Rayleigh equation for the logarithmic wind profile, for two values of $\kd$ and $\C$=0.5. The dots and the stars depict the real and imaginary parts of the numerical solution, respectively. The continuous line shows the real part of the outer solution (\ref{outerlog}) and the dashed line the imaginary part. The dash-dotted and dotted lines represent the real and imaginary parts of the far field solution (\ref{farfield}), respectively.} \label{longlog} \end{figure*} In Figures \ref{longexp} and \ref{longlog}, we} compare our uniformly valid composite solution for the exponential profile, and our patched solution for the logarithmic profile with the numerical solutions. \textcolor{black}{Both the matching and the patching give excellent results for sufficiently small values of $\kd$, the magnitude of which depends on the wind profile. 
We note that these are plots for fixed values of $\kd$ and $\C$ and that any of the three dispersion relations can be retrieved with a proper choice of the control parameters.} Moreover, we assess our approach by checking that $\Xu(\Z)$ and $\Xout(\Z)$ satisfy the global property (\ref{property}). Above the critical level, the phase of the solution of the Rayleigh equation is constant, equal to the phase of $G$, showing that long waves interact with the wind between the mean water level and the critical level. For the two standard wind profiles considered here, both the real and the imaginary part of the solution of (\ref{solved}) have an extremum at $\z=\zex$, between the critical level, $\zcd$, and the inflexion point, $\zs$. In Appendix \ref{Appfixedpoint}, we show that the extremum is always a maximum for the imaginary part, whereas for the real part it is a minimum when $\kd<\kd_{\star}$ but a maximum when $\kd>\kd_{\star}$, where $\kd_{\star}$ is the wavenumber of the fastest growing wave. We also show that the air flow above wind waves has two elliptic points at the level $\z=\zex$ in the domain $kx\in[0,2\pi[$. These elliptic points can be seen in Figure 13a of \citet{young-wolfe}, obtained from the hypergeometric solution of the Rayleigh equation in the case of the exponential profile, and in Figure 1(e) of \citet{hristov-nature}, obtained from the numerical solution for the logarithmic profile. \section{Application to the Miles instability}\label{appli} \subsection{Normalized energy growth rate and deviation from the sheltering hypothesis} \label{appli1} \begin{figure*}[htbp!] (a)\includegraphics[trim = 0 0 0 0, clip, width = 0.49\textwidth]{mu} (b)\includegraphics[trim = 0 0 0 0, clip, width = 0.45\textwidth]{gamma} (c)\includegraphics[trim = 0 0 0 0, clip, width = 0.75\textwidth]{gammamu} \caption{Long wave asymptotic results for capillary--gravity waves and the logarithmic profile. (a) Twice the wind-dependent relative change of phase speed, $\mu$.
(b) The normalized energy growth rate, $\gamma$, as a function of the dimensionless wavenumber, $\kd = kL$, where $L$ is the length scale associated with the wind profile. (c) Plot of $\gamma$ versus $\mu$ for two values of $\cmd.$ } \label{logmugamma} \end{figure*} We calculate the coefficients $\mu$ and $\gamma$ for long waves using the expressions (\ref{coeffs}\textit{a,b}). In the case of the exponential profile, we find \begin{align} \mu^{\rm{exp}}_{\rm{long}} (\kd)&= -\epsilon\ \frac{(\pp-1)^2[1-\kd \pp + \kd\ln(\pp)]}{[1-\kd \pp + \kd\ln(\pp)]^2 + [\kd \pi]^2 }\qquad \text{and}\nonumber \\ \gamma^{\rm{exp}}_{\rm{long}}(\kd) &= \frac{\epsilon \ \pi\ \kd (\pp-1)^2 } {[1-\kd \pp + \kd\ln(\pp)]^2 + [\kd \pi]^2 }\ , \label{gammaexp} \end{align} and in the case of the logarithmic profile, we find \begin{figure} \centering \includegraphics[ width=\columnwidth]{plantmiles} \caption{Comparison of the normalized energy growth rate (multiplied by $2\pi$) calculated using the long wavelength asymptotics for the logarithmic profile and gravity waves characterized by a Froude number $Fr=12$, with the experimental data compiled by \citet{plant82}. The dashed line shows the results of \citet{miles93} for the same Froude number. } \label{plantmiles} \end{figure} \begin{widetext} \begin{align} \mu^{\rm{log}}_{\rm{long}}(\kd) &= \frac{ \epsilon H(\Zs)}{\kd \ln(1+\zcd)}\ \frac{ H(\Zs)g(\zcd) -J(\Zs)f(\zcd)}{\big[ H(\Zs)g(\zcd) -J(\Zs)f(\zcd)\big]^2 +\big[\pi H(\Zs) f(\zcd) \big]^2}\ \qquad \text{and}\nonumber \\ \gamma^{\rm{log}}_{\rm{long}}(\kd) &= \frac{\epsilon}{\kd}\ \frac{\pi (1+\zcd) H^2(\Zs)}{\big[ H(\Zs)g(\zcd) -J(\Zs)f(\zcd)\big]^2 +\big[\pi H(\Zs) f(\zcd) \big]^2}\ . \label{gammalog} \end{align} \end{widetext} The dependence upon the physical parameters, $Fr$, $We$, $\cmd$ and/or $\kcd$, is contained in the inverse phase speed, $\pp$, for the exponential profile, or the critical level, $\zc$, and the transformed inflexion point, $\Zs$, for the logarithmic profile. 
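The closed-form expressions (\ref{gammaexp}) can be explored directly. The sketch below evaluates $\mu^{\rm{exp}}_{\rm{long}}$ and $\gamma^{\rm{exp}}_{\rm{long}}$ for the gravity-wave dispersion relation of Table \ref{tablewave}, $\C=1/(Fr\sqrt{\kd})$, so that $\pp=Fr\sqrt{\kd}$; the choices $Fr=12$, $\epsilon=10^{-3}$ and the grid of $\kd$ are illustrative assumptions.

```python
import math

def mu_gamma_exp(kd, Fr=12.0, eps=1e-3):
    """Long-wave coefficients (gammaexp) for the exponential profile, with the
    gravity-wave dispersion relation so that p = 1/C = Fr*sqrt(kd)."""
    p = Fr * math.sqrt(kd)
    B = 1.0 - kd * p + kd * math.log(p)               # common bracket in (gammaexp)
    D = B * B + (kd * math.pi) ** 2                   # common denominator
    return (-eps * (p - 1.0) ** 2 * B / D,            # mu
            eps * math.pi * kd * (p - 1.0) ** 2 / D)  # gamma

kds = [0.01 * i for i in range(5, 61)]                # illustrative grid of kd
mus, gammas = zip(*(mu_gamma_exp(kd) for kd in kds))
i_max = max(range(len(kds)), key=lambda i: gammas[i])
```

On this grid $\gamma$ attains an interior maximum, $\mu$ changes sign, and $|\mu|$ is small at the fastest growing wavenumber, consistent with the connection to the Jeffreys sheltering hypothesis made in this section.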
For capillary--gravity waves and the logarithmic profile we compare the numerical evaluation of $\mu$ and $\gamma$ with our asymptotic expressions in Figure \ref{logmugamma}, and note that the plots for the exponential profile are very similar. In anticipation of the strong wind limit (see \S \ref{logSW}), we have chosen the control parameters, $\kcd$ and $\cmd$, such that the fastest growing waves are driven by both gravity and surface tension. We maintain the scaling of the wavenumber with the length scale of the wind profile, $L$, although we note that we could also use the capillary wavelength. For both profiles, the asymptotics show very good agreement with the numerics, even when $\kd =O(1)$. The normalized growth rate, $\gamma$, has a maximum at $\kd =\kmax$ in the long wave regime. The deviation from the Jeffreys sheltering hypothesis, as captured by $\mu$ (cf. \ref{phillips}), is equal to zero for a wavenumber close to $\kd =\kmax$, indicating that the fastest growing wave is such that the aerodynamic pressure is almost in phase with the wave slope. Thus, we demonstrate the validity of Jeffreys' intuition of wind-wave growth and show that the assumption of \citet{conte-miles} and \citet{milespart2} that $\alpha<0$ was erroneous. \subsection{Classical case: logarithmic profile and $\sigma=0$} \citet{plant82} collected experimental data for the normalized energy growth rate (multiplied by $2\pi$). In Figure \ref{plantmiles}, we compare his results with the long wave asymptotics for the logarithmic profile and gravity waves characterized by a Froude number $Fr=12$\textcolor{black}{; the range of $\kd$ used here is $[10^{-5}, 0.135]$. }Our analysis provides a good fit of the entire range of data, contrary to that of \citet{miles93}. Nonetheless, the measurements were made in different conditions and the data analysed using different dispersion relations; for instance, \citet{larson-wright} considered capillary--gravity waves. 
Therefore, it would be more appropriate to consider a range of Froude numbers, or more generally a range of $\cmd$ and $\kcd$, the control parameters for capillary--gravity waves. In addition to the challenging aspects of making these measurements, this may explain the significant scatter of the data. \subsection{Interpretive framework of the Miles mechanism} \citet{jeffreys25} proposed that wind waves grow because of a pressure asymmetry due to flow separation: the air flowing over a wave separates on the downwind side and reattaches on the upwind side of the next crest. \citet{banner-melville} argue that there is no air flow separation unless the waves break. In \S \ref{appli1}, we showed that the condition for optimal growth is equivalent to Jeffreys' idea of the aerodynamic pressure being in phase with the wave slope. We develop this finding in \S \ref{strongwind} with the aid of the strong wind limit. Jeffreys' idea can be understood as follows. In the absence of wind, the aerodynamic pressure is in a phase opposite to that of the surface displacement -- high pressure at the wave troughs, low pressure at the wave crests -- and the streamlines of the air flow adjust to the water surface. For growth to happen, a phase shift of the streamlines is required \citep{lighthill62,stewart74}, as shown in Figure \ref{mechanism}. The optimal phase shift can even be intuited by observing that a node on the windward side (point $M$) is moving down while a node on the leeward side (point $N$) is moving up. Therefore, if the pressure is maximal windward and minimal leeward, the motion of the nodes will be enhanced, and hence the optimal phase shift is equal to $\pi/2$. Mathematically, a non-zero phase shift can only arise from a complex leading-order streamfunction amplitude, $\hat{\psi}_0(z)\in \mathbb{C}$. We recall that $\hat{\psi}_0$ is the first term in an expansion in powers of the air/water density ratio $\epsilon$, regarded as a coupling constant.
Therefore, $\hat{\psi_0}$ determines the neutral perturbation induced by the water waves on the air flow. Because the Rayleigh equation (\ref{L_0}) has real coefficients, only a singularity can lead to a complex solution and hence to a critical layer. In \S \ref{model}, we introduced the wave-induced Reynolds stress, $\tau=-\dair\ \overline{uw}$, where $u$ and $w$ are the velocity components of the air flow perturbation and the overbar denotes the average over a wavelength. It is evident from Figure \ref{mechanism} that a phase shift of the streamlines implies that $\tau >0$. This view of the growth in terms of a positive wave-induced Reynolds stress was first pointed out by \citet{stewart74} and formalized by \citet{janssenbook}, and we made the connection with the energy growth rate at the end of \S \ref{perturbationtheory}. The leading-order amplitude of $\tau(z,t)$ is given in Eq. \eqref{tau0} and clearly displays the necessity of a complex streamfunction. (Note that the quantity on the right hand side of Eq. (\ref{tau0}) is piecewise constant.) As a consequence of the global property (\ref{property}), $\hat{\tau}_0$ maintains the same positive value from the water surface up to the critical level, $z=z_c$, and then jumps to zero as it must vanish in the far field. Since the original work of \citet{miles57}, our understanding of the basic question of how wind waves grow has advanced in three key aspects: (i) waves grow due to the work of a positive wave-induced Reynolds stress; (ii) waves grow due to the phase shift of the streamlines of the air flow; and (iii) waves grow due to an asymmetric pressure distribution. Our contribution here is to prove that for optimal growth the phase shift must be equal to $\pi/2$, as intuited by Jeffreys. Furthermore, we have made clear that all these contributions have the same mathematical basis -- a complex streamfunction -- and the same physical foundation -- a critical layer.
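The sign condition on $\tau$ can be made explicit with a one-line average. As an illustration (the amplitudes $\hat{u}$, $\hat{w}$ and the relative phase $\phi$ are our notation, not taken from the text), write the perturbation velocities at a fixed height as monochromatic signals:

```latex
u = \hat{u}\cos(kx-\omega t), \qquad
w = \hat{w}\cos(kx-\omega t+\phi)
\quad\Longrightarrow\quad
\overline{uw} = \tfrac{1}{2}\,\hat{u}\hat{w}\cos\phi .
```

For positive amplitudes, $\tau=-\dair\,\overline{uw}>0$ therefore requires $\cos\phi<0$, i.e. a phase shift between $u$ and $w$ exceeding $\pi/2$, consistent with the tilted streamline pattern of Figure \ref{mechanism}.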
\begin{figure} \centerline{\includegraphics[width=1.1\columnwidth]{mechanism}} \caption{Streamlines of the air flow (dashed) over water waves, modified from \citet{stewart74}. The solid line depicts the water surface, defined by the displacement $\eta(x,t) = a\cos(kx-\omega t)$. The wave slope is $\partial_x\eta(x,t) = -ka\sin(kx-\omega t)$, and the vertical speed of the points on that curve is $\partial_t\eta(x,t) = \omega a\sin(kx-\omega t)$. Point $M$ has a phase equal to $-\pi/2$ (positive slope), and thus moves downward. Point $N$ has a phase equal to $\pi/2$ (negative slope), and thus moves upward. The pressure asymmetry, caused by the phase shift of the streamlines, enhances the motion of $M$ and $N$. Thick black arrows represent the velocity field of the air flow perturbation, $\boldsymbol{u}=u\ \boldsymbol{\hat{x}}+ w\ \boldsymbol{\hat{z}}$. We observe that $\overline{uw}<0$, where the overbar denotes the average over a wavelength. } \label{mechanism} \end{figure} \section{Strong wind limit} \label{strongwind} Following \citet{young-wolfe}, we introduce a parameter $m$, which is an inverse measure of the strength of the wind (small $m$ corresponds to strong wind). As seen in Table \ref{tablewave}, $m$ depends on the restoring force. In the strong wind limit, defined by $m\ll1$, $\kmax$ tends to the point at which $\mu_{\rm{long}}$ vanishes, which shows that the Jeffreys sheltering hypothesis is in fact the condition for optimal growth of wind waves. A derivation of the results stated below is given in Appendix \ref{Appmax}.
\textcolor{black}{\subsection{Exponential wind profile} \label{SWexp} For $\U(\z)=1-e^{-\z}$, when $m\ll1$ }the normalized energy growth rate becomes a Lorentzian function \begin{equation} \frac{\gamma^{\rm{exp}}_{\rm{long, SW}}(\kd)}{\gm(q)} = \frac{\big[\Delta(q)\big]^2 }{\big[\kd-\kd_{\star}\big]^2+ \big[\Delta(q)\big]^2}\ , \label{gammaSW} \end{equation} where `SW' denotes `strong wind', and \refstepcounter{equation} $$ \gm(q)\equiv \frac{\epsilon}{\pi m^{\frac{3q}{2}}} \qquad\text{and}\qquad \Delta(q) \equiv q\pi m^q. \label{paramSW} \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ The parameter $\Delta$ is the half-width at half-maximum, and $q$ is a rational number completely determined by the restoring force. For gravity waves, $q= \frac{2}{3}$ \citep{young-wolfe}, while $q= 2$ for capillary waves. Furthermore \begin{equation} \kd_{\star} \simeq m^{\frac{q}{2}} - \frac{q^2}{2} m^q \ln(m) + \frac{q^4}{4} m^{\frac{3q}{2}} \big[\ln(m)\big]^2- \frac{q^3}{4} m^{\frac{3q}{2}} \ln(m) , \label{kstar_exp} \end{equation} which generalizes the asymptotic formula obtained by \citet{young-wolfe} using the exact solution of the Rayleigh equation. In the case of capillary--gravity waves, $\kcd$ and $m$ are both small parameters (see Eq. \ref{longcapgrav}), so that there exists an exponent $\nu>0$ such that $\kcd=m^{\nu}$. This exponent, originating from the choice of the control parameters, determines the value of $\kmax/\kcd$, and hence whether the fastest growing waves are driven by gravity, surface tension or both. We find that, if $\nu=\frac{1}{2}$, then $\kmax/\kcd= O(1)$, and hence the effects of gravity and surface tension play an equal role in the fastest growing waves. 
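As a concrete check of the Lorentzian form (\ref{gammaSW}), the short sketch below evaluates $\gamma^{\rm{exp}}_{\rm{long, SW}}$ together with Eqs. (\ref{paramSW}) and (\ref{kstar_exp}), and verifies that $\Delta(q)$ is indeed the half-width at half-maximum; the numerical values of $m$, $q$ and $\epsilon$ are illustrative choices of ours, not taken from the text.

```python
import math

def gamma_sw(k, m, q, eps):
    """Strong-wind growth rate of Eq. (gammaSW): a Lorentzian in the
    wavenumber k, centred on k_star with half-width Delta(q)."""
    gamma_max = eps / (math.pi * m ** (1.5 * q))   # Eq. (paramSW a)
    delta = q * math.pi * m ** q                   # Eq. (paramSW b)
    ln_m = math.log(m)
    k_star = (m ** (q / 2)                         # Eq. (kstar_exp)
              - 0.5 * q ** 2 * m ** q * ln_m
              + 0.25 * q ** 4 * m ** (1.5 * q) * ln_m ** 2
              - 0.25 * q ** 3 * m ** (1.5 * q) * ln_m)
    gamma = gamma_max * delta ** 2 / ((k - k_star) ** 2 + delta ** 2)
    return gamma, k_star, delta, gamma_max

# gravity waves (q = 2/3) in a strong wind, m << 1; eps is illustrative
m, q, eps = 1e-3, 2.0 / 3.0, 1e-3
_, k_star, delta, gamma_max = gamma_sw(0.0, m, q, eps)
print(gamma_sw(k_star, m, q, eps)[0] / gamma_max)          # ratio at the peak (close to 1)
print(gamma_sw(k_star + delta, m, q, eps)[0] / gamma_max)  # ratio at k_star + Delta (close to 1/2)
```

The two printed ratios confirm that the maximum equals $\gm(q)$ and that $\gamma$ falls to half its peak value one half-width $\Delta(q)$ away from $\kd_\star$.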
For $\nu=\frac{1}{2}$, we generalize the strong wind limit formula (\ref{gammaSW}) to capillary--gravity waves by taking $q=1$ and performing the transformations \refstepcounter{equation} $$ \gm(q) \to \frac{\gm(q)}{\xmax^2+1} \quad\text{and}\quad \Delta(q)\to \Delta(q) Q(\xmax), \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ where \refstepcounter{equation} $$ \xmax\equiv\frac{\kmax}{\kcd} \simeq \frac{119}{81}\quad\text{and}\quad Q(\xmax)\equiv \big[\xmax^2+1\big]^{\frac{3}{2}}\frac{2 \sqrt{\xmax}}{\xmax^2+3}\ . \label{expmaxCG} \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ We also obtain the asymptotic form of $\mu^{\rm{exp}}_{\rm{long}}$ as $m\ll1$, which is \begin{equation} \frac{\mu^{\rm{exp}}_{\rm{long, SW}}(\kd)}{\mu_{\rm{max}}(q)} = \frac{2\Delta(q) [\kd-\kmax] }{\big[\kd-\kmax\big]^2+ \big[\Delta(q)\big]^2}\ , \label{muSW} \end{equation} with $\mu_{\rm{max}}(q)\equiv \frac{\gamma_{\rm{max}}}{2}$. From equations (\ref{gammaSW}) and (\ref{muSW}), we deduce that, in the strong wind limit, the graph of $\gamma$ versus $\mu$ becomes a circle of radius $\mu_{\rm{max}}$, centered at $(0,\mu_{\rm{max}})$. \subsection{Logarithmic wind profile} \label{logSW} \begin{figure*}[htbp!] (a)\includegraphics[trim = 0 0 0 0, clip, width = 0.52\textwidth]{inversepower} (b)\includegraphics[trim = 0 0 0 0, clip, width = 0.38\textwidth]{squarepower} \caption{(a) The position of the maximum of $\gamma^{\rm{log}}_{\rm{long}}(\kd)$ and the position of the zero of $\mu^{\rm{log}}_{\rm{long}}(\kd)$, and (b) the amplitude of the maximum growth rate $ \gm^{\rm{grav}}$ for gravity waves and the logarithmic wind profile as a function of the Froude number.} \label{powerlaws} \end{figure*} For $\U(\z)=\ln(1+\z)/\kappa$, we find numerically that, in the strong wind limit, the fastest growing gravity wave is determined by \begin{equation} \kmax^{\rm{grav}} \sim N_1\sqrt{m},\qquad m\ll1, \label{ksloggrav} \end{equation} where $N_1=0.22$. 
Moreover, we show that the corresponding maximum growth rate is \begin{equation} \gm^{\rm{grav}} \sim N_2\ \frac{\epsilon}{\kappa^2\pi m}\ ,\qquad m\ll1, \label{gmloggrav} \end{equation} where $N_2=0.29$. Whereas the small parameter $m= Fr^{-2}$ provides a convenient means of carrying out the asymptotic analysis, we believe that the Froude number itself provides better physical intuition. In particular, we see from Figure \ref{plantmiles} that in practice $Fr= O(10)$. Thus, we rewrite equations (\ref{ksloggrav}) and (\ref{gmloggrav}) as \refstepcounter{equation} $$ \kmax^{\rm{grav}} \sim \frac{N_1}{Fr} \qquad\text{and}\qquad \gm^{\rm{grav}} \sim N_2\ \frac{\epsilon}{\kappa^2\pi}\ Fr^2. \label{powersFr} \eqno{(\theequation{\mathit{a},\mathit{b}})} $$ In Figure \ref{powerlaws}(a), we plot the position of the maximum of $\gamma^{\rm{log}}_{\rm{long}}(\kd)$ and the position of the zero of $\mu^{\rm{log}}_{\rm{long}}(\kd)$ as a function of $Fr$. For both quantities, the results can be fitted with an inverse power law, with the fit for $\gamma$ slightly better than that for $\mu$. Furthermore, the two sets of points are quite close to each other, confirming that for large values of $Fr$ the position of the zero of $\mu$ is an excellent approximation to the position of the maximum of $\gamma$. Figure \ref{powerlaws}(b) shows the amplitude of the maximum growth rate as a function of $Fr$, fitted with a square law. We explain in Appendix \ref{Appmax} that (\ref{powersFr}\textit{b}) is deduced from (\ref{powersFr}\textit{a}) by assuming that $\mu=0$ determines the maximum of $\gamma$. As shown in Figure \ref{powerlaws}(a), such an assumption is very reasonable but inevitably introduces some error. In contrast, the growth rate of capillary waves does not have a maximum, but diverges at small $\kd$ and, independent of the value of $m$, $\mu^{\rm{log}}_{\rm{long}}$ does not vanish. 
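To attach concrete numbers to (\ref{powersFr}), the sketch below evaluates both power laws at the representative value $Fr=12$ of Figure \ref{plantmiles}; the von K\'arm\'an constant $\kappa\approx0.40$ and the air--water density ratio $\epsilon\approx1.2\times10^{-3}$ are standard illustrative values of ours, not taken from the text.

```python
import math

N1, N2 = 0.22, 0.29  # fitted constants of Eqs. (ksloggrav) and (gmloggrav)

def fastest_gravity_wave(Fr, kappa=0.40, eps=1.2e-3):
    """Position and amplitude of the fastest growing gravity wave for the
    logarithmic profile, from the power laws (powersFr a,b).
    kappa and eps are assumed, illustrative values."""
    k_max = N1 / Fr                                          # (powersFr a)
    gamma_max = N2 * eps / (kappa ** 2 * math.pi) * Fr ** 2  # (powersFr b)
    return k_max, gamma_max

k_max, gamma_max = fastest_gravity_wave(12.0)
print(f"k_max ~ {k_max:.3f}, gamma_max ~ {gamma_max:.3f}")
```

For $Fr=12$ this gives $\kmax^{\rm{grav}}\approx0.018$, which lies inside the range of $\kd$ used for the comparison with the data of \citet{plant82}.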
Nonetheless, the assumption that the effect of gravity is negligible does not hold for $\kd\ll\kcd$. Therefore, this divergence of $\gamma$ is not physical. \textcolor{black}{In the case of capillary--gravity waves, as for the exponential profile, there exists an exponent $\tilde{\nu}>0$ such that $\kcd=m^{\tilde{\nu}}$. We show that $\kmax/\kcd= O(1)$ for $\tilde{\nu}=1$ (see Figure \ref{logmugamma} where $\kcd=0.8$) and \begin{equation} \gm^{\rm{CG}} \sim N_3\ \frac{\epsilon}{\pi \big[\kappa m\big]^2}\ ,\qquad m\ll1,\label{gmlogCG} \end{equation} where $N_3$ is a numerical constant. } Therefore, the wind-wave interaction has similar overall characteristics for both the exponential and the logarithmic wind profiles, differing only in the numerical details. \section{Conclusion} \label{ccl} We examine the Miles mechanism of wind-wave instability through the lens of an asymptotic analysis of the Rayleigh equation. \textcolor{black}{In the view of \citet{miles57}, free surface waves with phase speed $c_0$ perturb the wind field, and energy is transferred from the mean flow to the perturbation at the critical level, $z=z_c$, where the wind speed is equal to $c_0$. The subsequent feedback on the normal stress at the water surface leads to the growth of waves. However, the coupling with the wind field also affects the phase speed. We calculate the energy growth rate normalized by the angular frequency of free surface waves, $\gamma$, and twice the wind-dependent relative change of the phase speed, $\mu$. The emphasis is on $\mu$ and $\gamma$ being respectively proportional to the real and imaginary parts of the Fourier components of the aerodynamic pressure (see Eq. \ref{phillips}). 
A parameter $m$ accounts for the competition between the shear in the air and the restoring force: gravity and/or surface tension.} In the strong wind limit, defined by $m\ll1$, we find that (i) the functions $\mu=\mu(\kd)$ and $\gamma=\gamma(\kd)$ are self-similar with respect to $m$; (ii) the similarity exponents depend on the restoring force and the wind profile (see Eqs. \ref{gammaSW} and \ref{muSW} for the exponential profile); and (iii) $\gamma$ is maximal when $\mu =0$, consistent with the sheltering hypothesis of \citet{jeffreys25}. \textcolor{black}{In other words, the growth of surface waves is optimal when the aerodynamic pressure is in phase with the wave slope, and the overall instability mechanism is qualitatively independent of the strength of the wind and of the restoring force. }Additionally, we show that long waves interact with the wind only between the mean water level and the critical level, $z=\zc$. Finally, we use our asymptotic solutions to fit the entire range of data compiled by \citet{plant82}. \section*{Acknowledgements} We acknowledge Swedish Research Council grant no. 638-2013-9243 for support.
\section{Introduction} \label{sec:Introduction} Understanding maintenance activities is critical for practitioners to effectively support the evolution of their projects in terms of enhancing cost-effectiveness, managing technical debt, and better allocation of maintenance-related resources. Therefore, a plethora of studies have been performed on automatic classification of repository artifacts (\textit{e.g.}, bug reports, issues, code changes) in general, and commit messages in particular, for several purposes, including the approximation of maintenance activities \citep{gharbi2019classification,honel2020using}, identification of bug fixes \citep{zafar2019towards}, detection of security-relevant changes \citep{sabetta2018practical,alsolai2020systematic}. Recently, there has been a focus on analyzing commit messages in the context of refactoring. Refactoring, being the art of improving software internal design without altering its external behavior \citep{alomar2021preserving}, is the \textit{de-facto} way to reduce technical debt \citep{avgeriou2016managing}. To help manage this technical debt, a lot of research focus has shifted to analyzing developers' refactoring practices through mining code changes and commit messages \citep{veerappa2013empirical,naiya2015relationship,ubayashi2018can,counsell2018developers,counsell2019relationship}. For instance, \citet{alomar2019can} developed a taxonomy of the textual patterns used by developers when documenting their refactoring activities. Many empirical studies have focused on mining commit messages to extract the reason behind developers' choice to refactor, in terms of optimizing structural metrics (\textit{e.g.}, coupling, complexity) \citep{pantiuchina2018improving,alomar2019impact} and quality attributes (\textit{e.g.}, readability) \citep{fakhoury2019improving}.
Commit messages were also used by \citet{rebai2020recommending} to recommend refactoring operations. While there is a heavy reliance on the valuable information contained in commit messages, little is known about the extent to which such information can properly describe the actual refactoring changes in the source code. Specifically, studies have shown that developers often misuse refactoring-related terminology in their documentation \citep{zhangpreliminary18}. Because commit message analysis relies on the notion that refactorings are described in such a way that they can be distinguished from one another (\textit{i.e.,}\xspace rename is described differently from move method), it is important to know whether this is generally true and in particular \textit{how} refactorings can be distinguished by the way they are described in commit messages. \textcolor{black}{Recent studies have been heavily investigating how developers document refactoring to gain more insights on how refactoring is being practically applied. They parse commit messages to extract the intent behind the refactoring, then measure the impact of the refactoring on the source code quality, and verify the consistency between what was described in the message and the measurements in the source code. For instance, \citet{pantiuchina2018improving} found a misperception between the state-of-the-art structural metrics, widely used as indicators for refactoring, and what developers actually document as an improvement when they refactor their source code. Similarly, \citet{alomar2019impact} found that not all metrics equally capture developers' perception of software quality. \citet{fakhoury2019improving} found that current readability frameworks are unable to capture what developers intended to be refactorings that improve the source code readability.
Such misperception between the theory of detecting refactoring opportunities, through removing code smells and improving structural metrics, and the practical intents driving developers to refactor, could explain the limited adoption of current refactoring tools by developers \citep{murphy2012we,kim2014empirical}. \textcolor{black}{\citet{arnaoudova2016linguistic} investigated} the linguistic antipatterns that are inconsistent with the source code. \textcolor{black}{Another} important dimension that can be investigated is the consistency between the documentation of the refactoring actions and the refactoring types that were actually performed in the source code. Recent studies have shown that, just as they document features and bug fixes, developers intentionally describe refactoring activities in commit messages, \textit{i.e.,}\xspace they self-affirm the existence of refactoring activity \citep{alomar2019can,zhangpreliminary18}. Yet, little is known about the extent to which the description of refactorings in the commit message matches the actual refactoring action that was committed.} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{commit_example.png} \caption{\textcolor{black}{An example of a refactoring, and its corresponding documentation.}} \label{fig:example} \end{figure} Therefore, we study the extent to which the terminology used to describe refactorings in commit messages can distinguish different refactorings from one another, by measuring the discriminative power of various machine learning techniques when provided with this terminology. As an illustrative example, we refer to the simplified commit extracted from the \texttt{bekvon/residence} project\footnote{ \url{https://github.com/bekvon/residence/commit/76c364ea47e5a28b2041a0bb3323cb48bab180c9} (last checked 2020/06/20)} reported in Figure \ref{fig:example}. The commit message states the purpose of refactoring as a rename of a getter function for better readability.
Based on the developer's commit message, can we automatically deduce the existence of a refactoring whose type is \textit{Rename Method}? An intuitive solution for this problem is to detect the refactoring type in the source code and string-match it in the commit message, as a form of verification, to check whether it is mentioned. Such a solution assumes that developers refer to refactorings as they are known in the refactoring catalogue \citep{Fowler:1999:RID:311424,wake2004refactoring}. Previous studies found that developers misuse refactoring-related terms \citep{soares2013comparing}, which hinders the accuracy of the string matching solution and presents a challenge for any solution that attempts to verify the consistency between refactoring and its corresponding documentation. \textcolor{black}{The goal of our study is to investigate whether different words and phrases found in refactoring commit messages are unique to different types of refactorings (\textit{e.g.}, rename, move, extract, inline, etc.). In pursuit of this goal, we deploy machine learning techniques for the prediction of refactoring operation types based on commit messages. The results of this study can help us determine the types of words and phrases which best discriminate one type of refactoring from another, providing greater insight into the way refactoring is affirmed, which can be used to help automatically document refactorings in a more systematic way. Additionally, this work is critical in supporting refactoring documentation and in reducing the amount of effort needed by developers to appropriately describe what happened during a sequence of changes, and it helps improve comprehension of those changes via commit messages. The work helps us understand how developers discriminate between different refactoring types through natural language descriptions.
Further, a recent industrial case study at Xerox reveals that developers rarely report specific refactoring operations as part of their documentation when submitting refactoring changes \citep{alomar2021icse}. \textcolor{black}{Given} the lack of refactoring documentation guidelines, reviewers are forced to ask for more details in order to recognize the need for refactoring. The authors designed a procedure for documenting any refactoring review requests, respecting three dimensions that they referred to as the three \textit{\textbf{I}}s, namely, \textit{\textbf{I}ntent}, \textit{\textbf{I}nstruction}, and \textit{\textbf{I}mpact}. Our study sheds light on the need to improve the quality of documenting refactoring types, which is considered one of the recommended dimensions to include in refactoring documentation.} In this paper, we formulate the prediction of refactoring operation types as a multi-class classification problem. Our solution relies on textual mining of commit messages to extract the corresponding features (\textit{i.e.}, keywords) that best represent each class (\textit{i.e.}, refactoring type) in order to automatically predict, for a given commit, the type of refactoring being applied and documented. To build our model, we collected a dataset of commits that are known to contain the types of refactorings considered in this study. To this end, we use Refactoring Miner \citep{tsantalis2018accurate} to extract, from different open source projects, commits that are known to contain a refactoring operation. In this way, we collected a dataset of 5,004\xspace instances from 800\xspace projects; each instance represents a commit message and a refactoring operation whose type is one of the 6\xspace method-level types considered in this study, namely \textit{Extract Method}, \textit{Inline Method}, \textit{Move Method}, \textit{Pull-up Method}, \textit{Push-down Method}, and \textit{Rename Method}.
Then, we use the N-Gram technique \citep{manning1999foundations} to identify relevant features for each of the classes, which we then use to train various classifiers, including Random Forest, Logistic Regression, and Gradient Boosted Machine. Our key findings show that there is no uniform accuracy across all refactoring types, \textit{i.e.,}\xspace some refactorings can achieve up to 90\% in terms of F-measure, while others achieve 35\% at best. This indicates that the documentation of some refactoring types, such as \textit{Rename Method}, is more likely to follow best documentation practices, while other types, such as \textit{Move Method}, \textit{Pull-up Method}, and \textit{Push-down Method}, are harder to distinguish and tend to be more ambiguous. \textcolor{black}{This paper makes the following contributions:} \begin{enumerate} \item \textcolor{black}{ We identify the common keywords and phrases developers utilize when describing their refactoring activity in the commit messages. Since there is also a significant amount of ambiguity in the way words are used, our work can reduce this confusion, and the keywords that we discuss in this work are a strong starting point for determining what phrases should be used to reduce ambiguity and improve the quality of refactoring documentation. To the best of our knowledge, this is the first work that attempts to assess the quality of the documentation of refactoring types using text mining techniques.} \item \textcolor{black}{We formulate the refactoring type prediction as a multi-class classification problem based on commit message mining, and we compare various models.} \item \textcolor{black}{We evaluate the performance of our prediction model by comparing it against a baseline keyword-based approach that relies on matching messages with known refactoring types \citep{kim2014empirical,zhangpreliminary18,Ratzinger:2008:RRS:1370750.1370759,stroggylos2007refactoring,citeulike:2881658,soares2013comparing}}.
\item \textcolor{black}{We discuss cases of inconsistency between the documentation of the refactoring actions and the refactoring types that were actually performed in the source code.} \item \textcolor{black}{We deploy our model as a lightweight web-service that is publicly available for software engineers and practitioners. We publicly provide our best model and the dataset that served as the \textit{ground-truth}, for replication and extension purposes \citep{SAR2019WEB}.} \end{enumerate} The rest of this paper is structured as follows. We review existing studies related to refactoring documentation and commit classification in Section \ref{sec:RelatedWork}. Next, in Section \ref{sec:methodology}, we detail our classification methodology, including the data collection and preprocessing, and the choice of the classification algorithms. Then, we evaluate our approach in Section \ref{sec:results}, and report a comparative study between various classifiers, while identifying the most influential features. In Section \ref{sec:Implication}, we report the implications of our study, and in Section \ref{sec:threats}, we discuss the threats to our work's validity. Finally, we conclude and describe our future work in Section \ref{sec:conclusion}.
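To make the pipeline concrete, the following minimal sketch tokenizes commit messages into uni- and bigram features and trains a small classifier on a toy labeled sample. The training messages are invented for illustration, and a multinomial Naive Bayes stands in for the ensemble models evaluated later; this is a sketch of the idea, not our actual implementation.

```python
import math
import re
from collections import Counter

def ngrams(message, n_max=2):
    """Lowercase a commit message and extract uni- and bigram features."""
    tokens = re.findall(r"[a-z]+", message.lower())
    feats = list(tokens)
    for n in range(2, n_max + 1):
        feats += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return feats

class NaiveBayes:
    """Multinomial Naive Bayes with add-one smoothing (a stand-in for the
    Random Forest / Logistic Regression / GBM models evaluated later)."""
    def fit(self, messages, labels):
        self.classes = sorted(set(labels))
        self.prior = Counter(labels)
        self.counts = {c: Counter() for c in self.classes}
        for msg, lab in zip(messages, labels):
            self.counts[lab].update(ngrams(msg))
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, message):
        feats = ngrams(message)
        def log_posterior(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return math.log(self.prior[c]) + sum(
                math.log((self.counts[c][f] + 1) / total) for f in feats)
        return max(self.classes, key=log_posterior)

# Toy training sample (hypothetical messages; the real corpus has 5,004 commits)
train = [
    ("rename getter method for readability", "Rename Method"),
    ("renamed confusing method names", "Rename Method"),
    ("extract duplicated code into helper method", "Extract Method"),
    ("extracted long method into smaller ones", "Extract Method"),
    ("move method to utility class", "Move Method"),
    ("moved methods closer to the data they use", "Move Method"),
]
clf = NaiveBayes().fit(*zip(*train))
print(clf.predict("rename method to clarify intent"))  # -> Rename Method
```

Even on this toy sample, the discriminative n-grams (\say{rename}, \say{extract \ldots into}, \say{move \ldots to}) carry the signal, which is precisely the property our feature analysis exploits at scale.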
\section{Related Work} \label{sec:RelatedWork} \begin{table*} \centering \caption{\textcolor{black}{Related Work in Commit Classification Using Machine Learning.}} \label{Table:Related_Work_in_Commit_Classification} \begin{sideways} \begin{adjustbox}{width=\textheight,totalheight=\textwidth,keepaspectratio} \begin{tabular}{lcclllll}\hline \toprule \bfseries Study & \bfseries Year & \bfseries Binary / Multi-class & \bfseries Category & \bfseries Machine Learning & \bfseries Training Size & \bfseries Result \\ \midrule \multirow{2}{*}{\citep{article}} & \multirow{2}{*}{2006} & \multirow{2}{*}{No/Yes} & Swanson's category & \multirow{2}{*}{Naive Bayes} & \multirow{2}{*}{400} & \multirow{2}{*}{Accuracy: 70\%} \\ & & & Administrative & \\ \hline \multirow{3}{*}{\citep{5090025}} & \multirow{3}{*}{2009} & \multirow{3}{*}{No/Yes} & \multirow{2}{*}{Swanson's category} & J48 / Naive Bayes / SMO & \multirow{3}{*}{2000} & \multirow{2}{*}{F-measure: 51\%} \\ & & & \multirow{2}{*}{Feature Addition} & KStar / IBk / JRip / ZeroR & & \multirow{2}{*}{Accuracy: 52\%} \\ & & & & Non-Functional \\ \hline \multirow{2}{*}{\citep{Hindle:2011:ATN:1985441.1985466}} & \multirow{2}{*}{2011} & \multirow{2}{*}{No/Yes} & \multirow{2}{*}{Non-Functional} & rule / decision trees / vector space & \multirow{2}{*}{Not mentioned} & Receiver Operating \\ & & & & SVM / CLR / HOMER / BR & & Characteristic up to 80\% \\ \hline \citep{Levin:2017:BAC:3127005.3127016} & 2017 & No/Yes & Swanson's category & J48 / GBM / RF & 1151 & Accuracy: 76\% \\ \hline \multirow{3}{*}{\citep{honel2019importance}} & \multirow{3}{*}{2019} & \multirow{3}{*}{No/Yes} & \multirow{3}{*}{Swanson's category} & LssvmRadical / SVM / GBM & \multirow{3}{*}{1151} & \multirow{3}{*}{Accuracy: up to 89\%} \\ & & & & xgbTree / LDA / MDA / NN / avNNet & & \\ & & & & C5.0 / RF / Naive Bayes / LogitBoost & & \\\hline \citep{gharbi2019classification} & 2019 & No/Yes & Swanson's category & DT / kNN / RF / MLP & 5000 & F-measure: 45.79\%\\ \hline 
\multirow{2}{*}{\citep{Jane2020enhancing}} & \multirow{2}{*}{2020} & \multirow{2}{*}{Yes/Yes} & binary: CMR vs non-CMR & \multirow{2}{*}{NB / LR / SVM / kNN} & \multirow{2}{*}{1529} & F-measure: 84\% \\ & & & multi-class: 12 refactoring types & & & F-measure: 71\% \\ \hline \multirow{2}{*}{\citep{alomar2020toward}} & \multirow{2}{*}{2020} & \multirow{2}{*}{Yes/Yes} & binary: SAR vs non-SAR & RF / LR / GBM / DJ / BPM & 1823 & F-measure: 98\% \\ & & & multi-class: Internal QA / External QA / code smell & SVM / LD-SVM / NN / AP & 1044 & F-measure: 93\% \\ \hline \multirow{2}{*}{\citep{alomar2021we}} & \multirow{2}{*}{2020} & \multirow{2}{*}{No/Yes} & multi-class: Internal QA / External QA / code smell & RF / LR / kNN / DT / SVC & \multirow{2}{*}{1702} & \multirow{2}{*}{F-measure: 87\%} \\ & & & Bug Fix / Functional & Multinomial Naive Bayes \\ \hline \textcolor{black}{\citep{aniche2020effectiveness}} & \textcolor{black}{2020} & \textcolor{black}{Yes/No} & \textcolor{black}{multi-class: 20 refactoring types} & \textcolor{black}{LR / NB / SVM / DT / RF / NN} & \textcolor{black}{2 million} & \textcolor{black}{F-measure: > 90\%} \\ \hline \textcolor{black}{\citep{marmolejos2021use}} & \textcolor{black}{2021} & \textcolor{black}{Yes/No} & \textcolor{black}{SAR vs non-SAR} & \textcolor{black}{BPM / AP / LR / GBM / NN} & \textcolor{black}{3000} & \textcolor{black}{F-measure: 96\%} \\ \hline \textcolor{black}{\multirow{2}{*}{\citep{alomar2021behind}}} & \multirow{2}{*}{\textcolor{black}{2021}} & \multirow{2}{*}{\textcolor{black}{No/Yes}} & \textcolor{black}{multi-class: Internal QA / External QA / code smell} & \textcolor{black}{RF / LR / kNN / DT / SVC } & \multirow{2}{*}{\textcolor{black}{1702}} & \multirow{2}{*}{\textcolor{black}{F-measure: 87\%}} \\ & & & \textcolor{black}{Bug Fix / Functional} & \textcolor{black}{Multinomial Naive Bayes} \\ \bottomrule \end{tabular} \end{adjustbox} \end{sideways} \vspace{-.2cm} \end{table*} In this section, we report studies related to
developers' perception of refactoring and its documentation, along with the current state-of-the-art studies related to commit message classification. \subsection{Refactoring Documentation} A number of studies have focused recently on the identification and detection of refactoring activities during the software life-cycle. One of the common approaches to identify refactoring activities is to analyze the commit messages in versioned repositories. \citet{stroggylos2007refactoring} searched for words stemming from the verb \textit{\say{refactor}} such as \say{refactoring} or \say{refactored} to identify refactoring-related commits. \citet{Ratzinger:2008:RRS:1370750.1370759,citeulike:2881658} also used a similar keyword-based approach to detect refactoring activity between a pair of program versions to identify whether a transformation contains refactoring. The authors identified refactorings based on a set of keywords detected in commit messages, focusing on the following 13 terms in their search: \textit{refactor, restruct, clean, not used, unused, reformat, import, remove, replace, split, reorg, rename, and move}. Later, \citet{murphy2012we} replicated Ratzinger's experiment in two open source systems using Ratzinger's 13 keywords. They conclude that commit messages in version histories are unreliable indicators of refactoring activities, because developers do not consistently document refactoring activities in the commit messages. In another study, \citet{soares2013comparing} compared and evaluated three approaches, namely, manual analysis, commit message analysis (Ratzinger et al.'s approach \citep{Ratzinger:2008:RRS:1370750.1370759,citeulike:2881658}), and dynamic analysis (the SafeRefactor approach \citep{Soares2009safetytool}) to analyze refactorings in open source repositories, in terms of behavioral preservation.
The authors found, in their experiment, that manual analysis achieves the best results in this comparative study and is considered the most reliable approach in detecting behavior-preserving transformations. In another study, \citet{kim2014empirical} surveyed 328 professional software engineers at Microsoft to investigate when and how they do refactoring. They first identified refactoring branches and then asked developers about the keywords that are usually used to mark refactoring events in commit messages. When surveyed, the developers mentioned several keywords to mark refactoring activities. The authors then matched the top ten refactoring-related keywords identified from the survey against the commit messages to identify refactoring commits from version histories. Using this approach, they found that 94.29\% of commits do not contain any of the keywords, and only 5.76\% of commits included refactoring-related keywords. Prior works \citep{zhangpreliminary18,alomar2019can} have explored how developers document their refactoring activities in commit messages; this activity is called Self-Admitted Refactoring or Self-Affirmed Refactoring (SAR). In particular, SAR indicates developers' explicit documentation of refactoring operations intentionally introduced during a code change. The existence of such patterns unlocks more studies that question the developer's perception of quality attributes (\textit{e.g.,} coupling, complexity), typically used in recommending refactoring. For instance, \citet{alomar2019impact} identified which quality models are more in-line with the developer's vision of quality optimization when they explicitly mention in the commit messages that they refactor to improve these quality attributes. This study shows that, although a variety of structural metrics can represent internal quality attributes, not all of them can measure what developers consider to be an improvement in their source code.
\textcolor{black}{Furthermore, \citep{alomar2021behind} explored the relationship between developers' experience and refactoring. Their main findings show that refactoring contributors who frequently refactor the code tend to document less than developers who occasionally perform refactoring.} \subsection{Commit Classification} \citep{5090025} proposed an automated technique to classify commits into maintenance categories using seven machine learning techniques. To define their classification schema, they extended Swanson's categorization \citep{Swanson:1976:DM:800253.807723} with two additional changes: Feature Addition and Non-Functional. They observed that no single classifier is the best. \textcolor{black}{\citep{Hindle:2011:ATN:1985441.1985466} conducted} another experiment that classifies history logs, labeling commits with the non-functional requirements (NFRs) they address. Since a commit may be assigned to multiple NFRs, they used three different learners for this purpose, along with several single-class machine learners. \citep{article} had a similar idea to \citep{5090025} and extended the Swanson categorization hierarchically. They, however, selected one classifier (\textit{i.e.,} Naive Bayes) for their classification of code transactions. Moreover, maintenance requests have been classified using two different machine learning techniques (\textit{i.e.,} Naive Bayesian and Decision Tree) in \citep{5561540}. \citep{McMillan:2011:CSA:2117694.2119646} explored three popular learners to categorize software applications for maintenance. Their results show that SVM is the best-performing machine learner for this categorization task. \citep{Levin:2017:BAC:3127005.3127016} automatically classified commits into three main maintenance activities using three classification models, namely J48, Gradient Boosting Machine (GBM), and Random Forest (RF).
They found that the RF model outperforms the two other models (accuracy: 76\% versus 70\% and 72\%). Recently, a replicated study \citep{honel2019importance} of \citep{Levin:2017:BAC:3127005.3127016} introduced the code density of a commit to study the purpose of a change. Using code-density-based classification, they achieved up to 89\% accuracy for cross-project commit classification using the LogitBoost classifier. In another study, \citep{gharbi2019classification} proposed a multi-label active learning-based approach to classify commit messages into maintenance categories. Their experimental results showed that the proposed approach achieved an F-measure of 45.79\%. \citep{Jane2020enhancing} developed a model to first detect refactoring commit messages from non-refactoring commits, and then differentiate between 12 refactoring types. Their findings showed that Naive Bayes and SVM achieved the best performance with an F-measure of 84\% and 71\% for the binary and multiclass classification problems, respectively. \textcolor{black}{Another experiment that predicts refactoring was conducted using quality metrics. \citep{aniche2020effectiveness} used a machine learning approach that involves predicting refactoring using code, process, and ownership metrics. The resulting models predict 20 different refactorings at the class, method, and variable levels with an accuracy often higher than 90\%}. More recently, \citep{alomar2020toward} proposed an approach to classify self-affirmed refactoring in commit messages. Their results show that their approach is able to accurately classify SAR commits with an accuracy of 98\% and 93\% for two-class and multiclass classification methods, respectively, outperforming the two state-of-the-art approaches, \textit{i.e.,}\xspace the keyword-based and the random classifier.
In a follow-up work, \citep{alomar2021we} performed a multi-class classification to categorize these commits into three categories, namely, Internal Quality Attribute, External Quality Attribute, and Code Smell Resolution, along with the traditional Bug Fix and Functional categories. This classification challenges the original definition of refactoring, being exclusive to improving software design and fixing code smells. \textcolor{black}{\citep{marmolejos2021use} proposed a framework to identify refactoring documentation by using different techniques, such as feature hashing and feature selection (Chi-squared and Fisher score), and five machine learning algorithms. As per their results, the combination of Chi-Squared with Bayes point machine and Fisher score with Bayes point machine could be the most efficient when it comes to automatically identifying refactoring documentation, with an F-measure of 96\%.} We summarize these state-of-the-art studies in Table~\ref{Table:Related_Work_in_Commit_Classification}. Our work lies at the intersection of the above-mentioned studies, as we \textcolor{black}{leverage} commit classification techniques to automatically classify refactoring documentation. While prior studies searched for the existence of refactoring documentation, we go further by checking whether the granularity of the documentation can reach the level of distinguishing the types of executed refactorings.
\textcolor{black}{The refactoring types that we want to identify are the following:} \begin{itemize} \item \textcolor{black}{\textit{Extract Method.} Creating a new method by extracting a selection of code from inside the body of an existing method.} \item \textcolor{black}{\textit{Inline Method.} Replacing calls and usages of a method with its body, and potentially removing its declaration.} \item \textcolor{black}{\textit{Move Method.} Changing the declaration of a method from one class to another.} \item \textcolor{black}{\textit{Pull-up Method.} Moving up a method in the inheritance chain from a child class to a parent class.} \item \textcolor{black}{\textit{Push-down Method.} Moving down a method in the inheritance chain from a parent class to a child class.} \item \textcolor{black}{\textit{Rename Method.} Changing the name of a method identifier to a different one.} \end{itemize} For the sake of consistency, we chose types that are all applied at the same level, \textit{i.e.,}\xspace the method level. Our approach can also be applied to class-level or package-level refactorings. In the next section, we detail the design of our proposed approach. \section{Study Design} \label{sec:methodology} The aim of our work is to reveal the extent to which a clear documentation of refactorings can help in correctly classifying them. The manual search for such a correlation between refactoring types and their corresponding proper description can be time-consuming and error-prone. We therefore turn to solutions that can properly discriminate and resolve textual ambiguity, imitating human decision making \citep{murphy2012machine}, rather than other, simpler techniques such as string matching \citep{Ratzinger:2008:RRS:1370750.1370759,stroggylos2007refactoring,soares2013comparing,ratzinger2005improving}, which can only solve the same problem to some extent.
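To make the limitation of string matching concrete, the sketch below applies the 13 keywords of Ratzinger et al.\ quoted in the previous section to flag refactoring-related commits. This is a hypothetical illustration, not part of our approach; the commit messages are invented.

```python
# Hypothetical sketch of the simpler string-matching alternative discussed
# above, using the 13 keywords reported by Ratzinger et al. The commit
# messages below are invented for illustration.
RATZINGER_KEYWORDS = [
    "refactor", "restruct", "clean", "not used", "unused", "reformat",
    "import", "remove", "replace", "split", "reorg", "rename", "move",
]

def is_refactoring_commit(message: str) -> bool:
    """Flag a commit as refactoring-related if any keyword occurs in it."""
    lowered = message.lower()
    return any(keyword in lowered for keyword in RATZINGER_KEYWORDS)

print(is_refactoring_commit("Refactored the parser for readability"))  # True
print(is_refactoring_commit("Fix NPE in login handler"))               # False
# The weakness: "remove" also matches non-refactoring changes.
print(is_refactoring_commit("Remove obsolete login feature"))          # True
```

As the last call shows, keyword matching cannot resolve the ambiguity of words like \say{remove}, which motivates the learning-based formulation we adopt next.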
We opt for supervised learning, where predictors (\textit{i.e.,}\xspace independent variables) are developed to decide about the dependent variable's value, which, in our case, refers to the commit message classification. Thus, our dependent variable is represented by the refactoring types to be predicted. The independent variables will be extracted from the keywords used by developers to describe each type of refactoring in their commit messages. Therefore, we need to first set up a dataset that can characterize each class adequately. Since our aim is to investigate which types of refactoring are more adequately documented than others, we formulate this problem as a multiclass classification problem. Hence, when we build our dataset, we choose commits such that each contains one type of refactoring being performed. Then, we provide, for each class (\textit{i.e.,}\xspace refactoring type), a set of commit messages that are meant to document it. In the following, we elaborate on the technical details of our adopted classification technique, starting from the data collection, through its preparation, and finally the training and validation of the models. The overview of our approach is depicted in Figure~\ref{fig:approach_overview}. \begin{figure*}[htbp] \centering \includegraphics[width=1\textwidth]{ASEj_Approach_v2.png} \caption{Overall Prediction Framework.} \label{fig:approach_overview} \end{figure*} \subsection{Overall Classification Framework} In a nutshell, the goal of our work is to automatically identify and then classify commit messages containing refactoring documentation. Our approach takes a commit message as input and classifies it into one of six common method-level refactoring operations: \textit{Extract Method}, \textit{Inline Method}, \textit{Move Method}, \textit{Pull-up Method}, \textit{Push-down Method}, and \textit{Rename Method}. The overall framework of our approach is depicted in Figure~\ref{fig:approach_overview}.
We formulate a two-phased approach that consists of a model building phase and a prediction phase. In the model building phase, our goal is to build a model from a corpus of real-world documented refactoring operations (\textit{i.e.,} commit messages). In the prediction phase, the built model will be used to predict the type of a given refactoring-related commit message. Our framework takes as input for the training procedure commit messages extracted from different projects, along with their ground-truth categories obtained by manual inspection. Based on this input, the commit messages are preprocessed, allowing for informative featurization. Thereafter, for each commit message, we extract features (\textit{i.e.,} words) to create a structured feature space. Then, we use the extracted features to build the training set. In total, \textcolor{black}{we experimented with 9 commonly} used classifiers to evaluate our prediction model, namely, Gradient Boosted Machine (GBM) \citep{friedman2001greedy}, Support Vector Machine (SVM) \citep{wu2008top}, Locally Deep SVM (LD-SVM) \citep{jose2013local}, Averaged Perceptron Method (APM) \citep{collins2002discriminative}, Bayes Point Machine (BPM) \citep{herbrich2001bayes}, Logistic Regression (LR) \citep{andrew2007scalable}, Random Forest (RF) \citep{prinzie2008random}, Decision Jungle (DJ) \citep{shotton2013decision}, and Neural Network (NN) \citep{hansen1990neural}. We selected these classifiers as they are commonly used in previous commit classification studies as well as several software engineering classification/prediction problems \citep{Hindle:2011:ATN:1985441.1985466,Levin:2017:BAC:3127005.3127016,article,5090025,5561540,levin2019towards,honel2019importance}, as outlined in Table \ref{Table:Related_Work_in_Commit_Classification}. After training all models, we use a testing set to evaluate their performance.
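Several of the learners above (\textit{e.g.,}\xspace SVM, APM, BPM) are inherently binary and, as detailed later, are adapted to our six classes with a one-vs-all strategy. The toy sketch below illustrates that idea only: one scorer per class, with the most confident class winning. The cue-word scorers are invented stand-ins for real binary classifiers, and only three of the six classes are shown.

```python
# Toy one-vs-all sketch (an assumption for illustration; in our experiments
# the one-vs-all wrapping is done by the ML platform around its binary
# learners). Each "binary learner" here is a scorer counting cue words.
CUE_WORDS = {
    "Extract Method": ["extract"],
    "Inline Method": ["inline"],
    "Rename Method": ["rename"],
}

def one_vs_all_predict(message: str) -> str:
    lowered = message.lower()
    # Score the message against every "class vs. rest" model and keep the
    # class whose model is most confident.
    scores = {
        label: sum(lowered.count(word) for word in words)
        for label, words in CUE_WORDS.items()
    }
    return max(scores, key=scores.get)

print(one_vs_all_predict("Extract helper method from parser"))  # Extract Method
```

A real binary learner would of course return a calibrated confidence rather than a word count, but the arbitration between the per-class models is the same.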
Since the model has already learned the vocabulary of N-Grams (discussed in Section~\ref{sec:n-gram}) and their weights from the training dataset, we extract features from the test data based on that vocabulary and weights, and input them to the model. Finally, the classifier will output the predicted label for each tested commit message. \subsection{Commit Classification} Our solution design has six main phases: (1) data collection and refactoring detection, (2) data labeling, (3) text cleaning and preprocessing, (4) feature extraction using N-Gram, (5) model training and building, and (6) model evaluation. Since a commit message is written in plain text, we follow the approach provided by \citep{kowsari2019text,alomar2020toward}, which discusses recent trends in text classification techniques and algorithms. \subsubsection{Data Collection \& Refactoring Detection} To perform this study, we randomly selected 800 projects, which were curated open-source Java projects hosted on GitHub as described in Table~\ref{Table:DATA_Overview}. These curated projects were selected from an available dataset by \citep{munaiah2017curating}, while verifying that they were written in Java, the only language supported by Refactoring Miner. The authors of this dataset classified \say{well-engineered software projects} based on the projects' use of software engineering practices such as documentation, testing, and project management. Additionally, these projects are non-forked (\textit{i.e.,}\xspace not cloned from other projects), as forked projects may impact our conclusions by introducing duplicate code and documents. Also, 74.6\% of the projects had their most recent commit within the last four years. The 800 selected projects analyzed in this study have a total of 748,001 commits, and a total of 711,495 refactoring operations from 111,884 refactoring commits.
Additionally, these projects contain 732 commits and involve 19 developers on average \textcolor{black}{(corresponding to a median of 346.5 commits and 7 developers)}. An overview of the projects is provided in Table~\ref{Table:DATA_Overview}. To extract the entire refactoring history of each project, we use Refactoring Miner because it achieved the highest accuracy in detecting refactorings compared to the state-of-the-art available tools, with a precision of 98\% and recall of 87\% \citep{tsantalis2018accurate,silva2016we}, along with being suitable for our study, which requires a high degree of automation in data mining. \begin{table}[h] \begin{center} \caption{\textcolor{black}{Projects Overview.}} \label{Table:DATA_Overview} \begin{tabular}{lr}\hline \toprule \bfseries Item & \bfseries Count \\ \midrule Total projects & 800 \\ Total commits & 748,001 \\ Refactoring commits & 111,884 \\ Refactoring operations & 711,495 \\ \midrule \multicolumn{2}{c}{\textbf{\textit{Considered Projects - Refactored Code Elements}}}\\ \bfseries Code Element & \bfseries \# of Refactorings \\ \midrule Method & 302,929 \\ Class & 228,974 \\ Attribute & 80,509 \\ Parameter & 42,992 \\ Variable & 28,765 \\ Package & 2,380 \\ Interface & 1,742 \\ \bottomrule \end{tabular} \end{center} \end{table} \subsubsection{Data Labeling} Our goal is to provide the classifier with sufficient commits that represent the refactoring operations considered in this study. Since the number of candidate commits to classify is large, we cannot manually process them all, and so we need to randomly sample a subset while making sure it equitably represents the featured classes, \textit{i.e.,} refactoring types.
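The class-balanced random sampling just described can be sketched as follows. This is a simplified illustration: the commit identifiers are synthetic, only two of the six classes are shown, and in practice every sampled commit is still manually vetted before it enters the dataset.

```python
import random

# Sketch of class-balanced sampling: draw an equal number of commits per
# refactoring type, without replacement. Commit identifiers are synthetic.
def balanced_sample(commits_by_type: dict, per_class: int, seed: int = 42) -> list:
    rng = random.Random(seed)  # fixed seed for reproducibility
    sample = []
    for refactoring_type, commits in commits_by_type.items():
        # Sampling without replacement: each commit appears at most once.
        chosen = rng.sample(commits, per_class)
        sample.extend((refactoring_type, commit) for commit in chosen)
    return sample

pool = {
    "Extract Method": [f"commit-{i}" for i in range(100)],
    "Rename Method": [f"commit-{i}" for i in range(100, 200)],
}
dataset = balanced_sample(pool, per_class=10)
print(len(dataset))  # 20
```

With six classes and 834 commits per class, the same scheme yields the 5,004 balanced instances reported later in Table~\ref{Table:Instances per class (train, test)}.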
Since an imbalanced training dataset or class starvation (\textit{i.e.,} not having adequate instances of a certain class) could worsen the performance of the model \citep{Levin:2017:BAC:3127005.3127016,levin2019towards}, we make sure that the classes for the multiclass classification problem are equally distributed when preparing the data for the training (\textit{cf.,} Table~\ref{Table:Instances per class (train, test)}). The classification process was performed by the authors of the paper. To approximate the needed number of commits to add, we reviewed the thresholds used in the studies related to commit classification (see Table~\ref{Table:Related_Work_in_Commit_Classification}). The highest number of commits used in comparable studies was 5,000 commits \citep{gharbi2019classification}. Thus, we select a sample of 5,004 commits from 800 projects for each classification model. Below we detail the manual analysis of the data we use for our classification. To prepare the dataset for the multiclass classification, we first run Refactoring Miner \citep{tsantalis2018accurate} on the 800 open-source projects we presented in Table \ref{Table:DATA_Overview}, in order to identify all commits containing refactorings. Then, we filter them to only keep commits with at most one refactoring type. Next, we cluster them by the types of refactorings we selected for this study. For each cluster, we start the random sampling of potential commits to include in our training set. For each randomly \textcolor{black}{selected} commit, we manually read through its message to verify whether it contains any textual description of the refactoring. Any commit with no such textual description is discarded. \textcolor{black}{In our work, we discard the commits that do not contain any textual description of refactoring to narrow down the commit messages, eliminating the ones that are less likely to be classified as one of the refactoring types.
It is important to note that we removed these commit messages because (1) these commit messages do not contain enough information and do not describe the code change, and (2) we want to train the classifier on well-documented commit messages, and label commits that contain enough information about refactorings so that we can assess the quality of refactoring documentation.} An example of a commit that we retain in our dataset is illustrated in Figure \ref{fig:example}. An example of a discarded commit is one that documents a pull request, \textit{e.g.,}\xspace \say{\textit{Merge pull request \#6 from marcel-blonk/develop make map type handle interfaces correctly}}\footnote{Commit extracted from sage-bionetworks/schema-to-pojo.}. Commits whose messages do not contain any kind of refactoring documentation would represent noise in our dataset. Such commits would have been kept if the problem were formulated as a binary identification of refactoring documentation, but this is out of the scope of our work. This process resulted in selecting 5,004 stratified samples, divided equally for each stratum. \textcolor{black}{It is worth noting that upon performing the manual inspection of a subset of commit messages, we noticed that developers mostly document refactoring when they perform one or very few refactoring operations. However, if developers performed multiple refactoring operations, they are unlikely to detail refactoring activity in the commit messages. Figure \ref{fig:example_multilabel_case} depicts an example of a commit message in which a developer stated that they performed only Extract refactoring operations.
Yet, when running the Refactoring Miner tool, it shows that there are 36 refactoring operations performed in this commit, namely, \textit{Extract Method}, \textit{Extract Superclass}, \textit{Pull up Attribute}, \textit{Pull up Method}, and \textit{Rename Method}.} \begin{table}[h] \begin{center} \caption{Number of Refactoring Instances per Class.} \label{Table:Instances per class (train, test)} \begin{adjustbox}{width=1.0\columnwidth,center} \begin{tabular}{lcccccc}\hline \toprule \bfseries Dataset & \bfseries Extract & \bfseries Inline & \bfseries Move & \bfseries Pull Up & \bfseries Push Down & \bfseries Rename \\ \midrule 5,004 instances & 834 & 834 & 834 & 834 & 834 & 834 \\ \bottomrule \end{tabular} \end{adjustbox} \end{center} \end{table} \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{Github_multilabel_case.PNG} \caption{\textcolor{black}{An example of multiple refactorings, and its corresponding documentation.}} \label{fig:example_multilabel_case} \end{figure} \subsubsection{Text Cleaning \& Preprocessing} After the data preparation phase, we applied a methodology similar to the one explained in \citep{kowsari2019text,kochhar2014automatic} for text pre-processing. In order for the commit messages to be classified into correct categories, they need to be preprocessed and cleaned, \textit{i.e.,}\xspace put into a format that the classification algorithms can process. This way, the noise will be removed, allowing for informative featurization. To extract features (\textit{i.e.,} words), we preprocess the text as follows: \begin{itemize} \item \textbf{Tokenization:} The goal of tokenization is to investigate the words in a sentence. The tokenization process breaks a stream of text into words, phrases, symbols, or other meaningful elements called tokens \citep{kowsari2019text}. In our work, we tokenize each commit by splitting the text into its constituent set of words.
We also split tokens on special characters (\textit{e.g.,} the string \say{package-level} would be separated into two tokens, \say{\textit{package}} and \say{\textit{level}}). \item \textbf{Lemmatization:} The lemmatization process either replaces the suffix of a word with a different one or removes the suffix of a word to get the basic word form (lemma). We opted to use lemmatization over stemming, as the lemma of a word is a valid English word \citep{lane2019natural}. In our work, the lemmatization process involves sentence separation, part-of-speech identification, and generating dictionary form. We split the commit messages into sentences, since input text could constitute a long chunk of text. The part-of-speech identification helps in filtering words used as features that aid in key-phrase extraction. Lastly, since the word could have multiple dictionary forms, only the most probable form is generated. \item \textbf{Stop-Word Removal:} Stop words (\textit{i.e.,} common English words such as \say{is}, \say{are}, and \say{if}) are removed since they do not play any role as features for the classifier \citep{saif2014stopwords}. \item \textbf{Capitalization Normalization:} Since text can contain diverse capitalization, which can be problematic when classifying large commits, all the words in the commit messages are converted to lower case and all verb contractions are expanded. \item \textbf{Noise Removal:} Special characters and numbers are removed since they can degrade the classification. More specifically, we remove all numeric characters, unique and duplicate special characters, email addresses and URLs. \end{itemize} \subsubsection{Feature Extraction Using N-Gram} \label{sec:n-gram} After cleaning and preprocessing the text, we apply feature extraction to extract only the most useful information from text strings to differentiate the classes in our classification problem.
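Before moving on, the cleaning steps above can be sketched as follows. This is a simplified illustration: the stop-word list is a small subset, and the lemmatization and part-of-speech steps are omitted for brevity.

```python
import re

# Simplified sketch of the cleaning steps above: capitalization
# normalization, noise removal, tokenization, and stop-word removal.
# The stop-word list is an illustrative subset; lemmatization is omitted.
STOP_WORDS = {"is", "are", "if", "the", "a", "to", "and", "in"}

def preprocess(commit_message: str) -> list:
    # Capitalization normalization: lower-case everything.
    text = commit_message.lower()
    # Noise removal: drop URLs, email addresses, and numeric characters.
    text = re.sub(r"https?://\S+|\S+@\S+|\d+", " ", text)
    # Tokenization: keep runs of letters, which also splits tokens on
    # special characters (e.g., "package-level" -> "package", "level").
    tokens = re.findall(r"[a-z]+", text)
    # Stop-word removal.
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("Refactor: extract 2 methods from package-level code"))
# ['refactor', 'extract', 'methods', 'from', 'package', 'level', 'code']
```

The surviving tokens are what the featurization step described next turns into weighted unigram and bigram features.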
In particular, we selected the N-Gram technique for feature extraction. An N-Gram is a contiguous sequence of \textit{n} words occurring in a text, and it can be used as a feature to represent that text \citep{kowsari2019text}. In general, an N-Gram carries more semantics than an isolated word. Some of the keywords (\textit{e.g.,} \say{\textit{extract}}) do not provide much information when used on their own. However, when collecting N-Grams from a commit message (\textit{e.g.,} \textit{Refactor createOrUpdate method in MongoChannelStore to extract methods and make code more readable}), the keyword \say{extract} clearly indicates that this refactoring commit belongs to the \textit{Extract Method} refactoring. In our classification, we use bigrams, since they are commonly used to enhance the performance of text classification \citep{tan2002use}, and we select Fisher Score filter-based feature selection \citep{duda2012pattern,gu2012generalized} to \textit{featurize} text and manage the size of the text feature vector, \textcolor{black}{following} \citep{kochhar2014automatic}. As for the weighting function, we used the standard Term Frequency-Inverse Document Frequency (TF-IDF) \citep{manning2008schu} as it is commonly used in the literature \citep{gharbi2019classification,ouni2016multi,lin2013empirical,le2015rclinker}. The value for each N-Gram is proportional to its TF score multiplied by its IDF score. Thus, each preprocessed word in the commit message is assigned a value which is the weight of the word computed using this weighting scheme. TF-IDF gives greater weight (\textit{i.e.,} value) to words which occur frequently in fewer documents rather than to words which occur frequently in many documents. \subsubsection{Model Training and Building} In this phase, we performed the 10-fold cross-validation technique to assess the variability and reliability of the classifier. Specifically, for each of the classification methods, we combined the commit messages into a single large dataset.
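To make the bigram and TF-IDF weighting scheme above concrete, the minimal sketch below uses the plain $tf \times \log(N/df)$ variant on two invented commit-message token lists; the actual feature space in our experiments is built by the ML platform.

```python
import math
from collections import Counter

# Minimal TF-IDF sketch with unigram + bigram features, using the plain
# tf * log(N / df) weighting. The two token lists are invented examples.
def bigrams(tokens):
    return [" ".join(pair) for pair in zip(tokens, tokens[1:])]

def tf_idf(corpus):
    docs = [tokens + bigrams(tokens) for tokens in corpus]
    n_docs = len(docs)
    # Document frequency of each unigram/bigram feature.
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({term: count * math.log(n_docs / df[term])
                        for term, count in tf.items()})
    return weights

corpus = [
    ["extract", "method", "from", "parser"],
    ["rename", "method", "for", "clarity"],
]
w = tf_idf(corpus)
# "extract" occurs in one of the two documents, "method" in both, so the
# rarer, more discriminative term receives the higher weight.
print(w[0]["extract"] > w[0]["method"])  # True
```

This illustrates exactly the property exploited by the classifier: class-specific terms such as \say{extract} dominate ubiquitous ones such as \say{method}.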
Then, we split the dataset into ten folds, where each fold contained an equal proportion of commit messages. Thereafter, we performed ten evaluation rounds with a different testing dataset, in which nine folds were used as the training dataset and the remaining fold as the testing dataset. We aggregated the results of the ten evaluation rounds and reported the average performance for each classifier. \subsubsection{Classifier Selection and Model Evaluation} Selecting the proper classifier for optimal classification of the commits is a rather challenging task \citep{fernandez2014we}. Best practices suggest that developers properly document their commits by providing a commit message along with every commit they make to the repository. These commit messages are typically written using natural language, and generally convey some descriptive information about the commit changes they represent. In this study, we are dealing with a multiclass classification problem since the commit messages are categorized into six different types. Since we have a predefined set of categories (\textit{i.e.,}\xspace refactoring types), our approach relies on supervised machine learning algorithms to assign each commit message to one category. Since it is very important to come up with an optimal classifier that can provide satisfactory results, several studies have compared various classifiers such as K-Nearest Neighbor (KNN), Naive Bayes Multinomial, Gradient Boosting Machine (GBM), and Random Forest (RF) in the context of commit classification into similar categories \citep{Levin:2017:BAC:3127005.3127016,levin2019towards,kochhar2014automatic}. These studies found that Random Forest (RF) often achieves high performance. We compared the classifiers in our study using common statistical measures of classification performance (\textit{precision, recall, and F-measure}), based on Azure Machine Learning (Azure ML) \citep{mund2015microsoft}.
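The 10-fold cross-validation protocol described above can be sketched as follows. This is a schematic with a dummy scorer; in the real evaluation, each round fits one of the nine classifiers on the nine training folds and scores it on the held-out fold.

```python
# Sketch of 10-fold cross-validation: each fold serves exactly once as the
# test set while the other nine form the training set, and the per-fold
# scores are averaged. `train_and_score` is a stand-in for fitting and
# evaluating one of the classifiers.
def ten_fold_indices(n_samples, k=10):
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for test_fold in range(k):
        test_idx = folds[test_fold]
        train_idx = [i for f in range(k) if f != test_fold for i in folds[f]]
        yield train_idx, test_idx

def cross_validate(n_samples, train_and_score, k=10):
    scores = [train_and_score(train, test)
              for train, test in ten_fold_indices(n_samples, k)]
    return sum(scores) / len(scores)

# Dummy scorer: fraction of the data used for training (always ~9/10 here).
avg = cross_validate(5004, lambda tr, te: len(tr) / 5004)
print(round(avg, 2))  # 0.9
```

Averaging over the ten rounds is what gives the per-classifier scores reported in Table~\ref{Table:ClassifierScores_Details}.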
It is important to note that the calculation of F-measure for multiclass classification is not supported by Azure ML. Thus, we compute F-measure using the following formula: \begin{equation} F = 2 \times \left(\frac{Precision \times Recall} {Precision + Recall}\right) \end{equation} where Precision (P) and Recall (R) are calculated as follows: \begin{equation} P= \frac{tp}{tp+fp} , \;\;\;\;\;\;\; R= \frac{tp}{tp+fn} \end{equation} It is worth noting that a few models that we consider are inherently binary classifiers. In order to adjust for multiclass classification, each classifier applies the One-vs-All strategy for problems that require multiple output classes \citep{Lorena2009}. Thus, to ensure fairness, we use the One-vs-All strategy for multiclass classification when using the following five classifiers: Gradient Boosted Machine (GBM), Support Vector Machine (SVM), Locally Deep SVM (LD-SVM), Averaged Perceptron Method (APM), and Bayes Point Machine (BPM). The remaining classifiers considered in this study are: Logistic Regression (LR), Random Forest (RF), Decision Jungle (DJ), and Neural Network (NN). Our experiment is conducted using the Microsoft Azure Machine Learning platform (Azure ML) \citep{mund2015microsoft}, as it provides a built-in web service once the classification models are deployed. \textcolor{black}{We provide, in Table \ref{Table:Parameters}, the default parameter values of the classification algorithms in our study for replicability purposes.} \begin{table}[h!]
\centering \caption{Default Parameter Values for the Classification Algorithms.} \label{Table:Parameters} \begin{adjustbox}{width=1.0\textwidth,center} \begin{tabular}{@{}llll@{}} \toprule \multicolumn{1}{l}{\textbf{Algorithm}} & \multicolumn{1}{l}{\textbf{Parameter}} & \multicolumn{1}{l}{\textbf{Description}} & \multicolumn{1}{l}{\textbf{Default Value}} \\ \midrule \multirow{4}{*}{Random Forest} & n\_estimators & Number of decision trees & 8 \\ & max\_depth & Maximum depth of the decision trees & 32 \\ & n\_samples\_leaf & Number of random splits per node & 128 \\ & min\_samples\_split & Minimum number of samples per leaf node & 1 \\ \midrule \multirow{4}{*}{Logistic Regression} & optimiz\_tol & Optimization tolerance & 1E-07 \\ & L1\_weight & L1 regularization weight & 1 \\ & L2\_weight & L2 regularization weight & 1 \\ & memory\_L\_BFGS & Memory size for L-BFGS & 20 \\ \midrule \multirow{4}{*}{Gradient Boosted Machine} & max\_n\_leaf & Maximum number of leaves per tree & 20 \\ & min\_samples\_leaf & Minimum number of samples per leaf node & 10 \\ & learning\_rate & Learning rate & 0.2 \\ & n\_tree & Number of trees constructed & 100 \\ \midrule \multirow{4}{*}{Decision Jungle} & n\_estimators & Number of decision directed acyclic graphs & 8 \\ & max\_depth & Maximum depth of the decision directed acyclic graphs & 32 \\ & max\_width & Maximum width of the decision directed acyclic graphs & 128 \\ & n\_optimiz & Number of optimization steps per decision directed acyclic graphs layer & 2048 \\ \midrule \multirow{2}{*}{Support Vector Classification} & n\_iter & Number of iterations & 1 \\ & Lambda & Lambda & 0.001 \\ \midrule \multirow{6}{*}{Locally Deep SVM} & max\_depth & Depth of the tree & 3 \\ & lam\_weight & Lambda weight & 0.1 \\ & n\_theta & Lambda Theta & 0.01 \\ & n\_theta\_Prime & Lambda Theta Prime & 0.01 \\ & n\_sigmoid & Sigmoid sharpness & 1 \\ & n\_iter & Number of iterations & 15000 \\ \midrule \multirow{5}{*}{Neural Network} & n\_nodes & Number of hidden
nodes & 100 \\ & learning\_rate & The learning rate & 0.1 \\ & n\_learning\_rate & Number of learning iterations & 100 \\ & learning\_rate\_weights & Initial learning weights diameter & 0.1 \\ & momentum & Momentum & 0 \\ \midrule \multirow{2}{*}{Averaged Perceptron Method} & learning\_rate & Learning rate & 1 \\ & m\_iter & Maximum number of iterations & 10 \\ \midrule \multirow{1}{*}{Bayes Point Machine} & n\_training\_iter & Number of training iterations & 30 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \begin{table*}[h] \centering \caption{Performance of Each Model, in Terms of Precision (P), Recall (R), and F-measure (F1), per Refactoring Type (a set of 5,004 commits).} \label{Table:ClassifierScores_Details} \begin{adjustbox}{width=1.0\textwidth,center} \centering \begin{tabular}{lrrrlrrrlrrr} \hline \multicolumn{4}{|c|}{\textit{\textbf{Random Forest}}} & \multicolumn{4}{c|}{\textit{\textbf{Logistic Regression}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Gradient Boosted Machine}}} \\ \hline \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{l}{\textbf{P}} & \multicolumn{1}{l}{\textbf{R}} & \multicolumn{1}{l}{\textbf{F1}} \\ \hline Extract Method & 0.58 & 0.65 & \multicolumn{1}{r|}{0.62} & Extract Method & 0.63 & 0.64 & \multicolumn{1}{r|}{0.63} & Extract Method & 0.71 & 0.68 & 0.69 \\ Inline Method & 0.41 & 0.46 & \multicolumn{1}{r|}{0.44} & Inline Method & 0.43 & 0.48 & \multicolumn{1}{r|}{0.45} & Inline Method & 0.45 & 0.44 & 0.45 \\ Move Method & 0.57 & 0.67 & \multicolumn{1}{r|}{0.61} & Move Method & 0.57 & 0.61 & \multicolumn{1}{r|}{0.59} & Move Method & 0.61 & 0.66 & 0.63 \\ Pull Up Method & 0.41 & 0.31 & \multicolumn{1}{r|}{0.35}
& Pull Up Method & 0.41 & 0.38 & \multicolumn{1}{r|}{0.40} & Pull Up Method & 0.42 & 0.41 & 0.42 \\ Push Down Method & 0.42 & 0.32 & \multicolumn{1}{r|}{0.36} & Push Down Method & 0.40 & 0.36 & \multicolumn{1}{r|}{0.38} & Push Down Method & 0.44 & 0.41 & 0.42 \\ Rename Method & 0.89 & 0.92 & \multicolumn{1}{r|}{0.91} & Rename Method & 0.93 & 0.87 & \multicolumn{1}{r|}{0.90} & Rename Method & 0.91 & 0.94 & 0.93 \\ \hline \multicolumn{12}{l}{} \\ \hline \multicolumn{4}{|c|}{\textit{\textbf{Decision Jungle}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Support Vector Machine}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Locally Deep SVM}}} \\ \hline \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} \\ \hline Extract Method & 0.54 & 0.66 & \multicolumn{1}{r|}{0.59} & Extract Method & 0.55 & 0.56 & \multicolumn{1}{r|}{0.55} & Extract Method & 0.54 & 0.54 & 0.54 \\ Inline Method & 0.40 & 0.43 & \multicolumn{1}{r|}{0.42} & Inline Method & 0.38 & 0.39 & \multicolumn{1}{r|}{0.39} & Inline Method & 0.35 & 0.35 & 0.35 \\ Move Method & 0.58 & 0.73 & \multicolumn{1}{r|}{0.65} & Move Method & 0.50 & 0.51 & \multicolumn{1}{r|}{0.50} & Move Method & 0.47 & 0.46 & 0.47 \\ Pull Up Method & 0.39 & 0.21 & \multicolumn{1}{r|}{0.27} & Pull Up Method & 0.37 & 0.36 & \multicolumn{1}{r|}{0.36} & Pull Up Method & 0.34 & 0.38 & 0.36 \\ Push Down Method & 0.38 & 0.27 & \multicolumn{1}{r|}{0.31} & Push Down Method & 0.37 & 0.38 & \multicolumn{1}{r|}{0.37} & Push Down Method & 0.41 & 0.39 & 0.40 \\ Rename Method & 0.90 & 0.96 & \multicolumn{1}{r|}{0.93} & Rename Method & 0.86 & 0.81 & 
\multicolumn{1}{r|}{0.84} & Rename Method & 0.85 & 0.78 & 0.81 \\ \hline \multicolumn{12}{l}{} \\ \hline \multicolumn{4}{|c|}{\textit{\textbf{Neural Network}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Averaged Perceptron Method}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Bayes Point Machine}}} \\ \hline \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} \\ \hline Extract Method & 0.58 & 0.50 & \multicolumn{1}{r|}{0.54} & Extract Method & 0.54 & 0.53 & \multicolumn{1}{r|}{0.53} & Extract Method & 0.49 & 0.46 & 0.48 \\ Inline Method & 0.37 & 0.37 & \multicolumn{1}{r|}{0.37} & Inline Method & 0.36 & 0.38 & \multicolumn{1}{r|}{0.37} & Inline Method & 0.33& 0.35 & 0.34 \\ Move Method & 0.50 & 0.44 & \multicolumn{1}{r|}{0.47} & Move Method & 0.45 & 0.48 & \multicolumn{1}{r|}{0.46} & Move Method & 0.40 & 0.49 & 0.44 \\ Pull Up Method & 0.36 & 0.35 & \multicolumn{1}{r|}{0.35} & Pull Up Method & 0.36 & 0.37 & \multicolumn{1}{r|}{0.36} & Pull Up Method & 0.36 & 0.35 & 0.36 \\ Push Down Method & 0.37 & 0.46 & \multicolumn{1}{r|}{0.41} & Push Down Method & 0.39 & 0.38 & \multicolumn{1}{r|}{0.39} & Push Down Method & 0.38 & 0.36 & 0.37 \\ Rename Method & 0.82 & 0.86 & \multicolumn{1}{r|}{0.84} & Rename Method & 0.85 & 0.81 & \multicolumn{1}{r|}{0.83} & Rename Method & 0.70 & 0.61 & 0.65 \\ \hline \end{tabular} \end{adjustbox} \end{table*} \begin{comment} \begin{table*}[h] \centering \caption{\textcolor{black}{Performance of Each Model, in Terms of Precision (P), Recall (R), and F-measure (F1), per Refactoring Type (a set of 6,000 commits).}} 
\label{Table:ClassifierScores_Details_6000} \begin{adjustbox}{width=1.0\textwidth,center} \centering \begin{tabular}{lrrrlrrrlrrr} \hline \multicolumn{4}{|c|}{\textit{\textbf{Random Forest}}} & \multicolumn{4}{c|}{\textit{\textbf{Logistic Regression}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Gradient Boosted Machine}}} \\ \hline \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{l}{\textbf{P}} & \multicolumn{1}{l}{\textbf{R}} & \multicolumn{1}{l}{\textbf{F1}} \\ \hline Extract Method & 0.65 & 0.73 & \multicolumn{1}{r|}{0.69} & Extract Method & 0.68 & 0.76 & \multicolumn{1}{r|}{0.72} & Extract Method & 0.73 & 0.72 & 0.72 \\ Inline Method & 0.37 & 0.40 & \multicolumn{1}{r|}{0.38} & Inline Method & 0.41 & 0.42 & \multicolumn{1}{r|}{0.42} & Inline Method & 0.38 & 0.37 & 0.37 \\ Move Method & 0.51 & 0.46& \multicolumn{1}{r|}{0.48} & Move Method & 0.56 & 0.46 & \multicolumn{1}{r|}{0.50} & Move Method & 0.48 & 0.47 & 0.47 \\ Pull Up Method & 0.37 & 0.34 & \multicolumn{1}{r|}{0.35} & Pull Up Method & 0.40 & 0.40& \multicolumn{1}{r|}{0.40} & Pull Up Method & 0.37 & 0.38 & 0.37\\ Push Down Method & 0.41 & 0.32 & \multicolumn{1}{r|}{0.36} & Push Down Method & 0.41 & 0.45 & \multicolumn{1}{r|}{0.43} & Push Down Method & 0.43 & 0.45 & 0.44 \\ Rename Method & 0.75 & 0.87& \multicolumn{1}{r|}{0.81} & Rename Method & 0.91 & 0.84 & \multicolumn{1}{r|}{0.88} & Rename Method & 0.89 & 0.87 & 0.88 \\ \hline \multicolumn{12}{l}{} \\ \hline \multicolumn{4}{|c|}{\textit{\textbf{Decision Jungle}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Support Vector Machine}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Locally Deep SVM}}} \\ \hline 
\multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} \\ \hline Extract Method & 0.63 & 0.66 & \multicolumn{1}{r|}{0.64} & Extract Method & 0.61& 0.65& \multicolumn{1}{r|}{0.63} & Extract Method & 0.64 & 0.57 & 0.60 \\ Inline Method & 0.42 & 0.30 & \multicolumn{1}{r|}{0.35} & Inline Method &0.33 & 0.37 & \multicolumn{1}{r|}{0.35} & Inline Method &0.30 & 0.31 & 0.31 \\ Move Method & 0.57 & 0.46 & \multicolumn{1}{r|}{0.51} & Move Method & 0.40 & 0.42 & \multicolumn{1}{r|}{0.41} & Move Method & 0.37 & 0.35& 0.36 \\ Pull Up Method & 0.37 & 0.45 & \multicolumn{1}{r|}{0.41} & Pull Up Method & 0.35 &0.28 & \multicolumn{1}{r|}{0.31} & Pull Up Method & 0.29 & 0.31 & 0.30 \\ Push Down Method & 0.40 & 0.40 & \multicolumn{1}{r|}{0.40} & Push Down Method & 0.36 & 0.38 & \multicolumn{1}{r|}{0.37} & Push Down Method & 0.34 & 0.36 & 0.35 \\ Rename Method & 0.83 & 0.86 & \multicolumn{1}{r|}{0.85} & Rename Method & 0.87 & 0.81 & \multicolumn{1}{r|}{0.84} & Rename Method & 0.84 & 0.80 & 0.82\\ \hline \multicolumn{12}{l}{} \\ \hline \multicolumn{4}{|c|}{\textit{\textbf{Neural Network}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Averaged Perceptron Method}}} & \multicolumn{4}{c|}{\textit{\textbf{One-vs-All Bayes Point Machine}}} \\ \hline \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Refactoring type}} & 
\multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} \\ \hline Extract Method & 0.61 & 0.60 & \multicolumn{1}{r|}{0.61} & Extract Method & 0.62 & 0.61 & \multicolumn{1}{r|}{0.61} & Extract Method & 0.48 & 0.61 & 0.53 \\ Inline Method & 0.30 & 0.34 & \multicolumn{1}{r|}{0.32} & Inline Method & 0.33 & 0.34 & \multicolumn{1}{r|}{0.34} & Inline Method & 0.29 & 0.28 & 0.29 \\ Move Method & 0.36 & 0.39 & \multicolumn{1}{r|}{0.38} & Move Method & 0.39 & 0.38 & \multicolumn{1}{r|}{0.39} & Move Method & 0.35 & 0.37 & 0.36 \\ Pull Up Method & 0.31 & 0.24 & \multicolumn{1}{r|}{0.27} & Pull Up Method & 0.31 & 0.34 & \multicolumn{1}{r|}{0.33} & Pull Up Method & 0.30 & 0.30 & 0.30 \\ Push Down Method & 0.35 & 0.34 & \multicolumn{1}{r|}{0.35} & Push Down Method & 0.35 & 0.34 & \multicolumn{1}{r|}{0.35} & Push Down Method & 0.33 & 0.28 & 0.31 \\ Rename Method & 0.80 & 0.83 & \multicolumn{1}{r|}{0.81} & Rename Method & 0.83 & 0.82 & \multicolumn{1}{r|}{0.82} & Rename Method & 0.70 & 0.60 & 0.65 \\ \hline \end{tabular} \end{adjustbox} \end{table*} \end{comment} \section{Results \& Discussions} \label{sec:results} In this section, we assess the performance of our approach, and aim at answering the following research questions: \begin{itemize} \item \textbf{RQ1. (effectiveness)} How effective is our supervised learning in predicting the type of refactoring?\xspace \item \textbf{RQ2. (baseline comparison)} How does our model compare with keyword-based classification?\xspace \item \textcolor{black}{\textbf{RQ3. (terminology)} What are the frequent terms utilized by developers when documenting refactoring types?\xspace} \item \textcolor{black}{\textbf{RQ4. (inconsistency)} How useful is our approach in analyzing the inconsistency types between source code and documentation?\xspace} \end{itemize} \textbf{Replication package.} We provide a comprehensive experiment package \citep{SAR2019WEB} to allow further replication and extension of our study.
\subsection{RQ1. How effective is our supervised learning in predicting the type of refactoring?\xspace} Table \ref{Table:ClassifierScores_Details} reports the performance of each classifier, in terms of precision, recall, and F-measure, broken down per class, \textit{i.e.,}\xspace refactoring type. According to Table \ref{Table:ClassifierScores_Details}, Random Forest (RF), Gradient Boosting Machine (GBM), and Logistic Regression (LR) perform relatively better than the competing classifiers, in terms of F-measure, across the majority of classes. We also observe that GBM achieves the highest average F-measure of 0.59, compared with RF and LR, whose average F-measures are 0.54 and 0.55, respectively. Random Forest and boosting machines belong to the family of ensemble learners, which typically yield superior predictive performance mainly because they aggregate several weak learners. As for Logistic Regression, its performance being comparable to that of Random Forest and boosting can be explained by the underlying true model for the text data having an inherent structure that matches the logistic regression assumptions. Overall, there is an interesting pattern that we can observe across all classifiers: all models agree that the \textit{Rename Method} refactoring is the easiest to classify, with an F-measure starting from 0.65 (Bayes Point Machine) and reaching up to 0.93 (GBM). The \textit{Extract Method} refactoring classification was the second highest for all classifiers except Decision Jungle; its F-measure varies from 0.48 (Bayes Point Machine) to 0.69 (GBM). Furthermore, we observe that for the \textit{Move Method} refactoring, the classifiers' performance varies between 0.46 (Averaged Perceptron Method) and 0.63 (GBM).
As for the remaining classes, the performance of the classifiers was similar and relatively low compared with the previous classes. For instance, the classifiers' performance for the \textit{Inline Method} refactoring varies between 0.34 (Bayes Point Machine) and 0.45 (GBM). For the \textit{Pull-up Method} and the \textit{Push-down Method} refactorings, the highest F-measure scored across all classifiers was 0.42. To gain a better understanding of why such differences exist in the prediction of the different refactoring types, we further analyzed the confusion matrix of the GBM classifier. During our qualitative analysis, we made the following observations: \begin{table*}[h] \centering \caption{Examples of Wrongly Predicted Commit Messages, by the Gradient Boosting Machine (GBM).} \label{Table:example} \begin{adjustbox}{width=1.0\textwidth,center} \begin{tabular}{lllllll}\hline \toprule \bfseries Observation & \bfseries Ref. Operation & \bfseries Commit Message Example \\ \midrule Similar Expression & Extract Method & \say{\textit{fcrepo-1029: \textbf{move} purge code \textbf{to} separate method}} \\ & Inline Method & \say{\textit{ISQReader: \textbf{move} the dialog code \textbf{into} run() and tidy up}} \\ & Move Method & \say{\textit{\textbf{Move} send/receive code \textbf{from} SMTPSession \textbf{to} TextProtocolTester [...]}} \\ & Pull Up Method & \say{\textit{HV-1239 \textbf{Moving} shared code \textbf{up} to CascadableConstraintMapping[...]}} \\ & Push Down Method & \say{\textit{\textbf{Move} group communication \textbf{down} to jvstm-ispn only [...]}} \\ \hline Inadequate Expression & Extract Method & \say{\textit{\textbf{Merged} updateTopic and updateTopicInline.}} \\ & Inline Method & \say{\textit{\textbf{Extracting} transactions from HadoopArchiveFileSystem. [...]}}\\ & Move Method & \say{\textit{Improve code structure. \textbf{Added} tests.}}\\ & Pull Up Method & \say{\textit{\textbf{split} out into ERXAjaxContext so you can [...]}}\\ & Push Down Method & \say{\textit{\textbf{removed} deprecated method getConfigServer()}}\\ & Rename Method & \say{\textit{\textbf{Added} extended names for mixins.}}\\ \bottomrule \end{tabular} \end{adjustbox} \vspace{-.3cm} \end{table*} \vspace{.2cm} \noindent\textbf{Observation \# 1. Similar Expressions.} Our first observation relates to the terminology and keywords developers use to describe each refactoring type. We notice that \textit{Rename Method} has the highest accuracy across all classifiers because developers typically use the keyword \textit{rename} to describe renaming methods. However, for the other types, developers do not stick to how these types are named in the refactoring catalog, and use various terminologies to describe them. We enumerate, in Table \ref{Table:example}, examples of messages belonging to the \textit{Extract/Inline/Pull-up/Push-down Method} classes that were wrongly predicted as \textit{Move Method}. For instance, the process of extracting a method was described in one of the commits as \say{\textit{moving} purge code to a \textit{separate method}}. While we can induce the extraction of the method, it was mislabeled by the GBM classifier. \begin{comment} \vspace{.2cm} \noindent\textbf{Observation \# 2. Generic Expressions.} The main challenge that we observed across various commits is the tendency of developers to provide a high-level description of their refactoring, through the use of general expressions and patterns, such as \textit{refactor}, \textit{restructure}, \textit{redesign}, \textit{code clean up}, etc. Such patterns cannot be framed into one single type, \textit{i.e.,}\xspace they can be used to describe all refactoring types, and so, to improve the accuracy of the models, they should be treated as stop words and filtered out during the data preprocessing.
\end{comment} \vspace{.2cm} \noindent\textbf{Observation \# 2. Inadequate Expressions.} Occasionally, some messages contain keywords that are counter-intuitive to our model, resulting in a misclassification. Table \ref{Table:example} contains samples of misclassified commits; we report the correct label, while the keywords that induced the wrong prediction are in bold. Let us take the following message: \say{\textit{Merged updateTopic and updateTopicInline}}, which documents the inlining of two methods, namely \texttt{updateTopic()} and \texttt{updateTopicInline()}; however, Refactoring Miner has detected the extraction of a method. To further understand this, we conducted a manual analysis of random samples. Our verification indicates that the keywords used by our model are not necessarily meant to document the underlying refactoring, as developers may document other changes performed in the commit. It is worth noting that a recent study has reported that developers do misuse refactoring-related terms in their documentation \citep{zhangpreliminary18}. Such cases will also hinder the accuracy of our prediction. \begin{tcolorbox} \textit{Summary.} The accuracy of refactoring prediction is not uniform across all types. Some types are easier to predict than others. The prediction results for \textit{Rename Method}, \textit{Extract Method}, and \textit{Move Method} ranged from 63\% to 93\% in terms of F-measure. However, our model was not able to accurately distinguish between \textit{Inline Method}, \textit{Pull-up Method}, and \textit{Push-down Method}, as its F-measure was between 42\% and 45\%. \end{tcolorbox} \subsection{RQ2. How does our model compare with keyword-based classification?\xspace} We opt to test the keyword-based approach because it was used to identify refactoring commits in previous studies \citep{kim2014empirical,zhangpreliminary18,Ratzinger:2008:RRS:1370750.1370759,stroggylos2007refactoring,ratzinger2005improving,murphy2012we,Mauczka2012}.
The keyword-based approach also measures the extent to which developers explicitly mention their refactoring operations in their commit messages. The keyword-based approach simply uses the following keywords, namely \say{\textit{extract}}, \say{\textit{inlin}}, \say{\textit{mov}}, \say{\textit{pull}}, \say{\textit{push}}, and \say{\textit{renam}}, to perform the prediction. Note that we manually check the results to remove any false matching, \textit{e.g.,}\xspace for the keyword \textit{mov}, we filtered out matches like \textit{movie} and \textit{movement}. Figures~\ref{Chart:Visualization of the Precision}, \ref{Chart:Visualization of the Recall}, and \ref{Chart:Visualization of the F1-measure for Different Classifiers} present the experimental results of our approach compared with the keyword-based prediction. Our approach provides an F-measure improvement across all refactoring types. One case in which the keyword-based approach could not detect the type of refactoring but the ML-based approach classifies it correctly is illustrated by the following commit message: \say{\textit{Change name of `Decorator' to `Events'}}. The keyword-based approach does not capture this message as it does not contain the keyword \say{\textit{renam}}. This is possible because the model has identified additional keywords that developers also use to indicate a given refactoring type. For example, if we refer to Table \ref{Table:Features}, the \textit{Inline Method} refactoring was found to be documented using various keywords such as \textit{combine}, \textit{gather}, and \textit{merge}. Similarly for the \textit{Extract Method} refactoring, whose documentation contained \textit{add}, \textit{create}, \textit{split}, and \textit{separate}.
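For illustration, the keyword-based baseline can be sketched as follows. Only the keyword list comes from the text above; the matching rules (word-boundary prefix matching and the guard against \textit{movie}/\textit{movement}) are our own approximation of the manual filtering described earlier, not the exact procedure used in the cited studies:

```python
import re

# Stemmed keywords from the text above, mapped to refactoring types.
KEYWORDS = {
    "renam": "Rename Method",
    "extract": "Extract Method",
    "inlin": "Inline Method",
    "pull": "Pull Up Method",
    "push": "Push Down Method",
    # "mov" with a guard against false matches such as "movie"/"movement".
    "mov(?!ie|ement)": "Move Method",
}

def keyword_classify(message):
    """Return the first refactoring type whose keyword matches, else None."""
    text = message.lower()
    for pattern, ref_type in KEYWORDS.items():
        if re.search(r"\b" + pattern, text):
            return ref_type
    return None

print(keyword_classify("Move send/receive code from SMTPSession"))  # Move Method
print(keyword_classify("Change name of 'Decorator' to 'Events'"))   # None
```

The second call shows exactly the failure mode discussed above: a renaming described without the \textit{renam} keyword is missed by the baseline, whereas a trained model can pick up alternative cues such as \textit{change name}.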
It is worth noting that the highest performance of the keyword-based approach was achieved when predicting the \textit{Move Method} refactoring, being able to capture the vast majority of commits containing this type (true positives), along with many other commits containing mainly the \textit{Pull-up Method} and \textit{Push-down Method} refactorings, because developers typically document them using the \say{\textit{move}} keyword, as we illustrated in Table \ref{Table:example}. \begin{figure}[tbp] \centering \begin{tikzpicture} \begin{scope}[scale=0.83] \centering \begin{axis}[ ybar, axis on top, height=6cm, width=13cm, bar width=0.5cm, ymajorgrids, tick align=inside, major grid style={draw=white}, enlarge y limits={value=.1,upper}, ymin=0, ymax=100, axis x line*=bottom, axis y line*=right, y axis line style={opacity=0}, tickwidth=0pt, enlarge x limits=true, enlarge x limits={abs=2cm}, legend style={ at={(0.5,-0.1)}, anchor=north, legend columns=-1, /tikz/every even column/.append style={column sep=0.5cm} }, ylabel={Precision (\%)}, symbolic x coords={ Extract, Inline, Move, Pull up, Push down, Rename}, xtick=data, nodes near coords={ \pgfmathprintnumber[precision=0]{\pgfplotspointmeta} } ] \addplot [draw=none, fill=blue!30] coordinates { (Extract,71) (Inline, 45) (Move, 61) (Pull up, 42) (Push down, 44) (Rename, 91) }; \addplot [draw=none,fill=red!30] coordinates { (Extract,76) (Inline, 81) (Move,38) (Pull up, 69) (Push down,67) (Rename, 87) }; \legend{Our approach, Keyword-based} \end{axis} \end{scope} \end{tikzpicture} \caption{\textcolor{black}{Visualization of the Precision for Different Approaches.}} \label{Chart:Visualization of the Precision} \begin{tikzpicture} \begin{scope}[scale=0.83] \centering \begin{axis}[ ybar, axis on top, height=6cm, width=13cm, bar width=0.5cm, ymajorgrids, tick align=inside, major grid style={draw=white}, enlarge y limits={value=.1,upper}, ymin=0, ymax=100, axis x line*=bottom, axis y line*=right, y axis line
style={opacity=0}, tickwidth=0pt, enlarge x limits=true, enlarge x limits={abs=2cm}, legend style={ at={(0.5,-0.1)}, anchor=north, legend columns=-1, /tikz/every even column/.append style={column sep=0.5cm} }, ylabel={Recall (\%)}, symbolic x coords={ Extract, Inline, Move, Pull up, Push down, Rename}, xtick=data, nodes near coords={ \pgfmathprintnumber[precision=0]{\pgfplotspointmeta} } ] \addplot [draw=none, fill=blue!30] coordinates { (Extract,68) (Inline, 44) (Move, 66) (Pull up, 41) (Push down, 41) (Rename, 94) }; \addplot [draw=none,fill=red!30] coordinates { (Extract,30) (Inline,5) (Move,83) (Pull up, 6) (Push down,5) (Rename, 98) }; \legend{Our approach, Keyword-based} \end{axis} \end{scope} \end{tikzpicture} \caption{\textcolor{black}{Visualization of the Recall for Different Approaches.}} \label{Chart:Visualization of the Recall} \centering \begin{tikzpicture} \centering \begin{scope}[scale=0.83] \begin{axis}[ ybar, axis on top, height=6cm, width=13cm, bar width=0.5cm, ymajorgrids, tick align=inside, major grid style={draw=white}, enlarge y limits={value=.1,upper}, ymin=0, ymax=100, axis x line*=bottom, axis y line*=right, y axis line style={opacity=0}, tickwidth=0pt, enlarge x limits=true, enlarge x limits={abs=2cm}, legend style={ at={(0.5,-0.1)}, anchor=north, legend columns=-1, /tikz/every even column/.append style={column sep=0.5cm} }, ylabel={F-measure (\%)}, symbolic x coords={ Extract, Inline, Move, Pull up, Push down, Rename}, xtick=data, nodes near coords={ \pgfmathprintnumber[precision=0]{\pgfplotspointmeta} } ] \addplot [draw=none, fill=blue!30] coordinates { (Extract, 69) (Inline, 45) (Move, 63) (Pull up, 42) (Push down, 42) (Rename, 93) }; \addplot [draw=none,fill=red!30] coordinates { (Extract,44) (Inline, 9) (Move,53) (Pull up,12) (Push down, 10) (Rename, 92) }; \legend{Our approach, Keyword-based} \end{axis} \end{scope} \end{tikzpicture} \caption{\textcolor{black}{Visualization of the F-measure for Different Approaches.}} 
\label{Chart:Visualization of the F1-measure for Different Classifiers} \end{figure} \begin{table}[h] \begin{center} \caption{Relevant Features per Class.} \label{Table:Features} \begin{adjustbox}{width=1.0\columnwidth,center} \begin{tabular}{llllll}\hline \toprule \bfseries Extract & \bfseries Inline & \bfseries Move & \bfseries Pull Up & \bfseries Push Down & \bfseries Rename \\ \midrule Add & Combine & Move & Move & Move & Change \\ Create & Gather & Add & Pull & Push & Fix \\ Extract & Inline & & Shift & Reduce & Improve \\ Move & Merge & & & Remove & Rename \\ Separate & Move & & & & Update \\ Split & & & & & \\ Break up & & & & & \\ \bottomrule \end{tabular} \end{adjustbox} \end{center} \end{table} \begin{tcolorbox} \textit{Summary.} The keyword-based approach performs significantly worse than the ML models. It assumes that developers are familiar with the catalog of refactorings, or the refactoring types offered in the IDEs; our findings, however, show that developers rarely stick to this terminology when documenting their changes. The keyword-based approach scored relatively better performance for the \textit{Rename Method} type because its keyword (\textit{i.e.,}\xspace rename) is intuitive, in contrast with other types, such as \textit{Inline Method} and \textit{Push-down Method}. \end{tcolorbox} \subsection{RQ3. \textcolor{black}{What are the frequent terms utilized by developers when documenting refactoring types?\xspace}} \textcolor{black}{This research question examines the textual content of the commit messages to determine the frequent terminology developers utilize when documenting each refactoring type. In this RQ, we utilize natural language processing techniques, more specifically bigram analysis, to extract the frequent bigrams developers utilize in describing their refactoring activity for each refactoring type considered in our study. Bigrams are sequences of two adjacent words in a sentence; in this instance, in the commit messages.
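The n-gram extraction just described can be sketched as follows; this is a minimal illustration over a toy message, and the tokenization, stemming, and stop-word handling of the actual pipeline are omitted:

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) over a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Toy commit message resembling the patterns reported in the tables below.
message = "rename method name for clarity"
tokens = message.lower().split()

bigram_counts = Counter(ngrams(tokens, 2))
trigram_counts = Counter(ngrams(tokens, 3))

print(bigram_counts.most_common(2))
```

Aggregating such counters over all commit messages of a given refactoring type yields the frequent terms reported below.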
We also look at trigrams to locate sets of common terms. Unlike unigrams, bigrams and trigrams provide a certain level of context for terms, which helps our analysis by reducing the chance of making false assumptions. Before our extraction, we first run Refactoring Miner to identify commits containing refactorings of each type of refactoring operation considered in this study, as discussed in Section \ref{sec:methodology}.} \textcolor{black}{Upon a closer inspection of the refactoring patterns in Tables \ref{Table:doc1}, \ref{Table:doc2}, and \ref{Table:doc3}, we have made several observations: (1) the keywords and phrases used in renaming refactorings are the most discriminative, indicating that these terms are strongly associated with the action of renaming, (2) the patterns used for extract refactorings are associated with the motivation behind refactoring, \textit{e.g.,}\xspace remove duplication, improve clarity, and improve reusability, (3) for move, pull up, and push down, developers used the term \say{move} interchangeably, as the main actions of these refactoring operations involve moving the code elements, and (4) the terms used in inlining refactorings are limited, as developers mainly used specific keywords to demonstrate the action.} \begin{table}[htbp] \begin{center} \caption{\textcolor{black}{Relevant Terms per Refactoring Types.}} \label{Table:doc1} \begin{adjustbox}{width=1.0\columnwidth,center} \begin{tabular}{llllll}\hline \toprule \bfseries Rename & \bfseries Extract \\ \midrule alter* method name for more consistency & add* a new method \\ better method name & add* method\\ chang* method name & add* new [] function\\ chang* method name for clarity & add* new method\\ chang* method name for consistency & add* several methods\\ chang* some method name & add* some convenience functions\\ chang* test method name & add* the [] method\\ chang* the method name & add* the method []\\ chang* the name & break* up the jumbo methods\\ clarif* method
name & brok* up long methods into a bunch of smaller methods\\ clean* up method name & brok* up the [] method into a separate []\\ correct* a method name & creat* a higher level [] method\\ correct* method name & creat* a new method\\ fix* a typo in a method name & creat* method\\ fix* confusing method name & creat* separate method\\ fix* inconsistent method name & extract* common code from \\ fix* incorrect method name & extract* a few methods out\\ fix* method name & extract* a method\\ fix* method name conflict & extract* abstract method\\ fix* method name typo & extract* common code\\ fix* misspelled method name & extract* common method\\ fix* several method names & extract* method\\ fix* spelling for method name & extract* out a method\\ fix* typo in method name & extract* out function\\ improv* method name & extract* out the method\\ improv* the name & extract* some methods\\ made the method name a bit more explicit & extract* some methods for code clarity sake\\ method name chang* & extract* the [] method from []\\ method name fix* & extract* some stuff to a method\\ method name improv* &fix* for method code size\\ method name refactor* & mov* [] into separate methods\\ method names in tests changed & refactor* duplicate code into separate method\\ minor change to method name & refactor* some methods\\ minor refactorings to method name & refactor*: Introduc* a method\\ modif* test method name & separat* [] from []\\ more meaningful method name & separat* a method\\ normaliz* getter method name & split* [] into separate methods\\ polish test method name & split* into separate functions\\ refactor* method name & split* into smaller pieces first\\ refactor* some method names & split* into some smaller assert to reuse\\ renam* factory methods & split* the [] into component parts for clarity\\ renam* for clarification & split* the [] method in several sub-methods\\ renam* for clarity & split* the [] method into a []\\ renam* for consistency & split* the code into 
[] and []\\ renam* method & split* the HUGE generate method into different methods\\ renam* method name & split* up\\ renam* misleading method name & split* up [] a bit more neatly\\ renam* of code & split* up a complex method\\ renam* of component & split* up the [] method\\ renam* of function name & split* up the [] method into some methods\\ renam* some internal variables and methods &\\ renam* some methods &\\ renam* the method &\\ shorten* method name &\\ simplif* user method name &\\ solv* typo in method name &\\ standardization of method name &\\ tid* up method naming &\\ tid* up test method name &\\ unif* execution method name &\\ uniformiz* method name &\\ updat* method name &\\ updat* the test name &\\ using more correct method name & \\ \bottomrule \end{tabular} \end{adjustbox} \end{center} \end{table} \begin{table}[htbp] \begin{center} \caption{\textcolor{black}{Relevant Terms per Refactoring Types (cont.).}} \label{Table:doc2} \begin{adjustbox}{width=1.0\columnwidth,center} \begin{tabular}{llllll}\hline \toprule \bfseries Move & \bfseries Inline \\ \midrule mov* [] to [] & add* methods for merge operation \\ mov* [] to new method & combin* method\\ mov* all code into the only implementing class & consolidat* methods \\ mov* all utility methods into the same class & consolidat* some code\\ mov* around some methods & delet* unused method\\ mov* code around & inlin* helper methods\\ mov* formerly static methods to new & inlin* method\\ mov* from [] to [] & inlin* method only called once \\ mov* into & inlin* private method\\ mov* method & inlin* some methods\\ mov* out of & inlin* some trivial method\\ mov* some & inlin* the simplest method\\ mov* some code into a static utility method & merg* [] and [] into 1 method \\ mov* some methods and/or classes around & merg* [] and [] methods\\ mov* some methods to & merg* code into static method\\ mov* some of it's responsibilities out to other classes & merg* refactoring\\ mov* some of the methods into a class 
& merg* some code simplification\\ mov* some static methods to Utils & more cleanup and merge resolution\\ mov* some stuff & refactor* [] into []\\ mov* static methods to a util class & refactor*: remov* some unused methods\\ mov* stuff out of the & remov* unused methods\\ mov* the [] & simplif* things my inlining both the method and the argument \\ mov* the implementation of the methods to & some consolidation of methods\\ mov* the method tests in their own class & useless method inlined\\ mov* the methods \\ mov* the notion of [] from [] to [] &\\ mov* to [] & \\ mov* util methods & \\ refactor* : mov* code &\\ refactor* out the methods into separate class &\\ refactor* some methods &\\ refactor* some methods to external helper class &\\ refactor* the code to move the [] to the [] &\\ refactor* to move the [] to [] &\\ refactor*: Move helper method to helper class &\\ refactor*: move to a helper method &\\ some static methods were moved from [] to [] & \\ \bottomrule \end{tabular} \end{adjustbox} \end{center} \end{table} \begin{table}[h] \begin{center} \caption{\textcolor{black}{Relevant Terms per Refactoring Types (cont.).}} \label{Table:doc3} \begin{adjustbox}{width=1.0\columnwidth,center} \begin{tabular}{llllll}\hline \toprule \bfseries Pull Up & \bfseries Push Down \\ \midrule bunch of methods pulled up & chang* to shift functions\\ mov* common code in & minimal code duplication\\ mov* common code into & mov* common parts of\\ mov* common code to & mov* references to [] and [] into subclass\\ mov* more methods to & mov* test sections out of\\ mov* the common unit test setup to a base class & mov* [] from superclass\\ mov* the implementation to the superclass & mov* [] implementations into subclasses\\ mov* to & mov* some methods off [] onto a [] subclass\\ pull* to class level & push [] into []\\ pull* common & push to method level\\ pull* from & push* down\\ pull* from a specified & push* down to\\ pull* out & push* entities around\\ pull* out some common 
functionality & push* the [] code down into the \\ pull* out test methods into common area & push* to\\ pull* reusable & push* some stuff down from \\ pull* reusable code out of & reduc* the amount of implementation-specific code\\ pull* to & remov* dependency on\\ pull* up & remov* duplicate\\ pull* up common methods & remov* redundant\\ pull* up more properties to the base type & remov* redundant functions\\ pull* up some functionality from & stuff moved to separate\\ pull* up some methods &\\ pull* up to &\\ pull* out common code &\\ refactor* to "pull up" &\\ shift* further method to parent & \\ \bottomrule \end{tabular} \end{adjustbox} \end{center} \end{table} \begin{tcolorbox} \textit{Summary.} \textcolor{black}{Developers distinguish between the different refactoring types through natural language descriptions. The terminology used in rename refactorings is the most discriminative, indicating that these terms are strongly associated with the action of renaming.} \end{tcolorbox} \subsection{RQ4. \textcolor{black}{How useful is our approach in analyzing the inconsistency types between source code and documentation?\xspace}} \textcolor{black}{Although our approach attempted to thoroughly predict method-level refactoring types, several types of inconsistency between source code and documentation might occur. Several studies \citep{arnaoudova2016linguistic,fakhoury2019measuring,kim2016automatic} have identified and detected recurring poor practices related to inconsistencies between the documentation and the implementation of code elements. Because such inconsistencies can affect software comprehensibility and maintainability, this research question explores the frequency of the different inconsistency types, which can help in the early reporting of inconsistencies between the refactoring types detected by refactoring detection tools and their documentation.
Specifically, we are studying the following inconsistency types:} \vspace{.2cm} \noindent\textbf{\textcolor{black}{Case \# 1. Refactoring of type A is detected based on the source code but the description does not correspond to any refactoring.}} \textcolor{black}{To obtain the data for this type of inconsistency, we need to add a set of commits in which the documentation does not correspond to any type of refactoring considered in this study. We started by randomly selecting 834 refactoring commits detected by Refactoring Miner while making sure no specific documentation about refactoring was reported. For example, we excluded the terms \say{\textit{extract}}, \say{\textit{inlin}}, and \say{\textit{mov}}, since these terms correspond to the method-level refactoring operations. The number 834 matches the number of commits per refactoring type, as shown in Table \ref{Table:Instances per class (train, test)}. We then manually examined the list of commits to determine their appropriateness for this analysis. Next, we built a new model by adding this set to the training data with a \say{None} label. Since RQ1 shows that the GBM was able to achieve the highest average F-measure of 0.59, we used the GBM for this model and achieved an average F-measure of 0.58. Using a confidence level of 99\% and an interval of 5\%, we drew a sample of 588 commits for the manual analysis. The majority of these commits (85.03\%) indicate consistency between the refactoring detector and the model prediction, whereas a minority (14.96\%) show inconsistent results.} \textcolor{black}{The main challenge that we observed across various commits is the tendency of developers to provide a high-level description of their refactoring through general expressions and patterns such as \textit{refactor}, \textit{restructure}, and \textit{code clean up}.
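The sample sizes used for the manual analyses (588 commits at a 99\% confidence level and a 5\% interval) follow the standard sample-size calculation for estimating a proportion (Cochran's formula, optionally with a finite population correction). The population size of the test data is not stated in this excerpt, so the `population` argument in the sketch below is illustrative; the z-score 2.576 for 99\% confidence is a standard value.

```python
import math

def cochran_sample_size(confidence_z, margin, population=None, p=0.5):
    """Sample size for estimating a proportion.

    confidence_z: z-score for the desired confidence level
                  (2.576 for 99%, 1.96 for 95%).
    margin:       confidence interval half-width (e.g. 0.05 for +/-5%).
    population:   finite population size; None means infinite population.
    p:            assumed proportion; 0.5 is the conservative choice.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        # Finite population correction shrinks the required sample.
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# Infinite-population requirement at 99% confidence and a 5% interval.
print(cochran_sample_size(2.576, 0.05))  # 664
# With a finite population (size hypothetical), the requirement drops
# toward the 588 commits sampled in the study.
print(cochran_sample_size(2.576, 0.05, population=5000))
```

The correction term explains why a sample slightly below the infinite-population value of 664 can still meet the stated confidence level and interval.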
Such patterns cannot be framed into one single type, \textit{i.e.,}\xspace they can be used to describe all refactoring types. The following example demonstrates such a case:} \begin{center} \fbox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax}{\centering ``Just cleaned up the code a bit.'' }} \captionof{Quote}{\textcolor{black}{Inconsistency type (Case \# 1)} \label{Quote:case1}} \end{center} \textcolor{black}{This phenomenon of using a high-level description to document low-level changes is also observed frequently in bug fix commit messages, where the text would just contain the popular pattern "\textit{fix bug X}". However, this is less problematic in the context of bugs because developers can still use the bug number (\textit{X}) to locate the corresponding bug report, and so access the bug’s proper documentation there. For refactoring documentation, in contrast, this is a persistent problem: without the rationale and an appropriate explanation of the change, there is no way to trace back such information anywhere in the project.} \textcolor{black}{In practice, developers perform refactorings both as singular transformations and in conjunction with other refactorings (\textit{i.e.,}\xspace batch or composite refactorings). Previous studies (\textit{e.g.,}\xspace \citep{bibiano2020does}) explored how single or composite refactorings contribute to code smell removal or internal quality attribute improvement. Since developers perform these kinds of refactorings at the source code level, we expect developers to document such single or multiple transformation types in real development practice.
Our previous studies on refactoring documentation showed that developers self-affirmed the action of refactoring in both open source (\textit{e.g.,}\xspace \citep{alomar2019can,alomar2020toward,alomar2021we}) and industry \citep{alomar2021icse} at different levels of granularity, including high-level and fine-grained descriptions. A previous study \citep{yamashita2020changebeadsthreader} on tailoring untangled changes pointed out that developers often mix changes belonging to different intentional tasks in one commit. The authors proposed an approach that regards a sequence of fine-grained changes about to be committed as a single commit, and merges and splits change clusters to support the manual tailoring of untangled changes.} \textcolor{black}{From a practical point of view, researchers and practitioners can benefit from the proposed model to detect inconsistency types between refactoring detectors at the source code and documentation level, and to accelerate the code review process, since recent studies expressed the need to improve the quality of documentation for refactoring and non-refactoring changes \citep{alomar2021icse,ebert2021exploratory}}. \vspace{.2cm} \noindent\textbf{\textcolor{black}{Case \# 2. Refactoring of type A is detected based on the description but the source code change does not correspond to any refactoring.}} \textcolor{black}{To perform our analysis, we need to include a set of commits that do not correspond to any refactoring operations and then feed this set into the training data with a \say{None} label. Thus, after running Refactoring Miner on a set of commits, we randomly selected 834 non-refactoring commits as indicated by Refactoring Miner. The selection of 834 commits matches the number of commits per refactoring type (see Table \ref{Table:Instances per class (train, test)}). We then built a model by adding the set of non-refactoring commits to the training data.
Similar to Case \#1, we used the GBM for the newly created model and achieved an average F-measure of 0.56. To better understand the nature of this type of inconsistency, we performed a manual validation of 588 commits from the test data; this sample corresponds to a confidence level of 99\% and a confidence interval of 5\%. The majority of the commits (436 instances, or 74.14\%) show agreement between the results obtained from the tool and our model, whereas the remainder (152 instances, or 25.85\%) show disagreement.} \textcolor{black}{\citep{soares2020relation} reported that this type of inconsistency might indicate that developers apply refactorings that are different from the refactorings defined by \citep{Fowler:1999:RID:311424}.} \textcolor{black}{Moreover, we observe in our study that developers document what they consider to be refactoring in non-source code files. These files, which include configuration files, Maven files, or database files, are not associated with refactoring operations detected by the tool even though the description contains refactoring operation-related keywords. The following example demonstrates such a case:} \begin{center} \fbox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax}{\centering ``Renamed table. The same table name was used in another test, which made this test fail when running all tests.'' }} \captionof{Quote}{\textcolor{black}{Inconsistency type (Case \# 2)} \label{Quote:case2}} \end{center} \textcolor{black}{Such changes would not be detected by Refactoring Miner or any other detection tool because these tools are designed to operate only on source files. Interestingly, our model results show that developers would also perform what they call refactoring on other files.
If we refer to the original definition of refactoring, these changes may not necessarily be considered refactorings, but with the rise of continuous integration and infrastructure as a service, many non-source files are now evolving as part of the project's ecosystem. These files undergo maintenance and evolution as well (updating dependencies, changing configurations, etc.). Therefore, there is a need for the refactoring community to properly taxonomize changes to these files, and to evolve its toolset to detect them as well.} \textcolor{black}{Existing studies on configuration files have focused on the interactions between Java and XML configuration files \citep{chen2008toward}, the identification and detection of CI configuration bad practices that violate the best practices in CI configuration files (\textit{e.g.,}\xspace redirecting scripts into interpreters, bypassing security checks, and using commands in an incorrect phase), and the prevalence of these anti-patterns in CI specifications \citep{zampetti2020empirical,gallaba2018use}. Since refactoring of non-source files remains under-researched, future CI research and tooling need to focus on the development of automated CI anti-pattern detectors and refactoring recommenders, and on avoiding the consequences of misusing CI features.} \vspace{.2cm} \noindent\textbf{\textcolor{black}{Case \# 3. Refactoring of type A is detected based on the source code, refactoring of type B is detected based on the description, and A is different from B.}} \textcolor{black}{Previous studies investigated the case when there is a disagreement between source code and its documentation in the context of programming misconceptions \citep{swidan2018programming}, linguistic anti-patterns \citep{arnaoudova2016linguistic}, bug localization \citep{fakhoury2019measuring}, and code review \citep{ebert2021exploratory}.
In their study on misconceptions in programming education for school students, \citep{swidan2018programming} observed that younger learners hold common programming misconceptions that cause them to make errors. The authors recommended developing intervention methods to catch those misconceptions as early as possible. Further, \citep{arnaoudova2016linguistic} investigated developers' perception of linguistic anti-patterns and developed a catalog of 17 types of linguistic anti-patterns related to inconsistencies, finding that the majority of the participants perceive linguistic anti-patterns as poor practices that must be avoided. \citep{fakhoury2019measuring} showed that inconsistencies in the source code have a significant effect on cognitive load, success, and time spent on program comprehension. More recently, \citep{ebert2021exploratory} discussed how developers deal with confusion in code reviews caused by unclear commit messages and lack of documentation. According to their survey with developers, one of the most frequent reasons for confusion is the lack of documentation and a missing code change rationale.} \textcolor{black}{Since the presence of inconsistencies can mislead developers, we aim to investigate this phenomenon.} \textcolor{black}{For this type of inconsistency, we randomly selected 588 refactoring commits to check the percentage of agreement and mismatch between the refactoring types detected by Refactoring Miner and our model. This quantity roughly corresponds to a sample size with a confidence level of 99\% and a confidence interval of 5\%. We then ran our deployed model on these commits in order to compare our results with those obtained by Refactoring Miner. The results show that the inconsistent case represents 60.20\% of the commits, whereas only 39.79\% of the commits are consistent.} \textcolor{black}{Concerning our manual analysis, we observe that developers provide inadequate descriptions of the code changes.
The following example demonstrates such a case, in which the tool detected composite refactoring operations, \textit{i.e.,}\xspace \textit{Extract}, \textit{Rename}, and \textit{Move}, whereas our model predicted the commit as \textit{Extract} based on the description:} \begin{center} \fbox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax}{\centering ``Extract BindingHelper for re-use in wizards.'' }} \captionof{Quote}{\textcolor{black}{Inconsistency type (Case \# 3)} \label{Quote:case3}} \end{center} \textcolor{black}{Our analysis of the three types of inconsistency shows that there is a need to improve the quality of refactoring documentation and to encourage the development of refactoring documentation generators. This offers a valuable opportunity to improve and standardize the format of the documentation. We believe that by combining the documentation with state-of-the-art refactoring detectors, we can better understand the applied refactoring. For future work, we plan to perform an in-depth study and extensive manual low-level source code inspection to better understand this phenomenon (\textit{i.e.,}\xspace the inconsistency cases).} \begin{tcolorbox} \textit{Summary.} \textcolor{black}{Our model can work in conjunction with refactoring detectors \citep{tsantalis2018accurate,silva2017refdiff} in order to report any early inconsistency between refactoring types and their documentation.} \end{tcolorbox} \section{Research Implications} \label{sec:Implication} The main implications of this study are as follows: \begin{enumerate} \item While existing studies that classify code changes using their commit messages \citep{Levin:2017:BAC:3127005.3127016,gharbi2019classification,levin2019towards} have achieved relatively higher accuracies than our model, this reveals a lack of a refactoring documentation \textit{culture}, unlike the documentation of other code changes such as API migration, bug fixes, and feature updates.
However, the end goal of our model is not to detect refactorings, but to work in conjunction with refactoring detectors \citep{tsantalis2018accurate,Silva:2016:WWR:2950290.2950305} in order to report any early inconsistency between refactoring types and their documentation. This is useful not only to improve the quality of documentation, which has been found to be lacking when it comes to describing code changes \citep{treude2020beyond}, but also to improve the understandability of code changes for code review and evolution purposes. For instance, a recent study has found that revealing more details about refactoring, such as types and intents, helps in facilitating its acceptance in code reviews \citep{bibiano2020does}. \item The words and phrases used in rename refactorings are the most discriminative, indicating that these terms are strongly associated with the action of renaming. Future work to help document rename refactorings, which prior studies show are documented only between 1\% and 6\% of the time \citep{arnaoudova:2014, peruma2020contextualizing}, can use our approach to determine what keywords they should use, or recommend to developers, when generating commit messages. \item Refactorings are generally associated with a specific set of keywords and phrases found in commit messages. However, there is also a significant amount of ambiguity in the way words are used, particularly for pull-up and push-down refactorings. A system that recommends how to document refactorings can reduce this confusion, and the keywords that we discuss in this work are a strong starting point for determining what phrases should be used to reduce ambiguity. \item Our approach can be used to study the discriminative terms found in commit messages and to detect the common words and phrases that describe different types of refactorings.
In this study, we used this approach on a large number of systems, but it could also be used on individual systems to detect project-specific ways of describing refactorings, further bolstering a future recommendation system's ability to tailor recommended commit messages and keywords to a specific project. \item Our study helps us understand refactoring documentation practices, which in turn triggers the need to explore the motivation behind refactoring. The study helps future developers follow best documentation practices and improve the quality of refactoring documentation. Further, refactoring motivations convey the opinions of developers, so it is important for managers to learn developers' opinions and feelings, especially in distributed software development. If developers do not document their changes, managers will not know their intentions. Since software engineering is a human-centric process, documentation is an important channel through which managers can understand team members' intentions. \end{enumerate} \section{Threats to Validity} \label{sec:threats} In this section, we describe potential threats to the validity of our research method, and the actions we took to mitigate them. \textbf{Internal Validity.} \textcolor{black}{Our analysis is mainly threatened by the accuracy of the Refactoring Miner tool, because the tool may miss the detection of some refactorings. However, previous studies \citep{tsantalis2018accurate,Silva:2016:WWR:2950290.2950305} report that Refactoring Miner has high precision and recall scores (\textit{i.e.}, a precision of 98\% and a recall of 87\%) compared to other state-of-the-art refactoring detection tools and is frequently utilized in refactoring studies (\textit{e.g.,}\citep{aniche2020effectiveness,chavez2017does,peruma2020contextualizing,alomar2019can,alomar2020toward,alomar2020relationship,alomar2020developers,alomar2021refactoringreuse}).
A recent survey \citep{tan2019survey} compares several refactoring detection tools and shows that Refactoring Miner is currently the most accurate refactoring detection tool, which gives us confidence in using it.} \noindent\textbf{Construct Validity.} Since our approach heavily depends on commit messages, we used well-commented Java projects when performing our study. Thus, the quality and quantity of commit messages might have an impact on our findings. Another important limitation concerns the size of the dataset used for training and evaluation. The size of the dataset was determined similarly to previous commit classification studies, but we are not certain that this number is optimal for our problem; a systematic technique for choosing the size of the evaluation set would be preferable. Another threat to validity relates to the list of keywords that we used to identify the set of commits for the keyword-based approach, as developers might use other keywords when documenting refactoring. However, the impact of this threat was limited to the refactoring operation-related keywords detected by Refactoring Miner. \noindent\textbf{External Validity.} The first threat relates to the commits being extracted only from open source Java projects. Our results may not generalize to commercially developed projects, or to projects using different programming languages. Further, since a commit message could potentially belong to multiple refactoring types, our model does not consider such cases. However, exploring how to automatically classify commits into this kind of hybrid category is an interesting direction for future work. \vspace{-.2cm} \section{Conclusion} \label{sec:conclusion} In this paper, we formulated the prediction of refactorings as a multiclass classification problem, \textit{i.e.,} classifying refactoring commits into six method-level refactoring operations, applying nine supervised machine learning algorithms.
We compared the performance of our approach to the keyword-based baseline, and our results show that our approach outperforms the keyword-based approach. \textcolor{black}{Specifically, our main findings show that (1) the prediction results for \textit{Rename Method}, \textit{Extract Method}, and \textit{Move Method} ranged from 63\% to 93\% in terms of F-measure, although our model was not able to accurately distinguish between \textit{Inline Method}, \textit{Pull-up Method}, and \textit{Push-down Method}, as its F-measure was between 42\% and 45\%; (2) the keyword-based approach performs significantly worse than the ML models; (3) developers discriminate among different refactoring operations through human language descriptions; and (4) there is a need to improve the quality of refactoring documentation and to encourage the development of refactoring documentation generators.} In the future, we plan to study the applicability of our approach to projects developed in different programming languages and to other domains, \textit{i.e.,}\xspace using commit messages from projects written in different programming languages to predict refactoring, and compare the findings. We also plan to use the extension of Refactoring Miner \citep{tsantalis2020refactoringminer} that supports low-level refactorings. Another interesting research direction is to investigate whether our approach can be applied to statement-level refactoring (\textit{e.g.,}\xspace \textit{Extract Variable}). \textcolor{black}{Additionally, since a commit message could potentially belong to multiple categories (\textit{e.g.}, \textit{Extract Method} and \textit{Move Method}), future research could usefully apply multi-label classification to automatically classify commits into this kind of hybrid category}. Further, although we used commit messages as our primary source of text, our approach is not restricted to a specific source of textual information.
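A keyword-based baseline of the kind our models are compared against can be sketched by matching a commit message against refactoring-related keyword stems. The stems below are illustrative, drawn from the term tables discussed earlier (e.g., \textit{extract}, \textit{inlin}, \textit{mov}, \textit{renam}); the exact keyword lists and tie-breaking rules of the study's baseline are not reproduced here, so this is an assumption-laden sketch rather than the study's implementation.

```python
# Hypothetical keyword stems per method-level refactoring type; the
# study's actual baseline keyword lists are not reproduced here.
KEYWORD_STEMS = {
    "extract_method":   ["extract"],
    "inline_method":    ["inlin"],
    "move_method":      ["mov"],
    "rename_method":    ["renam"],
    "pull_up_method":   ["pull up", "pulled up", "pull-up"],
    "push_down_method": ["push down", "pushed down", "push-down"],
}

def classify_commit(message):
    """Return the refactoring types whose keyword stems occur in the message."""
    text = message.lower()
    return sorted(
        rtype
        for rtype, stems in KEYWORD_STEMS.items()
        if any(stem in text for stem in stems)
    )

print(classify_commit("Renamed table to avoid a clash in tests"))
# ['rename_method']
print(classify_commit("Extract BindingHelper for re-use in wizards."))
# ['extract_method']
```

Returning a list rather than a single label makes the baseline's weakness visible: a message mixing several stems yields several candidate types, which is exactly the ambiguity the trained classifiers are meant to resolve.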
In our future work, we can test our approach using other types of information, including issue descriptions. \section{Acknowledgments} This material is based on work supported by the National Science Foundation under Grant No. 1757680. \bibliographystyle{model5-names} {\footnotesize
\section{Introduction} \label{sec:intorduction} Wireless data traffic is growing at an unprecedented rate, exacerbating the demand for improved design strategies for the next generation of wireless infrastructure \cite{Furuskar2015}. Deployment of small base stations (sBSs) to offload wireless data from a macro base station (BS) has the potential not only to improve the network performance during peak data traffic periods, but also to integrate existing WiFi and cellular technologies in an efficient manner \cite{Bennis2013}, \cite{Chou2014}. A potential drawback of the small-cell infrastructure for offloading wireless data from a macro BS is that the backhaul link-capacity required to support the peak data traffic can be alarmingly high, necessitating complex and expensive solutions to ensure high throughput and performance during peak traffic periods. Caching can reduce the peak backhaul load by storing popular contents in local cache memories located at the sBSs \cite{Niesen2012}. The benefits of coded caching across sBSs are shown in \cite{Golrezaei2012} and \cite{Xu2017}, while in \cite{Bacstuug2015a} caching is analyzed for networks modeled using independent Poisson point processes (PPPs). The performance of TCP is shown to improve with the help of caching in \cite{Hu2003}, while caching-based content-centric networking and an information-centric architecture for energy-efficient content distribution are proposed in \cite{Wang2014} and \cite{Fang2014}, respectively. Results on caching video files and their benefits are presented in \cite{Pedarsani2014} - \nocite{Golrezaei2014}\cite{Li2016}, while the advantages of data caching and content distribution in device-to-device (D2D) communications are studied in \cite{Ji2016} - \nocite{Zhang2016}\cite{Chen2017a}. In \cite{Gregori2016}, proactive caching is shown to increase the energy efficiency of D2D communications, while the advantages of caching in mobile social networks are reported in \cite{Wu2016}.
Most papers in the literature assume {{\em a priori}} knowledge of the popularity profile of the cached contents, which is unreasonable in practical scenarios. This assumption is relaxed in \cite{Blasco2014} - \nocite{Blasco2014a}\cite{Bacstuug2015}, where various learning-based approaches are proposed to estimate the popularity profile, and theoretical analyses have been carried out to study the implications of learning the popularity profile and user preferences on the performance \cite{Golrezaei2013} - \nocite{Tatar2014}\nocite{Bharath2016}\nocite{Song2017}\cite{Chen2016}. However, these works assume that the popularity profile is stationary and statistically independent across time. In practice, there are many applications (for example, video on demand) in which the popularity profile of cached contents is a function of time \cite{Cha2009} - \nocite{Szabo2010}\cite{Kim2017}. Motivated by these applications and the growing significance of caching in improving the quality of service for end-users during peak traffic periods, we analyze the performance of a random caching strategy for a \emph{non-stationary} popularity profile, which may have statistical dependence across time. A heterogeneous network in which the users, BSs, and sBSs are distributed according to independent PPPs is considered. The sBSs employ a random caching strategy. A protocol model for communication is proposed, and a cost function, which captures the backhaul link overhead, called the ``offloading loss'', is considered. The offloading loss at time $t$, which depends on the popularity profile, is denoted by $\mathcal{T}(t)$. Our goal is to obtain risk bounds on this offloading loss when the popularity profile is time-varying and unknown. Under a certain request model (see \texttt{Assumption $1$}), the BS first estimates the popularity profile based on the requests observed during the first $t$ slots.
It then chooses the caching probabilities $\pi \triangleq (\pi_1,\pi_2,\ldots, \pi_N)$, where $N$ is the number of popular content items that can be cached, in order to minimize its offloading loss $\hat{\mathcal{T}}(t)$, based on the estimated popularity profile. sBSs in the coverage area of the BS use this optimal caching policy to store content items in their caches. Since the popularity profile is time-varying, it becomes necessary to frequently refresh the caches, say after every $T$ time slots, albeit at an additional cost. Thus, it is important to investigate the minimum periodicity $T$ of cache updates that guarantees a desired offloading loss. In this paper, we derive probably approximately correct (PAC) type guarantees on the \emph{offloading loss difference} $\Delta_{\mathcal{T}}(t,T)$, which is defined as the difference between the offloading loss incurred by using the outdated caching policy obtained by optimizing $\hat{\mathcal{T}} (t)$ at time $t+T$, and the optimal offloading loss at time $t+T$. We show that $\Delta_{\mathcal{T}}(t,T) < \epsilon$ with a probability of at least $1-\delta$ for any $\delta > \zeta$ and $\epsilon > 0$, where $\zeta$ is a function of the $\beta$-mixing coefficient, the number of content items $N$, and the user density. The $\beta$-mixing coefficient is a measure of the statistical dependency of the time-varying popularity profiles. If the popularity profile process is ``sufficiently'' mixing, {{\em i.e.}}, if the process becomes almost independent after a sufficiently long time, and if the user density is very high, then the desired $\epsilon$ can be achieved for negligibly small $\delta > 0$. In particular, to achieve a fixed probability $\delta > \zeta$, we require the error $\epsilon$ to be a function of $N$, the rate of change of the popularity profile, and the Rademacher complexity, which is a measure of the difficulty in estimating the offloading loss. 
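The random caching strategy parameterized by $\pi$ admits several concrete instantiations, and this excerpt does not fix one; the sketch below assumes the common scheme in which each sBS caches file $i$ independently with probability $\pi_i$, with $\sum_i \pi_i \leq M$ so that the cache budget of $M$ files holds in expectation. This sampling mechanism is an assumption for illustration, not the paper's stated policy.

```python
import random

def random_cache(pi, cache_size, rng=random):
    """Sample one sBS's cache under independent per-file caching probabilities.

    pi:         caching probabilities (pi[i] for file i); sum(pi) <= cache_size
                so that the cache budget of M files holds in expectation.
    cache_size: cache capacity M, in files.
    """
    assert sum(pi) <= cache_size + 1e-9, "expected cache occupancy exceeds M"
    # File i is cached independently with probability pi[i].
    return {i for i, p in enumerate(pi) if rng.random() < p}

# Deterministic corner case: files 0 and 1 are always cached, file 2 never.
print(random_cache([1.0, 1.0, 0.0], cache_size=2))  # {0, 1}
```

In the learning setting of the paper, the probabilities passed to such a routine would be those obtained by optimizing the estimated offloading loss $\hat{\mathcal{T}}(t)$, and the caches would be resampled at every refresh instant.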
The following are the main findings of this paper: (1) the error $\epsilon$ increases with $N$; (2) the desired error $\epsilon$ can be achieved with higher probability (i.e., $\zeta$ becomes smaller) for a larger user density, thus improving the caching performance, since a higher user density results in more user requests, allowing a better estimate of the popularity profile; (3) the higher the correlation of the popularity profile across time (defined in terms of the $\beta$-mixing coefficient), the longer the waiting time $t$ to achieve a target error level $\epsilon$ with probability $1-\delta$; (4) the error $\epsilon$ is a function of the rate of change of the popularity profile, and hence of the cache refresh period $T$; thus, outdated cache contents lead to a larger error for a given $\delta$, and a rapidly varying popularity profile requires more frequent updates to achieve the desired error performance; (5) a higher Rademacher complexity results in poorer error performance; and (6) when the user requests are independent and identically distributed ({i.i.d.}), the error performance is better than with non-stationary and statistically dependent requests. For stationary popularity profiles and large $t$, frequent cache updates are not necessary to achieve the desired performance. Finally, motivated by our theoretical bounds, we present an algorithm which updates the cache contents only if the discrepancy, which captures the rate at which the popularity profile is changing, is large. We demonstrate the benefits of using the proposed cache update policy compared to periodic cache updates through simulations. To the best of our knowledge, this is the first time random caching has been studied with non-stationary, statistically dependent, and unknown popularity profiles from a learning theory perspective. The initial results of this work can be found in \cite{Bharath2017}. The remainder of the paper is organized as follows.
In \secref{sec:sys_model}, we present the system model and introduce the notation. The problem statement is introduced in \secref{sec:problem_statement}, while the main results are presented in \secref{sec:mainresult_1}. Performance analyses for Bernoulli and Poisson request models are carried out in \secref{sec:bernoulli_poisson}. Numerical results are presented in \secref{sec:numerical_results}. Concluding remarks are provided in \secref{sec:conclusion}. \section{System Model} \label{sec:sys_model} A heterogeneous cellular network is considered in which the users, BSs and sBSs are spatially distributed according to independent PPPs with densities $\lambda_u$, $\lambda_b$ and $\lambda_s$, respectively \cite{Baccelli1997}. The sets of users, BSs and sBSs are denoted by $\Phi_u \subseteq \mathbb{R}^2$, $\Phi_b \subseteq \mathbb{R}^2$, and $\Phi_s \subseteq \mathbb{R}^2$, respectively. Each user requests a content item (i.e., a {\it file}) from the library $\mathcal{F} \triangleq \{f_1, \ldots, f_N\}$ of $N$ files, each of size $B$ bits, via its neighboring sBSs. The requests are assumed to be statistically independent across users. However, the requests from each user are assumed to be \emph{non-stationary} and statistically \emph{dependent} across time. We assume that the size of the cache at each sBS is at most $M$ files. The problem considered in this paper is that of caching relevant ``popular'' files at the sBSs, wherein, depending on the availability of the file in the local cache, the file requested by a user will be served directly by one of its neighboring sBSs. In order to access cached content items, a user $u \in \Phi_u$ identifies and communicates with a set of neighboring sBSs employing the following protocol: sBS $s$ located at $x_s \in \Phi_s$ communicates with user $u$ located at $x_u \in \Phi_u$ if $\norm{x_u - x_s} < \gamma$, for some $\gamma > 0$. This condition determines the communication radius.
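The point processes and the communication protocol above are straightforward to simulate: sample a homogeneous PPP of sBSs (Poisson number of points, i.i.d. uniform positions) and keep the sBSs within distance $\gamma$ of a user. The window size and densities in the sketch below are illustrative; the paper's model is defined on the whole plane.

```python
import math
import random

def sample_poisson(mean, rng):
    """Inverse-CDF sampling of a Poisson random variable (fine for small means)."""
    u, p, k = rng.random(), math.exp(-mean), 0
    cdf = p
    while u > cdf:
        k += 1
        p *= mean / k
        cdf += p
    return k

def sample_ppp(density, side, rng):
    """Homogeneous PPP on a [0, side]^2 window: Poisson count, uniform points."""
    n = sample_poisson(density * side * side, rng)
    return [(rng.uniform(0.0, side), rng.uniform(0.0, side)) for _ in range(n)]

def neighbors(x_u, sbs_points, gamma):
    """Protocol model: the sBSs within communication radius gamma of x_u."""
    return [y for y in sbs_points if math.dist(x_u, y) < gamma]

# Illustrative densities; interference is ignored, as in the protocol model.
rng = random.Random(7)
sbs = sample_ppp(density=0.5, side=10.0, rng=rng)
near = neighbors((5.0, 5.0), sbs, gamma=2.0)
assert all(math.dist((5.0, 5.0), y) < 2.0 for y in near)
```

Conditioning on the point count and scattering points uniformly is the standard way to realize a homogeneous PPP on a bounded window, which is why the sampler is split into a Poisson draw followed by uniform placement.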
In this protocol, we ignore the interference from other users in the network. The set of neighbors of user $u$ located at $x_u$ is denoted by $\mathcal{N}_u \triangleq \{y \in \Phi_s: \norm{y - x_u} < \gamma\}$. The caching policy will depend on the distribution of the requests from the users, which is assumed to be unknown and must therefore be estimated. In the next subsection, we present a stochastic process modeling the requests from the users, and devise a method for estimating its distribution. \subsection{User Request Model} \begin{figure}[h!] \begin{center} {\includegraphics[height=5cm,width=12.0cm]{time_div.eps}} \caption{A time period consisting of $t$ time slots, each of duration $\Delta$, is divided into $2m$ blocks, where the $i^{\text{th}}$ block consists of $a_i$ slots, and $t = \sum_{i=1}^{2m} a_i$.} \label{fig:figslot_timediv} \end{center} \end{figure} Let the stochastic process $X_v(\tau) \in \{1,2,\ldots,N\}$ denote the index of the file requested by user $v \in \Phi_u$ at time $\tau \in \mathbb{R}$. For example, each user can maintain an independent local Poisson clock, and makes a request whenever the local clock ticks. For any two users $v,w \in \Phi_u$, the request processes $X_v(\tau)$ and $X_w(\tau)$ are independent. For ease of analysis, let us divide time into slots of duration $\Delta > 0$ each. Further, for each $v \in \Phi_u$, $\{X_v(\tau), \tau \in \mathbb{R}\}$ is a non-stationary and statistically dependent stochastic process across time slots, but the process $X_v(\tau)$ within each time slot (i.e., $\tau \in [i \Delta, (i+1) \Delta)$, $i=1,2,\ldots$) is assumed to be stationary. Further, we assume that there is a ``typical'' BS at the origin with a coverage radius of $R > 0$. The BS estimates the popularity of the content items based on the requests it receives.
Essentially, at a given time slot $t$, the BS collects requests (for $t$ time slots) from all the users in the BS's coverage area to estimate the popularity profile of the requested files. Let $n_u \sim \text{Poiss}(\pi \lambda_u R^2)$ denote the number of users in its coverage area. The random arrival instants of the requests from different users are assumed to satisfy the following assumption. \textbf{Assumption $1$:} There exist constants $0 \leq \alpha_{\texttt{min}} \leq \alpha_{\texttt{max}} \leq 1$ such that, given $n_u = n \geq 1$ users in the coverage area of the BS, the number of requests in $a \in \mathbb{N}$ time slots, denoted by $r_a \in \mathbb{N}$, satisfies $\Pr\{\alpha_{\texttt{min}} n a \leq r_a \leq \alpha_{\texttt{max}} n a \left\vert \right. n_u = n\} > \zeta_{a,n}$ for some $\zeta_{a,n} > 0$. \label{def:request_model} It turns out that the results based on the above assumption can be used to derive performance guarantees when the arrival process is a homogeneous Poisson point process (see Sec.~\ref{sec:bernoulli_poisson}). Further, we assume that the request instants and the number of requests within a time slot are independent of the files requested. The set of request instants at which the requests from all the users in the coverage area of the BS arrive within the $i^{\text{th}}$ time slot is denoted by $\mathcal{R}_{i}$. Let $X(\tau) \triangleq \bigcup_{v \in \Phi_u: \norm{v}_2 \leq R} \{X_v(\tau)\}$ denote the set of requests from all the users in the coverage area of the BS at time $\tau \in \mathbb{R}$. Note that if two or more users request the same file at time $\tau \in \mathbb{R}$, then it is counted as the same index due to the union in the definition of $X(\tau)$. However, this event almost surely does not occur.
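As a sanity check on this counting condition, here is a small sketch (all numbers hypothetical) that computes $r_a$ from a list of request instants and tests the two-sided bound in Assumption 1:

```python
def requests_in_slots(request_times, slot_width, a):
    """r_a: the number of request instants falling in the first a slots of width slot_width."""
    return sum(1 for tau in request_times if 0.0 <= tau < a * slot_width)

def assumption1_holds(r_a, n, a, alpha_min, alpha_max):
    """Check the two-sided condition alpha_min * n * a <= r_a <= alpha_max * n * a."""
    return alpha_min * n * a <= r_a <= alpha_max * n * a

# Two users, four slots of unit width (hypothetical request instants).
times = [0.1, 0.5, 1.2, 3.9, 4.1]
r4 = requests_in_slots(times, 1.0, 4)   # the instant 4.1 falls outside the first 4 slots
ok = assumption1_holds(r4, n=2, a=4, alpha_min=0.2, alpha_max=0.9)
```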
The set of requests from all the users in time slots $t_1$ to $t_2$ is denoted by $X_{t_1, t_2} \triangleq \{{X}(\tau): \tau \in \mathcal{R}_{t_1,t_2}\}$, where $\mathcal{R}_{t_1,t_2} \triangleq \bigcup_{i=t_1}^{t_2} \mathcal{R}_{i}$ (see Fig. \ref{fig:figslot_timediv}). After receiving the requests $X_{1, t}$ within the first $t$ time slots, the BS computes the empirical estimate of the popularity profile, {{\em i.e.}}, the probability of the $i^{\text{th}}$ file being requested is estimated as follows: \begin{eqnarray} \hat{p}_{i,t} = \frac{1}{r_t} \sum_{s \in \mathcal{R}_{1,t}} \mathds{1}\{X(s) = i\},~ i=1, \ldots, N, \label{eq:estimation_popularity} \end{eqnarray} where $r_t \triangleq \abs{\mathcal{R}_{1,t}}$ is the total number of requests in the first $t$ slots, and the indicator function $\mathds{1}\{X(s) = i\}$ is one when the event $\{X(s) = i\}$ occurs, and zero otherwise. The accuracy of the estimate $\hat{\mathcal{P}}^{(t)} \triangleq \{\hat{p}_{i,t}: i = 1,2,\ldots,N\}$ depends on (i) the number of available samples, which in turn is related to the number of users in the coverage area of the BS, (ii) the number of requests per user, and (iii) the behavior of the process $X(s)$. The estimate in \eqref{eq:estimation_popularity} is valid only when there is a positive number of user requests, which is guaranteed by \texttt{Assumption $1$} above. In the next section, we present the performance measure for the above model, and state the main problem addressed in the paper. \section{Problem Statement}\label{sec:problem_statement} We consider a typical user located at the origin denoted by $o \in \Phi_u$. At time slot $t \in \mathbb{N}$, the ``offloading loss'' is defined as \begin{eqnarray} \mathcal{T}(\Pi^{(t)}, \mathcal{P}^{(t)}, X_{1,t-1}) \triangleq \frac{B}{R_0}\Pr\left\{f_o \notin \mathcal{N}_u \left \vert \right.
X_{1,t-1} \right\}, \label{eq:metric} \end{eqnarray} where $\Pi^{(t)}$ denotes the caching policy, $\mathcal{P}^{(t)} \triangleq \{p_1(t),p_2(t),\ldots,p_N(t)\}$ is the popularity profile in slot $t$, $R_0$ and $\frac{B}{R_0}$ denote the rate supported by the BS and the time overhead incurred in transmitting the file from the BS to the user, respectively, and $f_o$ denotes the file requested by the typical user in the $t$-th slot. In \eqref{eq:metric}, with a slight abuse of notation, $f_o \notin \mathcal{N}_u$ denotes the event that the requested file $f_o$ is not present in the caches of the neighboring sBSs. The offloading loss is the scaled probability of the content requested by user $o$ not being cached by any of the sBSs within its communication range, conditioned on the requests received by the BS until the beginning of time slot $t$, {{\em i.e.}}, $X_{1,t-1}$. We employ the following random caching strategy, which enables us to derive a closed form expression for the offloading loss at time $t$. \textbf{Random caching strategy:} At time $t$ (determined by the BS), each sBS $s \in \Phi_s$ caches content items in an {i.i.d.} fashion by generating $M$ indices distributed according to $\Pi^{(t)} \triangleq \left\{\pi_i(t) \geq 0: \sum_{i=1}^N \pi_i(t) = 1\right\}$ (see \cite{Ji2013}). We seek to solve the following optimization problem: \begin{eqnarray} &\min\limits_{\Pi^{(\tau)} \in \mathcal{P}_{\pi}:\tau \in \mathbb{N}}&\limsup_{t \rightarrow \infty} \frac{1}{t} \sum_{\tau = 1}^t\mathcal{T}(\Pi^{(\tau)},\mathcal{P}^{(\tau)}, X_{1,\tau-1}), \label{eq:main_opt_problem} \end{eqnarray} where $\mathcal{P}_{\pi}$ denotes the $N-$dimensional probability simplex. An expression for $\mathcal{T}(\Pi^{(t)},\mathcal{P}^{(t)}, X_{1,t-1}) $ is given in the following theorem, whose proof can be obtained by replacing $p_i$ by $p_{X,i}(t)$ in the proof of Theorem $1$ found in \cite[Appendix A]{Bharath2016}.
\begin{thrm} The average offloading loss at time $t$ for the random caching strategy $\Pi^{(t)}$ is given by \begin{eqnarray} \mathcal{T}(\Pi^{(t)},\mathcal{P}^{(t)}, X_{1,t-1}) = \sum_{i=1}^N g(\pi_i(t)) p_{X,i}(t), \label{eq:mean_througput_expression} \end{eqnarray} where $p_{X,i}(t) \triangleq \Pr \{f_i \text{ requested by $o$ in slot $t$} | X_{1,t-1}\}$, and $g(\pi_i(t)) \triangleq \frac{B}{R_0}\exp\{-\lambda_u \pi \gamma^2[1-(1-\pi_i(t))^M]\}$. \label{thrm:thm_mean_throughput} \end{thrm} Even assuming that the conditional probabilities $p_{X,i}(t)$ are perfectly known, the complexity involved in solving \eqref{eq:main_opt_problem} can be high owing to the fact that the caching policy at time $t$ depends on $X_{1,t}$, which grows with $t$. In practice, the conditional probability $\Pr \{f_i \text{ requested } | X_{1,t-1}\}$ is unknown, and has to be estimated. Also, the BS may not have enough samples to compute a reasonably good estimate of this conditional probability. Hence, it is reasonable to consider the unconditional probability in the definition of the offloading loss. Thus, one can minimize the offloading loss $\mathcal{T}(\Pi^{(t)},\mathcal{P}^{(t)}) \triangleq \left[ \sum_{i=1}^N g(\pi_i(t)) p_{i}(t) \right]$, where $p_i(t)$ is the probability of the $i^{\text{th}}$ file being requested at time $t$. However, the $p_i(t)$'s are unknown; and hence, an estimate of the popularity profile needs to be used in place of $\mathcal{P}^{(t)}$. More precisely, at time $t$, let $\hat{\Pi}^{*}_t$ denote the caching policy obtained using an estimate $\hat{\mathcal{P}}^{(t)}$; that is, \begin{eqnarray} \hat{\Pi}^{*}_t = \arg \min_{\Pi^{(t)} \in \mathcal{P}_{\pi}} ~~ {\mathcal{T}}(\Pi^{(t)},\hat{\mathcal{P}}^{(t)}). \label{eq:opt_problem_emperical} \end{eqnarray} Suppose that the cache contents chosen by the optimal caching policy at time $t$ will be used to satisfy user demands over the period $(t, t+T]$. 
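The closed form in \thrmref{thrm:thm_mean_throughput} is easy to evaluate numerically. The sketch below (all parameter values hypothetical) computes $\mathcal{T}(\Pi,\mathcal{P}) = \sum_i g(\pi_i) p_i$ and illustrates that shifting caching mass toward popular files lowers the loss relative to uniform caching:

```python
import math

def g(pi_i, lam, gamma, M, B, R0):
    """Per-file factor from the closed-form offloading loss:
    (B/R0) * exp(-lam * pi * gamma^2 * [1 - (1 - pi_i)^M])."""
    return (B / R0) * math.exp(-lam * math.pi * gamma ** 2 * (1.0 - (1.0 - pi_i) ** M))

def offloading_loss(Pi, P, lam, gamma, M, B, R0):
    """T(Pi, P) = sum_i g(pi_i) p_i for a caching distribution Pi and popularity P."""
    assert abs(sum(Pi) - 1.0) < 1e-9
    return sum(g(pi_i, lam, gamma, M, B, R0) * p_i for pi_i, p_i in zip(Pi, P))

# Hypothetical parameters: density 1, radius gamma = 1, cache size M = 2, B = R0 = 1.
P = [0.5, 0.3, 0.15, 0.05]             # popularity profile
uniform = [0.25] * 4
skewed = P                              # cache in proportion to popularity
loss_u = offloading_loss(uniform, P, 1.0, 1.0, 2, 1.0, 1.0)
loss_s = offloading_loss(skewed, P, 1.0, 1.0, 2, 1.0, 1.0)
```

For these values the popularity-proportional policy gives a strictly smaller loss; the actual minimizer in \eqref{eq:opt_problem_emperical} is in general neither uniform nor exactly proportional to the popularity.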
Let us consider the offloading loss incurred in using $\hat{\Pi}^{*}_t$ at a later time, say at time $t+T$. The offloading loss at time $t + T$ is given by $\hat{\mathcal{T}}^*(t + T) \triangleq \mathcal{T}(\hat{\Pi}^{*}_t,{\mathcal{P}}^{(t + T)})$. Further, let ${\Pi}^{*}_{t + T}$ denote the optimal caching policy at time $t + T$ using perfect knowledge of the popularity profile $\mathcal{P}^{(t+T)}$; that is, \begin{eqnarray} {\Pi}^{*}_{t + T} = \arg \min_{\Pi^{(t + T)} \in \mathcal{P}_{\pi}} ~ {\mathcal{T}}(\Pi^{(t+T)},{\mathcal{P}}^{(t + T)}), \label{eq:opt_problem} \end{eqnarray} with the corresponding offloading loss ${\mathcal{T}}^*(t + T) \triangleq \mathcal{T}({\Pi}^{*}_{t + T},{\mathcal{P}}^{(t + T)})$. Similar to \cite{Bharath2016}, the central theme of this paper is the analysis of the \emph{offloading loss gap}, $\Delta_{\mathcal{T}}(t,T) \triangleq \hat{\mathcal{T}}{^*}(t + T) - \mathcal{T}{^*}(t + T)$. In particular, if $\Delta_{\mathcal{T}}(t,T)$ is small, then each term in \eqref{eq:main_opt_problem} is small, which results in a small average offloading loss. This approach is central to the analyses of prediction problems involving non-stationary stochastic processes \cite{Kuznetsov2014}. The number of requests in any given slot and the requested file index are assumed to be independent. For example, if the arrivals are Poisson, then the numbers of requests in any two disjoint intervals are independent. However, the files requested across time are correlated. This assumption is reasonable when the popularity depends on, for example, the files that are trending due to their popularity elsewhere, while a user's decision to browse is independent of the popularity. Note that using the unconditional probability does not render the files requested in slot $t$ independent of the files requested in future slots. Moreover, an estimate of the popularity profile at time slot $t$ depends on the past requests.
However, for future work we aim to investigate generalization bounds retaining the conditioning on the past requests, which makes the offloading loss $\mathcal{T}(\Pi^{(t)}, \mathcal{P}^{(t)}, X_{1,t-1}) \triangleq \frac{B}{R_0}\Pr\left\{f_o \notin \mathcal{N}_u \left \vert \right. X_{1,t-1} \right\}$ at any given slot $t$ random. \section{Main Results} \label{sec:mainresult_1} We study risk bounds on the offloading loss difference, $\Delta_{\mathcal{T}}(t,T)$, when the popularity profile is non-stationary. Essentially, for any $\epsilon > 0$, we seek to identify a risk bound $\delta > 0$, such that \begin{eqnarray} \label{eq:prob_dff} \Pr\left\{\hat{\mathcal{T}}{^\ast}(t + T) - \mathcal{T}{^\ast}(t + T) > \epsilon\right\} < \delta. \end{eqnarray} First, we relate \eqref{eq:prob_dff} to an expression in terms of the estimation error in the following theorem. \begin{thrm} For the estimate of the popularity profile in \eqref{eq:estimation_popularity}, the following bound holds: \begin{eqnarray} \nonumber \Pr\left\{\hat{\mathcal{T}}{^*}(t + T) - \mathcal{T}{^*}(t + T) > \epsilon \right\} \leq 2\Pr\left\{\mathcal{A}_{T}(X_{1,t}) > \epsilon \right\}, \end{eqnarray} where $\mathcal{A}_{T}(X_{1,t}) \triangleq \sup_{\Pi \in \mathcal{P}_\pi} \abs{\sum_{i=1}^N g(\pi_i) (\hat{p}_{i,t} - p_{i,t+T})}$, and $g(\pi_i)$ is defined in \thrmref{thrm:thm_mean_throughput}. \label{thm:machine_learning} \end{thrm} \begin{proof} See \appref{app:proof_ml}. \end{proof} The term $\Pr\left\{\mathcal{A}_{T}(X_{1,t}) > \epsilon \right\}$ can be bounded as follows: \begin{eqnarray} \nonumber \Pr\left\{\mathcal{A}_{T}(X_{1,t}) > \epsilon \right\} &=& \sum_{j=0}^\infty \Pr\left\{\mathcal{A}_{T}(X_{1,t}) > \epsilon \left \vert \right. n_u = j\right\} \Pr\{n_u = j\} \\ \nonumber &\leq& \Pr\left\{n_u = 0 \right\} + \sum_{j=1}^\infty \Pr\left\{\mathcal{A}_{T}(X_{1,t}) > \epsilon \left \vert \right. 
n_u = j\right\} \Pr\{n_u = j\} \\ &=& \exp\left\{-\lambda_u \pi R^2\right\} + \sum_{j=1}^\infty \Pr\left\{\mathcal{A}_{T}(X_{1,t}) > \epsilon \left \vert \right. n_u = j\right\} \Pr\{n_u = j\} \label{eq:first_bound}. \end{eqnarray} We next derive an upper bound on $\Pr \left\{ \mathcal{A}_{T}(X_{1,t}) > \epsilon | n_u = j\right\}$. The term $\mathcal{A}_{T}(X_{1,t})$ depends on $\hat{p}_{i,t}$, which involves the sum of non-stationary random variables which are possibly correlated across time. In order to apply the standard large deviation bounds, we must convert the sum of non-stationary dependent random variables to a sum of blocks of independent random vectors through a coupling argument, which is explained next. For a given stochastic process $X_{1,\infty}$, and $s \in \mathbb{N}$, let $\mathbb{P}_{\tau,\tau + s}(\star)$ and $\mathbb{P}_{1 \rightarrow \tau}(\star) \otimes \mathbb{P}_{\tau + s \rightarrow \infty}(\star )$ denote the joint and product distributions of the stochastic processes $X_{1,\tau}$ and $X_{\tau+s, \infty}$, respectively. If $X_{1,\tau}$ and $X_{\tau + s, \infty}$ are independent, then $\norm{\mathbb{P}_{\tau,\tau + s}(\star ) - \mathbb{P}_{1 \rightarrow \tau}(\star ) \otimes \mathbb{P}_{\tau + s \rightarrow \infty}(\star)}_{\texttt{TV}} = 0$, where $\norm{\star}_{\texttt{TV}}$ denotes the total variational norm. Thus, for a given $s$, this difference, maximized over all $1 \leq \tau \leq \infty$, is a natural measure of the dependency between $X_{1,\tau}$ and $X_{\tau + s, \infty}$. This is commonly referred to as the $\beta-$mixing coefficient, and for $s \in \mathbb{N}$, it is given by \begin{equation} \beta(s) \triangleq \sup_{1 \leq \tau \leq \infty} \norm{\mathbb{P}_{\tau,\tau + s}(\star ) - \mathbb{P}_{1 \rightarrow \tau}(\star) \otimes \mathbb{P}_{\tau + s \rightarrow \infty}(\star) }_{\texttt{TV}}. \end{equation} A stochastic process is said to be $\beta$-mixing if $\beta(s) \rightarrow 0$ as $s \rightarrow \infty$.
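For intuition, the $\beta$-mixing coefficient can be computed exactly for a stationary two-state Markov chain, where the dependence between $X_0$ and $X_s$ is the total-variation distance between the joint law of $(X_0, X_s)$ and the product of the marginals. A minimal illustration (the transition matrix is a hypothetical choice):

```python
def mat_pow(P, s):
    """P^s for a 2x2 transition matrix, by repeated multiplication."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(s):
        M = [[sum(M[i][k] * P[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return M

def stationary(P):
    """Stationary distribution of P = [[1-a, a], [b, 1-b]]."""
    a, b = P[0][1], P[1][0]
    return [b / (a + b), a / (a + b)]

def beta_coeff(P, s):
    """TV distance between the joint law of (X_0, X_s) and the product of marginals."""
    mu, Ps = stationary(P), mat_pow(P, s)
    return 0.5 * sum(abs(mu[i] * Ps[i][j] - mu[i] * mu[j])
                     for i in range(2) for j in range(2))

P = [[0.9, 0.1], [0.2, 0.8]]   # hypothetical chain; second eigenvalue 0.7
decay = [beta_coeff(P, s) for s in (1, 5, 20, 50)]
```

Here the coefficient decays geometrically (like $0.7^s$), so the chain is $\beta$-mixing.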
For a given stochastic process that is $\beta$-mixing, two well-separated sequences of the process are approximately independent, where the approximation error is given by $\beta(s)$. Thus, we assume that the request process $X(t)$ is a $\beta$-mixing stochastic process, i.e., $\beta(s) \rightarrow 0$ as $s \rightarrow \infty$. We now provide the details of the coupling argument, through which the dependent stochastic process is replaced by independent blocks of random variables. This will facilitate the use of a concentration inequality; in particular, McDiarmid's inequality. Fix $m \in \mathbb{N}$, and consider $2m$ consecutive blocks, where block $i$, $i \in \{1,2,\ldots,2m\}$, consists of $a_i$ time slots, and $t \triangleq \sum_{j=1}^{2m} a_j$ is the total number of time slots (see Fig. \ref{fig:figslot_timediv}). Let $a_0 \triangleq 0$. Consider the time instants at which the requests arrive corresponding to the odd and even blocks, defined as $\mathbb{T}^{(t)}_o \triangleq \bigcup_{j:j = 0,2,4,\ldots,2(m-1)} \mathcal{R}_{a_j+1,a_{j+1} }$ and $\mathbb{T}^{(t)}_e \triangleq \bigcup_{j:j = 1,3,5,\ldots,2m-1} \mathcal{R}_{a_j+1,a_{j+1} }$, respectively. Thus, the requests corresponding to the odd and even blocks are given by $X^o_{1,t} \triangleq \{X(s): s \in \mathbb{T}^{(t)}_o\}$ and $X^e_{1,t} \triangleq \{X(s): s \in \mathbb{T}^{(t)}_e\}$, respectively. In order to use a coupling argument, define a new stochastic process $\tilde{X}(\tau)$, $\tau \in \mathbb{R}$, such that for a fixed $\mathcal{R}_{a_{i-1}+1,a_i}$, $\{\tilde{X}(\tau): \tau \in \mathcal{R}_{a_{i-1}+1,a_i}\}$ and $\{X(\tau): \tau \in \mathcal{R}_{a_{i-1}+1,a_i}\}$ have the same distribution, $i=1,2,\ldots,2m$. Now, consider $\tilde{X}^h_{1,t} \triangleq \{\tilde{X}(s): s \in \mathbb{T}^{(t)}_h\}$, $h \in \{e,o\}$, such that the requests in the even (and odd) blocks of $\tilde X_{1,t}$ are independent. However, within each block, the random variables can be arbitrarily correlated.
We can always construct such a stochastic process, and the pair $({X}(s), \tilde{X}(s))$ is called a \emph{coupling} (see Fig. \ref{fig:figslot_timediv}). We define $\tilde X_{1,t}^e$ and $\tilde X_{1,t}^o$ similarly to $X_{1,t}^e$ and $X_{1,t}^o$, respectively. The following theorem provides a bound on the performance guarantees in terms of the $\beta-$mixing coefficient. \begin{thrm} \label{thm:main_res1} For the given model, and the popularity estimate in \eqref{eq:estimation_popularity}, with a probability of at least $1-\delta$, the following holds \begin{eqnarray} \nonumber \hat{\mathcal{T}}{^\ast}(t + T) \hspace{-0.3cm}&-&\hspace{-0.3cm} \mathcal{T}{^\ast}(t + T) < \min\{\mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})], \mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^o_{1,t})]\} + \frac{N \alpha_{\texttt{max}} B a_{\texttt{max}}}{ \alpha_{\texttt{min}} R_0 a_{\texttt{min}}} \sqrt{\frac{\log \left(\frac{2}{\delta^{'}}\right)}{2 m}}, \label{eq:thm3eq} \end{eqnarray} where $\delta^{'} \triangleq \delta/2 - \exp\left\{-\lambda_u \pi R^2\right\} - \sum_{i=2}^{2m-1} \beta(a_i) - e^{-\lambda_u \pi R^2} \sum_{j=1}^\infty \sum_{i=1}^{2m} (1-\zeta_{a_i,j}) \frac{(\lambda_u \pi R^2)^j}{j!} > 0$. Further, \begin{eqnarray} \mathcal{A}_T(\tilde X^{(h)}_{1,t}) \triangleq \sup_{\Pi \in \mathcal{P}_\pi} \left\vert \sum_{i=1}^N g(\pi_i) \left(\hat{p}^h_{i,t} - p_{i,t+T}\right)\right \vert, \end{eqnarray} where $\hat{p}^h_{i,t} \triangleq \frac{1}{\abs{\mathbb{T}^{(t)}_h}} \sum_{s \in \mathbb{T}^{(t)}_h} \mathds{1}\{\tilde X(s) = i\}$, $h \in \{e,o\}$. \end{thrm} \begin{proof} See \appref{app:main_res1_proof}. \end{proof} Note that $\delta^{'} > 0$ implies a bound on $\delta$. Next, we bound $\min \{\mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})], \mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^o_{1,t})]\}$ to get the desired result. The bound that we derive depends on the Rademacher complexity and the nonstationarity of the stochastic process. We begin with the following definition. 
\defn{ \textbf{(Rademacher complexity)} The Rademacher complexity of $\mathcal{P}_\pi$ is defined as \cite[Chapter 3]{Mohri2012} \begin{eqnarray*} \mathcal{R}^{(t)}_h \triangleq \mathbb{E}_{\tilde X, \bm{\sigma}} \frac{1}{\abs{\mathbb{T}^{(t)}_h}}\sup_{\Pi \in \mathcal{P}_\pi}\sum_{i=1}^N \!\! g(\pi_i) \vert \sum_{s \in \mathbb{T}^{(t)}_h} \!\!\sigma_{i,s} \mathds{1}\{\tilde X(s) = i\} \vert, \end{eqnarray*} where the Rademacher random variables $\sigma_{i,s} \in \{-1,1\}$, $i=1,2,\ldots, N$ for $s \in \mathbb{T}^{(t)}_h$ are {i.i.d.}, taking the values $+1$ and $-1$ each with probability $1/2$, $\bm{\sigma} \triangleq \{\sigma_{i,s} \in \{-1,1\}: i=1,2,\ldots, N, s \in \mathbb{T}^{(t)}_h\}$, and $h \in \{e,o\}$. \label{def:rademachercomplexity} } Next, we provide one of the main results of this paper. \begin{thrm} \label{thm:main_res2} For the given model and the popularity estimate in \eqref{eq:estimation_popularity}, with a probability of at least $1-\delta$, the following holds: \begin{eqnarray} \label{eq:pacbound_2} \nonumber \hat{\mathcal{T}}{^\ast}(t + T) < \mathcal{T}{^\ast}(t + T) + 2 \max\{\mathcal{R}^{(t)}_e,\mathcal{R}^{(t)}_o\} + \max\{\Delta^{(e)}_{t,T}, \Delta^{(o)}_{t,T}\} + \frac{N \alpha_{\texttt{max}} B a_{\texttt{max}}}{R_0a_{\texttt{min}} \alpha_{\texttt{min}}} \sqrt{\frac{a_{\texttt{max}}\log \left(\frac{2}{\delta^{'}}\right)}{ t}}, \end{eqnarray} where $\mathcal{R}^{(t)}_h$ is the Rademacher complexity, $a_{\texttt{max}} \triangleq \max_{1\leq i \leq 2m} a_i$, $\Delta^{(h)}_{t,T} \triangleq \sup_{\Pi \in \mathcal{P}_\pi} \sum_{i=1}^N g(\pi_i) d_i^{(h)}(t,T)$, $d^{(h)}_i(t,T) \triangleq \frac{1}{\abs{\mathbb{T}^{(t)}_h}} \sum_{s \in \mathbb{T}^{(t)}_h} \left \vert p_{i,s} - p_{i,t+T} \right \vert$, $h \in \{e,o\}$, and $\delta^{'} > 0$ is as defined in \thrmref{thm:main_res1} with $m = \lceil \frac{t}{a_\texttt{max}} \rceil$. \end{thrm} \begin{proof} See \appref{app:mainres2proof}.
\end{proof} \textbf{Remarks:} \begin{enumerate}[(1)] \item The error $\epsilon$ increases linearly with $N$. To compensate for larger values of $N$, the waiting time $t$ should be of the order of $N^2$; a similar observation was also made in \cite{Bharath2016}. As $\lambda_u$ increases, a lower value of $\delta$ can be achieved. In general, as $\lambda_u \rightarrow \infty$, $\delta = 0$ cannot be achieved due to the dependence of the stochastic process across time, {{\em i.e.}}, $\beta(a) > 0$, $a > 0$. \item The error $\epsilon$ decreases as $t$ increases. When the requests are {i.i.d.}, $a_{\texttt{max}} = 1$, and hence, $\epsilon$ is small. Thus, when the requests are correlated we incur a penalty of $a_{\texttt{max}}$, since the error decreases as $\sqrt{1/(t/a_{\texttt{max}})}$ compared to $\sqrt{1/t}$ for {i.i.d.} requests. The error can be reduced by choosing $a_{\texttt{max}} =1$, {{\em i.e.}}, $a_i = 1$, $i=1, \dots, 2m$. However, since $\beta(x)$ is a monotonically decreasing function of $x$, this choice makes the $\beta$-mixing terms in $\delta^{'}$ large, so the probability with which the lower error is achieved is small, indicating a tradeoff between the error and the probability with which the bound in \eqref{eq:pacbound_2} holds. Also, lower values of $\delta^{'}$ result in a higher error, which requires the value of $m$ to be small. However, $m$ scales as $t/a_\texttt{max}$, which indicates that if $a_\texttt{max} = \mathcal{O}(\sqrt t)$, then the last term in the error decays as $1/t^{1/4}$ instead of $1/\sqrt t$. On the other hand, for larger values of $m$, the value of $\delta^{'}$ is small unless the $\beta$-mixing coefficient decays faster than $1/\sqrt{t}$; this indicates that a sufficiently fast decaying $\beta$-mixing coefficient is needed for better performance. The last term in the expression for $\delta^{'}$ depends on $\zeta_{a_i,j}$, whose effect is studied by looking at specific examples, such as the Bernoulli and Poisson models for user requests, as detailed in the next section.
\item The error $\epsilon$ increases with $\frac{\alpha_{\texttt{max}}} {\alpha_{\texttt{min}}}$. The higher this ratio, the larger the variation in the number of requests. Conversely, the closer this ratio is to one, the smaller the variation in the number of requests, and the smaller the error. The non-stationarity of the process is captured through $\Delta_{t,T}^{(h)}$, $h \in \{e,o\}$. For a stationary process, $\Delta_{t,T}^{(h)} = 0$, $h\in \{e,o\}$. \item Even when the user requests are {i.i.d.}, the error does not vanish as $t \rightarrow \infty$, because the Rademacher complexity does not go to zero as $t \rightarrow \infty$. This indicates the difficulty in estimating the offloading loss, or equivalently the popularity profile, for a given caching policy. \item The only term that depends on $T$ is $\max\{\Delta^{(e)}_{t,T}, \Delta^{(o)}_{t,T}\}$. The frequency with which the cache update should be done depends on $\Delta^{(h)}_{t,T}$, $h \in \{e,o\}$. For instance, if $\Delta^{(h)}_{t,T}$, $h \in \{e,o\}$, is high, then the updates should be more frequent. \item The error is directly proportional to the number of bits per file, and inversely proportional to the rate at which the file is transmitted from the sBS to the users. \end{enumerate} \section{Bernoulli and Poisson Requests}\label{sec:bernoulli_poisson} In this section, we consider the Bernoulli and Poisson request models, and analyze their implications for the results derived so far. \subsection{Bernoulli request model} Let $X^{k}_u \in \{0,1\}$, $u \in \Phi_u$, denote the request made by user $u$ for a cached file in the $k^{\text{th}}$ slot. In the Bernoulli model, it is assumed that $X^{k}_u \in \{0,1\}$ is i.i.d. across users and slots. Further, a user makes a request with probability $p$ in each time slot, independent of the file it requests, {{\em i.e.}}, $\Pr\{X^{k}_u = 1\} = p$. The slot width $\Delta > 0$ is chosen such that at most one file is requested.
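Under this model, conditioned on $n_u = n$, the count $r_a$ over $a$ slots is $\mathrm{Binomial}(na, p)$. The sketch below (values hypothetical) checks numerically that the exact lower tail is dominated by a Chernoff-type bound of the form used in the theorems that follow:

```python
import math

def binom_lower_tail(n, p, k):
    """Exact Pr{X <= k} for X ~ Binomial(n, p)."""
    return sum(math.comb(n, j) * p ** j * (1.0 - p) ** (n - j) for j in range(k + 1))

def chernoff_lower(n, p, lam):
    """Chernoff-type bound: Pr{X <= np - lam} <= exp(-lam^2 / (2 n p))."""
    return math.exp(-lam ** 2 / (2.0 * n * p))

# Binomial(100, 0.3), deviation lam = 10 below the mean 30 (hypothetical values).
n, p, lam = 100, 0.3, 10.0
exact = binom_lower_tail(n, p, int(n * p - lam))
bound = chernoff_lower(n, p, lam)
```

The bound is loose for small deviations but decays at the exponential rate that drives the analysis.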
Conditioned on the event that a set of requests are made from several users, the files requested follow a non-stationary dependent random process. This simplifying assumption makes the analysis of the offloading loss guarantees tractable. To provide theoretical guarantees for this model, from the general result in \thrmref{thm:main_res2}, it suffices to prove an upper bound on the probability of the event $\left\{r_{a_i} < \alpha_{\texttt{min}} n a_i\right\} \bigcup \left\{r_{a_i}> \alpha_{\texttt{max}} n a_i\right\}$ in the $i^{\text{th}}$ block of size $a_i$, conditioned on the presence of $n$ users, {{\em i.e.}}, \begin{eqnarray} \label{eq:firstterm_bern_LD} \Pr\left\{r_{a_i} < \alpha_{\texttt{min}} n a_i \bigcup r_{a_i} > \alpha_{\texttt{max}} n a_i \left \vert \right. n_u = n\right\} &\leq& \Pr\left\{r_{a_i} < \alpha_{\texttt{min}} n a_i \left \vert \right. n_u = n\right\} \nonumber \\&&+ \Pr\left\{r_{a_i} > \alpha_{\texttt{max}} n a_i \left \vert \right. n_u = n\right\}, \end{eqnarray} where $r_{a_i}$ is the total number of requests in $a_i$ slots, which is the sum of $a_i n$ independent Bernoulli random variables, leading to $\mathbb{E}[r_{a_i} \left \vert \right. n_u = n] = a_i np$. Towards this end, we use the following result: \begin{thrm} Let $X_1, X_2,\ldots,X_n$ be independent Bernoulli random variables with \begin{equation} \Pr\{X_i = 1\} = p~~~~\Pr\{X_i=0\} = 1-p. \end{equation} Then, for $X \triangleq \sum_{i=1}^n X_i$ and $\lambda > 0$, we have \begin{equation} \Pr\{X \leq \mathbb{E}[X] - \lambda\} \leq \exp\{-\lambda^2/(2np)\}, \end{equation} and \begin{equation} \Pr\{X \geq \mathbb{E}[X] + \lambda\} \leq \exp\left\{- \frac{\lambda^2}{2(np + \lambda/3)}\right\}. \end{equation} \label{thm:bernoulli1} \end{thrm} Using \thrmref{thm:bernoulli1} conditioned on the event $\{n_u = n\}$, we have the following theorem.
\begin{thrm} \label{thm:bern_largedev_bound} For the Bernoulli model with $0 < \alpha_{\texttt{min}} < p < \alpha_{\texttt{max}}$, we have \begin{equation} \Pr\left\{r_{a_i} < \alpha_{\texttt{min}} n a_i \bigcup r_{a_i} > \alpha_{\texttt{max}} n a_i \left \vert \right. n_u = n\right\} \leq 2 \exp\left\{- \frac{\psi_p a_{\texttt{min}} n}{2p} \right\}, \end{equation} for $i=1,2,\ldots,2m$, and $n \geq 1$. In the above, $\psi_p \triangleq \min\left\{ \frac{(\alpha_{\texttt{max}}-p)^2}{1 + \frac{\alpha_{\texttt{max}} - p}{3p}}, (p-\alpha_{\texttt{min}})^2 \right\}$. \label{thm:bernoulli2} \end{thrm} \emph{Proof:} From \eqref{eq:firstterm_bern_LD}, it suffices to bound the following two terms $\Pr\left\{r_{a_i} < \alpha_{\texttt{min}} n a_i \left \vert \right. n_u = n\right\}$ and $\Pr\left\{r_{a_i} > \alpha_{\texttt{max}} n a_i \left \vert \right. n_u = n\right\}$. We start by upper bounding the first term in \eqref{eq:firstterm_bern_LD}. Using $\mathbb{E}[r_{a_i} \left \vert \right. n_u = n] = np a_i$ and choosing $\lambda \triangleq na_i(p - \alpha_{\texttt{min}})$ in \thrmref{thm:bernoulli1} results in \begin{eqnarray} \Pr\left\{r_{a_i} < \alpha_{\texttt{min}} n a_i \left \vert \right. n_u = n\right\} &\leq& \exp\left\{-\frac{(p-\alpha_{\texttt{min}})^2 a_i n}{2p}\right\}\nonumber\\ &\leq& \exp\left\{-\frac{(p-\alpha_{\texttt{min}})^2 a_{\texttt{min}} n}{2p}\right\}, \label{eq:bernoulli1} \end{eqnarray} for all $p > \alpha_{\texttt{min}}$, and $i=1,2,\ldots,2m$. Similarly, the second term in \eqref{eq:firstterm_bern_LD} can be bounded as \begin{eqnarray} \Pr\left\{r_{a_i} > \alpha_{\texttt{max}} n a_i \left \vert \right.
n_u = n\right\} &\leq& \exp\left\{-\frac{(\alpha_{\texttt{max}}-p)^2 a_i n}{2(p + (\alpha_{\texttt{max}} - p)/3)}\right\} \nonumber \\ &\leq& \exp\left\{-\frac{(\alpha_{\texttt{max}}-p)^2 a_{\texttt{min}} n}{2p (1 + (\alpha_{\texttt{max}} - p)/(3p))}\right\} , \label{eq:bernoulli2} \end{eqnarray} for all $p < \alpha_{\texttt{max}}$ and any $i=1,2,\ldots,2m$. Combining \eqref{eq:bernoulli1} and \eqref{eq:bernoulli2} gives the desired result. This completes the proof of \thrmref{thm:bernoulli2}. $\blacksquare$ By using \thrmref{thm:bernoulli2}, we have $\Pr\left\{\alpha_{\texttt{min}} n a_i \leq r_{a_i} \leq \alpha_{\texttt{max}} n a_i \left \vert \right. n_u = n\right\} \geq 1 - 2 \exp\left\{- \frac{\psi_p a_{\texttt{min}} n}{2p} \right\} \triangleq \zeta_{a_i,n}$. Using this in the expression for $\delta^{'}$ in \thrmref{thm:main_res2}, and after some algebraic manipulation, we obtain the following result.
\begin{thrm} For the Bernoulli request model with $0 < \alpha_{\texttt{min}} < p < \alpha_{\texttt{max}}$, and the popularity estimate in \eqref{eq:estimation_popularity}, the following holds with a probability of at least $1-\delta$ \begin{eqnarray} \label{eq:pacbound_2bern} \nonumber \hat{\mathcal{T}}{^\ast}(t + T) \leq \mathcal{T}{^\ast}(t + T) + 2 \max\{\mathcal{R}^{(t)}_e,\mathcal{R}^{(t)}_o\} + \max\{\Delta^{(e)}_{t,T}, \Delta^{(o)}_{t,T}\} + \frac{N B a_{\texttt{max}}\alpha_{\texttt{max}}}{ a_{\texttt{min}} R_0 \alpha_{\texttt{min}}} \sqrt{\frac{a_{\texttt{max}} \log \left(\frac{2}{\delta^{'}}\right)}{ t}}, \end{eqnarray} where $\mathcal{R}^{(t)}_h$ is the Rademacher complexity, and $$\Delta^{(h)}_{t,T} \triangleq \sup_{\Pi \in \mathcal{P}_\pi} \sum_{i=1}^N g(\pi_i) d_i^{(h)}(t,T),$$ $d^{(h)}_i(t,T) \triangleq \frac{1}{\abs{\mathbb{T}^{(t)}_h}} \sum_{s \in \mathbb{T}^{(t)}_h} \left \vert p_{i,s} - p_{i,t+T} \right \vert$, $h \in \{e,o\}$. Further, $$\delta^{'} = \frac{\delta}{2} - \left(\exp\left\{-\lambda_u \pi R^2\right\} + \sum_{i=2}^{2m-1} \beta(a_i) + 4m \left[e^{-\lambda_u \pi R^2} (e^{\lambda_u \pi R^2e^{-\phi_p}} - 1) \right]\right)> 0,$$ where $\phi_p \triangleq \frac{a_\texttt{min} \psi_p}{2p}$, and $\psi_p$ is as defined in Theorem \ref{thm:bern_largedev_bound}. \end{thrm} From the above theorem, the following observations can be made. First, assuming that $a_\texttt{min}$ and $a_\texttt{max}$ grow as $\mathcal{O}(\sqrt t)$ (which implies that $m = \mathcal{O}(\sqrt t)$), the last term in the error in \eqref{eq:pacbound_2bern} goes to zero as $1/t^{1/4}$, while the other terms are not affected by this choice. For $m = \mathcal{O}(\sqrt t)$, the second term in the expression for $\delta^{'}$ tends to zero as $t \rightarrow \infty$, provided that $\beta(\sqrt t) \rightarrow 0$ as $t \rightarrow \infty$. This demands a faster decay rate of $\beta$-mixing.
The last term in the expression for $\delta^{'}$ vanishes as $t \rightarrow \infty$, resulting in a larger value of $\delta^{'}$, and hence, reducing the error. As a result of this, asymptotically in $t$, any value of $\delta > 0$ is a valid choice. Thus, by choosing $\delta$ sufficiently close to $0$, a high probability result on the performance can be obtained. \subsection{Poisson request model} \label{sec:poiss} We assume that the requests follow a Poisson model as defined below.\\ \texttt{\textbf{Assumption~$2$}:} The requests from each user arrive according to an independent homogeneous Poisson process with arrival rate $\lambda_r$. Conditioned on the number of requests, the requested files follow a non-stationary, possibly dependent stochastic process. As in the previous subsection, we first provide a bound on $\zeta_{a_i,n}$ for each $i$. \begin{thrm} For the Poisson request model, with $\alpha_{\texttt{min}} = \frac{\Delta \lambda_r}{e^2}$ and $\alpha_{\texttt{max}} = {\Delta \lambda_r e}$, the following bound holds \begin{eqnarray} \Pr\left\{r_{a_i} < \alpha_{\texttt{min}} n a_i \bigcup r_{a_i} > \alpha_{\texttt{max}} n a_i \left \vert \right. n_u = n\right\} \leq 2\exp\{-n a_\texttt{min} \lambda_r \Delta\}. \label{eq:poisson1} \end{eqnarray} \end{thrm} \emph{Proof:} First, consider the following with $\tau \triangleq \alpha_{\texttt{min}} n a_i$ \begin{eqnarray} \Pr\left\{r_{a_i} < \tau \left \vert \right. n_u = n\right\} &=& \Pr\left\{e^{-s r_{a_i}} > e^{-\tau s}\left \vert \right. n_u = n\right\} \leq \inf_{s > 0} e^{\tau s} \mathbb{E} [e^{-r_{a_i} s}\left \vert \right. n_u = n]\nonumber\\ &\leq& \exp\left\{ - n a_i \left[\Delta \lambda_r - \alpha_{\texttt{min}} \left(1 - \log\left(\frac{\Delta \lambda_r}{\alpha_{\texttt{min}}}\right) \right) \right] \right\}, \end{eqnarray} where the last inequality follows by using the Chernoff bound along with the fact that $\mathbb{E} [r_{a_i} \left \vert \right. n_u = n] = \lambda_r \Delta n a_i$.
Substituting for $\tau$, using $\alpha_{\texttt{min}} = \frac{\Delta \lambda_r}{e^2}$, and the fact that $a_i \geq a_\texttt{min}$ for all $i$, we get \begin{equation} \Pr\left\{r_{a_i} < \tau \left \vert \right. n_u = n\right\} \leq \exp\left\{-n a_\texttt{min} \lambda_r \Delta \left(1+ \frac{1}{e^2}\right) \right\}. \label{eq:poiss_bound1} \end{equation} Now, consider the following term: \begin{eqnarray} \Pr\left\{r_{a_i} > \alpha_{\texttt{max}} n a_i \left \vert \right. n_u = n\right\} &\leq& \exp\left\{-na_i \lambda_r \Delta \left( 1 - \frac{\alpha_{\texttt{max}}}{\lambda_r \Delta} + \frac{\alpha_{\texttt{max}}}{\lambda_r \Delta} \log \left(\frac{\alpha_{\texttt{max}}}{\lambda_r \Delta}\right) \right) \right\} \nonumber \\ &\leq& \exp\{- n a_\texttt{min} \lambda_r \Delta\}, \label{eq:poiss_bound2} \end{eqnarray} where the inequality follows from the Chernoff bound, and the last inequality follows by choosing $\alpha_{\texttt{max}} = {e \Delta \lambda_r} > \alpha_{\texttt{min}} = \Delta \lambda_r / e^2$, and $a_i \geq a_{\texttt{min}}$. From \eqref{eq:poiss_bound1} and \eqref{eq:poiss_bound2}, we get the bound in \eqref{eq:poisson1}. 
$\blacksquare$ \begin{thrm} For the Poisson request model with the popularity estimate in \eqref{eq:estimation_popularity}, with a probability of at least $1-\delta$, the following holds \begin{eqnarray} \label{eq:pacbound_2} \nonumber \hat{\mathcal{T}}{^\ast}(t + T) \leq \mathcal{T}{^\ast}(t + T) + 2 \max\{\mathcal{R}^{(t)}_e,\mathcal{R}^{(t)}_o\} + \max\{\Delta^{(e)}_{t,T}, \Delta^{(o)}_{t,T}\} + \frac{N B a_{\texttt{max}}e}{ a_{\texttt{min}} R_0} \sqrt{\frac{a_{\texttt{max}} \log \left(\frac{2}{\delta^{'}}\right)}{ t}}, \end{eqnarray} where $\mathcal{R}^{(t)}_h$ is the Rademacher complexity, $$\Delta^{(h)}_{t,T} \triangleq \mathbb{E}\left[\sup_{\Pi \in \mathcal{P}} \sum_{i=1}^N g(\pi_i) d_i^{(h)}(t,T)\right],$$ where $d^{(h)}_i(t,T) \triangleq \frac{1}{\abs{\mathbb{T}^{(t)}_h}} \sum_{s \in \mathbb{T}^{(t)}_h} \left \vert p_{i,s} - p_{i,t+T} \right \vert$, $h \in \{e,o\}$. Further, $$\delta^{'} = \frac{\delta}{2} - \left(\exp\left\{-\lambda_u \pi R^2\right\} + \sum_{i=2}^{2m-1} \beta(a_i) + 4m \left[e^{-\lambda_u \pi R^2} (e^{-\lambda_u \pi R^2e^{-a_\texttt{min} \lambda_r \Delta}} - 1) \right]\right)> 0.$$ \end{thrm} As in the Bernoulli case, a better performance can be achieved by choosing $m = \mathcal{O}(\sqrt t)$ and $a_i = \mathcal{O}(\sqrt t)$ for all $i$. It can also be seen that as $\lambda_r$ (and $\Delta$) increases, a smaller value of $\delta$ is possible leading to a better performance. However, unlike the Bernoulli model, the bound is independent of $\alpha_{\texttt{min}}$ and $\alpha_{\texttt{max}}$. The results presented for the models considered here lead to a simple yet effective algorithm for updating the cache when the popularity profile is varying across time. Next, we provide the details of this algorithm along with numerical simulations. 
\section{Cache Update Algorithm and Numerical Results}\label{sec:numerical_results} In this section, we present a cache update algorithm following \thrmref{thm:main_res2}, and the corresponding simulation results. \thrmref{thm:main_res2} suggests that the sBSs should update their caches at the time instants at which the error becomes large. The only relevant term is $\max\{\Delta^{(e)}_{t,T},\Delta^{(o)}_{t,T}\} \leq \Delta_{t,T} \triangleq \frac{1}{\abs{\mathbb{T}^{(t)}_e \bigcup \mathbb{T}^{(t)}_o}}\sup_{\Pi \in \mathcal{P}_\pi} \sum_{i=1}^N \sum_{s \in \mathbb{T}^{(t)}_o \bigcup \mathbb{T}^{(t)}_e} g(\pi_i) \left \vert p_{i,s} - p_{i,t+T} \right \vert$. The following cache update mechanism is employed: \begin{enumerate} \item \small{Initialize $t=0$ and $T=0$. Update the caches randomly. \item If $\hat\Delta_{t,T} > \texttt{threshold}$, then update the caches using the caching probability obtained by solving $\hat{\Pi}^{*}_{t+T} = \arg \min_{\Pi^{(t+T)} \in \mathcal{P}_{\pi}} ~~ {\mathcal{T}}(\Pi^{(t+T)},\hat{\mathcal{P}}^{(t+T - 1)})$, where $\hat{\mathcal{P}}^{(t + T-1)}$ is the estimate obtained using \eqref{eq:estimation_popularity}, and set $T = t$. Here, $\hat\Delta_{t,T}$ denotes an estimate of $\Delta_{t,T}$, and $\texttt{threshold} > 0$ determines the error achieved. \item Set $t \leftarrow t+1$ and go to step $2$.} \end{enumerate} \begin{figure}[h!] \begin{center} {\includegraphics[height=3.5in,width=4.5in]{Offload_loss_vs_cache_size_latest.eps}} \caption{Offloading loss as a function of the cache size.} \label{fig:offload_vs_cachesize} \end{center} \end{figure} \begin{figure}[h!] \begin{center} {\includegraphics[height=3.5in,width=4.5in]{Fetching_cost_vs_cachesize.eps}} \caption{Fetching cost versus cache size for two different scenarios of arrival process.} \label{fig:fetching_cost_vs_cachesize} \end{center} \end{figure} We define the fetching cost as the average number of files downloaded at each cache update. 
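A minimal Python sketch of the three-step update rule above may help make the mechanism concrete. Everything here is illustrative: the optimization in step 2 is replaced by a placeholder policy (caching in proportion to the current popularity estimate), and `estimate_discrepancy` is a crude plug-in surrogate for $\hat\Delta_{t,T}$ rather than the estimator analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_discrepancy(history, current_popularity):
    """Plug-in surrogate for the discrepancy: average L1 gap between
    past per-slot popularity estimates and the current one."""
    if not history:
        return float("inf")  # forces an update on the very first slot
    gaps = [np.abs(p - current_popularity).sum() for p in history]
    return float(np.mean(gaps))

def cache_update_loop(requests_per_slot, n_files, threshold, horizon):
    """Threshold-triggered cache updates (sketch of steps 1-3)."""
    history = []                                   # per-slot popularity estimates
    cache_prob = rng.dirichlet(np.ones(n_files))   # step 1: random initialization
    n_updates = 0
    for t in range(horizon):
        counts = np.bincount(requests_per_slot[t], minlength=n_files)
        p_hat = counts / max(counts.sum(), 1)
        if estimate_discrepancy(history, p_hat) > threshold:
            # step 2: re-solve for the caching distribution; the popularity
            # estimate itself serves as a placeholder policy here
            cache_prob = p_hat.copy()
            n_updates += 1
            history = []                           # restart the window (T = t)
        history.append(p_hat)                      # step 3: next slot
    return cache_prob, n_updates
```

In the paper the threshold trades off offloading loss against fetching cost; in this sketch it simply controls how often the placeholder policy is recomputed.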
The simulation setup consists of sBSs and users distributed according to PPPs with densities $\lambda_B = 0.00001$ and $\lambda_u = 0.0001$, respectively. The number of files is $N=100$, and the coverage of the BS and sBSs are $1000$ m and $500$ m, respectively. We let $\gamma= 500$. The deterministic arrival rate corresponds to a deterministic variation in the distribution of the popularity profile once every $150$ slots; while the random change corresponds to a random change in the popularity profile which occurs once every $100$ slots on average. In the deterministic variation scenario, a random set of $3$ pairs of files are chosen, and are permuted in a uniformly random fashion. In the random variation scenario, two pairs of indices are randomly and uniformly permuted at random times. The requests follow a Poisson arrival model with rates $\lambda_r = 0.09$ and $0.01$ for the scenarios corresponding to random and deterministic changes, respectively. Requests for the files are generated using a Zipf distribution with parameter $\theta = 0.8$. Thus, the arrival is non-stationary but independent across time. This non-stationarity results in oscillations in the curves. The requests from a typical user at the origin are used to evaluate the offloading loss. \figref{fig:offload_vs_cachesize} shows the offloading loss with $B = R_0$ as a function of the cache size for the two scenarios mentioned above. The periodic updates are carried out every $5$ time slots. It is clear from the figure that, for the random variation scenario, the performance of the proposed scheme and the periodic scheme are almost the same. However, we observe in \figref{fig:fetching_cost_vs_cachesize} that the fetching cost of the proposed scheme is lower, as the periodic update scheme requires far too many updates. This confirms that by appropriately choosing the \texttt{threshold} values, the proposed scheme outperforms the periodic cache update scheme for specific scenarios. 
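The request-generation part of this setup can be sketched as follows; the helper names and the choice of disjoint index pairs are our reading of the description above (a hedged reconstruction, not the authors' simulation code):

```python
import numpy as np

rng = np.random.default_rng(1)

def zipf_popularity(n_files, theta):
    """Zipf popularity profile: p_i proportional to i^(-theta), i = 1..n_files."""
    w = np.arange(1, n_files + 1) ** (-theta)
    return w / w.sum()

def permute_pairs(p, n_pairs, rng):
    """Swap the popularities of n_pairs randomly chosen disjoint pairs,
    mimicking the popularity changes described in the setup."""
    p = p.copy()
    idx = rng.choice(len(p), size=2 * n_pairs, replace=False)
    for a, b in idx.reshape(-1, 2):
        p[a], p[b] = p[b], p[a]
    return p

p = zipf_popularity(100, theta=0.8)             # N = 100 files, Zipf parameter 0.8
p_changed = permute_pairs(p, n_pairs=2, rng=rng)  # a random popularity change
requests = rng.choice(100, size=1000, p=p_changed)  # one batch of file requests
```

Because the change only permutes entries, the popularity profile remains a valid distribution after every change, while the ranking of individual files shifts over time.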
The variation in the fetching cost for the proposed (deterministic) scheme is an artifact of choosing the \texttt{threshold}. For the deterministic variation case, it can be seen in \figref{fig:fetching_cost_vs_cachesize} that for certain cache sizes ($10, 20$ and $25$), the proposed scheme outperforms periodic caching in terms of offloading loss, while it performs poorly for other cache sizes. However, the fetching cost is lower than that of the periodic update scheme for all the cache sizes. This shows that in order to achieve a smaller offloading loss, it is better to update more frequently; while in other scenarios (cache size = $15$), it is possible to achieve both a lower offloading loss and a lower fetching cost. A smaller offloading loss can be achieved by lowering the \texttt{threshold} value at the expense of the fetching cost. The gain of the proposed scheme depends on how frequently the popularity profile changes. For example, when the popularity profile changes slowly, the gain is small; but the frequency of updates will also be lower in the proposed scheme. \section{Concluding remarks}\label{sec:conclusion} A learning-theoretic analysis of content caching in heterogeneous networks with non-stationary, statistically dependent and unknown popularity profiles has been considered. A PAC result on the offloading loss is presented in \thrmref{thm:main_res2}, based on the following caching algorithm: At every slot $t$, the BS computes an estimate of the Rademacher complexity and the discrepancy based on the available requests. The optimal caching policy is employed at the BS based on these estimates, and the cache content items at the sBSs are updated only if the discrepancy in the popularity profile is larger than a pre-specified threshold (to be determined based on the error tolerance). A detailed analysis of this algorithm is relegated to future work. We also presented the performance analyses for the Bernoulli and Poisson request models. 
\appendices \vspace{-0.2in} \section{Proof of \thrmref{thm:machine_learning}}\label{app:proof_ml} First, we let $\hat{\mathcal{T}}^* \triangleq \mathcal{T}(\hat{\Pi}_t^*,{\mathcal{P}}^{(t+T)})$, $\hat{\mathcal{T}} \triangleq \mathcal{T}({\Pi},\hat{\mathcal{P}}^{(t)})$. Now consider the term $\hat{\mathcal{T}}^* - \inf_{\Pi} \mathcal{T}(\Pi,{\mathcal{P}}^{(t+T)})$. We can write \begin{eqnarray} \hat{\mathcal{T}}^* - \inf_{\Pi} \mathcal{T}(\Pi,{\mathcal{P}}^{(t+T)}) &=& \hat{\mathcal{T}}^* - \hat{\mathcal{T}} + \hat{\mathcal{T}} -\inf_{\Pi} \mathcal{T}(\Pi,{\mathcal{P}}^{(t+T)}) \nonumber\\ &\leq& \hat{\mathcal{T}}^* - \hat{\mathcal{T}} + \sup_\Pi\mathcal{T}({\Pi},\hat{\mathcal{P}}^{(t)}) -\inf_{\Pi} \mathcal{T}(\Pi,{\mathcal{P}}^{(t+T)}) \nonumber\\ &\leq& \hat{\mathcal{T}}^* - \hat{\mathcal{T}} + \sup_\Pi (\mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)}) - \mathcal{T}(\Pi,{\mathcal{P}}^{(t+T)})) \nonumber\\ &\leq& \hat{\mathcal{T}}^* - \hat{\mathcal{T}} + \sup_\Pi \abs{\mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)}) - \mathcal{T}(\Pi,{\mathcal{P}}^{(t+T)})} \nonumber\\ &\leq& \mathcal{T}({\hat{\Pi}}_t^*, \mathcal{P}^{(t+T)}) - \inf_{\Pi} \mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)}) + \sup_\Pi \abs{\mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)}) - \mathcal{T}(\Pi,{\mathcal{P}^{(t+T)}})} \nonumber\\ &\leq& \sup_\Pi \mathcal{T}({{\Pi}}, \mathcal{P}^{(t+T)}) - \inf_{\Pi} \mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)}) + \sup_\Pi \abs{\mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)}) - \mathcal{T}(\Pi,{\mathcal{P}}^{(t+T)})} \nonumber\\ &\leq& \sup_\Pi \abs{\mathcal{T}({\Pi},\mathcal{P}^{(t+T)}) - \mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)})} + \sup_\Pi \abs{\mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)}) - \mathcal{T}(\Pi,{\mathcal{P}}^{(t+T)})}. \nonumber\\ &\leq& 2\sup_\Pi \abs{ \mathcal{T}({{\Pi}}, \mathcal{P}^{(t+T)}) - \mathcal{T}( \Pi,\hat{\mathcal{P}}^{(t)})}, \end{eqnarray} where all the inequalities above are self evident. 
\section{Proof of \thrmref{thm:main_res1}} \label{app:main_res1_proof} Consider the following: \begin{eqnarray} \nonumber \mathcal{A}_T(X_{1,t}) &\stackrel{(a)}{\leq}& \sup_{\Pi \in \mathcal{P}} \left\vert\frac{\abs{\mathbb{T}^{(t)}_e}}{r_t}\sum_{i=1}^N g(\pi_i) \left(\hat{p}^e_{i,t} - p_{i,t+T}\right)\right \vert + \sup_{\Pi \in \mathcal{P}} \left \vert \frac{{\abs{\mathbb{T}^{(t)}_o}}}{r_t}\sum_{i=1}^N g(\pi_i) \left(\hat{p}^o_{i,t} - p_{i,t+T}\right) \right\vert \\ &\stackrel{(b)}{\leq}& \frac{\abs{\mathbb{T}^{(t)}_e}}{r_t} \mathcal{A}_T(X^e_{1,t}) + \frac{\abs{\mathbb{T}^{(t)}_o}}{r_t} \mathcal{A}_T(X^o_{1,t}), \label{eq:At} \end{eqnarray} where $\hat{p}^h_{i,t} \triangleq \frac{1}{\abs{\mathbb{T}^{(t)}_h}} \sum_{s \in \mathbb{T}^{(t)}_h} \mathds{1}\{X(s) = i\}$, $h \in \{e,o\}$, and $\mathcal{A}_T(X^{(h)}_{1,t}) \triangleq \sup_{\Pi \in \mathcal{P}} \left\vert \sum_{i=1}^N g(\pi_i) \left(\hat{p}^h_{i,t} - p_{i,t+T}\right)\right \vert$. In \eqref{eq:At}, $(a)$ follows from algebraic manipulation and the triangle inequality, and $(b)$ follows from the convexity property. Using \eqref{eq:At}, and the union bound, we can write \begin{eqnarray} \nonumber \Pr \left\{ \mathcal{A}_{T}(X_{1,t}) > \epsilon | n_u = j\right\} &\leq& \Pr \left\{ \frac{\abs{\mathbb{T}^{(t)}_e}}{r_t} \mathcal{A}_T^e(X_{1,t}) + \frac{\abs{\mathbb{T}^{(t)}_o}}{r_t} \mathcal{A}_T^o(X_{1,t}) > \epsilon | n_u = j\right\} \\ \nonumber &\stackrel{(a)}{\leq}& \Pr \{ \mathcal{A}_T(X^e_{1,t}) > \epsilon | n_u = j\} + \Pr \{ \mathcal{A}_T(X^o_{1,t}) > \epsilon | n_u = j\}, \label{eq:evenodd_decouple} \end{eqnarray} where $(a)$ follows from the union bound. We now bound the term corresponding to the even samples. (The bound on the term corresponding to the odd samples can be obtained similarly, and is not shown here for sake of brevity). We begin with $\Pr \{ \mathcal{A}_T(X^e_{1,t}) > \epsilon | n_u = j\} \!\! = \!\! \mathbb{E}[ \mathds{1}\{\mathcal{A}_T(X^e_{1,t}) > \epsilon\} {| n_u = j} ]$. 
Since the indicator function is bounded, using \cite[Proposition 1]{Kuznetsov2014}, we have the following upper bound: \begin{eqnarray} \nonumber \mathbb{E}[ \mathds{1}\{\mathcal{A}_T(X^e_{1,t}) > \epsilon\} {| n_u = j} ] &\leq& \mathbb{E}[ \mathds{1}\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t}) > \epsilon\} {| n_u = j}] + \sum_{i=2}^{m} \beta(a_{2i - 1}), \nonumber \\ &=& \Pr\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t}) > \epsilon{| n_u = j} \} + \sum_{i=2}^{m} \beta(a_{2i - 1}),\label{eq:odd_coupling} \end{eqnarray} where $\tilde{{X}}^e_{1,t}$ is defined in \secref{sec:mainresult_1}. Since the conditioning is on $\{n_u = j\}$, the time slot difference between adjacent even/odd block is deterministic, and the $\beta$-mixing is not conditioned on the event. Similarly, it can be shown that \begin{eqnarray} \mathbb{E}[ \mathds{1}\{\mathcal{A}_T(X^o_{1,t}) > \epsilon\} {| n_u = j} ] \leq \Pr\{ \mathcal{{A}}_T(\tilde{{X}}^o_{1,t}) > \epsilon{| n_u = j} \} + \sum_{j=1}^{m-1} \beta(a_{2j}), \label{eq:even_coupling} \end{eqnarray} where $\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})$ (resp. $\mathcal{{A}}_T(\tilde{{X}}^o_{1,t})$) is obtained by replacing each block of data in $X^e_{1,t}$ (resp. $X^o_{1,t}$) by $\tilde{X}^e_{1,t}$ (resp. $\tilde{X}^o_{1,t}$) in the definition of $\mathcal{{A}}_T(X^e_{1,t})$ (resp. $\mathcal{{A}}_T(X^o_{1,t})$). Using \eqref{eq:even_coupling} in \eqref{eq:evenodd_decouple}, we get \begin{eqnarray} \Pr \{ \mathcal{A}_{T}(X_{1,t}) > \epsilon | n_u = j\} \leq \!\!\!\! \sum_{h \in \{e,o\}} \Pr\{ \mathcal{{A}}_T(\tilde{{X}}^h_{1,t}) > \epsilon{| n_u = j} \}+ \sum_{j=2}^{2m-1} \beta(a_{j}). \label{eq:bound_evenodd} \end{eqnarray} Since each of the events involves sum of blocks of independent data, we employ McDiarmid's inequality to bound the probability in \eqref{eq:bound_evenodd}, as shown below. 
\begin{thrm} For any $\max \{\mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})], \mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^o_{1,t})]\} < \epsilon$, and $m > 0$, the following bound holds for all $j \geq 1$: \begin{eqnarray} \sum_{h \in \{e,o\}} \Pr\{\mathcal{{A}}_T(\tilde{{X}}^h_{1,t}) > \epsilon{| n_u = j}\} \leq 2\exp\left\{-2 m g_{t,N} \right\} + \sum_{i=1}^m\zeta_{a_i,j} \Pr\{n_u = j\}, \label{eq:upperbound_mcdiarmid} \end{eqnarray} where $g_{t,N} \triangleq \frac{R_0^2 a_{\texttt{min}}^2 \min\{\epsilon_{e}^2, \epsilon_{o}^2\} \alpha_{\texttt{min}}^2}{a_{\texttt{max}}^2 B^2\alpha_{\texttt{max}}^2 N^2}$, $a_{\texttt{min}} \triangleq \min_{1\leq i \leq 2m} a_i$, $a_{\texttt{max}} \triangleq \max_{1\leq i \leq 2m} a_i$, and $\epsilon_{h} \triangleq \epsilon - \mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^h_{1,t})]$, $h \in \{e,o\}$. \label{thm:mcdiarmid} \end{thrm} \begin{proof} Consider the term corresponding to the even blocks \begin{eqnarray} \label{eq:large_dev_app1} \Pr\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t}) > \epsilon \left \vert \right. n_u = j\right\} = \Pr\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t}) - \mathbb{E}\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})\right\} > \epsilon_e \left \vert \right. n_u = j\right\}, \end{eqnarray} where $\epsilon_e$ is as defined in the theorem. To apply McDiarmid's inequality, we let $\tilde{{X}}_{1,t}^e$ and $\hat{{X}}_{1,t}^e$ be independent sequences of even blocks that differ only in one block, say the $i$th block of length $a_i$. Let the distributions of $\tilde{{X}}_{1,t}^e$ and $\hat{{X}}_{1,t}^e$ be identical. Conditioned on $\{n_u = j\}$, let $s_{ik}$, $k=1,2,\ldots,a_i$ denote the number of requests in the $k$th slot of the $i$th block consisting of $a_i$ slots. 
Therefore, conditioned on $\{n_u = j\}$, we have \begin{eqnarray} \nonumber \sup_{\Pi \in \mathcal{P}} \left\vert\tilde{g}_{t,T}(\tilde{{X}}_{1,t}^e)\right\vert - \sup_{\Pi \in \mathcal{P}} \left\vert\hat{g}_{t,T}(\hat{{X}}_{1,t}^e)\right\vert &\stackrel{(a)}{\leq}& \sup_{\Pi \in \mathcal{P}} \biggl\vert\sum_{j=1}^N g(\pi_j) \biggl(\frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} \mathds{1}\{\tilde{X}(s) = j\} \biggr.\biggr. - \biggl.\biggl.\mathds{1}\{\hat{X}(s) = j\}\biggr)\biggr\vert \nonumber \\ &\stackrel{(b)}{\leq}& \sup_{1\leq j \leq N} g(\pi_j) \frac{N \sum_{k=1}^{a_i} s_{ik}}{\abs{\mathbb{T}^{(t)}_e}} \leq \frac{B N \sum_{k=1}^{a_i} s_{ik}}{R_0\abs{\mathbb{T}^{(t)}_e}}, \label{eq:bound_event} \end{eqnarray} where $(a)$ follows from the reverse triangle inequality, and $(b)$ follows from the fact that the two sequences $\tilde{{X}}_{1,t}^e$ and $\hat{{X}}_{1,t}^e$ differ only in the $i$th block, and the $i$th block can have at most $\sum_{k=1}^{a_i} s_{ik}$ requests. Further, $$\tilde{g}_{t,T}(\tilde{{X}}_{1,t}^e) \triangleq \sum_{i=1}^N g(\pi_i) \left(\frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e}\mathds{1}\{\tilde{X}(s) = i\} - p_{i,t+T}\right),$$ and $\hat{g}_{t,T}(\hat{{X}}_{1,t}^e) $ is defined in a similar fashion. Also, note that $\abs{\mathbb{T}^{(t)}_e} = \sum_{i=1}^{m} \sum_{k=1}^{a_i} s_{ik}$. Now, conditioned on the event that the number of requests in the $i$th block is bounded, i.e., $\mathcal{E}_{j} \triangleq \bigcap_{i=1}^m \left\{\alpha_{\texttt{min}} j a_i \leq r_i \leq \alpha_{\texttt{max}} j a_i \right\}$, we can write \eqref{eq:large_dev_app1} as \begin{eqnarray} \Pr\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t}) - \mathbb{E}\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})\right\} > \epsilon_e \left \vert \right. n_u = j\right\} &\leq& \Pr\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t}) - \mathbb{E}\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})\right\} > \epsilon_e \left \vert \right. 
\mathcal{E}_{j}, n_u = j\right\} \nonumber \\ &\times& \Pr \{\mathcal{E}_{j} \left \vert \right. n_u=j\} + \Pr\{\mathcal{E}_{j}^c \left \vert \right. n_u = j\}, \nonumber\\ &\leq& \Pr\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t}) - \mathbb{E}\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})\right\} > \epsilon_e \left \vert \right.\mathcal{E}_{j}, n_u = j\right\} \nonumber \\ && + \sum_{i=1}^m\zeta_{a_i,j}, \end{eqnarray} where the last inequality above follows from the union bound and \defref{def:rademachercomplexity}. Using \eqref{eq:bound_event}, and the fact that the event $\mathcal{E}_{j}$ occurs, we have \begin{eqnarray} \frac{B^2 N^2 \sum_{i=1}^m\left(\sum_{k=1}^{a_i} s_{ik}\right)^2}{R_0^2\abs{\mathbb{T}^{(t)}_e}^2} \leq \frac{B^2N^2 m\left(\alpha_{\texttt{max}} j a_{\texttt{max}}\right)^2}{R_0^2\left(\alpha_{\texttt{min}} j a_{\texttt{min}} m\right)^2} = \frac{B^2 N^2 \alpha_{\texttt{max}}^2 a_{\texttt{max}}^2}{R_0^2 \alpha_{\texttt{min}}^2 a_{\texttt{min}}^2 m}, \end{eqnarray} where $a_{\texttt{min}} \triangleq \min_{1\leq i \leq 2m} a_i$ and $a_{\texttt{max}} \triangleq \max_{1 \leq i \leq 2m} a_i $. Using this boundedness property along with McDiarmid's inequality, we have \begin{eqnarray} \nonumber \Pr\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t}) - \mathbb{E}\left\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})\right\} > \epsilon_e \left \vert \right.\mathcal{E}_{j}, n_u = j\right\}\leq \exp\left\{-\frac{2a_{\texttt{min}}^2 R_0^2\alpha_{\texttt{min}}^2 m} {\epsilon_e^2 B^2 N^2 a_{\texttt{max}}^2 \alpha_{\texttt{max}}^2}\right\} + \sum_{i=1}^m\zeta_{a_i,j}. \label{eq:even_mcdiarmid_ineq} \end{eqnarray} Similarly, $$\Pr\left\{\mathcal{{A}}_T(\tilde{{X}}^o_{1,t}) - \mathbb{E}\left\{\mathcal{{A}}_T(\tilde{{X}}^o_{1,t})\right\} > \epsilon_o \left \vert \right. 
\mathcal{E}_{j}, n_u = j\right\} \leq \exp\left\{-\frac{2R_0^2 a_{\texttt{min}}^2\alpha_{\texttt{min}}^2 m} {\epsilon_o^2 B^2 N^2 a_{\texttt{max}}^2\alpha_{\texttt{max}}^2}\right\}+ \sum_{i=1}^m\zeta_{a_i,j}.$$ Combining these two, we get the desired result, which completes the proof of \thrmref{thm:mcdiarmid} and hence \thrmref{thm:main_res1}. \end{proof} The bound in \eqref{eq:upperbound_mcdiarmid} is independent of $j$. From \eqref{eq:upperbound_mcdiarmid}, \eqref{eq:bound_evenodd}, and using the result in \eqref{eq:first_bound}, we get \begin{eqnarray} \Pr\left\{\mathcal{A}_{T}(X_{1,t}) > \epsilon \right\} \leq \exp\left\{-\lambda_u \pi R^2\right\} + \exp\left\{-\psi m \right\} + \sum_{i=2}^{2m-1} \beta(a_i) + e^{-\lambda_u}\sum_{j=1}^\infty\sum_{i=1}^m\zeta_{a_i,j} \frac{\lambda_u^j}{j!}, \end{eqnarray} where $\psi \triangleq \frac{2 a_{\texttt{max}}^2 \min\{\epsilon_{e}^2, \epsilon_{o}^2\} R_0^2\alpha_{\texttt{min}}^2}{a_{\texttt{min}}^2 \alpha_{\texttt{max}}^2 N^2 B^2}$. We need $\Pr\left\{\mathcal{A}_{T}(X_{1,t}) > \epsilon \right\} < \delta/2$, which implies that \begin{eqnarray} \min\{\epsilon_{e}, \epsilon_{o}\} > \frac{N B a_{\texttt{max}}\alpha_{\texttt{max}}}{a_{\texttt{min}} R_0 \alpha_{\texttt{min}}} \sqrt{\frac{\log \left(\frac{2}{\delta^{'}}\right)}{2 m}}, \label{eq:thrm5eq1} \end{eqnarray} where \begin{equation} \delta^{'} \triangleq \delta/2 - \exp\left\{-\lambda_u \pi R^2\right\} - \sum_{i=2}^{2m-1} \beta(a_i) - e^{-\lambda_u}\sum_{j=1}^\infty\sum_{i=1}^m\zeta_{a_i,j} \frac{\lambda_u^j}{j!} > 0. \end{equation} But, $\epsilon_h = \epsilon - \mathbb{E}\left[\mathcal{{A}}_T(\tilde{{X}}^h_{1,t})\right]$, $h \in \{e,o\}$. 
Using this in \eqref{eq:thrm5eq1} results in the following constraint: \begin{eqnarray} \epsilon > \mathcal{E}_{t,T} + \frac{N B a_{\texttt{max}}\alpha_{\texttt{max}}}{R_0 a_{\texttt{min}} \alpha_{\texttt{min}}} \sqrt{\frac{\log \left(\frac{2}{\delta^{'}}\right)}{2 m}}, \label{eq:epsilon1} \end{eqnarray} where $\mathcal{E}_{t,T} \triangleq \max\left\{\mathbb{E}\left[\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})\right], \mathbb{E}\left[\mathcal{{A}}_T(\tilde{{X}}^o_{1,t})\right]\right\}$. With probability of at least $(1-\delta)$, $\hat{\mathcal{T}}{^\ast}(t + T) - \mathcal{T}{^\ast}(t + T) < \epsilon$ implies the bound in the theorem after substituting for $\epsilon$ in \eqref{eq:epsilon1}. \section{Proof of \thrmref{thm:main_res2}} \label{app:mainres2proof} We only consider the term $\mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})]$, since an upper bound on the other term follows similarly. As before, let $\hat{p}^e_{i,t} \triangleq \frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} \mathds{1}\{\tilde X(s) = i\}$. 
Then, \begin{eqnarray} \mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})] &=& \mathbb{E} \left[\sup_{\Pi \in \mathcal{P}} \sum_{i=1}^N g(\pi_i) (\hat{p}^e_{i,t} - p_{i,t+T}) \right]\nonumber\\ &=& \mathbb{E} \left[\sup_{\Pi \in \mathcal{P}} \sum_{i=1}^N g(\pi_i) \left(\hat{p}^e_{i,t} - \frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} p_{i,s} + \frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} p_{i,s} - p_{i,t+T}\right)\right ] \nonumber \\ &\stackrel{(a)}{\leq}& \mathbb{E} \left[\sup_{\Pi \in \mathcal{P}}\sum_{i=1}^N g(\pi_i) \left(\hat{p}^e_{i,t} - \frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} p_{i,s} \right ) + \Delta_{t,T}^{(e)}\right], \label{eq:rademacher1} \end{eqnarray} where $\Delta_{t,T}^{(e)} \triangleq \mathbb{E} \sup_{\Pi \in \mathcal{P}} \sum_{i=1}^N g(\pi_i) d^{(e)}_i(t+T)$, $d^{(e)}_i(t,T) \triangleq \frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} \left \vert p_{i,s} - p_{i,t+T} \right \vert$, and $(a)$ follows from the triangular inequality. Let us consider a sequence of RVs $\bar{{X}}_{1,t}$ independent of $\tilde{{X}}_{1,t}$, but with the same distribution. Thus, $p_{i,s} = \mathbb{E}[\mathds{1}\{\bar{{X}}_{1,t}(s) = i\}]$ $\forall$ $i$, where $\bar{{X}}_{1,t}(s)$ is the $s$th component of $\bar{{X}}_{1,t}$. 
Substituting the values of $p_{i,s}$ and $\hat{p}_{i,t}^e$, the first term in \eqref{eq:rademacher1} becomes \begin{eqnarray} \nonumber \mathbb{E}_{\tilde X}\left[ \sup_{\Pi \in \mathcal{P}}\sum_{i=1}^N g(\pi_i) \left(\hat{p}^e_{i,t} - \frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} p_{i,s} \right)\right] &=& \mathbb{E}_{\tilde X} \left[\sup_{\Pi \in \mathcal{P}}\sum_{i=1}^N g(\pi_i) \left(\frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e}\Delta_E X_{i,s,t}\right) \right] \nonumber \\ &\stackrel{(a)}{\leq}& \mathbb{E}_{\tilde X, \hat X}\left[\sup_{\Pi \in \mathcal{P}}\sum_{i=1}^N g(\pi_i) \left(\frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e}\Delta X_{i,s,t}\right)\right] \nonumber \\ \nonumber &\stackrel{(b)}{\leq}& \mathbb{E}_{\tilde X, \hat X, \bm{\sigma}}\left[\sup_{\Pi \in \mathcal{P}}\sum_{i=1}^N g(\pi_i) \left(\frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} \sigma_{i,s} \Delta X_{i,s,t}\right)\right] \\ \nonumber &\leq& \mathbb{E}_{\tilde X, \bm{\sigma}}\left[\sup_{\Pi \in \mathcal{P}}\sum_{i=1}^N g(\pi_i) \left(\frac{1}{\abs{\mathbb{T}^{(t)}_e}} \sum_{s \in \mathbb{T}^{(t)}_e} \sigma_{i,s} \mathds{1}\{\tilde X(s) = i\} \right)\right], \\ \label{eq:app_mainres2proof} \end{eqnarray} where $\Delta_E X_{i,s,t} \triangleq \mathds{1}\{\tilde X(s) = i\} - \mathbb{E} [\mathds{1}\left\{\bar{{X}}_{1,t}(s) = i\right\}]$, and $\Delta X_{i,s,t} \triangleq \mathds{1}\{\tilde X(s) = i\} - \mathds{1}\left\{\bar{{X}}_{1,t}(s) = i\right\}$. In \eqref{eq:app_mainres2proof}, $(a)$ follows from the convexity property, and $(b)$ follows from the fact that $\Delta X_{i,s,t}$ and $\sigma_{i,s} \Delta X_{i,s,t}$ have the same distribution, where the Rademacher RVs $\sigma_{i,s} \in \{-1,1\}$ are {i.i.d.} with probability $1/2$ each. We also have $\bm{\sigma} \triangleq \{\sigma_{i,s}: 1 \leq i \leq N, s \in \mathbb{T}^{(t)}_e\}$. 
Using \defref{def:rademachercomplexity}, we have $\mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})] \leq \mathcal{R}^{(t)}_e + \Delta_{t,T}^{(e)}$. Similar analysis holds for the odd term leading to $\mathbb{E}[\mathcal{{A}}_T(\tilde{{X}}^o_{1,t})] \leq \mathcal{R}^{(t)}_o + \Delta_{t,T}^{(o)}$, where $\mathcal{R}^{(t)}_o$ and $\Delta_{t,T}^{(o)}$ are defined similarly to $\mathcal{R}^{(t)}_e$ and $\Delta_{t,T}^{(e)}$, respectively. Using these, we get $\max\left\{\mathbb{E}\{\mathcal{{A}}_T(\tilde{{X}}^e_{1,t})\}, \mathbb{E}\{\mathcal{{A}}_T(\tilde{{X}}^o_{1,t})\}\right\} \leq \max\{\mathcal{R}^{(t)}_e, \mathcal{R}^{(t)}_o\} + \max\{\Delta_{t,T}^{(e)}, \Delta_{t,T}^{(o)}\}$. Finally, note that $t = \sum_{i=1}^{2m} a_i \leq 2m \max_{1\leq i \leq 2m} a_i$, which implies $m \geq \frac{t}{2\max_{1\leq i \leq 2m} a_i}$. Using these results in \thrmref{thm:main_res1}, we get the desired result. This completes the proof of \thrmref{thm:main_res2}. $\blacksquare$ \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec0} \par In the paper we characterize Pilipovi{\'c} spaces of the form $\mathcal H _{\flat _\sigma}(\rr d)$ and $\mathcal H _{0,\flat _\sigma} (\rr d)$, considered in \cite{FeGaTo1,Toft14}, in terms of estimates of powers of the harmonic oscillator, on the involved functions. \par The set of Pilipovi{\'c} spaces is a family of Fourier invariant spaces, containing any Fourier invariant (standard) Gelfand-Shilov space. The (standard) Pilipovi{\'c} spaces $\mathcal H_s(\rr d)$ and $\mathcal H_{0,s}(\rr d)$ with respect to $s\in \mathbf R_+$, are the sets of all formal Hermite series expansions \begin{equation}\label{Eq:fHermite} f(x) = \sum _{\alpha \in \nn d}c_\alpha (f)h_\alpha (x) \end{equation} such that \begin{equation}\label{Eq:Cond.} |c_\alpha(f)| \lesssim e^{-r|\alpha |^{\frac 1{2s}}} \end{equation} holds true for some $r>0$ respectively for every $r>0$. (See \cite{Ho} and Section 1 for notations.) Evidently, $\mathcal H_s(\rr d)$ and $\mathcal H_{0,s}(\rr d)$ increase with $s$. It is proved in \cite{Pil2} that if $\mathcal S_s(\rr d)$ and $\Sigma_s(\rr d)$ are the Gelfand-Shilov spaces of Roumieu and Beurling types, respectively, of order $s$, then \begin{alignat}{2} \mathcal H_s(\rr d) &= \mathcal S_s(\rr d),& \quad s &\ge \frac 12, \label{Eq:Cond2} \\[1ex] \mathcal H_{0,s}(\rr d) &= \Sigma_s(\rr d),& \quad s &> \frac 12, \label{Eq:Cond3} \end{alignat} and $$ \mathcal H_{0,s}(\rr d)\neq \Sigma_s(\rr d)=\{0\},\quad s=\frac 12. $$ \par It is also well-known that $\mathcal S_s(\rr d)=\{0\}$ when $s<\frac 12$ and $\Sigma_s(\rr d)=\{0\}$ when $s\le\frac 12$. These relationships are completed in \cite{Toft14} by the relations \begin{alignat*}{2} \mathcal H_s(\rr d) &\neq \mathcal S_s(\rr d) = \{0\},& \quad s&<\frac 12 \intertext{and} \mathcal H_{0,s}(\rr d) &\neq \Sigma_s(\rr d)=\{0\},& \quad s &\le\frac 12. \end{alignat*} In particular, each Pilipovi\'c space is contained in the Schwartz space $\mathscr S (\rr d)$. 
\par For $\mathcal H_s(\rr d)$ ($\mathcal H_{0,s}(\rr d)$) we also have the characterizations \begin{equation}\label{Eq:CharPowerHarmOsc} f\in \mathcal H_s(\rr d)\quad (f\in \mathcal H_{0,s}(\rr d))\quad \Leftrightarrow \quad \norm {H^{N}_{d} f}_{L^\infty}\lesssim r^N N!^{2s} \end{equation} for some $r>0$ (for every $r>0$) concerning estimates of powers of the harmonic oscillator $$ H_d=|x|^2-\Delta _x,\qquad x\in \rr d, $$ acting on the involved functions. These relations were obtained in \cite{Pil2} for $s\ge \frac 12$, and in \cite{Toft14} in the general case $s>0$. \par In \cite{FeGaTo1,Toft14} characterizations of $\mathcal H_s(\rr d)$ and $\mathcal H_{0,s}(\rr d)$ were also obtained by certain spaces of analytic functions on $\cc d$, via the Bargmann transform. From these mapping properties it follows that near $s=\frac 12$ there is a jump concerning these Bargmann images. More precisely, if $s=\frac 12$, then the Bargmann image of $\mathcal H _{s}(\rr d)$ (of $\mathcal H _{0,s}(\rr d)$) is the set of all entire functions $F$ on $\cc d$ such that $F$ obeys the condition \begin{equation}\label{Eq:AnalFEst1} |F(z)|\lesssim e^{(\frac 12-r)|z|^2} \qquad (\, |F(z)|\lesssim e^{r|z|^2}\, ) \end{equation} for some $r>0$ (for every $r>0$). For $s<\frac 12$, this estimate is replaced by \begin{equation}\label{Eq:AnalFEst2} |F(z)| \lesssim e^{r(\log (1+|z|))^{\frac 1{1-2s}}} \end{equation} for some $r>0$ (for every $r>0$), which is indeed a stronger condition compared to the case $s=\frac 12$. \par An important motivation for considering the spaces $\mathcal H_{\flat_\sigma}(\rr d)$ and $\mathcal H_{0,\flat_\sigma}(\rr d)$ is to make this gap smaller. 
More precisely, $\mathcal H_{\flat_\sigma}(\rr d)$ and $\mathcal H_{0,\flat_\sigma}(\rr d)$, which are Pilipovi\'c spaces of Roumieu and Beurling types, respectively, form a family of function spaces which increases with $\sigma$ and is such that $$ \mathcal H _{s_1}(\rr d) \subseteq \mathcal H_{0,\flat_\sigma}(\rr d) \subseteq \mathcal H_{\flat_\sigma}(\rr d) \subseteq \mathcal H _{0,s_2}(\rr d), \qquad s_1<\frac 12,\ s_2\ge \frac 12. $$ The spaces $\mathcal H_{\flat_\sigma}(\rr d)$ and $\mathcal H_{0,\flat_\sigma}(\rr d)$ consist of all formal Hermite series expansions \eqref{Eq:fHermite} such that \begin{equation} \label{Eq:Cond.3} |c_\alpha (f)|\lesssim r^{|\alpha|}\alpha!^{-\frac 1{2\sigma}} \end{equation} holds true for some $r>0$ respectively for every $r>0$. For the Bargmann images of $\mathcal H_{\flat_\sigma}(\rr d)$ and $\mathcal H_{0,\flat_\sigma}(\rr d)$, the conditions \eqref{Eq:AnalFEst1} and \eqref{Eq:AnalFEst2} above are replaced by $$ |F(z)|\lesssim e^{r|z|^{\frac {2\sigma}{\sigma +1}}}, $$ for some $r>0$ respectively for every $r>0$. It follows that the gaps of the Bargmann images of $\mathcal H _s(\rr d)$ and $\mathcal H _{0,s}(\rr d)$ between the cases $s<\frac 12$ and $s\ge \frac 12$ are drastically decreased by including the spaces $\mathcal H_{\flat_\sigma}(\rr d)$ and $\mathcal H_{0,\flat_\sigma}(\rr d)$, $\sigma >0$, in the family of Pilipovi{\'c} spaces. \par In \cite{FeGaTo1}, characterizations of $\mathcal H_{\flat_1}(\rr d)$ and $\mathcal H_{0,\flat_1}(\rr d)$ in terms of estimates of powers of the harmonic oscillator acting on the involved functions which correspond to \eqref{Eq:CharPowerHarmOsc} are deduced. On the other hand, apart from the case $\sigma =1$, it seems that no such characterizations for $\mathcal H_{\flat_\sigma}(\rr d)$ and $\mathcal H_{0,\flat_\sigma}(\rr d)$ have been obtained so far. \par In Section \ref{sec2} we fill this gap in the theory, and deduce such characterizations. 
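The inclusions above can be illustrated numerically by comparing the logarithms of the coefficient bounds in \eqref{Eq:Cond.} and \eqref{Eq:Cond.3} in dimension $d=1$, where $\alpha ! = n!$. The Python sketch below (with the illustrative choice $r=1$; not part of the theory) shows that at a large index $n$ the admissible decay for $s=0.4<\frac 12$ is strictly faster than the $\flat _1$ decay, which in turn is faster than the decay for $s=\frac 12$:

```python
import math

def log_bound_Hs(n, s, r=1.0):
    """log of e^{-r n^{1/(2s)}}: decay rate in the standard Pilipovic scale."""
    return -r * n ** (1.0 / (2.0 * s))

def log_bound_flat(n, sigma, r=1.0):
    """log of r^n (n!)^{-1/(2 sigma)}: decay rate in the flat_sigma scale."""
    return n * math.log(r) - math.lgamma(n + 1) / (2.0 * sigma)

n = 200
fast = log_bound_Hs(n, s=0.4)      # coefficients of H_s with s < 1/2
mid = log_bound_flat(n, sigma=1)   # coefficients of H_{flat_1}
slow = log_bound_Hs(n, s=0.5)      # coefficients of H_{1/2} = S_{1/2}
```

Since a smaller logarithmic bound means faster coefficient decay, the ordering `fast < mid < slow` mirrors the chain of inclusions displayed above.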
In particular, as a consequence of our main result, Theorem \ref{Thm:Mainthm1} in Section \ref{sec2}, we have \begin{gather*} f\in \mathcal H _{\flat _\sigma }(\rr d) \quad (f\in \mathcal H _{0,\flat _\sigma }(\rr d)) \\[1ex] \Leftrightarrow \\[1ex] \norm {H_d^N f}_{L^\infty} \lesssim 2^N r^{\frac{N}{\log (N\sigma )}} \left ( \frac{2N\sigma}{\log (N\sigma )} \right )^{N ( 1-\frac{1}{\log (N\sigma )} ) } \end{gather*} for some (every) $r > 0$. By choosing $\sigma =1$ we regain the corresponding characterizations in \cite{FeGaTo1} for $\mathcal H_{\flat_1}(\rr d)$ and $\mathcal H_{0,\flat_1}(\rr d)$. \par \section{Preliminaries}\label{sec1} \par In this section we recall some facts about Gelfand-Shilov spaces and Pilipovi{\'c} spaces. \par Let $s>0$. Then the (Fourier invariant) Gelfand-Shilov spaces $\mathcal S _s(\rr d)$ and $\Sigma _s(\rr d)$ of Roumieu and Beurling types, respectively, consist of all $f\in C^\infty (\rr d)$ such that \begin{equation}\label{Eq:GSNorm} \nm f{\mathcal S _{s;r}}\equiv \sup _{\alpha ,\beta \in \nn d} \left ( \frac {\nm {x^\alpha D^\beta f}{L^\infty (\rr d)}} {r^{|\alpha +\beta|}(\alpha !\beta !)^s} \right ) \end{equation} is finite, for some $r>0$ respectively for every $r>0$. The topologies of $\mathcal S _s(\rr d)$ and $\Sigma _s (\rr d)$ are the inductive limit topology and the projective limit topology, respectively, supplied by the norms \eqref{Eq:GSNorm}. \par For $\mathcal H _s(\rr d)$ and $\mathcal H _{0,s}(\rr d)$ we consider the norms \begin{alignat*}{3} \nm f{\mathcal H _{s;r}} &\equiv \sup\limits_{\alpha\in \nn d} (|c_\alpha(f)|e^{r|\alpha|^{\frac 1{2s}}}) & \quad &\text{when} & \quad s &\in \mathbf R_+ \intertext{and} \nm f{\mathcal H _{s;r}} &\equiv \sup \limits _{\alpha\in \nn d} \left( |c_\alpha(f)| r^{|\alpha|}\alpha!^{\frac1{2\sigma}} \right) & \quad &\text{when} & \quad s &= \flat_\sigma. 
\end{alignat*} By extending $\mathbf R_+$ into $\mathbf R_\flat\equiv\mathbf R_+ \cup \{\flat_\sigma\}_{\sigma>0}$ and letting $$ s_1< \flat_{\sigma _1} < \flat_{\sigma _2} <s_2 \quad \text{when}\quad s_2 \ge \frac 12, \ s_1<\frac 12\ \text{and}\ \sigma _1<\sigma _2, $$ we have $$ \mathcal H _{s_1}(\rr d)\subseteq \mathcal H _{0,s_2}(\rr d) \subseteq \mathcal H _{s_2}(\rr d),\quad s_1,s_2\in\mathbf R_\flat\; \text{and}\; s_1<s_2. $$ Let $r>0$ be fixed. Then the set $\mathcal H _{s;r}(\mathbf R^d)$ consists of all $f\in C^\infty (\rr d)$ such that $\nm f{\mathcal H _{s;r}}$ is finite. It follows that $\mathcal H _{s;r}(\mathbf R^d)$ is a Banach space. \par The Pilipovi{\'c} spaces $\mathcal H _s(\mathbf R^d)$ and $\mathcal H _{0,s}(\mathbf R^d)$ are the inductive limit and the projective limit, respectively, of $\mathcal H _{s;r}(\mathbf R^d)$ with respect to $r>0$. In particular, $$ \mathcal H _s(\mathbf R^d)=\bigcup_{r>0}\mathcal H _{s;r}(\mathbf R^d)\quad \text{and} \quad \mathcal H _{0,s}(\mathbf R^d)=\bigcap_{r>0}\mathcal H _{s;r}(\mathbf R^d) $$ and it follows that $\mathcal H _s(\mathbf R^d)$ is complete, and that $\mathcal H _{0,s}(\mathbf R^d)$ is a Fr\'echet space. It is well-known that the identities \eqref{Eq:Cond2} and \eqref{Eq:Cond3} also hold in topological sense (cf. \cite{Pil2}). \medspace We also need some facts about weight functions. A \emph{weight} on $\rr d$ is a function $\omega \in L^\infty _{loc}(\rr d)$ such that $\omega (x)>0$ for every $x\in \rr d$ and $1/\omega \in L^\infty _{loc}(\rr d)$. The weight $\omega$ on $\rr d$ is called moderate of polynomial type, if there is an integer $N\ge 0$ such that $$ \omega (x+y)\lesssim \omega (x)(1+|y|)^N,\qquad x,y\in \rr d . $$ The set of moderate weights of polynomial types on $\rr d$ is denoted by $\mathscr P (\rr d)$. 
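\par A basic example, included here for orientation: for every $t\in \mathbf R$, the weight $\omega (x)=(1+|x|^2)^{\frac t2}$ belongs to $\mathscr P (\rr d)$, with $N$ equal to the smallest integer larger than or equal to $|t|$. In fact, this follows from the elementary (Peetre-type) inequality $$ 1+|x+y|^2\le 2(1+|x|^2)(1+|y|^2), \qquad x,y\in \rr d, $$ which gives $\omega (x+y)\lesssim \omega (x)(1+|y|)^{t}$ when $t\ge 0$, and the case $t<0$ follows by applying the same inequality with $x+y$ and $-y$ in place of $x$ and $y$.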
\par \section{Characterizations of $\mathcal H _{\flat _\sigma}(\rr d)$ and $\mathcal H _{0,\flat _\sigma} (\rr d)$ in terms of powers of the harmonic oscillator}\label{sec2} \par In this section we deduce characterizations of the test function spaces $\mathcal H _{0,\flat _\sigma}(\rr d)$ and $\mathcal H _{\flat _\sigma}(\rr d)$. \par More precisely, we have the following. \par \begin{thm}\label{Thm:Mainthm1} Let $N\in \mathbf N$, $\sigma >0$, $p\in [1,\infty ]$ and let $f\in C^\infty (\rr d)$ be given by \eqref{Eq:fHermite}. Then the following conditions are equivalent: \begin{enumerate} \item $f\in \mathcal H _{\flat _\sigma}(\rr d)$ ($f\in \mathcal H _{0,\flat _\sigma}(\rr d)$); \vspace{0.1cm} \item for some $r > 0$ (for every $r > 0$) it holds $$ \left | c_\alpha (f)\right | \lesssim r^{|\alpha |}(\alpha !)^{-\frac{1}{2\sigma}}\text ; $$ \vspace{0.1cm} \item for some $r > 0$ (for every $r > 0$) it holds \begin{equation}\label{Eq:GFHarmCond} \norm {H_d^N f}_{L^p} \lesssim 2^N r^{\frac{N}{\log (N\sigma )}} \left ( \frac{2N\sigma}{\log (N\sigma )} \right )^{N ( 1-\frac{1}{\log (N\sigma )} ) }. \end{equation} \end{enumerate} \end{thm} \par First we need certain invariance properties concerning the norm condition \eqref{Eq:GFHarmCond}. More precisely, the following result links the conditions \begin{alignat}{2} \label{Eq:GFHarmCond1} \nm{H_d^Nf}{L^{p_0}} &\lesssim 2^N r^{\frac{N}{\log (N\sigma )}} \left ( \frac{2N\sigma}{\log (N\sigma )} \right )^{N ( 1-\frac{1}{\log (N\sigma )} ) }, & \quad N &\ge 0, \intertext{and} \label{Eq:GFHarmCond2} \nm {H_d^Nf}{M^{p,q}_{(\omega )}} &\lesssim 2^N r^{\frac{N}{\log (N\sigma )}} \left ( \frac{2N\sigma}{\log (N\sigma )} \right )^{N ( 1-\frac{1}{\log (N\sigma )} ) }, & \quad N &\ge N_0, \end{alignat} to each other, and shows in particular that the $L^p$ norm in \eqref{Eq:GFHarmCond} can be replaced by other types of Lebesgue or modulation space quasi-norms.
\par \begin{prop}\label{Prop:NormEquiv} Let $p_0\in [1,\infty ]$, $p,q\in (0,\infty ]$, $N_0\ge 0$ be an integer, $\sigma > 0$ and let $\omega \in \mathscr P (\rr {2d})$. Then the following conditions are equivalent: \begin{enumerate} \item \eqref{Eq:GFHarmCond1} holds for some $r>0$ (for every $r>0$); \vspace{0.1cm} \item \eqref{Eq:GFHarmCond2} holds for some $r>0$ (for every $r>0$). \end{enumerate} \end{prop} \par We need the following lemma for the proof of Proposition \ref{Prop:NormEquiv}. \par \begin{lemma}\label{Lemma:ExpressionLogExps} Let $R\ge 1$, $I = (0,R]$, \begin{align*} g(r,t_1,t_2) &\equiv \frac {r^{\frac {t_2}{\log t_2}}}{r^{\frac {t_1}{\log t_1}}} \quad \text{and}\quad h(t_1,t_2) \equiv \frac { \left ( \frac {2t_2}{\log t_2} \right ) ^{t_2(1-\frac 1{\log t_2})} } { \left ( \frac {2t_1}{\log t_1} \right ) ^{t_1(1-\frac 1{\log t_1})} }, \end{align*} when $t_1,t_2>e$ and $r>0$. Then \begin{equation}\label{Eq:aAndhEst} 0 \le g(r,t_1,t_2)\le C \quad \text{and} \quad 0\le h(t_1,t_2)\le C^{\frac{t_2}{\log t_2}} \end{equation} when $$ t_1,t_2>R+1,\ 0\le t_2-t_1 \le R, \ r\in I, $$ for some constant $C>0$ which only depends on $R$. \end{lemma} \par \begin{proof} Since $t\mapsto \frac t{\log t}$ is increasing when $t\ge e$, $g$ is upper bounded by one when $r\le 1$, and the boundedness of $g$ follows in this case. \par If $r\ge 1$, $t=t_1$, $u =t_2-t_1>0$ and $\rho =\log r$, then \begin{multline*} 0\le \log g(r,t_1,t_2) = \left ( \frac {t+u}{\log (t+u )}-\frac t{\log t} \right ) \rho \\[1ex] = \frac t{\log t} \left ( \frac {1+\frac u t}{1+\frac {\log (1+\frac u t)}{\log t}} -1 \right ) \rho = \frac t{\log t} \left ( \frac {\frac u t - \frac {\log (1+\frac u t)}{\log t}} {1+\frac {\log (1+\frac u t)}{\log t}} \right ) \rho \\[1ex] < \frac t{\log t} \cdot \frac u t \cdot \rho = \frac {u \rho}{\log t} \le C \end{multline*} for some constant $C$ which only depends on $R$, since $u\le R$, $\rho \le \log R$ and $t>R+1$. This shows the boundedness of $g$.
\par Next we show the estimates for $h(t_1,t_2)$ in \eqref{Eq:aAndhEst}. By taking the logarithm of $h(t_1,t_2)=h(t,t_2)$ we get $$ \log h(t,t_2)=t_2 \log \left ( \frac{2t_2}{\log t_2} \right ) -t \log \left ( \frac{2t}{\log t} \right ) - b(t,t_2), $$ where $$ b(t,t_2) = \left ( \frac{t_2}{\log t_2} \log \left ( \frac{2t_2}{\log t_2} \right ) -\frac t{\log t} \log \left ( \frac{2t}{\log t} \right ) \right ). $$ Since $b(t,t_2)>0$ when $t_2>t$, we get \begin{multline*} \log h(t_1,t_2)< t_2 \log \left ( \frac{2t_2}{\log t_2} \right ) -t \log \left ( \frac{2t}{\log t} \right ) \\[1ex] =(t+u ) \left ( \log \left ( \frac{2t}{\log t} \right ) +\log\left ( \frac{1+\frac{u}{t}} {1+\frac{\log (1+\frac{u}{t})}{\log t}} \right ) \right ) -t\log \left ( \frac{2t}{\log t} \right ) \\[1ex] \le u \log \left ( \frac{2t}{\log t} \right ) +t \log \left ( 1+\frac{u}{t} \right ) +C \\[1ex] \le u \log \left ( \frac{2t}{\log t} \right ) +u +C \lesssim \frac{t_2}{\log t_2}, \end{multline*} for some constant $C\ge 0$. \end{proof} \par \begin{proof}[Proof of Proposition \ref{Prop:NormEquiv}] First we prove that \eqref{Eq:GFHarmCond2} is independent of $N_0\ge 0$ when $p,q\ge 1$. Evidently, if \eqref{Eq:GFHarmCond2} is true for $N_0=0$, then it is true also for $N_0>0$. On the other hand, the map \begin{equation}\label{HarmonicOscModMap} H_d^N \, :\, M^{p,q}_{(v_N\omega )}(\rr d)\to M^{p,q}_{(\omega )}(\rr d), \qquad v_N(x,\xi )=(1+|x|^2+|\xi |^2)^N, \end{equation} and its inverse are continuous and bijective (cf. e.{\,}g. \cite[Theorem 3.10]{SiTo}). Hence, if $0\le N\le N_0$, $N_1=N_0-N\ge 0$ and \eqref{Eq:GFHarmCond2} holds for some $N_0\ge 0$, then $$ \nm {H_d^Nf}{M^{p,q}_{(\omega )}} \lesssim \nm {H_d^{N_0}f}{M^{p,q}_{(\omega /v_{N_1})}} \lesssim \nm {H_d^{N_0}f}{M^{p,q}_{(\omega )}}<\infty , $$ and \eqref{Eq:GFHarmCond2} holds for $N_0=0$. This implies that \eqref{Eq:GFHarmCond2} is independent of $N_0\ge 0$ when $p,q\ge 1$. 
\par Next we prove that (2) is independent of the choice of $\omega \in \mathscr P (\rr {2d})$. For every $\omega _1,\omega _2\in \mathscr P (\rr {2d})$, we may find an integer $N_0\ge 0$ such that $$ \frac 1{v_{N_0}}\lesssim \omega _1, \omega _2\lesssim v_{N_0}, $$ and then \begin{equation}\label{NormArrayCond} \nm f{M^{p,q}_{(1/v_{N_0})}}\lesssim \nm f{M^{p,q}_{(\omega _1)}}, \nm f{M^{p,q}_{(\omega _2)}} \lesssim \nm f{M^{p,q}_{(v_{N_0})}}. \end{equation} Hence the stated invariance follows if we prove that \eqref{Eq:GFHarmCond2} holds for $\omega =v_{N_0}$, if it is true for $\omega =1/v_{N_0}$. \par Therefore, assume that \eqref{Eq:GFHarmCond2} holds for $\omega =1/v_{N_0}$. Let $g_N=H_d^{N}f$, $u =2N_0\sigma$, $t=t_1=N\sigma$, $N_2=N+2N_0$ and $t_2=t_1+u = N_2\sigma$. If $N\ge 2N_0$, then the bijectivity of \eqref{HarmonicOscModMap} gives \begin{multline} \label{Eq:Comp} \frac{\nm {g_N}{M^{p,q}_{(v_{N_0})}}^\sigma} {2^{N\sigma} r^{\frac {N\sigma}{\log (N\sigma)}} \left ( \frac {2N\sigma}{\log (N\sigma)} \right ) ^{N\sigma(1-\frac 1{\log (N\sigma )})}} =\frac{\nm {g_N}{M^{p,q}_{(v_{N_0})}}^\sigma} {{2^t r^{\frac t{\log t}} \left ( \frac {2t}{\log t} \right ) ^{t(1-\frac 1{\log t})}}} \\[1ex] \lesssim \frac{\nm {g_{N+2N_0}}{M^{p,q}_{(1/v_{N_0})}}^\sigma} {{2^t r^{\frac t{\log t}} \left ( \frac {2t}{\log t} \right ) ^{t(1-\frac 1{\log t})}}} \\[1ex] = 2^{u} g(r,t_1,t_2)h(t_1,t_2) \cdot \frac{\nm {g_{N_2}}{M^{p,q}_{(1/v_{N_0})}}^\sigma} {2^{N_2\sigma}r^{\frac {t_2}{\log (t_2)}} \left ( \frac {2t_2}{\log t_2} \right ) ^{t_2(1-\frac 1{\log t_2})}}, \end{multline} where $g(r,t_1,t_2)$ and $h(t_1,t_2)$ are the same as in Lemma \ref{Lemma:ExpressionLogExps}. A combination of Lemma \ref{Lemma:ExpressionLogExps} and \eqref{Eq:Comp} shows that (2) is independent of $\omega \in \mathscr P (\rr {2d})$. 
For general $p,q>0$, the invariance of \eqref{Eq:GFHarmCond2} with respect to $\omega$, $p$ and $q$ is a consequence of the embeddings $$ M^\infty _{(v_N\omega )}(\rr d)\subseteq M^{p,q} _{(\omega )}(\rr d) \subseteq M^\infty _{(\omega )}(\rr d), \qquad N> d\left ( \frac 1p +\frac 1q \right ). $$ \par The equivalence between (1) and (2) follows from these invariance properties and the continuous embeddings $$ M^{p_0,q_1}\subseteq L^{p_0}\subseteq M^{p_0,q_2}, \qquad q_1=\min (p_0,p_0'),\quad q_2=\max (p_0,p_0'), $$ which can be found in e.{\,}g. \cite{Toft8}. \end{proof} \par \begin{prop}\label{Prop:CoeffGivesHarmPowEst} Let $f\in C^\infty (\rr d)$ and $\sigma >0$. If \begin{equation}\label{Eq:CoeffGivesHarmPowEst1} \nm {H_d^Nf}{L^2}\lesssim 2^Nr^{\frac N{\log (N\sigma)}} \left ( \frac {2N\sigma}{\log (N\sigma)} \right ) ^{N(1-\frac 1{\log (N\sigma )})},\quad N\in \mathbf N,\ N\sigma \ge e, \end{equation} for some $r>0$ (for every $r>0$), then \begin{equation}\label{Eq:CoeffGivesHarmPowEst2} |c_\alpha (f)| \lesssim r^{|\alpha|}|\alpha |^{-\frac {|\alpha |}{2\sigma}}, \quad \alpha \in \nn d, \end{equation} for some $r>0$ (for every $r>0$). \end{prop} \par \begin{prop}\label{Prop:HarmPowGivesCoeffEst} Let $f\in C^\infty (\rr d)$ and $\sigma >0$. If \eqref{Eq:CoeffGivesHarmPowEst2} holds for some $r>0$ (for every $r>0$), then \eqref{Eq:CoeffGivesHarmPowEst1} holds for some $r>0$ (for every $r>0$). \end{prop} \par For the proofs we need some preparatory lemmas. \par \begin{lemma}\label{Lemma:LogFuncEst} Let $\sigma>0$, $\sigma _0 \in [0,\sigma ]$ and let $F(r,t)=\left( \frac{2t}{\log t} \right ) ^{t( 1-\frac{1}{\log t} )} r^{\frac{t}{\log t}}$, when $r\geq 0$ and $t>\sigma (e+1)+e$. Then \begin{alignat}{2} F(r,t) &\leq F(r,t+\sigma _0),& \quad r &\in [1,\infty ), \label{Eq:LogFuncEst1} \intertext{and} F(r,t) &\leq F(r^{\frac{e-1}{e}},t+\sigma _0),& \quad r&\in (0,1].
\label{Eq:LogFuncEst2} \end{alignat} \end{lemma} \par \begin{proof} If $r\ge 1$, then it follows by straight-forward tests with derivatives that $F(r,t)$ is increasing with respect to $t>e+\sigma$. This gives \eqref{Eq:LogFuncEst1}. \par In order to prove \eqref{Eq:LogFuncEst2}, let $t_1=t+\sigma _0$ and $$ h(t_1,\sigma _0 ) = \frac{1-\frac{\sigma _0}{t_1}} {1+\frac{\log (1-\frac{\sigma _0}{t_1})}{\log t_1}}, $$ where $0\leq \sigma _0 \leq \sigma$. Then \begin{equation}\label{Eq:LogFractionEst} \left( \frac{2t}{\log t} \right)^{t(1-\frac{1}{\log t})} r^{\frac{t}{\log t}} \leq \left( \frac{2t_1}{\log t_1} \right)^{t_1(1-\frac{1}{\log t_1})} r^{\frac{t}{\log t}} \end{equation} and \begin{equation*} {\frac{t}{\log t}}= h(t_1,\sigma _0 ) \cdot \frac{t_1}{\log t_1}. \end{equation*} Since $$ 0\le \frac {\sigma _0}{t_1}\le \frac 1e \quad \text{and}\quad -1<\frac {\log (1-\frac {\sigma _0}{t_1})}{\log t_1}\le 1 $$ we get $$ h(t_1,\sigma _0 ) \ge 1-\frac {\sigma _0}{t_1}\ge 1-\frac 1e. $$ Hence the facts $\frac {t_1}{\log t_1} \ge 1$ and $0<r\le 1$ give $$ r^{\frac t{\log t}}= r^{h(t_1,\sigma _0 )\frac {t_1}{\log t_1}} \le r^{(1-\frac 1e)\frac {t_1}{\log t_1}}. $$ \par A combination of the latter inequality with \eqref{Eq:LogFractionEst} gives $$ F(r,t) \le \left( \frac{2t_1}{\log t_1} \right)^{t_1(1-\frac{1}{\log t_1})} \left ( r^{(1-\frac 1e)} \right ) ^{\frac{t_1}{\log t_1}} = F(r^{1-\frac 1e},t_1). \qedhere $$ \end{proof} \par \begin{lemma}\label{Lemma:CoeffGivesHarmPowEst} Let $\sigma >0$, $s\ge 10$, $$ \Omega _1 = [e,\infty )\cap (\sigma \cdot \mathbf N) \quad \text{and}\quad \Omega _2 = [e,\infty ). 
$$ Then the following is true: \begin{enumerate} \item for any $r_2>0$, there is an $r_1>0$ such that \begin{equation}\label{Eq:GeneralEstimate1} \inf _{t\in \Omega _j} \left ( s^{-t} \left ( \frac {2t}{\log t} \right ) ^{t(1-\frac 1{\log t})}r_1^{\frac t{\log t}} \right ) \lesssim r_2 ^ss^{-\frac s2},\quad j=1,2\text ; \end{equation} \vspace{0.1cm} \item for any $r_1>0$, there is an $r_2>0$ such that \eqref{Eq:GeneralEstimate1} holds. \end{enumerate} \end{lemma} \par \begin{proof} We first prove the result for $j=2$. Let $x=\log t$, $y=\log s$, $\rho _j=\log r_j$, $j=1,2$. By applying the logarithm to \eqref{Eq:GeneralEstimate1}, the statements (1) and (2) follow if we prove: \begin{enumerate} \item[(1)$'$] for any $\rho _2\in \mathbf R$, there is a $\rho _1\in \mathbf R$ such that \begin{equation}\label{Eq:GeneralEstimate2} \inf _{x\ge 1} F(x) \le C \end{equation} for some constant $C$, where \begin{equation}\label{Eq:FDef} F(x)= -e^xy+e^x\log 2 + e^x \left ( 1-\frac 1x \right ) (x-\log x) + \rho _1 \frac {e^x}x-\rho _2e^y+\frac {e^yy}2 \end{equation} \vspace{0.1cm} \item[(2)$'$] for any $\rho _1\in \mathbf R$, there is a $\rho _2\in \mathbf R$ such that \eqref{Eq:GeneralEstimate2} holds. \end{enumerate} \par By choosing $x=y+\log y -\log 2$ and letting $h=\frac {\log y - \log 2}y>0$, which is small when $y$ is large, \eqref{Eq:FDef} becomes \begin{multline*} F(y+\log y-\log 2) = e^y \left ( -\frac {y^2}2 +\frac {y\log 2}2 + \right . \\[1ex] \frac y2 \left ( 1-\frac 1{y+\log y-\log 2} \right ) (y+\log y-\log 2 -\log (y+\log y-\log 2)) \\[1ex] \left . +\frac {\rho _1y}{2(y+\log y-\log 2)}-\rho _2+\frac y2 \right ) \\[1ex] = e^y \left ( -\frac y2\log (1+h)+\frac {\log y +\log (1+h)}{2(1+h)} +\frac {\rho _1}{2(1+h)}-\rho _2 \right ). \end{multline*} If $\rho _1\in \mathbf R$ is fixed, then we choose $\rho _2\in \mathbf R$ such that \begin{equation}\label{Eq:ChoosingRho} \frac {\rho _1}{2(1+h)}-\rho _2\le -C_0 \end{equation} for some large number $C_0>0$.
In the same way, if $\rho _2\in \mathbf R$ is fixed, then we choose $\rho _1\in \mathbf R$ such that \eqref{Eq:ChoosingRho} holds. For such choices and the fact that $h>0$, Taylor expansions give \begin{multline*} F(y+\log y-\log 2) \le e^y \left ( -\frac y2\log (1+h)+\frac {\log y +\log (1+h)}{2(1+h)}-C_0 \right ) \\[1ex] \le e^y \left ( -\frac y2\log (1+h)+\frac {\log y +\log (1+h)}{2} -C_0\right ) \\[1ex] \le e^y \left ( -\frac {\log y-\log 2}2+\frac {(\log y-\log 2)^2}{4y}+\frac 12 \left ( \log y +h \right ) -C_0\right ) \\[1ex] \le e^y \left ( \frac 12\log 2+\frac {(\log y-\log 2)^2}{4y}+\frac h2 -C_0\right )<0, \end{multline*} provided $C_0$ was chosen large enough. This gives the result in the case $j=2$. \par Next suppose that $j=1$, $r_2>0$ and $\rho \in (0,1)$. By the first part of the proof, there are $t_1>0$ and $r_1>0$ such that $$ s^{-t_1} \left ( \frac {2t_1}{\log t_1} \right ) ^{t_1(1-\frac 1{\log t_1})}r_1^{\frac {t_1}{\log t_1}} \lesssim (\rho r_2 )^ss^{-\frac s2}. $$ By Lemma \ref{Lemma:LogFuncEst} and the fact that $\rho ^ss^{\sigma _0}$ is bounded, it follows that $$ s^{-t} \left ( \frac {2t}{\log t} \right ) ^{t(1-\frac 1{\log t})}r_1^{\frac {t}{\log t}} \lesssim r_2^s s^{-\frac s2} $$ holds for some $r_1>0$, when $t=N\sigma$ and $N\in \mathbf N$ is chosen such that $0\le t_1-N\sigma\le \sigma$. This gives (1) for $j=1$. \par By similar arguments, (2) for $j=1$ follows from (2) in the case $j=2$. The details are left to the reader. \end{proof} \par \begin{proof}[Proof of Proposition \ref{Prop:CoeffGivesHarmPowEst}] Suppose that \eqref{Eq:CoeffGivesHarmPowEst1} holds for some $r=r_1>0$.
By \begin{equation} \label{c-alpha} c_\alpha (H_d^Nf) = (2|\alpha |+d)^Nc_\alpha (f), \quad |c_\alpha (H_d^Nf)|\le \nm {H_d^Nf}{L^2} \end{equation} (here the identity follows from the eigenvalue relations $H_dh_\alpha =(2|\alpha |+d)h_\alpha$, and the inequality from Parseval's identity) and \eqref{Eq:CoeffGivesHarmPowEst1} we get \begin{multline*} |c_\alpha (f)| = \frac {|c_\alpha (H_d^Nf)|}{(2|\alpha |+d)^N} \\[1ex] \lesssim \left ( |\alpha |+\frac d2 \right )^{-N} r_1^{\frac N{\log (N\sigma )}} \left ( \frac {2N\sigma}{\log (N\sigma )} \right ) ^{N(1-\frac 1{\log (N\sigma )})} \\[1ex] \le \left ( |\alpha | ^{-N\sigma} r_1^{\frac{N\sigma}{\log (N\sigma )}} \left ( \frac {2N\sigma}{\log (N\sigma )} \right ) ^{N\sigma (1-\frac 1{\log (N\sigma )})} \right ) ^{\frac 1\sigma}. \end{multline*} By taking the infimum over all $N\ge 0$, it follows from Lemma \ref{Lemma:CoeffGivesHarmPowEst} (2) that \begin{equation*} |c_\alpha (f)| \lesssim \left ( r_2^{|\alpha |}|\alpha |^{-\frac {|\alpha |}2} \right ) ^{\frac 1\sigma} = r^{|\alpha |}|\alpha |^{-\frac {|\alpha |}{2\sigma}} \end{equation*} for some $r_2>0$, where $r=r_2^{\frac 1\sigma}$. Hence \eqref{Eq:CoeffGivesHarmPowEst2} holds for some $r>0$. \par By similar arguments, using (1) instead of (2) in Lemma \ref{Lemma:CoeffGivesHarmPowEst}, it follows that if \eqref{Eq:CoeffGivesHarmPowEst1} holds for every $r>0$, then \eqref{Eq:CoeffGivesHarmPowEst2} holds for every $r>0$. \end{proof} \par For the proof of Proposition \ref{Prop:HarmPowGivesCoeffEst} we will use the following slight extension of \cite[Lemma 2]{FeGaTo1}. \par \begin{lemma}\label{Lemma:EstOnExFracFunc} Let $r>0$ and $$ f(s,r)= \frac {s^{2t}(2re)^s}{s^s},\quad s\ge 1, $$ for some fixed $t\ge 0$.
Then the following is true: \begin{enumerate} \item if in addition $t$ is an integer, then there exists a positive and increasing function $\theta$ on $[0,\infty )$ and an integer $t_0(r)>e$ such that \begin{equation}\label{Eq:EstOnExFracFunc1} \max _{s>0}f(s,r) \le \left ( \frac {2t}{\log t} \right ) ^{2t(1-\frac 1{\log t})}(\theta (r)r)^{\frac {2t}{\log t}}, \quad t\ge t_0(r) \text ; \end{equation} \vspace{0.1cm} \item there exists a positive and increasing function $\theta$ on $[0,\infty )$ and an integer $t_0(r)>e$ such that \begin{equation}\label{Eq:EstOnExFracFunc2} \max _{s>0}f(s,r) \le \left ( \frac {2t}{\log t} \right ) ^{2t(1-\frac 1{\log t})}(\theta (r)r+(\theta (r)r)^{\frac {e-1}e})^{\frac {2t}{\log t}}, \quad t\ge t_0(r). \end{equation} \end{enumerate} \end{lemma} \par \begin{proof} The assertion (1) is essentially a restatement of \cite[Lemma 2]{FeGaTo1}. \par If $\theta (r)r\ge 1$, then all factors on the right-hand sides of \eqref{Eq:EstOnExFracFunc1} and \eqref{Eq:EstOnExFracFunc2} are increasing with respect to $t$, giving that \eqref{Eq:EstOnExFracFunc1} is true for any real $t\ge e$. Hence (2) follows in this case. \par Suppose instead that $\theta (r)r \le 1$, and let $t_1$ be the integer part of $t$ and let $F$ be the same as in Lemma \ref{Lemma:LogFuncEst}. Then (1) and Lemma \ref{Lemma:LogFuncEst} give \begin{multline*} \max _{s>0}f(s,r) \le \left ( \frac {2t_1}{\log t_1} \right ) ^{2t_1(1-\frac 1{\log t_1})}(\theta (r)r)^{\frac {2t_1}{\log t_1}} = F(\theta (r)r,t_1)^2 \\[1ex] \le F((\theta (r)r)^{\frac {e-1}e},t)^2 = \left ( \frac {2t}{\log t} \right ) ^{2t(1-\frac 1{\log t})}((\theta (r)r)^{\frac {e-1}e})^{\frac {2t}{\log t}}, \end{multline*} provided $t\ge t_0(r)$ and $t_0(r)$ is chosen large enough. This gives (2). \end{proof} \par \begin{proof}[Proof of Proposition \ref{Prop:HarmPowGivesCoeffEst}] Suppose that \eqref{Eq:CoeffGivesHarmPowEst2} holds for some $r>0$ and let $r_2>r$. 
From \eqref{Eq:CoeffGivesHarmPowEst2} and \eqref{c-alpha} we get \begin{multline*} \nm {H_d^Nf}{L^2}^2 = \sum \limits _{\alpha \in \nn d} |(2|\alpha|+d)^Nc_\alpha(f)|^2 \\[1ex] \lesssim \sup _{ |\alpha |\ge 1} \left( (2|\alpha|+d)^{2N} r_2^{2|\alpha|} |\alpha|^{-\frac{|\alpha|}{\sigma}} \right) \\[1ex] =\sup \limits _{s\ge 1} \left( 2^{2t} \left( s+\frac d{2} \right)^{2t} r_2^{2s} s^{-s} \right)^{\frac 1{\sigma}}, \end{multline*} where $s=|\alpha|$ and $t=N\sigma$. For $0<\rho <1$ we have \begin{multline*} s^s= \left( s-\frac d{2}+\frac d{2} \right)^{s-\frac d{2}} s^{\frac d{2}} \\[1ex] = \left( s-\frac d{2} \right)^{s-\frac d{2}} s^{\frac d{2}} \left( 1+\frac d{2s-d} \right)^{s-\frac d{2}} \\[1ex] \le \left( s-\frac d{2} \right)^{s-\frac d{2}} (se)^{\frac d{2}} \lesssim \left( s-\frac d{2} \right)^{s-\frac d{2}} \rho^{-2s}. \end{multline*} \par This gives \begin{multline} \label{Eq:equ2} \nm {H_d^Nf}{L^2}^2 \lesssim \sup \limits_{s\ge 1} \left( 2^{2t} \left( s+\frac d{2} \right)^{2t} r_2^{2s} s^{-s} \right)^{\frac 1{\sigma}} \\[1ex] = \sup \limits_{s\ge 1+\frac d{2}} \left( 2^{2t} s^{2t} r_2^{2s-d} \left( s-\frac d{2} \right)^{-(s- \frac d{2})} \right)^{\frac 1{\sigma}} \\[1ex] \lesssim \sup \limits _{s\ge 1+\frac d{2}} \left ( 2^{2t} s^{2t} \left( \frac{r_2}{\rho} \right)^{2s} s^{-s} \right)^{\frac 1{\sigma}}. 
\end{multline} \par Using \eqref{Eq:equ2} and Lemma \ref{Lemma:EstOnExFracFunc} we obtain \begin{multline*} \nm {H_d^Nf}{L^2}^2 \lesssim \sup \limits _{s\ge 1+\frac d{2}} \left( 2^{2t} s^{2t} \left( \frac{r_2}{\rho} \right)^{2s} s^{-s} \right)^{\frac 1\sigma} \\[1ex] = \sup \limits _{s\ge 1+\frac d{2}} \left( 2^{2t} s^{2t} \left( 2r_3e \right)^s s^{-s} \right)^{\frac 1\sigma} \\[1ex] \lesssim \left ( 2^{2t} \left ( \frac{2t}{\log t} \right )^{2t(1-\frac 1{\log t})} \left ( \theta (r_3)r_3 \right)^{\frac{2t}{\log t}} \right)^{\frac 1\sigma} \\[1ex] = 2^{2N}(r_3\theta (r_3))^{\frac{2N}{\log (N\sigma)}} \left ( \frac {2N\sigma}{\log (N\sigma)} \right ) ^{2N(1-\frac 1{\log (N\sigma )})} \end{multline*} when $$ r_3 = \frac{r_2^{2}}{2\rho^2e}.\qedhere $$ \end{proof} \par \begin{proof}[Proof of Theorem \ref{Thm:Mainthm1}] By Proposition \ref{Prop:NormEquiv} we may assume that $p=2$. The result now follows from Propositions \ref{Prop:CoeffGivesHarmPowEst} and \ref{Prop:HarmPowGivesCoeffEst}, together with the fact that $$ (d\cdot e)^{-|\alpha|} |\alpha|^{|\alpha|} \le \alpha ! \le |\alpha|^{|\alpha|},\quad \alpha \in \nn d. $$ \end{proof} \par
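\par We end the section with a heuristic explanation of the particular shape of the right-hand side of \eqref{Eq:GFHarmCond}, suppressing all inessential constants. If $|c_\alpha (f)|\asymp r_0^{|\alpha |}\alpha !^{-\frac 1{2\sigma}}$, then \eqref{c-alpha}, Parseval's identity and Stirling's formula give $$ \nm {H_d^Nf}{L^2}\approx 2^N\sup _{s\ge 1} \left ( s^NC^ss^{-\frac s{2\sigma}} \right ) $$ for some $C>0$, where $s$ plays the role of $|\alpha |$. A computation with logarithmic derivatives shows that the supremum is essentially attained at the point $s$ where $s(\log s+1)=2\sigma N$, that is, at $s\approx \frac {2N\sigma}{\log (N\sigma )}$, and then $$ s^{N-\frac s{2\sigma}} = s^{N ( 1-\frac 1{\log s+1} )} \approx \left ( \frac {2N\sigma}{\log (N\sigma )} \right ) ^{N(1-\frac 1{\log (N\sigma )})} \quad \text{and}\quad C^s\approx \left ( C^{2\sigma} \right ) ^{\frac N{\log (N\sigma )}}, $$ in agreement with \eqref{Eq:GFHarmCond}.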
\section{Introduction}\label{introduction} When studying number theory problems, one often runs into nonlinear exponential sums of the form \begin{eqnarray*} \sum_{n=1}^{\infty}a_n e\left(t \varphi\left(\frac{n}{X}\right)\right)V\left(\frac{n}{X}\right), \end{eqnarray*} where $a_n$ is some arithmetic function; here and throughout the paper, $e(x)=e^{2\pi ix}$, $V(x)\in \mathcal{C}_c^{\infty}(1,2)$ is a smooth function with support contained in $(1,2)$, $t,X\geq 1$ are large parameters and $\varphi(x)$ is some nonlinear real-valued smooth function. For example, for an automorphic $L$-function $L(s,F)$, the subconvexity problem of $L(s,F)$ in the $t$-aspect boils down to a nontrivial estimate for this sum with $a_n=\lambda_F(n)$ being the Fourier coefficients of the automorphic form $F$ and $\varphi(x)=-(\log x)/2\pi$. Here we note that for $a_n$ $(n\sim X)$ satisfying $\|a_n\|^2=\sum_n|a_n|^2\ll X$, the trivial bound for this nonlinear exponential sum is $O(X)$. On the other hand, it is worth noting that the square-root cancellation phenomenon should not hold in general; it was first found by Iwaniec, Luo and Sarnak \cite{ILS} (see Appendix C, (C.17) and (C.18)) that \begin{eqnarray}\label{GL2} \sum_{n=1}^{\infty} \lambda_F(n)e(-2\sqrt{qn})V\left(\frac{n}{X}\right) =\frac{\lambda_F(q)}{q^{1/4}}\hat{V}(0)X^{3/4}+O\big((qX)^{1/4+\varepsilon}\big), \end{eqnarray} for any positive integer $q$ and any $\varepsilon>0$, where $\lambda_F(n)$ are the normalized Fourier coefficients of a $\rm SL_2(\mathbb{Z})$ holomorphic cusp form $F$ of weight $\kappa$ and $\hat{V}(0)=2^{-1}i^\kappa (1-i)\int_0^{\infty}V(x)x^{-1/4}\mathrm{d}x$. Moreover, Kaczorowski and Perelli \cite{KP} improved and extended this result to the Selberg class, and this was later revisited by Ren and Ye \cite{Ren-Ye} for $\rm GL_m$ Maass cusp forms.
For $a_n=\lambda_F(n)$ being the Fourier coefficients of an automorphic form $F$, a natural way to study the associated nonlinear exponential twisted sum is to directly use the functional equation of the automorphic $L$-function $L(s,F)$ or equivalently, the Voronoi formula for $\lambda_F(n)$, as shown in \cite{KP} and \cite{Ren-Ye}. However, if the nonlinear exponential function $e\left(t \varphi\left(n/X\right)\right)$ oscillates strongly enough, there is a chance to obtain further savings by separating the oscillations of $\lambda_F(n)$ and $e\left(t \varphi\left(n/X\right)\right)$ using the $\delta$-method. Kumar, Mallesham and Singh \cite{KMS19} first implemented this idea for $\rm GL_3$ Maass cusp forms by using the Duke-Friedlander-Iwaniec $\delta$-method given in \cite{IK} together with the conductor-lowering trick due to Munshi \cite{Mun1}, and proved that for $t=X^{\beta}$ and $\varphi(x)=\alpha x^{\beta}$ $(\alpha\in \mathbb{R}\backslash\{0\}, 0<\beta<1)$ \begin{eqnarray*} \sum_{n=1}^{\infty}\lambda_{\pi}(1,n)e\left(t\varphi\bigg(\frac{n}{X}\bigg)\right) V\bigg(\frac{n}{X}\bigg)\ll_{\pi,\alpha,\beta} t^{3/10}X^{3/4+\varepsilon}, \end{eqnarray*} which improved the estimate $O(X^{3\beta/2}\log X)$ by Ren and Ye \cite{Ren-Ye-1} for $\beta>5/8$. Here $\lambda_{\pi}(1,n)$ are the normalized Fourier coefficients of a Hecke-Maass cusp form $\pi$ for $\rm GL_3(\mathbb{Z})$. See also the first author \cite{HB}. For cusp forms on $\rm GL_2$, the associated nonlinear exponential twisted sums were studied in Aggarwal, Holowinsky, Lin and Qi \cite{AHLQ} by a Bessel $\delta$-method. Recently, Lin and the second author \cite{LS} studied the $\rm GL_3\times\rm GL_2$ case by using the Duke-Friedlander-Iwaniec $\delta$-method in \cite{IK}, but unlike \cite{KMS19} without the conductor-lowering trick (as in Aggarwal \cite{Agg}). The goal of this paper is to study nonlinear exponential twists of $\rm GL_2\times\rm GL_2$ automorphic forms.
More precisely, let $f$ and $g$ be either holomorphic or Maass cusp forms for $\rm SL_2(\mathbb{Z})$ with normalized Fourier coefficients $\lambda_f(n)$ and $\lambda_g(n)$, respectively. Define \begin{eqnarray}\label{natural-sum} S(X,t)=\sum_{n=1}^{\infty}\lambda_f(n) \lambda_g(n)e\left(t \varphi\left(\frac{n}{X}\right)\right)V\left(\frac{n}{X}\right). \end{eqnarray} Our main result is stated as follows. \begin{theorem}\label{main-theorem} Let $\varphi(x)=\alpha\log x$ or $\alpha x^{\beta}$ ($\beta\in (0,1)\backslash \{1/2,3/4\}$, $\alpha\in \mathbb{R}\backslash \{0\}$). Let $V(x)\in \mathcal{C}_c^{\infty}(1,2)$ be a smooth function with total variation $\mathrm{Var}(V)\ll 1$ and satisfying the condition \begin{equation}\label{derivative-of-V} V^{(j)}(x)\ll_j \triangle^j \end{equation} for any integer $j\geq 0$ with $\triangle\ll t^{1/2-\varepsilon}$ for any $\varepsilon>0$. Then we have \begin{eqnarray*} S(X,t) \ll_{f,g,\varphi,\varepsilon}t^{2/5}X^{3/4+\varepsilon} \end{eqnarray*} for $t^{8/5}<X<t^{12/5}$. \end{theorem} \begin{remark} The assumption $\triangle\ll t^{1/2-\varepsilon}$ arises when we use stationary phase analysis for a certain oscillatory integral in the proof (see \eqref{assumption-on-Delta}). For the sake of simplicity, we have restricted $f$ and $g$ to be on the full modular group. In fact, Theorem \ref{main-theorem} can be extended similarly to modular forms of arbitrary level and nebentypus without much extra effort. \end{remark} Since the test function $V$ in Theorem \ref{main-theorem} is allowed to oscillate, we can remove it from the sum. \begin{corollary}\label{sharp-cut-sum} Same notation and assumptions as in Theorem \ref{main-theorem}. We have \begin{eqnarray*} \sum_{X<n\leq 2X}\lambda_f(n) \lambda_g(n)e\left(t \varphi\left(\frac{n}{X}\right)\right) \ll_{f,g,\varphi,\varepsilon} t^{2/5}X^{3/4+\varepsilon} \end{eqnarray*} for $t^{8/5}< X<t^{12/5}$.
\end{corollary} A special case of Theorem \ref{main-theorem} is $t=X^{\beta}$ and $\varphi(x)=\alpha x^{\beta}$ $(\alpha\in \mathbb{R}\backslash\{0\}, 0<\beta<1, \beta\neq 1/2,3/4)$. Then Corollary \ref{sharp-cut-sum} implies the following result; note that with $t=X^{\beta}$ the bound $t^{2/5}X^{3/4+\varepsilon}$ reads $X^{3/4+2\beta/5+\varepsilon}$, and the condition $t^{8/5}<X<t^{12/5}$ is equivalent to $5/12<\beta<5/8$. \begin{corollary} For any $\alpha\in \mathbb{R}\backslash\{0\}$, we have \begin{eqnarray*} \sum_{n\leq X}\lambda_f(n) \lambda_g(n)e\big(\alpha n^{\beta}\big) \ll_{f,g,\alpha,\beta,\varepsilon} X^{3/4+2\beta/5+\varepsilon} \end{eqnarray*} for $5/12<\beta< 5/8$, $\beta\neq 1/2$. \end{corollary} For $15/32<\beta< 5/8$, $\beta\neq 1/2$, Corollary 1.3 improves the estimate $O_{f,g,\alpha,\beta,\varepsilon}(X^{2\beta})$ by Czarnecki \cite{C}. Theorem \ref{main-theorem} also admits an application in bounding Rankin-Selberg $L$-functions on the critical line. We recall that \begin{eqnarray*} L\left(s,f\otimes g\right)=\zeta(2s)\sum_{n=1}^{\infty} \frac{\lambda_f(n)\lambda_g(n)}{n^{s}} \end{eqnarray*} for $\mbox{Re}\,s>1$. The convexity bound in the $t$-aspect is $L\left(1/2+it,f\otimes g\right)\ll t^{1+\varepsilon}$ and recently Acharya, Sharma and Singh \cite{ASS20} proved the subconvexity bound $O_{f,g,\varepsilon} (t^{1-1/16+\varepsilon})$ by using the Duke-Friedlander-Iwaniec $\delta$-method given in \cite{IK} together with the conductor-lowering trick due to Munshi \cite{Mun1}. An application of the approximate functional equation implies \begin{eqnarray*} L\left(\frac{1}{2}+it,f\otimes g\right)\ll \sup_{N\ll t^{2+\varepsilon}}\frac{1}{\sqrt{N}} \left|\sum_{n=1}^{\infty}\lambda_f(n) \lambda_g(n)n^{-it}V\left(\frac{n}{N}\right)\right|+ t^{-100}. \end{eqnarray*} We demonstrate that the conductor-lowering trick in Acharya, Sharma and Singh's proof can be removed; applying Theorem \ref{main-theorem} with $\varphi(x)=-(\log x)/2\pi$, we improve their result. Indeed, for $t^{8/5}<N\ll t^{2+\varepsilon}$, Theorem \ref{main-theorem} gives the bound $N^{-1/2}\cdot t^{2/5}N^{3/4+\varepsilon}\ll t^{2/5}N^{1/4+\varepsilon}\ll t^{9/10+\varepsilon}$, while for $N\leq t^{8/5}$ the trivial bound $N^{-1/2}\cdot N^{1+\varepsilon}\leq t^{4/5+\varepsilon}$ suffices. \begin{corollary}\label{subconvexity} We have \begin{eqnarray*} L\left(1/2+it,f\otimes g\right)\ll_{f,g,\varepsilon} (1+|t|)^{9/10+\varepsilon}.
\end{eqnarray*} \end{corollary} The best known bound for $L\left(1/2+it,f\otimes g\right)$ is the Weyl-type bound $L\left(1/2+it,f\otimes g\right)\ll (1+|t|)^{2/3+\varepsilon}$ due to Blomer, Jana and Nelson \cite{BJN}, obtained by combining in a substantial way representation theory, local harmonic analysis, and analytic number theory. Earlier, Bernstein and Reznikov showed the bound $(1+|t|)^{5/6+\varepsilon}$ in \cite{BR} (see Remark 7.2.2.2). Now we consider another application of Theorem \ref{main-theorem}. Let $L(s,F)$ be an $L$-function of degree $d$ with coefficients $\lambda_F(1)=1$, $\lambda_F(n)\in\mathbb{C}$. It is a fundamental problem to prove an asymptotic formula for the sum \begin{eqnarray*} \mathcal A(X,F) = \sum_{n\leq X} \lambda_F(n). \end{eqnarray*} Let $(\mu_{1,F},\ldots, \mu_{d,F})$ be the Satake parameter of $F$ at $\infty$. Assume $L_\infty(s,F)=\prod_{1\leq j\leq d}\Gamma_\mathbb{R}(s-\mu_{j,F})$ does not have poles for $\mathrm{Re}(s)>1/2+1/d$, where $\Gamma_\mathbb{R}(s)=\pi^{-s/2}\Gamma(s/2)$. Under the Ramanujan-Petersson conjecture $\lambda_F(n)\ll n^{\varepsilon}$, Friedlander and Iwaniec \cite{Fri-Iwa} established the following identity which relates $\mathcal A(X,F)$ to its dual sum $\mathcal B(X,N)$: \begin{eqnarray}\label{FI-Functional-eq} \mathcal A(X,F)=\mathrm{Res}_{s=1}\frac{L(s,F)}{s}X+c_F\, X^{\frac{d-1}{2d}}\mathcal B(X,N) +O\left(N^{-\frac{1}{d}}X^{\frac{d-1}{d}+\varepsilon}\right), \end{eqnarray} where $c_F$ is some constant depending only on the form $F$ and \begin{eqnarray*} \mathcal B(X,N)=\sum_{n\leq N}\overline{\lambda_F(n)}\, n^{-\frac{d+1}{2d}} \cos(2\pi d\left(nX\right)^{1/d}). \end{eqnarray*} In particular, by estimating the sum $\mathcal B(X,N)$ trivially and choosing $N=X^{(d-1)/(d+1)}$, Friedlander and Iwaniec showed that \begin{eqnarray}\label{trivial-one} \mathcal A(X,F)=\mathrm{Res}_{s=1}\frac{L(s,F)}{s}X+O\left(X^{\frac{d-1}{d+1}+\varepsilon}\right) \end{eqnarray} for any $\varepsilon>0$.
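For the reader's convenience, we recall the standard optimization behind \eqref{trivial-one}. Estimating trivially and using $\lambda_F(n)\ll n^{\varepsilon}$, one has $$ \mathcal B(X,N)\ll \sum_{n\leq N}n^{-\frac{d+1}{2d}+\varepsilon}\ll N^{\frac{d-1}{2d}+\varepsilon}, $$ so the second term in \eqref{FI-Functional-eq} is $O\big((XN)^{\frac{d-1}{2d}+\varepsilon}\big)$. Balancing this against the error term $O\big(N^{-\frac 1d}X^{\frac{d-1}{d}+\varepsilon}\big)$ amounts to requiring $N^{\frac{d+1}{2d}}=X^{\frac{d-1}{2d}}$, that is, $N=X^{\frac{d-1}{d+1}}$, which yields the error term $O\big(X^{\frac{d-1}{d+1}+\varepsilon}\big)$ in \eqref{trivial-one}.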
For $\lambda_F(n)=\sum_{n_1n_2n_3=n}\chi_1(n_1) \chi_2(n_2)\chi_3(n_3)$, $\chi_j$ being primitive Dirichlet characters, Friedlander and Iwaniec \cite{Fri-Iwa} proved an asymptotic formula with the error term $O(X^{1/2-1/150+\varepsilon})$. Recently, for $F=1\boxplus f$ and $\lambda_F(n)=\sum_{\ell m=n}\lambda_f(m)$, where $f$ is a holomorphic cusp form for $\rm SL_2(\mathbb{Z})$, Huang, Lin and Wang \cite{HLW} proved an asymptotic formula with the error term $O(X^{1/2-4/739+\varepsilon})$. For $F=1\boxplus \mathrm{sym}^2f$ and $\lambda_F(n)=\sum_{\ell^2 m=n}\lambda_f(m)^2$, where $f$ is a holomorphic Hecke or Hecke-Maass cusp form for $\rm SL_2(\mathbb{Z})$, Huang \cite{HB} proved an asymptotic formula with the error term $O(X^{3/5-1/560+\varepsilon})$. Under the Ramanujan-Petersson conjecture Lin and the second author \cite{LS} considered the $\rm GL_3\times \rm GL_2$ case and proved the bound $O(X^{5/7-1/364+\varepsilon})$ for $F=\pi\otimes f$, where $\pi$ is a Hecke--Maass cusp form for $\rm SL_3(\mathbb{Z})$ and $f$ is a holomorphic or Maass cusp form for $\rm SL_2(\mathbb{Z})$. As an application of Theorem \ref{main-theorem}, we improve \eqref{trivial-one} for $F$ being certain $\rm GL_5$ Eisenstein series, namely when $F=1\boxplus(f \times g)$ and $L(s,F)=\zeta(s)L(s,f\times g)$. For simplicity, we consider the holomorphic case. In fact, our argument holds also for Maass cusp forms under the Ramanujan-Petersson conjecture. Now let $f$ and $g$ be holomorphic Hecke cusp forms for $\mathrm{SL}_2(\mathbb{Z})$ of weights $k$ and $\kappa$, with $k \geq \kappa \geq 12$, and normalized Fourier coefficients $\lambda_f(n)$ and $\lambda_g(n)$, respectively.
For $\rm{Re}(s) > 1$, we define $$ L(s,1\boxplus(f \times g))= \zeta(s)L(s,f \times g) = \zeta(s)\zeta(2 s)\sum_{n=1}^{\infty} \frac{\lambda_f(n)\lambda_g(n)}{n^s}= \sum_{n=1}^{\infty}\frac{\lambda_{1\boxplus(f\times g)}(n)}{n^s}, $$ where $\lambda_{1\boxplus(f\times g)}(n):= \sum_{lm^2r=n}\lambda_f(r)\lambda_g(r)$. Note that \eqref{trivial-one} reads $$ \sum_{n \leq X}\lambda_{1\boxplus(f\times g)}(n) = L(1,f\times g)X + O(X^{\frac{2}{3}+\varepsilon}). $$ We shall prove the following result. \begin{corollary}\label{cor:main} Let $f$ and $g$ be holomorphic Hecke cusp forms for $\mathrm{SL}_2(\mathbb{Z})$ of weights $k$ and $\kappa$ with $12 \leq \kappa \leq k$, and normalized Fourier coefficients $\lambda_f(n)$ and $\lambda_g(n)$, respectively. Assume $f \perp g$. Then we have $$ \sum_{n \leq X}\lambda_{1\boxplus(f\times g)}(n) = L(1,f\times g)X + O(X^{\frac{2}{3}-\frac{1}{356} +\varepsilon}) $$ for any $\varepsilon>0$. \end{corollary} \begin{remark} At the end of the proof of Corollary \ref{cor:main} we will use the exponent pair $(\frac{13}{194}+\varepsilon,\, \frac{76}{97}+\varepsilon)$ which is a consequence of Bourgain's exponent pair in \cite{Bourgain} and the $A$-process in the theory of exponent pairs. This is the best known exponent pair we find for our problem. We essentially need to choose an exponent pair $(p,q)$ to minimize $\frac{38+33p-28q}{58+48p-43q}$. \end{remark} \medskip The paper is organized as follows. In Section \ref{sketch-of-proof}, we provide a quick sketch and key steps of the proof. In Section \ref{review-of-cuspform}, we review some basic material on automorphic forms on $ \rm GL_2$ and estimates on exponential integrals. Sections \ref{details-of-proof} and \ref{proofs-of-technical-lemma} give the details of the proof of Theorem \ref{main-theorem}, and in Sections \ref{proofs-of-corollary} and \ref{sec:Cor} we complete the proofs for Corollaries \ref{sharp-cut-sum} and \ref{cor:main}, respectively.
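As a numerical sanity check of the exponent in Corollary \ref{cor:main} and of the Remark following it (an illustration only, not part of the proof): with the exponent pair $(p,q)=(13/194,76/97)$ and the $\varepsilon$'s dropped, the quantity $(38+33p-28q)/(58+48p-43q)$ evaluates in exact arithmetic to $709/1068=2/3-1/356$.

```python
from fractions import Fraction as F

# Exponent pair from the Remark, with the epsilons dropped
p, q = F(13, 194), F(76, 97)
exponent = (38 + 33 * p - 28 * q) / (58 + 48 * p - 43 * q)
assert exponent == F(2, 3) - F(1, 356)
print(exponent)  # prints 709/1068
```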
\bigskip \noindent {\bf Notation.} Throughout the paper, the letters $q$, $m$ and $n$, with or without subscript, denote integers. The letters $\varepsilon$ and $A$ denote arbitrarily small and large positive constants, respectively, not necessarily the same at different occurrences. We use $A\asymp B$ to mean that $c_1B\leq |A|\leq c_2B$ for some positive constants $c_1$ and $c_2$. The symbol $\ll_{a,b,c}$ denotes that the implied constant depends at most on $a$, $b$ and $c$, and $q\sim C$ means $C<q\leq 2C$. \section{Outline of the proof}\label{sketch-of-proof} In this section, we provide a quick sketch of the proof for Theorem \ref{main-theorem}. Suppose we are working with the following sum \begin{eqnarray*} \mathcal{S}=\sum_{n\sim X}\lambda_f(n) \lambda_g(n)e\left(t\varphi\bigg(\frac{n}{X}\bigg)\right). \end{eqnarray*} The first step is writing \begin{eqnarray*} \mathcal{S}=\sum_{n\sim X}\lambda_f(n)\sum_{m\sim X}\lambda_g(m) e\left(t\varphi\bigg(\frac{m}{X}\bigg)\right)\delta(m-n,0), \end{eqnarray*} and using the $\delta$-method to detect the Kronecker delta symbol $\delta(m-n,0)$. As in \cite{LS}, we use the Duke-Friedlander-Iwaniec's $\delta$-method \eqref{DFI's} to write \begin{eqnarray}\label{before-voronoi} \mathcal{S}&=&\frac{1}{Q} \int_{-X^{\varepsilon}}^{X^{\varepsilon}} \sum_{q\sim Q}\frac{1}{q}\;\sideset{}{^\star}\sum_{a\bmod{q}}\, \sum_{n\sim X}\lambda_f(n)e\left(-\frac{na}{q}\right) e\left(-\frac{n\zeta}{qQ}\right)\nonumber\\ &&\sum_{m\sim X}\lambda_g(m)e\left(\frac{ma}{q}\right) e\left(t\varphi\bigg(\frac{m}{X}\bigg)+\frac{m\zeta}{qQ}\right)\mathrm{d}\zeta, \end{eqnarray} where the $\star$ in the sum over $a$ means that the sum is restricted to $(a,q)=1$. Next, we use the $\mathrm{GL}_2$ Voronoi summation formulas to dualize the $m$- and $n$-sums. 
The $m$-sum can be transformed into the following \begin{eqnarray}\label{GL2dual} &&\sum_{m\sim X}\lambda_g(m)e\left(\frac{ma}{q}\right) e\left(t\varphi\bigg(\frac{m}{X}\bigg)+\frac{m\zeta}{qQ}\right) V\left(\frac{m}{X}\right)\nonumber\\ &&\leftrightarrow \frac{X}{Qt^{1/2}} \sum_{\pm}\sum_{m\sim Q^2t^2/X}\lambda_g(m)e\left(-\frac{m\bar{a}}{q}\right) \Phi^{\pm}\left(m,q,\zeta\right), \end{eqnarray} where \begin{eqnarray*} \Phi^{\pm}\left(m,q,\zeta\right)=\int_0^\infty V(y)y^{-1/4} e\left(t\varphi(y)+\frac{\zeta Xy}{qQ}\pm\frac{2\sqrt{mXy}}{q}\right)\mathrm{d}y. \end{eqnarray*} If we assume for example $\varphi'(x)>0$, then by integration by parts, $\Phi^{+}\left(m,q,\zeta\right)\ll X^{-A}$, and we only need to consider the minus sign contribution. Similarly, for the $n$-sum, we have \begin{eqnarray}\label{GL3dual} &&\sum_{n\sim X}\lambda_f(n)e\left(-\frac{na}{q}\right) e\left(-\frac{n\zeta}{qQ}\right)U\left(\frac{n}{X}\right)\nonumber\\ &&\leftrightarrow X^{1/2}\sum_{n\sim X/Q^2}\lambda_f(n)e\left(\frac{n\bar{a}}{q}\right) \Psi^{+}\left(n,q,\zeta\right)+O_A(X^{-A}), \end{eqnarray} where \begin{eqnarray*} \Psi^{+}\left(n,q,\zeta\right)=\int_0^\infty U(y)y^{-1/4} e\left(-\frac{\zeta Xy}{qQ}+\frac{2\sqrt{nXy}}{q}\right)\mathrm{d}y. \end{eqnarray*} We perform a stationary phase argument to get (note that $n\sim X/Q^2$) \begin{eqnarray*} \Psi^{+}\left(n,q,\zeta\right)\asymp \frac{q^{1/2}}{(nX)^{1/4}} e\left(\frac{nQ}{q\zeta}\right) U^{\natural}\left(\frac{nQ^2}{X\zeta^2}\right) \asymp \frac{Q}{X^{1/2}} e\left(\frac{nQ}{q\zeta}\right) U^{\natural}\left(\frac{nQ^2}{X\zeta^2}\right) \end{eqnarray*} for some smooth compactly supported function $U^{\natural}(y)$. 
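The stationary-phase evaluation of $\Psi^{+}$ above can be checked numerically (the sample values below are arbitrary and for illustration only): the phase $\rho(y)=-\zeta Xy/(qQ)+2\sqrt{nXy}/q$ vanishes to first order at $y_0=nQ^2/(X\zeta^2)$, and $\rho(y_0)=nQ/(q\zeta)$, which is the source of the factor $e(nQ/(q\zeta))$.

```python
import math

# arbitrary positive sample values (illustration only)
X, Q, q, n, zeta = 1.0e6, 1.0e2, 37.0, 120.0, 0.7

def phase(y):
    # phase of Psi^+ (normalization by 2*pi omitted)
    return -zeta * X * y / (q * Q) + 2.0 * math.sqrt(n * X * y) / q

y0 = n * Q**2 / (X * zeta**2)  # predicted stationary point
# rho'(y) = -zeta*X/(q*Q) + sqrt(n*X/y)/q vanishes at y0 ...
dphase = -zeta * X / (q * Q) + math.sqrt(n * X / y0) / q
assert math.isclose(dphase, 0.0, abs_tol=1e-6)
# ... and the value of the phase there is n*Q/(q*zeta)
assert math.isclose(phase(y0), n * Q / (q * zeta))
```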
Then by plugging the dual sums \eqref{GL2dual} and \eqref{GL3dual} back into \eqref{before-voronoi} and switching the orders of integration over $\zeta$ and $y$, we roughly get \begin{eqnarray}\label{intermediateS(X)} \begin{split} \mathcal{S}\approx& \frac{X}{Q^2t^{1/2}}\sum_{q\sim Q}\, \sum_{m\sim Q^2t^2/X} \lambda_g(m) \sum_{n\sim X/Q^2} \lambda_f(n)S(m-n,0;q)\\ &\times\int_0^\infty V(y)y^{-1/4} e\left(t\varphi(y)-\frac{2\sqrt{mXy}}{q}\right)\mathcal{K}(y;n,q)\, \mathrm{d}y \end{split} \end{eqnarray} where \begin{eqnarray*} \mathcal{K}(y;n,q)= \int_{-X^{\varepsilon}}^{X^{\varepsilon}} U^{\natural}\left(\frac{nQ^2}{X\zeta^2}\right) e\left(\frac{\zeta Xy}{qQ}+\frac{nQ}{q \zeta}\right) \mathrm{d}\zeta. \end{eqnarray*} We evaluate the integral $\mathcal{K}(y;n,q)$ using the stationary phase method (note that $n\sim X/Q^2$) \begin{eqnarray*} \mathcal{K}(y;n,q)\asymp \frac{n^{1/4}q^{1/2}Q}{X^{3/4}}e\left(\frac{2\sqrt{nXy}}{q}\right) F(y)\asymp \frac{Q}{X^{1/2}}e\left(\frac{2\sqrt{nXy}}{q}\right) F(y) \end{eqnarray*} for some smooth compactly supported function $F(y)$. Hence putting things together and writing the Ramanujan sum $S\left(m-n,0;q\right)$ as $\sum_{d|(m-n,q)}d\mu(q/d)$, $\mathcal{S}$ in \eqref{intermediateS(X)} is roughly equal to \begin{eqnarray*} \begin{split} & \frac{X^{1/2}}{Qt^{1/2}}\sum_{q\sim Q}\,\sum_{d|q}d\mu\left(\frac{q}{d}\right) \sum_{m\sim Q^2t^2/X} \lambda_g(m) \sum_{n\sim X/Q^2\atop n\equiv m\bmod d} \lambda_f(n)\mathfrak{I}(m,n,q), \end{split} \end{eqnarray*} where \begin{eqnarray}\label{I-integral0} \mathfrak{I}(m,n,q)=\int_0^\infty \widetilde{V}(y) e\left(t\varphi(y)+\frac{2\sqrt{nXy}}{q}-\frac{2\sqrt{mXy}}{q}\right)\, \mathrm{d}y \end{eqnarray} for some smooth compactly supported function $\widetilde{V}(y)$. Assume $\varphi(x)=c\log x$ or $cx^{\beta}$ with $\beta\in (0,1), \beta\neq 1/2$. 
We apply the stationary phase analysis to the integral $\mathfrak{I}(m,n,q)$ to get \begin{eqnarray*} \mathfrak{I}(m,n,q) \sim e\left(t\varphi(y_0^2)-Dy_0\right)\mathfrak{I}^*(m,n,q) \end{eqnarray*} where $y_0=\left(ct/D\right)^{1/\beta}\asymp 1$ with $D=2q^{-1}(mX)^{1/2}$ and \begin{eqnarray*} \mathfrak{I}^*(m,n,q) \asymp t^{-1/2}e\left(\frac{2y_0n^{1/2}X^{1/2}}{q}\right). \end{eqnarray*} To prepare for an application of the Poisson summation in the $m$-variable, we now apply the Cauchy-Schwarz inequality to smooth the $m$-sum and put the $n$-sum inside the absolute value squared to get \begin{eqnarray*} \mathcal{S}&\ll& \frac{X^{1/2}}{Qt^{1/2}}\sum_{q\sim Q}\,\sum_{d|q}d \bigg(\sum_{m\sim Q^2t^2/X}|\lambda_g(m)|^2\bigg)^{1/2} \bigg(\sum_{m\sim Q^2t^2/X}\bigg|\sum_{n\sim X/Q^2\atop n\equiv m\bmod d} \lambda_f(n)\mathfrak{I}^*(m,n,q)\bigg|^2\bigg)^{1/2}\\ &\ll&t^{1/2}\sum_{q\sim Q}\,\sum_{d|q}d \bigg(\sum_{m\sim Q^2t^2/X}\bigg|\sum_{n\sim X/Q^2\atop n\equiv m\bmod d} \lambda_f(n)\mathfrak{I}^*(m,n,q)\bigg|^2\bigg)^{1/2}. \end{eqnarray*} \begin{remark} If we open the absolute value squared, by the Rankin-Selberg estimate for $\lambda_f(n)$ and the trivial estimate $\mathfrak{I}^*(m,n,q)\ll t^{-1/2}$, the contribution from the diagonal term $n=n'$ is given by \begin{equation}\label{S-diagonal} \begin{split} \mathcal{S}_{\text{diag}} \ll& t^{1/2}\sum_{q\sim Q}\,\sum_{d|q}d \bigg(\sum_{m\sim Q^2t^2/X}\sum_{n\sim X/Q^2\atop n\equiv m\bmod d} |\lambda_f(n)|^2|\mathfrak{I}^*(m,n,q)|^2\bigg)^{1/2}\\ \ll& Q^{3/2}t, \end{split} \end{equation} which will be fine for our purpose (i.e., $\mathcal{S}_{\text{diag}}=o(X)$) as long as $Q\ll (X/t)^{2/3}$. \end{remark} Note that the oscillation in the $m$-variable of $\mathfrak{I}^*(m,n,q)$ in \eqref{I-integral0} is of size $2y_0n^{1/2}X^{1/2}/q\approx X/Q^2$.
So opening the absolute value squared and applying the Poisson summation formula in the $m$-variable, we have \begin{equation*} \begin{split} \sum_{m\sim Q^2t^2/X\atop m\equiv n\bmod d}\mathfrak{I}^*(m,n,q) \overline{\mathfrak{I}^*(m,n',q)}\leftrightarrow \frac{Q^2t^2}{dX} \sum_{\tilde{m}\ll \frac{dX/Q^2}{Q^2t^2/X}} \,\mathcal{H}\left(\frac{\tilde{m}Q^2t^2}{dX}\right), \end{split} \end{equation*} where \begin{eqnarray}\label{correlation-integral} \mathcal{H}(x)=\int_{\mathbb{R}} \mathfrak{I}^*\left(Q^2t^2\xi/X,n,q\right) \overline{\mathfrak{I}^*\left(Q^2t^2\xi/X,n',q\right)} \, e\left(-x\xi\right)\mathrm{d}\xi. \end{eqnarray} The contribution to $\mathcal{S}$ from the zero-frequency $\tilde{m}=0$ will roughly correspond to the diagonal contribution $S_{\text{diag}}$ in \eqref{S-diagonal}. For the non-zero frequencies from the terms with $\tilde{m}\neq 0$, we note that by performing stationary phase analysis, when $|x|$ is ``large", the expected estimate for the triple integral $\mathcal{H}(x)$ in \eqref{correlation-integral} is \begin{eqnarray}\label{expect} \mathcal{H}(x)\ll t^{-1/2}\cdot t^{-1/2}\cdot |x|^{-1/2}, \end{eqnarray} which comes from the square-root cancellation of the two inner integrals and the square-root cancellation in the $\xi$-variable. Note that this estimate does not hold for ``small" $|x|$. In fact, for these exceptional cases the ``trivial" bound $\mathcal{H}(x)\ll t^{-1/2}\cdot t^{-1/2}$ will suffice for our purpose. (These are the content of Lemma \ref{integral:lemma}). We ignore these exceptions and plug the expected estimate \eqref{expect} for $\mathcal{H}(x)$ into $\mathcal{S}$. 
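The exponent bookkeeping that this scheme produces (carried out in detail below) can be verified with exact arithmetic (a sanity check of the powers of $X$ and $t$ only): the diagonal terms contribute $Q^{3/2}t$, the off-diagonal terms contribute $X^{5/4}/Q$, and the choice $Q=X^{1/2}/t^{2/5}$ balances the two at $t^{2/5}X^{3/4}$; this beats the trivial bound $X$ exactly when $t^{8/5}\ll X$, while the constraint $Q<X^{1/3}$ amounts to $X<t^{12/5}$.

```python
from fractions import Fraction as F

# Track exponents of (X, t); the choice Q = X^(1/2) * t^(-2/5)
qX, qt = F(1, 2), -F(2, 5)

# diagonal term Q^(3/2) * t and off-diagonal term X^(5/4) / Q
diag = (F(3, 2) * qX, F(3, 2) * qt + 1)
off = (F(5, 4) - qX, -qt)
assert diag == off == (F(3, 4), F(2, 5))  # both are X^(3/4) * t^(2/5)

# t^(2/5) X^(3/4) < X   <=>   t^(2/5) < X^(1/4)   <=>   t^(8/5) < X
assert F(2, 5) / F(1, 4) == F(8, 5)
# Q < X^(1/3)   <=>   X^(1/6) < t^(2/5)   <=>   X < t^(12/5)
assert qX - F(1, 3) == F(1, 6) and F(2, 5) / F(1, 6) == F(12, 5)
```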
It turns out that the non-zero frequencies contribution $S_{\text{off-diag}}$ from $\tilde{m}\neq 0$ to $\mathcal{S}$ is given by \begin{eqnarray*} S_{\text{off-diag}}&\ll&t^{1/2}\sum_{q\sim Q}\,\sum_{d|q}d \bigg(\sum_{n\sim X/Q^2}|\lambda_f(n)|^2 \sum_{n'\sim X/Q^2\atop n'\equiv n\bmod d}\frac{Q^2t^2}{dX} \sum_{0\neq \widetilde{m}\ll dX^2/(Q^4t^2)}\frac{d^{1/2}X^{1/2}}{|\widetilde{m}|^{1/2}Qt^2}\bigg)^{1/2}\\ &\ll&\frac{X^{5/4}}{Q}+X^{3/4}Q^{1/2}\\ &\ll&\frac{X^{5/4}}{Q} \end{eqnarray*} provided that $Q<X^{1/3}$. Hence combining this with the diagonal contribution $S_{\text{diag}}$ in \eqref{S-diagonal}, we get \begin{eqnarray*} \mathcal{S}\ll Q^{3/2}t+\frac{X^{5/4}}{Q}. \end{eqnarray*} By choosing $Q=X^{1/2}/t^{2/5}$ we obtain $\mathcal{S}\ll t^{2/5}X^{3/4}$ provided that $X<t^{12/5}$, which improves over the trivial bound $\mathcal{S}\ll X$ as long as $t^{8/5}\ll X$. \section{Preliminaries}\label{review-of-cuspform} First we recall some basic results on automorphic forms for $\mathrm{GL}_2$. \subsection{Holomorphic cusp forms for $\mathrm{GL}_2$} Let $f$ be a holomorphic cusp form of weight $\kappa$ for $\rm SL_2(\mathbb{Z})$ with Fourier expansion \begin{eqnarray*} f(z)=\sum_{n=1}^{\infty}\lambda_f(n)n^{(\kappa-1)/2}e(nz) \end{eqnarray*} for $\mbox{Im}\,z>0$, normalized such that $\lambda_f(1)=1$. By the Ramanujan-Petersson conjecture proved by Deligne \cite{Del}, we have $ \lambda_f(n)\ll \tau(n)\ll n^{\varepsilon} $ with $\tau(n)$ being the divisor function. For $h(x)\in \mathcal{C}_c(0,\infty)$, we set \begin{eqnarray}\label{intgeral transform-1} \Phi_h(x) =2\pi i^{\kappa} \int_0^{\infty} h(y) J_{\kappa-1}(4\pi\sqrt{xy})\mathrm{d}y, \end{eqnarray} where $J_{\kappa-1}$ is the usual $J$-Bessel function of order $\kappa-1$. We have the following Voronoi summation formula (see \cite[Theorem A.4]{KMV}). \begin{lemma}\label{voronoiGL2-holomorphic} Let $q\in \mathbb{N}$ and $a\in \mathbb{Z}$ be such that $(a,q)=1$. 
For $X>0$, we have \begin{eqnarray*}\label{voronoi for holomorphic} \sum_{n=1}^{\infty}\lambda_f(n)e\left(\frac{an}{q}\right)h\left(\frac{n}{X}\right) =\frac{X}{q} \sum_{n=1}^{\infty}\lambda_f(n) e\left(-\frac{\overline{a}n}{q}\right)\Phi_h\left(\frac{nX}{q^2}\right), \end{eqnarray*} where $\overline{a}$ denotes the multiplicative inverse of $a$ modulo $q$. \end{lemma} The function $\Phi_h(x)$ has the following asymptotic expansion when $x\gg 1$ (see \cite{LS}, Lemma 3.2). \begin{lemma}\label{voronoiGL2-holomorphic-asymptotic} For any fixed integer $J\geq 1$ and $x\gg 1$, we have \begin{eqnarray*} \Phi_h(x)=x^{-1/4} \int_0^\infty h(y)y^{-1/4} \sum_{j=0}^{J} \frac{c_{j} e(2 \sqrt{xy})+d_{j} e(-2 \sqrt{xy})} {(xy)^{j/2}}\mathrm{d}y +O_{\kappa,J}\left(x^{-J/2-3/4}\right), \end{eqnarray*} where $c_{j}$ and $d_{j}$ are constants depending on $\kappa$. \end{lemma} \subsection{Maass cusp forms for $\mathrm{GL}_2$} Let $f$ be a Hecke-Maass cusp form for $\rm SL_2(\mathbb{Z})$ with Laplace eigenvalue $1/4+\mu^2$. Then $f$ has a Fourier expansion $$ f(z)=\sqrt{y}\sum_{n\neq 0}\lambda_f(n)K_{i\mu}(2\pi |n|y)e(nx), $$ where $K_{i\mu}$ is the modified Bessel function of the third kind. The Fourier coefficients satisfy \begin{eqnarray}\label{individual bound} \lambda_f(n)\ll n^{\vartheta}, \end{eqnarray} where, here and throughout the paper, $\vartheta$ denotes the exponent towards the Ramanujan conjecture for $\rm GL_2$ Maass forms. The Ramanujan conjecture states that $\vartheta=0$ and the current record due to Kim and Sarnak \cite{KS} is $\vartheta=7/64$. We also need the following average bound (see for instance \cite[Lemma 1]{Murty}) \begin{eqnarray}\label{GL2: Rankin Selberg} \sum_{n\leq X}|\lambda_f(n)|^2= c_{f} X+O\big(X^{3/5}\big).
\end{eqnarray} For $h(x)\in \mathcal{C}_c^{\infty}(0,\infty)$, we define the integral transforms \begin{eqnarray}\label{intgeral transform-2} \begin{split} \Phi_h^+(x) =& \frac{-\pi}{\sin(\pi i\mu)} \int_0^\infty h(y)\left(J_{2i\mu}(4\pi\sqrt{xy}) - J_{-2i\mu}(4\pi\sqrt{xy})\right) \mathrm{d}y,\\ \Phi_h^-(x) =& 4\varepsilon_f\cosh(\pi \mu)\int_0^\infty h(y)K_{2i\mu}(4\pi\sqrt{xy}) \mathrm{d}y, \end{split}\end{eqnarray} where $\varepsilon_f$ is an eigenvalue under the reflection operator. We have the following Voronoi summation formula (see \cite[Theorem A.4]{KMV}). \begin{lemma}\label{voronoiGL2-Maass} Let $q\in \mathbb{N}$ and $a\in \mathbb{Z}$ be such that $(a,q)=1$. For $X>0$, we have \begin{eqnarray*}\label{voronoi for Maass form} \sum_{n=1}^{\infty}\lambda_f(n)e\left(\frac{an}{q}\right)h\left(\frac{n}{X}\right) = \frac{X}{q} \sum_{\pm}\sum_{n=1}^{\infty}\lambda_f(n) e\left(\mp\frac{\overline{a}n}{q}\right)\Phi_h^{\pm}\left(\frac{nX}{q^2}\right), \end{eqnarray*} where $\overline{a}$ denotes the multiplicative inverse of $a$ modulo $q$. \end{lemma} For $x\gg 1$, we have (see (3.8) in \cite{LS}) \begin{eqnarray}\label{The $-$ case} \Phi_h^-(x)\ll_{\mu,A}x^{-A}. \end{eqnarray} For $\Phi_h^+(x)$ and $x\gg 1$, we have a similar asymptotic formula as for $\Phi_h(x)$ in the holomorphic case (see \cite{LS}, Lemma 3.4). \begin{lemma}\label{voronoiGL2-Maass-asymptotic} For any fixed integer $J\geq 1$ and $x\gg 1$, we have \begin{eqnarray*} \Phi_h^{+}(x)=x^{-1/4} \int_0^\infty h(y)y^{-1/4} \sum_{j=0}^{J} \frac{c_{j} e(2 \sqrt{xy})+d_{j} e(-2 \sqrt{xy})} {(xy)^{j/2}}\mathrm{d}y +O_{\mu,J}\left(x^{-J/2-3/4}\right), \end{eqnarray*} where $c_{j}$ and $d_{j}$ are some constants depending on $\mu$. \end{lemma} \begin{remark}\label{decay-of-largeX} For $x\gg X^{\varepsilon}$, we can choose $J$ sufficiently large so that the contribution from the $O$-terms in Lemmas \ref{voronoiGL2-holomorphic-asymptotic} and \ref{voronoiGL2-Maass-asymptotic} is negligible. 
For the main terms we only need to analyze the leading term $j=0$, as the analysis of the remaining lower order terms is the same and their contribution is smaller compared to that of the leading term. \end{remark} \subsection{Estimates for exponential integrals} Let \begin{equation*} I = \int_{\mathbb{R}} w(y) e^{i \varrho(y)} dy. \end{equation*} First, we have the following estimates for exponential integrals (see \cite[Lemma 8.1]{BKY} and \cite[Lemma A.1]{AHLQ}). \begin{lemma}\label{lem: upper bound} Let $w(x)$ be a smooth function supported on $[ a, b]$ and $\varrho(x)$ be a real smooth function on $[a, b]$. Suppose that there are parameters $Q, U, Y, Z, R > 0$ such that \begin{align*} \varrho^{(i)} (x) \ll_i Y / Q^{i}, \qquad w^{(j)} (x) \ll_{j } Z / U^{j}, \end{align*} for $i \geqslant 2$ and $j \geqslant 0$, and \begin{align*} | \varrho' (x) | \geqslant R. \end{align*} Then for any $A \geqslant 0$ we have \begin{align*} I \ll_{ A} (b - a) Z \bigg( \frac {Y} {R^2Q^2} + \frac 1 {RQ} + \frac 1 {RU} \bigg)^A . \end{align*} \end{lemma} Next, we need the following evaluation for exponential integrals, which restates Lemma 8.1 and Proposition 8.2 of \cite{BKY} in the language of inert functions (see \cite[Lemma 3.1]{KPY}). Let $\mathcal{F}$ be an index set and let $Y: \mathcal{F}\rightarrow\mathbb{R}_{\geq 1}$, $T\mapsto Y_T$, be a function on $\mathcal{F}$. A family $\{w_T\}_{T\in \mathcal{F}}$ of smooth functions supported on a product of dyadic intervals in $\mathbb{R}_{>0}^d$ is called $Y$-inert if for each $j=(j_1,\ldots,j_d) \in \mathbb{Z}_{\geq 0}^d$ we have \begin{eqnarray*} C(j_1,\ldots,j_d) = \sup_{T \in \mathcal{F} } \sup_{(y_1, \ldots, y_d) \in \mathbb{R}_{>0}^d} Y_T^{-j_1- \cdots -j_d}\left| y_1^{j_1} \cdots y_d^{j_d} w_T^{(j_1,\ldots,j_d)}(y_1,\ldots,y_d) \right| < \infty.
\end{eqnarray*} \begin{lemma} \label{lemma:exponentialintegral} Suppose that $w = w_T(y)$ is a family of $Y$-inert functions, with compact support on $[Z, 2Z]$, so that $w^{(j)}(y) \ll (Z/Y)^{-j}$. Also suppose that $\varrho$ is smooth and satisfies $\varrho^{(j)}(y) \ll H/Z^j$ for some $H/Y^2 \geq R \geq 1$ and all $y$ in the support of $w$. \begin{enumerate} \item If $|\varrho'(y)| \gg H/Z$ for all $y$ in the support of $w$, then $I \ll_A Z R^{-A}$ for $A$ arbitrarily large. \item If $\varrho''(y) \gg H/Z^2$ for all $y$ in the support of $w$, and there exists $y_0 \in \mathbb{R}$ such that $\varrho'(y_0) = 0$ (note $y_0$ is necessarily unique), then \begin{equation} I = \frac{e^{i \varrho(y_0)}}{\sqrt{\varrho''(y_0)}} F(y_0) + O_{A}( Z R^{-A}), \end{equation} where $F(y_0)$ is a $Y$-inert function (depending on $A$) supported on $y_0 \asymp Z$. \end{enumerate} \end{lemma} We also need the second derivative test (see \cite[Lemma 5.1.3]{Hux2}). \begin{lemma}\label{lem: 2st derivative test, dim 1} Let $\varrho(x)$ be real and twice differentiable on the interval $[a, b]$ with $ \varrho'' (x) \gg \lambda_0>0$ on $[a, b]$. Let $w(x)$ be real on $[ a, b]$ and let $V_0$ be its total variation on $[ a, b]$ plus the maximum modulus of $w(x)$ on $[ a, b]$. Then \begin{align*} I\ll \frac {V_0} {\sqrt{\lambda_0}}. \end{align*} \end{lemma} \section{Proof of the main theorem}\label{details-of-proof} In this section, we provide the details of the proof of Theorem \ref{main-theorem}. Recall \begin{eqnarray}\label{main sum} S(X,t)=\sum_{n=1}^{\infty}\lambda_f(n) \lambda_g(n)e\left(t \varphi\left(\frac{n}{X}\right)\right)V\left(\frac{n}{X}\right), \end{eqnarray} where $V(x)\in \mathcal{C}_c^{\infty}(1,2)$ with total variation $\text{Var}(V)\ll 1$, satisfying \eqref{derivative-of-V}, that is, $V^{(j)}(x)\ll_j \triangle^j$ for any integer $j\geq 0$ with $\triangle\ll t^{1/2-\varepsilon}$.
Without loss of generality, we assume that the function $\varphi$ satisfies \begin{eqnarray}\label{first-derivative-varphi} \varphi'(x)>0, \qquad \varphi''(x)\gg 1. \end{eqnarray} (The case $\varphi'(x)<0$ can be analyzed analogously.) \subsection{Applying DFI's $\delta$-method} Define $\delta: \mathbb{Z}\rightarrow \{0,1\}$ with $\delta(0)=1$ and $\delta(n)=0$ for $n\neq 0$. As in \cite{LS}, we will use a version of the circle method of Duke, Friedlander and Iwaniec (see \cite[Chapter 20]{IK}) which states that for any $n\in \mathbb{Z}$ and $Q\in \mathbb{R}^+$, we have \begin{eqnarray}\label{DFI's} \delta(n)=\frac{1}{Q}\sum_{q\sim Q} \;\frac{1}{q}\; \sideset{}{^\star}\sum_{a\bmod{q}}e\left(\frac{na}{q}\right) \int_\mathbb{R}g(q,\zeta) e\left(\frac{n\zeta}{qQ}\right)\mathrm{d}\zeta \end{eqnarray} where the $\star$ on the sum indicates that the sum over $a$ is restricted to $(a,q)=1$. The function $g$ has the following properties (see (20.158) and (20.159) of \cite{IK} and Lemma 15 of \cite{HB}) \begin{eqnarray}\label{g-h} g(q,\zeta)\ll |\zeta|^{-A}, \;\;\;\;\;\; g(q,\zeta) =1+h(q,\zeta)\;\;\text{with}\;\;h(q,\zeta)= O\left(\frac{Q}{q}\left(\frac{q}{Q}+|\zeta|\right)^A\right) \end{eqnarray} for any $A>1$ and \begin{eqnarray}\label{rapid decay g} \zeta^j\frac{\partial^j}{\partial \zeta^j}g(q,\zeta)\ll (\log Q)\min\left\{\frac{Q}{q},\frac{1}{|\zeta|}\right\}, \qquad j\geq 1. \end{eqnarray} In particular the first property in \eqref{g-h} implies that the effective range of the integration in \eqref{DFI's} is $[-X^\varepsilon, X^\varepsilon]$. We write \eqref{main sum} as \begin{eqnarray*} S(X,t)=\sum_{n=1}^{\infty}\lambda_{f}(n) U\left(\frac{n}{X}\right)\sum_{m=1}^{\infty} \lambda_g(m)e\left(t \varphi\left(\frac{m}{X}\right)\right) V\left(\frac{m}{X}\right)\delta(m-n), \end{eqnarray*} where $U(x)\in \mathcal{C}_c^{\infty}(1/2,5/2)$ satisfying $U(x)=1$ for $x\in [1,2]$ and $U^{(j)}(x)\ll_j 1$ for any integer $j\geq 0$. 
Plugging in the identity \eqref{DFI's} for $\delta(m-n)$ and exchanging the order of integration and summations, we get \begin{eqnarray*} S(X,t)&=&\frac{1}{Q} \int_{\mathbb{R}} \sum_{q\sim Q}\frac{g(q,\zeta)}{q}\; \sideset{}{^\star}\sum_{a\bmod{q}} \left\{\sum_{n=1}^{\infty}\lambda_{f}(n) e\left(-\frac{na}{q}\right)U\left(\frac{n}{X}\right) e\left(-\frac{n\zeta}{qQ}\right)\right\}\nonumber\\ && \left\{\sum_{m=1}^{\infty}\lambda_{g}(m)e\left(\frac{ma}{q}\right) V\left(\frac{m}{X}\right) e\left(t \varphi\left(\frac{m}{X}\right)+\frac{m\zeta}{qQ}\right)\right\}\mathrm{d}\zeta. \end{eqnarray*} Note that the contribution from $|\zeta|\leq X^{-B}$ is negligible for $B>0$ sufficiently large. Moreover, by the first property in \eqref{g-h}, we can restrict $\zeta$ to the range $|\zeta|\leq X^{\varepsilon}$ up to a negligible error. So we can insert a smooth partition of unity for the $\zeta$-integral and get \begin{eqnarray*} S(X,t)&=&\sum_{X^{-B}\ll \Xi\ll X^{\varepsilon}\atop \text{dyadic}}\frac{1}{Q} \int_{\mathbb{R}}\widetilde{W}\left(\frac{\zeta}{\Xi}\right) \sum_{q\sim Q}\frac{g(q,\zeta)}{q}\; \sideset{}{^\star}\sum_{a\bmod{q}} \left\{\sum_{n=1}^{\infty}\lambda_f(n) e\left(-\frac{na}{q}\right)U\left(\frac{n}{X}\right) e\left(-\frac{n\zeta}{qQ}\right)\right\}\nonumber\\ && \left\{\sum_{m=1}^{\infty}\lambda_g(m)e\left(\frac{ma}{q}\right) V\left(\frac{m}{X}\right) e\left(t \varphi\left(\frac{m}{X}\right)+\frac{m\zeta}{qQ}\right)\right\}\mathrm{d}\zeta +O_A(X^{-A}), \end{eqnarray*} where $\widetilde{W}(x)\in \mathcal{C}_c^{\infty}(1,2)$, satisfying $\widetilde{W}^{(j)}(x)\ll_j 1$ for any integer $j\geq 0$.
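The dyadic decompositions used here for the $\zeta$-integral (and next for the $q$-sum) can be illustrated concretely; the toy check below (an illustration only) covers the integers in $[1,Q]$ by $O(\log Q)$ dyadic blocks $q\sim C$, i.e. $C<q\leq 2C$, each integer falling in exactly one block.

```python
import math

Q = 1000
# dyadic block parameters C = 1/2, 1, 2, 4, ... below Q
blocks = []
C = 0.5
while C < Q:
    blocks.append(C)
    C *= 2

# every integer 1 <= q <= Q lies in exactly one block (C, 2C]
for qq in range(1, Q + 1):
    assert sum(1 for C in blocks if C < qq <= 2 * C) == 1

# only O(log Q) blocks are needed
assert len(blocks) <= math.log2(Q) + 2
print(len(blocks), "blocks cover [1, 1000]")
```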
Next we break the $q$-sum $\sum_{q\sim Q}$ into dyadic segments $q\sim C$ with $1\ll C\ll Q$ and write \begin{eqnarray}\label{C range} S(X,t)=\sum_{X^{-B}\ll \Xi\ll X^{\varepsilon}\atop \text{dyadic}} \sum_{1\ll C\ll Q\atop \text{dyadic}}\mathscr{S}(C,\Xi)+O(X^{-A}), \end{eqnarray} where $\mathscr{S}(C,\Xi)=\mathscr{S}(X,t,C,\Xi)$ is \begin{eqnarray}\label{beforeVoronoi} \mathscr{S}(C,\Xi)&=&\frac{1}{Q} \int_{\mathbb{R}}\widetilde{W}\left(\frac{\zeta}{\Xi}\right) \sum_{q\sim C}\frac{g(q,\zeta)}{q}\; \sideset{}{^\star}\sum_{a\bmod{q}} \left\{\sum_{n=1}^{\infty}\lambda_f(n) e\left(-\frac{na}{q}\right)U\left(\frac{n}{X}\right) e\left(-\frac{n\zeta}{qQ}\right)\right\}\nonumber\\ && \left\{\sum_{m=1}^{\infty}\lambda_g(m)e\left(\frac{ma}{q}\right) V\left(\frac{m}{X}\right) e\left(t \varphi\left(\frac{m}{X}\right)+\frac{m\zeta}{qQ}\right)\right\}\mathrm{d}\zeta. \end{eqnarray} Without loss of generality, for the $\zeta$-integral, we only consider the contribution from $\zeta\geq 0$ (the contribution from $\zeta\leq 0$ can be estimated similarly). By abuse of notation, we still write the contribution from $\zeta\geq 0$ as $\mathscr{S}(C,\Xi)$. We now proceed to estimate $\mathscr{S}(C,\Xi)$. \subsection{Voronoi summations} In what follows, we dualize the $n$- and $m$-sums in \eqref{beforeVoronoi} using Voronoi summation formulas. We first consider the sum over $m$.
Depending on whether $g$ is holomorphic or Maass, we apply Lemma \ref{voronoiGL2-holomorphic} or Lemma \ref{voronoiGL2-Maass} respectively with $h_1(y)=V(y) e\left(t\varphi(y)+\zeta Xy/qQ\right)$, to transform the $m$-sum in \eqref{beforeVoronoi} into \begin{eqnarray}\label{after GL2 Voronoi} \frac{X}{q}\sum_{\pm}\sum_{m=1}^{\infty}\lambda_{g}(m)e\left(\mp \frac{m\overline{a}}{q}\right) \Phi_{h_1}^{\pm}\left(\frac{mX}{q^2}\right), \end{eqnarray} where if $g$ is holomorphic, $\Phi_{h_1}^+(x)=\Phi_{h_1}(x)$ with $\Phi_{h_1}(x)$ given by \eqref{intgeral transform-1} and $\Phi_{h_1}^-(x)=0$, while for $g$ a Hecke--Maass cusp form, $\Phi_{h_1}^{\pm}(x)$ are given by \eqref{intgeral transform-2}. Similarly, we apply Lemma \ref{voronoiGL2-holomorphic} or Lemma \ref{voronoiGL2-Maass} with $h_2(y)=U(y)e\left(-\zeta Xy/qQ\right)$ to transform the $n$-sum in \eqref{beforeVoronoi} into \begin{eqnarray}\label{$n$-sum after GL2 Voronoi} \frac{X}{q}\sum_{\pm}\sum_{n=1}^{\infty}\lambda_f(n)e\left(\pm \frac{n\overline{a}}{q}\right) \Phi_{h_2}^{\pm}\left(\frac{nX}{q^2}\right), \end{eqnarray} where if $f$ is holomorphic, $\Phi_{h_2}^+(x)=\Phi_{h_2}(x)$ with $\Phi_{h_2}(x)$ given by \eqref{intgeral transform-1} and $\Phi_{h_2}^-(x)=0$, while for $f$ a Hecke--Maass cusp form, $\Phi_{h_2}^{\pm}(x)$ are given by \eqref{intgeral transform-2}. As is typical in applying the $\delta$-method, we assume that \begin{eqnarray}\label{assumption 1} Q<X^{1/2-\varepsilon}. \end{eqnarray} Then we have $mX/q^2\gg X^{\varepsilon}$ and $nX/q^2\gg X^{\varepsilon}$. In particular, by \eqref{The $-$ case}, the contributions from $\Phi_{h_1}^{-}\left(mX/q^2\right)$ and $\Phi_{h_2}^{-}\left(nX/q^2\right)$ are negligible.
For $\Phi_{h_1}^{+}\left(mX/q^2\right)$ and $\Phi_{h_2}^{+}\left(nX/q^2\right)$, we apply Lemma \ref{voronoiGL2-holomorphic-asymptotic}, Lemma \ref{voronoiGL2-Maass-asymptotic} and Remark \ref{decay-of-largeX} and find that the sum \eqref{after GL2 Voronoi} is asymptotically equal to \begin{eqnarray}\label{integral 1} \frac{X^{3/4}}{q^{1/2}}\sum_{\pm}\sum_{m=1}^{\infty}\frac{\lambda_g(m)}{m^{1/4}} e\left(-\frac{m\overline{a}}{q}\right)\int_0^\infty V(y)y^{-1/4} e\left(t\varphi(y)+\frac{\zeta Xy}{qQ}\pm \frac{2\sqrt{mXy}}{q}\right)\mathrm{d}y, \end{eqnarray} and the sum \eqref{$n$-sum after GL2 Voronoi} is asymptotically equal to \begin{eqnarray}\label{integral 2} \frac{X^{3/4}}{q^{1/2}}\sum_{\pm}\sum_{n=1}^{\infty}\frac{\lambda_f(n)}{n^{1/4}} e\left(\frac{n\overline{a}}{q}\right)\int_0^\infty U(y)y^{-1/4} e\left(-\frac{\zeta Xy}{qQ}\pm \frac{2\sqrt{nXy}}{q}\right)\mathrm{d}y. \end{eqnarray} Note that by the assumption \eqref{first-derivative-varphi}, the first derivative of the phase function of the exponential function in \eqref{integral 1} in the plus case is \begin{eqnarray*} t\varphi'(y)+\frac{\zeta X}{qQ}+\frac{\sqrt{mX/y}}{q}\gg X^{\varepsilon}. \end{eqnarray*} By applying integration by parts repeatedly, one finds that the contribution from the plus case is negligible. Similarly, the contribution from the minus case in \eqref{integral 2} is negligible. 
Assembling the above results, $\mathscr{S}(C,\Xi)$ in \eqref{beforeVoronoi} is asymptotically equal to \begin{eqnarray}\label{GL2} && \frac{X^{3/2}}{Q}\sum_{q\sim C}q^{-2} \sum_{m=1}^{\infty}\frac{\lambda_g(m)}{m^{1/4}} \sum_{n=1}^{\infty}\frac{\lambda_f(n)}{n^{1/4}}S\left(m-n,0;q\right)\mathcal{I}(m,n,q,\Xi), \end{eqnarray} where \begin{eqnarray}\label{I definition} \mathcal{I}(m,n,q,\Xi)= \int_{0}^{\infty}W\left(\frac{\zeta}{\Xi}\right) g(q,\zeta) \Phi\left(m,q,\zeta\right)\Psi\left(n,q,\zeta\right) \mathrm{d}\zeta \end{eqnarray} with \begin{eqnarray}\label{Phi definition} \Phi\left(m,q,\zeta\right)=\int_0^\infty V(y)y^{-1/4} e\left(t\varphi(y)+\frac{\zeta Xy}{qQ}-\frac{2\sqrt{mXy}}{q}\right)\mathrm{d}y \end{eqnarray} and \begin{eqnarray}\label{Psi definition} \Psi\left(n,q,\zeta\right)=\int_0^\infty U(y)y^{-1/4} e\left(-\frac{\zeta Xy}{qQ}+\frac{2\sqrt{nXy}}{q}\right)\mathrm{d}y. \end{eqnarray} Note that for $\triangle<t^{1/2-\varepsilon}$, defined in \eqref{derivative-of-V}, by Lemma \ref{lem: upper bound}, the integral $\Phi\left(m,q,\zeta\right)$ is negligibly small unless $\sqrt{mX}/q\ll X^{\varepsilon}\max\left\{t,X\Xi/qQ\right\}$. Thus we only need to consider those $m$ in the range $m \ll X^{\varepsilon}\max\{C^2t^2/X,X\Xi^2/Q^2\}$. Similarly, up to a negligible error, we only need to consider those $n$ in the range $n\asymp X\Xi^2/Q^2$. Making smooth partitions of unity into dyadic segments to the sums over $m$ and $n$ in \eqref{GL2}, we arrive at \begin{eqnarray}\label{M-N1-range} \mathscr{S}(C,\Xi)\ll \sum_{ M \ll X^{\varepsilon}\max\{C^2t^2/X,X\Xi^2/Q^2\}\atop \text{dyadic}} \sum_{N_1\asymp X\Xi^2/Q^2\atop \text{dyadic}} \left|\mathbf{T}\right|, \end{eqnarray} where temporarily, $\mathbf{T}:=\mathbf{T}(X,C,M,N_1,\Xi)$ is given by \begin{eqnarray}\label{T definition} \mathbf{T}=\frac{X^{3/2}}{Q}\sum_{q\sim C} q^{-2} \sum_{m\sim M } \frac{\lambda_g(m)}{m^{1/4}} \sum_{n\sim N_1}\frac{\lambda_f(n)}{n^{1/4}}S\left(m-n,0;q\right)\mathcal{I}(m,n,q,\Xi). 
\end{eqnarray} Now we consider the integral $\mathcal{I}(m,n,q,\Xi)$ in \eqref{I definition}. First we apply the stationary phase method to the integral $\Psi\left(n,q,\zeta\right)$ in \eqref{Psi definition}. The stationary point $y_0$ is given by $y_0=nQ^2/(X\zeta^2)$. Applying Lemma \ref{lemma:exponentialintegral} (2) with $Y=Z=1$ and $H=R=\sqrt{nX}/q\gg X^{\varepsilon}$, we obtain \begin{eqnarray*} \Psi\left(n,q,\zeta\right) =\frac{q^{1/2}}{(nX)^{1/4}} e\left(\frac{nQ}{q\zeta}\right) U^{\natural}\left(\frac{nQ^2}{X\zeta^2}\right)+O_A\left(X^{-A}\right), \end{eqnarray*} where $U^{\natural}$ is a $1$-inert function (depending on $A$) supported on $y_0 \asymp 1$. Plugging this asymptotic formula for $\Psi\left(n,q,\zeta\right)$ and \eqref{Phi definition} into \eqref{I definition} and switching the order of integration, one has \begin{eqnarray}\label{I-middle} \mathcal{I}(m,n,q,\Xi)=\frac{q^{1/2}}{(nX)^{1/4}} \int_0^\infty \mathcal{K}(y;n,q,\Xi)V(y)y^{-1/4} e\bigg(t\varphi(y)-\frac{2\sqrt{mXy}}{q}\bigg) \mathrm{d}y+O_A(X^{-A}) \end{eqnarray} with \begin{eqnarray*} \mathcal{K}(y;n,q,\Xi)= \int_0^{\infty}g(q,\zeta) W\bigg(\frac{\zeta}{X^{\varepsilon}}\bigg)\widetilde{W}\bigg(\frac{\zeta}{\Xi}\bigg) U^{\natural}\bigg(\frac{nQ^2}{X\zeta^2}\bigg) e\bigg(\frac{\zeta Xy}{qQ}+\frac{nQ}{q\zeta}\bigg) \mathrm{d}\zeta. \end{eqnarray*} Next, we derive an asymptotic expansion for $\mathcal{K}(y;n,q,\Xi)$.
By changing variable $nQ^2/(X\zeta^2)\rightarrow \zeta$, \begin{eqnarray*} \mathcal{K}(y;n,q,\Xi) =\frac{n^{1/2}Q}{X^{1/2}} \int_0^{\infty}\phi(\zeta) \exp\big(i\varpi(\zeta)\big)\mathrm{d}\zeta, \end{eqnarray*} where \begin{eqnarray*} \phi(\zeta):=-\frac{1}{2}\zeta^{-3/2}U^{\natural}(\zeta) g\bigg(q,\frac{n^{1/2}Q}{\zeta^{1/2}X^{1/2}}\bigg) W\bigg(\frac{n^{1/2}Q}{\zeta^{1/2}X^{1/2+\varepsilon}}\bigg) \widetilde{W}\bigg(\frac{n^{1/2}Q}{\zeta^{1/2}X^{1/2}\Xi}\bigg) \end{eqnarray*} and the phase function $\varpi(\zeta)$ is given by \begin{eqnarray*} \varpi(\zeta)=\frac{2\pi n^{1/2}X^{1/2}}{q} \left(y\zeta^{-1/2}+\zeta^{1/2}\right). \end{eqnarray*} Note that \begin{eqnarray*} \varpi'(\zeta)= \frac{\pi n^{1/2}X^{1/2}}{q} \left(-y\zeta^{-3/2}+\zeta^{-1/2}\right), \end{eqnarray*} and for $j\geq 2$, \begin{eqnarray*} \varpi^{(j)}(\zeta) =\left(-\frac{3}{2}\right)\cdot\cdot\cdot\left(\frac{1}{2}-j\right) \frac{\pi n^{1/2}X^{1/2}}{q}\left( -y\zeta^{-1/2-j}+\frac{1}{2j-1}\zeta^{1/2-j}\right). \end{eqnarray*} Thus the stationary point is $\zeta_0=y$ and $\varpi^{(j)}(\zeta)\ll_j n^{1/2}X^{1/2}/q$ for $j\geq 2$. By \eqref{rapid decay g}, we have $\phi^{(j)}(\zeta)\ll_j X^{\varepsilon}$. Applying Lemma \ref{lemma:exponentialintegral} (2) with $Y=Z=1$ and $H=R=n^{1/2}X^{1/2}/q\gg X^{\varepsilon}$, we obtain \begin{eqnarray}\label{K-integral} \mathcal{K}(y;n,q,\Xi) =\frac{n^{1/4}q^{1/2}Q}{X^{3/4}}e\left(\frac{2\sqrt{nXy}}{q}\right) F(y)+O_A\left(X^{-A}\right), \end{eqnarray} where $F(y)=F(y;\Xi)$ is an inert function (depending on $A$ and $\Xi$) supported on $\zeta_0 \asymp 1$. Substituting \eqref{K-integral} into \eqref{I-middle}, we get \begin{eqnarray}\label{I-middle-0} \mathcal{I}(m,n,q,\Xi)=\frac{qQ}{X} \int_0^\infty V(y)F(y)y^{-1/4} e\bigg(t\varphi(y)+\frac{2\sqrt{nXy}}{q}-\frac{2\sqrt{mXy}}{q}\bigg) \mathrm{d}y+O_A(X^{-A}). 
\end{eqnarray} Further substituting \eqref{I-middle-0} into \eqref{T definition} and writing the Ramanujan sum $S\left(m-n,0;q\right)$ as $\sum_{d|(m-n,q)}d\mu(q/d)$, one has \begin{eqnarray}\label{beforeCauchy} \mathbf{T}=X^{1/2}\sum_{q\sim C} q^{-1}\sum_{d|q}d\mu\left(\frac{q}{d}\right) \sum_{m\sim M } \frac{\lambda_g(m)}{m^{1/4}} \sum_{n\sim N_1\atop n\equiv m\bmod d}\frac{\lambda_f(n)}{n^{1/4}}\mathfrak{I}(m,n,q)+O_A\left(X^{-A}\right), \end{eqnarray} where $\mathfrak{I}(m,n,q)=\mathfrak{I}(m,n,q;\Xi)$ is given by \begin{eqnarray}\label{I-definition} \mathfrak{I}(m,n,q) =\int_0^\infty \widetilde{V}(y) e\bigg(t\varphi(y)+\frac{2\sqrt{nXy}}{q}-\frac{2\sqrt{mXy}}{q}\bigg) \mathrm{d}y. \end{eqnarray} Here $\widetilde{V}(y)=V(y)F(y)y^{-1/4}$, which satisfies $\widetilde{V}^{(j)}(y)\ll_j \triangle^j$ and $\text{Var}(\widetilde{V})\ll 1$. Recall that $\triangle$ denotes the quantity such that $V^{(j)}(x)\ll \triangle^j$ (see \eqref{derivative-of-V}). Making a change of variable $y\rightarrow y^2$ in \eqref{I-definition}, one has \begin{eqnarray}\label{I-change-0} \mathfrak{I}(m,n,q) =2\int_0^\infty y\widetilde{V}(y^2) e\left(t\varphi(y^2)+\frac{2X^{1/2}}{q}\left(n^{1/2}-m^{1/2}\right)y\right) \mathrm{d}y. \end{eqnarray} Since the properties of the integral $\mathfrak{I}(m,n,q)$ depend on the size of $C$, we distinguish the cases $C\leq X^{1+\varepsilon}\Xi/(Qt)$ and $X^{1+\varepsilon}\Xi/(Qt) \leq C\ll Q$. \subsection{The case of small modulus} We first deal with the case $1\ll C\leq X^{1+\varepsilon}\Xi/(Qt)$. If we assume $\left(\varphi(y^2)\right)''\gg 1$, equivalently $\varphi(y)\neq cy^{1/2}+c_0$ for any constant $c_0$, then the second derivative of the phase function satisfies \begin{eqnarray*} t\left(\varphi(y^2)\right)''\gg t \end{eqnarray*} and by Lemma \ref{lem: 2st derivative test, dim 1}, we have \begin{eqnarray*} \mathfrak{I}(m,n,q)\ll t^{-1/2}.
\end{eqnarray*} By this estimate, \eqref{individual bound} and \eqref{GL2: Rankin Selberg}, $\mathbf{T}$ in \eqref{beforeCauchy} can be bounded by \begin{eqnarray*}\label{small T} \mathbf{T}&\ll& \frac{X^{1/2}N_1^{\vartheta}}{t^{1/2}(MN_1)^{1/4}}\sum_{q\sim C} q^{-1}\sum_{d|q}d \sum_{m\sim M }|\lambda_g(m)| \sum_{n\sim N_1\atop n\equiv m\bmod d}1\nonumber\\ &\ll& \frac{X^{1/2}M^{3/4}N_1^{\vartheta}}{t^{1/2}N_1^{1/4}}\sum_{q\sim C} q^{-1}\sum_{d|q}d\bigg(1+\frac{N_1}{d}\bigg)\nonumber\\ &\ll&t^{-1/2}X^{1/2}M^{3/4}N_1^{-1/4+\vartheta}(C+N_1)\nonumber\\ &\ll&t^{-1/2}X^{1/2+\varepsilon}\frac{X^{1/2+\vartheta}\Xi^{1+2\vartheta}}{Q^{1+2\vartheta}} \left(\frac{X\Xi}{Qt}+\frac{X\Xi^2}{Q^2}\right)\nonumber\\ &\ll&\frac{X^{2+\vartheta+\varepsilon}}{Q^{2+2\vartheta}t^{1/2}}\left(\frac{1}{t}+\frac{1}{Q}\right), \end{eqnarray*} recalling $\Xi\ll X^{\varepsilon}$, $M \ll X^{\varepsilon}\max\{C^2t^2/X,X\Xi^2/Q^2\}\ll X^{1+\varepsilon}\Xi^2/Q^2$ and $N_1\asymp X\Xi^2/Q^2$ in \eqref{M-N1-range}. Assume now \begin{eqnarray}\label{assumption 3} Q<t. \end{eqnarray} Then the contribution from $1\ll C\leq X^{1+\varepsilon}\Xi/(Qt)$ to $\mathscr{S}(C,\Xi)$ in \eqref{M-N1-range} is at most \begin{eqnarray}\label{small contribution} \frac{X^{2+\vartheta+\varepsilon}}{Q^{3+2\vartheta}t^{1/2}}. \end{eqnarray} \subsection{The case of large modulus} In the subsequent sections, we deal with the case $X^{1+\varepsilon}\Xi/(Qt) \leq C\ll Q$. In this case, we will evaluate the integral $\mathfrak{I}(m,n,q)$ more precisely. The integral $\mathfrak{I}(m,n,q)$ has the following properties, which will be proved in Section \ref{proofs-of-technical-lemma}. \begin{lemma}\label{integral:lemma-0} Assume $V^{(j)}(x)\ll \triangle^j$ as defined in \eqref{derivative-of-V} with $\triangle<t^{1/2-\varepsilon}$ and $C$ satisfies $C\geq X^{1+\varepsilon}\Xi/(Qt)$. Further assume $\varphi(x)=c\log x$ or $cx^{\beta}$ with $\beta\in (0,1), \beta\neq 1/2$.
Then we have \begin{eqnarray}\label{I-stationary phase-2} \mathfrak{I}(m,n,q) = e\left(t\varphi(y_0^2)-Dy_0\right)\mathfrak{I}^*(m,n,q) + O_{A}(t^{-A}), \end{eqnarray} where $y_0=\left(ct/D\right)^{1/\beta}$ with $D=2q^{-1}(mX)^{1/2}$ and \begin{eqnarray}\label{I*} \mathfrak{I}^*(m,n,q) =\frac{1}{\sqrt{t}}G_{\natural}(y_*) e\left(By_0+\frac{y_0^2}{2c\beta^2}\frac{B^2}{t}+B\sum_{j=2}^{K_2}g_{c,\beta,j}\left(y_0\right) \left(\frac{B}{t}\right)^j\right) + O_{A}(t^{-A}). \end{eqnarray} Here $B=2q^{-1}(nX)^{1/2}$, $y_*$ is defined in \eqref{stationary point}, $G_{\natural}(x)$ is some inert function supported on $x\asymp 1$ and $g_{c,\beta, j}(x)$ is some polynomial function depending only on $c,\beta,j$. \end{lemma} \subsubsection{Cauchy-Schwarz and Poisson summation} Inserting \eqref{I-stationary phase-2} into \eqref{beforeCauchy}, applying the Cauchy-Schwarz inequality to the resulting sum (note that the factor $e\left(t\varphi(y_0^2)-Dy_0\right)$ is unimodular and depends only on $m$ and $q$) and using the Rankin-Selberg estimate \eqref{GL2: Rankin Selberg}, one sees that \begin{eqnarray}\label{Cauchy} \mathbf{T}&\ll&\frac{X^{1/2}}{M^{1/4}}\sum_{q\sim C} q^{-1}\sum_{d|q}d \bigg(\sum_{m\sim M } |\lambda_g(m)|^2\bigg)^{1/2}\left(\sum_{m\sim M}\bigg| \sum_{n\sim N_1\atop n\equiv m\bmod d}\lambda_f(n)n^{-1/4}\mathfrak{I}^*(m,n,q)\bigg|^2\right)^{1/2}\nonumber\\ &\ll&X^{1/2}M^{1/4}\sum_{q\sim C}q^{-1}\sum_{d|q}d\sqrt{\mathbf{\Omega}(q,d)}, \end{eqnarray} where \begin{eqnarray}\label{Omega} \mathbf{\Omega}(q,d)=\sum_{m\in \mathbb{Z}}\omega\left(\frac{m}{M}\right)\bigg| \sum_{n\sim N_1\atop n\equiv m\bmod d}\lambda_f(n)n^{-1/4}\mathfrak{I}^*(m,n,q)\bigg|^2. \end{eqnarray} Here $\omega$ is a nonnegative smooth function on $(0,+\infty)$, supported on $[2/3,3]$, and such that $\omega(x)=1$ for $x\in [1,2]$.
Opening the absolute square, we break the $m$-sum into congruence classes modulo $d$ and apply the Poisson summation formula to the sum over $m$ to get \begin{eqnarray}\label{omega-bound} \mathbf{\Omega}(q,d) &=&\sum_{n_1\sim N_1}\lambda_f(n_1)n_1^{-1/4} \sum_{n_2\sim N_1\atop n_2\equiv n_1\bmod d}\overline{\lambda_f(n_2)}n_2^{-1/4} \sum_{m\equiv n_1\bmod d}\omega\left(\frac{m}{M}\right)\mathfrak{I}^*(m,n_1,q) \overline{\mathfrak{I}^*(m,n_2,q)}\nonumber\\ &=&\frac{M}{d}\sum_{n_1\sim N_1}\lambda_f(n_1)n_1^{-1/4} \sum_{n_2\sim N_1\atop n_2\equiv n_1\bmod d}\overline{\lambda_f(n_2)}n_2^{-1/4} \sum_{\widetilde{m}\in \mathbb{Z}}e\left(-\frac{\widetilde{m}n_1}{d}\right) \mathcal{H}\left(\frac{\widetilde{m}M}{d}\right), \end{eqnarray} where the integral $\mathcal{H}(x)=\mathcal{H}(x;n_1,n_2,q)$ is given by \begin{eqnarray}\label{H-integral} \mathcal{H}(x)=\int_{\mathbb{R}} \omega\left(\xi\right) \mathfrak{I}^*\left(M\xi,n_1,q\right) \overline{\mathfrak{I}^*\left(M\xi,n_2,q\right)} \, e\left(-x\xi\right)\mathrm{d}\xi. \end{eqnarray} We have the following estimates for $\mathcal{H}(x)$, whose proofs we postpone to Section \ref{proofs-of-technical-lemma}. \begin{lemma}\label{integral:lemma} Assume $\varphi(x)=c\log x$ or $cx^{\beta}$ with $\beta\in (0,1)\backslash \{1/2,3/4\}$. Further assume $V^{(j)}(x)\ll \triangle^j$ as defined in \eqref{derivative-of-V} with $\triangle<t^{1/2-\varepsilon}$ and $C$ satisfies $C\geq X^{1+\varepsilon}\Xi/(Qt)$. (1) We have $\mathcal{H}(x)\ll t^{-1}$ for any $x\in \mathbb{R}$. (2) For $x\gg X^{1+\varepsilon}\Xi/(CQ)$, we have $\mathcal{H}(x)\ll_A X^{-A}$. (3) For $x\neq 0$, we have $\mathcal{H}(x)\ll t^{-1}|x|^{-1/2}$. (4) $\mathcal{H}(0)$ is negligibly small unless $|n_1-n_2|\ll X^{\varepsilon}$. \end{lemma} With estimates for $\mathcal{H}(x)$ ready, we now continue with the treatment of $\mathbf{\Omega}(q,d)$ in \eqref{omega-bound}. 
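Before doing so, we record the standard computation behind the Poisson step in \eqref{omega-bound}. Writing $G(m):=\mathfrak{I}^*(m,n_1,q)\overline{\mathfrak{I}^*(m,n_2,q)}$ and $m=n_1+d\ell$, Poisson summation in $\ell$ gives \begin{eqnarray*} \sum_{m\equiv n_1\bmod d}\omega\left(\frac{m}{M}\right)G(m) &=&\sum_{\widetilde{m}\in \mathbb{Z}}\int_{\mathbb{R}}\omega\left(\frac{n_1+dv}{M}\right)G(n_1+dv)e\left(-\widetilde{m}v\right)\mathrm{d}v\\ &=&\frac{M}{d}\sum_{\widetilde{m}\in \mathbb{Z}}e\left(\frac{\widetilde{m}n_1}{d}\right) \int_{\mathbb{R}}\omega\left(\xi\right)G(M\xi)e\left(-\frac{\widetilde{m}M\xi}{d}\right)\mathrm{d}\xi \end{eqnarray*} after the substitution $\xi=(n_1+dv)/M$; replacing $\widetilde{m}$ by $-\widetilde{m}$ and recalling \eqref{H-integral}, this matches \eqref{omega-bound} (the sign of $\widetilde{m}$ is immaterial for the estimates that follow).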
By Lemma \ref{integral:lemma} (2), the contribution from the terms with \begin{eqnarray}\label{m range-final} |\widetilde{m}|\gg dC^{-1}Q^{-1}M^{-1}X^{1+\varepsilon}\Xi:=N_2 \end{eqnarray} to $\mathbf{\Omega}(q,d)$ is negligible. So we only need to consider the range $0\leq |\widetilde{m}|\leq N_2$. We treat the cases where $\widetilde{m}=0$ and $\widetilde{m}\neq 0$ separately and denote their contributions to $\mathbf{\Omega}(q,d)$ by $\mathbf{\Omega}_0$ and $\mathbf{\Omega}_{\neq 0}$, respectively. \subsubsection{The zero frequency}\label{The zero frequency} Recall that $\mathbf{\Omega}_{0}$ denotes the contribution of the term with $\widetilde{m}=0$ to $\mathbf{\Omega}(q,d)$ in \eqref{omega-bound}. Correspondingly, we denote its contribution to \eqref{Cauchy} by $\mathbf{\Sigma}_0$. \begin{lemma}\label{lemma:zero} Assume \begin{eqnarray}\label{assumption: range 1} Q>(X/t)^{1/2}. \end{eqnarray} We have \begin{eqnarray*} \mathbf{\Sigma}_0 \ll X^{\varepsilon}Q^{3/2}t. \end{eqnarray*} \end{lemma} \begin{proof} Splitting the sum over $n_1$ and $n_2$ according as $n_1=n_2$ or not, applying Lemma \ref{integral:lemma} (4) and the Rankin-Selberg estimate \eqref{GL2: Rankin Selberg}, and using the inequality $|\lambda_f(n_1)\lambda_f(n_2)|\leq |\lambda_f(n_1)|^2+|\lambda_f(n_2)|^2$, we have \begin{eqnarray*} \mathbf{\Omega}_0 &\ll& \frac{M}{dtN_1^{1/2}} \mathop{\sum\sum}_{n_1,n_2\sim N_1\atop |n_1-n_2|\ll X^{\varepsilon}} |\lambda_f(n_1)||\lambda_f(n_2)| \\ &\ll&\frac{M}{dtN_1^{1/2}}\sum_{n_1\sim N_1}|\lambda_f(n_1)|^2 \sum_{n_2\sim N_1\atop |n_1-n_2|\ll X^{\varepsilon}}1\\ &\ll&\frac{X^{\varepsilon}MN_1^{1/2}}{dt}. \end{eqnarray*} This bound when substituted in place of $\mathbf{\Omega}(q,d)$ into \eqref{Cauchy} yields that \begin{eqnarray}\label{estimate-1} \mathbf{\Sigma}_0&\ll& X^{1/2+\varepsilon}M^{1/4}\sum_{q\sim C}q^{-1}\sum_{d|q}d \frac{M^{1/2}N_1^{1/4}}{d^{1/2}t^{1/2}} \ll \frac{X^{1/2+\varepsilon}M^{3/4}N_1^{1/4}C^{1/2}}{t^{1/2}}.
\end{eqnarray} Recall $C\ll Q$ and from \eqref{M-N1-range} that $1\ll M\ll X^{\varepsilon}\max\left\{C^2t^2/X,X\Xi^2/Q^2\right\}$ and $N_1\asymp X\Xi^2/Q^2$. In particular, if we further assume that $Q$ satisfies $Q>(X/t)^{1/2}$, then $1\ll M\ll X^{\varepsilon}Q^2t^2/X$. Thus \begin{eqnarray*} \mathbf{\Sigma}_0 \ll X^{\varepsilon}Q^{3/2}t. \end{eqnarray*} This proves the lemma. \end{proof} \subsubsection{The non-zero frequencies}\label{The non-zero frequencies} Recall that $\mathbf{\Omega}_{\neq 0}$ denotes the contribution from the terms with $\widetilde{m}\neq 0$ to $\mathbf{\Omega}(q,d)$ in \eqref{omega-bound}. Correspondingly, we denote its contribution to \eqref{Cauchy} by $\mathbf{\Sigma}_{\neq 0}$. Using the inequality $|\lambda_f(n_1)\lambda_f(n_2)|\leq |\lambda_f(n_1)|^2+|\lambda_f(n_2)|^2$, we have \begin{eqnarray}\label{bound-med} \mathbf{\Omega}_{\neq 0}&\ll& \frac{M}{dN_1^{1/2}}\sum_{n_1\sim N_1}|\lambda_f(n_1)|^2 \sum_{n_2\sim N_1\atop n_2\equiv n_1\bmod d}\; \sum_{0\neq \widetilde{m}\ll N_2}\bigg|\mathcal{H}\left(\frac{\widetilde{m}M}{d}\right)\bigg|, \end{eqnarray} where $N_2=dC^{-1}Q^{-1}M^{-1}X^{1+\varepsilon}\Xi$ is defined in \eqref{m range-final}. \begin{lemma}\label{lemma:nonzero} Assume \begin{eqnarray}\label{assumption: range 2} Q<\min\{t,X^{1/3}\}. \end{eqnarray} We have \begin{eqnarray*} \mathbf{\Sigma}_{\neq 0}\ll X^{5/4+\varepsilon}/Q. \end{eqnarray*} \end{lemma} \begin{proof} For $x=\widetilde{m}M/d$, in order to apply the estimates for $\mathcal{H}(x)$ in Lemma \ref{integral:lemma}, we consider the cases $x\ll X^{\varepsilon}$ and $x \gg X^{\varepsilon}$ separately, and split the sum over $\widetilde{m}$ accordingly. Set \begin{eqnarray}\label{N3} N_3:=dX^{\varepsilon}/M. \end{eqnarray} Then for $0\neq \widetilde{m}\ll N_3$ we will use the bound $\mathcal{H}(x)\ll t^{-1}$ in Lemma \ref{integral:lemma} (1), and for the remaining part we apply the bound $\mathcal{H}(x)\ll t^{-1}|x|^{-1/2}$ in Lemma \ref{integral:lemma} (3).
By \eqref{bound-med}, we have \begin{eqnarray*} \mathbf{\Omega}_{\neq 0}&\ll&\frac{M}{dN_1^{1/2}}\sum_{n_1\sim N_1}|\lambda_f(n_1)|^2 \sum_{n_2\sim N_1\atop n_2\equiv n_1\bmod d}\; \sum_{0\neq \widetilde{m}\ll N_3}t^{-1}\\ &&+\frac{M}{dN_1^{1/2}}\sum_{n_1\sim N_1}|\lambda_f(n_1)|^2 \sum_{n_2\sim N_1\atop n_2\equiv n_1\bmod d}\; \sum_{N_3\ll \widetilde{m}\ll N_2}t^{-1}\bigg(|\widetilde{m}|M/d\bigg)^{-1/2}\\ &\ll&\frac{MN_1^{1/2}N_3}{dt}\left(1+\frac{N_1}{d}\right) +\frac{M^{1/2}N_1^{1/2}N_2^{1/2}}{d^{1/2}t}\left(1+\frac{N_1}{d}\right)\nonumber\\ &\ll&\frac{M^{1/2}N_1^{1/2}}{d^{1/2}t}\left(1+\frac{N_1}{d}\right)\left(\frac{M^{1/2}N_3}{d^{1/2}}+N_2^{1/2}\right). \end{eqnarray*} Here we have applied the Rankin-Selberg estimate \eqref{GL2: Rankin Selberg}. Recall from \eqref{m range-final} and \eqref{N3} that $N_2=dC^{-1}Q^{-1}M^{-1}X^{1+\varepsilon}\Xi$ and $N_3=dX^{\varepsilon}/M$. Thus \begin{eqnarray*} \mathbf{\Omega}_{\neq 0} &\ll&\frac{X^{\varepsilon}M^{1/2}N_1^{1/2}}{t}\left(1+\frac{N_1}{d}\right) \left(\frac{X^{\varepsilon}}{M^{1/2}}+\frac{X^{1/2}\Xi^{1/2}}{C^{1/2}Q^{1/2}M^{1/2}}\right)\\ &\ll&\frac{X^{1/2+\varepsilon}N_1^{1/2}}{C^{1/2}Q^{1/2}t}\left(1+\frac{N_1}{d}\right), \end{eqnarray*} since $\Xi\ll X^{\varepsilon}$ and $Q<X^{1/2-\varepsilon}$ by \eqref{assumption 1}.
Since $1\ll M\ll X^{\varepsilon}\max\left\{C^2t^2/X,X\Xi^2/Q^2\right\}= X^{\varepsilon}C^2t^2/X$ in \eqref{M-N1-range} as $X^{1+\varepsilon}\Xi/(Qt) \leq C\ll Q$, this bound when substituted in place of $\mathbf{\Omega}(q,d)$ in \eqref{Cauchy} gives that \begin{eqnarray*} \mathbf{\Sigma}_{\neq 0} &\ll&X^{1/2+\varepsilon}M^{1/4}\sum_{q\sim C}q^{-1}\sum_{d|q} \frac{dX^{1/4+\varepsilon}N_1^{1/4}}{C^{1/4}Q^{1/4}t^{1/2}}\left(1+\frac{N_1^{1/2}}{d^{1/2}}\right)\\ &\ll& \frac{M^{1/4}N_1^{1/4}X^{3/4+\varepsilon}C^{1/4}}{Q^{1/4}t^{1/2}}\left(C^{1/2}+N_1^{1/2}\right)\\ &\ll&N_1^{1/4}X^{1/2+\varepsilon}C^{3/4}Q^{-1/4}\left(C^{1/2}+N_1^{1/2}\right)\\ &\ll&N_1^{1/4}X^{1/2+\varepsilon}Q^{1/2}\left(Q^{1/2}+N_1^{1/2}\right). \end{eqnarray*} Recalling $1\ll N_1\asymp X\Xi^2/Q^2\ll X^{1+\varepsilon}/Q^2$ from \eqref{M-N1-range} and $C\ll Q$, we further deduce that \begin{eqnarray*} \mathbf{\Sigma}_{\neq 0} &\ll& X^{3/4+\varepsilon} \left(Q^{1/2}+\frac{X^{1/2}}{Q}\right)\\ &\ll&X^{5/4+\varepsilon}/Q, \end{eqnarray*} provided that $Q<X^{1/3}$. \end{proof} \subsection{Conclusion} By inserting the upper bounds in Lemmas \ref{lemma:zero} and \ref{lemma:nonzero} into \eqref{Cauchy}, we have shown that \begin{eqnarray*} \mathbf{T}\ll X^{\varepsilon} \left(Q^{3/2}t+\frac{X^{5/4}}{Q}\right), \end{eqnarray*} under the assumption $X^{1+\varepsilon}\Xi/(Qt) \leq C\ll Q$ and \begin{eqnarray} \label{final-assumption-Q} (X/t)^{1/2}<Q<\min\{t,X^{1/3}\}, \end{eqnarray} which is a combination of \eqref{assumption 1}, \eqref{assumption: range 1} and \eqref{assumption: range 2}. We set $Q=X^{1/2}/t^{2/5}$ to balance the two contributions. Then for $X^{1+\varepsilon}\Xi/(Qt) \leq C\ll Q$, \begin{eqnarray}\label{large T estimate} \mathbf{T}\ll t^{2/5}X^{3/4+\varepsilon}, \end{eqnarray} provided $X<t^{12/5}$.
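The choice of $Q$ above is dictated by equating the two terms in the bound for $\mathbf{T}$: \begin{eqnarray*} Q^{3/2}t=\frac{X^{5/4}}{Q} \Longleftrightarrow Q^{5/2}=\frac{X^{5/4}}{t} \Longleftrightarrow Q=\frac{X^{1/2}}{t^{2/5}}, \end{eqnarray*} in which case both terms equal $X^{3/4}t^{2/5}$. With this choice, the constraint $Q<X^{1/3}$ in \eqref{final-assumption-Q} reads $X^{1/2}t^{-2/5}<X^{1/3}$, that is, $X<t^{12/5}$, while $Q>(X/t)^{1/2}$ holds automatically and $Q<t$ amounts to $X<t^{14/5}$, which is implied by $X<t^{12/5}$.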
Moreover, for this choice of $Q$, when $C\leq X^{1+\varepsilon}\Xi/(Qt)$, by \eqref{small contribution}, the corresponding contribution is bounded by \begin{eqnarray*} \frac{X^{2+\vartheta+\varepsilon}}{Q^{3+2\vartheta}t^{1/2}}=X^{1/2+\varepsilon}t^{7/10+4\vartheta/5}. \end{eqnarray*} Note that the estimate $t^{2/5}X^{3/4+\varepsilon}$ is superior to the trivial bound $X^{1+\varepsilon}$ for $X>t^{8/5}$. So we assume $X>t^{8/5}$, and in this case the term $X^{1/2+\varepsilon}t^{7/10+4\vartheta/5}$ is dominated by the estimate in \eqref{large T estimate}, since we can take $\vartheta=7/64$ by \cite{KS}. Substituting the estimate in \eqref{large T estimate} for $\mathbf{T}$ into \eqref{M-N1-range} and using \eqref{C range}, we obtain \begin{eqnarray*} S(X,t)\ll t^{2/5}X^{3/4+\varepsilon} \end{eqnarray*} provided $t^{8/5}<X<t^{12/5}$. Notice that, in order to apply Lemma \ref{integral:lemma} (3), $\varphi(x)$ is further required to satisfy $\varphi(x)=c\log x$ or $cx^{\beta}$ ($\beta\in (0,1)\backslash \{1/2,3/4\}$, $c\in \mathbb{R}\backslash \{0\}$); see \eqref{phi assumption}. The assumption $\triangle<t^{1/2-\varepsilon}$ also arises in applying Lemma \ref{integral:lemma} (3); see \eqref{assumption-on-Delta}. This completes the proof of Theorem \ref{main-theorem}. \section{Proof of Corollary \ref{sharp-cut-sum}}\label{proofs-of-corollary} In this section, we prove Corollary \ref{sharp-cut-sum} in Section \ref{introduction}. Without loss of generality, we assume that $f$ and $g$ are both Maass cusp forms. If one of $f$ and $g$ is holomorphic, the proof is similar and simpler. Note that from \eqref{GL2: Rankin Selberg}, we have \begin{eqnarray*} \sum_{X<n\leq X+X/\triangle}|\lambda_f(n)|^2\ll X/\triangle+X^{3/5}. \end{eqnarray*} In particular, if $\triangle\leq X^{2/5}$, one has \begin{eqnarray*} \sum_{X<n\leq X+X/\triangle}|\lambda_f(n)|^2\ll X/\triangle.
\end{eqnarray*} Similarly, under the same assumption $\triangle\leq X^{2/5}$, we have \begin{eqnarray*} \sum_{X<n\leq X+X/\triangle}|\lambda_g(n)|^2\ll X/\triangle. \end{eqnarray*} We choose the smooth function $V$ in \eqref{natural-sum} to be supported on $[1,2]$ and $V(x)=1$ on $[1+1/\triangle,2-1/\triangle]$. Then, Theorem \ref{main-theorem} yields \begin{eqnarray*} &&\sum_{X<n\leq 2X}\lambda_f(n) \lambda_g(n)e\left(t \varphi\left(\frac{n}{X}\right)\right)\\ &\ll& t^{2/5}X^{3/4+\varepsilon}+ \sum_{X<n\leq X+X/\triangle}|\lambda_f(n)\lambda_g(n)|+ \sum_{2X-X/\triangle<n\leq 2X}|\lambda_f(n)\lambda_g(n)|\\ &\ll& t^{2/5}X^{3/4+\varepsilon}+ \bigg(\sum_{X<n\leq X+X/\triangle}|\lambda_f(n)|^2\bigg)^{1/2} \bigg(\sum_{X<n\leq X+X/\triangle}|\lambda_g(n)|^2\bigg)^{1/2}\\ &\ll& t^{2/5}X^{3/4+\varepsilon}+X/\triangle, \end{eqnarray*} as long as $\triangle\leq X^{2/5}$ and $t^{8/5}<X<t^{12/5}$. Corollary \ref{sharp-cut-sum} then follows by choosing $\triangle=t^{1/2-\varepsilon}$ and by noting that $t^{1/2-\varepsilon}\leq X^{2/5}$ if and only if $t^{5/4-\varepsilon}\leq X$. \section{Proof of Corollary \ref{cor:main}} \label{sec:Cor} In this section, we prove Corollary \ref{cor:main}. We first introduce a lemma on the functional equation of $L(s,1\boxplus(f \times g))$.
\begin{lemma}\label{lemma:functional equation} For $\rm {Re}(s) > 1$, we have $$ L(1-s,1\boxplus(f \times g))= \frac{1}{\varepsilon(f\times g)}\gamma(s)L(s,1\boxplus(f \times g)), $$ where $\varepsilon(f\times g)$ is the root number of $L(f \times g)$ with $|\varepsilon(f\times g)| = 1$, $$\gamma(s)={(\pi^{-5})}^{s-\frac{1}{2}}\prod_{j=1}^{5} \Gamma(\frac{s+\kappa_{j}}{2})\Gamma(\frac{1-s+\kappa_{j}}{2})^{-1},$$ with $\kappa_{1}=0, \kappa_{2}=\frac{k-\kappa}{2}, \kappa_{3}=\frac{k-\kappa}{2}+1, \kappa_{4}=\frac{k+\kappa}{2}-1, \kappa_{5}=\frac{k+\kappa}{2}$, and $$ \gamma(\sigma- i t) = \overline{\omega_k} \left(\frac{t}{2 \pi} \right)^{5(\sigma-1 / 2)} \left(\frac{2 \pi e}{t}\right)^{5 i t}\left\{1+O\left(\frac{1}{t}\right)\right\}, $$ for $\sigma>1 / 2$, $t>1$, $\omega_k=e\left(\frac{4 k-5}{8}\right)$. \end{lemma} \begin{proof} First by the functional equation \begin{equation*} \begin{split} \Lambda(s, 1\boxplus(f \times g))& = \pi^{-\frac{s}{2}} \Gamma\left(\frac{s}{2}\right) \zeta(s) \cdot (2 \pi)^{-2s}\Gamma\left(s+\frac{k-\kappa}{2}\right)\Gamma\left(s+\frac{k+\kappa}{2}-1 \right) L(s, f \times g)\\ & = \varepsilon(1\boxplus(f\times g))\Lambda(1-s, 1\boxplus(f \times g)) = \varepsilon(f\times g)\Lambda(1-s, 1\boxplus(f \times g)) , \end{split} \end{equation*} we can write the functional equation as follows, $$ \begin{aligned} &L(1-s, 1 \boxplus f \times g) \\ =&\frac{1}{\varepsilon(f\times g)} \frac{\pi^{-\frac{s}{2}} \Gamma\left(\frac{s}{2}\right)(2 \pi)^{-2 s} \Gamma\left(s+\frac{k-\kappa}{2}\right)\Gamma\left(s+\frac{k+\kappa}{2}-1 \right)}{\pi^{-\frac{1-s}{2}} \Gamma\left(\frac{1-s}{2}\right)(2 \pi)^{-2+2s}\Gamma\left(1-s+\frac{k-\kappa}{2}\right)\Gamma\left(1-s+\frac{k+\kappa}{2}-1 \right)} L(s, 1 \boxplus f \times g) \\ =&\frac{1}{\varepsilon(f\times g)} \gamma(s) L(s, 1 \boxplus (f \times g)), \end{aligned} $$ where $$ \gamma(s)=\pi^{\frac{1}{2}-s}(2 \pi)^{2-4 s}\frac{\Gamma\left(\frac{s}{2}\right)}{\Gamma\left(\frac{1-s}{2}\right)} 
\frac{\Gamma\left(s+\frac{k-\kappa}{2}\right)}{\Gamma\left(1-s+\frac{k-\kappa}{2}\right)} \frac{\Gamma\left(s+\frac{k+\kappa}{2}-1 \right)}{\Gamma\left(1-s+\frac{k+\kappa}{2}-1 \right)}. $$ Note that $\Gamma(z) \Gamma\left(z+\frac{1}{2}\right)=2^{1-2 z} \pi^{\frac{1}{2}} \Gamma(2 z)$. Taking $z=\frac{s+\frac{k-\kappa}{2}}{2}$ and $z=\frac{s+\frac{k+\kappa}{2}-1}{2}$, respectively, we obtain $$ \Gamma\left(s+\frac{k-\kappa}{2}\right)=2^{s+\frac{k-\kappa}{2}-1} \pi^{-\frac{1}{2}} \Gamma\left(\frac{s+\frac{k-\kappa}{2}}{2}\right)\Gamma\left(\frac{s+\frac{k-\kappa}{2}+1}{2}\right), $$ $$ \Gamma\left(s+\frac{k+\kappa}{2}-1\right)=2^{s+\frac{k+\kappa}{2}-2} \pi^{-\frac{1}{2}} \Gamma\left(\frac{s+\frac{k+\kappa}{2}-1}{2}\right)\Gamma\left(\frac{s+\frac{k+\kappa}{2}}{2}\right). $$ Substituting these identities into the expression for $\gamma(s)$, we derive $$ \gamma(s) =\left(\pi^{-5}\right)^{s-\frac{1}{2}} \prod_{j=1}^{5} \Gamma\left(\frac{s+\kappa_{j}}{2}\right) \Gamma\left(\frac{1-s+\kappa_{j}}{2}\right)^{-1}, $$ with $\kappa_{1}=0, \kappa_{2}=\frac{k-\kappa}{2}, \kappa_{3}=\frac{k-\kappa}{2}+1, \kappa_{4}=\frac{k+\kappa}{2}-1, \kappa_{5}=\frac{k+\kappa}{2}$. Hence, by the argument of Friedlander-Iwaniec \cite[Section 1]{Fri-Iwa}, we complete the proof of the lemma. \end{proof} Take $s=1+\varepsilon-i t$. Lemma \ref{lemma:functional equation} yields \begin{equation}\label{apply lemma} L(-\varepsilon+i t, 1 \boxplus (f \times g))= \frac{\overline{\omega_k} }{\varepsilon(f\times g)} \left(\frac{t}{2\pi}\right)^{\frac{5}{2}+5\varepsilon}\left(\frac{t}{2\pi e}\right)^{-5 i t} L(1+\varepsilon-i t,1 \boxplus (f \times g))\left\{1+O\left(\frac{1}{t}\right)\right\}. \end{equation} \begin{proof}[Proof of Corollary \ref{cor:main}] We first approximate $\sum_{n \leq X}\lambda_{1\boxplus(f\times g)}(n)$ by a smooth sum. Let $$ Y=X^{2/3-\delta} \quad \text { for some } \delta \in (0, 2/39).
$$ Let $W$ be a smooth function supported on $[1/2 - Y/X, 1 + Y/X]$ such that $W(u)=1$, $u \in [1/2, 1]$ and $W(u) \in [0 , 1]$, $u \in [1/2 - Y/X, 1/2] \cup [1,1+Y / X]$, and $W^{(m)}(u) \ll_ m(X / Y)^{m}$ for all $m \geq 1$. Then \begin{equation}\label{sum of lambda 1} \begin{aligned} \sum_{X / 2<n \leq X} \lambda_{1\boxplus(f\times g)}(n)=& \sum_{X / 2-Y<n<X+Y} \lambda_{1\boxplus(f\times g)}(n) W\left(\frac{n}{X}\right) \\ &+O\left(\sum_{X / 2-Y<n<X / 2}\left|\lambda_{1\boxplus(f\times g)}(n)\right|+\sum_{X<n<X+Y}\left|\lambda_{1\boxplus(f\times g)}(n)\right|\right) \\ =& \sum_{n \geq 1} \lambda_{1\boxplus(f\times g)}(n) W\left(\frac{n}{X}\right)+O\left(X^{2/3-\delta+\varepsilon}\right), \end{aligned} \end{equation} where we have used Deligne's bound $\lambda_{1\boxplus(f\times g)}(n) = \sum_{lm^2r=n}\lambda_f(r)\lambda_g(r)\ll n^{\varepsilon}$. Thus we only need to show \begin{equation}\label{sum of lambda 2} \sum_{n \geq 1} \lambda_{1\boxplus(f\times g)}(n) W\left(\frac{n}{X}\right)=L(1, f\times g) \tilde{W}(1) X + O\left(X^{2/3-\delta + \varepsilon}\right), \end{equation} where $\tilde{W}(s)=\int_{0}^{\infty} W(x) x^{s-1} \mathrm{d} x$ is the Mellin transform of $W(x)$ and $\tilde{W}(1)= 1/2 + O(Y/X)$. By breaking the sum into dyadic intervals and plugging \eqref{sum of lambda 2} into \eqref{sum of lambda 1}, we get $$ \begin{aligned} \sum_{n \leq X} \lambda_{1\boxplus(f\times g)}(n) =& 2 L(1, f\times g) \tilde{W}(1) X + O\left(X^{2/3 - \delta + \varepsilon}\right)\\ =&L(1, f\times g)X + O\left(X^{2/3 - \delta + \varepsilon}\right). \end{aligned} $$ Now we consider the sum $\sum_{n \geq 1}\lambda_{1\boxplus(f\times g)}(n)W\left(\frac{n}{X}\right)$ in \eqref{sum of lambda 2}. By the Mellin inversion formula $$ W(u)=\frac{1}{2 \pi i} \int_{(2)} \tilde{W}(s) u^{-s} \mathrm{d} s, $$ we get $$ \sum_{n \geq 1}\lambda_{1\boxplus(f\times g)}(n)W\left(\frac{n}{X}\right)= \frac{1}{2 \pi i} \int_{(2)} \tilde{W}(s) L(s, 1 \boxplus (f \times g)) X^{s} \mathrm{d}s. 
$$ Next we move the integration to the parallel segment with $\mathrm{Re}(s) = -\varepsilon$. Note that inside the contour the integrand has only a simple pole at $s = 1$ with residue $L(1,f \times g)\tilde{W}(1)X$, since $L(s,1\boxplus(f \times g))= \zeta(s)L(s,f \times g)$. Hence, \begin{equation}\label{move integral} \sum_{n \geq 1} \lambda_{1\boxplus(f\times g)}(n) W\left(\frac{n}{X}\right) =L(1, f\times g) \tilde{W}(1) X + \frac{1}{2 \pi i}\int_{(-\varepsilon)}\tilde{W}(s) L(s, 1 \boxplus (f \times g)) X^{s} \mathrm{d}s. \end{equation} Let $$ I(X):= \frac{1}{2 \pi i}\int_{(-\varepsilon)}\tilde{W}(s) L(s, 1 \boxplus (f \times g)) X^{s} \mathrm{d}s. $$ Inserting a dyadic smooth partition of unity into the $t$-integral, we get \begin{equation}\label{I(X)dyadic} I(X)=\sum_{T \text { dyadic }} I(X, T), \end{equation} where $$ I(X, T):=\frac{X^{-\varepsilon}}{2 \pi} \int_{\mathbb{R}} X^{i t} \tilde{W}(-\varepsilon+i t) L(-\varepsilon+i t, 1 \boxplus (f \times g)) V\left(\frac{t}{T}\right) \mathrm{d} t $$ for some fixed compactly supported function $V$. For $\tilde{W}(s)$, by applying integration by parts, we have, for any $m \geq 1$, \begin{equation}\label{tilde{W}(s)} \tilde{W}(s)=\frac{(-1)^{m}}{s(s+1) \cdots(s+m-1)} \int_{0}^{\infty} W^{(m)}(u) u^{s+m-1} \mathrm{d} u \ll_m \frac{1}{|s|^{m}}\left(\frac{X}{Y}\right)^{m-1}, \end{equation} since $\operatorname{supp} W^{(m)}\subset [1/2 - Y/X, 1/2] \cup [1,1 + Y /X]$. By \eqref{tilde{W}(s)}, one finds that the contribution from the $t$-integral of $I(X, T)$ is negligible if $t \gg X^{1+\varepsilon}/Y$. In addition, by the upper bound $L(-\varepsilon+i t, 1 \boxplus (f \times g)) \ll (1+ t)^{5/2+\varepsilon}$, which follows from Lemma \ref{lemma:functional equation} and the Phragm\'{e}n--Lindel\"{o}f principle, and by \eqref{tilde{W}(s)} with $m=1$, we deduce that $$ I(X,T)\ll X^{\varepsilon}T^{5/2 +\varepsilon}\ll Y $$ if $T\ll Y^{2/5-\varepsilon}$.
Thus, up to a negligible error, we only need to consider those $T$ in \eqref{I(X)dyadic} in the range $ Y^{2/5-\varepsilon}\ll T \ll X^{1+\varepsilon}/Y$. We only consider positive $T$, since negative $T$ can be handled similarly. Next, for $I(X, T)$, by the first equality in \eqref{tilde{W}(s)} with $m = 1$, we get \begin{equation}\label{I(X,T)} \begin{aligned} I(X, T) &=-\frac{X^{-\varepsilon}}{2 \pi} \int_{1 / 3}^{3} W^{\prime}(u) u^{-\varepsilon} \int_{\mathbb{R}} \frac{(X u)^{i t}}{-\varepsilon+i t} L(-\varepsilon+i t, 1\boxplus(f\times g)) V\left(\frac{t}{T}\right) \mathrm{d} t~\mathrm{d} u \\ & \ll \frac{X^{-\varepsilon}}{T} \sup _{u \in [1/3,3]} \left| \int_{\mathbb{R}}(X u)^{i t} L(-\varepsilon+i t, 1\boxplus(f\times g)) V_1\left(\frac{t}{T}\right) \mathrm{d} t \right|. \end{aligned} \end{equation} Hence, in the following, we only need to estimate \begin{equation}\label{J(X, T)} J(X, T):=\int_{\mathbb{R}} X^{i t} L(-\varepsilon+i t, 1\boxplus(f\times g)) V_1\left(\frac{t}{T}\right) \mathrm{d} t. \end{equation} We shall apply the functional equation for $L(-\varepsilon+i t, 1\boxplus(f\times g))$ to change the variable $s=-\varepsilon+i t$ into $1-s=1+\varepsilon-i t$. By inserting the functional equation \eqref{apply lemma} into \eqref{J(X, T)}, we have $$ \begin{aligned} J(X, T)=& \int_{\mathbb{R}} X^{i t} \frac{1}{\varepsilon(f\times g)} \overline{\omega_{k}} \cdot\left(\frac{t}{2 \pi}\right)^{5\left(\frac{1}{2}+\varepsilon\right)}\left(\frac{t}{2 \pi e}\right)^{-5 i t} L(1+\varepsilon-i t,1\boxplus(f\times g)) V_1\left(\frac{t}{T}\right) \mathrm{d} t \\ &+O\left(\frac{1}{T} \cdot T^{5 / 2+\varepsilon} \cdot T\right)\\ \ll & T^{5 / 2+\varepsilon}\left|\int_{\mathbb{R}}\sum_{n\geq 1}\frac{\lambda_{1\boxplus(f\times g)}(n)}{n^{1+\varepsilon-i t}}X^{i t}\left(\frac{t}{2 \pi e}\right)^{-5 i t} V_2\left(\frac{t}{T}\right)\mathrm{d} t\right| +T^{5 / 2+\varepsilon} \end{aligned} $$ for some smooth compactly supported function $V_2$.
Exchanging the order of the integration and summation above, and making a change of variable $\frac{t}{T}\rightarrow \xi$, we get $$ \begin{aligned} J(X,T)\ll & T^{5 / 2+\varepsilon}\left|\sum_{n\geq 1} \frac{\lambda_{1\boxplus(f\times g)}(n)}{n^{1+\varepsilon}}\int_{\mathbb{R}} (nX)^{i t}\left(\frac{t}{2 \pi e}\right)^{-5 i t} V_2\left(\frac{t}{T}\right)\mathrm{d} t\right| + T^{5 / 2+\varepsilon} \\ \ll & T^{7 / 2+\varepsilon}\left|\sum_{n\geq 1} \frac{\lambda_{1\boxplus(f\times g)}(n)}{n^{1+\varepsilon}}\int_{\mathbb{R}} e^{i T \xi \log (nX({\frac{T\xi}{2 \pi e}})^{-5})} V_2(\xi)\mathrm{d}\xi\right| + T^{5 / 2+\varepsilon}. \end{aligned} $$ Let $h(\xi):= T \xi \log (nX({\frac{T\xi}{2 \pi e}})^{-5})$; then $h^{\prime}(\xi)= 5T\log\frac{2\pi (nX)^{\frac{1}{5}}/T}{\xi}$ and $h^{(j)}(\xi)= (-1)^{j-1}(j-2)!\frac{5T}{\xi^{j-1}}$ for $j \geq 2$. If $2\pi(nX)^{1/5}/T \notin \operatorname{supp} V_{2}$, it is not difficult to see that $h^{\prime}(\xi) \gg T^{\varepsilon}$. Applying Lemma \ref{lemma:exponentialintegral} (1), we see that the integral over $\xi$ is $O(T^{-2021})$. Now for the above integral over $\xi$, we consider the case $2\pi(nX)^{1/5}/T \in \operatorname{supp} V_2$. Note that the stationary point is $\xi_{0}=\frac{2 \pi(n X)^{1 / 5}}{T}$, $h\left(\xi_{0}\right)=5 T \xi_{0}$, $h^{\prime \prime}\left(\xi_{0}\right)=-\frac{5 T}{\xi_{0}} \asymp T$ and $V_2^{(j)}(\xi)\ll_j 1$ for $j \geq 0$, $h^{(j)}\left(\xi_{0}\right)\asymp T$ for $j \geq 2$. Applying Lemma \ref{lemma:exponentialintegral} (2) with $Y=Z=1$ and $H=R=T$, we obtain $$ \begin{aligned} \int_{\mathbb{R}} V_{2}(\xi) e^{i T \xi \log \left(n X\left(\frac{T \xi}{2 \pi e}\right)^{-5}\right)} \mathrm{d} \xi &=\frac{e^{i h\left(\xi_{0}\right)}}{T^{1 / 2}} W_{1}\left(\xi_{0}\right)+O\left(\frac{1}{T^{2021}}\right) \\ &=\frac{e\left(5(n X)^{1 / 5}\right)}{T^{1 / 2}} W_{2}\left(\frac{n}{T^{5} / X}\right)+O\left(\frac{1}{T^{2021}}\right), \end{aligned} $$ for some inert functions $W_{1}$, $W_{2}$.
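For clarity, the phase value at the stationary point, which produces the factor $e\left(5(nX)^{1/5}\right)$ above, can be computed directly: since $T\xi_0=2\pi(nX)^{1/5}$, \begin{eqnarray*} h(\xi_0)=T\xi_0\log \left(nX\left(\frac{T\xi_0}{2\pi e}\right)^{-5}\right) =T\xi_0\log \left(nX\cdot\frac{e^{5}}{nX}\right) =5T\xi_0=10\pi(nX)^{1/5}, \end{eqnarray*} so that $e^{ih(\xi_0)}=e\left(5(nX)^{1/5}\right)$.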
Consequently, \begin{equation}\label{after stationary} \begin{aligned} J(X, T) & \ll T^{3+\varepsilon}\left|\sum_{n \geq 1} \frac{\lambda_{1\boxplus(f\times g)}(n)}{n^{1+\varepsilon}} e\left(5(n X)^{1 / 5}\right) W_{2}\left(\frac{n}{T^{5} / X}\right)\right|+T^{5 / 2+\varepsilon} \\ & \ll \frac{X^{1+\varepsilon}}{T^2}\left|\sum_{n \geq 1} \lambda_{1\boxplus(f\times g)}(n) e\left(5(n X)^{1 / 5}\right) W_{3}\left(\frac{n}{T^{5} / X}\right)\right|+T^{5 / 2+\varepsilon} \end{aligned} \end{equation} for some inert function $W_3$. Note that \begin{equation}\label{T range} X^{1/5+\varepsilon} \ll Y^{2/5+\varepsilon} \ll T \ll \frac{X^{1+\varepsilon}}{Y}. \end{equation} So the above sum over $n$ is non-empty. Combining \eqref{I(X)dyadic}, \eqref{I(X,T)}, \eqref{J(X, T)} and \eqref{after stationary}, we have \begin{equation}\label{I(X) to exp sum} I(X)\ll \sum_{T ~{\rm{dyadic}} \atop Y^{2/5+\varepsilon} \ll T \ll \frac{X^{1+\varepsilon}}{Y}}\left(\frac{X^{1+\varepsilon}}{T^{3}}\left|\sum_{n \geq 1} \lambda_{1 \boxplus(f\times g)}(n) e\left(5(n X)^{1 / 5}\right) W\left(\frac{n}{T^{5} / X}\right)\right|+T^{3/2+\varepsilon}\right). \end{equation} Here $X$ on the right-hand side of \eqref{I(X) to exp sum} should be understood as the original $X u$ in \eqref{I(X,T)} with $u \in [1/3 , 3]$, and $W$ is a smooth compactly supported function with $\operatorname{supp} W \subset [1 / 4,4]$. So we only need to consider the case $n \asymp T^{5} / X$. Now we make use of the fact that $\lambda_{1\boxplus(f\times g)}(n) = \sum_{lm^2r=n}\lambda_f(r)\lambda_g(r)$.
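This identity follows by comparing Dirichlet series: for forms on the full modular group, the Rankin-Selberg factorization gives \begin{eqnarray*} L(s,1\boxplus(f\times g))=\zeta(s)L(s,f \times g) =\zeta(s)\zeta(2s)\sum_{r=1}^{\infty}\frac{\lambda_f(r)\lambda_g(r)}{r^{s}} =\sum_{n=1}^{\infty}\frac{1}{n^{s}}\sum_{lm^2r=n}\lambda_f(r)\lambda_g(r), \end{eqnarray*} the factor $l$ coming from $\zeta(s)$ and the factor $m^2$ from $\zeta(2s)$.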
Inserting dyadic partitions into the $l$-sum and $m$-sum and making a smooth dyadic partition of unity for the $r$-sum, we arrive at $$ I(X)\ll \sum_{T ~{\rm{dyadic}} \atop Y^{2/5+\varepsilon} \ll T \ll \frac{X^{1+\varepsilon}}{Y}}\left(\frac{X^{1+\varepsilon}}{T^{3}} \sup_{L,M,R \gg 1 \atop LM^2R \asymp T^5/X}|B(L,M,R)| + T^{3/2+\varepsilon}\right), $$ where $$ B(L,M,R):=\sum_{l \sim L} \sum_{m \sim M} \sum_{r \geq 1}\lambda_{f}(r)\lambda_{g}(r) e\left(5(l m^2 r X)^{1 / 5}\right) V\left(\frac{r}{R}\right). $$ We distinguish two cases. {\textbf{Case 1.}} $L \gg T^{593 / 345}M^{-194 / 207}X^{-97 / 207}$. We rewrite $B(L,M,R)$ as $$ B(L, M, R)=\sum_{m \sim M} \sum_{r \geq 1} \lambda_{f}(r)\lambda_{g}(r) V\left(\frac{r}{R}\right)\left(\sum_{l \sim L } e\left(5(l m^2 r X)^{1 / 5}\right)\right). $$ For the inner sum over $l$, we apply the method of exponent pairs with the $A$-process (see for example \cite[Chapter 3]{GrahamKolesnik}), by taking the exponent pair $(p, q)$ as $$ (p, q)=\left(\frac{k}{2 k+2}, \frac{k+h+1}{2 k+2}\right)= \left(\frac{13}{194}+\varepsilon, \frac{76}{97}+\varepsilon\right), $$ where $(k, h)=\left(\frac{13}{84}+\varepsilon, \frac{55}{84}+\varepsilon\right)$ is an exponent pair according to Bourgain \cite[Theorem 6]{Bourgain}. Hence, \begin{equation}\label{B(L, M, R)1} \begin{aligned} B(L, M, R)& \ll \sum_{m \sim M} \sum_{r \geq 1} \left|\lambda_{f}(r)\lambda_{g}(r) V\left(\frac{r}{R}\right)\right| \left|\sum_{l \sim L } e\left(5(l m^2 r X)^{1 / 5}\right)\right|\\ & \ll T^{\varepsilon} M R \cdot(T / L)^{p} L^{q}\\ &\ll T^{983/194+\varepsilon}X^{-1+\varepsilon}M^{-1+\varepsilon}L^{-55/194+\varepsilon}\\ & \ll T^{316 /69+\varepsilon} M^{-152 / 207+\varepsilon} X^{-359 / 414+\varepsilon}\\ & \ll T^{316 /69+\varepsilon} X^{-359 / 414+\varepsilon}. \end{aligned} \end{equation} In the last inequality we have used the fact that $M\gg1$. {\textbf{Case 2.}} $L \ll T^{593/ 345}M^{-194 / 207}X^{-97 / 207}$.
We rewrite $B(L, M, R)$ as \begin{equation}\label{B(L, M, R)2} \begin{aligned} B(L, M, R)& =\sum_{l \sim L}\sum_{m \sim M} \left(\sum_{r \geq 1}\lambda_{f}(r)\lambda_{g}(r) e\left(5(l m^2 r X)^{1 / 5}\right) V\left(\frac{r}{R}\right) \right)\\ & \ll \sum_{l \sim L}\sum_{m \sim M} \left|\sum_{r \geq 1}\lambda_{f}(r)\lambda_{g}(r) e\left(5T(\frac{r}{R})^{1 / 5}\right) V\left(\frac{r}{R}\right) \right|. \end{aligned} \end{equation} In order to apply Theorem 1.1, we need to verify that $R$ satisfies the condition $R \ll T^{12/5}$. Note that $R \ll LM^2R \asymp T^5/X$ and $Y^{2/5+\varepsilon} \ll T \ll \frac{X^{1+\varepsilon}}{Y} = X^{1/3+\delta+\varepsilon}$. Since we assume $\delta < 2/39 $, we have $T\ll X^{5/13}$ and hence $R\ll T^5/X \ll T^{12/5}$. Therefore, by Theorem 1.1, we have $$ \begin{aligned} B(L, M, R) &\ll L M T^{\frac{2}{5}}R^{\frac{3}{4}+\varepsilon} \asymp L^{1/4+\varepsilon}M^{-1/2+\varepsilon}T^{83/20+\varepsilon}X^{-3/4+\varepsilon}\\ &\ll T^{316 /69+\varepsilon} M^{-152 / 207+\varepsilon} X^{-359 / 414+\varepsilon}\\ &\ll T^{316 /69+\varepsilon}X^{-359 / 414+\varepsilon}. \end{aligned} $$ Combining \eqref{I(X) to exp sum}, \eqref{B(L, M, R)1} and \eqref{B(L, M, R)2}, we have \begin{equation}\label{I(X)1} \begin{aligned} I(X) &\ll \sum_{T ~{\rm{dyadic}} \atop Y^{2/5+\varepsilon} \ll T \ll X^{1 /3+\delta+\varepsilon}}\left(\frac{X^{1+\varepsilon}}{T^{3}} \cdot T^{316 /69+\varepsilon} X^{-359 / 414+\varepsilon} + T^{3 / 2+\varepsilon}\right)\\ & \ll \sum_{T ~{\rm{dyadic}} \atop Y^{2/5+\varepsilon} \ll T \ll X^{1 /3+\delta+\varepsilon}} \left(X^{55 / 414 + \varepsilon} T^{109 /69+\varepsilon} + T^{3 / 2 + \varepsilon}\right) \\ & \ll X^{\frac{109}{69}\delta + \frac{91}{138} +\varepsilon}. 
\end{aligned} \end{equation} Finally, putting together the above estimates \eqref{sum of lambda 1}, \eqref{move integral} and \eqref{I(X)1}, we conclude that $$ \sum_{X / 2<n \leq X} \lambda_{1\boxplus(f\times g)}(n)= L(1, f\times g) \tilde{W}(1) X + O\left(X^{\frac{109}{69}\delta + \frac{91}{138}+ \varepsilon}\right) + O\left(X^{2/3 - \delta + \varepsilon}\right). $$ This completes the proof of Corollary \ref{cor:main}, upon taking $\delta \leq 1/356$. \end{proof} \section{Estimation of integrals}\label{proofs-of-technical-lemma} We first prove Lemma \ref{integral:lemma-0}. \begin{proof}[Proof of Lemma \ref{integral:lemma-0}] By \eqref{I-change-0}, we write \begin{eqnarray*} \mathfrak{I}(m,n,q) =2\int_0^\infty y\widetilde{V}(y^2) e\left(t\varphi(y^2)+By-Dy\right)\mathrm{d}y, \end{eqnarray*} where \begin{eqnarray}\label{BD} B=2q^{-1}(nX)^{1/2}\asymp \sqrt{XN_1}/C, \qquad D=2q^{-1}(mX)^{1/2}\asymp (MX)^{1/2}/C. \end{eqnarray} Recall from \eqref{M-N1-range} that $N_1\asymp X\Xi^2/Q^2$. Thus for $X^{1+\varepsilon}\Xi/(Qt) \leq C\ll Q$, we have \begin{eqnarray}\label{B upper bound} B\ll \frac{X^{1+\varepsilon}\Xi}{CQ}\ll X^{-\varepsilon}t. \end{eqnarray} Therefore, the integral $\mathfrak{I}(m,n,q)$ is negligibly small unless $D\asymp t$. Assume \begin{eqnarray}\label{phi assumption} (\varphi(y^2))'=cy^{-\beta} \qquad \text{with}\quad \beta\neq 0, \end{eqnarray} where $c>0$ is an absolute constant, i.e., \begin{eqnarray}\label{phi assumption-2} \varphi(y)=\frac{c}{2}\log y+c_1\qquad \text{or}\qquad \varphi(y)=\frac{c}{1-\beta}y^{(1-\beta)/2}+c_2 \quad \text{with}\; \beta\neq 0,1, \end{eqnarray} where $c_i\in \mathbb{R}, i=1,2$, are absolute constants. Without loss of generality, we further assume $c_i=0,i=1,2$. Let $\rho(y)=t\varphi(y^2)+By-Dy$. Then \begin{eqnarray*} \rho'(y)&=&cty^{-\beta}+B-D,\\ \rho^{(j)}(y)&=&t\big(\varphi(y^2)\big)^{(j)}, \quad j=2,3,\ldots.
\end{eqnarray*} The stationary point $y_*$, i.e., the solution to the equation $\rho'(y)=cty^{-\beta}+B-D=0$, is $y_*=\left(\frac{ct}{D-B}\right)^{1/\beta}$. Denote \begin{eqnarray*} C_{\alpha}^j=\frac{\alpha(\alpha-1)\cdots (\alpha-j+1)}{j!}. \end{eqnarray*} Then by the Taylor series approximation, $y_*$ can be written as \begin{eqnarray}\label{stationary point} y_*&=&\left(\frac{ct}{D}\right)^{1/\beta}\bigg(1+ \sum_{j=1}^{K_1} C_{-1/\beta}^{j}\left(\frac{-B}{D}\right)^j +O_{\beta,K_1}\left(\frac{B^{K_1+1}}{t^{K_1+1}}\right) \bigg) \nonumber\\ &:=&y_0\left(1+\sum_{j=1}^{K_1}y_j +O_{c,\beta,K_1}\left(\frac{B^{K_1+1}}{t^{K_1+1}}\right) \right), \end{eqnarray} where here and in what follows, $K_j\geq 1$, $j=1,2,3,\ldots,$ denote integers, and \begin{eqnarray*} y_0&=&\left(\frac{ct}{D}\right)^{1/\beta} \asymp 1,\\ y_{j}&=&C_{-1/\beta}^{j}\left(\frac{-B}{D}\right)^j \asymp \left(\frac{B}{t}\right)^{j}. \end{eqnarray*} By \eqref{B upper bound}, the $O$-term in \eqref{stationary point} is $O(N^{-\varepsilon K_1 })$, which can be made arbitrarily small by taking $K_1$ sufficiently large. Note that $\rho^{(j)}(y)\asymp t$ for any integer $j\geq 2$. Recall $\widetilde{V}^{(j)}(y)\ll_j \triangle^j$, where $\triangle<t^{1-\varepsilon}$ (see \eqref{derivative-of-V}). To make sure that the stationary phase analysis is applicable to the integral $\mathfrak{I}(M\xi,n,q)$, we assume $\triangle$ satisfies \begin{eqnarray}\label{assumption-on-Delta} \triangle<t^{1/2-\varepsilon}. \end{eqnarray} Now applying Lemma \ref{lemma:exponentialintegral} with $Z=1$, $Y=\triangle$, $H=t$ and $R=H/Y^2\gg t^{\varepsilon}$, we have \begin{eqnarray*} \mathfrak{I}(m,n,q) =\frac{e(\rho(y_*))}{\sqrt{2\pi \rho''(y_*)}} G(y_*) + O_{A}( t^{-A}), \end{eqnarray*} for any $A>0$, where $G(y)$ is some inert function supported on $y\asymp 1$.
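As a quick numerical sanity check of the binomial expansion \eqref{stationary point}, one can compare $y_*=(ct/(D-B))^{1/\beta}$ against the truncated series $y_0\sum_{j} C_{-1/\beta}^{j}(-B/D)^j$. The sketch below uses purely illustrative parameter values (chosen so that $B$ is much smaller than $t\asymp D$, in the spirit of the proof); they are not taken from the paper.

```python
from math import factorial

def gen_binom(alpha, j):
    """Generalized binomial coefficient C_alpha^j = alpha(alpha-1)...(alpha-j+1)/j!."""
    num = 1.0
    for i in range(j):
        num *= alpha - i
    return num / factorial(j)

# Illustrative parameters (B much smaller than D, which plays the role of t).
c, beta, t, B, D = 1.0, 2.0, 10.0, 0.1, 3.0

y_star = (c * t / (D - B)) ** (1.0 / beta)   # exact stationary point
y0 = (c * t / D) ** (1.0 / beta)

# Truncated expansion y0 * sum_j C_{-1/beta}^j (-B/D)^j; the tail is O((B/D)^{K+1}).
K = 8
y_approx = y0 * sum(gen_binom(-1.0 / beta, j) * (-B / D) ** j for j in range(K + 1))

assert abs(y_star - y_approx) < 1e-10
```

With $B/D=1/30$, eight correction terms already reproduce $y_*$ to well beyond double precision in the tail, illustrating why the $O$-term can be made arbitrarily small by enlarging $K_1$.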
From \eqref{phi assumption-2} and \eqref{stationary point}, using the Taylor series approximation, we have \begin{eqnarray*} \rho(y_*)&=&t\varphi(y_*^2)+By_*-Dy_*\\ &=&t\varphi(y_0^2)-Dy_0+By_0+\frac{y_0^2}{2c\beta^2}\frac{B^2}{t}+B\sum_{j=2}^{K_2}g_{c,\beta,j}\left(y_0\right) \left(\frac{B}{t}\right)^j+O_{c,\beta,K_2}\left(\frac{B^{K_2+2}}{t^{K_2+1}}\right) \end{eqnarray*} and \begin{eqnarray*} \rho''(y_*)=-c\beta ty_*^{-\beta-1} =-c\beta ty_0^{-\beta-1}+B(\beta+1)y_0^{-1}+ B\sum_{j=1}^{K_3}h_{c,\beta,j}\left(y_0\right) \left(\frac{B}{t}\right)^j+O_{c,\beta,K_3}\left(\frac{B^{K_3+2}}{t^{K_3+1}}\right) \end{eqnarray*} for some functions $g_{c,\beta, j}(x)$, $h_{c,\beta, j}(x)$ of polynomial growth, depending only on $c,\beta,j$, and supported on $x\asymp 1$. Note that $\rho''(y_*)\asymp t$. Hence, \begin{eqnarray*} \mathfrak{I}(m,n,q) &=&\frac{1}{\sqrt{t}}G_{\natural}(y_*) e\left(t\varphi(y_0^2)-Dy_0+By_0+\frac{y_0^2}{2c\beta^2}\frac{B^2}{t}\right)\nonumber\\ &&\qquad\times e\left(B\sum_{j=2}^{K_2}g_{c,\beta,j}\left(y_0\right) \left(\frac{B}{t}\right)^j\right) + O_{A}(t^{-A}), \end{eqnarray*} where $ G_{\natural}(y)=\left(t/(2\pi \rho''(y))\right)^{1/2} G(y)$ satisfies $G_{\natural}^{(j)}(y)\ll_j 1$. This finishes the proof of the lemma. \end{proof} Next we prove Lemma \ref{integral:lemma}. \begin{proof}[Proof of Lemma \ref{integral:lemma}] The proof is similar to \cite[Lemma 4.3]{LS}. Recall \eqref{H-integral}, which we relabel as \begin{eqnarray}\label{H-relabel} \mathcal{H}(x)=\int_{\mathbb{R}} \omega\left(\xi\right) \mathfrak{I}^*\left(M\xi,n_1,q\right) \overline{\mathfrak{I}^*\left(M\xi,n_2,q\right)} \, e\left(-x\xi\right)\mathrm{d}\xi, \end{eqnarray} where by \eqref{I*}, \begin{eqnarray}\label{I*-2} \mathfrak{I}^*(M\xi,n,q) =\frac{1}{\sqrt{t}}G_{\natural}(y_*) e\left(By_0+\frac{y_0^2}{2c\beta^2}\frac{B^2}{t}+B\sum_{j=2}^{K_2}g_{c,\beta,j}\left(y_0\right) \left(\frac{B}{t}\right)^j\right) + O_{A}(t^{-A}).
\end{eqnarray} Here $y_0, y_*$ are as in \eqref{stationary point}, $G_{\natural}(x)$ is some inert function supported on $x\asymp 1$, $B=2q^{-1}(nX)^{1/2}$ is defined in \eqref{BD} and $g_{c,\beta, j}(x)$ is some function of polynomial growth depending only on $c,\beta,j$. Trivially, one has \begin{eqnarray*} \mathcal{H}(x)\ll t^{-1}. \end{eqnarray*} This proves the first statement of Lemma \ref{integral:lemma}. Plugging \eqref{I*-2} into \eqref{H-relabel}, we obtain \begin{eqnarray*} &&\mathcal{H}(x)=\frac{1}{t} \int_{\mathbb{R}} \omega\left(\xi\right)G_{\natural}(y_*)\overline{G_{\natural}(y_*')} e\left(-x\xi+(B-B')\widetilde{y}_0\xi^{-1/(2\beta)}+ (B^2-B'^2)\frac{\widetilde{y}_0^2}{2c\beta^2t}\xi^{-1/\beta}\right)\\ &&\qquad\qquad\times e\left(\sum_{j=2}^{K_2}g_{c,\beta,j}(\widetilde{y}_0\xi^{-1/(2\beta)}) \bigg(B\bigg(\frac{B}{t}\bigg)^j -B'\bigg(\frac{B'}{t}\bigg)^j\bigg)\right) \mathrm{d}\xi + O_{A}( t^{-A}), \end{eqnarray*} where $\widetilde{y}_0=y_0\xi^{1/(2\beta)}=(ct/\widetilde{D})^{1/\beta}\asymp 1$ with $\widetilde{D}=D\xi^{-1/2}=2q^{-1}(MX)^{1/2}$ as in \eqref{BD}, $y_0, y_*$ are as in \eqref{stationary point}, $B$ is defined in \eqref{BD} with $n=n_1$, and $B'$ is defined in the same way but with $n_1$ replaced by $n_2$. Note that the first derivative of the phase function in the above integral equals \begin{eqnarray}\label{1st phase function} &&-x-\frac{1}{2\beta}(B-B')\widetilde{y}_0\xi^{-1/(2\beta)-1}-\frac{1}{\beta} (B^2-B'^2)\frac{\widetilde{y}_0^2}{2c\beta^2t}\xi^{-1/\beta-1}\nonumber\\&& -\frac{1}{2\beta}\widetilde{y}_0\xi^{-1/(2\beta)-1}\sum_{j=2}^{K_2}g'_{c,\beta,j}(\widetilde{y}_0\xi^{-1/(2\beta)}) \bigg(B\bigg(\frac{B}{t}\bigg)^j -B'\bigg(\frac{B'}{t}\bigg)^j\bigg) \end{eqnarray} which is $\gg |x|\gg X^{\varepsilon}$ if $|x|\gg X^{\varepsilon}\sqrt{XN_1}/C\asymp X^{1+\varepsilon}\Xi/(CQ)$ since $B, B'\asymp \sqrt{XN_1}/C$ and $N_1\asymp X\Xi^2/Q^2$ in \eqref{M-N1-range}.
Then repeated integration by parts shows that the contribution from $x\gg X^{1+\varepsilon}\Xi/(CQ)$ is negligible. Thus the second statement of Lemma \ref{integral:lemma} is clear. Moreover, if $-1/(2\beta)-1\neq 0$, i.e., $\beta\neq -1/2$ or equivalently, $\varphi(x)\neq cx^{3/4}$, the second term in \eqref{1st phase function} is of size \begin{eqnarray*} |B-B'|=\frac{2N^{1/2}}{q}|n_1^{1/2}-n_2^{1/2}| \asymp \frac{X^{1/2}}{CN_1^{1/2}}|n_1-n_2|\asymp \frac{Q}{C\Xi}|n_1-n_2| \end{eqnarray*} since $N_1\asymp X\Xi^2/Q^2$. Thus repeated integration by parts shows that $\mathcal{H}(x)$ is negligibly small unless $|x|\asymp \frac{Q}{C\Xi}|n_1-n_2|$. Now by applying the second derivative test in Lemma \ref{lem: 2st derivative test, dim 1}, we infer that for $x\neq 0$ and $\varphi(x)\neq cx^{3/4}$, \begin{eqnarray*} \mathcal{H}(x)\ll t^{-1} |x|^{-1/2}. \end{eqnarray*} This proves (3). Finally, for $x=0$, using the identity $a^{j+1}-b^{j+1}=(a-b)(a^j+a^{j-1}b+\cdots+ab^{j-1}+b^j)$ and \eqref{B upper bound}, one sees that, for $j\geq 1$, \begin{eqnarray*} B\bigg(\frac{B}{t}\bigg)^j -B'\bigg(\frac{B'}{t}\bigg)^j&=&(B-B')\left(\bigg(\frac{B}{t}\bigg)^j+ \bigg(\frac{B}{t}\bigg)^{j-1}\frac{B'}{t}+\cdots+\frac{B}{t}\bigg(\frac{B'}{t}\bigg)^{j-1} +\bigg(\frac{B'}{t}\bigg)^j \right)\\ &\ll& |B-B'|X^{-\varepsilon}. \end{eqnarray*} Thus the first derivative of the phase function in \eqref{1st phase function} is \begin{eqnarray*} \gg |B-B'|\asymp \frac{Q}{C\Xi}|n_1-n_2|. \end{eqnarray*} By repeated integration by parts, $\mathcal{H}(0)$ is negligibly small unless $|n_1-n_2|\ll C\Xi N^{\varepsilon}/Q$. Since $\Xi\ll N^{\varepsilon}$ and $C\ll Q$, we have that $\mathcal{H}(0)$ is negligibly small unless $|n_1-n_2|\ll N^{\varepsilon}$. This completes the proof of Lemma \ref{integral:lemma}.
\end{proof} \begin{bibdiv} \begin{biblist} \bib{ASS20}{article} { author = {Acharya, Ratnadeep}, author = {Sharma, Prahlad}, author = {Singh, Saurabh Kumar}, title = {$t$-aspect subconvexity for $\rm GL(2) \times \rm GL(2)$ $L$-function}, note={\url{arXiv:2011.01172}}, date={2020}, } \bib{Agg}{article} { author = {Aggarwal, Keshav}, title = {A new subconvex bound for {$\rm GL(3)$} $L$-functions in the $t$-aspect}, journal={Int. J. Number Theory}, volume={17}, date={2021}, number={5}, pages={1111--1138}, doi={10.1142/S1793042121500275}, } \bib{AHLQ}{article} { author = {Aggarwal, Keshav}, author={Holowinsky, Roman}, author={Lin, Yongxiao}, author={Qi, Zhi}, title = {A Bessel delta-method and exponential sums for {$\rm GL(2)$}}, journal = {Q. J. Math.}, volume={71}, date={2020}, number={3}, pages={1143--1168}, doi = {10.1093/qmathj/haaa026}, } \bib{BR}{article} { author = {Bernstein, Joseph}, author={Reznikov, Andre}, title = {Subconvexity bounds for triple $L$-functions and representation theory}, journal = {Ann. of Math. (2)}, volume={172}, date={2010}, number={3}, pages={1679--1718}, } \bib{BJN}{article}{ author={Blomer, Valentin}, author={Jana, Subhajit}, author={Nelson, Paul}, title={The Weyl bound for triple product $L$-functions}, note={\url{arXiv:2101.12106}}, date={2021}, } \bib{BKY}{article}{ author={Blomer, Valentin}, author={Khan, Rizwanur}, author={Young, Matthew}, title={Distribution of mass of holomorphic cusp forms}, journal={Duke Math. J.}, volume={162}, date={2013}, number={14}, pages={2609--2644}, issn={0012-7094}, doi={10.1215/00127094-2380967}, } \bib{Bourgain}{article}{ author={Bourgain, J.}, title={Decoupling, exponential sums and the {R}iemann zeta function}, journal={J. Amer. Math. Soc.}, volume={30}, date={2017}, number={1}, pages={205--224}, } \bib{C}{article}{ author={Czarnecki, Kyle}, title={Resonance sums for Rankin-Selberg products of $\rm SL_m(\mathbb{Z})$ Maass cusp forms}, journal={J.
Number Theory}, volume={163}, date={2016}, pages={359--374}, doi={10.1016/j.jnt.2015.11.003}, } \bib{Del}{article}{ author={Deligne, Pierre}, title={La conjecture de Weil. I}, language={French}, journal={Inst. Hautes \'{E}tudes Sci. Publ. Math.}, number={43}, date={1974}, pages={273--307}, issn={0073-8301}, } \bib{Fri-Iwa}{article}{ author={Friedlander, John B.}, author={Iwaniec, Henryk}, title={Summation formulae for coefficients of $L$-functions}, journal={Canad. J. Math.}, volume={57}, date={2005}, number={3}, pages={494--505}, issn={0008-414X}, doi={10.4153/CJM-2005-021-5}, } \bib{GrahamKolesnik}{book}{ author={Graham, S.W.}, author={Kolesnik, G.}, title={van der {C}orput's method of exponential sums}, series={London Mathematical Society Lecture Note Series}, volume={126}, publisher={Published for Cambridge University Press, Cambridge}, date={1991}, } \bib{HB}{article}{ author={Huang, Bingrong}, title={On the Rankin-Selberg problem}, journal={Math. Ann.}, date={2021}, doi={10.1007/s00208-021-02186-7}, } \bib{HLW}{article}{ author={Huang, Bingrong}, author={Lin, Yongxiao}, author={Wang, Zhiwei}, title={Averages of coefficients of a class of degree 3 $L$-functions}, date={2021}, doi={10.1007/s11139-021-00417-8} } \bib{Hux2}{book}{ author={Huxley, M. N.}, title={Area, lattice points, and exponential sums}, series={London Mathematical Society Monographs.
New Series}, volume={13}, note={Oxford Science Publications}, publisher={The Clarendon Press, Oxford University Press, New York}, date={1996}, pages={xii+494}, isbn={0-19-853466-3}, } \bib{IK}{book}{ author={Iwaniec, Henryk}, author={Kowalski, Emmanuel}, title={Analytic number theory}, series={American Mathematical Society Colloquium Publications}, volume={53}, publisher={American Mathematical Society, Providence, RI}, date={2004}, pages={xii+615}, isbn={0-8218-3633-1}, doi={10.1090/coll/053}, } \bib{ILS}{article}{ author={Iwaniec, Henryk}, author={Luo, Wenzhi}, author={Sarnak, Peter}, title={Low lying zeros of families of $L$-functions}, journal={Inst. Hautes \'{E}tudes Sci. Publ. Math.}, number={91}, date={2000}, pages={55--131 (2001)}, issn={0073-8301}, } \bib{Jutila}{book}{ author={Jutila, M.}, title={Lectures on a method in the theory of exponential sums}, series={Tata Institute of Fundamental Research Lectures on Mathematics and Physics}, volume={80}, publisher={Published for the Tata Institute of Fundamental Research, Bombay; by Springer-Verlag, Berlin}, date={1987}, pages={viii+134}, isbn={3-540-18366-3}, } \bib{KP}{article}{ author={Kaczorowski, J.}, author={Perelli, A.}, title={On the structure of the Selberg class. VI. Non-linear twists}, journal={Acta Arith.}, volume={116}, date={2005}, number={4}, pages={315--341}, issn={0065-1036}, } \bib{KS}{article}{ author={Kim, Henry H.}, author={Sarnak, Peter}, title={Appendix 2 in Functoriality for the exterior square of $\rm GL_4$ and the symmetric fourth of $\rm GL_2$}, journal={J. Amer. Math. Soc.}, volume={16}, date={2003}, number={1}, pages={139--183}, } \bib{KPY}{article}{ author={Kiral, Eren Mehmet}, author={Petrow, Ian}, author={Young, Matthew P.}, title={Oscillatory integrals with uniformity in parameters}, language={English, with English and French summaries}, journal={J. Th\'{e}or. 
Nombres Bordeaux}, volume={31}, date={2019}, number={1}, pages={145--159}, issn={1246-7405}, } \bib{KMV}{article}{ author={Kowalski, E.}, author={Michel, Ph.}, author={VanderKam, J.}, title={Rankin--Selberg $L$-functions in the level aspect}, journal={Duke Math. J.}, volume={114}, date={2002}, number={1}, pages={123--191}, issn={0012-7094}, doi={10.1215/S0012-7094-02-11416-1}, } \bib{KMS19}{article} { author = {Kumar, Sumit}, author = {Mallesham, Kummari}, author = {Singh, Saurabh Kumar}, title = {Non-linear additive twist of {F}ourier coefficients of {$GL(3)$} {M}aass forms}, note={\url{arXiv:1905.13109}}, date={2019}, } \bib{LS}{article}{ author={Lin, Yongxiao}, author={Sun, Qingfeng}, title={Analytic twists of $\rm GL_3 \times \rm GL_2$ automorphic forms}, journal={Int. Math. Res. Not.}, date={2021}, doi={10.1093/imrn/rnaa348}, } \bib{Mun1}{article}{ author={Munshi, Ritabrata}, title={The circle method and bounds for $L$-functions---III: $t$-aspect subconvexity for $GL(3)$ $L$-functions}, journal={J. Amer. Math. Soc.}, volume={28}, date={2015}, number={4}, pages={913--938}, issn={0894-0347}, doi={10.1090/jams/843}, } \bib{Murty}{article}{ author={Murty, M. Ram}, title={On the estimation of eigenvalues of Hecke operators}, journal={Rocky Mountain J. Math.}, volume={15}, date={1985}, number={2}, pages={521--533}, issn={0035-7596}, doi={10.1216/RMJ-1985-15-2-521}, } \bib{Ren-Ye-1}{article}{ author={Ren, XiuMin}, author={Ye, YangBo}, title={Resonance of automorphic forms for $\rm{GL}(3)$}, journal={Trans. Amer. Math. Soc.}, volume={367}, date={2015}, number={3}, pages={2137--2157}, } \bib{Ren-Ye}{article}{ author={Ren, XiuMin}, author={Ye, YangBo}, title={Resonance and rapid decay of exponential sums of Fourier coefficients of a Maass form for $\rm{GL}_m(\Bbb{Z})$}, journal={Sci. China Math.}, volume={58}, date={2015}, number={10}, pages={2105--2124}, issn={1674-7283}, doi={10.1007/s11425-014-4955-3}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Historically, star clusters have been used as natural laboratories in which to test the theory of stellar evolution. This is particularly true for massive stars, where the short lifetimes and non-monotonic mass-luminosity relation make it very difficult to infer the evolutionary state of isolated stars. In clusters where the age and distance are known, it is possible to constrain the initial masses of a wide variety of post main-sequence objects. For example, it has been possible to constrain the nature of the Of/WNh stars \citep{Martins08}, infer the masses of the progenitors of neutron stars \citep{sgr1900paper}, argue for high mass progenitors to Wolf-Rayet (WR) stars (provided membership of, and the age of, the host cluster can be firmly established) \citep[e.g.][]{Humphreys85,Massey01,Clark05}, and measure an accurate mass-loss rate law for Red Supergiants (RSGs) \citep{Beasor-Davies16,Beasor-Davies18}. All of the above studies rely on being able to obtain accurate ages, reddenings and distances to the host star clusters. The latter quantity (and reddening, in the case of high foreground extinction) is vital in determining the bolometric luminosities of the cluster stars, allowing them to be placed on a diagnostic H-R diagram. Once the luminosities of the turn-off and post-MS stars are known, an age may then be inferred, although this process itself has many pitfalls \citep{Beasor2019}. Accurate distances to young star clusters are therefore pivotal to understanding the evolution of massive stars. Distances to young massive star clusters in the Milky Way have typically been estimated via three independent methods. If the cluster radial velocity can be measured, for example from the average of the member stars or from the surrounding interstellar medium, a kinematic distance may be inferred by comparing to the Galactic rotation curve \citep[e.g.][]{rsgc1paper,Kothes-Dougherty07}.
If the cluster has a low foreground extinction, deep optical imaging can reveal the `kink' in the main-sequence caused by the transition from the PP-chain to the CNO-cycle as the main form of energy generation, which can be used as a distance-sensitive anchor for isochrone fitting \citep[e.g.][]{Currie10}. Finally, if spectroscopic observations can go deep enough to detect the better-behaved main-sequence stars of spectral type late-O / early-B, spectroscopic parallaxes may be obtained \citep[e.g.][]{danks-paper,Crowther06}. Until recently, the much more direct method of obtaining distances, from astrometric parallaxes, was not possible for Galactic YMCs. Such objects are relatively rare, and so typically have distances $>$2kpc, requiring parallax measurements accurate to better than 0.1mas. Furthermore, at these distances there is often substantial reddening, compounding the problem. The second data release of Gaia (DR2) \citep{Gaia,GaiaDR2} therefore represents an opportunity to revolutionise the field of massive star research, as distances to several benchmark clusters and associations may now be obtained at much higher accuracy and precision than has previously been possible. In this paper, we focus on three star clusters young and massive enough to contain several Red Supergiants, and whose cluster members are bright enough in the optical and sufficiently uncrowded to have reliable detections in DR2. These clusters are $\chi$~Per, NGC~7419, and Westerlund~1. In Sect.\ \ref{sec:method} we describe our methodology in terms of how we select the benchmark cluster members and determine an average cluster parallax. In Sect.\ \ref{sec:results} we present the results, and conclude in Sect.\ \ref{sec:conc}. \section{Method} \label{sec:method} \subsection{Sample definition} We begin by searching the SIMBAD database for OB stars within 0.5\degr\ of the centre of each cluster.
We concentrate on OB stars, as the parallax measurements of late-type supergiants are known to be problematic owing to the size of the stars being comparable to (or greater than) the size of the Earth's orbit around the Sun \citep[see e.g.][]{Chiavassa11gaia}. We then cross-match this sample with Gaia~DR2, to obtain parallaxes $\pi$, proper motions (PMs), and associated errors. Following \citet{Lindegren18} and \citet[][ hereafter A19]{Aghakhanloo19}, we define the error on the parallax $\sigma_i$ of each star $i$ to be $\sigma_i = 1.086\sigma_\pi$, where $\sigma_\pi$ is the quoted error on $\pi$ in Gaia DR2. Next, we isolate those stars with PMs consistent with the cluster average. This allows us to eliminate stars with potentially anomalous parallaxes, such as runaways or binaries. We define the average PM for each cluster by performing an iterative sigma-clipped mean using the IDL function {\tt meanclip}, clipping at 1.5$\sigma$. We then isolate those stars within 2.5$\sigma_{\rm PM,i}$ of this mean, where $\sigma_{\rm PM,i}$ is the error on each star's PM. We deliberately set these tight (and potentially exclusive) constraints as we are not concerned with being complete, only with identifying the stars with reliable astrometric information. The results of this process are illustrated in \fig{fig:PMpi}. We define the remaining stars as the `clean' samples, which contain 62, 10 and 32 stars for the clusters $\chi$~Per, NGC~7419, and Wd~1 respectively. The sensitivity of our results to how aggressively we perform the PM cleaning is discussed in Sect.\ \ref{sec:results}. \subsection{Average cluster parallax, $\bar{\pi}$} \label{sec:pi} The next step is to define the average parallax $\bar{\pi}$ to each cluster. In \fig{fig:Ppi} we plot histograms of the parallaxes of the stars in each cluster field. We plot all OB stars in the fields of the clusters in black, and the cleaned sample in red.
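The PM-based cleaning described above can be sketched as follows. This is a hypothetical, one-dimensional Python analogue of the {\tt meanclip}-based procedure (the actual cuts act on both PM components; the function names and values here are illustrative only):

```python
import numpy as np

def meanclip(values, clip=1.5, max_iter=20):
    """Iterative sigma-clipped mean, in the spirit of the IDL meanclip routine."""
    vals = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        mean, std = vals.mean(), vals.std()
        if std == 0.0:
            break
        keep = np.abs(vals - mean) < clip * std
        if keep.all():
            break
        vals = vals[keep]
    return vals.mean()

def clean_sample(pm, pm_err, clip=1.5, n_sigma=2.5):
    """Flag stars whose PM lies within n_sigma of the clipped cluster mean,
    measured in units of each star's own PM error (one component only)."""
    pm = np.asarray(pm, dtype=float)
    pm_err = np.asarray(pm_err, dtype=float)
    mean = meanclip(pm, clip=clip)
    return np.abs(pm - mean) < n_sigma * pm_err
```

For example, a sample with one discrepant star, `clean_sample([1.0, 1.1, 0.9, 1.05, 5.0], [0.1]*5)`, flags only the outlier as a non-member.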
In each case, the parallaxes are somewhat normally distributed. This is just as one would expect if each star's parallax were randomly sampled from a gaussian distribution centred on the mean cluster parallax with a standard deviation characteristic of the error on each measurement. In fact, the errors on each $\pi_i$ are not all the same. To get a more representative illustration of the distribution of parallaxes, we determined the probability distribution functions for each $\pi_i$, assuming a gaussian distribution with width $\sigma_i$, and summed over all stars to determine the total $\pi$ probability function, $P_\pi$. The results are shown in the green curves of \fig{fig:Ppi}. The green dashed lines in these figures are the weighted means of the cleaned samples, which we call $\bar{\pi}$. We determine $\bar{\pi}$ and its error $\delta\bar{\pi}$ according to, \begin{equation} \bar{\pi} = \frac{\sum^{N}_{i} w_{i} \pi_{i}} {\sum^{N}_{i} w_{i}} , ~~~ \delta\bar{\pi} = \sqrt{ \frac{1}{N-1} \frac{ \sum^{N}_{i} w_{i} (\pi_{i} - \bar{\pi})^2 } {\sum^{N}_{i} w_{i}} } \end{equation} \noindent where $N$ is the number of stars in the `clean' sample, and the weights $w_i = 1/\sigma_i^2$. Note that the error on the mean $\delta\bar{\pi}$ is the weighted standard deviation divided by $\sqrt{N-1}$. \subsection{Distance, $d$} \label{sec:dist} To convert $\bar{\pi}$ to a distance $d$, we first determine the posterior probability distribution on $d$, \begin{equation} \displaystyle P_d \propto \exp \left({-\frac{1}{2} z^2}\right) \end{equation} \noindent where, \begin{equation} z = \frac{ \bar{\pi} - \pi_{\rm ZP} - 1/d}{\delta\bar{\pi}} \end{equation} \noindent and $\pi_{\rm ZP}$ is the zero-point parallax offset in Gaia~DR2. The quantity $\pi_{\rm ZP}$ has been studied by numerous authors using several independent methods, with values ranging from $-0.08$mas$<\pi_{\rm ZP}<-0.029$mas \citep{Stassun18,Riess18,Lindegren18,Graczyk19}.
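The weighted mean above and the conversion to a distance posterior (Sect.\ \ref{sec:dist}) can be sketched in Python as follows. Parallaxes are in mas and distances in kpc (so that $\pi = 1/d$); the grid limits, the default zero-point offset and its uncertainty (folded in quadrature into the parallax error, as described in the text) are illustrative defaults rather than fixed choices:

```python
import numpy as np

def weighted_mean_parallax(pi, sigma):
    """Weighted mean parallax and its uncertainty, following the equation above."""
    pi = np.asarray(pi, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    pibar = np.sum(w * pi) / np.sum(w)
    dpibar = np.sqrt(np.sum(w * (pi - pibar) ** 2) / np.sum(w) / (len(pi) - 1))
    return pibar, dpibar

def distance_posterior(pibar, dpibar, pi_zp=-0.05, dzp=0.03,
                       d_min=0.5, d_max=20.0, n_grid=40000):
    """Posterior P_d on a distance grid [kpc]; parallaxes in mas (pi = 1/d).
    The zero-point uncertainty dzp is added in quadrature to dpibar."""
    d = np.linspace(d_min, d_max, n_grid)
    err = np.hypot(dpibar, dzp)
    z = (pibar - pi_zp - 1.0 / d) / err
    P = np.exp(-0.5 * z ** 2)
    return d, P / P.sum()

def mode_and_68(d, P):
    """Most probable distance and 68 per cent confidence interval."""
    cdf = np.cumsum(P)
    cdf /= cdf[-1]
    return (d[np.argmax(P)],
            d[np.searchsorted(cdf, 0.16)],
            d[np.searchsorted(cdf, 0.84)])
```

Note that because $z$ is linear in $1/d$ rather than $d$, the resulting posterior is asymmetric in distance, which is why the confidence interval is read off the cumulative distribution rather than quoted as a symmetric error bar.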
Specifically, Lindegren et al.\ found that this offset varied with position on the sky with an amplitude of $\delta\pi_{\rm ZP}=\pm0.03$\,mas on spatial scales of about a degree. Since this fluctuation occurs on a spatial scale larger than the apparent size of our clusters, it must be assumed to affect all stars equally. That is, the error on $\pi_{\rm ZP}$ fixes a lower limit to the uncertainty on the absolute parallax of the cluster. With this in mind, we add the quantities $\delta\pi_{\rm ZP}$ and $\delta\bar{\pi}$ in quadrature when determining the absolute uncertainty on $\bar{\pi}$. Throughout this work we adopt an average value $\pi_{\rm ZP} = -0.05\pm0.03$mas. Having calculated $P_d$, the distance $d$ and uncertainty $\sigma_d$ are determined from the mode and 68\% confidence intervals on $P_d$. Note that, in contrast to other studies which attempt to determine distances from Gaia parallaxes, we do not apply a prior on distance when determining the posterior probability distribution. \begin{figure*} \begin{center} \includegraphics[width=5.8cm]{chiper_PMpi.eps} \includegraphics[width=5.8cm]{7419_PMpi.eps} \includegraphics[width=5.8cm]{wd1_PMpi.eps} \caption{Proper motions of the OB stars in the plane of each cluster. Stars deemed to be cluster members with high confidence based on their proper motions (the `clean' sample) are plotted as red circles (see text for details).} \label{fig:PMpi} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=5.8cm]{chiper_Ppi.eps} \includegraphics[width=5.8cm]{7419_Ppi.eps} \includegraphics[width=5.8cm]{wd1_Ppi.eps} \caption{Parallaxes of the OB stars in the field of each cluster. Black lines show the histograms of all stars in the field; red lines show the same but only for the clean sample; and green shows the total probability distribution for the average parallax, which takes into account the error bars on each parallax measurement.
The weighted mean parallax is shown as the green dashed line. The average zero-point parallax offset of -0.05mas has been applied to all stars (see text for details).} \label{fig:Ppi} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=5.8cm]{chiper_Pd.eps} \includegraphics[width=5.8cm]{7419_Pd.eps} \includegraphics[width=5.8cm]{wd1_Pd.eps} \caption{Posterior probability distribution of the distances to each cluster. The dashed lines represent the most probable distance and the 68\% confidence intervals.} \label{fig:dist} \end{center} \end{figure*} \section{Results and discussion} \label{sec:results} We now discuss our results for the average parallaxes and distances to the three clusters in our sample. The implications for the ages of these clusters, and therefore for how their stellar populations reconcile with stellar evolutionary theory, are complex, since they also depend on how one defines the age. This is discussed in a companion paper \citep{Beasor2019}. \subsection{$\chi$ Persei} Previous estimates of this cluster's distance have involved fitting the (pre-) main-sequence population in one form or another. The state-of-the-art was presented in \citet{Currie10}. These authors fit the main sequence in a variety of colours and magnitudes, as well as obtaining spectrophotometric distances from the stars with known spectral types, and consistently found a distance of $2.344^{+0.088}_{-0.085}$\,kpc. This compares well to similar analyses by \citet{Slesnick02}, \citet{Uribe02}, and \citet{Mayne-Naylor08}. An alternative estimate of the distance to $\chi$~Per can be found from the maser parallax measurement of the Red Supergiant S~Per. Though unlikely to be a member of the cluster itself (projected distance = 1.47\degr\ $\simeq$60pc at a distance of 2.25kpc), it does belong to the larger Perseus OB1 association, of which $\chi$~Per is also a member.
In an astrometric study of the H$_2$O masers around S~Per, \citet{Asaki10} found proper motions of (\hbox{$\alpha$=-0.49$\pm$0.23mas\,yr$^{-1}$}, \hbox{$\delta$=-1.19$\pm$0.20mas\,yr$^{-1}$}), and a parallax of \hbox{$\pi = 0.413\pm0.017$mas}. This is within the errors of that found for $\chi$~Per (see Figs.\ \ref{fig:PMpi} and \ref{fig:Ppi}, left panels), especially when one considers the zero point parallax error of \hbox{$\pi_{\rm ZP}=-0.05\pm0.03$\,mas}. The Gaia DR2 distance estimate for $\chi$~Per, \hbox{$d=2.25^{+0.16}_{-0.14}$\,kpc}, is consistent with the previous studies described above. The internal uncertainty on the average cluster parallax is extremely small ($\pm1.3\%$), and so the uncertainty on the absolute distance is dominated by that on $\pi_{\rm ZP}$. Even so, the absolute distance is precise to $\pm$7\%. Combined with the agreement with the two independent studies described above, we can consider the distance to $\chi$~Per to be extremely well constrained. \subsection{NGC~7419} In contrast to $\chi$~Per, the various distance estimates for NGC~7419 found in the literature span a broad range of values. Several studies exist which in one way or another fit the main sequence and/or spectroscopic parallaxes to the brightest main-sequence stars \citep{Beauchamp94,Caron03,Subramaniam06,Joshi08,Marco-Negueruela13}, but which find distances ranging from 1.7 to 4.0\,kpc. Further, despite having one extreme RSG as a cluster member (MY~Cep), which is a known maser emitter \citep[e.g.][]{Verheyen12}, there is no parallax measurement for this star, and so this object cannot be used to resolve the controversy. We find an average parallax of \hbox{$\bar{\pi} = 0.334 \pm 0.018$\,mas}\ from the `clean' sample of OB stars. We note that this value is robust to the details of which stars in the original OB sample we include in the averaging. As seen in \fig{fig:PMpi}, most OB stars in the plane of the cluster have similar proper motions.
Irrespective of how harsh we make the proper motion cuts, we always obtain the same average parallax within the errors. The parallax translates to a distance of \hbox{$d=3.00^{+0.35}_{-0.29}$\,kpc}, which is consistent with the mean of the measurements described in the previous paragraph. Again, the dominant source of error is that on $\pi_{\rm ZP}$. \subsection{Westerlund 1} As summarised recently by A19, there have been numerous and wide-ranging distance estimates for Wd~1. Of the contemporary measurements, whether they be based on the assumed intrinsic luminosities of B-supergiants \citep{Crowther06}, a kinematic distance based on the radial velocity of the HI gas \citep{Kothes-Dougherty07}, or fitting the (pre-) main sequence \citep{Brandner08}, all seem to converge on $\sim$4kpc. In the past year, other authors have looked at Wd~1's parallax information in Gaia. \citet{Clark18} quoted an average parallax of $\pi=$0.21-0.24 (assuming $\pi_{\rm ZP} = -0.05$), but commented that the errors on the parallaxes of individual stars meant that one could only say that the cluster was consistent with the recent estimates. A19 went further, and attempted to model the large number of stars in the plane of Wd~1 into field and cluster components based on the observed parallax distribution. For the cluster component, they found $\bar{\pi} = 0.31\pm0.04$mas, and a distance of $d=3.2\pm0.4$kpc. Though consistent with the canonical `4kpc' distance to Wd~1 to within the errors, these authors argued that this nearer distance would require an older age for the cluster, and would have profound implications for the origins of Wd~1's many post main-sequence objects. The methodology of our study is different enough from that of A19 to be complementary. In A19, they assume that the `core-region' is dominated by cluster stars, which gives them a very large sample. This sample inevitably contains foreground contaminants, which these authors then attempt to model out.
Though we have fewer stars on which to base $\bar{\pi}$, the spectroscopic and proper-motion selection functions mean that we have very high membership probabilities for all stars in our sample. This means that we do not have to fit for the spatial distribution of the field star population, and so have at least three fewer free parameters (the cluster and field star densities, and the lengthscale for the field star distribution function). We find an average parallax to Wd~1 of \hbox{$\bar{\pi} = 0.259 \pm 0.036$}\,mas, where we have applied the zero-point offset of $\pi_{\rm ZP} = -0.05$, but have {\it not} yet included the error on $\delta\pi_{\rm ZP}$ in the total uncertainty. This agrees to within $\sim2\sigma$ with that found by A19 and \citet{Clark18}, once the same value of $\pi_{\rm ZP}$ is adopted. There is a variation in our measurement of $\bar{\pi}$ of $\pm5\%$ depending on how tightly we perform the proper motion cleaning and whether we incorporate the excess astrometric noise into the parallax error, though this is well within the quoted $1\sigma$ uncertainty. The impact of this variation on the inferred distance is discussed next. The posterior distribution on distance $P_d$ is plotted in \fig{fig:dist}. Our result on the distance to Wd~1 is \hbox{$d=3.87^{+0.95}_{-0.64}$\,kpc}. The variation of $\bar{\pi}$ caused by how aggressively we perform the proper motion cleaning can cause the inferred distance to vary between 3.6--4.1kpc. As with the average parallax, our distance estimate is within the errors of that of A19, but systematically higher, and with conspicuously larger errors despite the errors on $\bar{\pi}$ being comparable. We are unable to provide a definitive explanation for this, but we speculate that it is caused by our treatment of $\delta\pi_{\rm ZP}$.
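As a rough numerical cross-check of this error budget (a naive parallax inversion, not the full Bayesian posterior behind \fig{fig:dist}, so the interval comes out slightly narrower than the quoted $^{+0.95}_{-0.64}$), the role of $\delta\pi_{\rm ZP}$ can be sketched as:

```python
import numpy as np

# Wd 1 numbers quoted in the text: average parallax (zero point already
# applied), its statistical error, and the systematic zero-point error.
pi_bar, sigma_stat, dpi_zp = 0.259, 0.036, 0.03   # all in mas

# The zero-point error is common to all cluster stars, so it adds in
# quadrature and does not average down with the number of stars.
sigma_tot = np.hypot(sigma_stat, dpi_zp)

d = 1.0 / pi_bar                    # kpc, since pi is in mas
d_hi = 1.0 / (pi_bar - sigma_tot)
d_lo = 1.0 / (pi_bar + sigma_tot)
print(f"d = {d:.2f} +{d_hi - d:.2f} / -{d - d_lo:.2f} kpc")
# -> d = 3.86 +0.85 / -0.59 kpc
```

The same quadrature treatment applied to the NGC~7419 numbers ($0.334\pm0.018$\,mas) reproduces the quoted $3.00^{+0.35}_{-0.29}$\,kpc.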
Here, we assume that the error on Gaia's zero-point parallax offset affects all stars equally, since the angular scale for variations in $\pi_{\rm ZP}$ \citep[$\sim$1\degr, ][]{Lindegren18} is larger than the radius of Wd~1. This means that $\delta\pi_{\rm ZP}$ sets a hard limit on the precision of any measurement of absolute distance, regardless of the number of stars used to define the cluster average parallax. Our measurement of Wd~1's distance therefore places it close to the $\sim$4kpc found by previous studies. Furthermore, the uncertainty on this distance is roughly double that quoted by A19. We argue that this error bar cannot be reduced without a better characterization of Gaia's zero-point parallax offset. In addition, the chromatic calibration of Gaia in DR2 is still in its initial stages, and so this may be a further source of systematic error for heavily reddened clusters such as Wd~1. \section{Summary} \label{sec:conc} Using Gaia Data Release 2, we have reappraised the distances to three Milky Way young massive star clusters using the average parallaxes of their hot star cluster members. For $\chi$~Per, we find a distance in excellent agreement with earlier estimates (\hbox{$d=2.25^{+0.16}_{-0.14}$\,kpc}). For NGC~7419, our distance is right in the middle of the varied estimates present in the literature (\hbox{$d=3.00^{+0.35}_{-0.29}$\,kpc}). Finally, for Westerlund~1, our distance of \hbox{$d=3.87^{+0.95}_{-0.64}$\,kpc}\ is consistent with previous estimates, though with a larger error than a recent paper which also uses Gaia DR2 parallaxes. We argue that our errors are the more realistic given the current uncertainties on Gaia's zero-point parallax offset. The implications of these revised distances for the cluster ages are discussed in a companion paper \citep{Beasor2019}.
\section*{Acknowledgements} We thank Mojgan Aghakhanloo, Simon Clark, Paul Crowther, Sebastian Kamann, Jeremiah Murphy, Ignacio Negueruela and the anonymous referee for useful discussions and constructive comments. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. \bibliographystyle{mnras}
\section{Overview} This supplementary material includes behavior analysis, ablation studies, limitations, visualizations, and future direction of our PU-Transformer for point cloud upsampling. \section{Behavior Analysis} \noindent \textbf{Positional Fusion Block:} Although our Positional Fusion block utilizes operations similar to those of the Local Context Fusion (LCF) block proposed in~\cite{qiu2021pnp}, there are three main differences between these two methods. First, our block operates on the \emph{patches} of point clouds that have explicit borders, while the LCF extracts the local context from a \emph{whole} point cloud where more outliers could be involved. Second, all of our blocks in PU-Transformer share the \emph{same} geometric relations, but each LCF block requires a \emph{distinct} geometric relation that is specified in the corresponding point cloud resolution. Last but not least, our block serves as a feature \emph{encoding} block that helps to gradually expand the channel dimension of the point cloud feature map, while the LCF aims to \emph{refine} the feature representations in the same embedding space of the input. In addition, the effects of our Positional Fusion block can be seen in the comparisons in Figure~\ref{fig:posfus}: by applying the block, the generated points can better align with the contour of a point cloud object, retaining high-fidelity local detail with fewer outliers. \begin{figure}[ht] \begin{center} \includegraphics[width=0.95\columnwidth]{images/supp_vis.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{Upsampling results of the PU-Transformer \emph{with} and \emph{without} using the Positional Fusion block. } \label{fig:posfus} \vspace{-5mm} \end{figure} \vspace{1mm} \noindent \textbf{SC-MSA Block:} In Sec.~3.3 of the main paper, we state that it is easier for our SC-MSA block to integrate the information between the connected multi-head outputs compared to regular MSA~\cite{vaswani2017attention}.
The main reason can be explained as follows: since two consecutive heads share some input channels, both heads' outputs are affected/regulated by such shared channel-wise information, leading to less varying estimations of point-wise dependencies. As \emph{any} two consecutive heads in our SC-MSA follow this manner, the outputs of all connected multi-heads become less varying, benefiting the overall estimations of point-wise dependencies. Moreover, there is practical evidence to support this: as shown in Figure~\ref{fig:loss}, we clearly observe that the overall training loss of using SC-MSA is lower than that of using MSA. In other words, by using the SC-MSA, the integrated information can help our upsampling predictions better approach the given dense ground-truths. \section{Ablation Studies} \noindent \textbf{Normalization Operations:} As indicated in Fig.~2 and Alg.~1 of the main paper, the Transformer Encoder incorporates two normalization operations in the fashion of transformers. In practice, NLP-related models favor layer normalization (LN)~\cite{ba2016layer} while image-related methods prefer batch normalization (BN)~\cite{ioffe2015batch}. In terms of the point cloud upsampling task, we select the type of normalization operations (\ie, \enquote{Norm$_{1}$} and \enquote{Norm$_{2}$} in Table~\ref{tab:abl1}) in the PU-Transformer based on the practical performance. Table~\ref{tab:abl1} shows the quantitative results of \emph{five} options ($D_1$ to $D_5$), indicating that the two normalization operations are crucial while the effects of BN and LN are very similar. Considering the relative simplicity and effectiveness, we thus adopt the LN operation for both \enquote{Norm$_{1}$} and \enquote{Norm$_{2}$} (\ie, model $D_5$), in order to further regulate the point features encoded by our Positional Fusion and SC-MSA blocks.
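For a point feature map of shape $(N, C)$, the practical difference between the two candidate operations is simply the axis of normalization: LN standardizes each point over its $C$ channels, whereas BN standardizes each channel over all $N$ points. A minimal numpy sketch (without the learnable affine parameters of the real operations):

```python
import numpy as np

def layer_norm(F, eps=1e-5):
    # F: (N, C) point feature map; normalize each point over its C channels.
    mu = F.mean(axis=1, keepdims=True)
    var = F.var(axis=1, keepdims=True)
    return (F - mu) / np.sqrt(var + eps)

def batch_norm(F, eps=1e-5):
    # Normalize each channel over all N points (training-mode statistics).
    mu = F.mean(axis=0, keepdims=True)
    var = F.var(axis=0, keepdims=True)
    return (F - mu) / np.sqrt(var + eps)

F = np.random.randn(1024, 32) * 3.0 + 1.0
assert np.allclose(layer_norm(F).mean(axis=1), 0, atol=1e-6)  # per point
assert np.allclose(batch_norm(F).mean(axis=0), 0, atol=1e-6)  # per channel
```

Unlike BN, the LN statistics do not depend on the composition of the batch, which is one reason it is the simpler choice here.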
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{images/Figure_1.png} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{The training loss of using SC-MSA or MSA~\cite{vaswani2017attention} in the PU-Transformer body, respectively. Overall, compared to the regular MSA block, our SC-MSA contributes to a better convergence (lower loss) in the training procedure.} \label{fig:loss} \vspace{-5mm} \end{figure} \vspace{1mm} \noindent \textbf{\emph{PU1K} and \emph{PU-GAN} Datasets:} Different from some works~\cite{yu2018pu, yifan2019patch, qian2020pugeo} testing their proposed models using their own data, we quantitatively evaluate the PU-Transformer on two public datasets: \emph{PU1K}~\cite{qian2021pu} and \emph{PU-GAN}~\cite{li2019pu}. Particularly, we utilize the same experimental settings and results from PU-GCN~\cite{qian2021pu} and Dis-PU~\cite{li2021point}, in order to have a fair comparison with state-of-the-art methods in Tab. 1 and 2 of the main paper. Moreover, we investigate the difference between the \emph{PU1K} and \emph{PU-GAN} datasets by \emph{swapping} their training and testing data. According to the results ($E_1\&E_2$, $E_3\&E_4$) in Table~\ref{tab:abl2}, we find that given a small scale of training data\footnote{24,000 samples in the \emph{PU-GAN} dataset}, our PU-Transformer can still achieve a performance similar to that obtained with a large scale of training data\footnote{69,000 samples in the \emph{PU1K} dataset}. In addition, as shown by comparing $E_1\&E_3$ or $E_2\&E_4$, the test set of \emph{PU1K} is more challenging than the \emph{PU-GAN}'s, since there are 100 more testing samples in the \emph{PU1K} dataset.
(\enquote{Norm$_1$}: the operation applied in step 4 of Alg. 1; \enquote{Norm$_2$}: the operation applied in step 5 of Alg. 1; \enquote{BN}: batch normalization~\cite{ioffe2015batch}; \enquote{LN}: layer normalization~\cite{ba2016layer}; \enquote{\textbf{CD}}: Chamfer Distance; \enquote{\textbf{HD}}: Hausdorff Distance; \enquote{\textbf{P2F}}: Point-to-Surface Distance.)} \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{c|cc|ccc} \Xhline{3\arrayrulewidth} \multirow{2}{*}{models}&\multirow{2}{*}{Norm$_{1}$} &\multirow{2}{*}{Norm$_{2}$} &\textbf{CD}$\downarrow$ &\textbf{HD}$\downarrow$ &\textbf{P2F}$\downarrow$\\ & & &($\times {10}^{-3}$) &($\times {10}^{-3}$) &($\times {10}^{-3}$)\\\hline $D_1$ &\emph{none} &\emph{none} &0.684 &6.810 &1.522\\ $D_2$ &BN &BN &0.453 &4.144 &1.395\\ $D_3$ &BN &LN &\textbf{0.441} &3.869 &1.306\\ $D_4$ &LN &BN &0.477 &4.105 &1.285\\\hline $\bm{D_5}$ &LN &LN &0.451 &\textbf{3.843} &\textbf{1.277}\\\Xhline{3\arrayrulewidth} \end{tabular} \label{tab:abl1} } \end{center} \vspace{-5mm} \end{table} \section{Limitations} \noindent \textbf{Model Efficiency:} As mentioned in the main paper, regular transformer models emphasize effectiveness in visual applications, while their efficiency is not comparable to that of their CNN-based counterparts. Although PU-Transformer is a light transformer model, in point cloud upsampling, it still consumes more trainable parameters than some CNN-based methods~\cite{yu2018pu, qi2019deep, qian2021pu, li2021point}, producing a larger model as indicated in Table~\ref{tab:inference}. For the inference in practice, the speed of our approach is very close to that of other methods due to the fast parallel computing capability of GPUs. In contrast, the networks that apply complex architectures~\cite{li2019pu, li2021point} or conduct expensive geometric calculations~\cite{qian2020pugeo} will be a bit slower.
\vspace{1mm} \noindent \textbf{Upsampling Flexibility:} Since our approach follows a supervised end-to-end training paradigm, by conducting a single iteration of inference, we can only obtain the upsampling results at a fixed scale (\ie, upsampling ratio $r$). To generate other resolutions of outputs using a pre-trained PU-Transformer model, we have to apply some post-processing such as the farthest point sampling algorithm~\cite{qi2017pointnet} and multiple iterations of inference. For further improvement, we consider keeping the PU-Transformer's head and body for point feature encoding but designing a more flexible tail to generate different resolutions of outputs. The code and pre-trained models of PU-Transformer will be available at \url{https://github.com/}. \section{Visualizations} \noindent \textbf{Upsampling Noisy Input:} In Tab. 4 of the main paper, we quantitatively compare the PU-Transformer's robustness to random noise against other point cloud upsampling methods. Moreover, in Figure~\ref{fig:noise}, we qualitatively visualize its upsampling results under different noise levels. Generally, our approach is robust to random noise since the upsampling results in all noisy cases retain high-fidelity shapes. However, it is worth noting that the generated point cloud's uniformity can be affected as the noise level increases. \begin{table} \begin{center} \captionsetup{font=small, skip=3pt} \caption{PU-Transformer's quantitative results when using different training and testing data from \emph{PU1K} dataset~\cite{qian2021pu} and \emph{PU-GAN} dataset~\cite{li2019pu}.
(\enquote{\textbf{CD}}: Chamfer Distance; \enquote{\textbf{HD}}: Hausdorff Distance; \enquote{\textbf{P2F}}: Point-to-Surface Distance.)} \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{c|cc|ccc} \Xhline{3\arrayrulewidth} \multirow{2}{*}{models}&training &testing &\textbf{CD}$\downarrow$ &\textbf{HD}$\downarrow$ &\textbf{P2F}$\downarrow$\\ &data &data &($\times {10}^{-3}$) &($\times {10}^{-3}$) &($\times {10}^{-3}$)\\\hline $E_1$ &\emph{PU1K} &\emph{PU1K} &0.451 &3.843 &1.277\\ $E_2$ &\emph{PU-GAN} &\emph{PU1K} &0.469 &4.227 &1.387\\\hline $E_3$ &\emph{PU1K} &\emph{PU-GAN} &0.278 &2.091 &1.838\\ $E_4$ &\emph{PU-GAN} &\emph{PU-GAN} &0.273 &2.605 &1.836\\ \Xhline{3\arrayrulewidth} \end{tabular} \label{tab:abl2} } \end{center} \vspace{-5mm} \end{table} \begin{table} \begin{center} \captionsetup{font=small, skip=3pt} \caption{Efficiency of the point cloud upsampling methods.} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{c|cccccc|c} \Xhline{3\arrayrulewidth} \multirow{2}{*}{Methods}& PU-Net &MPU &PU-GAN &PUGeo &PU-GCN &Dis-PU &PU-Transformer\\ &\cite{yu2018pu}&\cite{yifan2019patch}&\cite{li2019pu}&\cite{qian2020pugeo}&\cite{qian2021pu}&\cite{li2021point}&(ours)\\\hline\xrowht{7pt} Inference &\multirow{2}{*}{8.4ms}&\multirow{2}{*}{8.3ms}&\multirow{2}{*}{10.5ms}&\multirow{2}{*}{--}&\multirow{2}{*}{8.0ms}&\multirow{2}{*}{10.8ms}&\multirow{2}{*}{9.9ms}\\ Speed &&&&&&&\\\hline Model &\multirow{2}{*}{10.1M}&\multirow{2}{*}{6.2M}&\multirow{2}{*}{9.6M}&\multirow{2}{*}{22.9M}&\multirow{2}{*}{1.8M}&\multirow{2}{*}{13.2M}&\multirow{2}{*}{18.4M}\\ Size &&&&&&&\\\Xhline{3\arrayrulewidth} \end{tabular} \label{tab:inference} } \end{center} \vspace{-5mm} \end{table} \vspace{1mm} \noindent \textbf{Upsampling Different Input Sizes:} In Figure~\ref{fig:points}, we provide more examples to visualize our PU-Transformer's performance on upsampling various sizes of point cloud data. Similar to the effects shown in Fig. 
5 of the main paper, given different numbers of input points, our proposed model can always generate high-quality dense output. \vspace{1mm} \noindent \textbf{Upsampling Real Point Clouds:} We present a few examples of upsampling real point cloud data with our PU-Transformer. Particularly, Figure~\ref{fig:real} illustrates the upsampled results of a LiDAR street~\cite{behley2019semantickitti}, an indoor living room~\cite{dai2017scannet}, a conference room~\cite{armeni2017joint}, and some real-scanned objects~\cite{uy2019revisiting}. In general, the overall quality of input data is significantly improved, where the generated points are well organized in a uniform distribution. For object instances (\eg, \enquote{cars} and \enquote{chairs}), the representative features have been enhanced, enabling easier visual recognition. \section{Future Direction} As the first transformer model for point cloud upsampling, we believe our PU-Transformer has great potential for different applications. In the low-level vision area, we can propose a \emph{multifunctional} tail targeting multiple tasks, including upsampling, completion, denoising, \etc. For high-level vision applications, we can develop a lightweight transformer based on the compact structure of the PU-Transformer's body. \begin{figure} \begin{center} \includegraphics[width=0.965\columnwidth]{images/supp_vis2.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{Visualizations of PU-Transformer in upsampling noisy input point clouds, where the noise is generated from a standard normal distribution $\mathcal{N}(0,1)$ and multiplied with a factor $\beta = 0.5\%$, $1\%$, $1.5\%$, and $2\%$, respectively.
The input point clouds are in orange color, while the corresponding upsampled results are in blue.} \label{fig:noise} \vspace{-5mm} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{images/supp_vis3.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{Visualizations of PU-Transformer in upsampling different sizes of point clouds, where the number of input points is 256, 512, 1024, and 2048, respectively. The input point clouds are in orange color, while the corresponding upsampled results are in blue.} \label{fig:points} \vspace{-5mm} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{images/supp_vis4.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{Visualizations of PU-Transformer in upsampling real point clouds, including a LiDAR street (from SemanticKITTI dataset~\cite{behley2019semantickitti}), a living room (from ScanNet dataset~\cite{dai2017scannet}), a conference room (from S3DIS dataset~\cite{armeni2017joint}), as well as some real-scanned objects (from ScanObjectNN dataset~\cite{uy2019revisiting}). The input point clouds are in orange color, while the corresponding upsampled results are in blue.} \label{fig:real} \vspace{-5mm} \end{figure*} \end{document} \section{Conclusions} \label{sec:concl} This paper focuses on low-level vision for point cloud data in order to tackle its inherent \emph{sparsity} and \emph{irregularity}. Specifically, we propose a novel transformer-based model, PU-Transformer, targeting the fundamental point cloud upsampling task. Our PU-Transformer shows significant quantitative and qualitative improvements on different point cloud datasets compared to state-of-the-art CNN-based methods. By conducting related ablation studies and visualizations, we also analyze the effects and robustness of our approach.
Given the great potential of PU-Transformer in solving the low-level upsampling problem, in the future, we expect to further optimize its efficiency for real-time applications and extend its adaptability in high-level 3D visual tasks such as semantic segmentation and object detection. \section{Experiments} \subsection{Settings} \noindent \textbf{Training Details:} In general, our PU-Transformer is implemented using Tensorflow~\cite{abadi2016tensorflow} with a single GeForce 2080 Ti GPU running on the Linux OS. In terms of the hyperparameters for training, we heavily adopt the settings from PU-GCN~\cite{qian2021pu} and Dis-PU~\cite{li2021point} for the experiments in Tab.~\ref{tab:pu1k} and Tab.~\ref{tab:pugan}, respectively. For example, we have a batch size of 64 for 100 training epochs, an initial learning rate of $1\times10^{-3}$ with a 0.7 decay rate, \etc. Moreover, we only use the modified Chamfer Distance loss~\cite{yifan2019patch} to train the PU-Transformer, minimizing the average closest point distance between the input set $\mathcal{P}\in\mathbb{R}^{N\times3}$ and the output set $\mathcal{S}\in\mathbb{R}^{rN\times3}$ for efficient and effective convergence. \vspace{1mm} \noindent \textbf{Datasets:} To quantitatively evaluate the PU-Transformer's effectiveness on point cloud upsampling, we train and test our proposed model using two 3D benchmarks. \begin{itemize} \item \textbf{PU1K:} This is a new point cloud upsampling dataset introduced in PU-GCN~\cite{qian2021pu}. In general, the PU1K dataset incorporates 1,020 3D meshes for training and 127 3D meshes for testing, where most 3D meshes are collected from ShapeNetCore~\cite{chang2015shapenet} covering 50 object categories. To fit in with the patch-based upsampling pipeline~\cite{yifan2019patch}, the training data is generated from patches of 3D meshes via Poisson disk sampling. 
Specifically, the training data includes a total of 69,000 samples (patches), where each sample has 256 input points (low resolution) and a corresponding ground-truth of 1,024 points ($4\times$ high resolution). \item \textbf{PU-GAN Dataset:} This is an earlier dataset that was first used in PU-GAN~\cite{li2019pu} and generated in a similar way to PU1K but on a smaller scale. To be concrete, the training data comprises 24,000 samples (patches) collected from 120 3D meshes, while the testing data only contains 27 meshes. While the PU1K dataset consists of a large volume of data targeting the basic $4\times$ upsampling experiment, the compact data of the PU-GAN dataset allows us to conduct both $4\times$ and $16\times$ upsampling experiments. \end{itemize} \begin{table} \begin{center} \captionsetup{font=small, skip=3pt} \caption{Quantitative comparisons to state-of-the-art methods on the \emph{PU-GAN} dataset~\cite{li2019pu}. (\enquote{\textbf{CD}}: Chamfer Distance; \enquote{\textbf{HD}}: Hausdorff Distance; \enquote{\textbf{P2F}}: Point-to-Surface Distance.
All metric units are ${10}^{-3}$, and the best performances are denoted in \textbf{bold}.)} \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|ccc|ccc|} \Xhline{3\arrayrulewidth} \multirow{2}{*}{Methods} &\multicolumn{3}{c|}{$4\times$ Upsampling} &\multicolumn{3}{c|}{$16\times$ Upsampling} \\\cline{2-7} &\textbf{CD} $\downarrow$ &\textbf{HD} $\downarrow$ &\textbf{P2F} $\downarrow$ &\textbf{CD} $\downarrow$ &\textbf{HD} $\downarrow$ &\textbf{P2F} $\downarrow$ \\\hline\hline PU-Net~\cite{yu2018pu} &0.844 &7.061 &9.431 &0.699 &8.594 &11.619 \\ MPU~\cite{yifan2019patch} &0.632 &6.998 &6.199 &0.348 &7.187 &6.822 \\ PU-GAN~\cite{li2019pu} &0.483 &5.323 &5.053 &0.269 &7.127 &6.306 \\ Dis-PU~\cite{li2021point} &0.315 &4.201 &4.149 &\textbf{0.199} &4.716 &4.249 \\\hline\hline \rowcolor{Gray}\textbf{Ours} &\textbf{0.273} &\textbf{2.605} &\textbf{1.836} &0.241 &\textbf{2.310} &\textbf{1.687} \\\Xhline{3\arrayrulewidth} \end{tabular} \label{tab:pugan} } \end{center} \vspace{-5mm} \end{table} \vspace{1mm} \noindent \textbf{Evaluation Metrics:} As for the testing process, we follow the common practice widely utilized in previous point cloud upsampling works~\cite{yifan2019patch, li2019pu, li2021point, qian2021pu}. Specifically, we first cut the input point cloud into multiple seed patches covering all the $N$ points. Then, we apply the trained PU-Transformer model to upsample the seed patches with a scale of $r$. Finally, the farthest point sampling algorithm~\cite{qi2017pointnet} is used to combine all the upsampled patches into a dense output point cloud with $rN$ points. For the $4\times$ upsampling experiments in this paper, each testing sample has a low-resolution point cloud with 2,048 points, as well as a high-resolution one with 8,192 points.
Coupled with the original 3D meshes, we quantitatively evaluate the upsampling performance of our PU-Transformer based on three widely used metrics: (i) Chamfer Distance (CD), (ii) Hausdorff Distance~\cite{berger2013benchmark} (HD), and (iii) Point-to-Surface Distance (P2F). A lower value under these metrics denotes better upsampling performance. \subsection{Point Cloud Upsampling Results} \noindent \textbf{PU1K:} Table~\ref{tab:pu1k} shows the quantitative results of our PU-Transformer on the PU1K dataset. It can be seen that our approach outperforms other state-of-the-art methods on all three metrics. In terms of the Chamfer Distance metric, we achieve the best performance among all the tested networks, since the values reported for all other networks are higher than our 0.451. Under the other two metrics, the improvements of PU-Transformer are particularly significant: compared to the performance of the recent PU-GCN~\cite{qian2021pu}, our approach can almost \emph{halve} the values assessed under both the Hausdorff Distance (HD: 7.577$\rightarrow$3.843) and the Point-to-Surface Distance (P2F: 2.499$\rightarrow$1.277). \begin{table} \begin{center} \captionsetup{font=small, skip=3pt} \caption{Ablation study of the PU-Transformer's components tested on the \emph{PU1K} dataset~\cite{qian2021pu}.
Specifically, models $A_1$-$A_3$ investigate the effects of the Positional Fusion block, models $B_1$-$B_3$ compare the results of different self-attention approaches, and models $C_1$-$C_3$ test the upsampling methods in the tail.} \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c|c|c|ccc} \Xhline{3\arrayrulewidth} \multirow{2}{*}{models} & \multicolumn{2}{c|}{PU-Transformer Body} & \multirow{2}{*}{PU-Transformer Tail} & \multicolumn{3}{c}{Results ($\times {10}^{-3}$)} \\ \cline{2-3} \cline{5-7} & \multicolumn{1}{c|}{Positional Fusion} & Attention Type & & {\textbf{CD} $\downarrow$} &{\textbf{HD} $\downarrow$} & \textbf{P2F} $\downarrow$ \\ \hline\hline $A_1$ &None &SC-MSA & Shuffle &0.605 &6.477 &2.038\\ $A_2$ &$\mathcal{G}_{geo}$ &SC-MSA & Shuffle &0.558 &5.713 &1.751\\ $A_3$ &$\mathcal{G}_{feat}$ &SC-MSA & Shuffle &0.497 &4.164 &1.511\\\hline\hline $B_1$ &$\mathcal{G}_{geo}$ \& $\mathcal{G}_{feat}$ &SA~\cite{wang2018non} & Shuffle &0.526 &4.689 &1.492\\ $B_2$ &$\mathcal{G}_{geo}$ \& $\mathcal{G}_{feat}$ &OSA~\cite{guo2021pct} & Shuffle &0.509 &4.823 &1.586\\ $B_3$ &$\mathcal{G}_{geo}$ \& $\mathcal{G}_{feat}$ &MSA~\cite{vaswani2017attention} & Shuffle &0.498 &4.218 &1.427\\\hline\hline $C_1$ &$\mathcal{G}_{geo}$ \& $\mathcal{G}_{feat}$ &SC-MSA & MLPs~\cite{yu2018pu} &1.070 &8.732 &2.467\\ $C_2$ &$\mathcal{G}_{geo}$ \& $\mathcal{G}_{feat}$ &SC-MSA & DupGrid~\cite{yifan2019patch} &0.485 &3.966 &1.380\\ $C_3$ &$\mathcal{G}_{geo}$ \& $\mathcal{G}_{feat}$ &SC-MSA & NodeShuffle~\cite{qian2021pu}&0.505 &4.157 &1.404\\\hline\hline \rowcolor{Gray}\textbf{Full} &$\mathcal{G}_{geo}$ \& $\mathcal{G}_{feat}$ &SC-MSA &Shuffle &\textbf{0.451} &\textbf{3.843} &\textbf{1.277}\\\Xhline{3\arrayrulewidth} \end{tabular} \label{tab:abl_parts} } \end{center} \vspace{-5mm} \end{table} \vspace{1mm} \noindent \textbf{PU-GAN Dataset:} In addition, we conduct point cloud upsampling experiments using the dataset introduced in PU-GAN~\cite{li2019pu}. 
With a smaller scale of training data, we test more upsampling scales in comparison to different networks. Turning to the results in Table~\ref{tab:pugan}, we achieve state-of-the-art performance under all three evaluation metrics for the $4\times$ upsampling experiment. However, in the $16\times$ upsampling test, we (CD: 0.241) are slightly behind the latest Dis-PU network~\cite{li2021point} (CD: 0.199) evaluated under the Chamfer Distance metric: the Dis-PU applies two CD-related items as its loss function, hence gaining an edge on the CD metric only. As for the results under Hausdorff Distance and Point-to-Surface Distance metrics, our PU-Transformer shows significant improvements again, where some values (\eg, P2F in $4\times$, HD and P2F in $16\times$) are even lower than \emph{half} of Dis-PU's results. \vspace{1mm} \noindent \textbf{Overall Comparison:} In general, the experimental results in Tables~\ref{tab:pu1k} and~\ref{tab:pugan} indicate the great effectiveness of our PU-Transformer. Moreover, by quantitatively comparing to the CNN-based (\eg, GCN~\cite{li2019deepgcns}, GAN~\cite{goodfellow2014generative}) methods under different datasets, we are the first to demonstrate the superiority of a transformer model for point cloud upsampling. \subsection{Ablation Studies} \noindent \textbf{Effects of Components:} Table~\ref{tab:abl_parts} shows the experiments that replace PU-Transformer's major components with different options. Specifically, we test three simplified models ($A_1$-$A_3$) regarding the Positional Fusion block output (Eq.~\ref{equ:g}), where employing both local \emph{geometric} $\mathcal{G}_{geo}$ and \emph{feature} $\mathcal{G}_{feat}$ context (model \enquote{Full}) provides better performance compared to the others. As for models $B_1$-$B_3$, we apply different self-attention approaches to the Transformer Encoder, where our proposed SC-MSA (Sec.~\ref{sec:body}) block shows higher effectiveness on point cloud upsampling.
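To make the SC-MSA comparison above concrete, the channel-sharing split between consecutive heads can be sketched as follows. This is a toy numpy version with random, untrained projections and without the final output projection of a real attention block; the window width and overlap are illustrative choices, not the values used in the actual model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sc_msa(F, num_heads=4, overlap=8, seed=0):
    # F: (N, C) point feature map. Consecutive heads read channel windows
    # that overlap by `overlap` channels, so their outputs are regulated
    # by shared channel-wise information.
    N, C = F.shape
    rng = np.random.default_rng(seed)
    stride = C // num_heads
    width = stride + overlap                  # window wider than the stride
    heads = []
    for i in range(num_heads):
        start = min(i * stride, C - width)    # clamp the last window
        X = F[:, start:start + width]         # shares channels with its neighbour
        Wq, Wk, Wv = (0.1 * rng.standard_normal((width, width)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(width)) # (N, N) point-wise dependencies
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1)     # (N, num_heads * width)

out = sc_msa(np.random.randn(64, 32))         # windows [0:16],[8:24],[16:32],...
```

Setting `overlap=0` recovers the disjoint channel split of regular MSA, which is the baseline ($B_3$) this block is compared against.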
In terms of the upsampling method used in the PU-Transformer tail, some learning-based methods are evaluated as in models $C_1$-$C_3$. Particularly, the shuffle operation~\cite{shi2016real} indicates an effective and efficient way to obtain a high-resolution feature map since the output of the PU-Transformer body has been sufficiently informative. \vspace{1mm} \noindent \textbf{Robustness to Noise:} As the PU-Transformer can upsample different types of point clouds, including real scanned data, it is necessary to verify our model's robustness to noise. Concretely, we test the pre-trained models by adding some random noise to the sparse input data, where the noise is generated from a standard normal distribution $\mathcal{N}(0,1)$ and multiplied with a factor $\beta$. In practice, we conduct the experiments under three noise levels: $\beta = 0.5\%$, $1\%$ and $2\%$. Table~\ref{tab:abl_noise} quantitatively compares the testing results of state-of-the-art methods, and our proposed PU-Transformer achieves the best performance in all tested noise cases. \begin{table} \begin{center} \captionsetup{font=small, skip=3pt} \caption{The model's robustness to random noise tested on the \emph{PU1K} dataset~\cite{qian2021pu}. 
The noise is generated from the standard normal distribution $\mathcal{N}(0,1)$, and $\beta$ denotes the noise level.} \resizebox{\columnwidth}{!}{ \begin{tabular}{c|ccc|ccc|ccc} \Xhline{3\arrayrulewidth} \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{$\beta=0.5\%$} & \multicolumn{3}{c|}{$\beta=1\%$} & \multicolumn{3}{c}{$\beta=2\%$} \\ \cline{2-10} & {\textbf{CD} $\downarrow$} &{\textbf{HD} $\downarrow$} & \textbf{P2F} $\downarrow$ & {\textbf{CD} $\downarrow$} &{\textbf{HD} $\downarrow$} & \textbf{P2F} $\downarrow$ & {\textbf{CD} $\downarrow$} &{\textbf{HD} $\downarrow$} & \textbf{P2F} $\downarrow$ \\ \hline\hline PU-Net~\cite{yu2018pu} &1.006 &14.640 &5.253 &1.017 &14.998 &6.851 &1.333 &19.964 &10.378\\ MPU~\cite{yifan2019patch} &0.869 &12.524 &4.069 &0.907 &13.019 &5.625 &1.130 &16.252 &9.291\\ PU-GCN~\cite{qian2021pu} &0.621 &8.011 &3.524 &0.762 &9.553 &5.585 &1.107 &13.130 &9.378\\\hline\hline \rowcolor{Gray}\textbf{Ours} &\textbf{0.453} &\textbf{4.052} &\textbf{2.127} &\textbf{0.610} &\textbf{5.787} &\textbf{3.965} &\textbf{1.058} &\textbf{9.948} &\textbf{7.551}\\ \Xhline{3\arrayrulewidth} \end{tabular} \label{tab:abl_noise} } \end{center} \vspace{-5mm} \end{table} \begin{table} \begin{center} \captionsetup{font=small, skip=3pt} \caption{Model Complexity of PU-Transformer using different numbers of Transformer Encoders. 
All the experiments are conducted on the \emph{PU1K} dataset~\cite{qian2021pu} with a single GeForce 2080 Ti GPU.} \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|ccc} \Xhline{3\arrayrulewidth} \# Transformer &\multirow{2}{*}{\# Parameters} &\multirow{2}{*}{Model Size} & Training Speed & Inference Speed & \multicolumn{3}{c}{Results ($\times {10}^{-3}$)} \\ \cline{6-8} Encoders&&&(per batch) & (per sample) & {\textbf{CD} $\downarrow$} &{\textbf{HD} $\downarrow$} & \textbf{P2F} $\downarrow$ \\ \hline\hline $L=3$ &438.3k &8.5M &12.2s &6.9ms &0.487 &4.081 &1.362\\ $L=4$ &547.3k &11.5M &15.9s &8.2ms &0.472 &4.010 &1.284\\ \rowcolor{Gray} $\bm{L=5}$ &969.9k &18.4M &23.5s &9.9ms &0.451 &\textbf{3.843} &1.277\\ $L=6$ &2634.4k &39.8M &40.3s &11.0ms &\textbf{0.434} &3.996 &\textbf{1.210}\\ \Xhline{3\arrayrulewidth} \end{tabular} \label{tab:abl_complexity} } \end{center} \vspace{-5mm} \end{table} \begin{figure*} \begin{center} \includegraphics[width=0.94\textwidth]{images/vis.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{Comparisons to state-of-the-art methods (PU-GAN~\cite{li2019pu}, PU-GCN~\cite{qian2021pu}, Dis-PU~\cite{li2021point}) in $4\times$ upsampling experiments using 2048 input points. For a better visualization, we reconstruct the surfaces (in blue) of each point cloud following the algorithm in~\cite{kazhdan2013screened}.} \label{fig:vis} \vspace{-5mm} \end{figure*} \vspace{1mm} \noindent \textbf{Model Complexity:} Generally, our PU-Transformer is a light ($<$1M parameters) transformer model compared to image transformers~\cite{khan2021transformers, liu2021swin, dosovitskiy2020image} that usually have more than 50M parameters. In particular, we investigate the complexity of our PU-Transformer by utilizing different numbers of Transformer Encoders. As shown in Table~\ref{tab:abl_complexity}, with more Transformer Encoders being applied, the model complexity increases rapidly, while the quantitative performance improves slowly.
For a better balance between effectiveness and efficiency, we adopt the model with \emph{five} Transformer Encoders ($L=5$) in this work. Overall, the PU-Transformer is a powerful and affordable transformer model for the point cloud upsampling task. \subsection{Visualization} \noindent \textbf{Qualitative Comparisons:} The qualitative results of different point cloud upsampling models are presented in Fig.~\ref{fig:vis}. Since we utilize the self-attention-based structure to capture the point-wise dependencies from a global perspective, the PU-Transformer's output can better illustrate the overall contours of input point clouds, producing fewer outliers (as the zoom-in views show). Moreover, based on the comprehensive local context encoded by our Positional Fusion block, the PU-Transformer precisely upsamples the points with a uniform distribution, retaining much structural detail. Accordingly, the reconstructed meshes present more high-fidelity context with fewer artifacts: \eg, the wings of \enquote{dragon} in the 2nd row, the head/foot of \enquote{human} in the 4th row, and the body of \enquote{tiger} in the last row. \vspace{1mm} \noindent \textbf{Upsampling Different Input Sizes:} Fig.~\ref{fig:res} shows the results of upsampling different sizes of point cloud data using PU-Transformer. Given a relatively low-resolution point cloud (\eg, 256 or 512 input points), our proposed model is still able to generate dense output with high-fidelity context (\eg, the head/foot of \enquote{Panda}). As the input size increases, the new points are uniformly distributed, covering the main flat areas (\eg, the body of \enquote{Panda}).
\begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth]{images/res.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{PU-Transformer's $4\times$ upsampling results, given different sizes of input point cloud data.} \label{fig:res} \vspace{-5mm} \end{figure} \vspace{1mm} \noindent \textbf{Upsampling Real Point Clouds:} We also provide the PU-Transformer's $4\times$ and $16\times$ upsampling results on real point cloud samples from the \emph{ScanObjectNN}~\cite{uy2019revisiting}, \emph{S3DIS}~\cite{armeni2017joint}, \emph{ScanNet}~\cite{dai2017scannet}, and \emph{SemanticKITTI}~\cite{behley2019semantickitti} datasets. As Fig.~\ref{fig:scan} clearly illustrates, by addressing the sparsity and non-uniformity of raw inputs, not only is the overall quality of point clouds significantly improved, but the representative features of object instances are also enhanced. \section{Implementation} \label{sec:impl} \subsection{PU-Transformer Head} \label{sec:head} As illustrated in Fig.~\ref{fig:net}, our PU-Transformer model begins with the PU-Transformer head to encode a preliminary feature map for the following operations. In practice, we only use a single-layer MLP (\ie, a single $1\times1$ convolution, followed by a batch normalization layer~\cite{ioffe2015batch} and a ReLU activation~\cite{nair2010rectified}) as the PU-Transformer head, where the generated feature map size is $N\times16$. \subsection{PU-Transformer Body} \label{sec:pu_body} To balance model complexity and effectiveness, we empirically leverage \emph{five} cascaded Transformer Encoders (\ie, $L=5$ in Alg.~\ref{alg:put} and Fig.~\ref{fig:net}) to form the PU-Transformer body, where the channel dimension of each output follows: $32\rightarrow64\rightarrow128\rightarrow256\rightarrow256$.
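The head described in Sec.~\ref{sec:head} can be sketched in a few lines of NumPy. This is only an illustrative stand-in, not the trained model: the weights are random placeholders, and the batch normalization is simplified to inference-style statistics over the point dimension. A $1\times1$ convolution over $N$ points is equivalent to a shared per-point linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def pu_head(points, c_out=16):
    """Sketch of the PU-Transformer head: a single MLP, i.e. a 1x1
    convolution (== shared per-point linear layer), batch norm, ReLU.
    Weights are random placeholders, not trained parameters."""
    n, c_in = points.shape                 # (N, 3) input coordinates
    w = rng.normal(size=(c_in, c_out))     # placeholder conv weights
    b = np.zeros(c_out)                    # placeholder bias
    x = points @ w + b                     # 1x1 convolution over N points
    # simplified (inference-style) batch normalization over the points
    x = (x - x.mean(0)) / np.sqrt(x.var(0) + 1e-5)
    return np.maximum(x, 0.0)              # ReLU

P = rng.normal(size=(1024, 3))             # a sparse point cloud, N = 1024
F0 = pu_head(P)
print(F0.shape)                            # (1024, 16): the N x 16 feature map
```

The output shape matches the $N\times16$ preliminary feature map that the body's first Transformer Encoder consumes.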
Particularly, in each Transformer Encoder, we only use the Positional Fusion block to encode the corresponding channel dimension (\ie, $C^{\prime}$ in Eq.~\ref{equ:g}), which remains the same in the subsequent operations. For all Positional Fusion blocks, the number of neighbors is empirically set to $k=20$ as used in previous works~\cite{wang2019dynamic, qian2021pu}. In terms of the SC-MSA block, the primary way of choosing the shift-related parameters is inspired by the Non-local Network~\cite{wang2018non} and ECA-Net~\cite{Wang_2020_CVPR}. Specifically, a reduction ratio $\psi$~\cite{wang2018non} is introduced to generate the low-dimensional matrices in self-attention; following a similar method, the channel-wise width (\ie, channel dimension) of each split in SC-MSA is set as $w=C^\prime/\psi$. Moreover, since the channel dimension is usually set to a power of 2~\cite{Wang_2020_CVPR}, we simply set the channel-wise shift interval $d = w/2$. Therefore, the number of heads in SC-MSA becomes $M=2\psi-1$. In our implementation, $\psi=4$ is adopted in all SC-MSA blocks of PU-Transformer. \subsection{PU-Transformer Tail} \label{sec:tail} Based on the practical settings mentioned above, the input to the PU-Transformer tail (\ie, the output of the last Transformer Encoder) has a size of $N\times256$. Then, following a similar approach to pixel shuffle~\cite{shi2016real}, we reform it into a dense feature map of $rN\times256/r$, where $r$ is the upsampling scale. Finally, another MLP is utilized to estimate the upsampled point cloud's 3D coordinates ($rN\times3$). \section{Introduction} \label{sec:intro} 3D computer vision has been attracting a wide range of interest from academia and industry since it shows great potential in many fast-developing AI-related applications such as robotics, autonomous driving, augmented reality, \etc. 
As a basic representation of 3D data, point clouds can be easily captured by 3D sensors~\cite{endres20133, jaboyedoff2012use}, incorporating the rich context of real-world surroundings. Unlike well-structured 2D images, point cloud data has inherent properties of \emph{irregularity} and \emph{sparsity}, posing enormous challenges for high-level vision tasks such as point cloud classification~\cite{qi2017pointnet, wang2019dynamic, qiu2021geometric}, segmentation~\cite{qi2017pointnet++, qiu2021dense, hu2020randla}, and object detection~\cite{qi2019deep, qi2020imvotenet, qiu2021investigating}. For instance, Uy~\etal~\cite{uy2019revisiting} found that a model pre-trained on synthetic data fails to predict the labels of real-world point clouds; and the segmentation~\cite{hu2020randla, qiu2021semantic} and detection~\cite{Park_2021_ICCV} results on distant/smaller point cloud objects (\eg, bicycles, traffic-signs) are worse than those on closer/larger objects (\eg, vehicles, buildings) captured by LiDAR scanners~\cite{behley2019semantickitti, caesar2020nuscenes}. If we tackle the data's \emph{irregularity} and \emph{sparsity}, further improvements in point cloud analysis can be obtained. To this end, the point cloud upsampling task is worthy of investigation. \begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{images/vis_front.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{Examples of upsampling real scanned point clouds using PU-Transformer. The first column presents a \enquote{chair}~\cite{uy2019revisiting}, while the others illustrate scenes of an \enquote{office}~\cite{armeni2017joint}, a \enquote{room}~\cite{dai2017scannet} and a \enquote{street}~\cite{behley2019semantickitti}, respectively. Given sparse point clouds, the dense outputs of PU-Transformer have uniform point distributions showing high-fidelity details.
Particularly, the contours of upsampled object instances (\eg, \emph{tables} in \enquote{office/room}, \emph{cars} in \enquote{street}) are clearly distinct from the complex surroundings, benefiting further visual analysis.} \label{fig:scan} \vspace{-5mm} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth]{images/net.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{The details of PU-Transformer. The upper chart shows the overall architecture of the PU-Transformer model containing three main parts: the PU-Transformer head (Sec.~\ref{sec:head}), body (Sec.~\ref{sec:pu_body}), and tail (Sec.~\ref{sec:tail}). The PU-Transformer body includes a cascaded set of Transformer Encoders, serving as the core component of the whole PU-Transformer model. Particularly, the detailed structure of each Transformer Encoder (\eg, the PU-Transformer body contains $L$ Transformer Encoders in total) is illustrated in the lower chart, where all annotations are consistent with Line 3-5 in Alg.~\ref{alg:put}.} \label{fig:net} \vspace{-3mm} \end{figure*} As a basic 3D low-level vision task, point cloud upsampling aims to generate dense point clouds from sparse input, where the generated data should recover the fine-grained structures at a higher resolution. Moreover, the upsampled points are expected to lie on the underlying surfaces in a uniform distribution, benefiting downstream tasks for both 3D visual analysis~\cite{liu2019relation, qiu2021pnp} and graphic modeling~\cite{mitra2003estimating, mitra2004registration}. 
Following the success of Convolutional Neural Networks (CNNs) in image super-resolution~\cite{dong2015image, kim2016accurate, anwar2020deep} and Multi-Layer-Perceptrons (MLPs) in point cloud analysis~\cite{qi2017pointnet, qi2017pointnet++}, previous point cloud upsampling networks~\cite{yu2018pu, yifan2019patch, li2019pu, qian2020pugeo, qian2021pu, li2021point} developed MLP-based structures to encode features and estimate new points. Although such an MLP operation can learn from point cloud data, the effectiveness of a whole network is limited by the MLP's insufficient expressive power and generalization capability. Different from existing methods, we introduce a powerful transformer model named PU-Transformer, specializing in the point cloud upsampling task. Next, we explain our key reasons for adopting transformers for point cloud upsampling. \emph{Plausibility in theory.} As the core operation of transformers, self-attention~\cite{vaswani2017attention} is a set operator~\cite{zhao2021point} calculating long-range dependencies between elements regardless of data order. On this front, self-attention perfectly suits point cloud data and easily estimates the point-wise dependencies without any concern for the inherent \emph{unorderedness}. Moreover, to represent a comprehensive feature map for point cloud analysis, channel-wise information has also been shown to be a crucial factor in attention mechanisms~\cite{qiu2021geometric, qiu2021investigating}. Based on these facts, we propose a novel Shifted Channel Multi-head Self-Attention (SC-MSA) block, which strengthens the point-wise relations in a multi-head form and enhances the channel-wise connections by introducing the overlapping channels between consecutive heads.
\emph{Feasibility in practice.} Although the transformer model was originally invented for natural language processing, its usage has been widely recognized in high-level visual applications for 2D images~\cite{carion2020end, dosovitskiy2020image, liu2021swin}. More recently, Chen~\etal~\cite{chen2021pre} introduced a pre-trained transformer model achieving excellent performance on image super-resolution and denoising. Inspired by the transformer's effectiveness for image-related low-level vision tasks, we attempt to create a transformer-based model for point cloud upsampling. Given the mentioned differences between 2D images and 3D point clouds, we introduce the Positional Fusion block as a replacement for positional encoding in conventional transformers: on the one hand, local information is aggregated from both the \emph{geometric} and \emph{feature} context of the points, implying their 3D positional relations; on the other hand, such \emph{local} information can serve as a complement to subsequent self-attention operations, where the point-wise dependencies are calculated from a \emph{global} perspective. \emph{Adaptability in various applications.} Transformer-based models are considered a luxury tool in computer vision due to their huge consumption of data, hardware, and computational resources. However, our PU-Transformer can be easily trained with a \emph{single} standard GPU (\eg, GeForce 1080/2080 Ti) in a few hours, retaining a similar model complexity to regular CNN-based point cloud upsampling networks~\cite{yu2018pu, yifan2019patch, li2021point}. Moreover, following a patch-based pipeline~\cite{yifan2019patch}, the trained PU-Transformer model can effectively and flexibly upsample different types of point cloud data, including but not limited to regular object instances or large-scale LiDAR scenes as shown in Fig.~\ref{fig:scan}.
Starting with the fundamental upsampling task in low-level vision, we expect our transformer-based approach to be affordable in terms of resource consumption for most researchers, further extending the applications of point clouds. Our main contributions are as follows: \begin{itemize} \item To the best of our knowledge, we are the first to introduce a transformer-based model for point cloud upsampling. The whole project of our PU-Transformer will be released\footnote{The source code and trained models will be available at: \url{https://github.com/}.} to the public. \item We quantitatively validate the effectiveness of the PU-Transformer by significantly outperforming the results of state-of-the-art point cloud upsampling networks on two benchmarks using three metrics. \item The upsampled visualizations demonstrate the superiority of PU-Transformer for diverse point clouds. \end{itemize} \section{Methodology} \label{sec:metho} \subsection{Overview} As shown in Fig.~\ref{fig:net}, given a sparse point cloud $\mathcal{P}\in\mathbb{R}^{N\times3}$, our proposed PU-Transformer can generate a dense point cloud $\mathcal{S}\in\mathbb{R}^{rN\times3}$, where $r$ denotes the upsampling scale. Firstly, the PU-Transformer head extracts a preliminary feature map from the input. Then, based on the extracted feature map and the inherent 3D coordinates, the PU-Transformer body gradually encodes a more comprehensive feature map via the cascaded Transformer Encoders. Finally, in the PU-Transformer tail, we use the shuffle operation~\cite{shi2016real} to form a dense feature map and reconstruct the 3D coordinates of $\mathcal{S}$ via an MLP. In Alg.~\ref{alg:put}, we present the basic operations that are employed to build our PU-Transformer.
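The shuffle step of the tail mentioned above is the point-cloud analogue of pixel shuffle~\cite{shi2016real}: an $(N, C)$ feature map is rearranged into a dense $(rN, C/r)$ map. A minimal NumPy sketch, using a toy feature map as an assumed input:

```python
import numpy as np

def point_shuffle(feat, r):
    """Sketch of the shuffle operation used in the PU-Transformer tail:
    rearrange an (N, C) feature map into a dense (r*N, C/r) map, so every
    point's channels are split into r groups that become r new rows."""
    n, c = feat.shape
    assert c % r == 0, "channel dimension must be divisible by r"
    return feat.reshape(n, r, c // r).reshape(n * r, c // r)

# toy body output: N = 8 points with C = 256 channels
x = np.arange(8 * 256, dtype=float).reshape(8, 256)
y = point_shuffle(x, r=4)
print(y.shape)  # (32, 64): rN x C/r
```

Rows $rn \ldots rn{+}r{-}1$ of the output all come from point $n$'s channels, which is why the subsequent MLP can regress $r$ new coordinates per input point.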
As well as the operations (\enquote{MLP}~\cite{qi2017pointnet}, \enquote{Norm}~\cite{ba2016layer}, \enquote{Shuffle}~\cite{shi2016real}) that have been widely used in image and point cloud analysis, we propose two novel blocks targeting a transformer-based point cloud upsampling model \ie, the Positional Fusion block (\enquote{\textbf{PosFus}} in Alg.~\ref{alg:put}), and the Shifted-Channel Multi-head Self-Attention block (\enquote{\textbf{SC-MSA}} in Alg.~\ref{alg:put}). In the rest of this section, we introduce these two blocks in detail. Moreover, for a compact description, we only consider the case of an \emph{arbitrary} Transformer Encoder; thus, in the following, we discard the subscripts that are annotated in Alg.~\ref{alg:put} denoting a Transformer Encoder's specific index in the PU-Transformer body. \subsection{Positional Fusion} \label{sec:posfus} Usually, a point cloud consisting of $N$ points has two main types of context: the 3D coordinates $\mathcal{P}\in\mathbb{R}^{N\times3}$ that are explicitly sampled from synthetic meshes or captured by real-world scanners, showing the original geometric distribution of the points in 3D space; and the feature context, $\mathcal{F}\in\mathbb{R}^{N\times C}$, that is implicitly encoded by convolutional operations in $C$-dimensional embedding space, yielding rich latent clues for visual analysis. Older approaches~\cite{yu2018pu, yifan2019patch, li2019pu} to point cloud upsampling generate a dense point set by heavily exploiting the encoded features $\mathcal{F}$, while recent methods~\cite{qian2020pugeo, qian2021pu} attempt to incorporate more geometric information. As the core module of the PU-Transformer, the proposed Transformer Encoder leverages a Positional Fusion block to encode and combine both the given $\mathcal{P}$ and $\mathcal{F}$\footnote{equivalent to \enquote{$\mathcal{F}_{l-1}$} in Alg.~\ref{alg:put}} of a point cloud, following the local geometric relations between the scattered points. 
Based on the metric of \emph{3D-Euclidean distance}, we can search for neighbors ${\forall}p_j\in Ni(p_i)$ for each point $p_i\in\mathbb{R}^{3}$ in the given point cloud $\mathcal{P}$, using the k-nearest-neighbors (knn) algorithm~\cite{wang2019dynamic}. Coupled with a grouping operation, we thus obtain a matrix $\mathcal{P}_j\in\mathbb{R}^{N\times k \times3}$, denoting the 3D coordinates of the neighbors for all points. Accordingly, the relative positions between each point and its neighbors can be formulated as: \begin{equation} \label{equ:1} \Delta\mathcal{P} = \mathcal{P}_j - \mathcal{P}, \quad \Delta\mathcal{P}\in\mathbb{R}^{N\times k \times3}; \end{equation} where $k$ is the number of neighbors. In addition to the neighbors' relative positions showing each point's local detail, we also append the centroids' absolute positions in 3D space, indicating the global distribution for all points. By duplicating $\mathcal{P}$ in a dimension expanded $k$ times, we concatenate the local \emph{geometric} context as: \begin{equation} \label{equ:2} \mathcal{G}_{geo} = \mathrm{concat}\big[\underset{k}{\mathrm{dup}}(\mathcal{P}); \Delta\mathcal{P}\big]\in\mathbb{R}^{N\times k \times6}. \end{equation} Further, for the feature matrix $\mathcal{F}_j\in\mathbb{R}^{N\times k \times C}$ of all searched neighbors, we conduct similar operations (Eq.~\ref{equ:1} and~\ref{equ:2}) as on the counterpart $\mathcal{P}_j$, computing the relative features as: \begin{equation} \label{equ:3} \Delta\mathcal{F} = \mathcal{F}_j - \mathcal{F}, \quad \Delta\mathcal{F}\in\mathbb{R}^{N\times k \times C}; \end{equation} and representing the local \emph{feature} context as: \begin{equation} \label{equ:4} \mathcal{G}_{feat} = \mathrm{concat}\big[\underset{k}{\mathrm{dup}}(\mathcal{F}); \Delta\mathcal{F}\big]\in\mathbb{R}^{N\times k \times 2C}. 
\end{equation} After the local \emph{geometric} context $\mathcal{G}_{geo}$ and local \emph{feature} context $\mathcal{G}_{feat}$ are constructed, we then fuse them for a comprehensive point feature representation. Specifically, $\mathcal{G}_{geo}$ and $\mathcal{G}_{feat}$ are encoded via two MLPs, $\bm{\mathcal{M}}_{\Phi}$ and $\bm{\mathcal{M}}_{\Theta}$, respectively; further, we comprehensively aggregate the local information, $\mathcal{G}\in\mathbb{R}^{N\times C^\prime}$\footnote{equivalent to \enquote{$\mathcal{G}_{l}$} in Alg.~\ref{alg:put}}, using a concatenation between the two types of encoded local context, followed by a max-pooling function operating over the neighborhoods. The above operations can be summarized as: \begin{equation} \label{equ:g} \mathcal{G} = \underset{k}{\mathrm{max}}\Big(\mathrm{concat}\big[\bm{\mathcal{M}}_{\Phi}(\mathcal{G}_{geo}); \bm{\mathcal{M}}_{\Theta}(\mathcal{G}_{feat})\big]\Big). \end{equation} Unlike the local graphs in DGCNN~\cite{wang2019dynamic} that need to be updated in every encoder based on the \emph{dynamic} relations in embedding space, both of our $\mathcal{G}_{geo}$ and $\mathcal{G}_{feat}$ are constructed (\ie, Eq.~\ref{equ:2} and~\ref{equ:4}) and encoded (\ie, $\bm{\mathcal{M}}_{\Phi}$ and $\bm{\mathcal{M}}_{\Theta}$ in Eq.~\ref{equ:g}) in the same way, following \emph{fixed} 3D geometric relations (\ie, ${\forall}p_j\in Ni(p_i)$ defined upon \emph{3D-Euclidean distance}). The main benefits of our approach can be summarized in two aspects: (i) it is practically efficient since the expensive knn algorithm only needs to be run once, while the search results can be reused in all Positional Fusion blocks of the PU-Transformer body; and (ii) the local \emph{geometric} and \emph{feature} context are represented in a similar manner following the same metric, contributing to \emph{fairly fusing} the two types of context.
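The operations of Eqs.~\ref{equ:1}-\ref{equ:g} can be traced in a small NumPy sketch. This is a shape-level stand-in, not the trained block: the two MLPs $\bm{\mathcal{M}}_{\Phi}$ and $\bm{\mathcal{M}}_{\Theta}$ are replaced by random linear maps, and the channel sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_fusion(P, F, k=4, c_out=8):
    """Sketch of the Positional Fusion block (Eqs. 1-5): knn in 3D
    Euclidean space, relative positions/features, duplication of the
    centroids, concatenation, stand-in MLPs, and max-pooling over k.
    The linear maps below are random placeholders, not trained MLPs."""
    # knn on 3D coordinates (computed once; reusable across all encoders)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]               # (N, k) neighbor indices
    Pj, Fj = P[idx], F[idx]                           # (N, k, 3), (N, k, C)
    dP = Pj - P[:, None, :]                           # relative positions (Eq. 1)
    G_geo = np.concatenate(                           # (N, k, 6), Eq. 2
        [np.broadcast_to(P[:, None, :], Pj.shape), dP], axis=-1)
    dF = Fj - F[:, None, :]                           # relative features (Eq. 3)
    G_feat = np.concatenate(                          # (N, k, 2C), Eq. 4
        [np.broadcast_to(F[:, None, :], Fj.shape), dF], axis=-1)
    # stand-ins for M_phi and M_theta, then concat + max-pool over k (Eq. 5)
    W_phi = rng.normal(size=(6, c_out))
    W_theta = rng.normal(size=(2 * F.shape[1], c_out))
    G = np.concatenate([G_geo @ W_phi, G_feat @ W_theta], axis=-1)
    return G.max(axis=1)                              # (N, 2*c_out)

P = rng.normal(size=(32, 3))                          # toy coordinates
F = rng.normal(size=(32, 16))                         # toy feature map
G = positional_fusion(P, F)
print(G.shape)  # (32, 16)
```

Note that only `P` enters the neighbor search, matching the \emph{fixed} 3D geometric relations discussed above, while the feature context is regrouped with the same indices.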
Overall, the Positional Fusion block can not only encode the positional information about a set of unordered points for the transformer's processing, but also aggregate comprehensive local details for accurate point cloud upsampling. \subsection{Shifted Channel Multi-head Self-Attention} \label{sec:body} Compared to regular self-attention~\cite{vaswani2017attention}, multi-head self-attention (MSA)~\cite{vaswani2017attention} can capture more information by conducting the self-attention calculation in multiple heads, and thus has been widely utilized as the main calculation block in most transformer models. However, the element-wise (\ie, point-wise \wrt point cloud data) dependencies in each head are independently estimated by calculating the dot-products of feature vectors, leading to a lack of connection between the outputs of different heads. To tackle this issue, we introduce a novel Shifted Channel Multi-head Self-Attention (SC-MSA) block for PU-Transformer. \begin{algorithm} \small \caption{Shifted Channel Multi-head\\ Self-Attention (\textbf{SC-MSA}) Implementation}\label{alg:msa} \nonl\textbf{input:} a point cloud feature map: $\mathcal{I}\in\mathbb{R}^{N\times C^\prime}$\\ \nonl\textbf{output:} the refined feature map: $\mathcal{O}\in\mathbb{R}^{N\times C^\prime}$\\ \nonl\textbf{others:} channel-wise split width: $w$ \\ \nonl\quad \quad \,\,\,\,\,\, channel-wise shift interval: $d$, $d<w$ \\ \nonl\quad \quad \,\,\,\,\,\, the number of heads: $M$ \\ $\mathcal{Q} = \mathrm{Linear}(\mathcal{I})$ \quad {\fontfamily{qcr}\selectfont\textcolor{gray}{\# Query Mat $\mathcal{Q}\in\mathbb{R}^{N\times C^\prime}$}}\\ $\mathcal{K} = \mathrm{Linear}(\mathcal{I})$ \quad {\fontfamily{qcr}\selectfont\textcolor{gray}{\# Key Mat $\mathcal{K}\in\mathbb{R}^{N\times C^\prime}$}}\\ $\mathcal{V} = \mathrm{Linear}(\mathcal{I})$ \quad {\fontfamily{qcr}\selectfont\textcolor{gray}{\# Value Mat $\mathcal{V}\in\mathbb{R}^{N\times C^\prime}$}}\\ \For{$m\in \{1, 2, ..., M\}$}{
$\mathcal{Q}_m = \mathcal{Q}[\;:\;, (m-1)d:(m-1)d+w]$\; $\mathcal{K}_m = \mathcal{K}[\;:\;, (m-1)d:(m-1)d+w]$\; $\mathcal{V}_m = \mathcal{V}[\;:\;, (m-1)d:(m-1)d+w]$\; $\mathcal{A}_m = \mathrm{softmax} (\mathcal{Q}_m {\mathcal{K}_m}^{T})$\; $\mathcal{O}_m = \mathcal{A}_m \mathcal{V}_m$\; } \textbf{obtain:} $\{\mathcal{O}_1, \mathcal{O}_2, ..., \mathcal{O}_M\}$\\ $\mathcal{O} = \mathrm{Linear}\Big(\mathrm{concat}\big[\{\mathcal{O}_1, \mathcal{O}_2, ..., \mathcal{O}_M\}\big]\Big)$\\ \end{algorithm} As Alg.~\ref{alg:msa} states, at first, we apply linear layers (denoted as \enquote{$\mathrm{Linear}$} and implemented as a $1\times1$ convolution) to encode the query matrix $\mathcal{Q}$, key matrix $\mathcal{K}$, and value matrix $\mathcal{V}$. Then, we generate low-dimensional splits of $\mathcal{Q}_m,\mathcal{K}_m,\mathcal{V}_m$ for each head. Particularly, as shown in Fig.~\ref{fig:splits}, regular MSA generates the \emph{independent} splits for the self-attention calculation in corresponding heads. In contrast, our SC-MSA applies a window (dashed square) shift along the channels to ensure that any two consecutive splits have an overlap of $(w-d)$ channels (slashed area), where $w$ is the channel dimension of each split and $d$ represents the channel-wise shift interval each time. After generating the $\mathcal{Q}_m,\mathcal{K}_m,\mathcal{V}_m$ for each head in the mentioned manner, we employ self-attention (Alg.~\ref{alg:msa} steps 8-9) to estimate the point-wise dependencies as the output $\mathcal{O}_m$ of each head. Since any two consecutive heads have part of the input in common (\ie, the overlapping channels), the connections between the outputs $\{\mathcal{O}_1, \mathcal{O}_2, ..., \mathcal{O}_M\}$ (Alg.~\ref{alg:msa} step 11) of the multiple heads are established.
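The shifted-channel windowing of Alg.~\ref{alg:msa} can be sketched in NumPy. The linear layers are random stand-ins rather than trained $1\times1$ convolutions, and the parameter choices follow Sec.~\ref{sec:pu_body} ($w=C^\prime/\psi$, $d=w/2$, $M=2\psi-1$):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sc_msa(I, psi=4):
    """Sketch of SC-MSA (Alg. 2): overlapping channel windows of width w
    shifted by d < w, so consecutive heads share w - d channels.
    All Linear layers are random placeholders, not trained weights."""
    n, c = I.shape
    w, d = c // psi, c // (2 * psi)        # split width and shift interval (d = w/2)
    M = 2 * psi - 1                        # number of heads
    Q = I @ rng.normal(size=(c, c))        # query matrix (N, C')
    K = I @ rng.normal(size=(c, c))        # key matrix   (N, C')
    V = I @ rng.normal(size=(c, c))        # value matrix (N, C')
    outs = []
    for m in range(M):
        s = m * d                          # window start; overlap = w - d channels
        Qm, Km, Vm = Q[:, s:s + w], K[:, s:s + w], V[:, s:s + w]
        Am = softmax(Qm @ Km.T)            # point-wise dependencies of head m
        outs.append(Am @ Vm)               # head output (N, w)
    O = np.concatenate(outs, axis=1)       # connected multi-head outputs (N, M*w)
    return O @ rng.normal(size=(M * w, c)) # final Linear back to (N, C')

I = rng.normal(size=(64, 32))              # toy feature map, N = 64, C' = 32
O = sc_msa(I)
print(O.shape)  # (64, 32)
```

With $\psi=4$ and $C^\prime=32$, this yields $w=8$, $d=4$, and $M=7$ heads, so the concatenated width $Mw=56$ exceeds $C^\prime$ before the final linear projection, reflecting $M > C^\prime/w$.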
There are two major benefits of such connections: (i) it is easier to integrate the information across the \emph{connected} multi-head outputs (Alg.~\ref{alg:msa} step 12) than across the \emph{independent} multi-head results of regular MSA; and (ii) as the overlapping context is captured from the channel dimension, our SC-MSA can further enhance the channel-wise relations in the final output $\mathcal{O}$, while regular MSA only focuses on point-wise information. It is worth noting that SC-MSA requires the shift interval to be smaller than the channel-wise width of each split (\ie, $d<w$ as in Alg.~\ref{alg:msa}) for a shared area between any two consecutive splits. Accordingly, the number of heads in our SC-MSA is higher than in regular MSA (\ie, $M > C^{\prime}/w$ in Fig.~\ref{fig:splits}). More implementation details and the choices of the shift-related parameters are provided in Sec.~\ref{sec:pu_body}. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{images/splits.pdf} \end{center} \vspace{-5mm} \captionsetup{font=small} \caption{Examples of how regular MSA~\cite{vaswani2017attention} and our SC-MSA generate the low-dimensional splits of query matrix $\mathcal{Q}$ for multi-head processing (the same procedure also applies to $\mathcal{K}$ and $\mathcal{V}$).} \label{fig:splits} \vspace{-5mm} \end{figure} \section{Related Work} \label{sec:work} \noindent \textbf{Point Cloud Networks:} In early research, projection-based methods~\cite{su2015multi, lawin2017deep} projected 3D point clouds into multi-view 2D images, applied regular 2D convolutions for feature extraction, and finally fused the extracted information for 3D analysis. Alternatively, discretization-based approaches~\cite{guo2020deep} converted the point clouds to voxels~\cite{huang2016point} or lattices~\cite{su2018splatnet}, and then processed them using 3D convolutions or sparse tensor convolutions~\cite{choy20194d}.
To avoid context loss and complex steps during data conversion, point-based networks~\cite{qi2017pointnet, qi2017pointnet++, wang2019dynamic} directly process point cloud data via MLP-based operations. Although current mainstream approaches in point cloud upsampling prefer utilizing MLP-related modules, in this paper, we focus on an advanced transformer structure~\cite{vaswani2017attention} in order to further enhance the point-wise dependencies between known points and benefit the generation of new points. \vspace{1mm} \noindent \textbf{Point Cloud Upsampling:} Although current point cloud research in low-level vision~\cite{yu2018pu, yuan2018pcn} is less active than that in high-level analysis~\cite{qi2017pointnet, hu2020randla, qi2019deep}, many outstanding works have contributed significant developments to the point cloud upsampling task. To be specific, PU-Net~\cite{yu2018pu} is a pioneering work that introduced CNNs to point cloud upsampling based on a PointNet++-like backbone~\cite{qi2017pointnet++}. Later, MPU~\cite{yifan2019patch} proposed a patch-based upsampling pipeline, which can flexibly upsample the point cloud patches with rich local details. In addition, PU-GAN~\cite{li2019pu} adopted the architecture of Generative Adversarial Networks~\cite{goodfellow2014generative} for the generation problem of high-resolution point clouds, while PUGeo-Net~\cite{qian2020pugeo} demonstrated a promising combination of discrete differential geometry and deep learning. More recently, Dis-PU~\cite{li2021point} applies disentangled refinement units to gradually generate high-quality point clouds from coarse ones, and PU-GCN~\cite{qian2021pu} achieves good upsampling performance by using graph-based constructions~\cite{wang2019dynamic} in the network.
As the first work that leverages the powerful transformer structure for point cloud upsampling, we hope that our PU-Transformer attracts more research interest and inspires future work on relevant topics. \begin{algorithm} \small \caption{PU-Transformer Pipeline}\label{alg:put} \nonl\textbf{input:} a sparse point cloud $\mathcal{P}\in\mathbb{R}^{N\times3}$\\ \nonl\textbf{output:} a dense point cloud $\mathcal{S}\in\mathbb{R}^{rN\times3}$\\ {\nonl{\fontfamily{qcr}\selectfont\textcolor{gray}{\# PU-Transformer Head}}}\\ $\mathcal{F}_0$ = MLP($\mathcal{P}$)\\ {\nonl{\fontfamily{qcr}\selectfont\textcolor{gray}{\# PU-Transformer Body}}}\\ \For{each Transformer Encoder}{ {\nonl{\fontfamily{qcr}\selectfont\textcolor{gray}{\# $l = 1\;...\;L$}}}\\ {\nonl{\fontfamily{qcr}\selectfont\textcolor{gray}{\# the $l$-th Transformer Encoder}}}\\ $\mathcal{G}_{l}$ = \textbf{PosFus}($\mathcal{P}$, $\mathcal{F}_{l-1}$)\; ${\mathcal{G}_{l}}^\prime$ = \textbf{SC-MSA}\big(Norm($\mathcal{G}_{l}$)\big) + $\mathcal{G}_{l}$\; $\mathcal{F}_{l}$ = MLP\big(Norm(${\mathcal{G}_{l}}^\prime$)\big) + ${\mathcal{G}_{l}}^\prime$\; } {\nonl{\fontfamily{qcr}\selectfont\textcolor{gray}{\# PU-Transformer Tail}}}\\ $\mathcal{S}$ = MLP\big(Shuffle($\mathcal{F}_L$)\big) \end{algorithm} \vspace{1mm} \noindent \textbf{Transformers in Vision:} With their capacity for parallel processing as well as their scalability to deep networks and large datasets~\cite{khan2021transformers}, an increasing number of visual transformers have achieved excellent performance on image-related tasks, including low-level super-resolution~\cite{yang2020learning, chen2021pre} as well as high-level classification~\cite{dosovitskiy2020image, liu2021swin} and detection~\cite{carion2020end, zhu2020deformable}. Due to the inherent gaps between 3D and 2D data, transformer-based approaches to point cloud analysis have not been fully developed.
In terms of regular point cloud classification and segmentation, researchers have introduced transformer variants using vector-attention~\cite{zhao2021point}, offset-attention~\cite{guo2021pct}, and grid-rasterization~\cite{mazur2021cloud}, \etc. However, since these transformers still build on a classical PointNet~\cite{qi2017pointnet} or PointNet++~\cite{qi2017pointnet++} architecture, the improvement is relatively limited, while the computational cost is too high for most researchers to re-implement. In contrast, the recent PoinTr~\cite{yu2021pointr} shows great effectiveness in point cloud completion by adopting the standard architecture of transformer models~\cite{vaswani2017attention}, which consists of both transformer encoders and decoders. To reduce model complexity and boost adaptability in point cloud upsampling research, we only utilize the general structure of the transformer encoder~\cite{dosovitskiy2020image} to form the body of our PU-Transformer.
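As a minimal illustration of the encoder body in Algorithm~\ref{alg:put}, the sketch below implements plain single-head scaled dot-product self-attention with a residual connection over a set of point features. It is a simplified stand-in, not the paper's SC-MSA block (which splits channels into multiple heads and adds shift connections); the random projection weights and feature shapes are purely illustrative.

```python
import numpy as np

def encoder_attention(F):
    """Single-head scaled dot-product self-attention with a residual
    connection over N point features F of shape (N, C).

    Illustrative stand-in for the SC-MSA step in Algorithm 1: the
    projection weights here are random placeholders, not learned
    parameters, and the multi-head/shift-connection structure is omitted.
    """
    N, C = F.shape
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    Q, K, V = F @ Wq, F @ Wk, F @ Wv
    scores = Q @ K.T / np.sqrt(C)                 # (N, N) point-wise affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over the N points
    return attn @ V + F                           # residual, as in Algorithm 1
```

Each output feature is a convex combination of all point features plus the input itself, which is the point-wise dependency modeling that motivates using a transformer here.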
\section{INTRODUCTION} Microlensing is sensitive to planets orbiting low-mass stars and brown dwarfs (BDs) that are difficult to detect by other methods, such as the radial velocity and transit methods. Although faint low-mass stars such as M dwarfs comprise $\sim 70 \%$ of stars in the solar neighborhood and the Galaxy \citep{skowron15}, it is difficult to detect distant M dwarfs due to their low luminosity. However, microlensing depends on the mass of the lens, not its luminosity, and thus it is not affected by the distance and luminosity of the lens. Hence, microlensing is the best method to probe faint M dwarfs in the Galaxy. The majority of the host stars of the 52 extrasolar planets detected by microlensing are M dwarfs, and they are distributed over a wide range of distances, $\sim 0.4 - 8\ \rm kpc$. To date, a large number of BDs \citep{han16} have been discovered by various methods, including radial velocity \citep{sahlmann11}, transit \citep{deleuil08, johnson11, siverd12, moutou13, diaz13}, and direct imaging \citep{lafreniere07}, and most of them are young \citep{luhman12}. Various scenarios of BD formation have been proposed based on these plentiful BD samples. Since microlensing provides different BD samples from other methods, the microlensing BD samples will play an important role in constraining the various BD formation scenarios. So far, 17 BDs have been detected by microlensing. Only two of them, OGLE-2007-BLG-224L \citep{gould09} and OGLE-2015-BLG-1268L \citep{zhu16}, are isolated BDs, while most of the others, OGLE-2006-BLG-277Lb \citep{park13}, OGLE-2008-BLG-510Lb/MOA-2008-BLG-369L \citep{bozza12}, MOA-2009-BLG-411Lb \citep{bachelet12}, MOA-2010-BLG-073Lb \citep{street13}, MOA-2011-BLG-104Lb/OGLE-2011-BLG-0172 \citep{choi13}, MOA-2011-BLG-149Lb \citep{shin12b}, OGLE-2013-BLG-0102Lb \citep{jung15}, and OGLE-2013-BLG-0578Lb \citep{park15}, are binary companions orbiting M dwarf stars.
This is because binary-lens events (i.e., events with anomalies in the light curve) offer a better chance of measuring the lens mass than single-lens events, such as isolated BD events. The key problem in ``detecting'' isolated BDs is that in general, we do not know whether they are ``detected'' or not, since all that we obtain from observed events is the Einstein timescale $t_{\rm E}$, which is the crossing time of the Einstein radius of the lens. With the observed $t_{\rm E}$, we can only make a very rough estimate of the lens mass and so cannot distinguish potential BDs from stars. To measure the mass of an isolated BD, two parameters are required: the angular Einstein radius $\theta_{\rm E}$ and the microlens parallax $\pi_{\rm E}$. This is because \citep{gould92, gould00} \begin{equation} \label{eqn:mass} M_{\rm L} = {\theta_{\rm E}\over{\kappa \pi_{\rm E}}} \end{equation} and \begin{equation} \pi_{\rm E} = {\pi_{\rm rel}\over{\theta_{\rm E}}}; \qquad \pi_{\rm rel} \equiv {\rm AU} \left({{1\over{D_{\rm L}}} - {1\over{D_{\rm S}}}}\right), \end{equation} where \begin{displaymath} \kappa \equiv {4G\over{c^{2}\rm AU}} \approx 8.14{{\rm mas}\over{M_\odot}}. \end{displaymath} Here $M_{\rm L}$ is the lens mass, and $D_{\rm L}$ and $D_{\rm S}$ are the distances to the lens and the source from the observer, respectively. However, it is usually quite difficult to measure the two parameters $\theta_{\rm E}$ and $\pi_{\rm E}$. In general, $\theta_{\rm E}$ is obtained from the measurement of the normalized source radius $\rho = \theta_\star/\theta_{\rm E}$, where $\theta_\star$ is the angular radius of the source. The $\rho$ measurement is limited to well-covered caustic-crossing events and high-magnification events in which the source passes close to the lens, while $\theta_\star$ is usually well measured through the color and brightness of the source.
Because isolated BD events are almost always quite short, $\pi_{\rm E}$ can usually be measured only via so-called terrestrial parallax \citep{gould97, gould09}. Terrestrial parallax measurements are limited to well-covered high-magnification events. As a result, it is very hard to measure masses of isolated BDs from the ground \citep{gould13}. The best way to measure $\pi_{\rm E}$ is a simultaneous observation of an event from Earth and a satellite \citep{refsdal66, gould94b}. Fortunately, since 2014, the \textit{Spitzer} satellite has been regularly observing microlensing events toward the Galactic bulge in order to measure the microlens parallax. The \textit{Spitzer} observations offer a new opportunity to obtain the mass function of BDs from simultaneous observations from Earth and \textit{Spitzer}, although they are not dedicated to BDs \citep{zhu16}. The simultaneous observation from the two observatories with a sufficiently wide projected separation $D_\perp$ allows one to measure the microlens parallax vector $\bm{\pi}_{\rm E}$ from the difference in the light curves as seen from the two observatories, \begin{equation} \bm{\pi}_{\rm E} = \pi_{\rm E}{\bm{\mu}_{\rm rel}\over{\mu_{\rm rel}}}, \end{equation} where $\mu_{\rm rel}$ is the lens-source relative proper motion and \begin{equation} \bm{\pi}_{\rm E} = {{\rm AU}\over{D_\perp}}\left(\Delta \tau, \Delta \beta_{\pm \pm} \right), \end{equation} where \begin{equation} \Delta \tau = {{t_{\rm 0,sat} - t_{\rm 0,\oplus}}\over{t_{\rm E}}};\quad \Delta \beta_{\pm \pm} = \pm u_{\rm 0,sat} - \pm u_{\rm 0,\oplus}. \end{equation} Here $t_0$ is the time of the closest source approach to the lens (the peak time of the event) and $u_0$ is the separation between the lens and the source at time $t_0$ (the impact parameter). The subscripts ``sat'' and ``$\oplus$'' indicate the parameters as measured from the satellite and Earth, respectively.
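To make the bookkeeping concrete, the following sketch combines the relations above: it forms the parallax amplitude from $(\Delta\tau, \Delta\beta)$ and the projected separation $D_\perp$, then converts $\theta_{\rm E}$ and $\pi_{\rm E}$ into a lens mass via Equation \eqref{eqn:mass}. The numerical inputs are purely illustrative, not the values measured for any particular event.

```python
import math

KAPPA = 8.14  # kappa = 4G/(c^2 AU) in mas / M_sun, as defined in the text

def parallax_amplitude(dtau, dbeta, d_perp_au):
    """pi_E = (AU / D_perp) * |(dtau, dbeta)| from the light-curve offsets."""
    return math.hypot(dtau, dbeta) / d_perp_au

def lens_mass(theta_e_mas, pi_e):
    """M_L = theta_E / (kappa * pi_E), Equation (1) of the text."""
    return theta_e_mas / (KAPPA * pi_e)

# Illustrative inputs only (assumed, not measured values):
pi_e = parallax_amplitude(dtau=0.10, dbeta=0.15, d_perp_au=1.3)
mass = lens_mass(theta_e_mas=0.104, pi_e=pi_e)  # a low-mass-star-scale result
```

Note that the four-fold $\Delta\beta$ degeneracy discussed next enters only through the sign choices in `dbeta`; the amplitude, and hence the mass, takes at most two values.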
Thus, $\Delta \tau$ and $\Delta \beta$ represent the difference in $t_0$ and $u_0$ as measured from the two observatories. Parallax measurements made by such comparisons between the light curves are subject to a well-known four-fold degeneracy, which comes from four possible values of $\Delta \beta$ including ($+u_{\rm 0, sat}, \pm u_{\rm 0, \oplus}$) and ($-u_{\rm 0, sat}, \pm u_{\rm 0, \oplus}$). However, there is only a two-fold degeneracy in the amplitude of $\bm{\pi}_{\rm E}$ because $\Delta\beta_{--} = - \Delta\beta_{++}$ and $\Delta\beta_{-+} = - \Delta\beta_{+-}$. The only exception to the four-fold degeneracy would be if one of the two observatories has $u_0$ consistent with zero, while the other has $u_0$ inconsistent with zero. In this case, the four-fold degeneracy reduces to a two-fold degeneracy. For example, if $u_{0,\rm sat}=0$ (within errors), then $\Delta\beta_{+,+}=\Delta\beta_{-,+}\rightarrow \Delta\beta_{0,+}$ (and similarly for $\Delta\beta_{0,-}$). Then, since $\Delta\beta_{0,-}=-\Delta\beta_{0,+}$, there is no degeneracy in the mass (see, e.g., \citealt{gould12}). This case is very important for point-lens mass measurements since the lens always passes very close to or over the source as seen by one observatory, so $u_0\simeq 0$, whether or not it is strictly consistent with zero. Here we report the fifth isolated-star measurement derived from microlensing measurements of $\rho$ and $\pi_{\rm E}$. In contrast to the previous four measurements, this one has a discrete degeneracy in $\rho$ and therefore in mass. We trace the origin of this degeneracy to the fact that only a single point is affected by finite-source effects, and we argue that it may occur frequently in future space-based microlensing mass measurements, including those of BDs. We show how this degeneracy can be broken by future high-resolution imaging, regardless of whether the lens is dark or luminous. This paper is organized as follows.
In Section 2, the observation of the event OGLE-2015-BLG-1482 is summarized, and we describe the analysis of the light curve in Section 3. With the results of Section 3, we derive physical properties of the source and lens in Section 4 and then we discuss the results in Section 5. Finally, we conclude in Section 6. \section{OBSERVATIONS} \subsection{Ground-based observations} The gravitational microlensing event OGLE-2015-BLG-1482 was discovered by the Optical Gravitational Lensing Experiment (OGLE) (Udalski 2003), and it was also observed by \textit{Spitzer} and the Korea Microlensing Telescope Network (KMTNet, \citealt{kim16}). The microlensed source star of the event is located at ($\alpha$,$\delta$) = ($17^{\rm h}50^{\rm m}31^{\rm s}.33,-30^{\circ}53'19\farcs3$) in equatorial coordinates and ($l$,$b$) = ($358 \fdg 88,-1 \fdg 92$) in Galactic coordinates. OGLE observations were carried out using the 1.3 m Warsaw telescope with a field of view of 1.4 square degrees at the Las Campanas Observatory in Chile. The event lies in the OGLE field BLG534 with a cadence of about $0.3\,{\rm hr}^{-1}$ in $I$ band. The Einstein timescale is quite short, $t_{\rm E} \sim 4\,$days, and the OGLE baseline of this event is slightly variable on long timescales at about the 0.02 mag level. Thus, we used only the 2015 season data for light curve modeling. KMTNet observations were conducted using 1.6 m telescopes with fields of view of 4.0 square degrees at each of three different sites, Cerro Tololo Inter-American Observatory (CTIO) in Chile, the South African Astronomical Observatory (SAAO) in South Africa, and Siding Spring Observatory (SSO) in Australia. The scientific observations at the CTIO, SAAO, and SSO were initiated on 3 February, 19 February, and 6 June in 2015, respectively. OGLE-2015-BLG-1482 was observed with a $10 - 12$ minute cadence at the three sites, and the exposure time was 60 s.
The CTIO, SAAO, and SSO observations were made in the $I$-band filter, and for determining the color of the source star, the CTIO observations with typically good seeing were also made in the $V$-band filter. Thus, the light curve of the event was well covered by the three KMTNet observation data sets. The KMTNet data were reduced by the Difference Image Analysis (DIA) photometry pipeline \citep{alard98, albrow09}. \subsection{\textit{Spitzer} observations} \textit{Spitzer} observations in 2015 were carried out under an 832-hour program whose principal scientific goal was to measure the Galactic distribution of planets \citep{gould14}. The event selection and observational cadences were decided strictly by the protocols of \cite{yee15b}, according to which events could be selected either ``subjectively'' or ``objectively''. Events that meet specified objective criteria {\it must} be observed according to a specified cadence. In this case, all planets discovered, whether before or after \textit{Spitzer} observations are triggered (as well as all planet sensitivity), can be included in the analysis. Events that do not meet these criteria can still be chosen ``subjectively''. In this case, planets (and planet sensitivity) can only be included in the Galactic-distribution analysis based on data that become available after the decision. Like objective events, events selected subjectively must continue to be observed according to the specified cadence and stopping criteria (although those may be specified as different from the standard, objective values at the time of selection). Because the current paper is not about planets or planet sensitivities, the above considerations play no direct role. However, they play a crucial indirect role. Figure 1 shows that despite the event's very short timescale $t_{\rm E} \sim 4\,$days, and despite the fact that it peaked as seen from \textit{Spitzer} slightly before it peaked from Earth, observations began about 1 day prior to the peak.
This is remarkable because, as discussed in detail by \citet{udalski15} (see their Figure 1), there is a delay between the selection of a target and the start of the \textit{Spitzer} observations. Targets can only be uploaded to the spacecraft once per week, and it takes some time to prepare the target uploads. Therefore, \textit{Spitzer} observations begin a minimum of three days after the final decision is made to observe the event with \textit{Spitzer}, and that decision is generally based on data taken the night before, i.e. about four days prior to the first \textit{Spitzer} observations. Hence, at the time that the decision was made to observe OGLE-2015-BLG-1482, the source was significantly outside the Einstein ring. It is notoriously difficult to predict the future course of such events. Therefore, such events cannot meet objective criteria that far from the peak, but selecting them ``subjectively'' would require a commitment to continue observing them for several more weeks of the campaign, which risks wasting a large number of observations if the event turns out to be very low-magnification with almost zero planet sensitivity (the most likely scenario). At the same time, if the event timescale is short, it could be over before the next opportunity to start observations with \textit{Spitzer} ($\sim$10 days later). Hence, \citet{yee15b} also specified the possibility of so-called ``secret alerts''. For these, an observational sequence would be uploaded to \textit{Spitzer} for a given week, but no announcement would be made. If the event looked promising later (after upload), then it could be chosen subjectively. In this case, \textit{Spitzer} data taken after the public alert could be included in the parallax measurement (needed to enter the Galactic-distribution sample) but \textit{Spitzer} data taken before this date could not.
If the event was subsequently regarded as unpromising, it would not be subjectively alerted, in which case the observations could be halted the next week without violating the \citet{yee15b} protocols. This was exactly the case for OGLE-2015-BLG-1482 (see Figure 1). It was ``secretly'' alerted at the upload for observations to begin at HJD$'$ = HJD-2450000 = 7206.73. It was only because of this secret alert that an observation was made near peak, which became the basis for the current paper. In fact, its subsequent rise was so fast (due to its short timescale) that it was subjectively alerted just prior to the near-peak \textit{Spitzer} observation. At the next week's upload, it met the objective criteria. Note, however, that if we had waited for the event to become objective before triggering observations, we would not have been able to make the mass measurement reported here, even though the planet sensitivity analysis would have been almost identically the same (provided that parallax could still be measured with the remaining \textit{Spitzer} observations). This is the first \textit{Spitzer} microlensing event for which a ``secret alert'' played a crucial role. \textit{Spitzer} observations were made in the $3.6\,\mu$m channel on the IRAC camera from HJD$'$ = HJD - 2450000 = 7206.73 to 7221.04. The data were reduced using specialized software developed specifically for this program \citep{calchinovati15}. Even though the \textit{Spitzer} data are relatively sparse, there is one point near the peak, which proves to be essential to determine the normalized source radius $\rho$. \section{LIGHT CURVE ANALYSIS} Event OGLE-2015-BLG-1482 was densely and almost continuously covered by ground-based data, but showed no significant anomalies (see Figure 1). This has two very important implications. First, it means that the ground-based light curve can be analyzed as a point lens.
Second, it implies that it is very likely (but not absolutely guaranteed) that the \textit{Spitzer} light curve can likewise be analyzed as a point lens. The reason that the latter conclusion is not absolutely secure is that the \textit{Spitzer} and ground-based light curves are separated in the Einstein ring by $\Delta\beta\sim 0.15$. Thus, even though we can be quite certain that the ground-based source trajectory did not go through (or even near) any caustics of significant size, it is still possible that the source as seen from \textit{Spitzer} did pass through a significant caustic, but that this caustic was just too small to affect the ground-based light curve. Nevertheless, since the closest \textit{Spitzer} point to peak has impact parameter $u_{\rm Spitzer} \sim 0.06$ and it is quite rare for events to show caustic anomalies at such separations when there are no anomalies seen in densely sampled data at $u > 0.15$, we proceed under the assumption that the event can be analyzed as a point lens from both Earth and \textit{Spitzer}. Thus, we conduct the single-lens modeling of the observed light curve by minimizing $\chi^2$ over parameter space. For the $\chi^2$ minimization, we use the Markov Chain Monte Carlo (MCMC) method. Thanks to the simultaneous observation from the Earth and satellite, we are able to measure the microlens parallax $\pi_{\rm E} = (\pi^{2}_{\rm E, N} + \pi^{2}_{\rm E, E})^{1/2}$, where $\pi_{\rm E, N}$ and $\pi_{\rm E, E}$ are the north and east components of the parallax vector $\bm{\pi}_{\rm E}$, respectively. The \textit{Spitzer} light curve has a point near the peak of the light curve, and thus we can also measure the normalized source radius $\rho$. Hence, we take the three single-lens parameters $t_0$, $u_0$, and $t_{\rm E}$, the parallax parameters $\pi_{\rm E,N}$ and $\pi_{\rm E,E}$, and the normalized source radius $\rho$ as free parameters in the modeling.
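For reference, the single-lens magnification that these parameters feed into can be sketched as follows, using the standard point-source point-lens (Paczy\'nski) formula $A(u) = (u^2+2)/[u\sqrt{u^2+4}]$; this is the textbook point-source expression, not the finite-source model required for the near-peak \textit{Spitzer} point, and the parameter values in the usage line are hypothetical.

```python
import math

def u_of_t(t, t0, u0, tE):
    """Lens-source separation u(t) from the single-lens parameters."""
    tau = (t - t0) / tE
    return math.hypot(tau, u0)

def magnification(u):
    """Point-source point-lens (Paczynski) magnification A(u)."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

# At peak (t = t0) the magnification is set by u0 alone; A ~ 1/u0 for u0 << 1.
# Hypothetical values: u0 = 0.06 and tE = 4 days give a peak A of roughly 17.
peak_A = magnification(u_of_t(0.0, 0.0, 0.06, 4.0))
```

The $A \simeq 1/u$ limit for small $u$ is the high-magnification approximation used later in the discussion of the $\rho$ degeneracy.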
In addition, there are two flux parameters for each of the 5 observatories (\textit{Spitzer}, OGLE, KMT CTIO, KMT SAAO, KMT SSO). One represents the source flux $f_{s,i}$ as seen from the $i$th observatory, while the other, $f_{b,i}$, is the blended flux within the aperture that does not participate in the event. That is, the five observed fluxes $F_i(t_j)$ at epochs $t_j$ are simultaneously modeled by \begin{equation} F_i(t_j) = f_{s,i}A_i(t_j;t_0,u_0,t_{\rm E},\rho,\bm{\pi}_{\rm E}) + f_{b,i}, \label{eqn:ftot} \end{equation} where $A_i(t)$ is the magnification as a function of time at the $i$th observatory. In principle, these magnifications may differ because the observatories are at different locations. However, in this event the separations of the observatories on Earth are so small compared to the projected size of the Einstein ring that we ignore them and consider all Earth-based observations as being made from Earth's center. That is, we ignore so-called ``terrestrial parallax''. At the same time, the distance between the Earth and \textit{Spitzer} remains highly significant, so $A_{\rm Spitzer}(t)$ is different from $A_{\rm Earth}(t)$. As is customary (e.g., \citealt{dong07, udalski15, yee15a}), we determine the parameters in the ``geocentric'' frame at the peak of the event as observed from Earth \citep{gould04}, and likewise adopt the sign conventions shown in Figure 4 of \citet{gould04}. In addition, because only a single \textit{Spitzer} point contributes to the finite-source effect, we also conduct point-source/point-lens modeling for comparison. We find that the difference between the best-fit point-source and finite-source models is $\Delta \chi^{2} = 31.47$. Hence, OGLE-2015-BLG-1482 strongly favors the finite-source model. \subsection{Limb Darkening} As we will show, the lens either transits or passes very close to the source as seen by \textit{Spitzer}, which induces finite-source effects near the peak of the \textit{Spitzer} light curve.
To account for this, we adopt a limb-darkened brightness profile for the source star of the form \begin{equation} S_{\lambda}(\theta) = \bar{S}_{\lambda}\left[1 - \Gamma\left(1 - {3\over{2}}\cos\theta\right)\right], \end{equation} where $\bar{S}_{\lambda} \equiv F_{S, \lambda}/(\pi \theta^{2}_{\star})$ is the mean surface brightness of the source, $F_{S,\lambda}$ is the total flux at wavelength $\lambda$, $\Gamma$ is the limb darkening coefficient, and $\theta$ is the angle between the normal to the surface of the source star and the line of sight \citep{an02a}. Based on the estimated color and magnitude of the source, which is discussed in Section 4, assuming an effective temperature $T_{\rm eff} = 4500\ \rm K$, solar metallicity, surface gravity $\log\ g = 0.0$, and microturbulent velocity $v_{t} = 2$ km/s, we adopt $\Gamma_{3.6\,\mu {\rm m}} = 0.178$ from \citet{claret11}. \subsection{$(2\times 2)=4$ highly degenerate solutions} As discussed in Section 1, space-based parallax measurements for point lenses generically give rise to four solutions, which can be highly degenerate. However, in cases for which one of the two observatories has $u_0 \simeq 0$, while the other has $u_0 \neq 0$, the four solutions reduce to two solutions. Since for event OGLE-2015-BLG-1482, \textit{Spitzer} has $u_{\rm 0, sat} \simeq 0$, we expect the event to have two degenerate solutions, $u_{\rm 0, \oplus} > 0$ and $u_{\rm 0, \oplus} < 0$. However, what we see in Table 1 is not two degenerate solutions but four. For each of the two expected degenerate solutions [$(+,0), (-,0)$], there are two solutions with different values of $\rho$ ($\rho \simeq 0.06$ and $\rho \simeq 0.09$). Figure 1 shows the best-fit light curve of the event OGLE-2015-BLG-1482 with the OGLE, KMT, and \textit{Spitzer} data sets. The best-fit solution is the $(+,0)$ solution for $\rho \simeq 0.06$, which means $u_{\rm 0, \oplus} > 0$ and $u_{\rm 0, sat} \simeq 0$.
The biggest $\Delta \chi^{2}$ between the four solutions is $\Delta \chi^{2} \simeq 0.5$. We should expect the two-fold parallax degeneracy to be very severe in this case. This two-fold degeneracy would be exact in the approximations that 1) Earth and \textit{Spitzer} are in rectilinear motion and 2) they have zero relative projected velocity \citep{gould95}. For events that are very short compared to a year (like this one), the approximation of rectilinear motion is excellent. And while Earth and \textit{Spitzer} had relative projected motion of order $v_\oplus \sim 30\,{\rm km\,s^{-1}}$, this must be compared to the lens-source projected velocity $\tilde v$, \begin{equation} \tilde v \equiv {{\rm AU}\over{\pi_{\rm E} t_{\rm E}}}\simeq 3050\,{\rm km\,s^{-1}}. \label{eqn:tildev} \end{equation} Hence, these two solutions are almost perfectly degenerate. On the other hand, the $\rho$ degeneracy was completely unexpected. It is also very severe. The origins of the $\rho$ degeneracy are discussed in Section 5. To illustrate the $\rho$ degeneracy, the light curve of the best-fit model $(+,0)$ for $\rho \simeq 0.09$ is also presented in Figure 1. In Table 1, we present the parameters of all the four solutions. \section{Physical properties} \subsection{Source properties} The color and magnitude of the source are estimated from the observed $(V - I)$ source color and best-fit modeling of the light curve, but they are affected by extinction and reddening due to the interstellar dust along the line of sight. The dereddened color and magnitude of the source can be determined by comparing to the color and magnitude of the red clump giant (RC) under the assumption that the source and RC experience the same amount of reddening and extinction \citep{yoo04}. Figure 2 shows the instrumental KMT CTIO color-magnitude diagram (CMD) of stars in the observed field.
The color and magnitude of the RC are obtained from the position of the RC on the CMD, which corresponds to $[(V - I), I]_{\rm RC} = [1.67, 17.15]$. We adopt the intrinsic color and magnitude of the RC with $(V - I)_{\rm RC,0}$ = 1.06 \citep{bensby11} and $I_{\rm RC,0}$ = 14.50 \citep{nataf13}. The instrumental source color obtained from a regression is $(V - I)_{\rm s} = 1.74$ and the magnitude of the source obtained from the best-fit model is $I_{\rm s} = 17.37$. The measured offset between the source and the RC is $[\Delta(V - I), \Delta I] = [0.07, 0.22]$. Here we note that there exists an offset between the instrumental magnitudes of OGLE and KMTNet of $I_{\rm kmt} - I_{\rm ogle} = 0.045$ mag. Thus, we should consider this offset when we estimate the dereddened magnitude of the source. As a result, we find the dereddened color and magnitude of the source $[(V - I), I]_{\rm s,0} = [1.13, 14.76]$. The dereddened $(V - K)$ source color obtained by using the color-color relation of \citet{bessell88} is $(V - K)_{\rm s,0} = 2.61$. Then applying $(V - K)_{\rm s,0}$ to the color-surface brightness relation of \citet{kervella04}, we determine the source angular radius $\theta_{\rm \star} = 5.79 \pm 0.39\ \mu{\rm as}$. The estimated color and magnitude of the source suggest that the source is a K-type giant. The error in $\theta_{\rm \star}$ includes the uncertainty in the source flux, the uncertainty in the conversion from the observed $(V - I)$ color to the surface brightness, and the uncertainty of centroiding the RC. The uncertainty in the source flux is about $1\%$ and the uncertainty of the microlensing color is $0.02$ mag, which contributes a $1.6\%$ error to the $\theta_{\rm \star}$ measurement. The scatter of the source angular radius relation in $(V - K)_{\rm s,0}$ is $5\%$ \citep{kervella08}, and centroiding the RC contributes $4\%$ to the radius uncertainty \citep{shin16}.
As mentioned above, since the degeneracy between the two different $\rho$ solutions is very severe ($\Delta \chi^{2} \lesssim 0.3$), we should consider both $\rho$ solutions. The two $\rho$ values yield two different Einstein radii, \begin{equation} \theta_{\rm E} = \theta_{\rm \star}/\rho = \left\lbrace \begin{array}{ll} 0.104 \pm 0.022\ \textrm{mas} & \textrm{for $\rho \simeq 0.06$} \\ 0.063 \pm 0.006\ \textrm{mas} & \textrm{for $\rho \simeq 0.09$}. \end{array} \right. \end{equation} Because of the two different Einstein radii, all the physical parameters related to the lens take on two discrete values. The lens-source relative proper motions are \begin{equation} \mu_{\rm rel} = \theta_{\rm E}/t_{\rm E} = \left\lbrace \begin{array}{ll} 8.96 \pm 1.88\ \textrm{mas\ yr$^{-1}$} & \textrm{for $\rho \simeq 0.06$} \\ 5.48 \pm 0.48\ \textrm{mas\ yr$^{-1}$} & \textrm{for $\rho \simeq 0.09$}. \end{array} \right. \end{equation} \subsection{Lens properties} The mass and distance of the lens can be obtained from the measured Einstein radius $\theta_{\rm E}$ and microlens parallax $\pi_{\rm E}$. As discussed in the introduction, the four-fold degeneracy in $\bm{\pi}_{\rm E}$ usually leads to a two-fold degeneracy in its amplitude $\pi_{\rm E}$. However, in the case of events that are much higher magnification (much lower $u_0$) as seen from one observatory than the other, the two-fold degeneracy collapses as well. This is because, under these conditions, $|\Delta\beta_{\pm\pm}|\simeq |\Delta\beta_{\pm\mp}|$. The present case is consistent with the lens passing exactly over the center of the source as seen by \textit{Spitzer} (to our ability to measure it). Then, according to Equation \eqref{eqn:mass}, we measure the lens mass, \begin{displaymath} M = {\theta_{\rm E}\over{\kappa \pi_{\rm E}}} = \left\lbrace \begin{array}{ll} 0.096 \pm 0.023 \ M_\odot & \textrm{for $\rho \simeq 0.06$} \\ 0.055 \pm 0.009 \ M_\odot & \textrm{for $\rho \simeq 0.09$}.
\end{array} \right. \end{displaymath} The lens-source relative parallax for the two cases is \begin{equation} \pi_{\rm rel} = \theta_{\rm E} \pi_{\rm E} = \left\lbrace \begin{array}{ll} 0.014 \pm 0.003\ {\rm mas} & \textrm{for $\rho \simeq 0.06$} \\ 0.009 \pm 0.001\ {\rm mas} & \textrm{for $\rho \simeq 0.09$}. \end{array} \right. \end{equation} These values of $\pi_{\rm rel}$ are very small compared to the source parallax $\pi_s \sim 0.12\,$mas. This implies that the distance between the lens and the source is determined much more precisely than the distance to the lens or the source separately. That is, \begin{equation} D_{\rm LS} \equiv D_{\rm S} - D_{\rm L} = {{\pi_{\rm rel} \over {\rm AU}} D_{\rm S} D_{\rm L}} \label{eqn:dls} \end{equation} \begin{displaymath} \simeq \left\lbrace \begin{array}{ll} 0.80 \pm 0.19 \ {\rm kpc} & \textrm{for $\rho \simeq 0.06$} \\ 0.54 \pm 0.08 \ {\rm kpc} & \textrm{for $\rho \simeq 0.09$}. \end{array} \right. \end{displaymath} Since the source is almost certainly a bulge clump star (from its position on the CMD), and the lens is $\lesssim 1$ kpc from the source, it is likewise almost certainly in the bulge. Thus, this is the first isolated low-mass object that has been determined to lie in the Galactic bulge. \section{Discussion} \subsection{Future Resolution of the $\rho$ Degeneracy Using Adaptive Optics} Event OGLE-2015-BLG-1482 has a very severe two-fold degeneracy in $\rho$, in which the $\Delta \chi^{2}$ between the two solutions ($\rho \simeq 0.06$ and $\rho \simeq 0.09$) is $\Delta \chi^{2} \sim 0.3$. For the solutions with $u_{\rm 0, \oplus} > 0$ and $u_{\rm 0, \oplus} < 0$, the microlens parallax vectors $\bm{\pi}_{\rm E}$ are different from one another, but they have almost the same amplitude $\pi_{\rm E}$. Therefore, the two solutions yield almost the same physical parameters of the lens. However, each of the two solutions also has two degenerate $\rho$ solutions: $\rho \simeq 0.06$ and $\rho \simeq 0.09$.
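The chain from $\rho$ to the physical parameters above can be reproduced with the quantities quoted in the text. In the sketch below, $\theta_\star$, $t_{\rm E}$, and the source parallax are taken from the text, while the parallax amplitude $\pi_{\rm E} \approx 0.133$ is an assumed value implied by the quoted mass; small differences from the published numbers reflect these rounded inputs.

```python
KAPPA = 8.14                # mas / M_sun
theta_star = 5.79e-3        # source angular radius in mas (Section 4)
tE_yr = 4.0 / 365.25        # Einstein timescale, ~4 days, in years (approximate)
pi_E = 0.133                # parallax amplitude (assumed, implied by quoted mass)
pi_s = 0.12                 # source parallax in mas (text); D_S = 1/pi_s in kpc

def lens_properties(rho):
    """theta_E, mu_rel, M, pi_rel, and D_LS for a normalized source radius rho."""
    theta_E = theta_star / rho                       # Einstein radius, mas
    pi_rel = theta_E * pi_E                          # relative parallax, mas
    return {
        "theta_E": theta_E,
        "mu_rel": theta_E / tE_yr,                   # mas / yr
        "M": theta_E / (KAPPA * pi_E),               # M_sun
        "pi_rel": pi_rel,
        "D_LS": 1.0 / pi_s - 1.0 / (pi_s + pi_rel),  # kpc
    }

low_rho, high_rho = lens_properties(0.06), lens_properties(0.09)
```

With these inputs the two branches land close to the quoted values: a very low-mass star at $D_{\rm LS} \approx 0.8$ kpc for $\rho \simeq 0.06$, and a BD-mass lens at $D_{\rm LS} \approx 0.55$ kpc for $\rho \simeq 0.09$.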
Each $\rho$ solution yields different physical parameters of the lens, in particular the lens mass. For $\rho \simeq 0.06$, the lens is a very low-mass star, while for $\rho \simeq 0.09$ it is a brown dwarf. The degeneracy of the lens mass due to the two $\rho$ values can be resolved from direct lens imaging by using instruments with high spatial resolution (Han \& Chang 2003; Henderson et al. 2014), such as the VisAO camera of the 6.5 m Magellan telescope with a resolution of $\sim 0\farcs04$ in the $J$ band \citep{close13}\footnote{Close et al. (2014) have obtained a diffraction-limited FWHM in ground-based 6m $R$ band images, which gives hope for optical AO. However, it is premature to claim that this technique can be applied to faint stars in the Galactic bulge.} and the GMTIFS of the 24.5 m Giant Magellan Telescope (GMT) with a resolution of $\sim 0\farcs 01$ in the NIR \citep{mcgregor12}. In general, direct imaging requires (1) that the lens be luminous, and (2) that it be sufficiently far from the source to be separately resolved. In the present case, (1) clearly fails for the BD solution. Hence, the way that high-resolution imaging would resolve the degeneracy is to look for the luminous (but faint) M dwarf predicted by the other solution at its predicted orientation (either almost due north or due south of the source -- since $|\pi_{\rm E,N}|\gg |\pi_{\rm E,E}|$) and with its predicted separation $(t_{\rm AO} - 2015)\times (9\,\rm mas\,yr^{-1})$. If the M dwarf fails to appear at one of these two expected positions, the BD solution is correct. Since the source is a clump giant, and hence roughly $10^4$ times brighter than the M dwarf, it is likely that the two cannot be separately resolved until they are separated by at least 2.5 FWHM. This requires waiting until $2015 + 2.5\times(40/9)=2026$ for Magellan or $2015 + 2.5\times (10/9)=2018$ for GMT. \subsection{Origin of the $\rho$ Degeneracy} The degeneracy in $\rho$ was completely unexpected.
Indeed, we discovered it accidentally because $\rho$ had one value in one of the two degenerate parallax solutions and the other value in the other. Originally, this led us to think that it was somehow connected to the parallax degeneracy. However, by seeding both solutions with both values of $\rho$ we discovered that it was completely independent of the parallax degeneracy. In retrospect, the reason for this degeneracy is ``obvious''. There is only a single point that is strongly impacted by the finite size of the source. The value of $u$ at this time is well predicted by the rest of the light curve, in particular because \textit{Spitzer} data begin before peak (see Section~2), \begin{equation} u = \sqrt{\tau^2 + u_0^2} \quad {\rm where} \quad \tau = \frac{(t-t_{\rm 0, sat})}{t_{\rm E}}. \end{equation} Hence, the magnification (for point-lens/point-source geometry in a high-magnification event) is also known, $A_{\rm ps}\simeq 1/u$. Moreover, both $f_s$ and $f_b$ for \textit{Spitzer} are also well measured, so that the measured flux at the near-peak point $F$ directly yields an empirical magnification $A_{\rm obs} = (F-f_b)/f_s$ (i.e., the magnification in the presence of finite-source effects). Following \citet{gould94a}, the ratio of $A_{\rm obs}$ and $A_{\rm ps}$ can therefore be derived directly from the light curve \begin{equation} \label{eqn:bofz} B(z) \equiv {A_{\rm obs}\over A_{\rm ps}}\simeq A_{\rm obs}u. \qquad (z\equiv u/\rho) \end{equation} As shown by Figure 1 of \citet{gould94a}, $B(z)$ reaches a peak at $z\simeq 0.91$, with $B=1.34$.\footnote{While Figure 1 from \citet{gould94a} shows the correct qualitative behavior, it has a quantitative error in that the peak is at $\sim 1.25$, rather than 1.34 (the correct value).} Therefore, if one inverts a measurement of $B(z)$ to infer a value of $z$, there are respectively one, two, and zero solutions for $B_{\rm obs}<1$, $1<B_{\rm obs}<1.34$, and $B_{\rm obs}>1.34$.
Since this event is a high-magnification event only for \textit{Spitzer}, i.e., the finite-source effect is only seen by \textit{Spitzer}, only the trajectory of \textit{Spitzer} is considered. Figure 3 (adapted from \citealt{gould94a}) shows the finite-source effect function $B(z)$ as a function of $z$. For this event, $B(z) = A_{\rm obs}u = 19.14\times0.06 = 1.15$ at the nearest point to the peak, which is indicated by the horizontal dotted line in the figure. As shown in Figure 3, the condition $B(z) = 1.15$ is satisfied at two different values, $z = 0.64$ and $z = 1.12$, which implies (as outlined above) that there are two $\rho$ values. The two $z$ values yield two normalized source radii of $\rho = 0.094$ (for $z=0.64$) and $\rho = 0.054$ (for $z=1.12$). These two derived $\rho$ values are almost the same as those obtained numerically from the best-fit solutions. Because high-magnification events can be alerted in real time, the high-magnification events observed from Earth are often well covered around the peak by intensive follow-up observations, and thus $\rho$ is almost always well measured if there are significant finite-source effects (i.e., $B\not=1$ for some points). This means that the $\rho$ degeneracy will often be resolved in high-magnification events observed from the ground. On the other hand, since the observation cadence of \textit{Spitzer} is much lower than that of ground-based observations, the $\rho$ degeneracy can occur frequently in high-magnification events observed by \textit{Spitzer}. Note that, in contrast to Figure 1 of \citet{gould94a}, our Figure 3 shows $B(z)$ with and without the effects of limb darkening. The effect is hardly distinguishable by eye, in particular because limb darkening at $3.6\,\mu$m is very weak. Nevertheless, this effect should be included.
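The two-root inversion described above can be reproduced numerically. The sketch below is an illustration only (not the code used in the analysis): it evaluates the uniform-source $B(z)$ in the high-magnification limit $A_{\rm ps}\simeq 1/u$ by integrating $1/|w|$ over the source disk, and then bisects on each side of the peak near $z\simeq 0.91$ to recover the two $z$ roots of $B(z)=1.15$; limb darkening, which is very weak at $3.6\,\mu$m, is neglected.

```python
import math

def B(z, n=4000):
    """Uniform-source finite-source factor B(z) = A_obs/A_ps in the
    high-magnification limit A_ps ~ 1/u, with z = u/rho.  Written as
    B = z*I(z)/pi, where I(z) integrates 1/|w| over a unit disk whose
    centre lies a distance z from the lens (limb darkening neglected)."""
    if z <= 0.0:
        return 0.0
    # full rings around the lens exist only while the lens is inside the disk
    I = 2.0 * math.pi * max(0.0, 1.0 - z)
    r_lo, r_hi = abs(1.0 - z), 1.0 + z
    h = (r_hi - r_lo) / n
    for k in range(n):                      # midpoint rule over partial rings
        r = r_lo + (k + 0.5) * h
        c = (r * r + z * z - 1.0) / (2.0 * r * z)
        I += 2.0 * math.acos(max(-1.0, min(1.0, c))) * h
    return z * I / math.pi

def invert_B(B_obs, lo, hi, tol=1e-5):
    """Bisection for B(z) = B_obs; B is monotone on each side of z ~ 0.91."""
    f_lo = B(lo) - B_obs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = B(mid) - B_obs
        if f_lo * f_mid <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

u = 0.06                                    # u at the near-peak Spitzer point
z1 = invert_B(1.15, 0.05, 0.91)             # rising branch
z2 = invert_B(1.15, 0.91, 3.00)             # falling branch
print(z1, z2, u / z1, u / z2)               # ~0.64, ~1.12 -> rho ~0.094, ~0.054
```

The recovered roots agree with the values read off Figure 3 to within the accuracy expected from neglecting limb darkening.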
If finite-source effects are reliably detected from a single measurement near peak, how often will $\rho$ be ambiguous, and if it is ambiguous, how often will the value fall in the upper versus lower allowed ranges? We might judge there to be a ``reliable detection'' of finite-source effects from a single point if $|B-1| > X$, where $X$ might be taken as 5\%. For high-magnification events including the limb darkening effect, we can Taylor expand $B$ for $z>1$ (see Appendix) \begin{equation} B(z) = 1 + {1\over{8}}\left(1 - {\Gamma\over5}\right){1\over{z^2}} + {3\over 64}\left(1 - {11\over 35}\Gamma\right){1\over{z^4}} + \ldots, \end{equation} where $\Gamma$ is the limb darkening coefficient, as mentioned in Section 3. Truncating at the second term, we have $B(z) \simeq 1 + (1 - \Gamma/5)/(8z^{2})$. For \textit{Spitzer}, $\Gamma/5 \ll 1$, so we can ignore it. Then $B(z) \simeq 1 + 1/(8z^2)$. Thus, $B - 1 = X$, i.e., $B = 1 + X$, implies $z = (1/(8X))^{1/2} \simeq 1.6$ (for $X = 5\%$). To next order, $z=\left[\frac{4}{3}\left(\sqrt{1+12X}-1\right)\right]^{-1/2}=1.683$, which is very close to the numerical value, 1.7. Hence, we have three ranges of recognizable finite-source effects. The ranges are presented in Table 3. Table 3 shows that $0.51/(0.51 + 0.34 + 0.79) = 31\%$ of the finite-source effects will be unambiguous. Of the times they are ambiguous, $0.34/(0.34+0.79) = 30\%$ will have the higher value of $\rho$. Figure 4 shows the $\chi^2$ distribution of $u_{\rm 0, sat}$ versus $\rho$ from the MCMC chains of the four degenerate solutions in Table 1. The figure shows that the distribution is centered on $u_{\rm 0, sat} = 0.0$ and thus the four solutions are consistent with $u_{\rm 0, sat} = 0.0$, although there is scatter. Therefore, it is correct to label $u_{\rm 0, sat}$ as ``0''. The figure also shows that the nearest point to the peak of the \textit{Spitzer} light curve favors $u_{\rm 0, sat} =0$, but can accommodate other values of $u_{\rm 0, sat}$, up to about 0.03 at $< 2\sigma$.
In this case, a bigger $u_{\rm 0, sat}$ makes $B(z)$ bigger because $B(z) = uA_{\rm obs}$, thereby allowing values of $z$ between the two best-fit values. At the nearest point to the peak, $\tau = |t - t_{0,sat}|/t_{\rm E} = |7207.50 - 7207.76|/4.26 = 0.06$. Then, \begin{eqnarray} \frac{B(u_{\rm 0, sat}=0.03)}{B(u_{\rm 0, sat}=0)} & = & \frac{u(u_{\rm 0, sat}=0.03)}{u(u_{\rm 0, sat}=0)} \nonumber \\ & = & \sqrt{\frac{0.06^2 + 0.03^2}{0.06^2}} = 1.12 \end{eqnarray} Since $B(u_{\rm 0, sat}=0) = 1.15$ from Figure 3, $B(u_{\rm 0, sat}=0.03) = 1.15\times1.12 = 1.29$, which is the maximum allowed value of $B$ and thus corresponds to the maximum allowed $u_{\rm 0, sat}$. The maximum allowed $B(z) = 1.29$ yields $z=0.79$ and $z=0.98$ and hence two $\rho$ values, $\rho = 0.085$ and $\rho = 0.068$. Thus, the $\rho \simeq 0.09$ solutions have a lower limit of $\rho =0.085$, while the $\rho \simeq 0.06$ solutions have an upper limit of $\rho = 0.068$. \subsection{$\rho$ degeneracy of OGLE-2015-BLG-0763} OGLE-2015-BLG-0763 is the only other event with a single lens mass measurement based on finite-source effects observed by \textit{Spitzer} \citep{zhu16}. As with OGLE-2015-BLG-1482, the \textit{Spitzer} light curve shows only one point that is strongly affected by finite-source effects (i.e., $B\not=1$). \citet{zhu16} report $\rho=0.0218$, $t_{\rm E}=33\,$days, and their solution implies $t_{0,\rm sat}\simeq 7188.60$ and $u_{\rm 0, sat}=0.012$. Therefore, the three points closest to peak \citep{calchinovati15}, at $t-7188.60 = (-0.75,0.36,0.72)\,$days, have, respectively, $u=(0.026, 0.016, 0.025)$. Since the measurement was derived primarily from the highest point, one may infer $z\equiv u/\rho=0.73$. Inspection of Figure 3 shows that this implies $B(z)=1.25$, which (since $B>1$) implies that there is another solution at $z=1.01$ and therefore with $\rho=0.016$. We can then derive for the two solutions $z=(1.19,0.73,1.15)$ (adopted) and $z=(1.62,1.01,1.57)$ (other).
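The quoted $u$ and $z$ values for OGLE-2015-BLG-0763 follow directly from the reported parameters; a quick check (small last-digit differences from the quoted values reflect rounding of the inputs):

```python
# Check of the u and z values for OGLE-2015-BLG-0763, from the reported
# parameters t_E = 33 d, u_0,sat = 0.012, rho = 0.0218 (adopted) and
# rho = 0.016 (alternative).
tE, u0 = 33.0, 0.012
dts = (-0.75, 0.36, 0.72)               # t - t_0,sat of the near-peak points
us = [((dt / tE) ** 2 + u0 ** 2) ** 0.5 for dt in dts]
z_adopted = [u / 0.0218 for u in us]
z_other = [u / 0.016 for u in us]
print([round(u, 3) for u in us])        # [0.026, 0.016, 0.025]
print([round(z, 2) for z in z_adopted]) # close to the quoted (1.19, 0.73, 1.15)
print([round(z, 2) for z in z_other])   # close to the quoted (1.62, 1.01, 1.57)
```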
These yield values of $B$ (from Figure~3) of $B(z)=(1.13,1.25,1.14)$ (adopted) and $B(z)=(1.06,1.25,1.06)$ (other). That is, for OGLE-2015-BLG-0763, the two nearest points to the peak will both be about 0.08 mag brighter in the adopted solution than in the other solution. Since the \textit{Spitzer} photometric errors are small compared to these inferred differences (Figure~2 of \citealt{zhu16}), we expect that, in the case of OGLE-2015-BLG-0763 (and in contrast to OGLE-2015-BLG-1482), the near-peak points resolve the degeneracy between the two solutions. Armed with the above understanding, which was derived without any detailed modeling, we reanalyze OGLE-2015-BLG-0763 and find only an upper limit of 0.01 for the second $\rho$. Moreover, as discussed in \citet{zhu16}, solutions with the second $\rho$ are inconsistent with the observations and thus are not physically correct. As a result, there is no $\rho$ degeneracy for the event OGLE-2015-BLG-0763. As mentioned before, this is because of the near-peak points. This implies that although the $\rho$ degeneracy can occur frequently for events in which the finite-source effect is seen only in the \textit{Spitzer} data, owing to the low observation cadence of \textit{Spitzer}, it can be resolved by a few data points near the peak. \subsection{Error analysis in $\rho$ measurement} The error in the $\rho$ measurement of the event OGLE-2015-BLG-1482 is $19.8\%$ for $\rho \simeq 0.06$ and $6.6\%$ for $\rho \simeq 0.09$. These errors are quite large relative to measurements in high-magnification events from the ground. We therefore study the source of these errors in $\rho$, both to determine why they are so different and to make sure that we are properly incorporating all sources of error in this measurement. As outlined above, the chain of information is basically captured by $\rho= u/z(B)$, where $z(B)$ is the inverse of $B(z)$ and both $u$ and $B$ can be regarded approximately as ``measured'' quantities.
It is instructive to further expand this expression \begin{equation} \rho = {u\over z (A_{\rm obs}u)}. \label{eqn:rho} \end{equation} In this form, it is clear that the contribution from an error in determining $u$ tends to be suppressed if $z'\equiv dz/dB>0$ (i.e., $z<0.91$, so $\rho \simeq 0.09$ in our case), and it tends to be enhanced if $z'<0$. Hence, this feature of Equation \eqref{eqn:rho} goes in the direction of explaining the larger error in the $\rho \simeq 0.06$ case. Second, if for the moment we ignore the error in $u$, then Equation \eqref{eqn:rho} implies $\sigma(\ln\rho) =|z'/z|\sigma(A_{\rm obs})$. For the two cases, $\rho=(0.09,0.06)$, we have $z=(0.64,1.12)$, $z'=(0.85,-2.18)$, and so $|z'/z|=(1.33,1.95)$. Hence, this aspect also favors larger errors for the smaller $\rho$ (larger $z$) solution. This is intuitively clear from Figure 3: the shallow slope of $B(z)$ toward large $z$ makes it difficult to estimate $z$ from a measurement of $B$. Hence, the fact that the fractional error in $\rho$ is much larger for the large $z$ (small $\rho$) solution is well understood. Ignoring blending, we can write $A_{\rm obs} = F/f_s$. The error in $F$ (i.e., the flux measurement at the high point) is uncorrelated with any other error. Since in our case $u_{\rm 0, sat}\simeq 0$, we can write $u=(t-t_0)/t_{\rm E}$, and so \begin{equation} B = A_{\rm obs}u = {|t-t_0|F\over f_s t_{\rm E}}. \label{eqn:aobs} \end{equation} Since $t_0$ is known extremely well, and $t$ is known essentially perfectly, there would appear to be essentially no error in $|t-t_0|$. The denominator, $f_s t_{\rm E}$, is a near-invariant in high-magnification events \citep{yee12}. That is, the errors in this product are generally much smaller than the errors in either factor separately. This means that the error in $B$ [and so in $z(B)$] is dominated by the flux measurement error of the single point that is affected by finite-source effects.
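The amplification factors quoted above can be checked in one line (the $z$ and $z'$ values are those read from Figure 3):

```python
# Check of the error-amplification factors |z'/z| for the two solutions,
# using the z and z' = dz/dB values quoted in the text.
solutions = {0.09: (0.64, 0.85), 0.06: (1.12, -2.18)}   # rho: (z, z')
amp = {rho: round(abs(zp / z), 2) for rho, (z, zp) in solutions.items()}
print(amp)   # {0.09: 1.33, 0.06: 1.95}
```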
Nevertheless, it is important to recognize that \citet{yee12} derived their conclusion regarding the invariance of $f_s t_{\rm E}$ under conditions in which the error in $f_s$ is completely dominated by the model, and not by the flux measurement errors. Indeed, as a rather technical, but very relevant, point, it is customary practice to ignore the role of flux measurement errors in the determination of $f_s$. That is, $f_s$ and $f_b$ are normally {\it not} included as chain variables when modeling microlensing events. Instead, the magnification is determined at each point along the light curve from microlens parameters that are in the chain, and then the two flux parameters (for each observatory) are determined from a linear fit. This is a perfectly valid approach for the overwhelming majority of microlensing events because the errors arising from this fit (which are returned by the linear fit routine but usually not reported) are normally tiny compared to the error in $f_s$ due to the model. Moreover, there are usually many observatories contributing to the light curve, and if all the flux parameters were incorporated in the chain, it would greatly increase the convergence time. However, in the present case $t_{\rm E}$ is essentially determined from ground-based data, which are both numerous and very high precision, while $f_s$ is determined from just 16 \textit{Spitzer} points (i.e., all the points save the one near peak). If the usual (linear fit) procedure were applied, it would seriously underestimate the error in $f_s$ and so overestimate its degree of anticorrelation with $t_{\rm E}$. We therefore include $(f_s,f_b)_{\rm Spitzer}$ as chain parameters and remodel this event. The result of the remodeling is presented in Table 2. By comparing to runs in which we treat these flux parameters in the usual way, we find that including these parameters in the chain contributes about 41\% to the $\rho$ error, relative to all other sources of $\rho$ error combined.
That is, in the end, this does not dramatically increase the final error, since $(1^2 + 0.41^2)^{1/2}=1.08$. Nevertheless, it is important to treat $(f_s,f_b)_{\rm Spitzer}$ in a formally proper way, since this contribution could easily be the dominant one in other cases. \subsection{Impact of the $\rho$ Degeneracy} The $\rho$ degeneracy was not recognized until now for several reasons. First of all, although single lens finite-source events have been routinely detected from ground-based observations, they are not scientifically very interesting without the measurement of $\pi_{\rm E}$. However, $\pi_{\rm E}$ measurements of single lens events based on ground-based data alone are intrinsically rare and technically difficult \citep{gould13}. Second, prior to the establishment of second-generation microlensing surveys, observations of high-magnification microlensing events were usually conducted under the survey+followup mode, which was first suggested by \citet{gould92b}. High-magnification events, with their nearly 100\% sensitivity to planets \citep{griest98}, were therefore often followed up with intensive ($\sim$1 min cadence) observations, which could easily resolve this $\rho$ degeneracy, if it exists. The $\rho$ degeneracy is nevertheless important for the science of second-generation ground-based and future space-based microlensing surveys. The majority of events found by these surveys will not be followed up at all, and thus the $\rho$ degeneracy can appear because the typical source radius crossing time, $t_\star$, is comparable to the observing cadences that these surveys adopt.
Here \begin{equation} \label{eqn:tstar} t_\star \equiv \frac{\theta_{\rm \star}}{\mu_{\rm rel}} = 45\ {\rm min} \left(\frac{\theta_{\rm \star}}{0.6~\mu{\rm as}}\right) \left(\frac{\mu_{\rm rel}}{7~{\rm mas\ yr^{-1}}}\right)^{-1}\ , \end{equation} where $0.6~\mu$as is the angular source size of a Sun-like star in the Bulge, and 7 $\rm mas\ yr^{-1}$ is the typical value for the lens-source relative proper motion of disk lenses. For second-generation microlensing surveys like OGLE-IV and KMTNet, although a few fields are observed once every $<20$ min, the majority of fields are observed at $>1$ hr cadences. Therefore, the single lens finite-source events in these relatively low-cadence fields are likely to have one single data point probing the finite-source effect, and thus the $\rho$ degeneracy appears. Fortunately, however, the result of event OGLE-2015-BLG-0763 showed that a few additional data points (before/after crossing the source) around the peak play a crucial role in resolving the $\rho$ degeneracy. When a typical microlensing event is observed with a cadence of 1 hr, we can obtain two more data points immediately before and after the source crossing, in addition to the single source-crossing data point. In this case, the $\rho$ degeneracy will be resolved as in the event OGLE-2015-BLG-0763. This implies that 1 hr is the upper limit of the observing cadence for resolving the $\rho$ degeneracy in typical single lens events observed by the second-generation ground-based surveys; for events with high $\mu_{\rm rel}$, such as those caused by a fast-moving lens or a high-velocity source star, even this cadence is not enough to resolve the $\rho$ degeneracy. Since about half of KMTNet fields have $\leqslant 1$ hr cadences, and these fields have a higher probability of detecting events than the other fields with cadences of $\geqslant 2.5$ hr, the $\rho$ degeneracy will be resolved in the majority of single high-magnification events to be observed by KMTNet.
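The fiducial 45 min in the equation above is a pure unit conversion, which also shows how quickly $t_\star$ shrinks for smaller sources; a minimal check:

```python
# t_* = theta_*/mu_rel in minutes: theta_* in micro-arcsec, mu_rel in mas/yr.
MIN_PER_YR = 365.25 * 24 * 60

def t_star_min(theta_star_uas=0.6, mu_rel_mas_yr=7.0):
    return theta_star_uas * 1e-3 / mu_rel_mas_yr * MIN_PER_YR

print(round(t_star_min()))       # 45 min for a Sun-like bulge source
print(round(t_star_min(0.3)))    # 23 min for a source half that size
```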
Although $\pi_{\rm E}$ is still intrinsically hard to measure even with second-generation surveys, the fraction of events with finite-source effects can be used as an indicator of the properties of the lens population, which is especially important for validating short-timescale events such as the population of free-floating planets (FFPs) \citep{sumi11}. \textit{The Wide-Field InfraRed Survey Telescope} (\textit{WFIRST}) is likely to have six microlensing campaigns, each observing a $\sim$3 deg$^2$ microlensing field for 72 days at 15 min cadence \citep{spergel15}. \textit{WFIRST} microlensing is expected to detect thousands of bound planets and hundreds of FFPs. At first sight, the 15 min cadence that \textit{WFIRST} microlensing is currently adopting suggests that it will not be affected by the $\rho$ degeneracy. However, since \textit{WFIRST} will go much fainter than ground-based surveys, most of the sources for \textit{WFIRST} events will be M dwarfs, which are half or even a quarter the size of the Sun. Then the typical $t_\star$ for \textit{WFIRST} events is $\sim$15 min, which is the same as the adopted cadence. Hence, as mentioned above, although the $\rho$ degeneracy can usually be resolved by obtaining more than two data points around the peak, it will be severe for a significant fraction of events with high $\mu_{\rm rel}$. What makes this $\rho$ degeneracy more important for \textit{WFIRST} is that $\pi_{\rm E}$ can be measured relatively easily provided that ground-based observations are taken simultaneously \citep{yee13, zhu16b}. Therefore, the degeneracy in $\rho$ will directly lead to a degeneracy in the mass determination of isolated objects, including FFPs, BDs, and stellar-mass black holes. \subsection{Potential for the second body} As discussed in Section 3, it is possible that the deviation of the highly magnified \textit{Spitzer} point might be caused by a caustic structure rather than being entirely due to finite-source effects.
It is easy to show qualitatively that this could affect the exact nature of the system, but that it is unlikely to significantly change the conclusion that the lens is a low-mass object in the bulge. First, if there were a caustic perturbation, there would be a second body in the lens system. However, we do not see any evidence for a second body in the ground-based light curve, and therefore the dominant lensing effect still comes from a single star. Second, consider the effect on the inferred physical properties of the lens (e.g., mass and distance). If there were a caustic structure, then it is likely to be small, since it does not affect the ground-based data. In that case $\rho$ would be smaller and therefore $\theta_{\rm E}$ would be larger. At the same time, $t_{\rm E}$ is clearly determined from the dense, ground-based observations, so if $\theta_{\rm E}$ is larger, $\mu_{\rm rel}$ must also be larger. However, $\mu_{\rm rel}$ is already 9 $\rm mas\ yr^{-1}$ for the smaller $\rho$ solution. Larger values of $\mu_{\rm rel}$ are increasingly improbable and will eventually become unphysical. Hence, OGLE-2015-BLG-1482 is likely an event caused by a single lens star. However, since a binary lens system could simultaneously reproduce the single lens-like light curve from ground-based observations and the poorly sampled light curve from \textit{Spitzer}, we conduct binary lens modeling. As a result, we find that the best-fit binary lens solution is a BD binary system composed of a primary of mass $M_{\rm L,1}=0.06 \pm 0.01\ M_\odot$ and a secondary of mass $M_{\rm L,2}=0.05 \pm 0.01\ M_\odot$ with a projected separation of 19 AU, corresponding to a mass ratio between the binary components of $q=0.78$ and a projected separation in units of $\theta_{\rm E}$ of the lens system of $s=24$. The estimated distance to the BD binary is 7.5 kpc, and thus it is also located in the Galactic bulge.
Although the $\chi^2$ of the binary lens model is smaller than that of the single lens model by 35, it is a very wide binary system with large $\rho = 0.066$, and thus it is extremely closely related to (in fact, a variety of) the single lens solution with $\rho \simeq 0.06$. It is important to understand the reason for this close relation. The key point is that the high point of \textit{Spitzer} is explained by finite-source effects on the tiny caustic of the very wide BD binary, which ``replaces'' the point caustic of the point lens in the single lens solution. But the $\chi^2$ improvement for the binary solution comes entirely from the ground-based data, while the $\chi^2$ of the \textit{Spitzer} data becomes slightly worse than that of the single lens model (see Figure 1). Thus, the \textit{Spitzer} high point is not caused by the binary, as we originally sought to test. The $\chi^2$ improvement could in principle be due to a distant companion. However, low-level systematics can also easily produce $\Delta \chi^{2} = 35$ improvements in microlensing light curves, which could then mistakenly be attributed to planets, binaries, etc. For this reason, \citet{gaudi02} and \citet{albrow01} already set a threshold at $\Delta \chi^{2} \geqslant 60$ for the detection of a planet, based on experience with several dozen carefully analyzed events. Thus, all we can say about OGLE-2015-BLG-1482 is that the lens is consistent with being isolated, that we cannot rule out a distant companion, and that the ``evidence'' for such a companion is consistent with the systematic effects often seen in microlensing events. In order to find out whether there is a binary solution for which the high point of \textit{Spitzer} is actually explained by the caustic of a binary, we also conduct binary lens modeling in which $\rho \sim 0$. From this, we find that there is no valid binary lens solution with small $\rho$.
This is because, although we find two solutions with better $\chi^2$ relative to the single lens model, the best-fit lens-source relative proper motions are $\mu_{\rm rel}=177\, \rm mas\, yr^{-1}$ and $\mu_{\rm rel}=583\,\rm mas\, yr^{-1}$ for the $\rho=0.0086$ and $\rho=0.0018$ solutions, respectively. These are so large (as anticipated in the previous paragraph) as to be unphysical. One of the two solutions ($\rho=0.0086$) is a binary system composed of a primary star of $M_{\rm L,1}=1.69 \pm 9.17\ M_\odot$ and a planet of $M_{\rm L,2}=1.21 \pm 6.54\ M_{\rm Jupiter}$ with a projected separation of 9.3 AU, while the other solution ($\rho=0.0018$) is a binary system composed of a primary star of $M_{\rm L,1}=5.55 \pm 11.26\ M_\odot$ and a planet of $M_{\rm L,2}=5.00 \pm 10.15\ M_{\rm Jupiter}$ with a separation of 14.7 AU; these binaries are located at 3.7 kpc and 1.6 kpc, respectively. The very large proper motions are due to large $\theta_{\rm E}$, while $t_{\rm E}$ is clearly determined from the dense ground-based observations, as mentioned in the previous paragraph. Moreover, the $\chi^2$ of the \textit{Spitzer} data for the two binary lens models becomes worse. \section{CONCLUSION} We analyzed the single lens event OGLE-2015-BLG-1482, observed simultaneously from two ground-based surveys and from \textit{Spitzer}. The \textit{Spitzer} data exhibit the finite-source effect due to the passage of the lens directly over the surface of the source star as seen from \textit{Spitzer}. Thanks to the finite-source effect and the simultaneous observation from Earth and \textit{Spitzer}, we were able to measure the mass of the lens.
From this analysis, we found that the lens of OGLE-2015-BLG-1482 is either a very low-mass star with mass $0.10 \pm 0.02 \ M_\odot$ or a brown dwarf with mass $55\pm 9 \ M_J$, located at $D_{\rm LS} = 0.80 \pm 0.19\ \textrm{kpc}$ and $ D_{\rm LS} = 0.54 \pm 0.08\ \textrm{kpc}$, respectively, and thus the lens is the first isolated low-mass object located in the Galactic bulge. The degeneracy between the two solutions is very severe ($\Delta \chi^{2} = 0.3$). The fundamental reason for the degeneracy is that the finite-source effect is seen in only a single data point from \textit{Spitzer} and this data point has the finite-source effect function $B(z) = A_{\rm obs} u > 1$, where $z = u/\rho$. We showed that whenever $B(z) > 1$, there are two solutions for $z$ and hence for $\rho = u/z$. Because the $\rho$ degeneracy can be resolved only by relatively high cadence observations around the peak, while the \textit{Spitzer} cadence is typically $\sim 1\ {\rm day}^{-1}$, we expect that events where the finite-source effect is seen only in the \textit{Spitzer} data may frequently exhibit the $\rho$ degeneracy. In the case of OGLE-2015-BLG-1482, the lens-source relative proper motion for the low-mass star is $\mu_{\rm rel} = 9.0 \pm 1.9\ \textrm{mas\ yr$^{-1}$}$, while for the brown dwarf it is $5.5 \pm 0.5\ \textrm{mas\ yr$^{-1}$}$. Hence, the severe degeneracy can be resolved within $\sim 10\ \rm yr$ by direct lens imaging using next-generation instruments with high spatial resolution. \acknowledgments Work by S.-J. Chung was supported by the KASI (Korea Astronomy and Space Science Institute) grant 2017-1-830-03. Work by W.Z. and A.G. was supported by JPL grant 1500811. Work by C.H. was supported by the Creative Research Initiative Program (2009-0081561) of the National Research Foundation of Korea.
This research has made use of the KMTNet system operated by KASI, and the data were obtained at the three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia. The OGLE project has received funding from the National Science Centre, Poland, grant MAESTRO 2014/14/A/ST9/00121 to A.U. The OGLE Team thanks Professors M. Kubiak, G. Pietrzy{\`n}ski, and {\L}. Wyrzykowski for their contributions to the collection of the OGLE photometric data over the past years.
\section{Introduction} Monte Carlo simulation of diffusion processes is of great interest, as it underlies methods of statistical inference from discrete observations in a variety of applications \citep[e.g.][]{gol:wil:2006, gol:wil:2008, chi:etal:ip, bla:sor:2014, bla:etal:2016}. Our interest in this paper is in the \emph{Wright-Fisher} diffusion. This process is widely used for inference, especially in genetics, where it serves as a model for the evolution of the frequency $X_t \in [0,1]$ of a genetic variant, or \emph{allele}, in a large randomly mating population. If there are two alternative alleles then the diffusion obeys a one-dimensional stochastic differential equation ({\sc sde}{}) \begin{equation} \label{eq:WFSDE} dX_t = \gamma(X_t)dt + \sqrt{X_t(1-X_t)}dB_t, \hspace{10pt} X_0 = x_0,\: t \in [0,T]. \end{equation} The drift coefficient, $\gamma:[0,1]\to\mathbb{R}$, can encompass a variety of evolutionary forces. For example, $\gamma=\beta$ where \begin{equation} \label{eq:selection-drift} \beta(x) = \frac{1}{2}[\theta_1(1-x) - \theta_2x] + \sigma x(1-x)[x + h(1-2x)], \end{equation} describes a process with recurrent mutation between the two alleles, governed by parameters $\theta_1$ and $\theta_2$, and with (diploid) natural selection causing fitness differences between individuals with different numbers of copies of the allele, governed by parameters $\sigma$ and $h$. There is much interest among geneticists in inference from this and related diffusions \citep[e.g.][]{wil:etal:2005, bol:etal:2008, gut:etal:2009, mal:etal:2012}, and in the characteristics of the trajectories themselves \citep{sch:etal:2013, zha:etal:2013}, as discretely observed data are becoming more and more available (for example, as genetic time series data for ancient human DNA and for viral evolution within hosts). Beyond genetics, Wright-Fisher diffusions have been considered for applications in several other fields. 
For example, in finance they have been used as models for time-evolving regime probability, discount coefficients or state price \citep[e.g.][]{del:shi:2002, Gourieroux2006475}; they have been proposed in biophysics as a model for ion channel dynamics \citep{dan:etal:2010, dan:etal:2012}; they have been studied as hidden Markov signals in filtering problems \citep{Chaleyat2009, PR14, PRS14}; and in Bayesian statistics they have been proposed as models for time-evolving priors \citep{12348926, 6582111, gri:spa:2013, men:rug:2016}. Simulation from equation \eqref{eq:WFSDE} is highly nontrivial because there is no known closed form expression for the transition function of the diffusion, even in the simple case $\gamma(x) \equiv 0$. In the absence of a method of exact simulation, it is necessary to turn to approximate alternatives such as an Euler-Maruyama scheme. Standard Euler-type methods fail here because simulated paths can easily leave the state space $[0,1]$, and moreover standard assumptions for weak and strong convergence typically require that the diffusion coefficient be Lipschitz continuous \citep[see][]{klo:pla:1999}. Consequently, a number of specialized time-discretization methods have been developed for the Wright-Fisher diffusion with various drifts; when the drift is of the form $\beta(x)$, see \citet{sch:1996} for $\theta_1 = \theta_2 = \sigma = 0$; \citet{dan:etal:2012} for $\sigma = 0$, $\theta_1, \theta_2 > 0$; \citet{sch:etal:2013} for $\theta_1 = \theta_2 = 0$, $h = 1/2$; and \citet{neu:szp:2014} for $\sigma = 0$, $\theta_1, \theta_2 \geq 1$. Other approaches include truncating a spectral expansion of the transition function \citep{luk:etal:2011, son:ste:2012, ste:etal:2013}, or numerical solutions of the Kolmogorov equations \citep{wil:etal:2005, bol:etal:2008, gut:etal:2009}. The error introduced by these methods can be difficult to quantify and must often be tested empirically.
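The failure mode described above is easy to exhibit. The sketch below (an illustration only, not a recommended scheme; all function names and parameter choices are ours) applies a naive Euler-Maruyama discretization to the neutral Wright-Fisher SDE and counts how many paths started near the boundary leave $[0,1]$:

```python
import math
import random

def euler_path(x0=0.02, theta1=0.5, theta2=0.5, dt=0.01, T=1.0, seed=0):
    """One naive Euler-Maruyama path of the neutral Wright-Fisher SDE,
    dX = alpha(X) dt + sqrt(X(1-X)) dB with alpha(x) as in the text.
    Returns the final value and whether the path ever left [0,1]; the
    sqrt argument is clamped at 0 so the scheme does not simply crash
    on the first excursion."""
    rng = random.Random(seed)
    x = x0
    for _ in range(int(T / dt)):
        drift = 0.5 * (theta1 * (1.0 - x) - theta2 * x)
        diff = math.sqrt(max(x * (1.0 - x), 0.0))
        x += drift * dt + diff * rng.gauss(0.0, math.sqrt(dt))
        if not 0.0 <= x <= 1.0:
            return x, True
    return x, False

# Starting near the boundary, a large fraction of Euler paths exit [0,1]:
n_escape = sum(euler_path(seed=s)[1] for s in range(200))
print(n_escape, "of 200 paths left [0,1]")
```

Ad hoc fixes (clamping, reflecting) change the law of the process in ways that are hard to quantify, which is exactly the problem the exact method avoids.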
In this paper we develop a novel and \emph{exact} method for simulating the Wright-Fisher diffusion with \emph{general} drift, as well as the corresponding bridges. By ``exact'' we mean that samples from the finite-dimensional distributions of the target diffusion can be recovered (up to the precision of a computer) without any approximation error. We build up our algorithm in stages, addressing how to simulate exactly from each of the following: \begin{enumerate} \item \label{item:neutral}The neutral Wright-Fisher diffusion; that is, with drift \begin{equation} \label{eq:neutral-drift} \alpha(x) = \frac{1}{2}[\theta_1(1-x) - \theta_2x], \end{equation} where $\theta_1,\theta_2 > 0$ (\sref{sec:WF}). \item \label{item:bridge} The neutral Wright-Fisher bridge (\sref{sec:WFbridge}); informally, this is the process \[ (X_t)_{t\in[0,T]}\mid X_T = y. \] \item \label{item:nonneutral}The Wright-Fisher diffusion and its bridges with a very general class of drift functions (defined later), including drift $\beta(x)$ of the form \eqref{eq:selection-drift} when $\theta_1,\theta_2 > 0$ (\sref{sec:nonneutralWF}). \end{enumerate} To achieve step \ref{item:neutral} the key approach is to exploit an eigenfunction expansion representation of the transition function \citep[see][for review]{gri:spa:2010}. The expansion admits a probabilistic interpretation and therefore lends itself to simulation techniques, but these techniques are not straightforward to implement because the distributions involved are known only in infinite series form. Despite this hurdle, here we show that it is possible to perform such simulation without error. The technique is very general, and so we develop this section not just for the Wright-Fisher diffusion but for the \emph{Fleming-Viot process}, its infinite-dimensional generalization. To achieve step \ref{item:bridge} we obtain a new probabilistic description of the transition function of a neutral Wright-Fisher \emph{bridge}.
This is again complicated by the appearance of distributions known only in infinite series form, but from which (we show) realizations can still be obtained by evaluating only a finite number of terms in the series. Finally, we generalize these techniques to nonneutral processes in step \ref{item:nonneutral}. The eigenfunction expansion for a nonneutral process \citep{bar:etal:2000} is probabilistically intractable, so we take a different approach: we use the simulated neutral processes as candidates in a rejection algorithm. This uses a retrospective approach similar to that of the ``exact algorithms'' of \citet{bes:rob:2005}, \citet{bes:etal:2006:B, bes:etal:2008}, and \citet{pol:etal:2016}, which can return exact samples from a class of diffusions using \emph{Brownian motion} as the candidate for rejection. We defer a detailed description to \sref{sec:EA}, but for now we note that a direct application of these techniques would require that the target diffusion satisfy a number of regularity conditions, the most stringent perhaps that its law be absolutely continuous with respect to Brownian motion. The Wright-Fisher diffusion \eqref{eq:WFSDE} fails in this regard, first because of its nonunit diffusion coefficient and second because of its finite boundaries. Although the first problem is easily solved via a Lamperti transformation [also known as Fisher's transformation when applied to \eqref{eq:WFSDE}], it is not clear how to deal with the second. \citet{bes:etal:2008} point out that their exact algorithm can be adapted to the case of two finite entrance boundaries, but this approach requires a further technical condition [(A3) below] which does not always hold here. When it does hold, this approach becomes arbitrarily inefficient when the diffusion is proximate to the boundary \citep{jen:arxiv}; in any case, the boundaries of the Wright-Fisher diffusion can be exit, regular reflecting, or entrance, depending on the parameters of $\beta(x)$.
But now we are armed with the ability to simulate the neutral Wright-Fisher process, which serves as a far more promising candidate than Brownian motion in a rejection algorithm; specifically, it is known that the law of a nonneutral process is absolutely continuous with respect to its neutral counterpart \citep{daw:1978, eth:kur:1993}. We develop these ideas in full in \sref{sec:nonneutralWF}. We also remark that a related approach is taken by \citet{sch:etal:2013}, who were interested in simulating nonneutral Wright-Fisher bridges in the absence of mutation. In this context one can condition each sample path to remain in $(0,1)$ (otherwise the path could be absorbed at 0 or 1 and could not terminate at a pre-specified point), rendering the boundaries inaccessible. They show that the appropriate candidate in this case is a Bessel process of dimension 4, whose boundary at $0$ is also of entrance type. However, their method is not exact in the sense given above, since the rejection probabilities are approximated via numerical integration. Furthermore, the Radon-Nikod\'ym derivative of the Wright-Fisher process with respect to the Bessel(4) process is not bounded (another approach developed in \citet{jen:arxiv} for a single entrance boundary suffers a similar limitation). In any case a direct comparison with our method is not possible since here we tackle $\theta_1, \theta_2 > 0$ rather than $\theta_1 = \theta_2 = 0$. The remainder of the paper is structured as follows. In Section \ref{sec:neutralperformance} we discuss practical considerations of the algorithm; \sref{sec:nonneutralbridge} fills in one last gap by showing how to construct a nonneutral Wright-Fisher bridge; Section \ref{sec:discussion} discusses extensions of the algorithm; and \sref{sec:proofs} contains the proofs of our results. 
\section{Simulating the neutral Wright-Fisher process} \label{sec:WF} In this section we demonstrate how exact simulation from the neutral Wright-Fisher diffusion can be achieved. To aid the exposition we first focus on a one-dimensional process, and then later extend this idea to the Fleming-Viot process. \subsection{A transition density expansion in one dimension} Consider the diffusion satisfying \eqref{eq:WFSDE} with drift \eqref{eq:neutral-drift}. Denote its law by $\bbW\bbF_{\alpha,x}$ and its transition density by $f(x,\cdotp;t)$. Throughout this paper we assume $\theta_1,\theta_2 > 0$; then $f(x,\cdotp;t)$ is a probability density. We will exploit the following probabilistic representation of the transition function's eigenfunction expansion \citep{gri:1979:AAP11:310, tav:1984, eth:gri:1993, gri:spa:2010}: \begin{equation} \label{eq:transition-dual} f(x,y;t) = \sum_{m=0}^\infty q_m^\theta(t)\sum_{l=0}^m {\mathcal B}_{m,x}(l){\mathcal D}_{\theta_1 + l, \theta_2 + m-l}(y), \end{equation} where \[ {\mathcal B}_{m,x}(l) = \binom{m}{l}x^l(1-x)^{m-l}, \qquad l=0,1,\ldots, m, \] is the probability mass function ({\sc pmf}{}) of a binomial random variable, \[ {\mathcal D}_{\theta_1, \theta_2}(y) = \frac{1}{B(\theta_1,\theta_2)} y^{\theta_1 - 1}(1-y)^{\theta_2 - 1}, \] is the probability density function ({\sc pdf}{}) of a beta random variable, $\theta = \theta_1 + \theta_2$, and $\{q_m^\theta(t) : m=0,1,\ldots \}$ are the transition functions of a certain death process $A^\theta_\infty(t)$ with an entrance boundary at $\infty$. More precisely, let $\{A^\theta_n(t): t\geq 0\}$ be a pure death process on $\mathbb{N}$ such that $A^\theta_n(0) = n$ almost surely and whose only transitions are $m \mapsto m-1$ at rate $m(m+\theta-1)/2$, for each $m=1,2,\ldots, n$. Then $q_m^\theta(t) = \lim_{n\to\infty}\mathbb{P}(A^\theta_n(t) = m)$.
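As a purely illustrative check of this limit (ours, not part of the exact method), one can simulate the pure death process from a large initial value $n$ and record its state at time $t$; for large $n$ the empirical distribution approximates $\{q_m^\theta(t)\}$. The sketch below uses only the rates just defined; the initial value \verb|n=200| is an arbitrary truncation of the entrance boundary at $\infty$.

```python
import random

def death_process_at(t, theta, n=200, rng=random):
    """Simulate A_n^theta(t): a pure death chain started at n, with
    exponential holding times of rate m(m+theta-1)/2 in state m and
    jumps m -> m-1.  For large n this approximates the limit process
    A_infinity^theta(t); Algorithm 2 in the text samples the limit
    exactly instead."""
    m, time = n, 0.0
    while m > 0:
        time += rng.expovariate(m * (m + theta - 1.0) / 2.0)
        if time > t:
            break
        m -= 1
    return m
```

Because the death rates grow quadratically, the chain descends from $n=200$ to a small state almost immediately, so the truncation at finite $n$ is harmless for moderate $t$.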
The representation \eqref{eq:transition-dual} has a natural interpretation in terms of Kingman's coalescent, which is the moment dual to the Wright-Fisher diffusion. The ancestral process $A_\infty(t)$ represents the number of lineages surviving a time $t$ back in an infinite-leaf coalescent tree, when lineages are lost both by coalescence and by mutation. For our purposes, the mixture expression \eqref{eq:transition-dual} also provides an immediate method for \emph{simulating} from $f(x,\cdotp; t)$. We summarize this idea in Algorithm \ref{alg:f}, which first appeared in \citet{gri:li:1983}. Steps \ref{f2} and \ref{f3} of Algorithm \ref{alg:f} are straightforward, but Step \ref{f1} requires the {\sc pmf}{} $\{q_m^\theta(t) : m=0,1,\ldots \}$, which is not available in closed form. \citet{gri:li:1983} used a numerical approximation, but in the following subsection we show it is in fact possible to simulate from this distribution without error. \begin{algorithm}[t] \DontPrintSemicolon Simulate $A^\theta_\infty(t)$. \nllabel{f1}\; Given $A^\theta_\infty(t) = m$, simulate $L \sim \text{Binomial}(m,x)$.\nllabel{f2}\; Given $L = l$, simulate $Y \sim \text{Beta}(\theta_1 + l, \theta_2 + m - l)$.\nllabel{f3}\; \Return{$Y$}.\; \caption{Simulating from the transition density $f(x,\cdotp; t)$ of the neutral Wright-Fisher diffusion with mutation.}\label{alg:f} \end{algorithm} \subsection{Simulating the ancestral process of Kingman's coalescent} \label{sec:ancestral} Our goal in this subsection is to obtain \emph{exact} samples from the discrete random variable with {\sc pmf}{} $\{q_m^\theta(t) : m=0,1,\ldots \}$. Were each $q_m^\theta(t)$ available in closed form, then standard inversion sampling would return exact samples from this distribution \citep[see for example][Ch.~2]{dev:1986}: for $U \sim \text{Uniform}[0,1]$, \[ \inf\left\{M \in \mathbb{N}: \sum_{m=0}^M q_m^\theta(t) > U\right\} \] is distributed according to $\{q_m^\theta(t) : m=0,1,\ldots \}$. 
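To fix ideas, the combination of Algorithm \ref{alg:f} with the inversion sampler just described can be sketched as follows. Here \verb|q_pmf| is a placeholder assumed to return each $q_m^\theta(t)$ exactly; removing that assumption is precisely what the remainder of this subsection achieves. The Poisson pmf at the end is only a checkable stand-in, not related to the coalescent.

```python
import math, random

def inversion_sample(u, pmf):
    """Standard inversion sampling: returns the smallest M with
    sum_{m<=M} pmf(m) > u."""
    m, cdf = 0, pmf(0)
    while cdf <= u:
        m += 1
        cdf += pmf(m)
    return m

def neutral_wf_draw(x, theta1, theta2, q_pmf, rng=random):
    """Algorithm 1 in miniature: death process, then binomial
    thinning, then a beta draw.  `q_pmf(m)` is assumed exact."""
    m = inversion_sample(rng.random(), q_pmf)            # Step 1
    l = sum(rng.random() < x for _ in range(m))          # Step 2: Binomial(m, x)
    return rng.betavariate(theta1 + l, theta2 + m - l)   # Step 3: Beta

# A checkable stand-in pmf for the demonstration: Poisson(1)
poisson1 = lambda m: math.exp(-1.0) / math.factorial(m)
```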
However, $q_m^\theta(t)$ is known only as an infinite series \citep{gri:1980, tav:1984}: \begin{equation} \label{eq:qm} \begin{split} q_m^\theta(t) &= \sum_{k=m}^\infty (-1)^{k-m}a_{km}^\theta e^{-k(k+\theta-1)t/2}, \text{ where}\\ a_{km}^\theta &= \frac{(\theta + 2k-1)(\theta + m)_{(k-1)}}{m!(k-m)!}. \end{split} \end{equation} Here we have used the notation $a_{(x)} := \Gamma(a+x)/\Gamma(a)$ for $a > 0$ and $x \geq -1$. Despite the apparently infinite amount of computation needed to evaluate \eqref{eq:qm}, we now show that it is nonetheless possible to return exact samples from this distribution by a variant of the \emph{alternating series method} \citep[Ch.~4]{dev:1986}, which we summarize for a discrete random variable $X$ on $\mathbb{N}$ as follows. Suppose $X$ has {\sc pmf}{} $\{p_m: m=0,1,\ldots\}$ of the form \[ p_m = \sum_{k=0}^\infty (-1)^k b_k(m), \] and such that \begin{equation} \label{eq:monotone} b_k(m) \downarrow 0 \text{ as }k \to \infty, \text{ for each }m. \end{equation} Then for each $M, K \in \mathbb{N}$, \[ T^-_K(M) := \sum_{m=0}^M \sum_{k=0}^{2K+1} (-1)^k b_k(m) \leq \sum_{m=0}^M p_m \leq \sum_{m=0}^M \sum_{k=0}^{2K} (-1)^k b_k(m) =: T^+_K(M). \] Furthermore, $T^-_K(M) \uparrow \mathbb{P}(X \leq M)$ and $T^+_K(M) \downarrow \mathbb{P}(X \leq M)$ as $K \to \infty$. Hence, for $U \sim \text{Uniform}[0,1]$ and \[ K_0(M) := \inf\left\{K \in \mathbb{N} : T^-_K(M) > U \text{ or } T^+_K(M) < U\right\}, \] the quantity $\inf\left\{M \in \mathbb{N}: T^-_{K_0(M)}(M) > U \right\}$ can be computed from finitely many terms and is exactly distributed according to \mbox{$\{p_m: m=0,1,\ldots\}$}. This approach can be applied---with some modification---to $\{q_m^\theta(t) : m=0,1,\ldots \}$ by the following proposition, which says that the required condition \eqref{eq:monotone} holds with the possible exception of the first few terms in $m$. 
For those exceptional terms, \eqref{eq:monotone} still holds beyond the first few terms in $k$, and there is an easy way to check when this condition has been reached. \begin{proposition} \label{prop:qm} Let $b^{(t,\theta)}_k(m) = a_{km}^\theta e^{-k(k+\theta-1)t/2}$, the relevant coefficient in \eqref{eq:qm}, and let \begin{equation} \label{eq:Cm} \Ca^{(t,\theta)}_m := \inf\left\{ i \geq 0: b^{(t,\theta)}_{i+m+1}(m) < b^{(t,\theta)}_{i+m}(m)\right\}. \end{equation} Then \begin{enumerate}[(i)] \item $\Ca^{(t,\theta)}_m < \infty$, for all $m$. \item \label{item:monotonic} $b^{(t,\theta)}_k(m) \downarrow 0$ as $k \to \infty$ for all $k \geq m+\Ca^{(t,\theta)}_m$, and \item $\Ca^{(t,\theta)}_m = 0$ for all $m > \Cb^{(t,\theta)}_0$, where for $\epsilon\in[0,1)$, \end{enumerate} \begin{equation} \label{eq:C} \ensuremath{\displaystyle} \Cb^{(t,\theta)}_\epsilon := \inf\left\{ k \geq \left(\frac{1}{t} - \frac{\theta + 1}{2}\right)\vee 0: (\theta + 2k + 1)e^{-\frac{(2k+\theta)t}{2}} < 1-\epsilon\right\}. \end{equation} \end{proposition} (The parameter $\epsilon$ is introduced for later use.) As a result of \propref{prop:qm}, we need only to make the following adjustment to the alternating series method: If $m \leq \Cb^{(t,\theta)}_0$ then precompute terms in $q^\theta_m(t)$ until the first time that the coefficients in \eqref{eq:qm} begin to decay; we know by \propref{prop:qm}(\ref{item:monotonic}) that this decay then continues indefinitely. To allow for the number of computed coefficients to depend on $m$ we introduce ${\bfmath{k}} = (k_0,k_1,\ldots,k_M)$ and \begin{equation} \label{eq:T} S^-_{\bfmath{k}}(M) := \sum_{m=0}^M \sum_{i=0}^{2k_m+1} (-1)^{i} b^{(t,\theta)}_{m+i}(m), \qquad S^+_{\bfmath{k}}(M) := \sum_{m=0}^M \sum_{i=0}^{2k_m} (-1)^{i} b^{(t,\theta)}_{m+i}(m). \end{equation} We summarize this procedure in Algorithm \ref{alg:q}. 
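As a numerical sanity check on \eqref{eq:qm} (not a replacement for the exact bracketing scheme just summarized), each $q_m^\theta(t)$ can be evaluated in log-space by simply truncating the alternating series well past the decay point; unlike the alternating-series method this incurs a tiny truncation error. The truncation length \verb|kmax| is an assumption of the sketch, and we require $\theta > 1$ here so that all gamma-function arguments are positive.

```python
import math

def q_m(m, t, theta, kmax=300):
    """Truncated evaluation of q_m^theta(t) from its alternating
    series.  The coefficients a_{km}^theta are computed via lgamma
    to avoid overflow, using (theta+m)_{(k-1)} =
    Gamma(theta+m+k-1)/Gamma(theta+m)."""
    total = 0.0
    for k in range(m, m + kmax):
        log_a = (math.log(theta + 2 * k - 1)
                 + math.lgamma(theta + m + k - 1) - math.lgamma(theta + m)
                 - math.lgamma(m + 1) - math.lgamma(k - m + 1))
        term = math.exp(log_a - k * (k + theta - 1.0) * t / 2.0)
        total += term if (k - m) % 2 == 0 else -term
    return total
```

For moderate $t$ the Gaussian factor $e^{-k(k+\theta-1)t/2}$ makes the truncation error negligible, and the computed values form a probability mass function to high accuracy.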
Of course, the \emph{false} condition in line \ref{q15} of Algorithm \ref{alg:q} is never met, but \propref{prop:qm} guarantees that the algorithm still halts in finite time. Further performance considerations of this algorithm are discussed in \sref{sec:neutralperformance}. Note also that computed coefficients $a^\theta_{km}$ in \eqref{eq:qm} can be stored for future calls to this algorithm. \begin{algorithm}[t] \DontPrintSemicolon Set $m \longleftarrow 0$, $k_0 \longleftarrow 0$, ${\bfmath{k}} \longleftarrow (k_0)$.\; Simulate $U \sim \text{Uniform}[0,1]$.\; \Repeat{false\nllabel{q15}}{ Set $k_m \longleftarrow \lceil \Ca^{(t,\theta)}_m/2 \rceil$ [eq.~\eqref{eq:Cm}]. \nllabel{qprecomputation}\; \While{$S_{\bfmath{k}}^-(m) < U < S_{\bfmath{k}}^+(m)$}{Set ${\bfmath{k}} \longleftarrow {\bfmath{k}} + (1,1,\ldots,1)$\nllabel{q6}}\; \uIf{$S_{\bfmath{k}}^-(m) > U$}{\Return{$m$}} \ElseIf{$S_{\bfmath{k}}^+(m) < U$}{Set ${\bfmath{k}} \longleftarrow (k_0,k_1,\ldots,k_m, 0)$.\; Set $m \longleftarrow m+1$.} } \caption{Simulating from the ancestral process $A_\infty(t)$ of Kingman's coalescent with mutation.}\label{alg:q} \end{algorithm} \subsection{A transition density expansion in higher dimensions} \label{sec:multidimensionalWF} It is worth pointing out that an interesting by-product of \propref{prop:qm} (and of Algorithm \ref{alg:q}) is the possibility of simulating exactly from the transition function of the (parent-independent) neutral Wright-Fisher diffusion \emph{in any dimension}, even in infinite dimensions. Wright-Fisher diffusions in $d$ dimensions can be seen as $d$-dimensional projections of a so-called neutral (parent-independent) Fleming-Viot measure-valued diffusion $\mu=(\mu_t:t\geq 0)$ with state space ${\cal M}_1(E)$, the set of all probability measures on a given (Polish) type space $E$, equipped with the Borel sigma-algebra induced by the weak convergence topology.
Given a total mutation parameter $\theta$ and a mutation distribution $P_0\in{\cal M}_1(E)$, the process $\mu$ is reversible with stationary distribution given by the Dirichlet process with parameter $(\theta, P_0)$, here denoted by $\Pi_{\theta, P_0}$, characterized by Dirichlet finite-dimensional distributions: $$\Pi_{\theta, P_0}\left(\bigcap_{i=1}^d\{\mu(A_i)\in dx_i\}\right)\propto \left[\prod_{i=1}^d x_i^{\theta P_0(A_i)-1} dx_i\right]\mathbb{I}_{\Delta_{(d-1)}}(x_1,\ldots,x_d)$$ for any $d$ and every measurable partition $A_1,\ldots,A_d$ of $E$, where $\Delta_{(d-1)}=\{(x_1,\ldots,x_d)\in[0,1]^d:\sum_1^dx_i=1\}.$ The transition function describing the evolution of $\mu$ admits a probabilistic series expansion as a mixture of (posterior) Dirichlet processes: \begin{multline} p(\mu, d\nu;t)= \sum_{m=0}^\infty q^\theta_m(t)\int_{E^m}\mu^{\otimes m}(d\xi_1,\ldots,d\xi_m)\Pi_{\theta+m, \frac{m}{\theta+m}\eta_m+\frac{\theta}{\theta+m}P_0}(d\nu),\\ t\geq 0, \mu,\nu\in{\cal M}_1(E), \label{eq:fvtf} \end{multline} where $\mu^{\otimes n}$ denotes the $n$-fold $\mu$-product measure and $\eta_m:=m^{-1}\sum_{i=1}^m\delta_{\xi_i}$ \citep[see][]{eth:gri:1993}. The coefficients of the series expansion are given by \emph{i.i.d.}\ samples (the $\xi$-random variables) from the starting measure, $\mu$, of random size given by the coalescent lines-of-descent counting process $A^\theta_\infty(t)$ with distribution $q_m^\theta(t)$. An algorithm for simulating from the transition function \eqref{eq:fvtf} is thus the following modification of Algorithm \ref{alg:f}. \\ \begin{algorithm}[H] \DontPrintSemicolon Simulate $A^\theta_\infty(t)$.
\nllabel{fv1}\; Given $A^\theta_\infty(t) = m$, simulate $\xi_1,\ldots,\xi_m \overset{iid}{\sim} \mu$.\nllabel{fv2}\; Given $m^{-1}\sum_{i=1}^m\delta_{\xi_i} = \eta_m$, simulate $\nu \sim \Pi_{\theta+m, \frac{m}{\theta+m}\eta_m+\frac{\theta}{\theta+m}P_0}$.\nllabel{fv3}\; \Return{$\nu$}.\; \caption{Simulating from the transition density $p(\mu, \cdotp;t)$ of the neutral Fleming-Viot process with parent-independent mutation.}\label{alg:fv} \end{algorithm} ~\\ Notice that step \ref{fv3} requires sampling a (potentially infinite-dimensional) random measure distributed according to a Dirichlet process. Techniques for exact sampling from a Dirichlet process have been available in the literature [e.g.\ \citet{pap:rob:2008} and \citet{wal:2007}] for quite some time. Hence Algorithm \ref{alg:q} provides a way of filling the only remaining gap (Step \ref{fv1} of Algorithm \ref{alg:fv}) to make the transition function \eqref{eq:fvtf} viable for exact simulation. When $E$ consists of $d$ points ($d\in\mathbb N$) the process reduces to the $(d-1)$-dimensional Wright-Fisher diffusion, thus Algorithm \ref{alg:f} is viable for exact simulation of neutral $(d-1)$-dimensional extensions of the Wright-Fisher diffusion \eqref{eq:WFSDE} with drift \eqref{eq:neutral-drift}. \section{Simulating a neutral Wright-Fisher bridge} \label{sec:WFbridge} In this section we demonstrate how exact simulation from the neutral Wright-Fisher diffusion \emph{bridges} can be achieved, via a new probabilistic description of its transition density. For the remainder of the paper we return to processes of dimension one. \subsection{A transition density expansion} \sref{sec:WF} provides a method for returning exact samples from $f(x,\cdotp;t)$ for any fixed $x \in [0,1]$ and $t > 0$.
The density of a point $y \in (0,1)$ at time $s$ in a Wright-Fisher bridge from $x$ at time $0$ to $z$ at time $t$ is given by \citep{fit:etal:1992, sch:etal:2013}: \begin{equation} \label{eq:bridge-transition} f_{z,t}(x,y; s) = \frac{f(x,y; s)f(y,z;t-s)}{f(x,z; t)}, \qquad 0 < s < t, \end{equation} with $f(\cdotp,\cdotp; \cdotp)$ as in \eqref{eq:transition-dual}. Motivated by \eqref{eq:transition-dual}, our goal is to facilitate easy simulation from $f_{z,t}(x,y; s)$ by putting it into mixture form. For the rest of this section we assume $0 < x,y,z < 1$. \begin{proposition} \label{prop:WFbridge} The transition density of a Wright-Fisher bridge has expansion \begin{equation} \label{eq:bridge-mixture} f_{z,t}(x,y; s) = \sum_{m=0}^\infty \sum_{k=0}^\infty \sum_{l=0}^m \sum_{j=0}^k \A_{m,k,l,j} {\mathcal D}_{\theta_1 + l + j, \theta_2 + m + k - l - j}(y), \end{equation} where \begin{align} \label{eq:pmfN4} \A_{m,k,l,j} &= {\mathcal B}_{m,x}(l){\mathcal D}_{\theta_1 + j, \theta_2 + k - j}(z){\mathcal D}{\mathcal M}_{\theta_1 + l, \theta_2 + m-l;k}(j)\frac{q^\theta_m(s)q^\theta_k(t-s)}{f(x,z;t)}, \end{align} for $0 \leq l \leq m$ and $0 \leq j \leq k$, and $\A_{m,k,l,j} =0$ otherwise, where \[ {\mathcal D}{\mathcal M}_{a,b;k}(j) = \binom{k}{j}\frac{B(a+j,b+k-j)}{B(a,b)} \] is the {\sc pmf}{} of a beta-binomial random variable on $\{0,1,\dots, k\}$. \end{proposition} By \propref{prop:WFbridge}, we recognize equation \eqref{eq:bridge-mixture} as a mixture of beta-distributed random variables, with mixture weights defining a {\sc pmf}{} $\{\A_{m,k,l,j}: m,k,l,j \in \mathbb{N}\}$ on $\mathbb{N}^4$. Thus, the following algorithm returns exact samples from $f_{z,t}(x,y; s)$. 
\\ \begin{algorithm}[H] \DontPrintSemicolon Simulate $(M,K,L,J) \sim \{\A_{m,k,l,j}: m,k,l,j \in \mathbb{N}\}$ [eq.~\eqref{eq:pmfN4}].\nllabel{fbridge1}\; Given $(M,K,L,J) = (m,k,l,j)$, simulate $Y \sim \text{Beta}(\theta_1 + l + j, \theta_2 + m + k - l - j)$.\nllabel{fbridge2}\; \Return{$Y$} \caption{Simulating from the transition density $f_{z,t}(x,y; s)$ of a bridge of the neutral Wright-Fisher diffusion with mutation.} \label{alg:fbridge} \end{algorithm} ~\\ Again, while step \ref{fbridge2} of Algorithm \ref{alg:fbridge} is straightforward, Step \ref{fbridge1} is complicated by the appearance of $q_m^\theta(s)q^\theta_k(t-s)/f(x,z;t)$ in \eqref{eq:pmfN4}; each term in this ratio is known only as an infinite series, as we have seen. We address this in the following subsection. \subsection{Simulating from the discrete random variable on $\mathbb{N}^4$ with {\sc pmf}{}\\ \mbox{$\{\A_{m,k,l,j}: m,k,l,j \in \mathbb{N}\}$}} We will apply the alternating series approach of \sref{sec:ancestral} separately to each of $q_m^\theta(s)$, $q^\theta_k(t-s)$, and $f(x,z;t)$, and then combine these to obtain monotonically converging upper and lower bounds on \eqref{eq:pmfN4}. The first two terms have been dealt with already in \sref{sec:ancestral}, so it remains to take a similar approach for $f(x,z;t)$. Note that this problem---the pointwise evaluation of $f(x,z;t)$---is separate from (and in this case, harder than) actually simulating from $f(x,\cdotp;t)$. To employ the alternating series approach, use \eqref{eq:transition-dual} and \eqref{eq:qm} to write \begin{align} f(x,z;t) &= \sum_{m=0}^\infty \sum_{k=m}^\infty (-1)^{k-m}\cf{k}{m}, \label{eq:transition2}\\ \text{where} \quad \cf{k}{m} &= a_{km}^\theta e^{-k(k+\theta-1) t/2} \mathbb{E}[{\mathcal D}_{\theta_1 + L_m, \theta_2 + m - L_m}(z)], \notag \end{align} and $L_m \sim \text{Binomial}(m,x)$. 
Our strategy is to group the triangular array of coefficients $(\cf{k}{m})_{k \geq m}$ in such a way that, with the exception of the first few terms, they exhibit a property analogous to \eqref{eq:monotone}. We will compare terms in the sequence $(d_i)_{i=0,1,\ldots}$ of antidiagonals, defined by \begin{align} \label{eq:antidiagonal} d_{2m} &= \sum_{j=0}^{m} \cf{m+j}{m-j}, & d_{2m+1} &= \sum_{j=0}^{m} \cf{m+1+j}{m-j}, & m &= 0,1,\ldots, \end{align} (see \fref{fig:dm}), where we have dropped the superscript for convenience. Notice that the coefficients within each entry of this sequence are all multiplied by the same sign in \eqref{eq:transition2}, so that $f(x,z;t) = d_0 - d_1 + d_2 - d_3 + \ldots$ will be our alternating sequence. The main complication in this approach is to find \emph{explicitly} the first $i$ for which the coefficients $(d_i)$ begin to decrease in magnitude. To this end we define \begin{equation} \label{eq:antidiagonal-decaying} \D^{(t,\theta)} := \inf\left\{ m \geq 0: 2j \geq \Ca^{(t,\theta)}_{m-j} \text{ for all }j=0,\ldots,m\right\}, \end{equation} which simply provides the first entry in $(d_{2m})$ for which every member of the corresponding antidiagonal is decaying as a function of its first index. We now need the following lemma. \begin{lemma} \label{lem:beta} Let $L_m \sim \text{Binomial}(m,x)$ and \begin{equation} \label{eq:K} \K^{(x,z)} := \frac{x}{z} + \frac{1-x}{1-z}. \end{equation} Then for all $m \in \mathbb{N}$, \begin{equation} \label{eq:beta} \mathbb{E}[{\mathcal D}_{\theta_1 + L_{m+1}, \theta_2 + m+1 - L_{m+1}}(z)] < \K^{(x,z)}\mathbb{E}[{\mathcal D}_{\theta_1 + L_{m}, \theta_2 + m - L_{m}}(z)]. \end{equation} \end{lemma} Using \lemmaref{lem:beta} we are in a position to obtain the required analogue of property \eqref{eq:monotone}.
\begin{proposition} \label{prop:lattice} Let $\Cb^{(t,\theta)}_\epsilon$, $(d_i)_{i=0,1,\ldots}$, $\D^{(t,\theta)}$, and $\K^{(x,z)}$ be defined as in \eqref{eq:C}, \eqref{eq:antidiagonal}, \eqref{eq:antidiagonal-decaying}, and \eqref{eq:K}, respectively, and $\epsilon \in (0,1)$. Then \[ d_{2m+2} < d_{2m+1} < d_{2m} \] for all $m \geq \D^{(t,\theta)} \vee \Cb^{(t,\theta)}_\epsilon \vee 2\K^{(x,z)}/\epsilon$. \end{proposition} We can now combine Propositions \ref{prop:qm} and \ref{prop:lattice} in order to construct a sequence amenable to simulation from $\{\A_{m,k,l,j}: m,k,l,j \in \mathbb{N}\}$ [eq.~\eqref{eq:pmfN4}], via the alternating series method. \begin{proposition} \label{prop:bes:etal:2008:1} Define \begin{equation} \label{eq:E} \E^{(s,t,\theta,x,z)}_{m,k,l,j} := \Ca^{(s,\theta)}_m \vee \Ca^{(t-s,\theta)}_k \vee \D^{(t,\theta)} \vee \Cb^{(t,\theta)}_\epsilon \vee 2\K^{(x,z)}/\epsilon, \end{equation} and \begin{multline*} \e_{m,k,l,j}(v) := {\mathcal B}_{m,x}(l){\mathcal D}_{\theta_1 + j, \theta_2 + k - j}(z){\mathcal D}{\mathcal M}_{\theta_1 + l, \theta_2 + m-l;k}(j)\\%{\mathcal A}_{m,k,l,j}^{(x,z,\bfmath{\gq})} \times \left. {\left[\ensuremath{\displaystyle}\sum_{i=0}^{v} (-1)^i b_{m+i}^{(s,\theta)}(m)\right]\left[\ensuremath{\displaystyle}\sum_{i=0}^{v} (-1)^i b_{k+i}^{(t-s,\theta)}(k) \right]}\middle/ {\ensuremath{\displaystyle}\sum_{i=0}^{v+1} (-1)^i d_{i}} \right.. \end{multline*} Then for $2v \geq \E^{(s,t,\theta,x,z)}_{m,k,l,j}$, \[ \e_{m,k,l,j}(2v+1) < \e_{m,k,l,j}(2v+3) < \A_{m,k,l,j} < \e_{m,k,l,j}(2v+2) < \e_{m,k,l,j}(2v). \] \end{proposition} In other words, for sufficiently large $v$ the odd and even terms in the sequence $(\e_{m,k,l,j}(v))_{v=0}^\infty$ provide monotonically converging lower and upper bounds on $\A_{m,k,l,j}$, respectively, and ``sufficiently large'' can be verified explicitly. The above results are summarized in Algorithm \ref{alg:pmfN4}. 
To explore $\mathbb{N}^4$ we introduce for convenience a bijective pairing function $\Sigma : \mathbb{N} \to \mathbb{N}^4$, such that $\Sigma(n) = (m,k,l,j)$. As in Algorithm \ref{alg:q}, we also introduce ${\bfmath{v}} = (v_0,v_1,\ldots,v_N)$ and \begin{equation*} \label{eq:V} V^-_{\bfmath{v}}(N) := \sum_{n=0}^N \e_{\Sigma(n)}(2v_n+1), \qquad V^+_{\bfmath{v}}(N) := \sum_{n=0}^N \e_{\Sigma(n)}(2v_n). \end{equation*} \begin{algorithm}[H] \DontPrintSemicolon Set $n \longleftarrow 0$, $v_0 \longleftarrow 0$, ${\bfmath{v}} \longleftarrow (v_0)$.\; Simulate $U \sim \text{Uniform}[0,1]$.\; \Repeat{false}{ Set $v_n \longleftarrow \lceil \E_{\Sigma(n)}/2\rceil$ [eq.~\eqref{eq:E}]. \nllabel{pmfN4precomputation}\; \While{$V_{\bfmath{v}}^-(n) < U < V_{\bfmath{v}}^+(n)$}{Set ${\bfmath{v}} \longleftarrow {\bfmath{v}} + (1,1,\ldots,1)$}\; \uIf{$V_{\bfmath{v}}^-(n) > U$}{\Return{$\Sigma(n)$}} \ElseIf{$V_{\bfmath{v}}^+(n) < U$}{Set ${\bfmath{v}} \longleftarrow (v_0,v_1,\ldots,v_n, 0)$.\; Set $n \longleftarrow n+1$.} } \caption{Simulating from the discrete random variable on $\mathbb{N}^4$ with {\sc pmf}{} \mbox{$\{\A_{m,k,l,j}: m,k,l,j \in \mathbb{N}\}$}.}\label{alg:pmfN4} \end{algorithm} \section{Performance of algorithms for neutral processes} \label{sec:neutralperformance} There are several easy improvements to the underlying Algorithm \ref{alg:q}. For example, we are free to vary the order of inspection of each $m$ by any finite permutation of $\mathbb{N}$, and we found a dramatic improvement by radiating outwards from (an approximation of) the mode of $\{q_m^\theta(t): m=0,1,\ldots\}$ rather than starting at $m=0$ and working upwards. Our approximation used $\mu^{(t,\theta)}$ in \thmref{thm:gri:1984} below. It may also be possible to improve on Algorithm \ref{alg:q} by allowing different $q_m^\theta(t)$ to be refined at different rates, i.e.\ by using a vector other than $(1,1,\dots,1)$ in Step \ref{q6}; we do not explore that here.
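An implementation note: Algorithm \ref{alg:pmfN4} requires a concrete choice of the bijection $\Sigma:\mathbb{N}\to\mathbb{N}^4$, which the text leaves open. One standard construction (our choice for illustration, not the paper's) applies Cantor's pairing function and its inverse twice:

```python
import math

def cantor_unpair(n):
    """Inverse of Cantor's pairing pi(a, b) = (a+b)(a+b+1)/2 + b."""
    w = (math.isqrt(8 * n + 1) - 1) // 2   # w = a + b
    b = n - w * (w + 1) // 2
    return w - b, b

def sigma(n):
    """A concrete bijection Sigma: N -> N^4: unpair n into two
    indices, then unpair each of those."""
    p, q = cantor_unpair(n)
    m, k = cantor_unpair(p)
    l, j = cantor_unpair(q)
    return m, k, l, j
```

Since Cantor's pairing is a bijection $\mathbb{N}^2\to\mathbb{N}$, the composition is a bijection $\mathbb{N}\to\mathbb{N}^4$, so every index $(m,k,l,j)$ is eventually inspected.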
A crucial quantity governing the efficiency of our algorithms is the number of coefficients $b^{(t,\theta)}_k(m)$ we must compute in Algorithm \ref{alg:q}; these in turn depend on the quantities $\Cb^{(t,\theta)}_0$ and $\Ca^{(t,\theta)}_m$ (recall \propref{prop:qm}). These quantities are in general manageably small, suggesting that the number of coefficients that need to be computed in Algorithms \ref{alg:q} and \ref{alg:pmfN4} (line \ref{qprecomputation} in each) should not be excessive. One exception to this observation is when $t$ is very small, for which the number of relevant coefficients grows quickly. The following result makes precise the complexity of Algorithm \ref{alg:q}. \begin{proposition} As $t\to 0$, \begin{enumerate}[(i)] \item $\Ca_m^{(t,\theta)} = O(t^{-1})$. \item $\max_m \Ca_m^{(t,\theta)} = O(t^{-1}\log (t^{-1}))$. \item $\Cb_0^{(t,\theta)} = o(t^{-(1+\kappa)})$, for any $\kappa > 0$. \end{enumerate} Let $N^{(t,\theta)}$ denote the total number of coefficients that must be computed in an implementation of Algorithm \ref{alg:q}. Then $\mathbb{E}[N^{(t,\theta)}] < \infty$, and in particular \begin{enumerate}[(i)] \item[(iv)] $\mathbb{E}[N^{(t,\theta)}] = o(t^{-(1+\kappa)})$, for any $\kappa > 0$. \end{enumerate} \label{prop:neutralcomplexity} \end{proposition} The growth in Algorithm \ref{alg:q} as $t\to 0$ is closely related to the well known numerical instability of \eqref{eq:qm} as $t \to 0$ \citep{gri:1984}, which afflicts any method based on the expansion \eqref{eq:transition-dual} \citep[or an expansion using a basis of orthogonal polynomials, which is equivalent to \eqref{eq:qm};][]{gri:spa:2010}. In any practical implementation of our algorithm we are obliged to use an approximation should the separation between two points be very small, and we found Algorithm \ref{alg:q} to fail for $t < 0.05$ or so. One option is to revert to the Euler-Maruyama approximation for small $t$. 
Alternatively, there has been much previous work in coalescent theory on approximating the distribution \eqref{eq:qm} \citep[e.g.][]{gri:1984, gri:2006, jew:ros:2014}; by inserting those approximations into Algorithm \ref{alg:q} they readily define new algorithms for approximate simulation of the dual \emph{diffusion}. We consider the following result, due to \citet[Theorem 4]{gri:1984}. \begin{theorem}[\citet{gri:1984}] \label{thm:gri:1984} Suppose $\beta = \frac{1}{2}(\theta - 1)t$, and let \begin{align*} \mu^{(t,\theta)} &= \frac{2\eta}{t}, & (\sigma^{(t,\theta)})^2 &= \begin{cases} \frac{2\eta}{t}(\eta + \beta)^2\left(1 + \frac{\eta}{\eta+\beta} - 2\eta\right)\beta^{-2}, & \beta \neq 0,\\ \frac{2}{3t}, & \beta = 0, \end{cases} & \text{where}\\ && \eta &= \begin{cases} \frac{\beta}{e^\beta - 1}, & \beta \neq 0,\\ 1, & \beta = 0. \end{cases} \end{align*} Then $\mathbb{P}\left((A_\infty^\theta(t) - \mu^{(t,\theta)})(\sigma^{(t,\theta)})^{-1} \leq x\right) \to \Phi(x)$ as $t\to 0$, where $\Phi(\cdot)$ is the cumulative distribution function ({\sc cdf}{}) of a standard normal random variable. \end{theorem} [The statement in \citet{gri:1984} is missing the factor $\beta^{-2}$.] To apply this approximation in practice when $t$ is small, we replace line \ref{f1} in Algorithm \ref{alg:f} with \begin{itemize} \item[{\scriptsize \bf 1'}] Simulate $A_\infty(t) \sim N(\mu^{(t,\theta)},(\sigma^{(t,\theta)})^2)$ and round it to the nearest nonnegative integer. \label{page:1'} \end{itemize} \subsection{Comparison with Euler-Maruyama simulation} To check the correctness of our algorithm and to compare its performance to Euler-Maruyama simulation, we performed the following experiment. We fixed $\theta_1 = \theta_2 = 1/2$, explored various fixed values of $x$ and $t$, and for each parameter combination drew 10,000 samples from $f(x,\cdot; t)$ using Algorithm \ref{alg:f}. 
To quantify whether the resulting sample was consistent with the true distribution $f(x,\cdot; t)$, we performed a one-sample Kolmogorov-Smirnov (K-S) test. We then performed the same experiment instead using Euler simulation with various stepsizes $\delta$. For this purpose we used the Balanced Implicit Split Step (BISS) algorithm of \citet{dan:etal:2012}, an Euler-type algorithm with some advanced modifications that guarantee each sample path stays within $[0,1]$, and which is state-of-the-art for $\theta_1,\theta_2 > 0$. (Their algorithm has an additional tuning parameter $\epsilon$; we followed their recommendation and set $\epsilon = \delta/4$.) To obtain an accurate expression for the {\sc cdf}{} of the true distribution for use within the K-S statistic, we exploited the fact that, in the special case $\theta_1 = \theta_2 = 1/2$, a Lamperti transformation of \eqref{eq:WFSDE} (or conversion to Stratonovich form) leads to \[ X_t = \frac{1}{2}(1-\cos B_t), \] where $(B_t)_{t\geq 0}$ is a Brownian motion commenced from $\arccos(1-2x)$ and reflecting at 0 and $\pi$. A rapidly converging series expression for the {\sc cdf}{} of $B_t$ (and hence of $X_t$) is available \citep[eq.~(26)]{lin:2005}; the first 1000 terms in the series suggested convergence to machine precision and were used as the reference {\sc cdf}{}. \begin{table}[p] \caption{\label{tab:neutralWFresults}Comparison of exact simulation methods for the neutral Wright-Fisher diffusion. {\bf BISS}: Algorithm of \citet{dan:etal:2012} with stepsize $\delta$. {\bf Exact}: Algorithm \ref{alg:f}. {\bf Exact'}: Algorithm \ref{alg:f} with the approximation of \citet{gri:1984} described on p\pageref{page:1'}. Tabulated are the computing time needed to simulate 10,000 sample paths and the $p$-value of a K-S test applied to the resulting collection of endpoints. Paths are initiated at $X_0 = x$ and run for length $t$. 
Mutation parameters are $\theta_1 = \theta_2 = 1/2$.} \vspace{-10pt} \begin{center} {\scriptsize \begin{tabular}{cc} \multicolumn{2}{c}{$t=0.01$}\\\hline \begin{tabular}{c c r@{.}l c } \multicolumn{5}{c}{$x = 0.01$}\\\hline Method & $\delta$ & \multicolumn{2}{c}{Time (s)} & $p$-value\\ \hline \multirow{5}{*}{BISS} & $10^{-3}$ & 0&03 & $<10^{-100}$ \\ & $10^{-4}$ & 0&05 & $<10^{-100}$ \\ & $10^{-5}$ & 0&30 & $<10^{-100}$ \\ & $10^{-6}$ & 2&83 & $<10^{-100}$ \\ & $10^{-7}$ & 27&61 & $<10^{-100}$ \\ Exact' & -- & 0&19 & 0.9 \end{tabular} & \begin{tabular}{c c r@{.}l c } \multicolumn{5}{c}{$x = 0.5$}\\\hline Method & $\delta$ & \multicolumn{2}{c}{Time (s)} & $p$-value\\\hline \multirow{5}{*}{BISS} & $10^{-3}$ & 0&02 & $8.0\times 10^{-3}$ \\ & $10^{-4}$ & 0&05 & 0.36 \\ & $10^{-5}$ & 0&30 & 0.22 \\ & $10^{-6}$ & 2&78 & 0.30 \\ & $10^{-7}$ & 27&44 & 0.78 \\ Exact' & -- & 0&17 & 0.1 \end{tabular} \\ \hline\\ \multicolumn{2}{c}{$t=0.05$}\\\hline \begin{tabular}{c c r@{.}l c } \multicolumn{5}{c}{$x = 0.01$}\\\hline Method & $\delta$ & \multicolumn{2}{c}{Time (s)} & $p$-value\\\hline \multirow{5}{*}{BISS} & $10^{-3}$ & 0&04 & $<10^{-100}$ \\ & $10^{-4}$ & 0&16 & $<10^{-100}$ \\ & $10^{-5}$ & 1&40 & $<10^{-100}$ \\ & $10^{-6}$ & 13&77 & $<10^{-100}$ \\ & $10^{-7}$ & 138&08 & $<10^{-100}$ \\ Exact & -- & 0&35 & 0.09\\ Exact' & -- & 0&17 & 0.3 \end{tabular} & \begin{tabular}{c c r@{.}l c } \multicolumn{5}{c}{$x = 0.5$}\\\hline Method & $\delta$ & \multicolumn{2}{c}{Time (s)} & $p$-value\\\hline \multirow{5}{*}{BISS} & $10^{-3}$ & 0&04 & $9.0\times10^{-4}$ \\ & $10^{-4}$ & 0&16 & 0.35 \\ & $10^{-5}$ & 1&39 & 0.53 \\ & $10^{-6}$ & 13&66 & 0.02 \\ & $10^{-7}$ & 137&06 & 0.72 \\ Exact & -- & 0&34 & 0.64\\ Exact' & -- & 0&17 & 0.9 \end{tabular} \\ \hline\\ \multicolumn{2}{c}{$t=0.5$}\\\hline \begin{tabular}{c c r@{.}l c } \multicolumn{5}{c}{$x = 0.01$}\\\hline Method & $\delta$ & \multicolumn{2}{c}{Time (s)} & $p$-value\\ \hline \multirow{5}{*}{BISS} & $10^{-3}$ & 0&16 & $<10^{-100}$
\\ & $10^{-4}$ & 1&43 & $<10^{-100}$ \\ & $10^{-5}$ & 14&07 & $<10^{-100}$ \\ & $10^{-6}$ & 137&82 & $<10^{-100}$ \\ & $10^{-7}$ & 1378&33 & $<10^{-100}$ \\ Exact & -- & 0&19 & 0.16 \\ Exact' & -- & 0&17 & 0.2 \end{tabular} & \begin{tabular}{c c r@{.}l c } \multicolumn{5}{c}{$x = 0.5$}\\\hline Method & $\delta$ & \multicolumn{2}{c}{Time (s)} & $p$-value\\\hline \multirow{5}{*}{BISS} & $10^{-3}$ & 0&16 & $1.2 \times 10^{-28}$ \\ & $10^{-4}$ & 1&43 & $9.0 \times 10^{-18}$ \\ & $10^{-5}$ & 14&08 & $9.4\times 10^{-17}$ \\ & $10^{-6}$ & 138&23 & $6.3\times 10^{-13}$ \\ & $10^{-7}$ & 1368&33 & $8.1\times 10^{-13}$ \\ Exact & -- & 0&19 & 0.81 \\ Exact' & -- & 0&17 & 0.60 \end{tabular} \\ \hline\\ \multicolumn{2}{c}{$t=5$}\\\hline \begin{tabular}{c c r@{.}l c } \multicolumn{5}{c}{$x = 0.01$}\\\hline Method & $\delta$ & \multicolumn{2}{c}{Time (s)} & $p$-value\\\hline \multirow{5}{*}{BISS} & $10^{-2}$ & 0&17 & $<10^{-100}$ \\ & $10^{-3}$ & 1&43 & $<10^{-100}$ \\ & $10^{-4}$ & 14&01 & $<10^{-100}$ \\ & $10^{-5}$ & 137&95 & $<10^{-100}$ \\ & $10^{-6}$ & 1375&75 & $<10^{-100}$ \\ Exact & -- & 0&18 & 0.58 \\ Exact' & -- & 0&17 & $1.2 \times 10^{-47}$ \end{tabular} & \begin{tabular}{c c r@{.}l c } \multicolumn{5}{c}{$x = 0.5$}\\\hline Method & $\delta$ & \multicolumn{2}{c}{Time (s)} & $p$-value\\\hline \multirow{5}{*}{BISS} & $10^{-2}$ & 0&16 & $<10^{-100}$ \\ & $10^{-3}$ & 1&43 & $<10^{-100}$ \\ & $10^{-4}$ & 14&04 & $<10^{-100}$ \\ & $10^{-5}$ & 138&09 & $4.7\times 10^{-100}$ \\ & $10^{-6}$ & 1378&54 & $1.6\times 10^{-100}$ \\ Exact & -- & 0&18 & 0.88 \\ Exact' & -- & 0&17 & 0.49 \end{tabular} \\ \hline \end{tabular}} \end{center} \end{table} As is evident from \tref{tab:neutralWFresults}, exact simulation strongly outperforms the BISS algorithm over almost all the parameter combinations investigated.
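As an aside on the reference distribution used in these comparisons: the {\sc cdf}{} of Brownian motion reflecting at $0$ and $\pi$ can be computed from the standard cosine (Neumann) eigenfunction expansion of its transition density. We give this form below as an illustrative alternative to the series of \citet{lin:2005} used in our experiments; the truncation length \verb|nterms| is a tunable assumption.

```python
import math

def reflected_bm_cdf(y, x, t, nterms=1000):
    """CDF at y of Brownian motion started at x in [0, pi] and
    reflected at both 0 and pi, via the Neumann eigenfunction
    expansion p(x, y; t) = (1/pi)(1 + 2 sum_n e^{-n^2 t/2}
    cos(nx) cos(ny)), integrated term by term in y."""
    s = y / math.pi
    for n in range(1, nterms + 1):
        s += (2.0 / math.pi) * math.exp(-0.5 * n * n * t) \
             * math.cos(n * x) * math.sin(n * y) / n
    return s
```

The terms decay like $e^{-n^2 t/2}$, so for the timescales in \tref{tab:neutralWFresults} only a handful of terms contribute beyond machine precision.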
Over a timescale of $t \gtrapprox 0.1$, errors in the Euler-type method accumulate sufficiently fast that the resulting samples are easy to reject in a K-S test. Even reducing the stepsize so that its running time is several orders of magnitude greater than that of the exact method provides only a modest improvement to the quality of the sample. Note also that the performance of the BISS algorithm deteriorates for paths started close to the boundary (compare $x=0.01$ with $x = 0.5$), whereas the exact method is indifferent to starting position. One reason $p$-values are persistently small for the BISS algorithm is that sample paths are constrained by construction to stay inside $[\delta/4, 1- \delta/4]$, yet the narrow region close to the boundaries is precisely where we expect to find much of the probability mass for many choices of parameters. An example of how this affects the resulting transition density is given in \fref{fig:Euler-example}. By contrast, in no application of the K-S test to samples from our method would we have rejected at level $0.05$ the hypothesis that they were drawn from the true distribution. Only over short timescales and away from the boundaries (\tref{tab:neutralWFresults}, $t \leq 0.05$ and $x=0.5$) do the two methods seem comparable. At $t=0.05$, the same computational investment as in the exact method but applied to the BISS method buys a stepsize of about $\delta = 10^{-4}$, which is adequate in the interior of the state space ($x=0.5$) but not near the boundaries ($x = 0.01$). \begin{figure}[t] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.4\textwidth]{Euler_example} && \includegraphics[width=0.4\textwidth]{Euler_exampleB}\\ (a) && (b) \end{tabular} \end{center} \caption{\label{fig:Euler-example}Histogram of 10,000 simulated points using (a) Algorithm \ref{alg:f} and (b) the Euler-type BISS algorithm of \citet{dan:etal:2012}. Parameters are $\theta_1 = \theta_2 = 1/2$, $x = 0.5$, $t = 0.5$, $\delta = 10^{-6}$.
For visual clarity, plotted are samples of the driving reflecting Brownian motion $B_t = \arccos(1-2X_t)$ rather than $X_t$ (since the density function of $B_t$---shown as a solid line---is bounded and can be calculated; see main text).} \end{figure} When $t\geq 0.05$, it is also possible to compare the exact method both with and without the approximation of \citet{gri:1984} (see p\pageref{page:1'}); the two versions exhibit similar running times and generate \emph{bona fide} samples from the true distribution (according to a K-S test) for moderate $t$. However, the approximate version deteriorates (as reported by the K-S $p$-value) for large $t$ (see \tref{tab:neutralWFresults}, $t = 5$), away from its asymptotic regime. Thus a suitable rule of thumb is to use the exact algorithm for $t \geq 0.05$ and its approximate version for $t < 0.05$, with the two methods in good agreement in their region of overlap in $t$. We investigated several other choices of $x$ and $t$ with predictable results (not shown); for example, performance of the exact method seems unrelated to starting position $x$, while the BISS method deteriorates even further as $x \to 0$. \section{Simulating the nonneutral Wright-Fisher process} \label{sec:nonneutralWF} In this section we develop an exact rejection algorithm for simulating from the Wright-Fisher diffusion \eqref{eq:WFSDE} with general drift. We make use of retrospective sampling techniques for the exact simulation of diffusions, which we first summarize. \subsection{Overview of the exact algorithm} \label{sec:EA} Here we give a brief overview of the exact algorithm (EA) of \citet{bes:rob:2005}, \citet{bes:etal:2006:B, bes:etal:2008}, and \citet{pol:etal:2016}, and we refer the reader to those papers for further details.
The EA returns a recipe for simulating the sample paths of a diffusion $X = (X_t)_{t\in[0,T]}$ satisfying the {\sc sde}{} \begin{equation} \label{eq:EA-SDE} dX_t = \mu(X_t)dt + dB_t, \qquad X_0 = x_0, t \in [0,T], \end{equation} with $\mu$ assumed to satisfy the requirements for \eqref{eq:EA-SDE} to admit a unique, weak solution. Denote the law of such a process, our target, by $\mathbb{Q}_{x_0}$. The idea of the EA is to use Brownian motion started at $x_0$, whose law will be denoted $\mathbb{W}_{x_0}$, as the candidate process in a rejection sampling algorithm. The goal is then to write down the rejection probability, which is possible under the following assumptions: \begin{itemize} \item[(A1)] The Radon-Nikod\'ym derivative of $\mathbb{Q}_{x_0}$ with respect to $\mathbb{W}_{x_0}$ exists and is given by Girsanov's formula, \begin{equation} \label{eq:Girsanov} \frac{d\mathbb{Q}_{x_0}}{d\mathbb{W}_{x_0}}(X) = \exp\left\{\int_0^T \mu(X_t) dX_t - \frac{1}{2}\int_0^T \mu^2(X_t)dt\right\}, \end{equation} \item[(A2)] $\mu \in C^1$, \item[(A3)] $\phi(x) := \frac{1}{2}[\mu^2(x) + \mu'(x)]$ is bounded below, by $\phi^-$ say. \item[(A4)] $A(x) := \int_0^x \mu(z) dz$ is bounded above, by $A^+$ say. \end{itemize} Using (A1--A4) and It\^o's lemma, \eqref{eq:Girsanov} can be re-expressed as \begin{align} \label{eq:Girsanov2} \frac{d\mathbb{Q}_{x_0}}{d\mathbb{W}_{x_0}}(X) \propto \exp\left\{A(X_T)-A^+\right\}\exp\left\{ - \int_0^T [\phi(X_t) - \phi^-] dt\right\}. \end{align} Written in this form, the right hand side of \eqref{eq:Girsanov2} is less than or equal to $1$, and therefore provides the required rejection probability. To make the accept/reject decision, it is necessary to construct an event occurring with probability \eqref{eq:Girsanov2}. This is easy to achieve given a realized sample path $(X_t)_{t\in[0,T]} \sim \mathbb{W}_{x_0}$, but obtaining such a path would require an infinite amount of computation. 
Instead, note that the right-hand term in \eqref{eq:Girsanov2} is the probability that all points in a Poisson point process $\mathbf{\Phi} = \{ (t_j, \psi_j) : j = 0,1,\ldots\}$ of unit rate on $[0,T] \times [0, \infty)$ lie in the epigraph of $t \mapsto [\phi(X_t) - \phi^-]$, and this event can be determined by simulating $X$ only at times $t_1, t_2, \ldots$. Thus, the following algorithm returns a (random) collection of skeleton points from $X \sim \mathbb{Q}_{x_0}$: \\ \begin{algorithm}[H] \DontPrintSemicolon \Repeat{false}{ Simulate $\mathbf{\Phi}$, a Poisson point process on $[0,T] \times [0,\infty)$.\; Simulate $U \sim \text{Uniform}[0,1]$.\; Given $\mathbf{\Phi} = \{ (t_j, \psi_j) : j = 0,1,\ldots\}$, simulate $B \sim \mathbb{W}_x$ at times $(t_j)_{j=0,1,\ldots}$ and at time $T$.\; \If{$\phi(B_{t_j})-\phi^- \leq \psi_j$ for all $j = 0,1,\ldots$ and $U \leq \exp\{A(B_T) - A^+\}$}{\Return $\{(t_j, B_{t_j}) : j=0,1,\ldots\} \cup \{(T,B_T)\}$.} } \caption{Exact algorithm for simulating the path of a diffusion process with law $\mathbb{Q}_x$.}\label{alg:EA} \end{algorithm} ~\\ Once a skeleton has been accepted, further points can be filled in as required by sampling from Brownian bridges; no further reference to $\mathbb{Q}_{x}$ is necessary. If $\phi$ is bounded above, by $\phi^+$ say, then Algorithm \ref{alg:EA} can be implemented with finite computation by thinning $\mathbf{\Phi}$ to a Poisson point process on $[0,T]\times [0,\phi^+-\phi^-]$; $|\mathbf{\Phi}|$ is then almost surely finite. [However, this requirement on $\phi$ can be relaxed \citep{bes:etal:2006:B, bes:etal:2008}.] We remark that assumption (A4) is restrictive, and can in fact be removed by using a certain \emph{biased} Brownian motion as an alternative candidate; this also improves the efficiency of the algorithm \citep{bes:rob:2005}.
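As a concrete illustration of Algorithm \ref{alg:EA} (in its thinned form), the following Python sketch runs the repeat-until-accept loop for the toy drift $\mu(x) = \sin x$, a standard test case from the exact-simulation literature rather than a model from this paper. Routine calculus gives $\phi(x) = \frac{1}{2}(\sin^2 x + \cos x)$, so $\phi^- = -1/2$ and $\phi^+ = 5/8$, and $A(x) = 1 - \cos x \leq A^+ = 2$; all names below are ours:

```python
import math
import random

PHI_MINUS, PHI_PLUS, A_PLUS = -0.5, 0.625, 2.0  # bounds for mu(x) = sin(x)

def phi(x):
    # phi(x) = (mu(x)^2 + mu'(x)) / 2 for mu(x) = sin(x)
    return 0.5 * (math.sin(x) ** 2 + math.cos(x))

def poisson(rng, mean):
    # Poisson draw via exponential inter-arrival times (fine for small means)
    n, s = 0, rng.expovariate(1.0)
    while s < mean:
        n, s = n + 1, s + rng.expovariate(1.0)
    return n

def ea_skeleton(T, x0, rng):
    """Repeat-until-accept loop of the exact algorithm for dX = sin(X)dt + dB."""
    while True:
        # Thinned Poisson point process on [0,T] x [0, phi+ - phi-]
        marks = [(rng.uniform(0.0, T), rng.uniform(0.0, PHI_PLUS - PHI_MINUS))
                 for _ in range(poisson(rng, (PHI_PLUS - PHI_MINUS) * T))]
        # Candidate Brownian path, simulated only at the marked times and at T
        vals, b, tprev = {}, x0, 0.0
        for t in sorted(t for t, _ in marks) + [T]:
            b += math.sqrt(t - tprev) * rng.gauss(0.0, 1.0)
            vals[t], tprev = b, t
        poisson_ok = all(phi(vals[t]) - PHI_MINUS <= psi for t, psi in marks)
        # Final test U <= exp(A(B_T) - A+), with A(x) = 1 - cos(x)
        if poisson_ok and rng.random() <= math.exp((1.0 - math.cos(vals[T])) - A_PLUS):
            return sorted(vals.items())   # accepted skeleton points
```

Because $\phi$ is bounded above here, the point process lives on a finite strip and each iteration of the loop requires only finite work.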
We present the EA in the form above since an analogue of (A4) \emph{does} hold for the Wright-Fisher diffusion, and in any case a `biased Wright-Fisher diffusion' is not available. \subsection{Exact algorithm for the Wright-Fisher diffusion} As noted earlier, the requirements (A1--A4) need not hold when our target is the Wright-Fisher diffusion. However, related techniques can be used if the candidate process is chosen to be another Wright-Fisher diffusion, and in particular one with the same mutation parameters. Denote the target law $\bbW\bbF_{\ggg,x_0}$ [with drift $\ggg$], and the candidate law $\bbW\bbF_{\alpha,x_0}$ [with drift \eqref{eq:neutral-drift}]. We will write $\ggg \in {\mathcal W}{\mathcal F}$ if there exists a drift $\alpha$ of the form \eqref{eq:neutral-drift} such that the following hold: \begin{itemize} \item[(WF1)] The Radon-Nikod\'ym derivative of $\bbW\bbF_{\ggg,x_0}$ with respect to $\bbW\bbF_{\alpha,x_0}$ exists and is given by Girsanov's formula, \begin{multline} \label{eq:GirsanovWF} \frac{d\bbW\bbF_{\ggg, x_0}}{d\bbW\bbF_{\alpha, x_0}}(X) = \exp\left\{\int_0^T \frac{\ggg(X_t) - \alpha(X_t)}{X_t(1-X_t)} dX_t \right.\\ \left. {}- \frac{1}{2}\int_0^T \frac{\ggg^2(X_t) - \alpha^2(X_t)}{X_t(1-X_t)} dt\right\}, \end{multline} \item[(WF2)] $\ggg$ is continuously differentiable on $(0,1)$. \item[(WF3)] $\widetilde{\phi}(x)$ is bounded on $(0,1)$: $\widetilde{\phi}^- \leq \widetilde{\phi}(x) \leq \widetilde{\phi}^+$, where \[ \widetilde{\phi}(x) := \frac{1}{2}\left[\frac{\ggg^2(x) - \alpha^2(x)}{x(1-x)} + \ggg'(x) - \alpha'(x) - [\ggg(x) - \alpha(x)]\frac{1-2x}{x(1-x)}\right]. \] \item[(WF4)] $\widetilde{A}(x) := \int^x_{0} \frac{\ggg(z) - \alpha(z)}{z(1-z)} dz$ is bounded above, by $\widetilde{A}^+$ say. \end{itemize} Specific conditions on $\alpha$ and $\ggg$ to satisfy (WF1) are detailed e.g.\ in Theorem 8.6.8 in \citet{oks:2003}. 
\citep[This theorem imposes some unduly restrictive conditions to ensure that the {\sc sde}{} has a unique weak solution, but this can be established for \eqref{eq:WFSDE} by other means;][Ch.\ 4.]{par:2009} Following \sref{sec:EA}, we apply It\^o's lemma to $\widetilde{A}(x)$ to re-express \eqref{eq:GirsanovWF} as \begin{align} \label{eq:GirsanovWF2} \frac{d\bbW\bbF_{\ggg, x_0}}{d\bbW\bbF_{\alpha, x_0}}(X) \propto \exp\left\{\widetilde{A}(X_T)-\widetilde{A}^+\right\}\exp\left\{ - \int_0^T [\widetilde{\phi}(X_t) - \widetilde{\phi}^-] dt\right\}. \end{align} We recognize the rightmost term in \eqref{eq:GirsanovWF2} as the probability that all points in a Poisson point process on $[0,T]\times [0,\widetilde{\phi}^+-\widetilde{\phi}^-]$ lie in the epigraph of $t \mapsto \widetilde{\phi}(X_t) - \widetilde{\phi}^-$. Hence, Algorithm \ref{alg:WF-EA} returns exact samples from $\bbW\bbF_{\ggg,x_0}$. Step \ref{alg:WF3} of Algorithm \ref{alg:WF-EA} is achieved via Algorithm \ref{alg:f}. Once a skeleton is accepted, further points can be filled in via Algorithm \ref{alg:fbridge}.
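In outline (names and signature are ours), the rejection loop of Algorithm \ref{alg:WF-EA} can be sketched in Python as follows, with the exact neutral transition draw of Algorithm \ref{alg:f} abstracted behind a user-supplied callable, since that sampler carries all of the series machinery of the earlier sections:

```python
import math
import random

def poisson(rng, mean):
    # Poisson draw via exponential inter-arrival times (fine for small means)
    n, s = 0, rng.expovariate(1.0)
    while s < mean:
        n, s = n + 1, s + rng.expovariate(1.0)
    return n

def wf_ea_skeleton(T, x0, neutral_draw, phi_t, phi_lo, phi_hi, A_t, A_plus, rng):
    """Rejection loop for the nonneutral Wright-Fisher diffusion.

    `neutral_draw(x, dt, rng)` stands in for the exact neutral transition
    sampler; `phi_t` and `A_t` are the functions of (WF3)-(WF4), with bounds
    phi_lo <= phi_t <= phi_hi and A_t <= A_plus.
    """
    while True:
        # Poisson point process on [0,T] x [0, phi_hi - phi_lo]
        marks = sorted((rng.uniform(0.0, T), rng.uniform(0.0, phi_hi - phi_lo))
                       for _ in range(poisson(rng, (phi_hi - phi_lo) * T)))
        x, tprev, skel, ok = x0, 0.0, [], True
        for t, psi in marks:
            x = neutral_draw(x, t - tprev, rng)   # candidate path at mark times
            skel.append((t, x))
            tprev = t
            if phi_t(x) - phi_lo > psi:           # a point fell below the graph
                ok = False
                break
        if ok:
            xT = neutral_draw(x, T - tprev, rng)
            if rng.random() <= math.exp(A_t(xT) - A_plus):
                return skel + [(T, xT)]
```

Passing, say, a clipped Euler step as \texttt{neutral\_draw} exercises the control flow but of course forfeits exactness; only the exact neutral sampler makes the returned skeleton a draw from $\bbW\bbF_{\ggg,x_0}$.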
\begin{algorithm}[t] \DontPrintSemicolon \Repeat{false}{ Simulate $\mathbf{\Phi}$, a Poisson point process on $[0,T] \times [0,\widetilde{\phi}^+-\widetilde{\phi}^-]$.\; Simulate $U \sim \text{Uniform}[0,1]$.\; Given $\mathbf{\Phi} = \{ (t_j, \psi_j) : j = 0,1,\ldots, J\}$, simulate $X \sim \bbW\bbF_{\alpha,x}$ at times $(t_j)_{j=0,1,\ldots,J}$ and at time $T$.\label{alg:WF3}\; \If{$\widetilde{\phi}(X_{t_j})-\widetilde{\phi}^- \leq \psi_j$ for all $j = 0,1,\ldots, J$ and $U \leq \exp\{\widetilde{A}(X_T) - \widetilde{A}^+\}$\nllabel{eAtest}}{\Return $\{(t_j, X_{t_j}) : j=0,1,\ldots,J\} \cup \{(T,X_T)\}$.} } \caption{Exact algorithm for simulating the path of a diffusion process with law $\bbW\bbF_{\ggg,x}$.}\label{alg:WF-EA} \end{algorithm} \subsection{A class of drifts for which exact simulation is possible} The class ${\mathcal W}{\mathcal F}$ defines the set of drifts for which exact simulation is possible. Here we show that ${\mathcal W}{\mathcal F}$ covers the most popular population genetics diffusion processes with mutation and selection (including frequency-dependent selection), whose drift admits the general form \begin{equation} \gamma(x)=\alpha(x)+ x(1-x) \eta(x), \label{eq:coopbob-drift} \end{equation} where $\alpha$ is as in \eqref{eq:neutral-drift} for some $\theta_1,\theta_2>0$ and $\eta$ is a reasonably regular function of $x$. The case of diploid selection \eqref{eq:selection-drift} is recovered by setting $\eta(x)\propto (x+h(1-2x))$. For general $\eta$ the properties of diffusion processes with drift \eqref{eq:coopbob-drift} and of the corresponding genealogies are studied, among others, by \cite{coo:gri:2004}. \begin{proposition} \label{prop:coopbob} Any drift of the form \eqref{eq:coopbob-drift}, for $\alpha$ as in \eqref{eq:neutral-drift} for some $\theta_1,\theta_2>0$ and for $\eta$ continuously differentiable in $(0,1)$, is a member of ${\mathcal W}{\mathcal F}$.
\end{proposition} It is worth noting that the additive structure in the drift $\gamma$ (that is, with a selection component added to the mutation component $\alpha$) is a widely accepted and theoretically justified property of all population genetics diffusion models \citep[see e.g.\ the discussion in][p186--187]{kar:tay:1981}. \subsubsection{Example: Wright-Fisher diffusion with diploid selection} By \propref{prop:coopbob}, the drift $\beta$ [eq.~\eqref{eq:selection-drift}] satisfies $\beta \in {\mathcal W}{\mathcal F}$. In fact, in the haploid case ($h = 1/2$), there is much simplification: $\widetilde{\phi}$ is a quadratic polynomial for which analytic bounds are available, and $\widetilde{A}(x) = \sigma x/2$. We implemented our exact algorithm on this model, and investigated its performance by considering several combinations of parameters; results are shown in \tref{tab:nonneutralWFresults}. For moderate selection ($|\sigma| = 1$) the algorithm is extremely efficient, with only slightly more than one candidate needed per acceptance. Furthermore, most simulations resulted in zero Poisson points. These results are quite robust to the length of the path $t$, the initial frequency $x$, and the sign of the selection parameter. For stronger selection ($\sigma = 10$) we observe some deterioration in efficiency because of the greater mismatch between candidate and target paths---to the extent that simulation of paths of length $t = 5.0$ became prohibitive. Nonetheless, it is still feasible to simulate a collection of shorter paths in a few seconds (and to string these together to construct longer paths, if necessary). \begin{table}[!t] \caption{\label{tab:nonneutralWFresults}Performance of Algorithm \ref{alg:WF-EA} applied to the Wright-Fisher diffusion with symmetric mutation and additive selection. Each row reports means (per accepted path) across a simulation to generate 1,000 accepted paths. 
Paths were initiated at $X_0 = x$ and run for time $t$, with mutation parameters $\theta_1=\theta_2 = 0.01$ and selection parameter $\sigma$ (and $h = 0.5$). Reported are the total numbers (per \emph{accepted} path) of: attempts, Poisson points simulated, coefficients $b^{(t,\theta)}_k(m)$ needed, random variables generated (that is, the aggregate of all draws from underlying constituent distributions: uniforms, betas, and so on), the number of times the simulation resorted to the approximation of \thmref{thm:gri:1984}, and the running time in milliseconds.} \begin{center} \begin{tabular}{r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l} \multicolumn{16}{c}{$\sigma = 1$}\\ \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{r}{Poisson} & \multicolumn{2}{c}{} & \multicolumn{2}{r}{Random} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\ \multicolumn{2}{c}{$t$} & \multicolumn{2}{c}{$x$} & \multicolumn{2}{c}{Attempts} & \multicolumn{2}{r}{points} & \multicolumn{2}{r}{Coefficients} & \multicolumn{2}{r}{variables} & \multicolumn{2}{r}{G1984} & \multicolumn{2}{c}{Time (ms)}\\ \hline 0&1 & 0&5 & 1&27 & 0&00 & 116&07 & 7&37 & 0&00 & 0&007 \\ 0&1 & 0&25 & 1&40 & 0&00 & 132&87 & 8&01 & 0&00 & 0&008\\ 0&1 & 0&01 & 1&58 & 0&01 & 141&34 & 8&91 & 0&01 & 0&009\\ 0&5 & 0&5 & 1&23 & 0&03 & 18&65 & 7&22 & 0&00 & 0&003\\ 0&5 & 0&25 & 1&48 & 0&03 & 21&60 & 8&45 & 0&01 & 0&003 \\ 0&5 & 0&01 & 1&58 & 0&03 & 23&84 & 9&02 & 0&01 & 0&003 \\ 5&0 & 0&5 & 1&29 & 0&24 & 5&23 & 8&58 & 0&00 & 0&002 \\ 5&0 & 0&25 & 1&49 & 0&27 & 6&18 & 9&64 & 0&01 & 0&002 \\ 5&0 & 0&01 & 1&67 & 0&28 & 7&36 & 10&79& 0&01 & 0&002 \\ \hline \multicolumn{16}{c}{}\\ \multicolumn{16}{c}{$\sigma = 10$}\\ \hline 0&1 & 0&5 & 11&83 & 3&72 & 995&92 & 62&68 & 1&83 & 0&062 \\ 0&1 & 0&25 & 41&69 & 12&97 & 3714&82 & 224&95 & 7&86 & 0&225 \\ 0&1 & 0&01 & 145&73 & 45&96 & 14937&52 & 856&21 & 45&88 & 0&879 \\ 0&5 & 0&5 & 13&16 & 20&96 & 641&52 &
109&00 & 2&47 & 0&054\\ 0&5 & 0&25 & 43&82 & 69&05 & 2729&80 & 399&95 & 10&89 & 0&214 \\ 0&5 & 0&01 & 149&21 & 235&34 & 17044&43 & 1869&54 & 71&54 & 1&185 \\ \hline \end{tabular} \end{center} \end{table} To make this observation more precise, \citet[Proposition 3]{bes:etal:2006:B} obtained an explicit upper bound on the expected number of Poisson points required of Algorithm \ref{alg:EA}, and hence on the computational complexity of the algorithm. A careful reading of their result shows that it relies on the existence of the bounds $\phi^\pm$ but does not depend on the laws of the diffusions involved, and carries over easily to the Wright-Fisher case. We therefore omit a proof of the following. \begin{proposition} \label{prop:nonneutralcomplexity} Let $N^{(T,\theta)}$ denote the number of Poisson points required until the first accepted path. Then \[ \mathbb{E}[N^{(T,\theta)}] \leq (\widetilde{\phi}^+ - \widetilde{\phi}^-)Te^{(\widetilde{\phi}^+ - \widetilde{\phi}^-)T + \widetilde{A}^+}. \] \end{proposition} An immediate consequence of \propref{prop:nonneutralcomplexity} is that the complexity of simulating a path of length $KT$ is $O((\widetilde{\phi}^+ - \widetilde{\phi}^-)KTe^{(\widetilde{\phi}^+ - \widetilde{\phi}^-)KT + \widetilde{A}^+})$ as $KT \to \infty$. However, superior performance can be achieved by splitting the problem into $K$ separate simulations; the complexity is then $O((\widetilde{\phi}^+ - \widetilde{\phi}^-)KTe^{(\widetilde{\phi}^+ - \widetilde{\phi}^-)T + \widetilde{A}^+})$, which is linear in $K$ as in \citet{bes:etal:2006:B}. As this is a statement about the complexity of the algorithm as the path length increases, it continues to hold even if we account for the increasing cost associated with each Poisson point as $T \to 0$, as quantified by \propref{prop:neutralcomplexity}. In practice one might wish to optimize the choice of $K$ and $T$ for a given path length $KT$.
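As a worked numerical illustration (our own, for the haploid case $h = 1/2$ of the previous subsection, writing $\alpha(x) = [\theta_1(1-x) - \theta_2 x]/2$): a short computation with (WF3) collapses $\widetilde{\phi}$ to the quadratic $\widetilde{\phi}(x) = \frac{\sigma}{4}[\theta_1(1-x) - \theta_2 x] + \frac{\sigma^2}{8}x(1-x)$, while $\widetilde{A}(x) = \sigma x/2$ gives $\widetilde{A}^+ = \max(0,\sigma/2)$. The Python sketch below bounds $\widetilde{\phi}$ on $[0,1]$ and evaluates the bound of \propref{prop:nonneutralcomplexity}:

```python
import math

def haploid_phi_bounds(sigma, th1, th2):
    """Bounds of the quadratic phi-tilde for gamma(x) = alpha(x) + (sigma/2)x(1-x),
    with alpha(x) = (th1*(1-x) - th2*x)/2 (our parameterization)."""
    phi = lambda x: sigma / 4 * (th1 * (1 - x) - th2 * x) + sigma ** 2 / 8 * x * (1 - x)
    a = -sigma ** 2 / 8                              # quadratic coefficient
    b = sigma ** 2 / 8 - sigma / 4 * (th1 + th2)     # linear coefficient
    cand = [0.0, 1.0]
    if a != 0.0 and 0.0 < -b / (2 * a) < 1.0:
        cand.append(-b / (2 * a))                    # interior vertex
    vals = [phi(x) for x in cand]
    return min(vals), max(vals)

def expected_points_bound(sigma, th1, th2, T):
    """Upper bound on E[N] from the proposition, for a path of length T;
    A-tilde(x) = sigma*x/2, so A+ = max(0, sigma/2)."""
    lo, hi = haploid_phi_bounds(sigma, th1, th2)
    return (hi - lo) * T * math.exp((hi - lo) * T + max(0.0, sigma / 2))
```

For $\sigma = 1$ and $\theta_1 = \theta_2 = 0.01$, simulating a path of length $5$ as five unit-length segments gives a smaller aggregate bound than one simulation of length $5$, in line with the linear-in-$K$ complexity noted above.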
In our application, we must be prepared to introduce an additional constraint in order to fix $T$ some distance away from $0$ (and, as we recommend above, the choice $T \geq 0.05$ seems adequate). \section{Simulating a nonneutral Wright-Fisher bridge} \label{sec:nonneutralbridge} For completeness, here we provide an algorithm for simulating a nonneutral Wright-Fisher \emph{bridge} (Algorithm \ref{alg:WF-EA-bridge}). This follows immediately from the previous sections; the only modification is to note that the appropriate candidate process is the corresponding neutral bridge, which can be simulated via Algorithm \ref{alg:fbridge}. The rest follows exactly as in the Brownian case; see \citet[Section 6.2]{bes:etal:2006:B} for details. \begin{algorithm}[t] \DontPrintSemicolon \Repeat{false}{ Simulate $\mathbf{\Phi}$, a Poisson point process on $[0,T] \times [0,\widetilde{\phi}^+-\widetilde{\phi}^-]$.\; Given $\mathbf{\Phi} = \{ (t_j, \psi_j) : j = 0,1,\ldots, J\}$, simulate $X \sim (\bbW\bbF_{\alpha,x}\mid X_T = y)$ at times $(t_j)_{j=0,1,\ldots,J}$.\; \If{$\widetilde{\phi}(X_{t_j})-\widetilde{\phi}^- \leq \psi_j$ for all $j = 0,1,\ldots, J$}{\Return $\{(t_j, X_{t_j}) : j=0,1,\ldots,J\} \cup \{(T,y)\}$.} } \caption{Exact algorithm for simulating the path of a diffusion process with law $\bbW\bbF_{\ggg,x}$ conditioned on $X_T = y$.}\label{alg:WF-EA-bridge} \end{algorithm} \section{Discussion} \label{sec:discussion} In this paper we have shown how to simulate exactly from the scalar Wright-Fisher diffusion, as well as a number of important and closely related processes: these include the ancestral process of an $\infty$-leaf Kingman coalescent tree, the Fleming-Viot process with parent-independent mutation, the nonneutral Wright-Fisher diffusion, and neutral and nonneutral Wright-Fisher bridges.
Some interesting open problems remain, including mutation operators which do not lead to reversible diffusions, and the problem of sampling from $(d-1)$-dimensional ($2 < d\leq\infty$) Wright-Fisher bridges. It is also important to notice that, in order to employ the machinery proposed in this paper, the mutation parameters $\theta_1,\theta_2$ in the $\alpha$-component of a general drift $\gamma$ must both be positive: models of the form \eqref{eq:coopbob-drift} with no mutation ($\alpha=0$) or with one-directional mutation (one mutation parameter positive and the other zero) have at least one absorbing boundary, and therefore there cannot be absolute continuity with respect to a stationary Wright-Fisher diffusion with a transition density expansion of the form of \eqref{eq:bridge-mixture}. For such cases, series expansions are in fact available with a structure similar to \eqref{eq:transition-dual} \citep[see e.g.][]{eth:gri:1993}, and we believe it should be relatively simple to adapt our method to encompass selection models with absorbing boundaries, a goal we do not pursue here. For choices of drifts $\gamma$ more general than \eqref{eq:coopbob-drift}, possibly arising in models beyond population genetics, it is harder to specify conditions for (WF1) verifiable in a simple way by inspection of $\gamma$. The determination of which drift functions $\gamma$ guarantee (assuming identical diffusion coefficients) absolute continuity with respect to a Wright-Fisher process seems, to our knowledge, to be an open problem. We expect that most of what is affected by the drift pertains to the behaviour of the process at the boundaries.
The problem, however, might be delicate and not just limited to making sure that $\gamma$ prevents the boundaries from being absorbing: for example, if for the diffusion with drift $\gamma$ the boundaries are entrance or reflecting, the rate of escape from the boundaries might still make its sample paths qualitatively different from those of any Wright-Fisher diffusion, even within the same boundary regime (entrance or reflecting, respectively). To support this conjecture, we remark that the very same circumstances induce mutual singularity in squared Bessel processes (whose diffusion coefficient is $\sqrt{x}$, hence quite similar near zero to that of the Wright-Fisher diffusion): it is well known that any two squared Bessel processes starting at 0 are mutually singular whenever their drifts differ, even if they are both within the same boundary regime \citep[see][Lemma (2.1) and references therein]{pit:yor:1981}. In a wider perspective, we believe that the approach proposed here might serve as a template for developing new techniques for sampling exactly from diffusion processes by means of non-Brownian bridges whose transition function admits a transparent series expansion. \section{Proofs} \label{sec:proofs} {\bf Proof of \propref{prop:qm}.} First suppose $m > 0$.\\ (i) Note that \begin{equation} \label{eq:ratio} \frac{b^{(t,\theta)}_{k+1}(m)}{b^{(t,\theta)}_{k}(m)} = \frac{\theta + m + k - 1}{k-m+1}\cdot\frac{\theta + 2k + 1}{\theta + 2k - 1}e^{-\frac{(2k+\theta)t}{2}} =: f^\theta_m(k)e^{-\frac{(2k+\theta)t}{2}}, \end{equation} say. Treat $f^\theta_m(k)$ as having domain $\mathbb{R}$; it then suffices to show that $(f^{\theta}_m)'(k) < 0$ for all sufficiently large $k$, for then the right-hand side of \eqref{eq:ratio} decreases in $k$ monotonically to $0$. Part (i) follows for the first finite $k$ ($= m+i$) at which the right-hand side of \eqref{eq:ratio} drops below $1$.
Routine calculations show that $(f^\theta_m)'(k) < 0$ for all $k > (\sqrt{2(m-1) + \theta} - \theta)/2$.\\ (ii) Note that \[ \frac{\sqrt{2(m-1) + \theta} - \theta}{2} < \sqrt{\frac{m-1}{2}} + \frac{\sqrt{\theta} - \theta}{2} < \sqrt{\frac{m-1}{2}} + \frac{1}{8} < m, \] so in fact $(f^\theta_m)'(k) < 0$ for \emph{all} $k \geq m$, and thus as soon as $b^{(t,\theta)}_{k+1}(m) < b^{(t,\theta)}_{k}(m)$ for some $k$, it must also hold for all subsequent $k$.\\ (iii) The right-hand side of \eqref{eq:ratio} can be made independent of $m$ by noting that \begin{equation} \label{eq:finequality} f^\theta_m(k) < f^\theta_k(k) = \theta + 2k + 1. \end{equation} Thus for $\Ca^{(t,\theta)}_m = 0$ to hold we need $m$ to exceed the upper of the two solutions on $\mathbb{R}$ of \[ (\theta + 2k +1)e^{-(2k+\theta)t/2} = 1. \] The definition of $\Cb^{(t,\theta)}_0$ is one way to express this condition, since the maximum of $(\theta + 2k +1)e^{-(2k+\theta)t/2}$, which separates the two solutions, is at $k = \left(\frac{1}{t} - \frac{\theta + 1}{2}\right)$. Finally, consider the special case $m=0$. If $\theta > 1$ then arguments similar to those in (i--ii) above continue to hold. However, if $\theta \leq 1$ then in fact $(f^\theta_0)'(k) > 0$ for all $k$, with $f_0^\theta(k)$ continuous on $k \geq 1$. But then $f^\theta_0(k) < f^\theta_0(\infty) = 1$ for $k \geq 1$, so $f_0^\theta(k)e^{-(2k+\theta)t/2} < 1$ for all $k \geq 1$ and hence (i--ii) still hold, with $\Ca^{(t,\theta)}_0 \leq 1$. \qed \\ \noindent {\bf Proof of \propref{prop:WFbridge}.} This follows by substituting \eqref{eq:transition-dual} into \eqref{eq:bridge-transition}, multiplying and dividing by $B(\theta_1 + l +j, \theta_2 + m-l + k-j)$, and rearranging.\qed \\ \noindent {\bf Proof of \lemmaref{lem:beta}.} First suppose $l \leq \lfloor mz\rfloor$.
Then, using $\Gamma(x+1) = x\Gamma(x)$, \begin{align*} \MoveEqLeft{\mathbb{P}(L_{m+1} = l){\mathcal D}_{\theta_1 + l,\theta_2 + m+1-l}(z)}\\ &= \left[\frac{m+1}{m+1-l}(1-x)\frac{\theta+m}{\theta_2 + m-l}(1-z)\right]\mathbb{P}(L_m = l){\mathcal D}_{\theta_1 + l, \theta_2 + m-l}(z),\\ &\leq \left[\frac{m+1}{m+1-mz}(1-x)\frac{\theta+m}{\theta_2 + m-mz}(1-z)\right]\mathbb{P}(L_m = l){\mathcal D}_{\theta_1 + l, \theta_2 + m-l}(z),\\ &\leq \left[\frac{1-x}{1-z}\right]\mathbb{P}(L_m = l){\mathcal D}_{\theta_1 + l, \theta_2 + m-l}(z), \end{align*} maximizing the term in square brackets first in $l$ and then in $m$, noting for the last inequality that this term is increasing in $m$. Hence, summing over $l=0,\ldots, \lfloor mz \rfloor$, \begin{multline} \label{eq:partitionmz1} \sum_{l=0}^{\lfloor mz \rfloor} \mathbb{P}(L_{m+1} = l){\mathcal D}_{\theta_1 + l,\theta_2 + m+1-l}(z) \\ < \frac{1-x}{1-z}\sum_{l=0}^{\lfloor mz\rfloor} \mathbb{P}(L_m = l){\mathcal D}_{\theta_1 + l, \theta_2 + m-l}(z). \end{multline} By a similar argument, for $l \geq \lfloor mz \rfloor$: \begin{equation*} \mathbb{P}(L_{m+1} = l+1){\mathcal D}_{\theta_1 + (l+1),\theta_2 + m+1-(l+1)}(z) \leq \frac{x}{z}\mathbb{P}(L_m = l){\mathcal D}_{\theta_1 + l, \theta_2 + m-l}(z), \end{equation*} (this time it is crucial we compare $L_{m+1} = l+1$ with $L_m = l$, rather than with $L_m = l+1$), and hence \begin{multline} \label{eq:partitionmz2} \sum_{l=\lfloor mz \rfloor+1}^{m+1} \mathbb{P}(L_{m+1} = l){\mathcal D}_{\theta_1 + l,\theta_2 + m+1-l}(z) \\ < \frac{x}{z}\sum_{l=\lfloor mz\rfloor}^m \mathbb{P}(L_m = l){\mathcal D}_{\theta_1 + l, \theta_2 + m-l}(z). 
\end{multline} Finally, sum \eqref{eq:partitionmz1} and \eqref{eq:partitionmz2} to yield \eqref{eq:beta}, noting that the overlap on the right-hand side at $l = \lfloor mz \rfloor$ necessitates the given definition of $\K^{(x,z)}$ (instead of the simpler bound $\frac{1-x}{1-z} \vee \frac{x}{z}$).\qed \\ \begin{figure}[t] \centering \setlength{\unitlength}{1.2cm} \begin{picture}(11,6.3)(0.9,0.25) \thinlines \put(0.9,0.9){\vector(0,1){5}} \put(0.9,0.9){\vector(1,0){9.5}} \put(0.8,6.0){$m$} \put(10.5,0.8){$k$} \put(4,4.3){$c_{m,m}$} \put(4.6,3.3){$c_{m+1,m-1}$} \put(7,1.3){$c_{2m,0}$} \put(5.25,4.3){$c_{m+1,m}$} \put(6.05,3.3){$c_{m+2,m-1}$} \put(8.25,1.3){$c_{2m+1,0}$} \put(5.25,5.55){$c_{m+1,m+1}$} \put(6.5,4.3){$c_{m+2,m}$} \put(7.5,3.3){$c_{m+3,m-1}$} \put(9.5,1.3){$c_{2m+2,0}$} \put(3,3.3){$\iddots$} \put(2,2.3){$c_{1,1}$} \put(3,2.3){$c_{2,1}$} \put(4,2.3){$\iddots$} \put(1,1.3){$c_{0,0}$} \put(2,1.3){$c_{1,0}$} \put(3,1.3){$c_{2,0}$} \put(4,1.3){$c_{3,0}$} \put(5,1.3){$\ldots$} \multiput(2.4,2.1)(0.04,-0.04){16}{\line(1,0){0.02}} \multiput(3.4,2.1)(0.04,-0.04){16}{\line(1,0){0.02}} \multiput(4.4,2.1)(0.04,-0.04){16}{\line(1,0){0.02}} \multiput(3.4,3.1)(0.04,-0.04){16}{\line(1,0){0.02}} \multiput(4.4,4.1)(0.04,-0.04){16}{\line(1,0){0.02}} \multiput(5.4,3.1)(0.04,-0.04){40}{\line(1,0){0.02}} \multiput(5.65,4.1)(0.04,-0.04){16}{\line(1,0){0.02}} \multiput(6.65,3.1)(0.04,-0.04){40}{\line(1,0){0.02}} \multiput(5.65,5.35)(0.04,-0.04){22}{\line(1,0){0.02}} \multiput(6.9,4.1)(0.04,-0.04){16}{\line(1,0){0.02}} \multiput(7.9,3.1)(0.04,-0.04){40}{\line(1,0){0.02}} \multiput(7.7,0.8)(0.04,-0.04){10}{\line(1,0){0.02}} \multiput(8.95,0.8)(0.04,-0.04){10}{\line(1,0){0.02}} \multiput(10.3,0.8)(0.04,-0.04){10}{\line(1,0){0.02}} \multiput(1.7,0.8)(0.04,-0.04){10}{\line(1,0){0.02}} \multiput(2.7,0.8)(0.04,-0.04){10}{\line(1,0){0.02}} \multiput(3.7,0.8)(0.04,-0.04){10}{\line(1,0){0.02}} \multiput(4.7,0.8)(0.04,-0.04){10}{\line(1,0){0.02}} \put(2.1,0.25){$d_{0}$} 
\put(3.1,0.25){$d_{1}$} \put(4.1,0.25){$d_{2}$} \put(5.1,0.25){$d_{3}$} \put(8.1,0.25){$d_{2m}$} \put(9.35,0.25){$d_{2m+1}$} \put(10.7,0.25){$d_{2m+2}$} \thicklines \put(5.2,4.34){\vector(-1,0){0.55}} \put(6.0,3.34){\vector(-1,0){0.25}} \put(7.2,2.3){\vector(-1,0){0.6}} \put(8.2,1.34){\vector(-1,0){0.55}} \put(6.45,4.34){\vector(-1,0){0.3}} \put(7.45,3.34){\vector(-1,0){0.25}} \put(8.45,2.3){\vector(-1,0){0.6}} \put(9.45,1.34){\vector(-1,0){0.3}} \put(5.65,5.25){\vector(0,-1){0.75}} \end{picture} \caption{\label{fig:dm}Computation of $\cf{k}{m}$. The sum along each antidiagonal (dashed) defines the sequence $(d_i)_{i=0,1,\ldots}$. To show that $d_{i+1} < d_i$, terms are paired off as shown by the arrows; that is, the coefficient at the head of a set of arrows is greater in magnitude than the sum of the terms at its tails.} \end{figure} \noindent {\bf Proof of \propref{prop:lattice}.} First, since $m \geq \D^{(t,\theta)}$ we know $j \geq \Ca^{(t,\theta)}_{m-j}$ for $j=0,\ldots, m$ and hence by \propref{prop:qm} that $b_{m+j+1}^{(t,\theta)}(m-j) < b_{m+j}^{(t,\theta)}(m-j)$. Now multiply this inequality by $\mathbb{E}[{\mathcal D}_{\theta_1 + L_{m-j}, \theta_2 + m-j - L_{m-j}}(z)]$ to yield \begin{equation} \label{eq:c} \cf{m+j+1}{m-j} < \cf{m+j}{m-j}. \end{equation} Thus, summing over $j=0,1,\ldots, m$, \begin{equation} \label{eq:m-even} \sum_{j=0}^m \cf{m+1+j}{m-j} < \sum_{j=0}^m \cf{m+j}{m-j}, \end{equation} which says precisely that $d_{2m+1} < d_{2m}$. We also need to show that $d_{2m+2} < d_{2m+1}$, but this case is more subtle since the left-hand side is a sum over one more term than the right. Instead, we will increment the first index in \eqref{eq:c} and then sum over $j=1,\ldots, m$: \begin{equation} \label{eq:c2} \sum_{j=1}^{m} \cf{m+2+j}{m-j} < \sum_{j=1}^{m} \cf{m+1+j}{m-j}.
\end{equation} It then suffices to show \begin{equation} \label{eq:suffices} \cf{m+1}{m+1} + \cf{m+2}{m} < \cf{m+1}{m}, \end{equation} for if we sum \eqref{eq:c2} and \eqref{eq:suffices} we obtain $d_{2m+2} < d_{2m+1}$ as required (\fref{fig:dm}). To show \eqref{eq:suffices}, first note that \[ \frac{\cf{k+1}{m}}{\cf{k}{m}} = \frac{b^{(t,\theta)}_{k+1}(m)}{b^{(t,\theta)}_{k}(m)} = f^\theta_m(k)e^{-\frac{(2k+\theta)t}{2}} \leq (\theta + 2k + 1)e^{-\frac{(2k+\theta)t}{2}}, \] with $f^\theta_m(k)$ defined as in \eqref{eq:ratio} and the inequality following from \eqref{eq:finequality}. Hence, choosing $k = m+1$ and noting that $m \geq C_\epsilon^{(t,\theta)}$, \begin{equation} \label{eq:c-firsthalf} \cf{m+2}{m} \leq (\theta + 2k + 1)e^{-\frac{(2k+\theta)t}{2}}\cf{m+1}{m} < (1-\epsilon)\cf{m+1}{m}. \end{equation} Second, note that \begin{align} \frac{\cf{m+1}{m+1}}{\cf{m+1}{m}} &= \frac{1}{m+1}\frac{\theta + 2m}{\theta + m}\frac{\mathbb{E}[{\mathcal D}_{\theta_1 + L_{m+1},\theta_2 + m+1 - L_{m+1}}(z)]}{\mathbb{E}[{\mathcal D}_{\theta_1 + L_{m},\theta_2 + m - L_{m}}(z)]}, \notag\\ &\leq \frac{1}{m+1}\cdot 2\frac{\mathbb{E}[{\mathcal D}_{\theta_1 + L_{m+1},\theta_2 + m+1 - L_{m+1}}(z)]}{\mathbb{E}[{\mathcal D}_{\theta_1 + L_{m},\theta_2 + m - L_{m}}(z)]}, \notag\\ &< \epsilon, \label{eq:c-secondhalf} \end{align} using \lemmaref{lem:beta} and $m +1 \geq 2\K^{(x,z)}/\epsilon$ for the last inequality. Rearrange \eqref{eq:c-secondhalf} and sum with \eqref{eq:c-firsthalf} to get \eqref{eq:suffices}. \qed \\ \noindent {\bf Proof of \propref{prop:bes:etal:2008:1}.} This follows immediately from Propositions \ref{prop:qm} and \ref{prop:lattice}. It can also be viewed as an application of Proposition 1 of \citet{bes:etal:2008} to a function $g(u_1,u_2,u_3) \propto u_1u_2/u_3$. 
\qed \\ \noindent {\bf Proof of \propref{prop:neutralcomplexity}.} (i) Using \eqref{eq:ratio} and \eqref{eq:finequality}, \begin{equation} \label{eq:finequality2} \frac{b^{(t,\theta)}_{K+1}(m)}{b^{(t,\theta)}_{K}(m)} = f^\theta_m(K)e^{-\frac{(2K+\theta)t}{2}} < (\theta+2K+1)e^{-\frac{(2K+\theta)t}{2}}. \end{equation} The right-hand side of \eqref{eq:finequality2} is less than 1 provided \[ K > \frac{\log(\theta + 2m + 1)}{t} - \frac{\theta}{2}, \] which implies that $\Ca_m^{(t,\theta)} < \frac{\log(\theta + 2m + 1)}{t} - \frac{\theta}{2} = O(t^{-1})$ as $t\to 0$.\\ (ii) Continuing, \[ \Ca_m^{(t,\theta)} < \frac{\log(\theta + 2m + 1)}{t} - \frac{\theta}{2} < \frac{\log(\theta + 2\Cb^{(t,\theta)}_0 + 1)}{t} - \frac{\theta}{2}, \] with the right-hand side independent of $m$ and asymptotically $O(t^{-1}\log(t^{-1}))$ as $t\to 0$ by (iii) below.\\ (iii) By inspection of the definition \eqref{eq:C} of $\Cb_0^{(t,\theta)}$, in order to ensure $K > \Cb_0^{(t,\theta)}$ it suffices that \[ K > \frac{1 \vee \log(\theta + 2K + 1)}{t}, \] which holds for sufficiently small $t$ if $K \sim t^{-(1+\kappa)}$, for any fixed $\kappa > 0$. Hence, $\Cb^{(t,\theta)}_0 = o(t^{-(1+\kappa)})$ as $t\to 0$.\\ (iv) Parts (i--iii) cover those terms that must be precomputed in Algorithm \ref{alg:q}: For the random number of remaining terms we may assume $K \geq m+\Ca_m^{(t,\theta)}$, so that $b_{K+1}(m) < b_K(m)$. Write $Q_M^{\theta}(t) := \mathbb{P}(A^\theta_\infty(t) \leq M)$. Our aim is to show that these random remaining terms do not add to the complexity of the calculation beyond (iii); we achieve this by determining the complexity of \[ \Cc_\delta^{(t,\theta)}(M) := \inf\left\{ K \geq \max_{m\in\{0,\dots,M\}}(m+\Ca_m^{(t,\theta)}) : \sum_{m=0}^M \sum_{k=K}^\infty (-1)^{k-m} b_k(m) < \delta \right\} \] as $t\to 0$, for a fixed $\delta > 0$. 
This is the appropriate quantity to look at, since if $K \geq \Cc_\delta^{(t,\theta)}(M)$ then $|Q_M^{\theta}(t) - S^{\pm}_{\bfmath{k}}(M)| < \delta$, when $2k_i \geq K$; $i=0,\dots,M$. Furthermore, averaging over the uniform random variable $U$ drawn for inversion sampling, the total number of coefficients $N^{(t,\theta)}\mid U = u$ needed to determine that $Q_{m-1}^{\theta}(t) < u < Q_m^{\theta}(t)$ is given by the number of terms needed to bound both our estimates of $Q_{m-1}^{\theta}(t)$ and $Q_m^{\theta}(t)$ away from $u$: \begin{align} \mathbb{E}(N^{(t,\theta)}) &= \mathbb{E}[\mathbb{E}(N^{(t,\theta)}\mid U)] \notag\\ &\leq \sum_{m=0}^\infty \int_{Q_{m-1}^{\theta}(t)}^{Q_{m}^{\theta}(t)} [\Cc_{Q_{m}^{\theta}(t)-u}^{(t,\theta)}(m) + \Cc_{u-Q_{m-1}^{\theta}(t)}^{(t,\theta)}(m-1)] du. \label{eq:EN} \end{align} We will show that if $K \sim t^{-(1+\kappa)}$ as $t\to 0$, for a fixed $\kappa > 0$, then we can attain the stated growth condition on $\mathbb{E}(N^{(t,\theta)})$. Using that the right-hand side of \eqref{eq:finequality2} is decreasing in $K$ for $K \geq m+\Ca_m^{(t,\theta)}$, for each $\zeta < 1$ we can find a constant $c_1^{(\theta)}$ such that for $K > c_1^{(\theta)}t^{-1}$, $f_m^\theta(K)e^{-(2K+\theta)t/2} < \zeta$. Hence, \begin{multline} \sum_{m=0}^M \sum_{k=K}^\infty (-1)^{k-m} b_k(m) < \sum_{m=0}^M \sum_{k=K}^\infty b_k(m) \\ < \sum_{m=0}^M \sum_{k=K}^\infty \zeta^{k-K} b_K(m) = \sum_{m=0}^M \frac{b_K(m)}{1-\zeta}. 
\label{eq:complexity} \end{multline} Routine calculations show that, for $m \geq 1$, \begin{multline*} a^\theta_{k,m+1} = a^\theta_{km} \frac{(k-m)(\theta+m+k-1)}{(m+1)(\theta+m)} \\\leq a^\theta_{km} \frac{(k-1)(\theta+k)}{2(\theta+1)} \leq a^\theta_{k1}\left[\frac{(k-1)(\theta+k)}{2(\theta+1)}\right]^{k-1}, \end{multline*} and thus, applying Stirling's formula to $a_{k1}^\theta \sim k^{c_2^{(\theta)}}$, \begin{equation} \label{eq:complexity2} b_K(m) = a^\theta_{Km}e^{-K(K+\theta-1)t/2} \leq e^{c_3^{(\theta)}K\log K - K(K+\theta-1)t/2} \leq c_4^{(\theta)}e^{-K^2t/2}, \end{equation} for some constants $c_2^{(\theta)}, c_3^{(\theta)}$, $c_4^{(\theta)}$ (which again exist by our assumption about the asymptotic growth of $K$). In the special case $m=0$, Stirling's formula also yields $a^\theta_{k0} \sim k^{c_2^{(\theta)}}$ and so the inequalities in \eqref{eq:complexity2} continue to hold. Combining \eqref{eq:complexity} with \eqref{eq:complexity2} we find \[ \sum_{m=0}^M \sum_{k=K}^\infty (-1)^{k-m} b_k(m) < c_5^{(\theta)}(M+1)e^{-K^2t/2}, \] for some $c_5^{(\theta)}$, which is less than $\delta$ if \begin{equation} \label{eq:complexity3} K > \sqrt{\frac{2}{t}}\log\left(\frac{c_5^{(\theta)}(M+1)}{\delta}\right). \end{equation} (This is not a tight bound but suffices in the calculations below.) In summary, if $K$ satisfies both $K > c_1^{(\theta)}t^{-1}$ and \eqref{eq:complexity3} then $K > \Cc_\delta^{(t,\theta)}(M)$. Integrating over $\delta$: \begin{multline} \int_{Q_{m-1}^{\theta}(t)}^{Q_{m}^{\theta}(t)} \Cc_{Q_{m}^{\theta}(t) -u}^{(t,\theta)}(m) du = \int_{0}^{q_{m}^{\theta}(t)} \Cc_{\delta}^{(t,\theta)}(m) d\delta \\ < \int_{0}^{q_{m}^{\theta}(t)} \left[\frac{c_1^{(\theta)}}{t^{1+\kappa}} + \sqrt{\frac{2}{t}}\log\left(\frac{c_5^{(\theta)}(m+1)}{\delta}\right)\right] d\delta\\ = q_{m}^{\theta}(t)\left[\frac{c_1^{(\theta)}}{t^{1+\kappa}} + \sqrt{\frac{2}{t}}\left[\log(c_5^{(\theta)}(m+1)) + 1 - \log q_{m}^{\theta}(t)\right]\right].
\label{eq:complexitybound} \end{multline} It remains to show that the resulting expression \eqref{eq:complexitybound} can be summed over $m$, which is possible by \thmref{thm:gri:1984}; in particular, \begin{align*} q_{m}^{\theta}(t) &= \frac{1}{\sqrt{2\pi (\sigma^{(t,\theta)})^2}}\exp\left(-\frac{(m - \mu^{(t,\theta)})^2}{2(\sigma^{(t,\theta)})^2}\right) + o(1),\\ \mu^{(t,\theta)} &= \frac{2}{t} + O(1), & (\sigma^{(t,\theta)})^2 &= \frac{2}{3t} + O(1). \end{align*} Hence, by Jensen's inequality, \begin{align*} \sum_{m=0}^\infty q_m^\theta(t) \log m &= \mathbb{E}[\log A_\infty^\theta (t)] \leq \log \mathbb{E}[A_\infty^\theta(t)] = O(\log t^{-1}),\\ \sum_{m=0}^\infty q_m^\theta(t) [-\log q_m^\theta (t)] &= \mathbb{E}\left[\log (\sqrt{2\pi}\sigma^{(t,\theta)}) + \frac{1}{2}\left(\frac{A_\infty^\theta (t)-\mu^{(t,\theta)}}{\sigma^{(t,\theta)}}\right)^2\right]\\ &= O(\log t^{-1}) + \frac{1}{2}\mathbb{E}[X^2] + O(1) = O(\log t^{-1}), \end{align*} where $X \sim N(0,1)$. Combining these results with \eqref{eq:complexitybound}, we obtain \begin{align*} \sum_{m=0}^\infty \int_{Q_{m-1}^{\theta}(t)}^{Q_{m}^{\theta}(t)} \Cc_{Q_{m}^{\theta}(t) -u}^{(t,\theta)}(m) du &= O(t^{-(1+\kappa)}) + O(t^{-1/2}\log t^{-1})\\ &= O(t^{-(1+\kappa)}), \end{align*} showing that the first term on the right of \eqref{eq:EN} is $O(t^{-(1+\kappa)})$. The second term is also $O(t^{-(1+\kappa)})$ by a similar argument. Since $\kappa$ was arbitrary, $\mathbb{E}[N^{(t,\theta)}] = o(t^{-(1+\kappa)})$. \qed \\ \noindent {\bf Proof of \propref{prop:coopbob}.} For a diffusion with drift $\gamma$, since $\eta$ is continuous on $(0,1)$, we have $$\int_0^T \frac{\gamma^2(X_t)-\alpha^2(X_t)}{X_t(1-X_t)}\ dt=\int_0^T\ \left[\eta^2(X_t)X_t(1-X_t)+2\eta(X_t)\alpha(X_t)\right]\ dt<\infty,$$ so Novikov's condition is satisfied and a Girsanov transform exists with respect to the neutral WF diffusion with drift $\alpha$, i.e.\ (WF1) holds.
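For clarity, we record the integrand identity used above. Writing the drifts as $\gamma(x)-\alpha(x)=\eta(x)x(1-x)$ (consistent with the expression for $\gamma^{\prime}-\alpha^{\prime}$ used below),
\[
\frac{\gamma^2(x)-\alpha^2(x)}{x(1-x)}=\frac{[\gamma(x)-\alpha(x)][\gamma(x)+\alpha(x)]}{x(1-x)}=\eta(x)\left[2\alpha(x)+\eta(x)x(1-x)\right]=\eta^2(x)x(1-x)+2\eta(x)\alpha(x).
\]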
In particular, \eqref{eq:GirsanovWF} reads \begin{equation} \exp\left\{\int_0^T\eta(X_t)\ dX_t-\frac{1}{2}\int_0^T\ \left[\eta^2(X_t)X_t(1-X_t)+2\eta(X_t)\alpha(X_t)\right]\ dt\right\}. \label{eq:cggirsanov} \end{equation} The function $\eta$ is also continuously differentiable in $(0,1)$, so $\gamma^{\prime}(x)-\alpha^{\prime}(x)=\eta^{\prime}(x)x(1-x)+\eta(x)(1-2x)$ is continuous and \begin{equation} \widetilde\phi(x)=\frac{1}{2}\left[x(1-x)(\eta^2(x)+\eta^{\prime}(x))+2\eta(x)\alpha(x) \right] \label{eq:cgepigraph} \end{equation} which, being itself continuous in $[0,1]$, is then bounded in $(0,1),$ and (WF3) follows.\\ (WF2) and (WF4) are obvious. Thus \eqref{eq:cggirsanov} has a version of the form \eqref{eq:GirsanovWF2} with $\widetilde\phi$ as in \eqref{eq:cgepigraph} and $\widetilde A(x)=\int_0^x\eta(z)\ dz$. \qed \section*{Acknowledgements} We thank Alison Etheridge, Bob Griffiths, Jere Koskela, Thomas Kurtz, Omiros Papaspiliopoulos, and Murray Pollock for useful discussions, and the anonymous referees for numerous helpful comments. \bibliographystyle{imsart-nameyear}
\chapter{Israel matching conditions} \label{Isapp} In this appendix we derive the Israel matching conditions \cite{Israel} from scratch, based on the treatment presented in \cite{Poisson}. We begin with a timelike (or spacelike) hypersurface $\Sigma$, pierced by a congruence of geodesics which intersect it orthogonally. Denoting the proper distance (or proper time) along these geodesics by $\ell$, we can always adjust our parameterisation to set $\ell=0$ on $\Sigma$. (One side of the hypersurface is therefore parameterised by values of $\ell<0$, and the other by values $\ell>0$). Introducing $n^\alpha$, the unit normal to $\Sigma$ in the direction of increasing $\ell$, the displacement away from the hypersurface along one of the geodesics is given by $\d x^\alpha = n^\alpha \d \ell$. From this it follows that $n^\alpha n_\alpha=\ep$, where $\ep=+1$ for timelike $\Sigma$, and $\ep=-1$ for spacelike $\Sigma$ (the unit normal to a timelike hypersurface being spacelike, and vice versa), and that \[ n_\alpha = \ep \,\D_\alpha \ell. \] It is also useful to introduce a continuous coordinate system $x^\alpha$ spanning both sides of the hypersurface, along with a second set of coordinates $y^a$ installed on the hypersurface itself. (Henceforth Latin indices will be used for hypersurface coordinates and Greek indices for coordinates in the embedding spacetime). The hypersurface may then be parameterised as $x^\alpha=x^\alpha(y^a)$, and the vectors (in a Greek sense) \[ e^\alpha_a=\frac{\D x^\alpha}{\D y^a} \] are tangent to curves contained in $\Sigma$ (hence $e^\alpha_a n_\alpha=0$). For displacements within $\Sigma$, \bea \d s^2_\Sigma &=& g_{\alpha\beta}\,\d x^\alpha \d x^\beta \\ &=& g_{\alpha\beta}\,\Big(\frac{\D x^\alpha}{\D y^a}\,\d y^a \Big)\Big(\frac{\D x^\beta}{\D y^b}\,\d y^b \Big) \\ &=& h_{ab}\,\d y^a \d y^b, \eea and so the induced metric $h_{ab}$ on $\Sigma$ is given by \[ h_{ab} = g_{\alpha\beta}\,e^\alpha_a e^\beta_b.
\] To denote the jump in an arbitrary tensorial quantity $A$ across the hypersurface, we will use the notation \[ [A] = \lim_{\ell\tt 0^+}(A) - \lim_{\ell\tt 0^-}(A). \] The continuity of $x^\alpha$ and $\ell$ across $\Sigma$ immediately implies $[n^\alpha]=[e^\alpha_a]=0$. To decompose the bulk metric in terms of the different metrics on either side of $\Sigma$, we will require the services of the Heaviside distribution $\Theta(\ell)$, equal to $+1$ if $\ell>0$; $0$ if $\ell<0$; and indeterminate if $\ell=0$. Note in particular the properties: \[ \Theta^2(\ell)=\Theta(\ell), \qquad \Theta(\ell)\Theta(-\ell)=0, \qquad \frac{\d}{\d \ell}\,\Theta(\ell) = \delta(\ell), \] where $\delta(\ell)$ is the Dirac distribution. We can now express the metric $g_{\alpha\beta}$ in terms of the coordinates $x^\alpha$ as a distribution-valued tensor: \[ g_{\alpha\beta}=\Theta(\ell)\,g^+_{\alpha\beta}+\Theta(-\ell)\,g^-_{\alpha\beta}, \] where $g^+_{\alpha\beta}$ denotes the metric on the $\ell>0$ side of $\Sigma$, and $g^-_{\alpha\beta}$ the metric on the $\ell<0$ side. Differentiating, we find \[ g_{\alpha\beta , \gamma}=\Theta(\ell)\,g^+_{\alpha\beta , \gamma}+\Theta(-\ell)\,g^-_{\alpha\beta , \gamma} + \ep\, \delta(\ell)[g_{\alpha\beta}]n_\gamma. \] The last term is singular; moreover, this term creates problems when we compute the Christoffel symbols by generating the product $\Theta(\ell)\delta(\ell)$, which is not defined as a distribution. In order for the connection to exist as a distribution, we are forced to impose the continuity of the metric across the hypersurface: $[g_{\alpha\beta}]=0$. This statement can be reformulated in terms of hypersurface coordinates alone as $0=[g_{\alpha\beta}]\,e^\alpha_a e^\beta_b = [g_{\alpha\beta}\,e^\alpha_a e^\beta_b] = [h_{ab}]$, \iec the induced metric $h_{ab}$ must be the same on both sides of $\Sigma$. This condition is often referred to as the `first' junction condition.
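As a quick check of these distributional rules, write the inverse metric analogously as $g^{\alpha\beta}=\Theta(\ell)\,g_+^{\alpha\beta}+\Theta(-\ell)\,g_-^{\alpha\beta}$. Then
\[
g_{\alpha\gamma}\,g^{\gamma\beta}=\Theta^2(\ell)\,\delta_\alpha^\beta+\Theta^2(-\ell)\,\delta_\alpha^\beta+\Theta(\ell)\Theta(-\ell)\left(g^+_{\alpha\gamma}g_-^{\gamma\beta}+g^-_{\alpha\gamma}g_+^{\gamma\beta}\right)=\left[\Theta(\ell)+\Theta(-\ell)\right]\delta_\alpha^\beta=\delta_\alpha^\beta,
\]
since $\Theta^2(\pm\ell)=\Theta(\pm\ell)$ and the cross-terms carry the vanishing product $\Theta(\ell)\Theta(-\ell)$.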
To derive the `second' junction condition (the Israel matching condition), we must calculate the distribution-valued Riemann tensor. Beginning with the Christoffel symbols, we obtain \[ \Gamma^\alpha_{\beta\gamma} = \Theta(\ell) \Gamma^{+\alpha}_{\beta\gamma}+\Theta(-\ell)\Gamma^{-\alpha}_{\beta\gamma}, \] where $\Gamma^{\pm\alpha}_{\beta\gamma}$ are the Christoffel symbols constructed from $g^\pm_{\alpha\beta}$. Thus \[ \Gamma^\alpha_{\beta\gamma , \delta} = \Theta(\ell)\Gamma^{+\alpha}_{\beta\gamma , \delta} +\Theta(-\ell)\Gamma^{-\alpha}_{\beta\gamma , \delta} + \ep \delta(\ell)[\Gamma^\alpha_{\beta\gamma}]n_\delta, \] and the Riemann tensor is \[ R^\alpha_{\beta\gamma\delta}=\Theta(\ell)R^{+\alpha}_{\beta\gamma\delta}+\Theta(-\ell)R^{-\alpha}_{\beta\gamma\delta} +\delta(\ell)A^\alpha_{\beta\gamma\delta}, \] where \[ A^\alpha_{\beta\gamma\delta}=\ep\([\Gamma^\alpha_{\beta\delta}]n_\gamma-[\Gamma^\alpha_{\beta\gamma}]n_\delta\). \] The quantities $A^\alpha_{\beta\gamma\delta}$ transform as a tensor since they are the difference of two sets of Christoffel symbols. We will now try to find an explicit expression for this tensor. Observe that the continuity of the metric across $\Sigma$ in the coordinates $x^\alpha$ implies that its tangential derivatives must also be continuous. Thus, if $g_{\alpha\beta , \gamma}$ is to be discontinuous, the discontinuity must be directed along the normal vector $n^\alpha$. We can therefore write \[ [g_{\alpha\beta,\gamma}]=\kappa_{\alpha\beta}n_\gamma, \] for some tensor $\kappa_{\alpha\beta}$ (given explicitly by $\kappa_{\alpha\beta}=\ep [g_{\alpha\beta,\gamma}]n^\gamma$). 
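The explicit expression for $\kappa_{\alpha\beta}$ quoted above follows upon contracting with the normal: since $n^\gamma n_\gamma=\ep$ and $\ep^2=1$,
\[
[g_{\alpha\beta,\gamma}]\,n^\gamma=\kappa_{\alpha\beta}\,n_\gamma n^\gamma=\ep\,\kappa_{\alpha\beta}
\qquad\Rightarrow\qquad
\kappa_{\alpha\beta}=\ep\,[g_{\alpha\beta,\gamma}]\,n^\gamma.
\]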
We then find \[ [\Gamma^\alpha_{\beta\gamma}] = \half\,(\kappa^\alpha_\beta n_\gamma+\kappa^\alpha_\gamma n_\beta - \kappa_{\beta\gamma} n^\alpha), \] and hence \[ A^\alpha_{\beta\gamma\delta}=\frac{\ep}{2}\,(\kappa^\alpha_\delta n_\beta n_\gamma - \kappa^\alpha_\gamma n_\beta n_\delta - \kappa_{\beta\delta} n^\alpha n_\gamma +\kappa_{\beta\gamma}n^\alpha n_\delta). \] A few lines of calculation show us that the $\delta$-function part of the Einstein tensor is \[ \label{Seq} S_{\alpha\beta} = \frac{\ep}{2}\,\(\kappa_{\mu\alpha}n^\mu n_\beta+\kappa_{\mu\beta}n^\mu n_\alpha-\kappa n_\alpha n_\beta-\ep \kappa_{\alpha\beta}-(\kappa_{\mu\nu}n^\mu n^\nu - \ep \kappa)g_{\alpha\beta}\). \] On the other hand, the total stress-energy tensor is of the form \[ T^\mathrm{\,total}_{\alpha\beta}=\Theta(\ell)T^+_{\alpha\beta}+\Theta(-\ell)T^-_{\alpha\beta}+\delta(\ell)T_{\alpha\beta}, \] where $T^+_{\alpha\beta}$ and $T^-_{\alpha\beta}$ represent the bulk stress-energy in the regions where $\ell> 0$ and $\ell < 0$ respectively, while $T_{\alpha\beta}$ denotes the stress-energy localised on the hypersurface $\Sigma$ itself. From the Einstein equations, we find $T_{\alpha\beta}=(8\pi G)^{-1} S_{\alpha\beta}$. It then follows that $T_{\alpha\beta}n^\beta=0$, implying $T_{\alpha\beta}$ is tangent to $\Sigma$. This allows us to decompose $T^{\alpha\beta}$ as \[ T^{\alpha\beta}=T^{ab}e^\alpha_a e^\beta_b, \] where $T_{ab}=T_{\alpha\beta}e^\alpha_a e^\beta_b$ re-expresses the hypersurface stress-energy tensor in terms of coordinates intrinsic to $\Sigma$. From (\ref{Seq}), we have \bea 16\pi G T_{ab} &=& -\kappa_{\alpha\beta}e^\alpha_a e^\beta_b-\ep(\kappa_{\mu\nu}n^\mu n^\nu-\ep\kappa)h_{ab} \nonumber\\ &=& -\kappa_{\alpha\beta}e^\alpha_a e^\beta_b-\kappa_{\mu\nu}(g^{\mu\nu}-h^{mn}e^\mu_m e^\nu_n)h_{ab}+\kappa h_{ab} \nonumber\\ &=&-\kappa_{\alpha\beta}e^\alpha_a e^\beta_b+h^{mn}\kappa_{\mu\nu}e^\mu_m e^\nu_n h_{ab}. 
\eea Finally, we can relate $T_{ab}$ to the jump in extrinsic curvature across $\Sigma$. From \[ [\grad_{\alpha}n_\beta]= - [\Gamma^\gamma_{\alpha\beta}]n_\gamma = \half\,(\ep \kappa_{\alpha\beta}-\kappa_{\gamma\alpha}n_\beta n^\gamma-\kappa_{\gamma\beta}n_\alpha n^\gamma), \] we deduce that \[ [K_{ab}] = [\grad_\alpha n_\beta]e^\alpha_a e^\beta_b = \frac{\ep}{2}\,\kappa_{\alpha\beta}e^\alpha_a e^\beta_b. \] This leads us to our goal: \[ 8\pi G T_{ab}=-\ep\([K_{ab}]-[K]h_{ab}\), \] which is the Israel matching condition (or `second' junction condition). \chapter{Five-dimensional longitudinal gauge} \label{appA} Starting with the background metric in the form (\ref{metrica}), the most general scalar metric perturbation can be written as \cite{Carsten} \bea \d s^2 &=& n^2\left(-(1+2\Phi)\,\d t^2 -2W\,\d t\, \d y + t^2 (1-2\Gamma)\,\d y^2 \right) - 2\nabla_i \alpha \,\d x^i \,\d t \nonumber \\ && +2t^2\,\nabla_i\beta \,\d y \,\d x^i+ b^2\left( (1-2\Psi)\,\delta_{ij}-2\nabla_i\nabla_j\chi\right)\d x^i\,\d x^j . \eea Under a gauge transformation $x^A\tt x^A+\xi^A$, these variables transform as \bea \nonumber & & \Phi \rightarrow \Phi-\dot{\xi}^t-\xi^t \frac{\dot{n}}{n}-\xi^y \frac{n'}{n}, \\ \nonumber & & \Gamma \rightarrow \Gamma+\xi'^y+\frac{1}{t}\xi^t+\xi^t \frac{\dot{n}}{n}+\xi^y \frac{n'}{n}, \\ \nonumber & & W \rightarrow W-\xi'^t+t^2 \dot{\xi}^y, \\ \nonumber & & \alpha \rightarrow \alpha-\xi^t+\frac{b^2}{n^2} \dot{\xi}^s, \\ \nonumber & & \beta \rightarrow \beta-\xi^y-\frac{b^2}{n^2 t^2} \xi'^s, \\ \nonumber & & \Psi \rightarrow \Psi+\xi^t \frac{\dot{b}}{b}+\xi^y \frac{b'}{b}, \\ & & \chi \rightarrow \chi+\xi^s , \eea where dots and primes indicate differentiation with respect to $t$ and $y$ respectively. Since a five-vector $\xi^A$ has three scalar degrees of freedom $\xi^t$, $\xi^y$ and $\xi^i=\nabla_i \xi^{s}$, only four of the seven functions $(\Phi,\Gamma,W,\alpha,\beta,\Psi,\chi)$ are physical.
We can therefore construct four gauge-invariant variables, which are \bea \nonumber & & \Phi_{\inv}=\Phi-\dot{\tilde{\alpha}}-\tilde{\alpha}\, \frac{\dot{n}}{n}-\tilde{\beta}\, \frac{n'}{n}, \\ \nonumber & & \Gamma_{\inv}=\Gamma+\tilde{\beta}'+\frac{1}{t}\,\tilde{\alpha} +\tilde{\alpha}\, \frac{\dot{n}}{n}+\tilde{\beta}\, \frac{n'}{n}, \\ \nonumber & & W_{\inv}=W-\tilde{\alpha}'+t^2\, \dot{\tilde{\beta}}, \\ & & \Psi_{\inv}=\Psi+\frac{\dot{b}}{b}\,\tilde{\alpha}+\frac{b'}{b}\, \tilde{\beta}, \label{gauges} \eea where $\tilde{\alpha}=\alpha-(b^2/n^2)\, \dot{\chi}$ and $\tilde{\beta}=\beta+(b^2/n^2 t^2)\, \chi'$. In analogy with the four-dimensional case, we then define five-dimensional longitudinal gauge by $\chi=\alpha=\beta=0$, giving \begin{eqnarray} \Phi_{\inv}&=&\Phi_L, \qquad \,\,\Gamma_{\inv}=\Gamma_L, \nonumber \\ W_{\inv}&=&W_L, \qquad \Psi_{\inv}=\Psi_L , \end{eqnarray} \ie the gauge-invariant variables are equal to the values of the metric perturbations in longitudinal gauge. This gauge is spatially isotropic in the $x^i$ coordinates, although in general there will be a non-zero $t, y$ component of the metric. As for the locations of the branes, these will in general be different for different choices of gauge. In the case where the brane matter has no anisotropic stresses, the location of the branes is easy to establish. Working out the Israel matching conditions, we find that $\beta$ on the branes is related to the anisotropic part of the brane stress-energy. If we consider only perfect fluids, for which the shear vanishes, then the Israel matching conditions give $\beta(y=\pm y_0)=0$. From the gauge transformations above, we can transform into the gauge $\alpha=\chi=0$ using only a $\xi^s$ and a $\xi^t$ transformation. We may then pass to longitudinal gauge ($\alpha=\beta=\chi=0$) with the transformation $\xi^y=\tilde{\beta}$ alone.
Since $\beta$ (and hence $\tilde{\beta}$) vanishes on the branes, $\xi^y$ must also vanish leaving the brane trajectories unperturbed. Hence, in longitudinal gauge the brane locations remain at their unperturbed values $y=\pm y_0$. Transforming to a completely arbitrary gauge, we see that in general the brane locations are given by $y = \pm y_0 - \tilde{\beta}$. \chapter{The early universe} \label{earlyunivchapter} \begin{flushright} \begin{minipage}{7.5cm} \small {\it \noindent Now entertain conjecture of a time \\ When creeping murmur and the poring dark \\ Fills the wide vessel of the universe. } \begin{flushright} \noindent Henry V, Act IV. \end{flushright} \end{minipage} \end{flushright} \section{The horizon problem} Observations of the cosmic microwave background (CMB) allow a precise determination of the nature of the primordial density perturbations. To date, these observations require the primordial density perturbations to be small-amplitude, adiabatic, Gaussian random fluctuations with a nearly scale-invariant spectrum \cite{WMAP3}. Even though this is almost the simplest conceivable possibility, to generate density perturbations of this nature demands new physics beyond the Standard Model. The essence of the problem is simple causality: a universe originating in a big bang has a particle horizon associated with the fact that light has only travelled a finite distance in the finite time elapsed since the big bang. For a universe with scale factor $a$ given in terms of the proper time $t$ by $a=(t/t_0)^p$, the size of this horizon in comoving coordinates is \[ \label{d} d(t) = \int_{t_i}^{t}\frac{\d t'}{a(t')} = \frac{p}{(1-p)}\,(\cH ^{-1}-\cH _i^{-1}), \] where the comoving Hubble radius $\cH^{-1} = (\d a /\d t)^{-1} = p^{-1}\,t_0^p\, t^{1-p}$ and $t_i$ is some initial time. For standard matter, $p<1$ (\eg $p=1/2$ for radiation, $p=2/3$ for matter), and so the size of the horizon is always increasing. 
At sufficiently late times, the integral is dominated by its upper limit; light travels the greatest distance at late times, and the horizon grows as $d\sim \cH^{-1}\sim t^{1-p}$. Observations of the CMB, however, indicate that the Universe was quasi-homogeneous at the time of last scattering on scales much larger than the size of the causal horizon at that time. Although the angular scale subtended by the horizon at last scattering is only $\sim 1$\textdegree, the temperature of the microwave sky is uniform to one part in a hundred thousand over much larger angular scales. This puzzle of explaining why the universe is nearly homogeneous over regions a priori causally independent is known as the {\it horizon problem}. \subsection{Inflation} One possible resolution of the horizon problem is that the early universe underwent a period of inflation, defined as an epoch in which the scale factor is accelerating, \iec $\ddot{a}>0$, where dots denote differentiation with respect to proper time. The comoving Hubble radius $\cH^{-1}$ is therefore shrinking during inflation, since $\dot{\cH}=\ddot{a}>0$. (Note, however, that the converse statement, that a shrinking comoving Hubble radius implies inflation, is {\it not} true, as we will see in the next section). To see how inflation solves the horizon problem, let us adopt a simple model in which the scale factor expands exponentially, $a=\exp(H(t-t_0))$. The size of the comoving horizon, equal to the comoving distance travelled by a light ray during inflation, is given by \[ \label{d_inf} d_\mathrm{infl.}(t)= \int_{t_i}^t \frac{\d t'}{a(t')} = \cH^{-1}_i - \cH^{-1}, \] where the comoving Hubble radius $\cH^{-1}=(aH)^{-1}$. Thus, due to the exponential expansion of the scale factor, after a sufficient number of e-foldings (defined as $N=\ln(a/a_i)$), the integral is dominated by the lower limit; light rays travel the greatest distance at early times and we find a near-constant horizon size $d_\mathrm{infl.}\approx \cH_i^{-1}$.
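To see how rapidly the upper limit becomes negligible, note that with $H$ constant the comoving Hubble radius after $N$ e-foldings has shrunk to
\[
\cH^{-1}=(aH)^{-1}=e^{-N}\,\cH^{-1}_i,
\]
so that $d_\mathrm{infl.}=\cH^{-1}_i\,(1-e^{-N})$; for the $N\approx 55$ e-foldings estimated below, the correction term is of order $e^{-55}\sim 10^{-24}$.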
By choosing the time of onset of inflation $t_i$ to be sufficiently small, and the proper Hubble parameter $H$ to be sufficiently large, we can always ensure that the horizon size at the end of inflation is sufficiently large to solve the horizon problem. As a rough estimate of the number of e-folds of inflation required, we stipulate that the comoving distance travelled by light during the radiation era must equal the comoving distance travelled by light during inflation, \iec the causal horizon at the end of inflation equals the size of our past light-cone at that time, since we live approximately at the end of the radiation era. This ensures that the portion of the surface of reheating visible to us (and hence any later surface, such as the surface of last scattering) is causally connected. From (\ref{d}) with $p=1/2$, we see that the comoving distance travelled by light during the radiation era is approximately equal to the comoving Hubble radius at matter-radiation equality, $\cH^{-1}_\now$. From (\ref{d_inf}), this must then equal the comoving Hubble radius at the onset of inflation, $\cH^{-1}_i$. The shrinking of the comoving Hubble radius during inflation is therefore matched by the growth of the comoving Hubble radius during the radiation era. Since during inflation $\cH^{-1}\sim a^{-1}$, whereas during radiation domination $\cH^{-1}\sim a$, \[ \frac{a_\now}{a_\rh}=\frac{\cH^{-1}_\now}{\cH^{-1}_\rh}=\frac{\cH^{-1}_i}{\cH^{-1}_\rh}=\frac{a_\rh}{a_i}=e^N, \] where $a_\rh$ is the scale factor evaluated at reheating, {\it etc}. 
Finally, since the scale factor is inversely proportional to the temperature during radiation domination, we deduce that the number of e-folds of inflation required is roughly $N\approx \ln(T_\rh/T_\now) \approx \ln(10^{12}\, \GeV/1\,\mathrm{meV})\approx 55$, assuming that reheating occurred approximately at the GUT scale\footnote{ Note that if inflation does terminate at the GUT scale, observational upper bounds on the density of monopoles constrain the GUT scale to be less than $10^{12}\,\GeV$ \cite{Peacock}.}. Having dealt with the horizon problem, we can now ask about the nature of the matter required to sustain a period of inflation. From the Friedmann equation for a homogeneous and isotropic universe, \[ \frac{\ddot{a}}{a} = -\frac{1}{6}\,(\rho+3P) \] (where we have set $8\pi G=1$), we see that inflation ($\ddot{a}>0$) requires matter with density $\rho$ and pressure $P$ such that $\rho+3P<0$. This corresponds to an exotic equation of state with $\w=P/\rho<-1/3$. Ruling out a cosmological constant with $\w=-1$ on the grounds that inflation must at some point terminate, the next simplest candidate is a scalar field $\vphi$, minimally coupled to gravity, and with a potential $V(\vphi)$. The stress tensor then takes the form of that for a perfect fluid, with $\rho =\dot{\vphi}^2/2+V$ and $P =\dot{\vphi}^2/2-V$. Inflation therefore requires that the potential energy is greater than the scalar field kinetic energy, $\dot{\vphi}^2 < V$. From the action \[ \label{scalarfieldaction} S = \half\int\sqrt{g}\left[R-(\D\vphi)^2-2V(\vphi)\right], \] the equations of motion are \bea \label{pbgdeom1} &&3 H^2 =\half\,\dot{\vphi}^2+V, \\ \label{pbgdeom2} && 0 =\ddot{\vphi}+3H\dot{\vphi}+V_{,\vphi}, \\ \label{pbgdeom3} &&\dot{H}=-\half\,\dot{\vphi}^2, \eea where the Hubble parameter $H=\dot{a}/a$, and the last equation follows from the first two.
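To see explicitly how (\ref{pbgdeom3}) follows from the first two equations, differentiate (\ref{pbgdeom1}) with respect to time and use (\ref{pbgdeom2}) to eliminate $\ddot{\vphi}+V_{,\vphi}$:
\[
6H\dot{H}=\dot{\vphi}\,\ddot{\vphi}+V_{,\vphi}\,\dot{\vphi}=\dot{\vphi}\,\big(\ddot{\vphi}+V_{,\vphi}\big)=-3H\dot{\vphi}^2
\qquad\Rightarrow\qquad
\dot{H}=-\half\,\dot{\vphi}^2.
\]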
A particularly useful limit in which the dynamics are simplified is the {\it slow-roll} regime, in which the scalar field kinetic energy is negligible, $\dot{\vphi}^2\ll V$, and the system is over-damped, $\ddot{\vphi} \ll H\dot{\vphi}$. This holds provided the slow-roll parameters \bea \label{slowrolleps} \eps &=&\half\,\(\frac{V_{,\vphi}}{V}\)^2 \approx \half \frac{\dot{\vphi}^2}{H^2}, \\ \label{slowrolleta} \eta &=& \frac{V_{,\vphi\vphi}}{V} \approx -\frac{\ddot{\vphi}}{H\dot{\vphi}}+\half\frac{\dot{\vphi}^2}{H^2}, \eea are much smaller than unity (the approximate relations following in the limit when this is true). Under conditions of slow roll, the number of e-foldings obtained during inflation is simply \[ N = \int_{\vphi_i}^{\vphi_f}\frac{\d \vphi}{\dot{\vphi}}H(\vphi) \approx \int_{\vphi_i}^{\vphi_f} \frac{V}{V_{,\vphi}}\,\d \vphi. \] Finally, let us introduce a simple model that will prove useful, obtained by setting the equation of state parameter $\w=P/\rho$ to a constant. In this case the dynamics are exactly solvable (independently of considerations of slow roll), yielding the scaling solution \[ \label{toymodel} a=\(\frac{t}{t_0}\)^p, \qquad \vphi = \sqrt{2p}\,\ln\(\frac{t}{t_0}\), \qquad V = -V_0 \,\exp(-\sqrt{\frac{2}{p}}\,\vphi), \] where $t_0$ and $V_0$ are given in terms of the constant parameter $p$ by $t_0=p-1$ and $V_0 = p\,(1-3p)/t_0^2$. The parameter $p$ is in turn related to the equation of state parameter $\w$ by $p = \frac{2}{3(1+\w)}$. We recover slow-roll inflation in the limit where $p\gg 1$ ($\w \approx -1$), since the slow-roll parameters are $\eps = 1/p\,$ and $\eta=2/p\,$. In this case the potential takes the form of a positive-valued exponential.
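It is straightforward to verify that (\ref{toymodel}) solves the equations of motion. From the solution, $H=p/t$ and $\dot{\vphi}=\sqrt{2p}/t$, so (\ref{pbgdeom1}) requires
\[
V=3H^2-\half\,\dot{\vphi}^2=\frac{p\,(3p-1)}{t^2},
\]
in agreement with $-V_0\exp(-\sqrt{2/p}\,\vphi)=-V_0\,(t_0/t)^2$ once we insert $V_0=p\,(1-3p)/t_0^2$. The quoted slow-roll parameters follow similarly: $\eps=\half\,(V_{,\vphi}/V)^2=1/p$ and $\eta=V_{,\vphi\vphi}/V=2/p$.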
A concrete example is provided by the simple model with constant $\w$ discussed above. From (\ref{toymodel}) with $p>1$, we obtain an expanding universe for times $0\le t <\inf$. If instead we take $0<p<1$, we obtain a contracting universe with $t_0<0$ and the time coordinate taking values in the range $-\inf<t\le 0$. The potential now corresponds to a negative-valued exponential. Since $\w>-1/3$, the universe is not inflating and $\ddot{a}<0$. Nonetheless, the comoving Hubble radius is still shrinking during the collapse: re-expressed in terms of conformal time $\tau= -[t/(p-1)]^{1-p}$, the scale factor and scalar field in (\ref{toymodel}) are \[ a=|\tau|^{p/(1-p)}, \qquad \vphi = \frac{\sqrt{2p}}{1-p}\ln|\tau| \] where $-\inf<\tau\le 0$. (Note that for $0<p<1$, the range $-\inf<\tau\le 0$ is mapped to $-\inf < t \le 0$, whereas for $p>1$, the same range is instead mapped to $0\le t < \inf$). For general $p$, the magnitude of the comoving Hubble radius is therefore \[ |\cH|^{-1} = \left|(p^{-1}-1)\,\tau\right|, \] where we have taken the absolute value since in a collapsing universe $\cH<0$. With this definition, a shrinking comoving Hubble radius corresponds, in a collapsing universe, to the condition $\dot{\cH}=\ddot{a}<0$. In contrast, in an expanding universe, a shrinking Hubble radius implies $\dot{\cH}=\ddot{a}>0$, and hence inflation. For a universe undergoing a bounce, the resolution of the horizon problem is trivial. The comoving distance travelled by light rays during the collapsing phase is as given in (\ref{d}), except that now, owing to the shrinking of the comoving Hubble radius during the collapse, the integral is dominated by its lower limit: provided the collapsing phase began at some sufficiently negative initial time $t_i$, the comoving distance travelled scales as $|\cH_i|^{-1}$. By increasing the duration of the collapsing phase, the horizon size at the big bang can be made arbitrarily large.
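The conformal-time expressions quoted above follow directly from (\ref{toymodel}) with $t_0=p-1$. Taking the expanding branch ($p>1$, $t>0$) for definiteness,
\[
\tau=\int\frac{\d t}{a}=\frac{t_0^{\,p}\,t^{1-p}}{1-p}=-\left(\frac{t}{p-1}\right)^{1-p},
\qquad
a=\left(\frac{t}{p-1}\right)^{p}=|\tau|^{p/(1-p)},
\]
and the contracting branch works in the same way once absolute values are inserted.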
A lower bound is given by the size of our past light cone at the big bang, yielding the estimate $|\cH_i|^{-1} \sim \cH^{-1}_\now$, \iec the shrinkage of the comoving Hubble radius during the collapse is matched by its growth in the subsequent expanding phase. During both inflation and a collapsing era, then, the comoving Hubble radius shrinks below any fixed comoving scale. The two scenarios differ, however, when interpreted in physical coordinates. While the proper distance corresponding to a fixed comoving length scales as $a=(t/t_0)^p$, the proper Hubble radius is $H^{-1}=t/p\,$. In an inflating universe ($p>1$) therefore, the physical wavelength corresponding to a fixed comoving wavevector is stretched outside the Hubble radius as $t\tt \inf$. In a collapsing universe ($0<p<1$), however, the opposite happens: as $t$ approaches zero from negative values, the shrinking of the Hubble radius outpaces the shrinking of physical wavelengths. Finally, for completeness, we note that a stability analysis \cite{Gratton2} shows that the background solution (\ref{toymodel}) is a stable attractor under small perturbations, both in the expanding case where $p>1$ ($\w<-1/3$), and also in the contracting case provided that $p<1/3$ ($\w>1$). This conclusion remains valid when we include more general forms of matter. Intuitively, in an inflating universe with $\w\approx-1$, this is because the scalar field potential energy density is nearly constant, whereas the curvature ($\propto a^{-2}$), matter ($\propto a^{-3}$), radiation ($\propto a^{-4}$) and other forms of energy density all decrease as the scale factor grows. Likewise, in a collapsing universe with $\w\gg 1$, the scalar field kinetic energy density increases as $a^{-3(1+\w)}$ as the scale factor shrinks, whereas curvature, matter, radiation and other forms of energy density increase at a slower rate.
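As a concrete illustration of the collapsing attractor, take $\w=3$ (so that $p=1/6<1/3$): during collapse the scalar field kinetic energy density grows as
\[
\rho_\vphi\propto a^{-3(1+\w)}=a^{-12},
\]
which far outpaces the growth of curvature ($\propto a^{-2}$), matter ($\propto a^{-3}$) and radiation ($\propto a^{-4}$) as $a\tt 0$.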
Consequently, in each case the dynamics are largely insensitive to the initial conditions: after a few e-folds of expansion or contraction, the solution converges rapidly to the attractor. This resolves the so-called flatness and homogeneity problems, explaining why the present-day universe is so uniform and geometrically flat, without recourse to fine-tuning of the initial conditions. \section{Classical cosmological perturbation theory} We now turn our attention to the behaviour of classical perturbations in linearised general relativity. Considering only the scalar sector relevant to the density perturbations, the perturbed line element for a flat FRW universe can be expressed in the general form \[ \d s^2 = a^2(\tau) \( -(1+2\phi)\,\d \tau^2 + 2B_{,i} \d \tau \d x^i+((1-2\psi)\delta_{ij}+2E_{,ij})\,\d x^i \d x^j\), \] where the spacetime functions $\phi$, $\psi$, $B$ and $E$ are linear metric perturbations. Under a gauge transformation $\xi^\mu = (\xi^0,\D^i\xi)$, the perturbations transform as \bea \phi & \tt & \phi - {\xi^0}' - \cH\xi ^0, \\ \psi & \tt & \psi+\cH \xi ^0, \\ B &\tt& B+\xi^0-\xi', \\ E &\tt& E-\xi, \eea where the primes indicate differentiation with respect to the conformal time $\tau$. We may then construct the gauge-invariant Bardeen potentials \cite{bardeen} \bea \Phi &=& \phi+\cH(B-E')+(B-E')' ,\\ \Psi &=& \psi-\cH(B-E'). \eea Another very useful gauge-invariant quantity is the curvature perturbation on comoving hypersurfaces, $\zeta$, defined in any gauge by \[ \zeta = \psi-\frac{\cH}{\rho+P}\,q \] where the potential $q$ satisfies $q_{,i}=\delta T^0_i$. (Thus, in comoving gauges characterised by $\delta T^0_i=q_{,i}=0$, $k^2 \zeta$ is proportional to the curvature of spatial slices, given by $R^{(3)}=-4k^2 \psi$). We can easily relate this definition to an equivalent one in terms of the Bardeen potentials as follows. 
Working in longitudinal gauge for convenience ($B_L=E_L=0$, and hence $\phi_L=\Phi$ and $\psi_L=\Psi$, where the subscript $L$ will be used to denote quantities evaluated in this gauge), the perturbed $G^0_i$ Einstein equation yields the momentum constraint equation \[ \(\delta G^0_i\)_L = -\frac{2}{a^2} \left[\Psi'+\cH \Phi\right]_{,i} = (\delta T^0_i)_L = q_{L,i} \] (where $8\pi G=1$ still). Making use of the background Friedmann equation in the form $\cH'-\cH^2=-a^2(\rho+P)/2$, we therefore find (in any gauge now) \[ \label{zetadef} \zeta = \Psi-\frac{\cH (\Psi'+\cH \Phi)}{\cH'-\cH^2} =\Psi-\frac{H}{\dot{H}}\,(\dot{\Psi}+H\Phi), \] where we have reverted to proper time in the last equality. The remaining components of the perturbed Einstein tensor, in longitudinal gauge, are: \bea \(\delta G^0_0\)_L &=& \frac{2}{a^2}\,\left[3\cH^2\Phi+3\cH\Psi'-\grad^2\Psi\right] ,\\ \(\delta G^i_j\)_L &=& \frac{1}{a^2}\,\left[2\Psi''+4\cH\Psi'-\grad^2(\Psi-\Phi)+2\cH\Phi'+2(2\cH'+\cH^2)\Phi\right]\delta^i_j \nonumber \\ && \qquad +\frac{1}{a^2}\delta^{ik}(\Psi-\Phi)_{,jk}. \eea Specialising to the case of a single scalar field, the absence of anisotropic stresses ($\delta T^i_j=0$ for $i\not= j$) forces the two Bardeen potentials to coincide, thus $\Phi=\Psi$. 
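The equality of the conformal-time and proper-time forms of $\zeta$ in (\ref{zetadef}) is easily checked numerically. The sketch below (parameter values and test functions are arbitrary and purely illustrative) uses a power-law background $a=t^p$, converting conformal derivatives via $f'=a\dot{f}$:

```python
import math

p, t = 0.4, 2.7                     # arbitrary background exponent, sample time

a    = t ** p                       # a = t^p
H    = p / t                        # H = adot/a
Hdot = -p / t**2
cH   = a * H                        # conformal Hubble rate cH = aH
cHp  = a * (a * H**2 + a * Hdot)    # cH' = a d(aH)/dt, so cH' - cH^2 = a^2 Hdot

# arbitrary smooth test perturbations and their proper-time derivatives
Psi, Psidot = math.sin(t), math.cos(t)
Phi  = math.exp(0.3 * t)
Psip = a * Psidot                   # conformal derivative Psi' = a Psidot

conformal = cH * (Psip + cH * Phi) / (cHp - cH**2)
proper    = (H / Hdot) * (Psidot + H * Phi)
```

Since $\cH'-\cH^2=a^2\dot{H}$, the two expressions agree identically, independently of the functions chosen.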
Evaluating the perturbed stress tensor to linear order and making use of the background equations of motion in conformal time, \bea \label{bgdeom1} 0 &=& 3\cH^2 - \half \vphi_0'^2-a^2 V, \\ \label{bgdeom2} 0 &=& \vphi_0''+2\cH\vphi_0'+a^2 V_{,\vphi}, \\ \label{bgdeom3} 0 &=& \cH'-\cH^2+\half\vphi_0'^2, \eea we find the linearised Einstein equations in longitudinal gauge read \bea \label{pert1} \grad^2\Phi-3\cH\Phi'-(\cH'+2\cH^2)\Phi &=& \half\,\vphi_0'\delta\vphi_L'+\half\,a^2V_{,\vphi}\delta\vphi_L ,\\ \label{pert2} \Phi'+\cH\Phi &=& \half\,\vphi_0'\delta\vphi_L ,\\ \label{pert3} \Phi''+3\cH\Phi'+(\cH'+2\cH^2)\Phi &=& \half\,\vphi_0'\delta\vphi_L'-\half\,a^2V_{,\vphi}\delta\vphi_L , \eea where $\delta\vphi_L$ denotes the perturbation of the scalar field in longitudinal gauge. We now subtract (\ref{pert1}) from (\ref{pert3}). Using the background equation of motion (\ref{bgdeom2}), we can further subtract $(4\cH+2\vphi_0''/\vphi_0')$ times (\ref{pert2}) to find the following second-order differential equation for the Newtonian potential: \[ \label{phieom} \Phi''+2\(\cH-\frac{\vphi_0''}{\vphi_0'}\)\Phi'-\grad^2\Phi+2\(\cH'-\cH\frac{\vphi_0''}{\vphi_0'}\)\Phi = 0. \] (Note the absence of $\delta\vphi_L$ ensures that this equation is now gauge invariant). Combining (\ref{phieom}) with (\ref{zetadef}) (recalling $\Phi=\Psi$), we then have \[ \label{phifromzeta} \frac{(\cH^2-\cH')}{\cH}\,\zeta' = \frac{\,\vphi_0'^2}{2\cH}\,\zeta'=\grad^2\Phi . \] This tells us that on large scales, where the gradient term is negligible, $\zeta$ is conserved (except possibly if $\vphi_0'= 0$ at some point during the cosmological evolution, \eg during reheating when the inflaton field undergoes small oscillations). In fact, even if $\vphi_0'$ does vanish at a particular moment in time, it turns out that $\zeta$ is still conserved outside the Hubble radius. 
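The algebra leading to (\ref{phieom}) can be verified numerically: the combination (\ref{pert3}) minus (\ref{pert1}), minus $(4\cH+2\vphi_0''/\vphi_0')$ times (\ref{pert2}), reduces identically to the left-hand side of (\ref{phieom}) once $a^2V_{,\vphi}$ is eliminated using (\ref{bgdeom2}). The sketch below checks this in Fourier space ($\grad^2\tt -k^2$) with arbitrary test functions; since the cancellation is an algebraic identity, it holds for any such choice:

```python
import math

k, tau = 1.3, 0.8                       # arbitrary mode and sample point

# arbitrary smooth test functions and their analytic derivatives
vp, vpp     = math.cos(tau) + 2.0, -math.sin(tau)    # phi0', phi0''
cH, cHp     = 0.5 + 0.1 * tau, 0.1                   # cal H, cal H'
Phi         = math.exp(0.3 * tau)
Phip, Phipp = 0.3 * Phi, 0.09 * Phi
dphi, dphip = 1.0 + 0.2 * tau**2, 0.4 * tau          # delta phi_L and derivative

a2V = -(vpp + 2.0 * cH * vp)            # a^2 V_{,phi} from background eom

# residuals (LHS - RHS) of the three perturbed Einstein equations
R1 = -k**2*Phi - 3*cH*Phip - (cHp + 2*cH**2)*Phi - 0.5*vp*dphip - 0.5*a2V*dphi
R2 = Phip + cH*Phi - 0.5*vp*dphi
R3 = Phipp + 3*cH*Phip + (cHp + 2*cH**2)*Phi - 0.5*vp*dphip + 0.5*a2V*dphi

combo  = R3 - R1 - (4*cH + 2*vpp/vp) * R2
target = Phipp + 2*(cH - vpp/vp)*Phip + k**2*Phi + 2*(cHp - cH*vpp/vp)*Phi
```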
For quite general matter, it can be shown (see \eg \cite{LangloisInflRev}) using the perturbed continuity equation that $\zeta$ is conserved on super-Hubble scales, provided only that the perturbations are {\it adiabatic} (\iec every point in spacetime goes through the same matter history. Note this automatically holds for single field models). This accounts for the utility of $\zeta$ in the calculation of inflationary perturbations: once a mode exits the Hubble radius, $\zeta$ for that mode is conserved. Despite our ignorance of the processes occurring during reheating, the initial conditions for a mode re-entering the Hubble radius in the radiation era are fully specified by the conserved value of $\zeta$ calculated as the mode left the Hubble radius during inflation. \section{Perturbations in inflation} Inflation stretches initial vacuum quantum fluctuations into macroscopic cosmological perturbations, thereby providing the initial conditions for the subsequent classical radiation-dominated era. Remarkably, the nearly scale-invariant spectrum of perturbations predicted by inflation exactly accounts for present-day observations of the CMB. The computation of the primordial fluctuations arising in inflationary models was first discussed in \cite{Mukhanov:1981xt, Starobinsky:1982ee, Hawking:1982cz, Guth:1982ec, Bardeen:1983qw}; the present section comprises a review of these classic calculations. (Other useful review articles include \cite{mukhanov, LangloisInflRev, Brandenberger:2005be, Turok:2002yq}). \subsection{Massless scalar field in de Sitter} Before launching into a full calculation of the quantisation of a scalar field including gravitational backreaction, let us first sharpen our claws on a simple toy model; namely, a massless scalar field on a {\it fixed} de Sitter background. The cosmological scale factor is given by $a \propto \exp(Ht)$, which in conformal time reads $a = -1/H\tau$. (Note the conformal time takes values in the range $-\inf<\tau<0$). 
The action for a massless scalar field in this geometry is \[ S = -\half \int \dx \rootg (\D \vphi)^2 = \half \int \d \tau \,\d ^3x\, a^2[\vphi'^2-(\vgrad\vphi)^2], \] where differentiation with respect to conformal time is denoted by a prime. Changing variables to $\chi=a\vphi$ and integrating by parts, we obtain \[ S = \half\int\d\tau\,\d^3x\,[ \chi'^2-(\vgrad \chi)^2+\frac{a''}{a}\,\chi^2], \] where the kinetic term is now canonically normalised. The fact that the scalar field lives in de Sitter spacetime rather than Minkowski results in a time-dependent effective mass \[ m_{\mathrm{eff}}^2 = -\frac{a''}{a} = -\frac{2}{\tau^2}. \] The quantum field $\chihat$ may then be expanded in a basis of plane waves as \[ \label{uhat} \chihat(\tau,\x) = \frac{1}{(2\pi)^3}\,\int\d^3k \,[ \ahatk \,\chi_k (\tau)\, e^{i\k\.\x}+\ahatk^\dag \,\chi_k^*(\tau)\, e^{-i\k\.\x}], \] where the creation and annihilation operators $\ahat^\dag$ and $\ahat$ satisfy the usual bosonic commutation relations \[ \label{acomm} [\ahatk,\ahatkp]=[\ahatk^\dag,\ahatkp^\dag]=0, \qquad [\ahatk,\ahatkp^\dag]=\delta(\k-\k'). \] The function $\chi_k(\tau)$ is a complex time-dependent function satisfying the classical equation of motion \[ \label{ueom} \chi_k''+(k^2-\frac{a''}{a})\chi_k=0, \] representing a simple harmonic oscillator with time-dependent mass. In the case of de Sitter spacetime, the general solution is given by \[ \chi_k = A\, e^{-ik\tau}\big(1-\frac{i}{k\tau}\big)+B\, e^{ik\tau}\big(1+\frac{i}{k\tau}\big). \] Canonical quantisation consists in imposing the following commutation relations on constant-$\tau$ hypersurfaces: \[ [\chihat(\tau,\x),\,\chihat(\tau,\xp)]=[\pihatchi(\tau,\x),\,\pihatchi(\tau,\xp)]=0 \] and \[ \label{comm} [\chihat(\tau,\x),\,\pihatchi(\tau,\xp)]=i\hbar \,\delta(\x-\xp), \] where the canonical momentum $\pihatchi = \delta S/\delta \chi' = \chi'$.
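As an aside, the positive-frequency branch of the general solution above (the choice $A=\sqrt{\hbar/2k}$, $B=0$, shortly to be identified with the Bunch-Davies vacuum) can be probed numerically; the sketch below (with $\hbar=1$ and arbitrary sample values of $k$ and $\tau$) confirms that its Wronskian $\chi_k\chi_k'^*-\chi_k^*\chi_k'$ takes the constant value $i\hbar$ demanded by the canonical commutation relations:

```python
import cmath

def chi(k, tau):
    """Positive-frequency de Sitter mode, A = sqrt(1/2k), B = 0 (hbar = 1)."""
    return cmath.sqrt(1.0 / (2.0 * k)) * cmath.exp(-1j * k * tau) * (1.0 - 1j / (k * tau))

def chi_prime(k, tau):
    """Analytic conformal-time derivative of chi."""
    return cmath.sqrt(1.0 / (2.0 * k)) * cmath.exp(-1j * k * tau) * (
        -1j * k * (1.0 - 1j / (k * tau)) + 1j / (k * tau**2))

def wronskian(k, tau):
    c, cp = chi(k, tau), chi_prime(k, tau)
    return c * cp.conjugate() - c.conjugate() * cp   # should equal i (hbar = 1)
```

The Wronskian is time-independent because the mode equation contains no first-derivative term, so checking it at any one instant suffices.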
Substituting the mode expansion (\ref{uhat}) into the commutator (\ref{comm}), and making use of (\ref{acomm}), we obtain the condition \[ \label{norm} \chi_k \chi_k'^*-\chi_k^* \chi'_k = i \hbar, \] which fixes the normalisation of the Wronskian. Specifying a particular choice for $\chi_k(\tau)$ corresponds to the choice of a particular physical vacuum $\ket{0}$, defined by $\ahatk \ket{0}=0$. A different choice for $\chi_k(\tau)$ leads to a different decomposition into creation and annihilation operators, and thus to a different vacuum. In our case, the most natural physical prescription for the vacuum is to take the particular solution that corresponds to the usual Minkowski vacuum $\chi_k \sim \exp(-ik\tau)$ in the limit $k|\tau|\gg 1$. This is because the comoving Hubble radius $|\tau|^{-1}$ is shrinking during inflation: provided one travels sufficiently far back in time, then any given mode can always be found within the Hubble radius (\ie we can always choose $|\tau|$ sufficiently large such that $k|\tau|\gg 1$). Once the wavelength of the mode lies within the Hubble radius, the curvature of spacetime has a negligible effect (\ie the effective mass term in (\ref{ueom}) is negligible) and the mode behaves as if it were in Minkowski spacetime. Taking account of the normalisation condition (\ref{norm}), we thus arrive at the Bunch-Davies vacuum \[ \chi_k = \sqrt{\frac{\hbar}{2k}}\,e^{-ik\tau}\big(1-\frac{i}{k\tau}\big). \] The power spectrum is defined via the Fourier transform of the correlation function, \[ \bra{0}\phih(\x_1)\phih(\x_2)\ket{0} = \int \d^3k\, e^{i\k\.(\x_1-\x_2)} \frac{\P_\vphi(k)}{4\pi k^3}, \] where \[ 2\pi^2 k^{-3} \P_\vphi = |\vphi_k|^2=\frac{|\chi_k|^2}{a^2}. 
\] With this definition, the variance of $\vphi$ in real space is \[ \langle\vphi(\x)^2\rangle = \int \P_\vphi \,\d \ln{k}, \] and so a constant $\P_\vphi$ corresponds to a scale-invariant spectrum, \iec one in which the field variance receives an equal contribution from each logarithmic interval of wavenumbers. Thus we find that modes well outside the Hubble radius, \iec those for which $k|\tau|\ll 1$, possess the scale-invariant spectrum \[ \label{dSpowerspec} \P_\vphi(k) \approx \hbar \(\frac{H}{2\pi}\)^2. \] (Note also that in the opposite limit $k|\tau|\gg 1$, in which the mode is well inside the Hubble radius, we recover the usual result for fluctuations in the Minkowski vacuum, $\P_\vphi=\hbar(k/2\pi a)^2$). Looking back, we can trace the origin of this scale invariance to the $-2/\tau^2$ effective mass term in the classical equation of motion (\ref{ueom}). Thanks to this term, once a given mode $\chi_k$ exits the Hubble radius, it ceases to oscillate (\iec the mode `freezes') and instead scales as $\tau^{-1}$ in the limit as $\tau \tt 0$. Since this amplification sets in at a time $\tau \sim k^{-1}$, the late-time mode amplitude picks up an extra factor of $k^{-1}$ on top of the $k^{-1/2}$ factor from the initial Minkowski vacuum, leading to scale invariance, \iec $|\chi_k|^2 \propto k^{-3}$. \subsection{Including metric perturbations} \label{ADMsection} We now turn to the full calculation of the spectrum of inflationary perturbations including gravitational backreaction. Recalling our discussion of classical cosmological perturbation theory, there are a total of five scalar modes; four from the metric ($\phi$, $\psi$, $B$ and $E$), and the perturbation of the inflaton itself, $\delta\vphi$. Gauge invariance allows us to remove two of the five functions by fixing the time and spatial reparameterisations $x^\mu\tt x^\mu+\xi^\mu$, where $\xi^\mu=(\xi^0,\D^i\xi)$, while the Einstein constraint equations remove a further two. 
We are left with a sole physical degree of freedom. In order to derive the perturbed action up to second order in the metric perturbations, we will work in the ADM formalism, following \cite{Maldacena:2002vr}. Foliating spacetime into spacelike 3-surfaces of constant time, the metric can be decomposed in terms of the lapse $N(t,x^i)$, the shift $N^i(t,x^i)$ and the spatial 3-metric $h_{ij}$, as \[ \d s^2 = -N^2 \d t^2 + h_{ij}(\d x^i+N^i\d t)(\d x^j+N^j \d t). \] Spatial indices are lowered and raised using the spatial 3-metric. The 4-curvature can be decomposed in terms of the spatial curvature $R^{(3)}$ and the extrinsic curvature tensor \[ K_{ij} = \frac{1}{2N}\,(\dot{h}_{ij}-\grad_i N_j-\grad_j N_i) \] according to the Gauss-Codacci relation \cite{Wald} \[ R = R^{(3)}+K_{ij}K^{ij}-K^2, \] where $K=K^i_i$ and we have omitted a total derivative term. The relevant action (\ref{scalarfieldaction}) is thus \bea \label{ADMaction} S &=& \half\int \d t \d^3 x \sqrt{h}\,\Big[NR^{(3)}-2NV+N^{-1}(E_{ij}E^{ij}-E^2)\nonumber \\ && \qquad \qquad +N^{-1}(\dot{\vphi}-N^i\vphi_{,i})^2 -Nh^{ij}\vphi_{,i}\vphi_{,j}\,\Big], \eea where, for convenience, we have defined $E_{ij}=NK_{ij}$. The ADM formalism is constructed so that $\vphi$ and $h_{ij}$ play the role of dynamical variables, while $N$ and $N^i$ are simply Lagrange multipliers. Our strategy, after fixing the gauge, will be to solve for these Lagrange multipliers and then backsubstitute them into the action. Considering only scalar modes, a convenient gauge is \[ \label{zetagaugedef} \delta\vphi=0, \qquad h_{ij}=a^2(\tau)\left[(1-2\zeta)\delta_{ij}\right], \] where $a(\tau)$ is the background scale factor but $\zeta$ is a first-order perturbation. In this gauge the scalar field is unperturbed, hence $\delta T^0_i=0$ and the gauge is comoving. (Note also that the gauge is fully specified by the above). 
The equations of motion for $N^i$ and $N$ are the momentum and Hamiltonian constraints respectively, which in this gauge take the form \bea && \grad_i[N^{-1}(E^i_j-\delta^i_jE)]=0, \\ && R^{(3)}-2V-N^{-2}(E_{ij}E^{ij}-E^2)-N^{-2}\dot{\vphi}_0^2=0. \eea At linear order, the solution of these equations is \[ N=1-\frac{\dot{\zeta}}{H}, \qquad N^i=\delta^{ij}\,\D_j\Big(\frac{\zeta}{a^2H}-\chi\Big), \qquad \D_i^2 \chi = \frac{\dot{\vphi}_0^2}{2H^2}\,\dot{\zeta}, \] where $\D_i^2$ is a shorthand notation for $\delta^{ij}\,\D_i\D_j$. Substituting the above back into the action (\ref{ADMaction}), and expanding out to second order, we obtain a quadratic action for $\zeta$. Note that for this purpose it is unnecessary to compute $N$ or $N^i$ to second order. This is because these second order terms can only appear multiplying the zeroth order Hamiltonian and momentum constraint equations $\D\mathcal{L}/\D N$ and $\D\mathcal{L}/\D N^i$, which vanish for the background. Direct replacement in the action gives, up to second order, \bea S&=&\half\int \d t\, \d^3 x \,a \,e^{-\zeta}(1-\frac{\dot{\zeta}}{H})[4\D_i^2\zeta-2(\D_i\zeta)^2-2a^2 Ve^{-2\zeta}]\nonumber \\ &&\qquad + a^3 e^{-3\zeta}(1-\frac{\dot{\zeta}}{H})^{-1}[-6(\dot{\zeta}-H)^2+\dot{\vphi}_0^2], \eea where we have neglected a total derivative and $(\D_i\zeta)^2 = \delta^{ij}\D_i\zeta\D_j\zeta$. After integrating by parts, and using the background equations of motion (\ref{pbgdeom1}) to (\ref{pbgdeom3}), we obtain \[ S=\half\int\d t\,\d^3x \frac{\dot{\vphi}_0^2}{H^2}\,[a^3 \dot{\zeta}^2-a(\D_i\zeta)^2]=-\half\int\d^4x\sqrt{-g}\,\frac{\dot{\vphi}_0^2}{H^2}\,(g^{\mu\nu}\D_\mu\zeta\D_\nu\zeta), \] where $g_{\mu\nu}$ is the background FRW metric. To connect with our earlier discussion of a scalar field in de Sitter space, let us pass to conformal time and canonically normalise the kinetic term by introducing the variable $\nu = z\zeta$, where $z=a\vphi_0'/\cH$. 
We find \[ S = \half\int\d \tau \d^3x\,[\nu'^2-(\D_i\nu)^2+\frac{z''}{z}\,\nu^2], \] the action for a scalar field in Minkowski spacetime with a time-dependent mass $z''/z$. (Recall that in de Sitter spacetime the effective mass was instead $a''/a$). Each Fourier mode obeys the equation of motion \[ \nu_k''+(k^2-\frac{z''}{z})\nu_k=0, \] or equivalently, \[ \label{zetaeom} \zeta_k''+2\big(\frac{z'}{z}\big)\zeta_k'+k^2\zeta_k=0. \] (Since $\zeta$ is gauge-invariant, and $z$ depends only on background quantities, this relation in fact holds in any gauge). Under conditions of slow-roll, the evolution of $\vphi_0$ and $H$ is much slower than that of the scale factor $a$, and so to leading order in slow-roll one finds \[ \frac{z''}{z} \approx \frac{a''}{a}. \] Consequently, all the results of the previous section now apply to our variable $\nu$ in the slow-roll approximation. The correctly normalised expression corresponding to the Bunch-Davies vacuum is approximately \[ \nu_k \approx \sqrt{\frac{\hbar}{2k}}\,e^{-ik\tau}\big(1-\frac{i}{k\tau}\big). \] In the super-Hubble limit, $k|\tau|\ll 1$, this becomes \[ \nu_k\approx -\sqrt{\frac{\hbar}{2k}}\frac{i}{k\tau} \approx i\sqrt{\frac{\hbar}{2k}}\frac{a H_*}{k}, \] where we have used $a\approx -1/H\tau$ and the asterisk indicates that, to get the most accurate approximation, we should evaluate the value of $H$ at the moment the mode crosses the Hubble radius. Hence, on scales much larger than the Hubble radius, the power spectrum for $\zeta$ is given by \[ \label{zetapowerspec} \P_\zeta (k)= \frac{k^3}{2\pi^2}\frac{|\nu_k|^2}{z^2} \approx \frac{\hbar}{4\pi^2}\(\frac{H_*^4}{\,\dot{\vphi}_{0*}^2}\), \] where we have reverted once more to physical time, and the quantities labelled with asterisks are to be evaluated when the mode exits the Hubble radius. This is the celebrated result for the spectrum of scalar cosmological perturbations generated from vacuum fluctuations during slow-roll inflation. 
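As a numerical sanity check of (\ref{zetapowerspec}) (a sketch only, with $\hbar=1$, slow-roll idealised by holding $H$ and $\dot{\vphi}_0$ fixed, and purely illustrative parameter values), the super-Hubble limit of the Bunch-Davies mode indeed reproduces $\hbar H^4/4\pi^2\dot{\vphi}_0^2$:

```python
import math, cmath

H, phidot = 1.0e-5, 2.0e-6      # illustrative slow-roll values, held constant
k, tau = 1.0, -1.0e-4           # mode far outside the Hubble radius, k|tau| << 1

a = -1.0 / (H * tau)            # de Sitter scale factor in conformal time
z = a * phidot / H              # z = a phidot_0 / H

# Bunch-Davies mode of the canonically normalised variable nu (hbar = 1)
nu = cmath.sqrt(1.0 / (2.0 * k)) * cmath.exp(-1j * k * tau) * (1.0 - 1j / (k * tau))

P_zeta    = k**3 * abs(nu)**2 / (2.0 * math.pi**2 * z**2)
P_predict = H**4 / (4.0 * math.pi**2 * phidot**2)   # eq. (zetapowerspec), hbar = 1
```

The residual discrepancy is of order $(k\tau)^2$, reflecting the corrections neglected in the super-Hubble expansion.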
The spectrum is not strictly scale-invariant, however, since the values of the Hubble parameter and the scalar field velocity evolve slowly over time. This introduces a small additional momentum dependence conveniently parameterised by the spectral index $n_s$, where \[ n_s(k)-1=\frac{\d \ln\P_\zeta(k)}{\d \ln k} \approx \frac{1}{H_*}\frac{\d}{\d t_*}\ln\(\frac{H_*^4}{\dot{\vphi}_{0*}^2}\) = -2\(\frac{\ddot{\vphi}_{0*}}{H_*\dot{\vphi}_{0*}}+\frac{\dot{\vphi}_{0*}^2}{H_*^2}\)=2(\eta-3\eps), \] using the slow-roll parameters as defined in (\ref{slowrolleps}) and (\ref{slowrolleta}). A spectral index $n_s=1$ therefore corresponds to an exactly scale-invariant spectrum, whereas $n_s<1$ and $n_s>1$ correspond to a red and a blue spectrum respectively. (Equivalently, we have $k^3 |\zeta_k|^2 \propto k^{n_s-1}$). A useful heuristic to understand the parametric dependence of the power spectrum (\ref{zetapowerspec}) is as follows. Consider a mode just about to cross the Hubble radius and freeze out. On a constant-time slice the quantum fluctuations in the inflaton field will be of order $\delta\vphi \sim H_*$, approximating the background as de Sitter space and using (\ref{dSpowerspec}). Comoving surfaces, characterised by $\delta\vphi=0$, are therefore offset from surfaces of constant time by a time delay $\delta t \sim \delta\vphi/\dot{\vphi}\sim H_*/\dot{\vphi}$. The exponential expansion of the background spacetime as $a\sim \exp(H_*t)$ then leads to a comoving curvature perturbation $\zeta \sim \delta a/a \sim H_* \delta t \sim H_*^2/\dot{\vphi}$, consistent with (\ref{zetapowerspec}). \section{Perturbations in a collapsing universe} We have seen that models formulated in terms of gravity and a single scalar field possess only one physical scalar degree of freedom at linear order. In the case of inflation, we chose to parameterise this degree of freedom in a gauge-invariant fashion through use of the comoving curvature perturbation $\zeta$. 
The utility of this variable lies in its conservation on super-Hubble scales, allowing the amplitudes of modes re-entering the Hubble radius during the radiation era to be calculated independently of the detailed microphysics of re-heating. In the case of a collapsing universe, however, the situation is more subtle. The final spectrum of perturbations in the expanding phase is, in general, sensitive to the prescription with which the perturbations are matched across the bounce. In particular, it does not necessarily follow that $\zeta$ is conserved across the bounce. In this section we will therefore adopt a more generic approach, computing the behaviour of both $\zeta$ and the Newtonian potential $\Phi$ during the collapse. \subsection{The story of $\zeta$ and $\Phi$} Let us start by collecting together a number of our earlier results concerning $\zeta$ and $\Phi$. These variables satisfy the second order linear differential equations given in (\ref{phieom}) and (\ref{zetaeom}), and are related to each other via (\ref{zetadef}) (recalling that the absence of anisotropic stress for scalar field matter implies that $\Phi=\Psi$) and (\ref{phifromzeta}). To express these relations more compactly, it is useful to introduce the surrogate variables $u$ and $\nu$, defined as \[ \label{surrogatevars} u = \frac{a}{\vphi_0'}\,\Phi, \qquad \nu=z \zeta, \] where the background quantity $z= a\vphi_0'/\cH$ as before. In particular, $u$ and $\nu$ have the same $k$-dependence as $\Phi$ and $\zeta$, and hence the same spectral properties. Defining $\theta=1/z$, (\ref{phieom}) and (\ref{zetaeom}) are \bea \label{phiueom} 0&=& u''+ (k^2- \frac{\theta''}{\theta})\,u, \\ \label{nueom} 0&=&\nu''+ (k^2-\frac{z''}{z})\,\nu, \eea and the relations (\ref{zetadef}) and (\ref{phifromzeta}) become \bea \label{uvrelations} k\nu &=& 2k \,(u'+\frac{z'}{z}\,u), \\ -ku &=& \frac{1}{2k}\,(\nu'+\frac{\theta'}{\theta}\,\nu).
\eea Choosing a boundary condition for $u$ and $\nu$ corresponds to specifying a vacuum state for the fluctuations. As usual, we will take this to be the Minkowski vacuum of a comoving observer in the far past, at a time when all comoving scales were far inside the Hubble radius. Thus, in the limit as $\tau\tt-\inf$, \bea \label{ubc} u&\tt& i\sqrt{\hbar}\,(2k)^{-3/2}e^{-ik\tau}, \\ \label{nubc} \nu &\tt& \sqrt{\hbar}\,(2k)^{-1/2}e^{-ik\tau}. \eea (It is easy to check from (\ref{uvrelations}) that these two boundary conditions are equivalent). Assuming the equation of state is constant while all wavelengths of cosmological interest leave the Hubble radius, from our scaling solution (\ref{toymodel}) with $\w=2/3p-1$, we find \bea \label{thetaduality} \frac{\,\theta''}{\theta} &=& \frac{p}{(p-1)^2\, \tau^2}\hspace{0.5mm}, \\ \label{zduality} \frac{\,z''}{z} &=& \frac{p\,(2p-1)}{(p-1)^2 \,\tau^2}\hspace{0.5mm}. \eea The solutions of (\ref{phiueom}) and (\ref{nueom}) are then easily obtained in terms of Hankel functions: \bea u(x) &=& x^{1/2}\left[A^{(1)}H^{(1)}_\alpha (x)+A^{(2)}H^{(2)}_\alpha (x)\right], \\ \nu(x) &=& x^{1/2}\left[B^{(1)}H^{(1)}_\beta (x)+B^{(2)}H^{(2)}_\beta (x)\right] , \eea where $x=k|\tau|$ is a dimensionless time variable, $A^{(1,2)}$ and $B^{(1,2)}$ are constants, and the order of the Hankel functions $H^{(1,2)}_s(x)$ is \bea \label{alphadef} \alpha &=& [(\theta''/\theta)\,\tau^2+1/4]^{1/2} = \half \left|\,\frac{1+p}{1-p}\,\right|, \\ \label{betadef} \beta &=& [(z''/z)\,\tau^2+1/4]^{1/2} = \half \left|\frac{1-3p}{1-p}\,\right| . \eea In the far past ($x\tt \inf$), when comoving scales are well inside the Hubble radius, the asymptotic behaviour of the Hankel functions is \[ H^{(1,2)}_s (x) \tt \sqrt{\frac{2}{\pi x}}\,\exp\left[\pm i\(x-\frac{s\pi}{2}-\frac{\pi}{4}\)\right] \] (where the $+$ sign corresponds to $H^{(1)}_s(x)$). 
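The expressions (\ref{thetaduality}) to (\ref{betadef}) can be checked numerically by differentiating $z=|\tau|^{p/(1-p)}$ and $\theta=1/z$ by finite differences; the sketch below (illustrative values of $p$ and $\tau$) also confirms that $\alpha$, unlike $\beta$, is invariant under $p\tt 1/p$:

```python
import math

def second_log_derivative(f, tau, h=1e-4):
    """f''/f by central finite difference."""
    return (f(tau + h) - 2.0 * f(tau) + f(tau - h)) / (h**2 * f(tau))

def orders(p, tau=-1.5):
    """Hankel orders alpha, beta from theta''/theta and z''/z for constant w."""
    n = p / (1.0 - p)
    z     = lambda t: abs(t) ** n       # z is proportional to a for constant w
    theta = lambda t: abs(t) ** (-n)    # theta = 1/z
    alpha = math.sqrt(second_log_derivative(theta, tau) * tau**2 + 0.25)
    beta  = math.sqrt(second_log_derivative(z, tau) * tau**2 + 0.25)
    return alpha, beta
```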
The boundary conditions (\ref{ubc}) and (\ref{nubc}) then imply \bea \label{uresult} u &=& \frac{\lambda_1}{2k}\,(\hbar\pi x/4k)^{1/2}H^{(1)}_\alpha (x) ,\\ \label{nuresult} \nu &=& \lambda_2(\hbar\pi x/4k)^{1/2}H^{(1)}_\beta (x), \eea where $\lambda_1$ and $\lambda_2$ are the $k$-independent complex phase factors \bea \lambda_1 &=& \exp[i(2\alpha+3)\pi/4], \\ \label{lambda2def} \lambda_2 &=& \exp[i(2\beta+1)\pi/4]. \eea Pausing to catch our breath, we notice a curious duality: the equation of motion for $u$ ((\ref{phiueom}), along with (\ref{thetaduality})) is invariant under $p\tt 1/p\,$ \cite{Boyle}. Moreover, the boundary condition, (\ref{ubc}), is independent of $p$ (as is natural, since the boundary condition is imposed in the far past when comoving scales are well inside the Hubble radius). Consequently, our expressions (\ref{alphadef}) for $\alpha$, and (\ref{uresult}) for $u$, are invariant under $p\tt 1/p\,$. The spectrum of fluctuations of the Newtonian potential $\Phi$ in an expanding universe with $p>1$ is therefore identical to that in a collapsing universe with $\hat{p}=1/p\,<1$. (The same cannot be said for $\zeta$, however, as (\ref{zduality}) is not invariant under $p\tt1/p\,$). \subsection{Power spectra} Returning now to the calculation of the power spectra for $\zeta$ and $\Phi$, at late times when comoving scales are well outside Hubble radius ($x\tt 0$), the asymptotic behaviour of the Hankel functions is \[ \label{H1asymptotics} H^{(1)}_s (x) \tt -\frac{i}{\pi}\,\Gamma(s)\(\frac{x}{2}\)^{-s}\hspace{-4mm}, \] where $s>0$ and $\Gamma(s)$ is the Euler gamma function. 
On scales much larger than the Hubble radius, the power spectrum for $\zeta$ is therefore \[ \label{fullzetaspec} \P_\zeta (k) = \frac{k^3}{2\pi^2}\,\frac{|\nu|^2}{z^2} = \frac{\hbar}{4\pi^2}\(\frac{H_*^4}{\,\dot{\vphi}_{0*}^2}\) \Lambda(p) \,x^{3-2\beta}, \] where asterisked quantities are to be evaluated at Hubble radius crossing ($k=aH$), and the $k$-independent numerical factor $\Lambda(p)$ is \[ \Lambda(p) = \Gamma^2(\beta)\,\frac{4^\beta(1-p)^2}{2\pi p^2}\,. \] We immediately see that the power spectrum for $\zeta$ is only scale-invariant when $\beta=3/2$, cancelling the $x$-dependence on the right-hand side of (\ref{fullzetaspec}). Equivalently, the spectral index $n_\zeta$ (given by $n_\zeta-1 = 3-2\beta$) must be unity for scale invariance. From (\ref{betadef}), this requires $p\tt \inf$ ($\w\approx -1$), corresponding to an inflating universe in the slow-roll limit (recall that the slow-roll parameter $\eps=1/p\,$). In this limit, the numerical factor $\Lambda(p)$ tends to unity and we recover our earlier result (\ref{zetapowerspec}). On the other hand, the power spectrum on super-Hubble scales for the Newtonian potential $\Phi$ is given by \[ \label{phipowerspec} \P_\Phi (k) = \frac{k^3}{2\pi^2}\,\frac{|u|^2 \vphi_0'^2}{a^2}=\frac{\hbar}{(2\pi)^3}\,\dot{\vphi}_{0*}^2\Gamma^2(\alpha)4^{\alpha-1}x^{1-2\alpha}. \] The spectral index for $\Phi$ is thus $n_\Phi-1=1-2\alpha$, with scale invariance requiring $\alpha=1/2$. From (\ref{alphadef}), this is satisfied in only two cases: firstly, that of an expanding universe undergoing slow-roll inflation ($p\tt\inf$); and secondly, the case of a very slowly collapsing universe with $p\ll 1$ ($\w\gg 1$). (In this latter scenario, the scalar field is rapidly rolling down a steep negative exponential potential). From our earlier remarks about duality, this result is exactly as expected: the power spectrum (\ref{phipowerspec}) is invariant under $p\tt 1/p\,$, since $\alpha$ is invariant.
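These limits are easily confirmed numerically; the sketch below (a Python illustration using the standard-library gamma function; the sampled values of $p$ are arbitrary) checks that $\Lambda(p)\tt 1$ in the slow-roll limit, that $\Phi$ is nearly scale-invariant both for $p\gg1$ and for $p\ll1$ while $\zeta$ is blue in the latter case, and that $n_\Phi$ respects the $p\tt 1/p$ duality:

```python
import math

alpha = lambda p: 0.5 * abs((1.0 + p) / (1.0 - p))
beta  = lambda p: 0.5 * abs((1.0 - 3.0 * p) / (1.0 - p))

# k-independent prefactor of the zeta spectrum
Lam = lambda p: math.gamma(beta(p))**2 * 4.0**beta(p) * (1.0 - p)**2 / (2.0 * math.pi * p**2)

n_zeta = lambda p: 1.0 + 3.0 - 2.0 * beta(p)    # n_zeta - 1 = 3 - 2 beta
n_Phi  = lambda p: 1.0 + 1.0 - 2.0 * alpha(p)   # n_Phi  - 1 = 1 - 2 alpha
```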
Nonetheless, it is remarkable that a nearly scale-invariant spectrum can be obtained, without inflation, on a background which is very nearly static Minkowski. To sum up, while in the case of slow-roll inflation both $\Phi$ and $\zeta$ acquire a scale-invariant spectrum well outside the Hubble radius, in the case of a slowly contracting universe, $\Phi$ is scale-invariant but $\zeta$ is blue ($n_\zeta -1 \approx 2$, indicating greater power at short wavelengths). Our analysis has been restricted to the case in which the equation of state is constant. More generally, however, when the equation of state is a slowly varying function of time, one can introduce the `fast-roll' parameters $\bar{\eps}$ and $\bar{\eta}$ defined by \[ \bar{\eps}=\(\frac{V}{V_{,\vphi}}\)^2, \qquad \bar{\eta}= 1-\frac{V V_{,\vphi\vphi}}{V_{,\vphi}^2}. \] Analogously to the case of inflation, one can then express the spectral index in terms of these fast-roll parameters. A short calculation \cite{Gratton2} yields the result \[ n_s-1=-4(\bar{\eps}+\bar{\eta}). \] \subsection{The relation between $\zeta$ and $\Phi$} Since a generic scalar fluctuation can be described using either $\zeta$ or $\Phi$ with equal validity, the fact that one variable possesses a scale-invariant spectrum, while the other does not, is at first sight puzzling. To understand this behaviour, it is useful to probe further the relationship between $\zeta$ and $\Phi$. This is given by \[ \label{neatzeta} \zeta= \frac{p}{a}\,\frac{\d}{\d t}\(\frac{a\Phi}{H}\), \] which follows directly from the definition of $\zeta$ given in (\ref{zetadef}), and the equation of motion $H^2/\dot{H}=-p$. Expanding once more the Hankel function $H^{(1)}(x)$ as $x\tt 0$, but this time retaining the subleading term as well, we find \[ H^{(1)}_s (x)= -\frac{i}{\pi}\,\Gamma(s)\(\frac{x}{2}\)^{-s}-\frac{i}{\pi}\,e^{-i\pi s}\,\Gamma(-s)\(\frac{x}{2}\)^s + O(x^{2-s},\,x^{2+s}). 
\] Up to $k$-independent numerical coefficients, the leading and subleading behaviour of $\Phi$ as $t\tt 0$ is therefore \[ \label{Phiscale} \Phi \sim k^{-3/2-p}\,t^{-1-p}+ k^{-1/2+p}\,t_0^{p}, \] where we have used (\ref{uresult}) and expanded all exponents up to linear order in $p$ (implying $\tau \sim t^{1-p}$). The leading term in this expression, the `growing' mode, is responsible for the scale invariance of $\Phi$. Since $a/H \sim t^{1+p}$, however, we immediately see from (\ref{neatzeta}) that this term makes no contribution to $\zeta$. Instead, the leading contribution to $\zeta$ comes from the subleading (or `decaying') mode in $\Phi$, which scales as $k^{-1/2+p}$. The spectrum for $\zeta$ is therefore boosted by a factor of $k^{2+2p}$ relative to scale invariance, yielding a blue spectrum with $n_\zeta-1\approx 2$. For completeness, from (\ref{nuresult}), the growing and decaying modes for $\zeta$, in the limit as $t\tt 0$, are \[ \label{zetascale} \zeta \sim k^{-1/2+p}\,t_0^p+k^{1/2-p}\,t^{1-3p}\,t_0^{2p}, \] where again we have expanded the exponents to linear order in $p$ and we are neglecting $k$-independent numerical coefficients. (Incidentally, it should be borne in mind that in a collapsing universe the terms `growing' and `decaying' are gauge-dependent. The behaviour $\Phi \sim t^{-1-p}$ is the growing mode in Newtonian gauge, yet the same physical fluctuation `decays' as $\zeta\sim t^{1-3p}$ in the gauge specified by (\ref{zetagaugedef})). \section{Frontiers} \subsection{Matching at the bounce} Having seen how perturbations are generated in a collapsing universe, we must now ask how these perturbations relate to observable quantities in the present-day universe. In the expanding phase of the universe $t$ is positive and tending towards $+\inf$, with $p=1/2$ during radiation dominance, followed by $p=2/3$ during matter dominance.
From the analysis leading to (\ref{Phiscale}) and (\ref{zetascale}) (but without expanding the exponents), we see that the dominant contribution to both $\Phi$ and $\zeta$ at late times comes from the time-independent piece, and that both time-dependent pieces decay to zero. The $\Phi\sim\zeta\sim const.$ mode is thus the `growing' mode in an expanding universe, and provides the dominant contribution to physical quantities as modes re-enter the Hubble radius. Compatibility with observation therefore requires this mode to have a scale-invariant spectrum. From (\ref{Phiscale}), however, we see that the time-independent term in $\Phi$ does {\it not} have a scale-invariant spectrum in the collapsing phase. Nevertheless, it can be argued that the matching rules across the bounce induce a generic mixing between the two solutions, allowing the scale-invariant spectrum of the $\Phi\sim t^{-1-p}$ mode in the collapsing phase to be inherited by the $\Phi\sim const.$ mode in the expanding phase \cite{Ekpyrotic, Gratton2}. In other words, the mixing at the bounce must match a piece of the scale-invariant growing mode in the collapsing phase to the growing mode in the expanding phase, even though from a naive perspective these modes are orthogonal. A proposal realising this in the full five-dimensional braneworld setup was made in \cite{TTS}, based on the analytic continuation of variables defined on time slices synchronous with the collision. (For subsequent criticism of this proposal, however, see \cite{Creminelli}). Since the nature of the matching at the bounce continues to be a subject of active research we will not pursue it further here. Instead, as explained in the Introduction, our chief concern will be to understand how the problem of propagating a scale-invariant spectrum of density perturbations across the bounce is modified, when restored to its true higher-dimensional setting. 
\subsection{Breaking the degeneracy: gravitational waves} As we have seen, the power spectrum of the Newtonian potential in a collapsing universe is identical to that of its expanding dual. Fortunately this degeneracy is broken by tensor perturbations, providing a crucial signature for future observations. Returning to our calculations in Section $\S\,$\ref{ADMsection} based on the ADM formalism, to compute the action for tensor perturbations we can set \[ \delta\vphi=0, \qquad h_{ij}=a^2(\tau)[\delta_{ij}+\gamma_{ij}], \] where the tensor perturbation $\gamma_{ij}$ satisfies $\D_j \gamma_{ji}=\gamma_{ii}=0$ and is automatically gauge-invariant. Inserting this into the action (\ref{ADMaction}) and expanding to quadratic order, we find \[ S = \frac{1}{8}\int \d t \,\d^3 x \,[a^3 \dot{\gamma}_{ij}\dot{\gamma}_{ij}-a \,\D_k\gamma_{ij}\D_k\gamma_{ij}]. \] Expressing $\gamma_{ij}$ in a basis of plane waves with definite polarisation tensors as \[ \gamma_{ij} = \int \frac{\d^3 k}{(2\pi)^3}\sum_{s=\pm}\eps^s_{ij}(k)\,\gamma^s_{\vec{k}}(t)\,e^{i\vec{k}\cdot\vec{x}}, \] where $\eps_{ii}=k^i\eps_{ij}=0$ and $\eps^s_{ij}(k)\,\eps^{s'}_{ij}(k)=2\delta_{ss'}$, we see that each polarisation mode obeys essentially the same equation of motion as a massless scalar field: \[ \gamma''+2\cH\gamma'+k^2\gamma=0. \] This can be recast in canonical form by setting $\chi=a \gamma$, yielding \[ \chi''+(k^2-\frac{a''}{a})\chi=0. \] Again, the standard vacuum choice is the Minkowski vacuum of a comoving observer in the far past, corresponding to the boundary condition \[ \label{chibc} \chi \tt (\hbar/2k)^{1/2}e^{-ik\tau} \] as $\tau \tt -\inf$. To solve for $\chi$, one need only observe that the equation of motion for $\chi$ is identical to that for $\nu$ given in (\ref{nueom}) (since $z\propto a$ for constant $\w$, hence $z''/z=a''/a$). The boundary conditions (\ref{chibc}) and (\ref{nubc}) are also identical. 
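For completeness, the canonical form quoted above follows in one line: with $\chi=a\gamma$ and $\cH=a'/a$, the equation of motion for $\gamma$ gives \[ \chi''=a''\gamma+2a'\gamma'+a\gamma''=a''\gamma-k^2 a\gamma, \] which is precisely $\chi''+(k^2-a''/a)\chi=0$.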
The solution for $\chi$ therefore follows from (\ref{nuresult}): \[ \chi = \nu = \lambda_2(\hbar\pi x/4k)^{1/2}H^{(1)}_\beta (x), \] where $x=k|\tau|$, and $\beta$ and $\lambda_2$ are as defined in (\ref{betadef}) and (\ref{lambda2def}) respectively. Defining the tensor spectral index $n_T$ such that $k^3 |\chi|^2 \propto k^{n_T-1}$, on super-Hubble scales the mode freezes and we have, from (\ref{H1asymptotics}), \[ n_T-1 = 3-2\beta=3-\left|\frac{1-3p}{1-p}\,\right|. \] This expression is \textit{not} invariant under $p\tt 1/p\,$: an expanding universe with a given $p$ produces a tensor spectrum which is much redder than the tensor spectrum produced in a contracting universe with $\hat{p}=1/p\,$ as, from the above, $n_T\le \hat{n}_T-2$. (For example, an expanding universe with $p=2$ has $n_T-1=-2$, whereas the dual contracting universe with $\hat{p}=1/2$ has $\hat{n}_T-1=+2$.) \section{Summary} In this chapter we have reviewed two independent scenarios for generating the scale-invariant spectrum of primordial density fluctuations required to fit observation. In the first scenario, inflation, the early universe undergoes a brief period of accelerated expansion in which physical wavelengths are stretched outside the quasi-static Hubble radius. The alternative is a slowly collapsing universe followed by a big crunch to big bang transition, as proposed in the cyclic and ekpyrotic models. In this scenario scale-invariant density perturbations are generated in the growing mode of the Newtonian potential during the collapsing phase, as the Hubble radius shrinks rapidly inside the near-constant physical length scales of the perturbations. Thus far, all our considerations have been couched in the framework of four-dimensional effective theory. In the following chapter we leave the prosaic world of four dimensions, and recast our view of the universe in five dimensions.
\chapter{Braneworld gravity} \label{branegravitychapter} \begin{flushright} \begin{minipage}{9.4cm} \small {\it \noindent Now it is time to explain what was before obscurely said: \\ there was an error in imagining that all the four elements \\ might be generated by and into one another... \\ There was yet a fifth combination which God used \\ in the delineation of the universe. } \begin{flushright} \noindent Plato, Timaeus. \end{flushright} \end{minipage} \end{flushright} One of the most striking ideas to emerge from string theory is that the universe we inhabit may be a brane embedded in, or bounding, a higher-dimensional spacetime \cite{PolchinskiI}. The key to this picture is the notion that gravity, being the dynamics of spacetime itself, is free to roam in the full higher-dimensional bulk spacetime, whereas the usual standard model forces are confined to the brane. In this section we review the simplest phenomenological model of such a scenario, the Randall-Sundrum model, focusing on its cosmological properties and on its four-dimensional effective description. For a broader survey of the braneworld literature, see in particular \cite{Brax:2003fv, langlois, maartens}. \section{The Randall-Sundrum model} The first Randall-Sundrum model \cite{RSI} comprises a pair of four-dimensional positive- and negative-tension branes embedded in a five-dimensional bulk with a negative cosmological constant. For simplicity, a $\Z_2$ symmetry is imposed about each brane, so that the dimension normal to the branes is compactified on an $S^1/\Z_2$ orbifold. This scenario provides the simplest possible setup incorporating branes with a nontrivial warp factor in the bulk. 
The complete action takes the form: \[ \label{5Daction} S = 2\int_5 \sqrt{-g}\(\half\,m_5^3 R - \Lambda\)-\sum_{\pm} \int_\pm\sqrt{-g^\pm}\(2m_5^3 K^\pm+\sigma^\pm+\mathcal{L}^\pm_\mathrm{matter}\), \] where the first group of terms represents the action of the bulk ($m_5$ is the five-dimensional Planck mass, $R$ is the five-dimensional bulk Ricci scalar, and $\Lambda<0$ is the bulk cosmological constant), and the second set of terms encodes the action of two branes of tension $\sigma^\pm$, each with its own induced metric $g^\pm_{ab}$ (where Latin indices run from $0$ to $4$), and (optional) matter content specified by $\mathcal{L}^\pm_\mathrm{matter}$. To deal with the assumed $\Z_2$ symmetry about each brane we have adopted an extended zone scheme in which each brane appears only once, but there are two identical copies of the bulk (hence the factor of two multiplying the bulk action). Each brane presents two surfaces to the bulk, and therefore contributes two copies of the Gibbons-Hawking surface term proportional to $K^\pm$, the trace of the extrinsic curvature $K^\pm_{ab}$ (itself defined as the projection onto the brane of the gradient of $n^\pm_a$, the outward-pointing unit normal, \iec $K^\pm_{ab}={g^\pm}_a^c \grad_c n^\pm_b$). The purpose of these surface terms is to cancel the second set of surface terms arising from the integration by parts of the Einstein-Hilbert action (required to remove second derivatives of the metric, furnishing an action quadratic in first derivatives only).
The total stress-energy $T_{ab}$ induced on the brane is then given in terms of the jump in extrinsic curvature across the brane via the Israel matching conditions \cite{Israel}: \[ \label{israel} T_{ab} =-m_5^3\,\([K_{ab}]-h_{ab}[K]\), \] where $h_{ab}$ denotes the induced metric on the brane. (A simple derivation of this result is provided in Appendix \ref{Isapp}, where we study the Einstein equations in the presence of distributional sources). To evaluate the jump $[...]$, we introduce a normal vector to the brane and calculate the extrinsic curvature term in the bracket on both sides of the brane using the same normal. We then subtract the terms calculated on one side (the side from which the normal points away) from the terms calculated on the other side (the side to which the normal is pointing). It is easy to see that the final result does not depend on the choice of normal: reversing the normal changes the order of the subtraction, but also the sign of the extrinsic curvatures. Alternatively, we could have evaluated the extrinsic curvatures on each side using two different normals (pointing away from the brane in each case). Their sum is then equal to the jump defined above. In the case of a $\Z_2$-symmetric brane, the jump therefore reduces to twice the extrinsic curvature at the brane, evaluated using the outward-pointing unit normal. This allows the Israel matching conditions to be re-written as \[ \label{israel2} m_5^3\, K_{ab}=-\frac{1}{2}\,(T_{ab}-\frac{1}{3}\,h_{ab}T). \] If the stress-energy on the brane is fixed in advance, then these equations constrain the embedding of the brane into the bulk geometry. (We will see an explicit example of this when we come to consider brane cosmology: the Israel matching conditions will determine the trajectory of the brane through the bulk, and hence the form of the induced Friedmann equation on the brane). 
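Explicitly, for a $\Z_2$-symmetric brane the jump in (\ref{israel}) reduces to $[K_{ab}]=2K_{ab}$, so that \[ T_{ab}=-2m_5^3\,(K_{ab}-h_{ab}K). \] Taking the trace (using $h^{ab}h_{ab}=4$) gives $T=6m_5^3 K$, and substituting back for $K$ then yields (\ref{israel2}).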
\section{Static solutions} An obvious solution for the bulk geometry is to take a slice of five-dimensional anti-de Sitter (AdS) spacetime, \[ \label{staticsoln} \d s^2 = \d Y^2 + e^{2Y/L}\eta_{\mu\nu}\d x^\mu \d x^\nu, \] where $L$ is the AdS radius (defined as $L^2=-6m_5^3 /\Lambda$), and the indices $\mu$ and $\nu$ run from $0$ to $3$. Let us consider bounding the slice of AdS with flat branes located at constant $Y=Y^\pm$, with $Y^+ > Y^-$. The coordinates $x^\mu$ then parameterise the four dimensions tangent to the branes, and $Y$ parameterises the direction normal to the branes. Determining the stress-energy on the branes via the Israel matching conditions (\ref{israel}) as discussed above, a simple calculation shows that $T^\pm_{\mu\nu} = \mp(6m_5^3/L) \,g^\pm_{\mu\nu}$. To have static branes located at fixed $Y^\pm$ therefore, we see that the branes must be empty apart from a constant tension $\sigma^\pm = \pm 6 m_5^3/L$ (so that $T^\pm_{\mu\nu}=-\sigma^\pm g^\pm_{\mu\nu}$). With the tensions tuned to these values there is then a continuous one-parameter family of static solutions, parameterised by the distance between the branes $Y^+ - Y^-$. (The dependence on the overall position in the warp factor can be removed by a re-scaling of the coordinates tangential to the brane). Heuristically, the stability of the system derives from a balance between, on the one hand, the cosmological constant and gravitational potential energy of the bulk, and on the other hand, the energy stored in the tensions on the branes. The warp factor dictates that an increase in the volume of the bulk (rendering the bulk energy more negative) is always accompanied by a corresponding increase in the area (and hence energy) of the positive-tension brane, and/or a decrease in the area (hence increase in the energy) of the negative-tension brane. 
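For completeness, the `simple calculation' above runs as follows. For the metric (\ref{staticsoln}), a constant-$Y$ slice has \[ K_{\mu\nu}=\half\,n^Y\partial_Y g_{\mu\nu}. \] At the positive-tension brane the retained bulk lies at $Y<Y^+$, so the outward-pointing normal is $n=-\partial_Y$ and $K^+_{\mu\nu}=-L^{-1}g^+_{\mu\nu}$. Inserting this into (\ref{israel2}) with pure tension, $T^+_{\mu\nu}=-\sigma^+ g^+_{\mu\nu}$ (hence $T^+=-4\sigma^+$), gives $-m_5^3 L^{-1}=-\sigma^+/6$, \iec $\sigma^+=6m_5^3/L$. At the negative-tension brane the outward normal is $+\partial_Y$, and the same steps give $\sigma^-=-6m_5^3/L$.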
A useful generalisation of the static solution above is found by taking the bulk line element to be \[ \label{genstaticsoln} \d s^2 = \d Y^2 + e^{2Y/L} g_{\mu\nu}(x)\d x^\mu \d x^\nu, \] where $g_{\mu\nu}(x)$ is any Ricci-flat four-dimensional metric. (In particular, one could choose the Schwarzschild solution, leading to the braneworld black string \cite{BS}). Again, with the brane tensions appropriately tuned, a solution exists for branes located at arbitrary constant $Y=Y^\pm$. Later, we will use this solution as the starting point in our derivation of the four-dimensional effective theory via the moduli space approximation. \section{Cosmological solutions} Another obvious choice for the bulk metric is the AdS-Schwarzschild black hole. This satisfies the five-dimensional Einstein equations with a negative cosmological constant, and moreover, has the interesting property that the horizon is a three-dimensional manifold of constant curvature, which may be either positive, negative, or zero \cite{Birmingham}. (This is just one example of the considerably richer phenomenology of black holes in higher dimensions). Our suspicions are immediately alerted to the possibility of embedding a three-brane of constant spatial curvature into this bulk geometry, thereby obtaining a cosmological solution of the Randall-Sundrum model. More precisely, the line element for AdS-Schwarzschild takes the form \[ \label{AdS-Schw} \d s^2 = -f(r) \d t^2 + f^{-1}(r)\d r^2+r^2 \d\Omega_3^2, \] where the constant-curvature three-geometry, \[ \label{omega3} \d\Omega_3^2=(1-k\chi^2)^{-1}\d\chi^2 +\chi^2\d\Omega_2^2, \] is either open, closed or flat according to whether $k=-1$, $+1$ or $0$ respectively, and where $\d\Omega_2^2$ denotes the usual line element on an $S^2$. The function $f(r)=k-\mu/r^2+r^2/L^2$, where $L$ is related to the bulk cosmological constant $\Lambda = -6m_5^3/L^2$ as before, and the black hole mass parameter $\mu$ is an arbitrary constant. 
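(For later reference, note that when $k=0$ and $\mu>0$ the function $f(r)$ has a simple zero at $r_H=(\mu L^2)^{1/4}$, the location of the event horizon of the bulk black hole.)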
If $\mu$ is positive we have a black hole in AdS, but we can also choose $\mu$ to be zero, yielding pure AdS, or even negative, describing a naked singularity in AdS. (If $\mu=0$, the different choices for $k$ then correspond respectively to closed, open or flat slicings of AdS). In the next section bar one, we will show how to embed a brane in this bulk geometry to obtain an induced metric of FRW form. The corresponding Friedmann equations then follow from the Israel matching conditions, assuming perfect fluid matter on the branes. First, however, we will reassure ourselves that AdS-Schwarzschild is indeed the most general possible choice for the bulk metric, granted only cosmological symmetry on the branes. \subsection{Generalising Birkhoff's theorem} \label{Birkhoffsection} We begin by recalling Birkhoff's theorem in four dimensions: that the unique spherically symmetric vacuum solution of Einstein's equations is the static \linebreak Schwar\-zschild geometry. The assumption of spherical symmetry completely determines the form of the metric, up to two arbitrary functions of time and the radial coordinate. Application of the Einstein equations then fixes these two functions in terms of the radial coordinate alone, yielding the Schwarzschild solution. Physically, Birkhoff's theorem rules out the emission of gravitational waves by a pulsating spherically symmetric body. Furthermore, in a configuration consisting of a number of infinitely thin, concentric spherical shells of non-vanishing stress-energy, with everywhere else being held in vacuum, the gravitational field at any point is unaltered by the re-positioning of shells both interior and exterior to this point (provided only that interior shells do not cross to the exterior and vice versa). Gravity reduced to (1+1) dimensions by means of a symmetry is therefore completely local. 
In the present, five-dimensional context (where we also have a bulk cosmological constant), a similar theorem applies: the assumption of cosmological symmetry on the branes, amounting to three-dimensional spatial homogeneity and isotropy, constrains the form of the metric sufficiently that, upon application of the Einstein equations, we obtain a unique solution. With the right choice of time-slicing, this solution is moreover static. In analogy with our discussion of concentric shells, the physical import of this generalised Birkhoff theorem is that the bulk geometry seen by a brane is completely unaffected by the presence, and even the motion, of any other branes placed in the same bulk spacetime. (Except, of course, when collisions occur). To derive the theorem explicitly, following \cite{Gregory} (but using different sign conventions), we start by writing the bulk line element, without loss of generality, as \[ \label{vBansatz} \d s^2 = e^{2\sigma}B^{-2/3}(-\d \tau^2 +\d y^2)+B^{2/3}\d \Omega_3^2, \] where $\sigma(\tau, y)$ and $B(\tau, y)$ are arbitrary functions, and $\d\Omega_3^2$ is as defined in (\ref{omega3}). This represents the most general bulk metric consistent with three-dimensional spatial homogeneity and isotropy, after we have made use of our freedom to write the ($\tau$, $y$) part of the metric in a conformally flat form. We have further chosen the exponents with the aim of simplifying the bulk Einstein equations as much as possible. The latter read $G_{ab}=-(\Lambda/m_5^3)\, g_{ab}=(6/L^2)\,g_{ab}$, or equivalently, $R_{ab}=-(4/L^2) \,g_{ab}$. 
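To check the equivalence of these two forms of the field equations, take the trace of $R_{ab}-\half\,g_{ab}R=(6/L^2)\,g_{ab}$ in five dimensions: \[ -\frac{3}{2}\,R=\frac{30}{L^2} \quad\Rightarrow\quad R=-\frac{20}{L^2}, \] and hence $R_{ab}=(6/L^2)\,g_{ab}+\half\,g_{ab}R=-(4/L^2)\,g_{ab}$.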
For the metric (\ref{vBansatz}), we then obtain the following system of partial differential equations: \bea B_{,yy}-B_{,\tau\tau} &=& [12L^{-2} B^{1/3}+6kB^{-1/3}]\,e^{2\sigma}, \\ \sigma_{,yy}-\sigma_{,\tau\tau} &=& [2L^{-2}B^{-2/3}-kB^{-4/3}]\,e^{2\sigma},\\ B_{,yy}+B_{,\tau\tau} &=& 2\sigma_{,\tau}B_{,\tau}+2\sigma_{,y}B_{,y},\\ B_{,\tau y} &=& \sigma_{,y}B_{,\tau}+\sigma_{,\tau}B_{,y}. \eea Switching to the light-cone coordinates \[ u=\half\,(\tau-y), \qquad v =\half\,(\tau+y), \] we find \bea \label{PDE1} -B_{,uv} &=& [12L^{-2} B^{1/3}+6kB^{-1/3}]\,e^{2\sigma}, \\ -\sigma_{,uv} &=& [2L^{-2}B^{-2/3}-kB^{-4/3}]\,e^{2\sigma},\\ B_{,uu} &=& 2\sigma_{,u}B_{,u}, \\ B_{,vv} &=& 2\sigma_{,v}B_{,v}. \eea The latter two equations can then be directly integrated to give \[ e^{2\sigma}=V'(v)B_{,u}=U'(u)B_{,v}, \] where $U'(u)$ and $V'(v)$ are arbitrary nonzero functions, and we will use primes to denote ordinary differentiation wherever the argument of a function is unique. It follows that $B$ and $\sigma$ take the form \[ B=B(U(u)+V(v)), \qquad e^{2\sigma}=B'U'V', \] which reduces the sole remaining partial differential equation (\ref{PDE1}) to the ordinary differential equation \bea B''+(12L^{-2} B^{1/3}+6kB^{-1/3})B' &=& 0 \\ \label{Beqn} \Rightarrow B'+9L^{-2}B^{4/3}+9kB^{2/3}&=& 9\mu, \eea where $\mu$ is a constant of integration. The metric then takes the form \bea \d s^2 &=& B'U'V' B^{-2/3}(-4\d u\d v)+B^{2/3}\d\Omega_3^2\\ &=&-4B^{-2/3}B'\d U \d V+B^{2/3}\d\Omega_3^2. \eea Upon changing coordinates to \[ r=B^{1/3}, \qquad t=3(V-U), \] we recover the AdS-Schwarzschild metric \[ \d s^2 = -f(r)\d t^2 +f^{-1}(r)\d r^2 + r^2 \d \Omega_3^2, \] where, from (\ref{Beqn}), \[ \label{f} f(r)=-\frac{r'}{3}=-\frac{1}{9}\,B^{-2/3}B'=k-\frac{\mu}{r^2}+\frac{r^2}{L^2}. \] \subsection{The modified Friedmann equations} The trajectory of a brane embedded in AdS-Schwarzschild can be specified in parametric form as $t=T(\tau)$ and $r=R(\tau)$, where $\tau$ is the proper time.
The functions $T(\tau)$ and $R(\tau)$ are then constrained by \[ g_{ab}u^a u^b = -f\dot{T}^2+f^{-1}\dot{R}^2 = -1, \] where $u^a=(\dot{T},\,\dot{R},\, \vec{0})$ is the brane velocity, dots denote differentiation with respect to $\tau$, and $f=f(r)$ is as defined in (\ref{f}). Thus \[ \label{Tdotrel} \dot{T}=f^{-1}(f+\dot{R}^2)^{1/2}, \] and the components of the unit normal vector (defined such that $n_a u^a = 0$ and $n_a n^a=1$) are \[ n^\pm_a = \pm\(\dot{R},\,-f^{-1}(f+\dot{R}^2)^{1/2},\, \vec{0}\), \] where the choice of sign corresponds to our choice of which side of the bulk we keep prior to imposing the $\Z_2$ symmetry. If we keep the side for which $r\le R(\tau)$ (leading to the creation of a positive-tension brane), then the normal points in the direction of decreasing $r$ and we must take the positive sign in the expression above. If, instead, we choose to retain the $r\ge R(\tau)$ side of the bulk (resulting in the formation of a negative-tension brane), then the normal points in the direction of increasing $r$ and we must take the negative sign. Either way, the four-dimensional induced metric on the brane is \[ \d s_\pm^2 = -\d \tau^2 + R^2(\tau)\d\Omega_3^2, \] and hence the cosmological scale factor on the brane can be identified with the radial coordinate $R(\tau)$. The motion of the brane through the bulk therefore translates into an apparent cosmological evolution on the brane. To calculate the brane trajectory, we must solve the Israel matching conditions \cite{KrausFRW, IdaFRW}. There are two nontrivial relations; one deriving from the $\tau\tau$ component, and the other from the 3-spatial components of the extrinsic curvature. 
Beginning with the latter, writing $\d \Omega_3^2 = \gamma_{ij}\,\d x^i \d x^j$ for $i,\,j=1,\,2,\,3$, the $ij$ components of the extrinsic curvature tensor are \[ K^\pm_{ij} = \grad_i n^\pm_j = \half\,n^{\pm a} \D_a g_{ij} = \mp\frac{(f+\dot{R}^2)^{1/2}}{R}\,g_{ij}, \] where the 3-metric $g_{ij}= r^2 \gamma_{ij}$, and the superscript $\pm$ indicates whether the brane is of positive or negative tension. Assuming a perfect fluid matter content in addition to the background tension $\sigma^\pm$, the stress-energy tensor on the brane takes the form \[ T^\pm_{ab} = (\rho+p)\, u_a u_b +(p-\sigma^\pm) g^\pm_{ab}, \] where $\rho$ is the energy density of the fluid, $p$ is its pressure, and $g^\pm_{ab}$ is the induced metric on the brane. The $ij$ components of the matching condition (\ref{israel2}) then yield the relation \[ \label{Kij} \pm m_5^3\,(f+\dot{R}^2)^{1/2} = \frac{1}{6}\,(\rho+\sigma^\pm)\,R . \] After squaring this expression, substituting for $f$ using (\ref{f}), replacing the brane tensions by their critical values $\sigma^\pm=\pm 6m_5^3/L$, and relabelling $R(\tau)$ as $a(\tau)$, we find \[ \label{modFriedmann} H^2 = \frac{\dot{a}^2}{a^2} = \pm \frac{\rho}{3 m_5^3 L}+\frac{\rho^2}{36 m_5^6}-\frac{k}{a^2}+\frac{\mu}{a^4}. \] This is known as the modified Friedmann equation \cite{Binetruy}, and is the braneworld counterpart of the usual Friedmann equation. Provided the energy density is much less than the critical tension (as is typically the case for standard matter or radiation at sufficiently late times), the linear term in $\rho$ dominates over the quadratic term. Then, if we set $m_5^3 L = m_4^2$, where $m_4$ is the four-dimensional Planck mass, the modified Friedmann equation is a good approximation to its conventional four-dimensional analogue. (At least on a positive-tension brane where matter gravitates with the correct sign!) 
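In more detail, squaring (\ref{Kij}) gives \[ m_5^6\,(f+\dot{R}^2)=\frac{1}{36}\,(\rho+\sigma^\pm)^2 R^2. \] Substituting $f=k-\mu/R^2+R^2/L^2$ and $\sigma^\pm=\pm 6m_5^3/L$, the $R^2/L^2$ term on the left cancels against the $(\sigma^\pm)^2$ piece on the right, and dividing through by $m_5^6 R^2$ leaves precisely (\ref{modFriedmann}).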
The additional $\mu a^{-4}$ term is known as the dark radiation term, since it scales like radiation but does not interact with standard matter directly, on account of its gravitational origin. A second relation follows from the $\tau\tau$ component of the extrinsic curvature, given by \[ K_{\tau\tau}=K_{ab}u^a u^b = u^a u^b \grad_a n_b = -n_c a^c, \] where the acceleration $a^c$ is defined by $a^c=u^b\grad_b u^c$. Since $a^c u_c=0$, the acceleration can also be written as $a^c = a n^c$, where $a=-K_{\tau\tau}$. Then, since $\D_t$ is a Killing vector of the static background\footnote{For a Killing vector $\xi^c$, $ a n_\xi = an_c \xi^c = a_c \xi^c=\xi^c u^b \grad_b u_c = u^b \grad_b (u_c \xi^c) - u^c u^b \grad_b \xi_c , $ where the last term vanishes by Killing's equation, $\grad_{(b} \xi_{c)}=0$, hence $a n_\xi = u^b \D_b u_\xi = (\d u_\xi/\d \tau)$.}, we have $a=n_{t}^{-1} (\d u_t/\d \tau)$. The $\tau\tau$ component of the Israel matching condition (\ref{israel2}) therefore reads \[ m_5^3\,K^\pm_{\tau\tau}=\pm \frac{m_5^3}{\dot{R}}\,\frac{\d}{\d \tau}\,(f+\dot{R}^2)^{1/2} = -\frac{1}{6}\,(2\rho+3p-\sigma^\pm). \] Combining this with our first relation (\ref{Kij}) then yields the usual cosmological conservation equation \[ \dot{\rho}+3H(\rho+p)=0, \] where $H=\dot{R}/R=\dot{a}/a$ is the Hubble parameter as above. \section{A big crunch/big bang spacetime} Having understood the dynamics of a single brane embedded in AdS-Schwarzschild, we turn now to the behaviour of a pair of positive- and negative-tension branes when embedded in this same bulk spacetime. A useful alternative coordinate system for the bulk may be found by setting $f^{-1}(r)\,\d r^2 = \d Y^2$.
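To verify the forms quoted below for the spatially flat case $k=0$: writing $u=r^2$, the definition $\d Y=f^{-1/2}\,\d r$ gives \[ u_{,Y}=2rf^{1/2}=2\sqrt{u^2/L^2-\mu}, \] which for $\mu>0$ is solved by $u=\sqrt{\mu}\,L\cosh(2Y/L)$, placing the horizon $f=0$ at $Y=0$. A rescaling of $T$ and the $x^i$ to absorb the factors of $\mu$ then yields $b^2(Y)=\cosh(2Y/L)$, and the remaining cases follow similarly.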
Restricting our attention to the spatially flat case in which $k=0$, the bulk line element (\ref{AdS-Schw}) then takes the form \[ \label{Bmetric} \d s^2 = \d Y^2 -N^2(Y) \d T^2 + b^2(Y)\d \vec{x}^2, \] where for pure AdS ($\mu=0$), \[ b^2(Y)=N^2(Y)=e^{2Y/L}; \] for AdS-Schwarzschild ($\mu>0$) with horizon at $Y=0$, \[ \label{AdSSsol} b^2(Y)=\cosh(2Y/L), \qquad N^2(Y) = \frac{\sinh^2(2Y/L)}{\cosh(2Y/L)}; \] and for AdS with a naked singularity at $Y=0$ ($\mu <0$), \[ b^2(Y)=\sinh(2Y/L), \qquad N^2(Y) = \frac{\cosh^2(2Y/L)}{\sinh(2Y/L)}. \] (In the latter two cases, the value of $\mu$ has been absorbed into the definition of $T$ and the spatial $x^i$). For a brane with trajectory $Y=Y^\pm(T)$, the induced metric is \[ \d s_\pm^2 = -(N^2_\pm-Y^{\pm\,2}_{,T})\d T^2 + b^2_\pm\d \vec{x}^2, \] where $N_\pm = N(Y^\pm)$ and similarly $b_\pm = b(Y^\pm)$. The proper time $\tau$ on the brane then satisfies \[ \label{taurel} \d \tau = \sqrt{N_\pm^2-Y^{\pm 2}_{,T}}\,\d T. \] In these new coordinates, the spatial components of the Israel matching conditions, (\ref{Kij}), can be put in the form \[ \label{Newisrael} \frac{N_\pm}{b_\pm} = \(1\pm \frac{\rho_\pm L}{6 m_5^3}\)\sqrt{1-Y_{,T}^{\pm\,2}/N_\pm^2}, \] where to obtain this expression we have performed the coordinate transformation explicitly on a case by case basis, starting from (\ref{Kij}) with the critical tension, then using (\ref{Tdotrel}) (not confusing the $\dot{T}$ referring to the parameterisation $t=T(\tau)$ with our present coordinate $T$) and (\ref{taurel}). \subsection{Trajectories of empty branes} In the special case where the branes are empty ($\rho_\pm =0$) as well as flat, the modified Friedmann equations (\ref{modFriedmann}) imply that the bulk must be AdS-Schwarzschild with $\mu>0$ in order to have moving branes (and hence a nontrivial cosmological evolution).
The relevant bulk solution is then (\ref{AdSSsol}), and the Israel matching condition (\ref{Newisrael}) simplifies to \[ Y^\pm_{\,,T} = \pm \sinh(2Y^\pm/L)\,\cosh^{-{3/2}}(2Y^\pm/L), \] where we have identified the positive root with the positive-tension brane and vice versa. Equivalently, in terms of the dimensionless brane scale factor $b_\pm$ defined in (\ref{AdSSsol}), we have \[ L\, b^\pm_{\,,T}=\pm (1-b^{-4}_\pm). \] This is easily integrated to yield the brane trajectories: \[ \label{traj} \pm \frac{T}{L} = b_\pm -\half\,\tan^{-1}{b_\pm}-\frac{1}{4}\,\ln{\(\frac{b_\pm +1}{b_\pm-1}\)}-C_\pm, \] where the $C_\pm$ are constants of integration. From the form of these expressions, we see that the branes must inevitably collide at some instant in time. Translating our time coordinate so that this collision occurs at $T=0$, the constants of integration are then identical; \[ C_\pm = b_0 -\half\,\tan^{-1}{b_0}-\frac{1}{4}\,\ln{\(\frac{b_0 +1}{b_0-1}\)}, \] where $b_0$ is the joint scale factor of both branes at the collision. (If desired, we could of course set $b_0=1$ by a re-scaling of the brane spatial coordinates). The brane trajectories are illustrated in Figure \ref{branesep}. After emerging from the collision, the positive-tension brane escapes to the boundary of AdS, while the negative-tension brane asymptotes to the event horizon of the bulk black hole. \begin{figure}[!htbp] \begin{center} \includegraphics[width=10cm]{branesep.ps} \caption{The background brane scale factors $b_\pm$ plotted as a function of the Birkhoff-frame time $T$, where $b_{\pm}$ have been normalised to unity at $T=0$. In these coordinates the bulk is AdS-Schwarzschild. The brane trajectories are then determined by integrating the Israel matching conditions, yielding the result (\ref{traj}). 
In the limit as $T\tt \inf$, the negative-tension brane asymptotes to the event horizon of the black hole, while the positive-tension brane asymptotes to the boundary of AdS.} \label{branesep} \end{center} \end{figure} \subsection{Brane-static coordinates} Our analysis of braneworld cosmological dynamics has centred so far on coordinate systems in which the bulk is static and the branes are moving. It is also possible, however, to find alternative coordinates in which the branes are static, and the bulk evolves dynamically with time \cite{TTS}. This has the advantage of simplifying the form of the Israel matching conditions, which will be especially useful when we consider cosmological perturbations. The disadvantage is that the explicit form of the bulk solution in these coordinates is not known. Nevertheless, it is easy to see that such a transformation from bulk-static to brane-static coordinates must exist, as we now show. Starting from the Birkhoff-frame metric (\ref{Bmetric}), with $b$ and $N$ given by (\ref{AdSSsol}), we change coordinates from $Y$ to $Z$, where $\d Z = \d Y /N$ and $Z$ is chosen to be zero at the collision event. Then we have \[ \d s^2 = N^2 (-\d T^2 + \d Z^2)+b^2\d\vec{x}^2, \] where $N$ and $b$ are now functions of $Z$. Passing to lightcone coordinates, $T^\pm = T\pm Z$, we have \[ \d s^2 = N^2(- \d T^+\d T^-) + b^2 \d \vec{x}^2. \] Then, under the lightcone coordinate transformation $\bar{T}^\pm = f_\pm(T^\pm)=\bar{T}\pm\bar{Z}$, \[ \d s^2 = \frac{N^2}{f'_+ f'_-}\,(-\d \bar{T}^2+\d \bar{Z}^2)+b^2 \d \vec{x}^2. \] Setting $y=\bar{Z}$ and $t = \pm \,\exp(\pm\bar{T})$ (to describe post- and pre-collision spacetimes respectively), and defining $n^2(t,y)\, t^2 = N^2/f'_+ f'_-$, the metric takes the general form \[ \label{newmetric} \d s^2 = n^2(t,y)(-\d t^2 + t^2 \d y^2) + b^2(t,y)\d\vec{x}^2. \] Through a suitable choice of the functions $f_\pm$, we can always make the branes static in the new coordinates.
To see this, observe that the new spatial coordinate $y$ satisfies \[ \label{yeqn} y = \half\,[f_+(T+Z)-f_-(T-Z)], \] and hence is a solution of the two-dimensional wave equation. From the general theory of the wave equation, we can always find a solution $y(T,Z)$ for arbitrarily chosen boundary conditions on the branes (which are themselves described by timelike curves $Z=Z_\pm(T)$). In particular, we are free to choose $y=y_+$ on the positive-tension brane and $y=y_-$ on the negative-tension brane, for constant $y_\pm$. (Even after this choice there is additional coordinate freedom, since to completely specify $y(T,Z)$ we need additional Cauchy data, \eg on a constant-time slice). For the case of empty flat branes discussed in the previous subsection, \[ Z(b) = \frac{L}{2}\,\left[\tan^{-1}b+\half\,\ln\(\frac{b-1}{b+1}\)\right], \] where $b^2=\cosh(2Y/L)$. The brane trajectories $b=b_\pm(T)$ are given implicitly by (\ref{traj}), from which, in principle at least, one could determine the trajectories in the form $Z=Z_\pm(T)$. The two equations \[ y_\pm = \half\,[f_+(T+Z_\pm(T))-f_-(T-Z_\pm(T))] \] then determine the necessary coordinate transformation. Equivalently, since the $y_\pm$ are assumed to be constant, differentiating with respect to $T$ gives \[ f'_+(T+Z_\pm(T))(1+V_\pm(T)) = f'_-(T-Z_\pm(T))(1-V_\pm(T)), \] where $V_\pm(T) = Z^\pm_{\,,T}$ are the brane velocities. The solution of these equations is not, however, immediately apparent. Later, when we return to this coordinate system in Chapter \S\,\ref{5dchapter}, we will simply construct the metric functions $n$ and $b$ in (\ref{newmetric}) from scratch, by solving the bulk Einstein equations. \section{Four-dimensional effective theory} In this section, we will use the moduli space approximation to derive the four-dimensional effective theory describing the Randall-Sundrum model at low energies.
Before we begin, however, it is interesting to consider how the nature of the dimensional reduction employed in braneworld theories differs from its counterpart in Kaluza-Klein theory. \subsection{Exact versus inexact truncations} In a dimensional reduction of the Kaluza-Klein type, one expands the higher-dimensional fields in terms of a complete set of harmonics on the compact internal space, before truncating to the massless sector of the resulting four-dimensional theory. Crucially, this truncation is {\it consistent}, in the sense that all solutions of the lower-dimensional truncated theory are also solutions of the higher-dimensional theory. In practice, this means that when one considers the equations of motion for the lower-dimensional massive fields prior to truncation, there are no source terms constructed purely from the massless fields that are to be retained. Thus, if one starts the system off purely in the massless zero mode, it will remain in this mode for all time. The subsequent dynamics are then exactly described by the four-dimensional effective theory. Braneworld gravity, in contrast, does not in general possess a consistent truncation down to four dimensions: the ansatz used in the dimensional reduction, rather than being an exact solution, is typically only an approximate solution of the higher-dimensional field equations\footnote{For an interesting counterexample, see however \cite{Koyama}.}. In effect, the warping of the bulk ensures that the massive higher Kaluza-Klein modes are sourced by the zero mode. If the branes are moving at nonzero speed, then generically these higher Kaluza-Klein modes will be continuously produced. The regime of validity of the four-dimensional effective theory is therefore limited to asymptotic regions in which the branes are moving very slowly, or else are very close together (in which case the warp factor and tension on the branes become negligible and the theory reduces to Kaluza-Klein gravity). 
\subsection{The moduli space approximation} The moduli space approximation applies to any field theory whose equations of motion admit a continuous family of static solutions with degenerate action. This family of static solutions is parameterised by the moduli, which correspond to `flat' directions in configuration space, along which slow dynamical evolution is possible. During this evolution, the excitation along other directions is consistently small, provided these other directions are stable and are characterised by large oscillatory frequencies. The action on moduli space may be obtained from the full action by inserting as an ansatz the functional form of the static solutions, but with the moduli promoted from constants to slowly varying functions of spacetime. Variation with respect to the moduli then yields the equations of motion governing the low energy trajectories of the system along moduli space\footnote{In fact, the moduli space approximation underpins much of our knowledge of the classical and quantum behaviour of solitons, such as magnetic monopoles and vortices \cite{Manton:1981mp, Manton:1985hs}.}. As we have already discussed, the two brane Randall-Sundrum model does indeed possess a one-parameter family of degenerate static solutions (\ref{staticsoln}), parameterised by a single modulus which is the interbrane separation. The spectrum of low energy degrees of freedom therefore consists of a single four-dimensional massless scalar field corresponding to this modulus, as well as the four-dimensional graviton zero mode (captured by promoting $\eta_{\mu\nu}$ in (\ref{staticsoln}) to some generic Ricci-flat $g_{\mu\nu}(x)$, as in (\ref{genstaticsoln})). We will now proceed to calculate the moduli space action for this theory. Two approaches are possible: the first is to allow the interbrane separation to slowly fluctuate whilst preserving the form of the bulk metric given in (\ref{genstaticsoln}). 
In this approach \cite{Brax:2002nt, Garriga:2001ar}, the kinetic terms in the moduli space action derive from the Gibbons-Hawking boundary terms in (\ref{5Daction}). The second method \cite{Garriga:2001ar, Ekpyrotic} is to transform to an alternative coordinate system in which the branes are held at fixed locations, with the relevant modulus being instead encoded into the bulk geometry. The kinetic terms in the moduli space action then derive purely from the bulk Einstein-Hilbert term in (\ref{5Daction}). For ease of computation we will follow this second method here, using a brane-static coordinate system that permits the bulk Ricci scalar to be evaluated by a standard conformal transformation formula. We start with the bulk solution (\ref{genstaticsoln}), re-expressed as \[ \d s^2 = \frac{L^2}{Z^2}\, (\d Z^2 + \gdxdx ), \] where $Z=L\,\exp{(-Y/L)}$, and the branes are located at constant $Z=Z^\pm$, with $Z^+ < Z^-$. The coordinate transformation $Z = (Z^- - Z^+)(z /L) + Z^+$ then maps the positive- and negative-tension branes to $z=0$ and $z=L$ respectively. The bulk line element now takes the form \[ \d s^2 = [z/L+c]^{-2}(\d z^2+\gdxdx), \] where the dimensionless modulus $c= Z^+/(Z^- - Z^+)$, and we have re-absorbed a factor of $L^2/(Z^- - Z^+)^2$ into $g_{\mu\nu}(x)$. To apply the moduli space approximation, we simply promote the modulus $c$ to a spacetime function $c(x)$, yielding the variational ansatz \[ \d s^2 =[z/L+c(x)]^{-2}(\d z^2+\gdxdx). 
\] The five-dimensional Ricci scalar may now be evaluated using the standard conformal transformation formula \cite{Wald} \[ \label{conftransform} \tilde{R} = \Omega ^{-2} \left[R - 2(n-1)g^{ab} \nabla _a \nabla _b \ln{\Omega} - (n-2)(n-1)g^{ab} (\nabla _a \ln{\Omega}) \nabla _b \ln{\Omega} \right], \] where $\tilde{R}$ and $R$ are the Ricci scalars of the two metrics $\tilde{g}_{ab}$ and $g_{ab}$ respectively, which are themselves related by the conformal transformation $\tilde{g}_{ab}=\Omega^2 g_{ab}$, for some smooth strictly positive function $\Omega(x^a)$. In this formula, $n$ denotes the total number of spacetime dimensions, and it is understood that covariant derivatives are to be evaluated with respect to $g_{ab}$. In the present case, $n=5$, and we will take $\Omega = [z /L + c(x)]^{-1}$. The five-dimensional Ricci scalar of the metric $g_{ab}$ (with line element $\d z^2 + \gdxdx$) then reduces to the Ricci scalar of the four-dimensional metric $g_{\mu \nu}$, henceforth denoted by $R$. Thus \bea m_5^3 \int _5 \sqrt{-^5\tilde{g}}\ ^5\tilde{R} &=& m_5^3 \int _5 \sqrt{-g} \,\Omega ^3 \(R + 12g^{ab} (\nabla _a\ln{\Omega}) \nabla _b \ln{\Omega} \) \nonumber \\ && \qquad -8 m_5^3 \int _4 \sqrt{-g} \left[ \Omega ^3 \nabla _z \ln{\Omega} \right]^{z=L}_{z=0} , \eea where we have integrated by parts yielding a boundary term on the branes. (The boundary terms at spatial infinity, $x^\mu \tt \inf$, are assumed to vanish). Integrating over the $z$-coordinate, we obtain \[ m_5^3 L \int _4 \sqrt{-g}\left( \half\,( \Omega ^2 _+ - \Omega ^2 _-)R + (\Omega^4_+-\Omega^4_-)\(3 (\partial{c})^2 -5L^{-2}\)\right) , \] where \[ \Omega_+(x) = c^{-1}(x), \qquad \Omega_-(x) = (1+c(x))^{-1}, \] are the dimensionless scale factors on the $(\pm)$ branes respectively. 
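Two intermediate steps above can be checked numerically: the coordinate change $Z=(Z^--Z^+)(z/L)+Z^+$ with modulus $c=Z^+/(Z^--Z^+)$ should produce the stated warp factor $(z/L+c)^{-2}$, and the $z$-integrals of $\Omega^3$ and $\Omega^5$ should reproduce the $\Omega_\pm$-dependence quoted above. The following Python sketch verifies both; all parameter values are illustrative assumptions.

```python
import random

# Hedged check of two steps in the derivation above (illustrative numbers).
# (1) The map Z = (Z_m - Z_p) z/L + Z_p, with c = Z_p/(Z_m - Z_p), turns the
#     warp factor (L/Z)^2 (dZ/dz)^2 into (z/L + c)^(-2).
# (2) With Omega(z) = (z/L + c)^(-1), the z-integrals give
#       int_0^L Omega^3 dz = (L/2)(Omega_p^2 - Omega_m^2),
#       int_0^L Omega^5 dz = (L/4)(Omega_p^4 - Omega_m^4),
#     where Omega_p = 1/c and Omega_m = 1/(1+c) are the brane scale factors.
L = 1.3
Z_p, Z_m = 1.0, 2.0              # positive-tension brane sits at smaller Z
c = Z_p / (Z_m - Z_p)
Omega_p, Omega_m = 1.0 / c, 1.0 / (1.0 + c)

# (1) coordinate change
dZ_dz = (Z_m - Z_p) / L
random.seed(0)
for _ in range(5):
    z = random.uniform(0.0, L)
    Z = (Z_m - Z_p) * (z / L) + Z_p
    assert abs((L / Z)**2 * dZ_dz**2 - (z / L + c)**(-2)) < 1e-12

# (2) z-integrals, by the composite trapezoid rule
def integral(power, n=200001):
    h = L / (n - 1)
    s = 0.5 * (c**(-power) + (1.0 + c)**(-power))   # endpoint terms
    s += sum((i * h / L + c)**(-power) for i in range(1, n - 1))
    return s * h

I3, I5 = integral(3), integral(5)
print(abs(I3 - (L / 2) * (Omega_p**2 - Omega_m**2)))  # ~0
print(abs(I5 - (L / 4) * (Omega_p**4 - Omega_m**4)))  # ~0
```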
Dispensing with the tildes, the remaining terms in the action (\ref{5Daction}) are now easily computed: \begin{eqnarray} \int _5 \sqrt{-^5g}\,(-2\Lambda ) &=& 3 m_5 ^3L^{-1} \int _4 \sqrt{-g}\,(\Omega_+^4-\Omega_-^4), \\ -\sum_{\pm} \int _\pm \sqrt{-g^\pm}\,\sigma^\pm &=& -6 m_5^3 L^{-1} \int _4 \sqrt{-g}\,(\Omega_+^4-\Omega_-^4), \\ -\sum_\pm \int _\pm \sqrt{-g^\pm}\,2m_5^3 K^\pm &=& 8 m_5^3 L^{-1} \int _4 \sqrt{-g}\, (\Omega_+^4 - \Omega_-^4), \end{eqnarray} where $\Lambda=-6m_5^3/L^2$, and we are assuming the critical tuning $\sigma^\pm = \pm 6m_5^3/L$. The total moduli space action is thus \[ S_{\mathrm{mod}} = m_4^2 \int _4 \sqrt{-g}\,\left(\half\,R(\Omega_+^2-\Omega_-^2)+3(\partial{c})^2(\Omega_+^4-\Omega_-^4)\right), \] where we have set $m_5^3 L = m_4^2$ to recover the four-dimensional Planck mass $m_4$. (Note that the terms proportional to $m_5^3/L$ all sum to zero). The Einstein frame action is then obtained by a further conformal transformation (this time setting $n=4$ in (\ref{conftransform})), yielding \[ S_{\mathrm{mod}} = \half\,m_4^2 \int_4\sqrt{-g}\,\left( R-6(\partial{c})^2(1+2c)^{-2}\right). \] Setting $\phi = -\sqrt{3/2}\,\ln{(1+2c)}$, so that $-\inf < \phi \le 0$, the moduli space action takes the simple form \[\label{4Daction} S_{\mathrm{mod}} = \half\, m_4^2\int_4\sqrt{-g}\left(R-(\partial{\phi})^2\right) , \] corresponding to gravity and a minimally coupled massless scalar field. The induced metrics on the branes (to which matter couples) are given by \[ g_{\mu\nu}^+ = \cosh^2{(\phi/\sqrt{6})}\,g_{\mu\nu}, \qquad g_{\mu\nu}^- = \sinh^2{(\phi/\sqrt{6})}\,g_{\mu\nu} , \] and the proper separation of the branes is \[ d=\int_0^L \d z\, (z/L+c)^{-1} = L \ln{\coth{(-\phi/\sqrt{6})}}, \] so that the branes collide as $\phi\tt -\inf$. In this limit, $g^\pm_{\mu\nu}\approx \exp(-\sqrt{2/3}\,\phi)\,g_{\mu\nu}$, whereupon the four-dimensional effective theory reduces to standard Kaluza-Klein theory. 
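The field redefinition used above can be checked directly: with $\phi=-\sqrt{3/2}\,\ln(1+2c)$ one has $(\partial\phi)^2=6(\partial c)^2(1+2c)^{-2}$, and the proper separation $d=L\ln[(1+c)/c]$ obtained from the $z$-integral agrees with $L\ln\coth(-\phi/\sqrt{6})$. A minimal Python sketch, with illustrative values of $c$ and $L$:

```python
import math

# Hedged check (illustrative numbers) of the redefinition
# phi = -sqrt(3/2) ln(1 + 2c): (dphi/dc)^2 = 6/(1+2c)^2, so that
# 6 (dc)^2/(1+2c)^2 = (dphi)^2; and the proper separation
# d = L ln((1+c)/c) equals L ln coth(-phi/sqrt(6)).
c, L = 0.35, 1.0

def phi_of(c):
    return -math.sqrt(1.5) * math.log(1.0 + 2.0 * c)

phi = phi_of(c)
h = 1e-6
dphi_dc = (phi_of(c + h) - phi_of(c - h)) / (2.0 * h)   # central difference
print(abs(dphi_dc**2 - 6.0 / (1.0 + 2.0 * c)**2))       # ~0

coth = lambda x: math.cosh(x) / math.sinh(x)
d_c = L * math.log((1.0 + c) / c)                # direct z-integral result
d_phi = L * math.log(coth(-phi / math.sqrt(6.0)))  # expressed through phi
print(abs(d_c - d_phi))                          # ~0
```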
The moduli space approximation is expected to be valid whenever the motion of the system is sufficiently slow that non-moduli degrees of freedom are only slightly excited. In practice, this means the typical scale of curvature on the branes must be much less than the AdS curvature scale (\iec four-dimensional derivatives must be small compared to $1/L$). In the case of cosmology, this implies that the four-dimensional Hubble parameter $H$ must be small compared to $1/L$. Equivalently, the matter density on the branes must be much less than the brane tensions, since $\rho \sim m_5^3 L H^2 \ll m_5^3/L \sim \sigma$. Finally, it should be borne in mind that the above derivation of the four-dimensional effective theory via the moduli space approximation, while possessing the virtue of simplicity, is far from rigorous. In particular, performing the variation of the full five-dimensional action with a restricted ansatz is not guaranteed to converge on correct solutions of the field equations\footnote{In fact, the moduli space ansatz, which may equivalently be written $\d s^2 = T(x)^2 \d Y^2 + \exp(2T(x)Y/L)\,g_{\mu\nu}(x)\,\d x^\mu \d x^\nu$, is not even sufficiently general to describe linearised perturbations about a flat background \cite{CGR}.}. Nevertheless, the validity of the four-dimensional effective action (\ref{4Daction}) up to quadratic order in derivatives can be justified by more rigorous (yet more laborious) techniques in which the bulk field equations are solved in a controlled expansion, before backsubstituting into the action and integrating out the extra dimension \cite{Shiromizu:2002qr, K&S, K&S2, KSValidity}. Alternatively, one could start out with a more general metric ansatz for the bulk, then attempt to restrict the form of the metric functions via the bulk field equations and the requirement that the dimensionally reduced action be in Einstein frame \cite{GaugesinBulkI}. 
In the next chapter we will take a fresh approach, identifying a hidden symmetry of the four-dimensional effective action that completely constrains its form up to quadratic order in derivatives. \chapter{Conformal symmetry of braneworld effective actions} \label{confsymmchapter} \begin{flushright} \begin{minipage}{11.5cm} \small {\it \noindent In my work, I have always tried to unite the true with the beautiful; but when I had to choose one or the other, I usually chose the beautiful. } \begin{flushright} \noindent Hermann Weyl \end{flushright} \end{minipage} \end{flushright} In this chapter, we show how the brane construction automatically implies conformal invariance of the four-dimensional effective theory. This explains the detailed form of the low energy effective action previously found using other methods. The AdS/CFT correspondence may then be used to improve the effective description. We show how this works in detail for a positive- and negative-tension brane pair. \section{Derivation of the effective action} For the general, non-static solution to the Randall-Sundrum model (\ref{5Daction}) it is convenient to choose coordinates in which the bulk metric takes the form \[ \label{bulk_metric} \d s^2 = \d Y^2 + g_{\mu\nu}(x,Y)\dxdx . \] The brane loci are now $Y^\pm (x)$ and the metric induced on each brane is \[ \label{brane_metric} \gpm (x) = \D _\mu Y^\pm (x) \D _\nu Y^\pm (x) + \g (x,Y^\pm (x)). \] At low energies we expect the configuration to be completely determined by the metric on one brane and the normal distance to the other brane, $Y^+ - Y^-$. That is, we are looking for a four-dimensional effective theory consisting of gravity plus one physical scalar degree of freedom. What we will now show is that this theory may be determined on symmetry grounds alone. (See also \cite{4d1} for related ideas). The full five-dimensional theory is diffeomorphism invariant. 
This invariance includes the special set of transformations \[ \label{condn1} Y' = Y + \xi ^5(x), \ \ \ x'^\mu = x^\mu + \xi^\mu (x,Y), \] with $\xi^\mu (x,Y)$ satisfying \[ \label{condn} \D_Y \xi ^\mu (x,Y) = -g^{\mu\nu}(x,Y)\D _\nu\xi ^5 (x), \] which preserve the form (\ref{bulk_metric}) of the metric. The transformation (\ref{condn1}) displaces the $Y^\pm (x)$ coordinates of the branes, \[ Y^\pm (x) \tt Y^\pm (x) + \xi ^5-\xi^\sigma \D _\sigma Y^\pm (x), \] and alters $g^{\mu\nu}(x,Y)$ via the usual Lie derivative. Using (\ref{condn}), one finds that the combined effect on each brane metric (\ref{brane_metric}) is the four-dimensional diffeomorphism \[ x'^\mu = x^\mu + \xi^\mu (x, Y^\pm (x)). \] In fact, by departing from the gauge (\ref{bulk_metric}) away from the branes, we can construct a five-dimensional diffeomorphism for which $\xi^\mu$ vanishes on the branes. To see this, we can set \[ \xi^\mu (x,Y)=\xi^\mu _0(x,Y)-f(Y)\xi^\mu _0 (x,Y^+), \] where $\xi^\mu _0 (x,Y)$ is the solution to (\ref{condn}) which vanishes on the negative-tension brane and $f(Y)$ is a function chosen to satisfy $f(Y^-) = 0$, $f(Y^+) = 1$, and $f'(Y^-)=f'(Y^+)=0$ for all $x$. We conclude that the four-dimensional theory, in which $Y^\pm (x)$ are represented as scalar fields, must possess a local symmetry $\xi ^5(x)$ acting nontrivially on these fields. The dimensionless exponentials $\psi^\pm (x) \equiv \exp( Y^\pm (x)/\L)$ transform as conformal scalars: $\psi ^\pm (x) \tt \exp(\xi ^5 /\L)\,\psi^\pm (x)$, while the induced brane metrics $\gpm$ remain invariant. The only local, polynomial, two-derivative action possessing such a symmetry involves gravity with two conformally coupled scalar fields. After diagonalising and re-scaling the fields, this may be expressed as \[ m^2\int \dx \sqrt{-g} \left( c_+ \psi^+ \Delta \psi^+ + c_- \psi^-\Delta \psi^-\right) , \] where $\Delta \equiv \Box -\frac{1}{6}R$, $c_\pm = \pm 1$, and $m$ is a constant with dimensions of mass. 
It should be stressed that the metric $\g$ appearing in this expression is that of the effective theory, which is in general different to $\gpm$, the induced metric on the branes. Potential terms are excluded by the fact that flat branes, with arbitrary constant $\psi^\pm$, are solutions of the five-dimensional theory, \ie the $\psi^\pm$ are moduli. By construction, the theory possesses local conformal invariance under \[ \label{conf_transf} \psi^\pm \tt \Omega (x)^{-1}\psi^\pm, \ \ \ \g \tt \Omega (x)^2 \g . \] For $c_+=-c_-$, without loss of generality we can set $c_+=-1$. Provided $(\psi^+)^2 - (\psi^-)^2 > 0$, we obtain the usual sign for the Einstein term, so there are no ghosts in the gravitational sector. We can then set $\psi^+=A\cosh{\phi/\sqrt{6}}$ and $\psi^- = -A\sinh{\phi/\sqrt{6}}$. The field $A$ has the wrong sign kinetic term, but it can be set equal to a constant by a choice of conformal gauge. Therefore, in this case there are no physical propagating ghost fields. In contrast, a similar analysis reveals that when $c_+=c_-$ the theory possesses physical ghosts, either in the gravitational wave sector (wrong sign of $R$) or in the scalar sector, no matter how the conformal gauge is fixed. We conclude that the low energy effective action must be \[ \label{conf_action} m^2 \int \dx \sqrt{-g} \left(- \psi^+ \Delta \psi^+ + \psi^-\Delta \psi^-\right). \] We know from the above argument that the brane metrics are conformally invariant: from this, and from general covariance, they must equal $\g$ times homogeneous functions of order two in $\psi^+$ and $\psi^-$. Yet in the model under consideration, we have static solutions $\gpm = \exp(2Y^\pm/\L)\, \eta _{\mu\nu}$ for all $Y^+>Y^-$. The only choice consistent with this, and with $(\psi^+)^2 - (\psi^-)^2 > 0$, is \[ \label{g_eqns} \gpm = \frac{(\psi^\pm)^2}{6}\,\g , \] which is a conformally invariant equation. (The numerical factor has been introduced for later convenience). 
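The absence of propagating ghosts can be made concrete with a small numerical check of the diagonalisation: with $\psi^+=A\cosh(\phi/\sqrt{6})$ and $\psi^-=-A\sinh(\phi/\sqrt{6})$, the gravitational coefficient $(\psi^+)^2-(\psi^-)^2$ collapses to $A^2$, and the kinetic terms obey $(\partial\psi^+)^2-(\partial\psi^-)^2=(\partial A)^2-(A^2/6)(\partial\phi)^2$, so only $A$ carries the wrong-sign kinetic term. The Python sketch below tests these identities at illustrative sample values, with single numbers standing in for gradient components:

```python
import math

# Hedged numerical check of the diagonalisation quoted above:
# psi_+ = A cosh(phi/sqrt(6)), psi_- = -A sinh(phi/sqrt(6)).
# Expected: (psi_+)^2 - (psi_-)^2 = A^2 and
# (d psi_+)^2 - (d psi_-)^2 = (dA)^2 - (A^2/6)(d phi)^2.
# dA, dphi stand in for gradient components; all numbers are illustrative.
s6 = math.sqrt(6.0)
A, phi = 1.8, -0.9
dA, dphi = 0.3, 0.7

ch, sh = math.cosh(phi / s6), math.sinh(phi / s6)
psi_p, psi_m = A * ch, -A * sh
dpsi_p = dA * ch + A * sh * dphi / s6        # chain rule on psi_+
dpsi_m = -dA * sh - A * ch * dphi / s6       # chain rule on psi_-

print(abs(psi_p**2 - psi_m**2 - A**2))                               # ~0
print(abs(dpsi_p**2 - dpsi_m**2 - (dA**2 - A**2 * dphi**2 / 6.0)))   # ~0
```

The cross-terms in $\partial A\,\partial\phi$ cancel identically, which is why only the two diagonal kinetic terms survive.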
\subsection{Conformal gauges} It is instructive to fix the conformal gauge in several ways. First, set $\psi^+=\sqrt{6}$, so that $\g = \gp$ and the metric appearing in (\ref{conf_action}) is actually the metric on the positive-tension brane. The action (\ref{conf_action}) then consists of Einstein gravity (with Planck mass $m$) plus a conformally invariant scalar field $\psi^-$ which has to be smaller than $\sqrt{6}$: \[ \label{+action} m^2 \int \dx \sqrt{-g^+}\left( (1-\frac{1}{6}\,(\psi^-)^2)\,R^+-(\D \psi^-)^2 \right). \] Changing variables to $\chi = 1 - (\psi^-)^2/6$ produces the alternative form \cite{K&S, K&S2} \[ m^2 \int \dx \sqrt{-g^+}\left(\chi R^+ - \frac{3}{2(1-\chi)}(\D \chi)^2 \right). \] Conversely, if we set $\psi^-=\sqrt{6}$, then $\g$ is the metric on the negative-tension brane and $\psi^+$, which has to be larger than $\sqrt{6}$, is a conformally coupled scalar field. (The relative sign between the gravitational and kinetic terms in the action is now wrong, however, and so this gauge possesses ghosts). If we add matter coupling to the metric on the positive- and negative-tension branes, we find that matter on the negative-tension brane couples in a conformally invariant manner to the positive-tension brane metric and the field $\psi^-$, and conversely for matter on the positive-tension brane. (Note that we are not implying conformal invariance of the matter itself: it is simply that matter coupled to the brane metrics will be trivially invariant under the transformation (\ref{conf_transf}), as the brane metrics are themselves invariant). A third conformal gauge maps the theory to Einstein gravity with a minimally coupled scalar field $\phi$, taking values in the range $-\infty <\phi <0$. Starting from (\ref{conf_action}), we can set $\psi^+=A\cosh{\phi/\sqrt{6}}$ and $\psi^- = -A\sinh{\phi/\sqrt{6}}$, as noted earlier, yielding the action \[ \label{EFaction1} -m^2\int \dx \sqrt{-g}\left(A\Delta A+\frac{A^2}{6}(\D \phi )^2\right). 
\] Now, choosing the conformal gauge $A=\sqrt{6}$, we find \[ \label{EFaction2} m^2 \int \dx \sqrt{-g}\left(R+\phi\Box\phi\right) , \] \iec gravity plus a minimally coupled massless scalar. In this gauge, equations (\ref{g_eqns}) read \[ \label{g_eqns2} \gp = \cosh ^2 (\phi /\sqrt{6})\,\g, \qquad \gm = \sinh ^2 (\phi/\sqrt{6})\,\g, \] in agreement with explicit calculations in the moduli space approach \cite{Ekpyrotic}. The present treatment also goes some way towards explaining the moduli space results. For example, the fact that the moduli space metric is flat is seen to be a consequence of conformal invariance. Specifically, for solutions with cosmological symmetry, one can pick a conformal gauge in which the metric is static. The scale factors on the two branes are determined by $\psi^\pm$. From (\ref{conf_action}), the moduli space metric is just two-dimensional Minkowski space. A couple of results for conformal gravity follow from the above discussion. Firstly, in the $\psi^+ = \sqrt{6}$ gauge, we have $\psi^-=-\sqrt{6}\tanh{\phi/\sqrt{6}}$. Any solution for a minimally coupled scalar $\phi$, with metric $\g$, thus yields a corresponding solution for a conformally coupled scalar $\psi^-$, with $|\psi^-| < \sqrt{6}$ and metric $\gp$ as in (\ref{g_eqns2}), and vice versa. Secondly, in the $\psi^-=\sqrt{6}$ gauge, we have $\psi^+=-\sqrt{6}\coth{\phi/\sqrt{6}}$, hence we may also obtain a solution for a conformally coupled scalar $\psi^+$, with $|\psi^+|>\sqrt{6}$ and metric $\gm$ given in (\ref{g_eqns2}). Solutions to conformal scalar gravity therefore come in pairs: if $\g$ and $\psi$ are a solution, then $(\psi^2/6)\,\g$ and $\tilde{\psi}=6/\psi$ constitute another solution. In terms of branes, this merely states that if $\gp$ and $\psi^-$ are known in the gauge $\psi^+=\sqrt{6}$, then it is possible to reconstruct $\gm$ and $\psi^+$ in the gauge $\psi^-=\sqrt{6}$. 
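Two of the statements above admit quick numerical checks: the change of variables $\chi=1-(\psi^-)^2/6$ preserves the kinetic term, and the solution pairing sends $\psi^-=-\sqrt{6}\tanh(\phi/\sqrt{6})$ to $6/\psi^-=-\sqrt{6}\coth(\phi/\sqrt{6})$. A Python sketch with illustrative values:

```python
import math

# Hedged checks of two statements above, at illustrative sample values.
# (1) Under chi = 1 - psi^2/6 one has dchi = -(psi/3) dpsi, so
#     (3/(2(1-chi))) (dchi)^2 = (dpsi)^2: the kinetic terms of the two forms
#     of the action agree.
# (2) Solution pairing: psi_minus = -sqrt(6) tanh(phi/sqrt(6)) maps under
#     psi -> 6/psi to -sqrt(6) coth(phi/sqrt(6)).
s6 = math.sqrt(6.0)

psi = 1.2                          # any value with |psi| < sqrt(6)
chi = 1.0 - psi**2 / 6.0
dchi_dpsi = -psi / 3.0             # analytic derivative of chi(psi)
print(abs((3.0 / (2.0 * (1.0 - chi))) * dchi_dpsi**2 - 1.0))   # ~0

phi = -0.8
psi_minus = -s6 * math.tanh(phi / s6)
psi_plus = -s6 / math.tanh(phi / s6)    # -sqrt(6) coth(phi/sqrt(6))
print(abs(6.0 / psi_minus - psi_plus))                          # ~0
```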
\subsection{Generalisation to other models} The argument given above establishing the conformal symmetry of the effective action is of a very general nature: the only step at which we specialised to the Randall-Sundrum model was in the identification of the brane metrics in terms of the effective theory variables (\ref{g_eqns}). This required merely the knowledge of a one-parameter family of solutions. To derive the effective theory for other brane models, it is only necessary to generalise this last step. For example, in the case of tensionless branes compactified on an $S^1/\Z_2$, the bulk warp is absent, and so we know that a family of static solutions is given by the ground state of Kaluza-Klein theory (in which all fields are independent of the extra dimension, hence the additional $\Z_2$ orbifolding present in the case of tensionless branes is irrelevant). Ignoring the gauge fields, the Kaluza-Klein ansatz for the five-dimensional metric is \[ \d s^2 = e^{2\sqrt{2/3}\,\phi(x)} \d y^2 + e^{-\sqrt{2/3}\,\phi(x)}\gdxdx , \] where $\phi$ and $\g$ extremise an action identical to (\ref{EFaction2}). For branes located at constant $y$, the induced metrics are $\exp(-\sqrt{2/3}\,\phi)\,\g$, independent of $y$. Using the effective action in the form (\ref{EFaction1}), conformal invariance of the induced brane metrics dictates that \[ \g ^\pm = A^2 f^\pm (\phi)\, \g , \] for some unknown functions $f^\pm$. Upon fixing the conformal gauge to $A=\sqrt{6}$, one recovers the action (\ref{EFaction2}), which is just the standard Kaluza-Klein low energy effective action. The functions $f^\pm$ are thus both equal to $(1/6)\,\exp(-\sqrt{2/3}\,\phi)$, and we have \[ \g^\pm = e^{-\sqrt{2/3}\,\phi} \g = \frac{1}{6}\, (\psi^+ + \psi^-)^2 \g. 
\] Note that this is consistent with the $\phi \tt -\inf$ limit of the Randall-Sundrum theory (\ref{g_eqns2}): as the brane separation goes to zero, the warping of the bulk becomes negligible and the Randall-Sundrum theory tends to the Kaluza-Klein limit \cite{TTS}. Conceivably, arguments similar to the above might also be used to derive the low energy four-dimensional effective action for the Ho{\v r}ava-Witten model (see Chapter \S\,\ref{Mthchapter}), although we will not pursue this connection further here. \subsection{Cosmological solutions} We now turn to a discussion of the general cosmological solutions representing colliding branes. We choose a conformal gauge in which the metric is static, and all the dynamics are contained in $\psi^\pm$. For flat, open and closed spacetimes, the spatial Ricci scalar is given by $R=6k$, where $k = 0$, $-1$ and $+1$ respectively. The action (\ref{conf_action}) yields the equations of motion \bea \ddot{\psi}^\pm &=& -k\psi^\pm , \\ (\dot{\psi}^+)^2-(\dot{\psi}^-)^2 &=& -k\left( (\psi^+)^2-(\psi^-)^2\right). \eea For $k=0$, we have the solutions \[ \psi^+ = -At+B, \qquad \psi^-=At+B, \qquad t<0 \] representing colliding flat branes. It is natural to match $\psi^+$ to $\psi^-$ across the collision, and vice versa, to obtain $\psi^\pm = \pm At+B$ for $t>0$. This solution then describes two branes which collide and pass through each other, with the positive-tension brane continuing to a negative-tension brane, and vice versa \cite{Seiberg, TTS}. For $k=-1$, we have the three solutions \bea \begin{array}{cccc} \psi^{(1)}\, = &A\sinh{t}; & A\cosh{t}; & A\,e^t, \\ \psi^{(2)}\, = &A\sinh{(t-t_0)}; & A\cosh{(t-t_0)}; & A\,e^{t-t_0}, \end{array} \eea where we set $\psi^+$ equal to the greater, and $\psi^-$ equal to the lesser, of $\psi^{(1)}$ and $\psi^{(2)}$. For $k = +1$, we find the bouncing solutions $\psi^{(1)} = A\sin{t}$, $\psi^{(2)} = A\sin{(t-t_0)}$. 
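Both the Kaluza-Klein limit identity and the $k=-1$ cosmological solutions can be spot-checked numerically. The Python sketch below confirms, at illustrative parameter values, that $(1/6)(\psi^++\psi^-)^2=e^{-\sqrt{2/3}\,\phi}$ in the $A=\sqrt{6}$ gauge, and that the $\sinh$ solutions satisfy $\ddot\psi=-k\psi$ together with the constraint:

```python
import math

# Two hedged numerical checks, with illustrative values.
# (1) Kaluza-Klein limit: with A = sqrt(6),
#     (1/6)(psi_+ + psi_-)^2 = exp(-sqrt(2/3) phi).
# (2) k = -1 solutions: psi = A sinh(t) and A sinh(t - t0) satisfy
#     psi'' = -k psi, and the pair obeys
#     (psi_+')^2 - (psi_-')^2 = -k [(psi_+)^2 - (psi_-)^2].
s6 = math.sqrt(6.0)

phi = -1.4
psi_p = s6 * math.cosh(phi / s6)
psi_m = -s6 * math.sinh(phi / s6)
print(abs((psi_p + psi_m)**2 / 6.0
          - math.exp(-math.sqrt(2.0 / 3.0) * phi)))     # ~0

k, A, t0, t = -1, 1.3, 0.6, 1.1
f1 = lambda t: A * math.sinh(t)
f2 = lambda t: A * math.sinh(t - t0)
h = 1e-4
second = lambda f, t: (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

# equations of motion, by finite differences
print(abs(second(f1, t) + k * f1(t)), abs(second(f2, t) + k * f2(t)))  # ~0

# the constraint: cosh^2 - sinh^2 = 1 on each brane, so the pieces cancel
constraint = (A * math.cosh(t))**2 - (A * math.cosh(t - t0))**2 \
             + k * (f1(t)**2 - f2(t)**2)
print(abs(constraint))   # ~0
```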
In the absence of matter on the negative-tension brane, the $\sin$ and $\sinh$ solutions are singular when the negative-tension brane scale factor $a_-$ vanishes. Matter on the negative-tension brane scaling faster than $a_-^{-4}$ (\eg scalar kinetic matter), however, causes the solution for $\psi^-$ to bounce smoothly at positive $a_-$, because $\psi^-$ has a positive kinetic term. This bounce is perfectly regular. In contrast, the big crunch/big bang singularity, occurring when the positive- and negative-tension branes collide, is unavoidable. The above example illustrates a general feature of the brane pair effective action. If the positive- and negative-tension brane solutions are continued through the collision without re-labelling (this means that the orientation of the warp must flip), then the four-dimensional effective action changes sign. The re-labelling restores the conventional sign. The same phenomenon is seen in string theories obtained by dimensionally reducing eleven-dimensional supergravity, when the eleventh dimension collapses and reappears. For a discussion of brane world black hole solutions with intersecting branes, see \cite{EFT_BH}. \section{AdS/CFT} Recently, it has been shown that the AdS/CFT correspondence \cite{MaldacenaAdSCFT, WittenADSCFT} provides a powerful approach to the understanding of braneworlds. For a single positive-tension brane, the four-dimensional effective description comprises simply Einstein gravity plus two copies of the dual CFT \cite{deHaro} (as the $\Z_2$ symmetry implies there are two copies of the bulk). Notable successes of this program include reproducing the $O(1/r^3)$ corrections to Newton's law on the brane \cite{Duff&Liu}, and reproducing the modified Friedmann equation induced on the brane \cite{Gubser, Shiromizu&Ida}. Consider, for simplicity, a single positive-tension brane containing only radiation. 
Taking the trace of the effective Einstein equations, we find \[ \label{trace} -R = 2(8\pi G_4)<T_{CFT}> , \] as the stress tensor of the radiation is traceless. The trace anomaly of the dual $\mathcal{N}=4$ $SU(N)$ super-Yang Mills theory must then be evaluated. With the help of the AdS/CFT dictionary, this quantity may be calculated for the case of cosmological symmetry as shown in \cite{Henningson&Skenderis}, giving \[ \label{trace2} -R=\frac{L^2}{4}\left(R_{\mu\nu}R^{\mu\nu}-\frac{1}{3}R^2\right). \] Here, the usual $R^2$ counterterm has been added to the action in order to eliminate the $\Box R$ term in the trace, thus furnishing second order equations of motion. For a cosmological metric, with scale factor $a$, this becomes \[ \label{hdoteqn} 2(\ddot{a}a+ka^2)=L^2(k+h^2)\dot{h}, \] where $h \equiv \dot{a}/a$ and the dot denotes differentiation with respect to conformal time. Re-expressing the left-hand side as $h^{-1}\partial _t(\dot{a}^2+ka^2)$, we can then integrate to obtain \[ \label{hinteqn} h^2+k = \frac{1}{a^2}\,(B-\frac{1}{4}\,k^2 L^2) + \frac{1}{4}\,(h^2+ k)^2 \,{L^2\over a^2}, \] where $B$ is an integration constant. Now, we can expect to recover Einstein gravity on the brane in the limit when $L\rightarrow 0$, with other physical quantities held fixed. Expanding all terms in powers of $L$, at leading order we must obtain four-dimensional Einstein gravity, for which $8\pi G_4 = 8\pi G_5/L$. We therefore set $B\sim (8 \pi G_5 \rho_0/3L) +C$, where $\rho=\rho_0/a^4$ is the energy density of conventional radiation, and $C$ is a constant independent of $L$ as $L\rightarrow 0$. From (\ref{hinteqn}), we then obtain the first correction to $h^2+k$, namely \[ h^2+k= \frac{8 \pi G_5 \rho_0}{3La^2} + \frac{C}{a^2}+ \frac{(8\pi G_5\rho_0)^2}{36a^6} + O(L), \] which, thanks to the CFT contribution, now includes the well-known dark energy and $\rho^2$ corrections \cite{Binetruy}. 
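The small-$L$ expansion leading to the dark radiation and $\rho^2$ corrections can be verified numerically: solving (\ref{hinteqn}) exactly for $h^2+k$ (it is quadratic) and comparing with the truncated expansion, the residual should shrink linearly with $L$. The Python sketch below does this for illustrative values of the constants, writing $P$ as shorthand for $8\pi G_5\rho_0$:

```python
import math

# Hedged numerical check of the small-L expansion of
#   h^2 + k = (B - k^2 L^2/4)/a^2 + (L^2/4)(h^2+k)^2/a^2,  B = P/(3L) + C,
# where P is shorthand for 8*pi*G_5*rho_0. Claimed expansion:
#   h^2 + k = P/(3 L a^2) + C/a^2 + P^2/(36 a^6) + O(L).
# All parameter values are illustrative.
P, C, a, k = 0.5, 0.2, 1.0, 1

def exact(L):
    # X = h^2 + k solves q X^2 - X + c0 = 0; take the small root,
    # written in a numerically stable (cancellation-free) form
    B = P / (3.0 * L) + C
    q = L**2 / (4.0 * a**2)
    c0 = (B - k**2 * L**2 / 4.0) / a**2
    return 2.0 * c0 / (1.0 + math.sqrt(1.0 - 4.0 * q * c0))

def approx(L):
    return P / (3.0 * L * a**2) + C / a**2 + P**2 / (36.0 * a**6)

err1 = abs(exact(1e-3) - approx(1e-3))
err2 = abs(exact(1e-4) - approx(1e-4))
print(err1, err2, err1 / err2)   # residual is O(L): ratio should be ~10
```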
It should come as no surprise that the AdS/CFT correspondence only approximates the Randall-Sundrum setup up to first nontrivial order in an expansion in $L$. The AdS/CFT scenario involves string theory on $AdS_5\times S_5$. Since $\alpha '\sim \ell_s^2 \sim L^2$ at fixed 't Hooft coupling, and the masses squared of the Kaluza-Klein modes on the $S_5$ are of order $1/L^2$, we expect nontrivial corrections at second order in an expansion in $L$. Furthermore, one can show from the AdS/CFT dictionary that, in order for the $\rho^2$ term to dominate in the modified Friedmann equation, the temperature of the conventional radiation must be greater than the Hagedorn temperature of the string. Clearly, the AdS/CFT correspondence cannot describe this situation. Let us now consider the extension of the AdS/CFT approach to the case of a pair of positive- and negative-tension branes, using the ideas developed earlier in this chapter. The effective action for a single positive-tension brane is \[ \label{AdS/CFTaction} \frac{1}{16\pi G_4}\int\dx \sqrt{-g^+}\,R^+ + 2W_{CFT}[g^+] + S_{m}[g^+], \] where $\gp$ is the induced metric on the brane, $S_{m}$ is the brane matter action, and $W_{CFT}$ is the CFT effective action (including the appropriate $R^2$ counterterms). Substituting now for $\gp$ using (\ref{g_eqns}), the Einstein-Hilbert term $\sqrt{-g^+}\,R^+$ becomes $-\sqrt{-g}\,\psi^+\Delta\psi^+$. A negative-tension brane may then be incorporated as follows: \bea \label{S2} \frac{1}{16\pi G_4}\int\dx\sqrt{-g}\,(-\psi^+ \Delta \psi^+ + \psi^- \Delta \psi^-) + 2W_{CFT}[g^+] &&\nonumber \\ -2W_{CFT}[g^-]+S_{m}[g^+]+S_{m}[g^-]. \ \ \ \ \ \ \ \ \ \ \ \eea The action for the positive- and negative-tension brane pair must take this form in order to correctly reproduce the Friedmann equation for each brane. 
To see this, consider again the conformal gauge in which the effective theory metric is static and all the dynamics are contained in $\psi^\pm$, which play the role of the brane scale factors. Variation with respect to the $\psi^\pm$ yields the scalar field equations \[ (\psi^\pm)^{-3}\Delta \psi^\pm = 2(8\pi G_4)<T_{CFT}^\pm >, \] where the trace anomaly must be evaluated on the induced brane metric $\gpm$, but $\Delta$ is evaluated on the effective metric $\g$. The left-hand side evaluates to $-(\psi^\pm)^{-3}(\ddot{\psi}^\pm+k\psi^\pm)$. After identifying $\psi^\pm/\sqrt{6}$ with $a_\pm$ according to (\ref{g_eqns}), we recover equation (\ref{hdoteqn}), upon dropping the plus or minus label. From the necessity of recovering the Friedmann equation on each brane, we may also deduce that cross-terms in the action between $\psi^+$ and $\psi^-$ are forbidden. The signs associated with the gravitational parts of the action are required to achieve consistency with (\ref{conf_action}). Consequently, the relative sign between the gravity plus CFT part of the action, and that of the matter, is reversed for the negative-tension brane, consistent with the modified Friedmann equations \cite{Binetruy} \[ \label{FRW} H^2_\pm = \pm \frac{8\pi G_5\rho_\pm}{3\L} + {(8\pi G_5\rho_\pm)^2 \over 36} -\frac{k}{a^2}+ \frac{C}{a^4}, \] where plus and minus label the positive- and negative-tension branes, and $C$ is again a constant representing the dark radiation. In summary, we have elucidated the origin of conformal symmetry in brane world effective actions, and shown how this determines the effective action to lowest order. When combined with the AdS/CFT correspondence, our approach also recovers the first corrections to the brane Friedmann equations. 
\chapter{Solution of a braneworld big crunch/big bang cosmology} \label{5dchapter} \begin{flushright} \begin{minipage}{11cm} \small {\it \noindent We can lick gravity, but sometimes the paperwork is overwhelming.} \begin{flushright} \noindent Wernher von Braun \end{flushright} \end{minipage} \end{flushright} In this chapter we solve for the cosmological perturbations in a five-dimensional background consisting of two separating or colliding boundary branes, as an expansion in the collision speed $V$ divided by the speed of light $c$. Our solution permits a detailed check of the validity of the four-dimensional effective theory in the vicinity of the event corresponding to the big crunch/big bang singularity. We show that the four-dimensional description fails at the first nontrivial order in $(V/c)^2$. At this order, there is nontrivial mixing of the two relevant four-dimensional perturbation modes (the growing and decaying modes) as the boundary branes move from the narrowly-separated limit described by Kaluza-Klein theory to the well-separated limit where gravity is confined to the positive-tension brane. We comment on the cosmological significance of the result and compute other quantities of interest in five-dimensional cosmological scenarios. \section{Introduction} Two limiting regimes can be distinguished in which the dynamics of the Randall-Sundrum model simplify: the first is the limit in which the interbrane separation is much greater than the AdS curvature radius, and the second is the limit in which the interbrane separation is far smaller. When the two boundary branes are very close to one another, the warping of the five-dimensional bulk and the tension of the branes become irrelevant. In this situation, the low energy modes of the system are well-described by a simple Kaluza-Klein reduction from five to four dimensions, \iec gravity plus a scalar field (the $\Z_2$ projections eliminate the gauge field zero mode). 
When the two branes are widely separated, however, the physics is quite different. In this regime, the warping of the bulk plays a key role, causing the low energy gravitational modes to be localised on the positive-tension brane \cite{RSII,garriga,giddings}. The four-dimensional effective theory describing this new situation is nevertheless identical, consisting of Einstein gravity and a scalar field, the radion, describing the separation of the two branes. In this chapter, we study the transition between these two regimes -- from the naive Kaluza-Klein reduction to localised Randall-Sundrum gravity -- at finite brane speed. In the two asymptotic regimes, the narrowly-separated brane limit and the widely-separated limit, the cosmological perturbation modes show precisely the behaviour predicted by the four-dimensional effective theory. There are two massless scalar perturbation modes; in longitudinal gauge, and in the long wavelength ($k\rightarrow 0$) limit, one mode is constant and the other decays as $t_4^{-2}$, where $t_4$ is the conformal time. In the four-dimensional description, these two perturbation modes are entirely distinct: one is the curvature perturbation mode; the other is a local time delay to the big bang. Nonetheless, we shall show that in the five-dimensional theory, at first nontrivial order in the speed of the brane collision, the two modes mix. If, for example, one starts out in the time delay mode at small $t_4$, one ends up in a mixture of the time delay and curvature perturbation modes as $t_4 \rightarrow \infty$. Thus the two cosmological perturbation modes -- the growing and decaying adiabatic modes -- mix in the higher-dimensional braneworld setup, a phenomenon which is prohibited in four dimensions. The mode-mixing occurs as a result of a qualitative change in the nature of the low energy modes of the system. At small brane separations, the low energy modes are nearly uniform across the extra dimension. 
Yet as the brane separation becomes larger than the bulk warping scale, the low energy modes become exponentially localised on the positive-tension brane. If the branes separate at finite speed, the localisation process fails to keep pace with the brane separation and the low energy modes do not evolve adiabatically. Instead, they evolve into a mixture involving higher Kaluza-Klein modes, and the four-dimensional effective description fails. The mixing we see between the two scalar perturbation modes would be prohibited in {\it any} local four-dimensional effective theory consisting of Einstein gravity and matter fields, no matter what the matter fields were. The mixing is therefore a truly five-dimensional phenomenon, which cannot be modelled with a local four-dimensional effective theory. There is, moreover, an independent argument against the existence of any local four-dimensional description of these phenomena. In standard Kaluza-Klein theory, it is well known that the entire spectrum of massive modes is actually spin two \cite{duff}. Yet, despite many attempts, no satisfactory Lagrangian description of massive, purely spin two fields has ever been found \cite{deser,damour}. Again, this suggests that one should not expect to describe the excitation of the higher Kaluza-Klein modes in terms of an improved, local, four-dimensional effective theory. The system we study consists of two branes emerging from a collision. In this situation, there are important simplifications which allow us to specify initial data rather precisely. When the brane separation is small, the fluctuation modes neatly separate into light Kaluza-Klein zero modes, which are constant along the extra dimension, and massive modes with nontrivial extra-dimensional dependence. Furthermore, the brane tensions and the bulk cosmological constant become irrelevant at short distances. 
It is thus natural to specify initial data which map precisely onto four-dimensional fields in the naive dimensionally-reduced theory describing the limit of narrowly-separated branes. With initial data specified this way, there are no ambiguities in the system. The two branes provide boundary conditions for all time and the five-dimensional Einstein equations yield a unique solution, for arbitrary four-dimensional initial data. Our main motivation is the study of cosmologies in which the big bang was a brane collision, such as the cyclic model \cite{Cyclicevo}. Here, a period of dark energy domination, followed by slow contraction of the fifth dimension, renders the branes locally flat and parallel at the collision. During the slow contraction phase, growing, adiabatic, scale-invariant perturbations are imprinted on the branes prior to the collision. Yet if the system is accurately described by four-dimensional effective theory throughout, then, as a number of authors have noted \cite{brand1, brand2, lyth1, jch1, jch2, Creminelli}, there is an apparent roadblock to the passage of the scale-invariant perturbations across the bounce. Namely, it is hard to see how the growing mode in the contracting phase, usually described as a local time delay, could match onto the growing mode in the expanding phase, usually described as a curvature perturbation. In this chapter, we show that the four-dimensional effective theory fails at order $(V/c)^2$, where $V$ is the collision speed and $c$ is the speed of light. The four-dimensional description works well when the branes are close together, or far apart. As the branes move from one regime to the other, however, the two four-dimensional modes mix in a nontrivial manner. The mixing we find demonstrates that the approach of two boundary branes along a fifth dimension produces physical effects that cannot properly be modelled by a local four-dimensional effective theory. 
Here, we deal with the simplest case involving two empty boundary branes separated by a bulk with a negative cosmological constant. For the cyclic model, the details are more complicated. In particular, there is an additional bulk stress $\Delta T_5^5$, associated with the interbrane force, that plays a vital role in converting a growing mode corresponding to a pure time delay perturbation into a mixture of time delay and curvature modes on the brane. We will present some preliminary considerations of the effects of such a bulk stress in the following chapter. For now, our main conclusion with regard to the cyclic model is that, to compute properly the evolution of perturbations before and after a brane collision, one must go beyond the four-dimensional effective theory to consider the full five-dimensional theory. The outline of this chapter is as follows. In \S\,\ref{solnmethods}, we provide an overview of our three solution methods. In \S\,\ref{seriessoln}, we solve for the background and cosmological perturbations using a series expansion in time about the collision. In \S\,\ref{polysection}, we present an improved method in which the dependence on the fifth dimension is approximated using a set of higher-order Dirichlet or Neumann polynomials. In \S\,\ref{expaboutscalingsoln}, we develop an expansion about the small-$(V/c)$ scaling solution, before comparing our results with those of the four-dimensional effective theory in \S\,\ref{compwitheft}. We conclude with a discussion of mode-mixing in \S\,\ref{mixingsection}. Detailed explicit solutions may be found in Appendix \ref{detailedresults}, and the Mathematica code implementing our calculations is available online \cite{Website}. \section{Three solution methods} \label{solnmethods} In this section, we review the three solution methods employed, noting their comparative merits.
For the model considered here, with no dynamical bulk fields, as we saw in \S\,\ref{Birkhoffsection} there is a Birkhoff-like theorem guaranteeing the existence of coordinates in which the bulk is static. It is easy to solve for the background in these coordinates. The motion of the branes complicates the Israel matching conditions, however, rendering the treatment of perturbations difficult. For this reason, it is preferable to choose a coordinate system in which the branes are located at fixed spatial coordinates $y=\pm y_0$, and the bulk evolves with time. We shall employ a coordinate system in which the five-dimensional line element for the background takes the form \[ \label{metrica} \d s^2 = n^2(t,y) (-\d t^2 +t^2 \d y^2) + b^2(t,y) \d \vec{x}^2, \] where $y$ parameterises the fifth dimension and $x^i$ (for $i=1,2,3$), the three noncompact dimensions. Cosmological isotropy excludes $\d t \,\d x^i$ or $\d y \,\d x^i$ terms, and homogeneity ensures $n$ and $b$ are independent of $\vec{x}$. The $t,y$ part of the background metric may then be taken to be conformally flat, and one may further choose to write the metric for this two-dimensional Minkowski spacetime in Milne form. Since we are interested in scenarios with colliding branes in which the bulk geometry about the collision is Milne, we will assume the branes to be located at $y=\pm y_0$, with the collision occurring at $t=0$. By expressing the metric in locally Minkowski coordinates, $T=t \cosh{y}$ and $Y=t \sinh{y}$, one sees that the collision speed is $(V/c)= \tanh{2 y_0}$ and the relative rapidity of the collision is $2y_0$. As long as the bulk metric is regular at the brane collision and possesses cosmological symmetry, the line element may always be put into the form (\ref{metrica}). Furthermore, by suitably re-scaling coordinates one can choose $b(0,y)=n(0,y)=1$. In order to describe perturbations about this background, one needs to specify an appropriate gauge choice. 
Five-dimensional longitudinal gauge is particularly convenient \cite{Carsten}: firstly, it is completely gauge-fixed; secondly, the brane trajectories are unperturbed in this gauge \cite{TTS}, so that the Israel matching conditions are relatively simple; and finally, in the absence of anisotropic stresses, the traceless part of the Einstein $G^i_j$ (spatial) equation yields a constraint among the perturbation variables, reducing them from four to three. In light of these advantages, we will work in five-dimensional longitudinal gauge throughout. Our three solution methods are as follows: \begin{itemize} \item {\bf Series expansion in \bf\textit{t}} \end{itemize} The simplest solution method for the background is to solve for the metric functions $n(t,y)$ and $b(t,y)$ as a series in powers of $t$ about $t=0$. At each order, the bulk Einstein equations yield a set of ordinary differential equations in $y$, with the boundary conditions provided by the Israel matching conditions. These are straightforwardly solved. A similar series approach, involving powers of $t$ and powers of $t$ times $\ln{t}$, suffices for the perturbations. The series approach is useful at small times $(t/L)\ll 1$ since it provides the precise solution for the background plus generic perturbations, close to the brane collision, for all $y$ and for any collision rapidity $y_0$. It allows one to uniquely specify four-dimensional asymptotic data as $t$ tends to zero. Nonetheless, the series thus obtained fails to converge at quite modest times. Following the system to long times requires a more sophisticated method. Instead of taking $(t/L)$ as our expansion parameter, we want to use the dimensionless rapidity of the brane collision $y_0$, and solve at each order in $y_0$.
\begin{itemize} \item {\bf Expansion in Dirichlet/Neumann polynomials in \bf\textit{y}} \end{itemize} In this approach we represent the spacetime metric in terms of variables obeying either Dirichlet or Neumann boundary conditions on the branes. We then express these variables as series of Dirichlet or Neumann polynomials in $y$ and $y_0$, bounded at each subsequent order by an increasing power of the collision rapidity $y_0$. (Recall that the range of the $y$ coordinate is bounded by $|y|\le y_0$). The coefficients in these expansions are undetermined functions of $t$. By solving the five-dimensional Einstein equations perturbatively in $y_0$, we obtain a series of ordinary differential equations in $t$, which can then be solved exactly. In this Dirichlet/Neumann polynomial expansion, the Israel boundary conditions on the branes are satisfied automatically at every order in $y_0$, while the initial data at small $t$ are provided by the previous series solution method. The Dirichlet/Neumann polynomial expansion method yields simple, explicit solutions for the background and perturbations as long as $(t/L)$ is smaller than $1/y_0$. Since $y_0 \ll 1$, this considerably improves upon the naive series expansion in $t$. For $(t/L)$ of order $1/y_0$, however, the expansion fails because the growth in the coefficients overwhelms the extra powers of $y_0$ at successive orders. Since $(t/L) \sim 1/y_0$ corresponds to brane separations of order the AdS radius, the Dirichlet/Neumann polynomial expansion method fails to describe the late-time behaviour of the system, and a third method is needed. \begin{itemize} \item {\bf Expansion about the scaling solution} \end{itemize} The idea of our third method is to start by identifying a scaling solution, whose form is independent of $y_0$ for all $y_0\ll 1$. This scaling solution is well-behaved for all times and therefore a perturbation expansion in $y_0$ about this solution is similarly well-behaved, even at very late times. 
To find the scaling solution, we first change variables from $t$ and $y$ to an equivalent set of dimensionless variables. The characteristic velocity of the system is the brane speed at the collision, $V=c\tanh 2 y_0 \sim 2 c y_0$ for small $y_0$, where we have temporarily restored the speed of light $c$. Thus we have the dimensionless time parameter $x = y_0 ct/L \sim V t/L$, of order the time for the branes to separate by one AdS radius. We also re-scale the $y$-coordinate by defining $\w = y/y_0$, whose range is $-1\leq \w\leq 1$, independent of the characteristic velocity. As we shall show, when re-expressed in these variables, for small $y_0$, the bulk Einstein equations become perturbatively \textit{ultralocal}: at each order in $y_0$ one only has to solve an ordinary differential equation in $\w$, with a source term determined by time derivatives of lower order terms. The original partial differential equations reduce to an infinite series of ordinary differential equations in $\w$ which are then easily solved order by order in $y_0$. This method, an expansion in $y_0$ about the scaling solution, is the most powerful and may be extended to arbitrarily long times $t$ and for all brane separations. In light of the generalised Birkhoff theorem, the bulk in between the two branes is just a slice of five-dimensional AdS-Schwarzschild spacetime, within which the two branes move \cite{langlois, maartens, Durrer}. (The bulk black hole is itself merely virtual, however, as it lies beyond the negative-tension brane and hence is excluded from the physical region). As time proceeds, the negative-tension brane becomes closer and closer to the horizon of the virtual AdS-Schwarzschild black hole. Even though its location in the Birkhoff-frame (static) coordinates freezes (see Figure \ref{branesep}), its proper speed grows and the $y_0$ expansion fails. 
Nonetheless, by analytic continuation of our solution in $\w$ and $x$, we are able to circumvent this temporary breakdown of the $y_0$ expansion and follow the positive-tension brane, and the perturbations localised near it, as they run off to the boundary of anti-de Sitter spacetime. Our expansion about the scaling solution is closely related to derivative-expansion techniques developed earlier by a number of authors \cite{Toby, K&S, Gonzalo}. In these works, an expansion in terms of brane curvature over bulk curvature was used. For cosmological solutions, this is equivalent to an expansion in $L \mathcal{H}^+$, where $\mathcal{H}^+$ is the Hubble constant on the positive-tension brane. In the present instance, however, we specifically want to study the time-dependence of the perturbations for all times, from the narrowly-separated to the well-separated brane limit. For this purpose, it is better to use a time-independent expansion parameter ($y_0$), and to include all the appropriate time-dependence order by order in the expansion. Moreover, in these earlier works, the goal was to find the four-dimensional effective description more generally, without specifying that the branes emerged from a collision with perturbations in the lowest Kaluza-Klein modes. Consequently, the solutions obtained contained a number of undetermined functions. In the present context, however, the initial conditions along the extra dimension are completely specified close to the brane collision by the requirement that only the lowest Kaluza-Klein mode be excited. The solutions we obtain here are fully determined, with no arbitrary functions entering our results. Returning to the theme of the four-dimensional effective theory, we expect on general grounds that this should be valid in two particular limits: firstly, as we have already discussed, a Kaluza-Klein description will apply at early times near to the collision, when the separation of the branes is much less than $L$. 
Here, the warping of the bulk geometry and the brane tensions can be neglected. Secondly, when the branes are separated by many AdS lengths, one expects gravity to become localised on the positive-tension brane, which moves ever more slowly as time proceeds, so the four-dimensional effective theory should become more and more accurate. Equipped with our five-dimensional solution for the background and perturbations obtained by expanding about the scaling solution, we find ourselves able to test the four-dimensional effective theory explicitly. We will show that the four-dimensional effective theory accurately captures the five-dimensional dynamics to leading order in the $y_0$-expansion, but fails at the first nontrivial order. Our calculations reveal that the four-dimensional perturbation modes undergo a mixing in the transition between the Kaluza-Klein effective theory at early times and the brane-localised gravity at late times. This effect is a consequence of the momentary breakdown of the effective theory when the brane separation is of the order of an AdS length, and cannot be seen from four-dimensional effective theory calculations alone. \section{Series expansion in time} \label{seriessoln} As described above, we find it simplest to work in coordinates in which the brane locations are fixed but the bulk evolves. The bulk metric is therefore given by (\ref{metrica}), with the brane locations fixed at $y=\pm y_0$ for all time $t$. The five-dimensional solution then has to satisfy both the Einstein equations and the Israel matching conditions on the branes \cite{Israel}. The bulk Einstein equations read $G_a^b = -\Lambda \delta_a^b$, where the bulk cosmological constant is $\Lambda = -6/L^2$ (we work in units in which the four-dimensional gravitational coupling $8\pi G_4 = 8\pi G_5/L =1$). 
Evaluating the linear combinations $G^0_0 + G^5_5$ and $G^0_0 + G^5_5 - (1/2)G^i_i$ (where $0$ denotes time, $5$ labels the $y$ direction, and $i$ runs over the noncompact directions), we find: \bea \label{bgdeqn1} \beta_{,\tau\tau}-\beta_{,y y} +\beta_{,\tau}^2-\beta_{,y}^2 + 12\,e^{2\nu} &=&0, \\ \label{bgdeqn2} \nu_{,\tau\tau}- \nu_{,y y} + \frac{1}{3}(\beta_{,y}^2-\beta_{,\tau}^2) - 2\,e^{2\nu} &=& 0, \eea where $(t/L)= e^\tau$, $\beta\equiv 3\ln{b}$ and $\nu \equiv \ln{(nt/L)}$. The Israel matching conditions on the branes read \cite{Carsten, TTS} \[ \frac{b_{,y}}{b} = \frac{n_{,y}}{n} = \frac{nt}{L}, \label{ibd} \] where all quantities are to be evaluated at the brane locations, $y=\pm y_0$. We will begin our assault on the bulk geometry by constructing a series expansion in $t$ about the collision, implementing the Israel matching conditions on the branes at each order in $t$. This series expansion in $t$ is then exact in both $y$ and the collision rapidity $y_0$. Its chief purpose will be to provide initial data for the more powerful solution methods that we will develop in the following sections. The Taylor series solution in $t$ for the background was first presented in \cite{TTS}. Expanded up to terms of $O(t^3/L^3)$, \bea n &=& 1 + (\sech\,{y_0}\sinh{y})\,\frac{t}{L}+\frac{1}{4}\,\sech^2\,{y_0}\,(-3+\cosh{2y_0}+2\cosh{2y}) \frac{t^2}{L^2}, \qquad \\ \label{tseriesbgd} b &=& 1 + (\sech\,{y_0}\sinh{y})\,\frac{t}{L}+\frac{1}{2}\,\sech^2\,{y_0}\,(-\cosh{2y_0}+\cosh{2y}) \frac{t^2}{L^2}. \eea (Note that in the limit as $t\tt 0$ we correctly recover compactified Milne spacetime). Here, however, we will need the perturbations as well.
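As a consistency check on the background series (\ref{tseriesbgd}), one may verify symbolically that the Israel conditions (\ref{ibd}) hold on both branes through the order retained. The following short SymPy sketch (an illustration of ours, not part of the Mathematica code of \cite{Website}; units $L=1$) does this:

```python
import sympy as sp

t, y, y0 = sp.symbols('t y y_0', positive=True)
sech = lambda z: 1/sp.cosh(z)

# Background series (tseriesbgd), in units L = 1
n = (1 + sech(y0)*sp.sinh(y)*t
     + sp.Rational(1, 4)*sech(y0)**2*(-3 + sp.cosh(2*y0) + 2*sp.cosh(2*y))*t**2)
b = (1 + sech(y0)*sp.sinh(y)*t
     + sp.Rational(1, 2)*sech(y0)**2*(-sp.cosh(2*y0) + sp.cosh(2*y))*t**2)

# Israel conditions: b_,y/b = n_,y/n = n*t at y = +/- y_0, through O(t^2)
for brane in (y0, -y0):
    for f in (n, b):
        residual = (sp.diff(f, y)/f - n*t).subs(y, brane)
        series = sp.series(residual, t, 0, 3).removeO()
        assert sp.simplify(sp.expand(series.rewrite(sp.exp))) == 0
```

The residual of each matching condition vanishes identically at orders $t^0$, $t^1$ and $t^2$, as it must.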
Working in five-dimensional longitudinal gauge for the reasons given in the previous section, the perturbed bulk metric takes the form (see Appendix \ref{appA}) \[ \d s^2 = n^2\left(-(1+2\Phi_L)\,\d t^2-2W_L\,\d t\d y+t^2\,(1-2\Gamma_L)\,\d y^2\right)+b^2\left(1-2\Psi_L\right)d\vec{x}^2, \] with $\Gamma_L=\Phi_L-\Psi_L$ being imposed by the five-dimensional traceless $G^i_j$ equation. The Israel matching conditions at $y=\pm y_0$ then read \[ \Psi_{L\, ,y}=\Gamma_L \frac{nt}{L}, \qquad \Phi_{L\, ,y}=-\Gamma_L \frac{nt}{L}, \qquad W_L=0. \label{lgi} \] Performing a series expansion, we find \bea \Phi_L &=& -\frac{B}{t^2}+\frac{B\, \sech{\,y_0}\sinh{y}}{t}+\Big(A-\frac{B}{8}-\frac{Bk^2}{4}+\frac{1}{6}B k^2\ln{|k t|} \nonumber \\ && +\frac{1}{16}B\cosh{2y}\,(-1+6 \,y_0 \coth{2y_0})\,\sech^2{y_0} -\frac{3}{8}B\,\sech^2{y_0}\sinh{2y}\Big), \\ \Psi_L &=&-\frac{B\,\sech{\,y_0}\sinh{y}}{t}+\Big(2A-\frac{B}{4} +\frac{B k^2}{3}\ln{|kt|} +\frac{B}{4}\cosh{2y}\,\sech^2{y_0}\Big), \qquad\,\,\,\ \\ \label{tseriesperts} W_L &=& -\frac{3}{4}B\,\sech^2{y_0}\big(y\cosh{2y}-y_0\cosh{2y_0}\sinh{2y}\big)\,t, \eea where the first two equations are accurate to $O(t)$ and the third is accurate to $O(t^2)$. We have moreover set $L=1$ for clarity; except for a few specific instances, we will now adopt this convention throughout the rest of this chapter. (To restore $L$, simply replace $t\tt t/L$ and $k\tt kL$). The two arbitrary constants $A$ and $B$ (which may themselves be arbitrary functions of $\vec{k}$) have been chosen so that, on the positive-tension brane, to leading order in $y_0$, $\Phi_L$ goes as \[ \Phi_L = A - \frac{B}{t^2} + O(y_0)+O(k^2)+O(t). 
\] \section{Expansion in Dirichlet and Neumann polynomials} \label{polysection} \subsection{Background} Having solved the relevant five-dimensional Einstein equations as a series expansion in the time $t$ before or after the collision event, we now have an accurate description of the behaviour of the bulk at small $t$ for arbitrary collision rapidities. In order to match onto the incoming and outgoing states, however, we really want to study the long-time behaviour of the solutions, as the branes become widely separated. Ultimately, this will enable us to successfully map the system onto an appropriate four-dimensional effective description. Instead of expanding in powers of the time, we approximate the five-dimensional solution as a power series in the rapidity of the collision, and determine each metric coefficient for all time at each order in the rapidity. Our main idea is to express the metric as a series of Dirichlet or Neumann polynomials in $y_0$ and $y$, bounded at order $n$ by a constant times $y_0^n$, such that the series satisfies the Israel matching conditions exactly at every order in $y_0$. To implement this, we first change variables from $b$ and $n$ to those obeying Neumann boundary conditions. From (\ref{ibd}), $b/n$ is Neumann. Likewise, if we define $N(t,y)$ by \[ nt = \frac{1}{N(t,y) - y}, \label{nt} \] then one can easily check that $N(t,y)$ is also Neumann on the branes. Notice that if $N$ and $b/n$ are constant, the metric (\ref{metrica}) is just that for anti-de Sitter spacetime. For fixed $y_0$, $N$ describes the proper separation of the two branes, and $b$ is an additional modulus describing the three-dimensional scale-factor of the branes.
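The Neumann property of $N$ follows in one line: substituting (\ref{nt}) into the Israel condition $n_{,y}/n = nt$ of (\ref{ibd}) gives $-N_{,y}/(N-y)=0$, i.e.\ $N_{,y}=0$ at $y=\pm y_0$. A minimal SymPy sketch of this little computation (our notation):

```python
import sympy as sp

t, y = sp.symbols('t y')
N = sp.Function('N')(y)      # y-dependence at some fixed time t
n = 1/(t*(N - y))            # the substitution n*t = 1/(N - y)

# Israel condition n_,y/n = n*t is equivalent to N_,y = 0 on the branes
residual = sp.diff(n, y)/n - n*t
assert sp.simplify(residual + sp.diff(N, y)/(N - y)) == 0
```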
Since $N$ and $b/n$ obey Neumann boundary conditions on the branes, we can expand both in a power series \[ \label{Bgd_ansatz} N= N_0(t)+\sum_{n=3}^\infty N_n(t) P_n(y), \qquad b/n=q_0(t)+\sum_{n=3}^\infty q_n(t) P_n(y), \] where $P_n(y)$ are polynomials \[ P_n(y)= y^n-\frac{n}{n-2} \,y^{n-2} \,y_0^2, \qquad n=3,4,\dots \] satisfying Neumann boundary conditions, each bounded by $|P_n(y)|<2y_0^n/(n-2)$ for the relevant range of $y$. Note that the time-dependent coefficients in this ansatz may also be expanded as a power series in $y_0$. By construction, our ansatz satisfies the Israel matching conditions exactly at each order in the expansion. The bulk Einstein equations are not satisfied exactly, but as the expansion is continued, the error terms are bounded by increasing powers of $y_0$. Substituting the series ans{\"a}tze (\ref{Bgd_ansatz}) into the background Einstein equations (\ref{bgdeqn1}) and (\ref{bgdeqn2}), we may determine the solution order by order in the rapidity $y_0$. At each order in $y_0$, one generically obtains a number of linearly independent algebraic equations, and at most one ordinary differential equation in $t$. The solution of the latter introduces a number of arbitrary constants of integration into the solution. To fix the arbitrary constants, one first applies the remaining Einstein equations, allowing a small number to be eliminated. The rest are then determined using the series expansion in $t$ presented in the previous section: as this solution is exact to all orders in $y_0$, we need only to expand it out to the relevant order in $y_0$, before comparing it term by term with our Dirichlet/Neumann polynomial expansion (which is exact in $t$ but perturbative in $y_0$), taken to a corresponding order in $t$. The arbitrary constants are then chosen so as to ensure the equivalence of the two expansions in the region where both $t$ and $y_0$ are small. This procedure suffices to fix all the remaining arbitrary constants. 
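The properties claimed for the Neumann polynomials $P_n$ are easily confirmed directly; the short SymPy check below (our own, not part of the thesis calculations) verifies the boundary conditions and the endpoint bound $2y_0^n/(n-2)$ for the first few $n$:

```python
import sympy as sp

y, y0 = sp.symbols('y y_0', positive=True)

def P(n):
    # P_n(y) = y^n - n/(n-2) y^(n-2) y_0^2,  n = 3, 4, ...
    return y**n - sp.Rational(n, n - 2)*y**(n - 2)*y0**2

for n in range(3, 9):
    Pn = P(n)
    # Neumann boundary conditions: P_n'(+/- y_0) = 0
    assert all(sp.simplify(sp.diff(Pn, y).subs(y, s)) == 0 for s in (y0, -y0))
    # the endpoint value |P_n(+/- y_0)| equals the quoted bound 2 y_0^n/(n-2)
    assert sp.simplify(sp.Abs(Pn.subs(y, y0)) - 2*y0**n/(n - 2)) == 0
```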
The first few terms of the solution are \bea N_0 &=& {1\over t}-\frac{1}{2}\,t y_0^2+{1\over 24}\,t(8-9t^2)y_0^4+\dots \\ N_3 &=& -{1\over 6}+\left({5\over 72}-2t^2\right)y_0^2+\dots \eea and \bea q_0 &=& 1 -\frac{3}{2}\,t^2 y_0^2+\left(t^2-\frac{7}{8}\,t^4\right)y_0^4+\dots \\ q_3 &=& -2\,t^3 y_0^2+\dots \eea The full solution up to $O(y_0^{10})$ may be found in Appendix \ref{appB}. \subsection{Perturbations} Following the same principles used in our treatment of the background, we construct the two linear combinations \[ \label{phi4xi4} \phi_4 = \frac{1}{2}(\Phi_L+\Psi_L), \qquad \xi_4= b^2 (\Psi_L-\Phi_L)=b^2 \Gamma_L, \] both of which obey Neumann boundary conditions on the branes, as may be checked from (\ref{ibd}) and (\ref{lgi}). In addition, $W_L$ already obeys simple Dirichlet boundary conditions. The two Neumann variables, $\phi_4$ and $\xi_4$, are then expanded in a series of Neumann polynomials and $W_L$ is expanded in a series of Dirichlet polynomials, \[ D_n(y)=y^n-y_0^n,\,\,\,\,\,\, n =2,4,\dots, \,\,\,\,\,\, D_n(y)=y D_{n-1}(y), \,\,\,\,\,\, n=3,5,\dots, \] each bounded by $|D_n(y)|<y_0^n$ for $n$ even and $y_0^n (n-1)/n^{n/(n-1)}$ for $n$ odd, over the relevant range of $y$. As in the case of the background, the time-dependent coefficients multiplying each of the polynomials should themselves be expanded in powers of $y_0$. To solve for the perturbations it is sufficient to use only three of the perturbed Einstein equations (any solution obtained may then be verified against the remainder). 
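The Dirichlet polynomials and their bounds can likewise be checked mechanically. In the sketch below (ours), the odd-$n$ bound $y_0^n(n-1)/n^{n/(n-1)}$ is verified for $n=5$ at the interior stationary point $y=y_0/n^{1/(n-1)}$:

```python
import sympy as sp

y, y0 = sp.symbols('y y_0', positive=True)

def D(n):
    # D_n = y^n - y_0^n (n even);  D_n = y D_{n-1} (n odd)
    return y**n - y0**n if n % 2 == 0 else y*D(n - 1)

# Dirichlet boundary conditions: D_n(+/- y_0) = 0
for n in range(2, 8):
    assert all(sp.simplify(D(n).subs(y, s)) == 0 for s in (y0, -y0))

# For odd n, the maximum of |D_n| on [-y_0, y_0] saturates the quoted bound
n = 5
ycrit = y0/sp.Integer(n)**sp.Rational(1, n - 1)      # where D_n'(y) = 0
assert sp.simplify(sp.diff(D(n), y).subs(y, ycrit)) == 0
bound = y0**n*(n - 1)/sp.Integer(n)**sp.Rational(n, n - 1)
assert sp.simplify(sp.Abs(D(n).subs(y, ycrit)) - bound) == 0
```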
Setting \bea \Phi_L &=& \phi\, e^{-2\nu-\beta/3}, \\ \Psi_L &=& \psi\, e^{-\beta/3}, \\ W_L &=& w\, e^{\tau-2\nu-\beta/3}, \eea where $t=e^\tau$, $\beta=3\ln{b}$ and $\nu = \ln{nt}$, the $G^5_i$, $G^0_i$ and $G^i_i$ equations take the form \bea \label{pe1} w_{,\tau} &=& 2\,\phi_{,y} - 4\,e^{3\nu/2}\,(\psi\, e^{\nu/2})_{,y}, \\ \label{pe2} \phi_{,\tau} &=& {1\over 2}\,w_{,y} - e^{3\nu}\,(\psi\, e^{-\nu})_{,\tau}, \\ \label{pe3} (\psi_{,\tau}\, e^{\beta/3})_{,\tau} &=& (\psi_{,y}\, e^{\beta/3})_{,y}+ \psi\,e^{\beta/3}\, \left(\frac{1}{3}\,\beta_{,\tau}^2-\frac{1}{9}\,\beta_{,y}^2-k^2\,e^{2(\nu-\beta/3)}\right) \nonumber \\ && -\frac{2}{9}\,e^{-2\nu+\beta/3}\,\left(\phi \,(\beta_{,\tau}^2+\beta_{,y}^2)- w\, \beta_{,\tau}\,\beta_{,y}\right) . \eea Using our Neumann and Dirichlet ans{\"a}tze for $\phi_4$, $\xi_4$ and $W_L$, the Israel matching conditions are automatically satisfied and it remains only to solve (\ref{pe1}), (\ref{pe2}) and (\ref{pe3}) order by order in the rapidity. The time-dependent coefficients for $\phi_4$, $\xi_4$ and $W_L$ are then found to obey simple ordinary differential equations, with solutions comprising Bessel functions in $kt$, given in Appendix \ref{appC}. Note that it is not necessary for the set of Neumann or Dirichlet polynomials we have used to be orthogonal to each other: linear independence is perfectly sufficient to determine all the time-dependent coefficients order by order in $y_0$. As in the case of the background, the arbitrary constants of integration remaining in the solution after the application of the remaining Einstein equations are fixed by performing a series expansion of the solution in $t$. This expansion can be compared term by term with the series expansion in $t$ given previously, after this latter series has itself been expanded in $y_0$. The arbitrary constants are then chosen so that the two expansions coincide in the region where both $t$ and $y_0$ are small. 
The results of these calculations, at long wavelengths, are \bea \Phi_L &=& A-B\left({1\over t^2}-{k^2\over 6} \ln{|kt|}\right) +\left(A t +\frac{B}{t}\right)\,y + \dots \\ \Psi_L &=& 2 A +B {k^2\over 3} \ln{|kt|} -\left(A t+ \frac{B}{t}\right)\,y + \dots \\ W_L&=& 6 A\, t^2\, (y^2-y_0^2)+\dots \label{results} \eea where the constants $A$ and $B$ can be arbitrary functions of $k$. The solutions for all $k$, to fifth order in $y_0$, are given in Appendix \ref{appC}. \section{Expansion about the scaling solution} \label{expaboutscalingsoln} It is illuminating to recast the results of the preceding sections in terms of a set of dimensionless variables. Using the relative velocity of the branes at the moment of collision, $V= c \tanh{2 y_0} \simeq 2c y_0$ (where we have temporarily re-introduced the speed of light $c$), we may construct the dimensionless time parameter $x = y_0 ct/L \sim Vt/L$ and the dimensionless $y$-coordinate $\w = y/ y_0 \sim y(c/V)$. Starting from the full Dirichlet/Neumann polynomial expansion for the background given in Appendix \ref{appB}, restoring $c$ to unity and setting $t=x L/y_0$ and $y = \w y_0$, we find that \bea \label{ad_n_series} n^{-1} &=& \tN(x)-\w x + O(y_0^2), \\ \label{ad_q} {b\over n} &=& q(x) + O(y_0^2), \eea where \bea \label{N_series} \tN(x) &=& 1 - \frac{x^2}{2} - \frac{3\,x^4}{8} - \frac{25\,x^6}{48} - \frac{343\,x^8}{384} - \frac{2187\,x^{10}}{1280}+O(x^{12}), \\ \label{q_series} q(x) &=& 1 - \frac{3\, x^2}{2} - \frac{7\, x^4}{8} - \frac{55\, x^6}{48} - \frac{245\, x^8}{128} - \frac{4617\, x^{10}}{1280} +O(x^{12}). \eea The single term in (\ref{ad_n_series}) linear in $\w$ is necessary in order that $n^{-1}$ satisfies the correct boundary conditions. Apart from this one term, however, we see that to lowest order in $y_0$ the metric functions above turn out to be completely independent of $\w$. Similar results are additionally found for the perturbations.
Later, we will see how this behaviour leads to the emergence of a four-dimensional effective theory. For now, the key point to notice is that this series expansion converges only for $x \ll 1$, corresponding to times $t \ll L/y_0$. In order to study the behaviour of the theory for all times, therefore, we require a means of effectively resumming the above perturbation expansion to all orders in $x$. Remarkably, we will be able to accomplish just this. The remainder of this section, divided into five parts, details our method and results: first, we explain how to find and expand about the scaling solution, considering only the background for simplicity. We then analyse various aspects of the background scaling solution, namely, the brane geometry and the analytic continuation required to go to late times, before moving on to discuss higher-order terms in the expansion. Finally, we extend our treatment to cover the perturbations. \subsection{Scaling solution for the background} The key to our method is the observation that the approximation of small collision rapidity ($y_0\ll 1$) leads to a set of equations that are perturbatively ultralocal: transforming to the dimensionless coordinates $x$ and $\w$, the Einstein equations for the background (\ref{bgdeqn1}) and (\ref{bgdeqn2}) become \bea \label{e1} \beta_{,\w\w}+\beta_{,\w}^2-12\,e^{2\tnu}&=& y_0^2 \left( x(x\beta_{,x})_{,x}+x^2\beta_{,x}^2\right), \\ \label{e2} \tnu_{,\w\w}-\frac{1}{3}\,\beta_{,\w}^2+2\,e^{2\tnu} &=& y_0^2\big( x(x\tnu_{,x})_{,x}-\frac{1}{3}\,x^2\beta_{,x}^2\big), \eea where we have introduced $\tnu = \nu+\ln{y_0}$. Strikingly, all the terms involving $x$-derivatives are now suppressed by a factor of $y_0^2$ relative to the remaining terms.
This segregation of $x$- and $\w$-derivatives has profound consequences: when solving perturbatively in $y_0$, the Einstein equations (\ref{e1}) and (\ref{e2}) reduce to a series of {\it ordinary} differential equations in $\w$, as opposed to the partial differential equations we started off with. To see this, consider expanding out both the Einstein equations (\ref{e1}) and (\ref{e2}) as well as the metric functions $\beta$ and $\tnu$ as a series in positive powers of $y_0$. At zeroth order in $y_0$, the right-hand sides of (\ref{e1}) and (\ref{e2}) vanish, and the left-hand sides can be integrated with respect to $\w$ to yield anti-de Sitter space. (This was our reason for using $\tnu = \nu+\ln{y_0}$ rather than $\nu$: the former serves to pull the necessary exponential term deriving from the cosmological constant down to zeroth order in $y_0$, yielding anti-de Sitter space as a solution at leading order. As we are merely adding a constant, the derivatives of $\tnu$ and $\nu$ are identical.) The Israel matching conditions on the branes (\ref{ibd}), which in these coordinates read \[ \label{ibd2} \frac{1}{3}\,\beta_{,\w}=\tnu_{,\w}=e^{\tnu}, \] are not, however, sufficient to fix all the arbitrary functions of $x$ arising in the integration with respect to $\w$. In fact, two arbitrary functions of $x$ remain in the solution, which may be regarded as time-dependent moduli describing the three-dimensional scale factor of the branes and their proper separation. These moduli may be determined with the help of the $G^5_5$ Einstein equation as we will demonstrate shortly. Returning to (\ref{e1}) and (\ref{e2}), at $y_0^2$ order now, the left-hand sides amount to ordinary differential equations in $\w$ for the $y_0^2$ corrections to $\beta$ and $\tnu$. The right-hand sides can no longer be neglected, but, because of the overall factor of $y_0^2$, only the time-derivatives of $\beta$ and $\tnu$ at {\it zeroth} order in $y_0$ are involved. 
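As a check on the zeroth-order step of this scheme, one can verify that the anti-de Sitter form (\ref{ad_n_series})-(\ref{ad_q}), with arbitrary moduli $\tN_0(x)$ and $q_0(x)$, solves (\ref{e1}) and (\ref{e2}) with their right-hand sides set to zero, and satisfies the matching conditions (\ref{ibd2}) identically in $\w$. A SymPy sketch of ours (note that $e^{\tnu} = n t y_0 = nx$ in units $L=c=1$):

```python
import sympy as sp

x, w = sp.symbols('x omega')
N0, q0 = sp.Function('N_0')(x), sp.Function('q_0')(x)   # arbitrary moduli

n = 1/(N0 - w*x)      # leading-order form n^{-1} = N_0(x) - omega*x
b = n*q0              # b/n = q_0(x)
enu = n*x             # e^(nu-tilde) = n*t*y_0 = n*x
nu = sp.log(enu)
beta = 3*sp.log(b)

# zeroth-order bulk equations: the O(y_0^2) right-hand sides are dropped
e1 = sp.diff(beta, w, 2) + sp.diff(beta, w)**2 - 12*enu**2
e2 = sp.diff(nu, w, 2) - sp.Rational(1, 3)*sp.diff(beta, w)**2 + 2*enu**2
assert sp.simplify(e1) == 0 and sp.simplify(e2) == 0

# Israel conditions beta_,w/3 = nu_,w = e^(nu-tilde) hold for all omega here
assert sp.simplify(sp.diff(beta, w)/3 - enu) == 0
assert sp.simplify(sp.diff(nu, w) - enu) == 0
```

Both bulk equations and the matching conditions hold for any choice of the moduli, confirming that the zeroth-order integration leaves exactly two undetermined functions of $x$.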
Since $\beta$ and $\tnu$ have already been determined to this order, the right-hand sides therefore act merely as known source terms. Solving these ordinary differential equations then introduces two further arbitrary functions of $x$; these serve as $y_0^2$ corrections to the time-dependent moduli and may be fixed in the same manner as previously. Our integration scheme therefore proceeds at each order in $y_0$ via a two-step process: first, we integrate the Einstein equations (\ref{e1}) and (\ref{e2}) to determine the $\w$-dependence of the bulk geometry, and then secondly, we fix the $x$-dependent moduli pertaining to the brane geometry using the $G^5_5$ equation. This latter step works as follows: evaluating the $G^5_5$ equation on the branes, we can use the Israel matching conditions (\ref{ibd2}) to replace the single $\w$-derivatives that appear in this equation, yielding an ordinary differential equation in time for the geometry on each brane. Explicitly, we find \[ \left(\frac{bb_{,x}}{n}\right)_{\hspace{-1mm},x}=0, \] where five-dimensional considerations (see Section \S\,\ref{compwitheft}) further allow us to fix the constants of integration on the ($\pm$) brane as \[ \label{G55} \frac{b b_{,x}}{n}=\frac{b b_{,t}}{y_0 n}=\frac{b_{,t_\pm}}{y_0} =\pm\frac{1}{y_0}\tanh{y_0}, \] where the brane conformal time $t_\pm$ is defined on the branes via $n\d t = b\d t_\pm$. When augmented with the initial conditions that $n$ and $b$ both tend to unity as $x$ tends to zero (so that we recover compactified Milne spacetime near the collision), these two equations are fully sufficient to determine the two $x$-dependent moduli to all orders in $y_0$. Putting the above into practice, for convenience we will work with the Neumann variables $\tN$ and $q$, generalising (\ref{ad_n_series}) and (\ref{ad_q}) to \[ n^{-1} = \tN(x,\w)-\w x, \qquad \frac{b}{n}=q(x,\w). 
\] Seeking an expansion of the form \bea \label{ad_ansatz1} \tN(x,\w) &=& \tN_0(x,\w) + y_0^2\tN_1(x,\w)+O(y_0^4), \\ \label{ad_ansatz2} q(x,\w) &=& q_0(x,\w)+y_0^2\, q_1(x,\w)+O(y_0^4), \eea the Einstein equations (\ref{e1}) and (\ref{e2}) when expanded to zeroth order in $y_0$ immediately restrict $\tN_0$ and $q_0$ to be functions of $x$ alone. The bulk geometry to this order is then simply anti-de Sitter space with time-varying moduli, consistent with (\ref{ad_n_series}) and (\ref{ad_q}). The moduli $\tN_0(x)$ and $q_0(x)$ may be found by integrating the brane equations (\ref{G55}), also expanded to lowest order in $y_0$. In terms of the Lambert W-function \cite{LambertW}, $W(x)$, defined implicitly by \[ \label{Wdef} W(x)e^{W(x)}=x, \] the solution is \[ \label{sol1} \tN_0(x) = e^{\frac{1}{2}W(-x^2)}, \qquad q_0(x) = \left(1+W(-x^2)\right)\,e^{\frac{1}{2}W(-x^2)}. \] \begin{figure}[p] \begin{center} \hspace{-0.7cm} \includegraphics[width=12cm]{Lambert.ps} \caption{The eponymous hero: Johann Heinrich Lambert, 1728-1777. \newline Famed for his proof of the irrationality of $\pi$, and for his natty dress sense.} \label{LambertHimself} \end{center} \end{figure} Thus we have found the scaling solution for the background, whose form is independent of $y_0$, holding for any $y_0\ll 1$. Using the series expansion for the Lambert W-function about $x=W(x)=0$, namely\footnote{ Note that the radius of convergence of the series (\ref{W_series}) for $W(x)$ is $1/e$, and thus it converges for arguments in the range $-1/e\le x \le 0$ as required.} \[ \label{W_series} W(x) = \sum_{m=1}^\inf\frac{(-m)^{m-1}}{m!}x^m, \] we can immediately check that the expansion of our solution is in exact agreement with (\ref{N_series}) and (\ref{q_series}). At leading order in $y_0$ then, we have succeeded in resumming the Dirichlet/Neumann polynomial expansion results for the background to all orders in $x$. 
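As a quick numerical sanity check, the scaling solution is easy to test directly. The sketch below (ours, not part of the derivation; it assumes SciPy's `lambertw` and illustrative sample points) verifies the defining identity (\ref{Wdef}), the convergence of the series (\ref{W_series}) inside its radius $1/e$, and that the moduli (\ref{sol1}) satisfy the leading-order brane equation (\ref{G55}), whose right-hand side tends to $\pm 1$ as $y_0$ tends to zero:

```python
import math
from scipy.special import lambertw

# Defining identity (Wdef): W(x) exp(W(x)) = x on the principal branch.
for xv in [-0.3, -0.1, -0.01]:
    W = lambertw(xv).real
    assert abs(W * math.exp(W) - xv) < 1e-12

# Series (W_series): partial sums converge for |x| < 1/e.
def W_series(x, terms=60):
    return sum((-m) ** (m - 1) * x**m / math.factorial(m)
               for m in range(1, terms + 1))

err_series = abs(W_series(-0.25) - lambertw(-0.25).real)
assert err_series < 1e-10

# Leading-order brane equation (G55): q_0(x) dx_4/dx = 1, using
# x_4(x) = sqrt(-W(-x^2)) and q_0 from (sol1); tanh(y_0)/y_0 -> 1 as y_0 -> 0.
def x4_of(x):
    return math.sqrt(-lambertw(-x * x).real)

def q0(x):
    W = lambertw(-x * x).real
    return (1 + W) * math.exp(W / 2)

h = 1e-6
err_brane = abs(q0(0.3) * (x4_of(0.3 + h) - x4_of(0.3 - h)) / (2 * h) - 1.0)
assert err_brane < 1e-6
```

The brane equation check uses a central finite difference for $\d x_4/\d x$; analytically, $q_0\,\d x_4/\d x=1$ holds identically at this order.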
Later, we will return to evaluate the $y_0^2$ corrections in our expansion about the scaling solution. In the next two subsections, however, we will first examine the scaling solution in greater detail. \subsection{Evolution of the brane scale factors} Using the scaling solution (\ref{sol1}) to evaluate the scale factors on both branes, we find to $O(y_0^2)$ \[ b_\pm = 1\pm x e^{-\frac{1}{2}W(-x^2)} = 1\pm\sqrt{-W(-x^2)}. \] To follow the evolution of the brane scale factors, it is helpful to first understand the behaviour of the Lambert W-function, the real values of which are displayed in Figure \ref{Wfigure}. \begin{figure}[p] \begin{center} \includegraphics[width=12cm]{LambertW.ps} \caption{ The real values of the Lambert W-function. The solid line indicates the principal solution branch, $W_0(x)$, while the dashed line depicts the $W_{-1}(x)$ branch. The two branches join smoothly at $x=-1/e$ where $W$ attains its negative maximum of $-1$.} \label{Wfigure} \end{center} \end{figure} For positive arguments the Lambert W-function is single-valued; yet for the negative arguments of interest here, we see that there are in fact two different real solution branches. The first branch, denoted $W_0(x)$, satisfies $W_0(x)\ge -1$ and is usually referred to as the principal branch, while the second branch, $W_{-1}(x)$, is defined in the range $W_{-1}(x)\le -1$. The two solution branches join smoothly at $x=-1/e$, where $W=-1$. Starting at the brane collision where $x=0$, the brane scale factors are chosen to satisfy $b_\pm=1$, and so we must begin on the principal branch of the Lambert W-function for which $W_0(0)=0$. Thereafter, as illustrated in Figure \ref{bfigure}, $b_+$ increases and $b_-$ decreases monotonically until at the critical time $x=x_c$, when $W_0(-x_c^2)=-1$ and $b_-$ shrinks to zero. 
From (\ref{Wdef}), the critical time is therefore $ x_c = e^{-\frac{1}{2}} = 0.606..., $ and corresponds physically to the time at which the negative-tension brane encounters the bulk black hole\footnote{ From the exact solution in bulk-static coordinates, the scale factor on the negative-tension brane at the horizon obeys $b_-^2 = \sech{2 Y_0/L} = \tanh{y_0}$, and so $b_- \sim y_0^{1/2}$.}. At this moment, the scale factor on the positive-tension brane has only attained a value of two. From the Birkhoff-frame solution, in which the bulk is AdS-Schwarzschild and the branes are moving, we know that the positive-tension brane is unaffected by the disappearance of the negative-tension brane and simply continues its journey out to the boundary of AdS. To reconcile this behaviour with our solution in brane-static coordinates, it is helpful to pass to $t_+$, the conformal time on the positive-tension brane. Working to zeroth order in $y_0$, this may be converted into the dimensionless form \[ \label{x4} x_4= \frac{y_0 t_+}{L} =\frac{y_0}{L}\int \frac{n}{b}\,\d t= \int \frac{\d x}{q_0(x)}= x e^{-\frac{1}{2}W(-x^2)}=\sqrt{-W(-x^2)} . \] Inverting this expression, we find that the bulk time parameter $x=x_4\,e^{-\frac{1}{2}x_4^2}$. The bulk time $x$ is thus double-valued when expressed as a function of $x_4$, the conformal time on the positive-tension brane: to continue forward in $x_4$ beyond $x_4=1$ (where $x=x_c$), the bulk time $x$ must reverse direction and decrease towards zero. The metric functions, expressed in terms of $x$, must then continue back along the other branch of the Lambert W-function, namely the $W_{-1}$ branch. In this manner we see that the solution for the scale factor on the positive-tension brane, when continued on to the $W_{-1}$ branch, tends to infinity as the bulk time $x$ is reduced back towards zero (see dotted line in Figure \ref{bfigure}), corresponding to the positive-tension brane approaching the boundary of AdS as $x_4 \tt \inf$. 
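The double-valuedness of the bulk time can be made concrete numerically. In this sketch (illustrative values, using the two real branches of SciPy's `lambertw`), we check that $x(x_4)=x_4\,e^{-x_4^2/2}$ attains its maximum $x_c=e^{-1/2}$ at $x_4=1$, and that a single bulk time $x<x_c$ corresponds to two brane times, one on each branch:

```python
import numpy as np
from scipy.special import lambertw

# x(x_4) = x_4 exp(-x_4^2/2) attains its maximum x_c = e^{-1/2} at x_4 = 1.
x4 = np.linspace(0.0, 3.0, 30001)
x = x4 * np.exp(-0.5 * x4**2)
i = int(np.argmax(x))
assert abs(x4[i] - 1.0) < 1e-3
assert abs(x[i] - np.exp(-0.5)) < 1e-6

# A bulk time x < x_c corresponds to two brane times x_4, one per branch:
xv = 0.4
x4_early = np.sqrt(-lambertw(-xv**2, 0).real)    # principal branch, x_4 < 1
x4_late = np.sqrt(-lambertw(-xv**2, -1).real)    # W_{-1} branch, x_4 > 1
assert x4_early < 1.0 < x4_late
for u in (x4_early, x4_late):
    assert abs(u * np.exp(-0.5 * u**2) - xv) < 1e-10

# On the W_{-1} branch the positive-tension brane scale factor exceeds 2:
assert 1.0 + x4_late > 2.0
```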
For simplicity, in the remainder of this chapter we will work directly with the brane conformal time $x_4$ itself. With this choice, the brane scale factors to zeroth order in $y_0$ are simply \nopagebreak[1] $b_\pm = 1\pm x_4$. \begin{figure}[p] \begin{center} \includegraphics[width=12cm]{bfig.eps} \caption{ The scale factors $b_{\pm}$ on the positive-tension brane (rising curve) and negative-tension brane (falling curve) as a function of the bulk time parameter $x$, to zeroth order in $y_0$. The continuation of the positive-tension brane scale factor on to the $W_{-1}$ branch of the Lambert W-function is indicated by the dashed line. } \label{bfigure} \end{center} \end{figure} \subsection{Analytic continuation of the bulk geometry} In terms of $x_4$, the metric functions $n$ and $b$ are given by \[ \label{nandb} n = \frac{e^{\frac{1}{2}x_4^2}}{1-\w x_4}+O(y_0^2), \qquad b = \frac{1-x_4^2}{1-\w x_4}+O(y_0^2). \] At $x_4=1$, the three-dimensional scale factor $b$ shrinks to zero at all values of $\w$ except $\w=1$ (\ie the positive-tension brane). Since $b$ is a coordinate scalar under transformations of $x_4$ and $\w$, one might be concerned that the scaling solution becomes singular at this point. When we compute the $y_0^2$ corrections, however, we will find that these corrections become large close to $x_4=1$, precipitating a breakdown of the small-$y_0$ expansion. Since it will later turn out that the scaling solution maps directly on to the four-dimensional effective theory, and that this, like the metric on the positive-tension brane, is completely regular at $x_4=1$, we are encouraged to simply analytically continue the scaling solution to times $x_4>1$. When implementing this analytic continuation, careful attention must be paid to the range of the coordinate $\w$. Thus far, for times $x_4<1$, we have regarded $\w$ as a coordinate spanning the fifth dimension, taking values in the range $-1\le \w \le 1$.
The two metric functions $n$ and $b$ were then expressed in terms of the coordinates $x_4$ and $\w$. Strictly speaking, however, this parameterisation is redundant: we could have chosen to eliminate $\w$ by promoting the three-dimensional scale factor $b$ from a metric function to an independent coordinate parameterising the fifth dimension. Thus we would have only one metric function $n$, expressed in terms of the coordinates $x_4$ and $b$. While this latter parameterisation is more succinct, its disadvantage is that the locations of the branes are no longer explicit, since the value of the scale factor $b$ on the branes is time-dependent. In fact, to track the location of the branes we must re-introduce the function $\w(x_4,b)=(b+x_4^2-1)/bx_4$ (inverting (\ref{nandb}) at lowest order in $y_0$). The trajectories of the branes are then given by the contours $\w=\pm 1$. \begin{figure}[p] \begin{center} \includegraphics[width=13cm]{Triplot4.eps} \caption{ The contours of constant $\w$ in the ($b$, $x_4$) plane. Working to zeroth order in $y_0$, these are given by $x_4 = \frac{1}{2}\big(b\w\pm \sqrt{b^2\w^2-4(b-1)}\big)$, where we have plotted the positive root using a solid line and the negative root using a dashed line. The negative-tension brane is located at $\w=-1$ for times $x_4<1$, and the trajectory of the positive-tension brane is given (for all time) by the positive root solution for $\w=1$. The region delimited by the trajectories of the branes (shaded) then corresponds to the bulk. From the plot we see that, for $0<x_4<1$, the bulk is parameterised by values of $\w$ in the range $-1\le \w \le 1$. In contrast, for $x_4>1$, the bulk is parameterised by values of $\w$ in the range $\w\ge 1$. } \label{coathanger} \end{center} \end{figure} The contours of constant $\w$ as a function of $x_4$ and $b$ are plotted in Figure \ref{coathanger}. 
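The inversion used to track the branes is elementary but easy to get wrong, so a round-trip check is worthwhile. Below is a minimal sketch (our own, with arbitrary sample values) confirming that $\w(x_4,b)$ inverts the leading-order scale factor in (\ref{nandb}), and that the contour formula quoted in the caption of Figure \ref{coathanger} solves $b(x_4,\w)=b$:

```python
import numpy as np

def b_of(x4, w):      # leading-order scale factor, eq. (nandb)
    return (1 - x4**2) / (1 - w * x4)

def w_of(x4, b):      # inverse used to track the brane trajectories
    return (b + x4**2 - 1) / (b * x4)

# Round trip: w_of recovers w wherever b_of is nonsingular.
for x4 in [0.2, 0.6, 0.9]:
    for w in [-1.0, 0.0, 0.5, 1.0]:
        assert abs(w_of(x4, b_of(x4, w)) - w) < 1e-12

# Caption formula of the contour plot: both roots
# x_4 = (b w +/- sqrt(b^2 w^2 - 4(b - 1)))/2 satisfy b_of(x_4, w) = b.
b, w = 1.5, 1.2
disc = np.sqrt(b * b * w * w - 4 * (b - 1))
for root in [(b * w + disc) / 2, (b * w - disc) / 2]:
    assert abs(b_of(root, w) - b) < 1e-12
```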
The analytic continuation to times $x_4>1$ has been implemented, and the extent of the bulk is indicated by the shaded region. From the figure, we see that, if we were to revert to our original parameterisation of the bulk in terms of $x_4$ and $\w$, the range of $\w$ required depends on the time coordinate $x_4$: for early times $x_4<1$, we require only values of $\w$ in the range $-1\le \w \le 1$, whereas for late times $x_4>1$, we require values in the range $\w\ge 1$. Thus, while the positive-tension brane remains fixed at $\w=1$ throughout, at early times $x_4<1$ the value of $\w$ {\it decreases} as we head away from the positive-tension brane along the fifth dimension, whereas at late times $x_4>1$, the value of $\w$ {\it increases} away from the positive-tension brane. While this behaviour initially appears paradoxical if $\w$ is regarded as a coordinate along the fifth dimension, we stress that the only variables with meaningful physical content are the brane conformal time $x_4$ and the three-dimensional scale factor $b$. These physical variables behave sensibly under analytic continuation. In contrast, $\w$ is simply a convenient parameterisation introduced to follow the brane trajectories, with the awkward feature that its range alters under the analytic continuation at $x_4=1$. \begin{figure}[p] \begin{center} \begin{minipage}{10cm} \vspace{0.3cm} \includegraphics[width=10cm]{bearly.eps} \vspace{0.4cm} \end{minipage} \begin{minipage}{10cm} \includegraphics[width=10cm]{blate.eps} \vspace{0.5cm} \end{minipage} \caption{ The three-dimensional scale factor $b$, plotted to zeroth order in $y_0$ as a function of $x_4$ and $\w$, for $x_4<1$ (top) and $x_4>1$ (bottom). The positive-tension brane is fixed at $\w=1$ for all time (note the evolution of its scale factor is smooth and continuous), and for $x_4<1$, the negative-tension brane is located at $\w=-1$. 
} \label{bplots} \end{center} \end{figure} For the rest of this chapter, we will find it easiest to continue parameterising the bulk in terms of $x_4$ and $\w$, adjusting the range of $\w$ where required. Figure \ref{bplots} illustrates this approach: at early times $x_4<1$ the three-dimensional scale factor $b$ is plotted for values of $\w$ in the range $-1\le \w \le 1$. At late times $x_4>1$, we must however plot $b$ for values of $\w$ in the range $\w\ge 1$. In this fashion, the three-dimensional scale factor $b$ always decreases along the fifth dimension away from the positive-tension brane. We have argued that the scaling solution for the background, obtained at lowest order in $y_0$, may be analytically continued across $x_4=1$. There is a coordinate singularity in the $x_4$, $\w$ coordinates but this does not affect the metric on the positive-tension brane, which remains regular throughout. The same features will be true when we solve for the cosmological perturbations. The fact that the continuation is regular on the positive-tension brane, and precisely agrees with the predictions of the four-dimensional effective theory, provides strong evidence for its correctness. Once the form of the background and the perturbations have been determined to lowest order in $y_0$, the higher-order corrections are obtained from differential equations in $\w$ with source terms depending only on the lowest-order solutions. It is straightforward to obtain these corrections for $x_4<1$. If we analytically continue them to $x_4>1$ as described, we automatically solve the bulk Einstein equations and the Israel matching conditions on the positive-tension brane for all $x_4$. The continued solution is well behaved in the vicinity of the positive-tension brane, out to large distances where the $y_0$ expansion eventually fails. \subsection{Higher-order corrections} In this section we explicitly compute the $y_0^2$ corrections.
The size of these corrections indicates the validity of the expansion about the scaling solution, which perforce is only valid when the $y_0^2$ corrections are small. Following the procedure outlined previously, we first evaluate the Einstein equations (\ref{e1}) and (\ref{e2}) to $O(y_0^2)$ using the ans{\"a}tze (\ref{ad_ansatz1}) and (\ref{ad_ansatz2}), along with the solutions for $\tN_0(x)$ and $q_0(x)$ given in (\ref{sol1}). The result is two second-order ordinary differential equations in $\w$, which may straightforwardly be integrated yielding $\tN_1(x,\w)$ and $q_1(x,\w)$ up to two arbitrary functions of $x_4$. These time-dependent moduli are then fixed using the brane equations (\ref{G55}), evaluated at $O(y_0^2)$ higher than previously. To $O(y_0^4)$, we obtain the result: \bea \label{ad_n} n(x_4,\w) &=& \frac{e^{\frac{1}{2}x_4^2}}{1 - \w x_4} + \frac{e^{\frac{1}{2}x_4^2} y_0^2}{30 {\left( -1 + \w x_4 \right) }^2 {\left( -1 + x_4^2 \right) }^4} \, \big( x_4 \big( 5 \w \left( -3 + \w^2 \right) \nonumber \\ && - 5 x_4 + 40 \w \left( -3 + \w^2 \right) x_4^2 - 5 \left( -14 + 9 \w^2 \left( -2 + \w^2 \right) \right) x_4^3 \nonumber \\ && + 3 \w^3 \left( -5 + 3 \w^2 \right) x_4^4 - 19 x_4^5 + 5 x_4^7 \big) - 5 {\left( -1 + x_4^2 \right) }^3 \ln (1 - x_4^2) \big), \nonumber \\ && \\ \label{ad_b} b(x_4,\w) &=& \frac{1 - x_4^2}{1 - \w x_4} + \frac{x_4 y_0^2}{30 {\left( -1 + \w x_4 \right) }^2 {\left( -1 + x_4^2 \right) }^3}\, \big( -5 \w \left( -3 + \w^2 \right) \nonumber \\ && - 20 x_4 + 5 \w \left( -7 + 4 \w^2 \right) x_4^2 - 10 \left( 1 - 12 \w^2 + 3 \w^4 \right) x_4^3 \nonumber \\ && + 3 \w \left( -20 - 5 \w^2 + 2 \w^4 \right) x_4^4 - 12 x_4^5 + 31 \w x_4^6 - 5 \w x_4^8 \nonumber \\ && - 5 {\left( -1 + x_4^2 \right) }^2 \left( \w - 2 x_4 + \w x_4^2 \right) \ln (1 - x_4^2) \big) \eea \begin{figure}[p] \begin{center} \begin{minipage}{11cm} \hspace{-1cm} \includegraphics[width=11cm]{bcorrearly.eps} \vspace{1cm} \end{minipage} \begin{minipage}{11cm} \hspace{-1cm} 
\includegraphics[width=11cm]{bcorrlate.eps} \vspace{0.5cm} \end{minipage} \caption{ The ratio of the $y_0^2$ corrections to the leading term in the small-$y_0$ expansion for $b$, plotted for $x_4<1$ (top) and $x_4>1$ (bottom), for the case where $y_0=0.1$. Where this ratio becomes of order unity the expansion about the scaling solution breaks down. The analogous plots for $n$ display similar behaviour. } \label{metricplots} \end{center} \end{figure} In Figure \ref{metricplots}, we have plotted the ratio of the $y_0^2$ corrections to the corresponding terms at leading order: where this ratio becomes of order unity the expansion about the scaling solution breaks down. Inspection shows there are two such regions: the first is for times close to $x_4=1$, for all $\w$, and the second occurs at late times $x_4>1$, far away from the positive-tension brane. In neither case does the failure of the $y_0$ expansion indicate a singularity of the background metric: from the bulk-static coordinate system we know the exact solution for the background metric is simply AdS-Schwarzschild, which is regular everywhere. The exact bulk-static solution in Birkhoff frame tells us that the proper speed of the negative-tension brane, relative to the static bulk, approaches the speed of light as it reaches the event horizon of the bulk black hole. It therefore seems plausible that a small-$y_0$ expansion based on slowly moving branes must break down at this moment, when $x_4=1$ in our chosen coordinate system. Analytically continuing our solution in both $x_4$ and $\w$ around $x_4=1$, the logarithmic terms in the $y_0^2$ corrections now acquire imaginary pieces for times $x_4>1$. Since these imaginary terms are all suppressed by a factor of $y_0^2$, however, they can only enter the Einstein-brane equations (expanded as a series to $y_0^2$ order) in a linear fashion. 
Hence the real and imaginary parts of the metric necessarily constitute {\it independent} solutions, permitting us to simply throw away the imaginary part and work with the real part alone. As a confirmation of this, it can be checked explicitly that replacing the $\ln{(1-x_4^2)}$ terms in (\ref{ad_n}) and (\ref{ad_b}) with $\ln{|1-x_4^2|}$ still provides a valid solution to $O(y_0^2)$ of the complete Einstein-brane equations and boundary conditions. Finally, at late times $x_4>1$, note that the extent to which we know the bulk geometry away from the positive-tension brane is limited by the $y_0^2$ corrections, which become large at an increasingly large value of $\w$, away from the positive-tension brane (see Figure \ref{metricplots}). The expansion about the scaling solution thus breaks down before we reach the horizon of the bulk black hole, which is located at $\w\tt \inf$ for $x_4>1$. \subsection{Treatment of the perturbations} Having determined the background geometry to $O(y_0^2)$ in the preceding subsections, we now turn our attention to the perturbations. In this subsection we show how to evaluate the perturbations to $O(y_0^2)$ by expanding about the scaling solution. The results will enable us to perform stringent checks of the four-dimensional effective theory and moreover to evaluate the mode-mixing between early and late times. In addition to the dimensionless variables $x=y_0 ct/L$ and $\w = y/ y_0$, when we consider the metric perturbations we must further introduce the dimensionless perturbation amplitude $\tB = B y_0^2 c^2/L^2 \sim B V^2/L^2$ and the dimensionless wavevector $\tk = kL/y_0\sim ckL/V$. In this fashion, to lowest order in $y_0$ and $k$, we then find $\Phi_L = A - B/t^2 = A - \tB /x^2$ and similarly $kct=\tk x$. (Note that the perturbation amplitude $A$ is already dimensionless, however.)
Following the treatment of the perturbations in the Dirichlet/Neumann polynomial expansion, we will again express the metric perturbations in terms of $W_L$ (obeying Dirichlet boundary conditions), and the Neumann variables $\phi_4$ and $\xi_4$, defined in (\ref{phi4xi4}). Hence we seek an expansion of the form \bea \label{adperts} \phi_4(x_4,\w)&=&\phi_{40}(x_4,\w)+y_0^2\,\phi_{41}(x_4,\w)+O(y_0^4), \\ \xi_4(x_4,\w) &=& \xi_{40}(x_4,\w)+y_0^2\,\xi_{41}(x_4,\w)+O(y_0^4),\\ W_L(x_4,\w)&=& W_{L0}(x_4,\w)+y_0^2\,W_{L1}(x_4,\w)+O(y_0^4). \eea As in the case of the background, we will use the $G^5_5$ equation evaluated on the brane to fix the arbitrary functions of $x_4$ arising from integration of the Einstein equations with respect to $\w$. By substituting the Israel matching conditions into the $G^5_5$ equation, along with the boundary conditions for the perturbations, it is possible to remove the single $\w$-derivatives that appear. We arrive at the following second-order ordinary differential equation, valid on both branes, \bea \label{breq} 0&=& 2n(x_4^2-1)(2b^2\phi_4+\xi_4)\dot{b}^2+b^2\left(nx_4(x_4^2-3)-(x_4^2-1)\dot{n}\right) (2b^2\dot{\phi}_4-\dot{\xi}_4)\nonumber \\ && +b\dot{b}\left(4nx_4(x_4^2-3)(b^2\phi_4+\xi_4)-(x_4^2-1)(4(b^2\phi_4+\xi_4)\dot{n} -n(10b^2\dot{\phi}_4+\dot{\xi}_4))\right)\nonumber \\ &&+bn(x_4^2-1)\left(4(b^2\phi_4+\xi_4)\ddot{b}+2b^3\ddot{\phi}_4-b\ddot{\xi}_4\right), \eea where dots indicate differentiation with respect to $x_4$, and where, in the interests of clarity, we have omitted terms of $O(\tk^2)$. Beginning our computation, the $G^5_i$ and $G^5_5$ Einstein equations when evaluated to lowest order in $y_0$ immediately restrict $\phi_{40}$ and $\xi_{40}$ to be functions of $x_4$ only. Integrating the $G^0_i$ equation with respect to $\w$ then gives $W_{L0}$ in terms of $\phi_{40}$ and $\xi_{40}$, up to an arbitrary function of $x_4$. 
Requiring that $W_{L0}$ vanishes on both branes allows us to both fix this arbitrary function, and also to solve for $\xi_{40}$ in terms of $\phi_{40}$ alone. Finally, evaluating (\ref{breq}) on both branes to lowest order in $y_0$ and solving simultaneously yields a second-order ordinary differential equation for $\phi_{40}$, with solution \[ \label{phi40soln} \phi_{40} = \Big(\frac{3A}{2}-\frac{9\tB}{16}\Big)-\frac{\tB}{2x_4^2} + O(\tk^2), \] where the two arbitrary constants have been chosen to match the small-$t$ series expansion given in Section \S\,\ref{seriessoln}. With this choice, dropping terms of $O(\tk^2)$ and higher, \bea \label{xi4soln} \xi_{40} &=& -A+\frac{11\tB}{8}-\frac{\tB}{x_4^2}+\Big(A-\frac{3\tB}{8}\Big)x_4^2 \\ W_{L0} &=& \frac{(1-\w^2)}{(1-x_4^2)^2}\Big(3 Ax_4^2(-2+\w x_4)+\tB x_4\big(\frac{9}{4}x_4+\w (1-\frac{9}{8}x_4^2)\big)\Big)\, e^{-\frac{1}{2}x_4^2}.\qquad \eea The resulting behaviour for the perturbation to the three-dimensional scale factor, $b^2 \Psi_L$, is plotted in Figure \ref{pertplots}. \begin{figure}[p] \begin{center} \begin{minipage}{11cm} \vspace{0.4cm} \hspace{-0.8cm} \includegraphics[width=11cm]{b2psiBearly.eps} \vspace{1cm} \end{minipage} \begin{minipage}{11cm} \hspace{-0.8cm} \includegraphics[width=11cm]{b2psiBlate.eps} \vspace{0.5cm} \end{minipage} \caption{ The perturbation to the three-dimensional scale factor, $b^2 \Psi_L$, plotted on long wavelengths to zeroth order in $y_0$ for early times (top) and late times (bottom). Only the $\tB$ mode is displayed (\ie $A=0$ and $\tB=1$). Note how the perturbations are localised on the positive-tension brane (located at $\w =1$), and decay away from the brane. 
} \label{pertplots} \end{center} \end{figure} In terms of the original Newtonian gauge variables, an identical calculation (working now to all orders in $\tk$ but dropping terms of $O(y_0^2)$) yields, {\allowdisplaybreaks \bea \label{PhiLallk} \Phi_L &=& \frac{2\,\tk \,(1-\w x_4)^2}{3\, (x_4^2-1)}\,\big(A_0 J_0(\tk x_4)+B_0 Y_0(\tk x_4)\big)\nonumber \\ && \qquad +\frac{1}{x_4}\Big(1+\frac{(1-\w x_4)^2}{1-x_4^2}\Big)\big(A_0 J_1(\tk x_4)+B_0 Y_1(\tk x_4)\big), \\%+O(y_0^2), \label{PsiLallk} \Psi_L &=& \frac{1}{3\,(1-x_4^2)}\Big(2\tk (1-\w x_4)^2 \big(A_0 J_0(\tk x_4)+B_0 Y_0(\tk x_4)\big) \nonumber \\ && \qquad - 3\,(x_4+\w(-2+\w x_4))\big(A_0 J_1(\tk x_4)+B_0 Y_1(\tk x_4)\big)\Big), \\% +O(y_0^2), \\ W_{L} &=& 2\,x_4^2 \,e^{-\frac{1}{2}x_4^2}\,\frac{(\w^2-1)}{(1-x_4^2)^2}\,\Big(\tk\, (1-\w x_4)\big(A_0 J_0(\tk x_4)+B_0 Y_0(\tk x_4)\big)\nonumber \\ &&\qquad +\w\,\big(A_0 J_1(\tk x_4)+B_0 Y_1(\tk x_4)\big) \Big), \eea } where the constants $A_0$ and $B_0$ are given by \[ \label{A0B0} A_0 = \frac{3A}{\tk}-\frac{9\tB}{8\tk}+\frac{1}{2}\tB \tk (\ln{2}-\gamma)+O(y_0^2), \qquad B_0 = \frac{\tB \tk \pi}{4} +O(y_0^2). \] To evaluate the $y_0^2$ corrections, we repeat the same sequence of steps: integrating the $G^5_i$ and $G^5_5$ Einstein equations (at $y_0^2$ higher order) gives us $\phi_{41}$ and $\xi_{41}$ up to two arbitrary functions of $x_4$, and integrating the $G^0_i$ equation then gives us $W_{L1}$ in terms of these two arbitrary functions plus one more. Two of the three arbitrary functions are then determined by imposing the Dirichlet boundary conditions on $W_{L1}$, and the third is found to satisfy a second-order ordinary differential equation after making use of (\ref{breq}) on both branes. Solving this differential equation, the constants of integration appearing in the solution are again chosen so as to match the small-$t$ series expansion of Section \S\,\ref{seriessoln}. 
Converting back to the original longitudinal gauge variables, the results to $O(y_0^4)$ and to $O(k^2)$ take the schematic form \bea \Phi_L &=& f^\Phi_0+y_0^2 (f^\Phi_1+f^\Phi_2 \ln{(1+x_4)} + f^\Phi_3 \ln{(1-x_4)}+f^\Phi_4 \ln{(1-\w x_4)}), \qquad \\ \Psi_L &=& f^\Psi_0+y_0^2 (f^\Psi_1+f^\Psi_2 \ln{(1+x_4)} + f^\Psi_3 \ln{(1-x_4)}+f^\Psi_4 \ln{(1-\w x_4)}), \qquad \\ W_L&=&e^{-\frac{1}{2}\,x_4^2} \,\big( f^W_0+y_0^2 (f^W_1+f^W_2 \ln{(1+x_4)} \nonumber \\ && \qquad\qquad\qquad\qquad + f^W_3 \ln{(1-x_4)} +f^W_4 \ln{(1-\w x_4)})\big), \eea where the $f$ are rational functions of $x_4$ and $\w$ which, due to their length, have been listed separately in Appendix \ref{appD}. (If desired, more detailed results including the $O(k^2)$ corrections are available \cite{Website}). It is easy to check that the results obtained by expanding about the scaling solution are consistent with those obtained using our previous method based upon Dirichlet/Neumann polynomials. Taking the results from the polynomial expansion given in Appendix \ref{appC}, substituting $t=(x_4/y_0) e^{-\frac{1}{2}x_4^2}$ and $y=\w y_0$, and retaining only terms of $O(y_0^2)$ or less, one finds agreement with the results listed in Appendix \ref{appD} after these have been re-expressed as a series in $x_4$. This has been checked explicitly, both for the background and the perturbations. Just as in the case of the background, the small-$y_0$ expansion breaks down for times close to $x_4=1$, when the $y_0^2$ corrections to the perturbations become larger than the corresponding zeroth-order terms. Again, we will simply analytically continue the solution in $x_4$ and $\w$ around this point. In support of this, the induced metric on the positive-tension brane is, to zeroth order in $y_0$, completely regular across $x_4=1$, even including the perturbations, as can be seen from (\ref{PhiLallk}) and (\ref{PsiLallk}).
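The claimed regularity on the positive-tension brane can be illustrated numerically. The sketch below (with illustrative amplitudes $A_0$, $B_0$ and wavevector $\tk$, using SciPy's Bessel functions) evaluates (\ref{PhiLallk}) near $x_4=1$: on the brane ($\w=1$) the apparent pole cancels, while off the brane the solution genuinely grows without bound:

```python
import numpy as np
from scipy.special import jv, yv

tk, A0, B0 = 0.3, 1.0, 0.5   # illustrative wavevector and mode amplitudes

def Phi_L(x4, w):            # eq. (PhiLallk), to zeroth order in y_0
    J0, Y0 = jv(0, tk * x4), yv(0, tk * x4)
    J1, Y1 = jv(1, tk * x4), yv(1, tk * x4)
    return (2 * tk * (1 - w * x4)**2 / (3 * (x4**2 - 1)) * (A0 * J0 + B0 * Y0)
            + (1 / x4) * (1 + (1 - w * x4)**2 / (1 - x4**2)) * (A0 * J1 + B0 * Y1))

eps = 1e-6
limit = A0 * jv(1, tk) + B0 * yv(1, tk)    # analytic value at x_4 = 1, w = 1
left, right = Phi_L(1 - eps, 1.0), Phi_L(1 + eps, 1.0)
assert abs(left - limit) < 1e-4 and abs(right - limit) < 1e-4  # regular on the brane
assert abs(Phi_L(1 - eps, 0.0)) > 1e3                          # divergent off the brane
```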
As in the case of the background, any imaginary pieces acquired from analytically continuing logarithmic terms are all suppressed by order $y_0^2$. Thus they may only enter the Einstein-brane equations (when these are expanded to order $y_0^2$) in a linear fashion, and hence the real and imaginary parts of the metric constitute independent solutions. We can therefore simply drop the imaginary parts, or equivalently replace the $\ln{(1-x_4)}$ and $\ln{(1-\w x_4)}$ terms with $\ln{|1-x_4|}$ and $\ln{|1-\w x_4|}$ respectively. We have checked explicitly that this still satisfies the Einstein-brane equations and boundary conditions. \section{Comparison with the four-dimensional effective theory} \label{compwitheft} We have now arrived at a vantage point from which we may scrutinise the predictions of the four-dimensional effective theory using our expansion of the bulk geometry about the scaling solution. We will find that the four-dimensional effective theory is in exact agreement with the scaling solution. Beyond this, the $y_0^2$ corrections lead to effects that cannot be described within a four-dimensional effective framework. Nonetheless, the higher-order corrections are automatically small at very early and very late times, restoring the accuracy of the four-dimensional effective theory in these limits. In the near-static limit, the mapping from four to five dimensions may be calculated from the moduli space approach \cite{Terning, Ekpyrotic,KhouryZ}: putting the four-dimensional effective theory metric $g^4_{\mu\nu}$ into Einstein frame, the mapping reads \[ g_{\mu \nu}^{+}= \cosh^2 (\phi/\sqrt{6})g_{\mu \nu}^{4} \qquad g_{\mu \nu}^{-}= \sinh^2 (\phi/\sqrt{6})g_{\mu \nu}^{4}, \label{map4} \] where $g_{\mu \nu}^{+}$ and $g_{\mu \nu}^{-}$ are the metrics on the positive- and negative-tension branes respectively, and $\phi$ is the radion. 
As we showed in Chapter \S\,\ref{confsymmchapter}, on symmetry grounds this is the unique local mapping involving no derivatives \cite{Conf_sym}, and, to leading order, the action for $g_{\mu \nu}^{4}$ and $\phi$ is that for Einstein gravity with a minimally coupled scalar field. Solving the four-dimensional effective theory is trivial: the background is conformally flat, $g^{4}_{\mu \nu} = b_4^2(t_4)\,\eta_{\mu \nu}$, and the Einstein-scalar equations yield the following solution, unique up to a sign choice for $\phi_0$: \[ b_4^2=\bar{C}_4 t_4, \qquad e^{\sqrt{2\over 3}\phi_0} = \bar{A}_4 t_4, \label{back4i} \] with $\phi_0$ the background scalar field, and $\bar{A}_4$ and $\bar{C}_4$ arbitrary constants. (Throughout this chapter we adopt units where $8 \pi G_4 =1$). According to the map (\ref{map4}), the brane scale factors are then predicted to be \[ b_\pm= {1\over 2} b_4 e^{-{\phi_0\over \sqrt{6}}} \left( 1 \pm e^{\sqrt{2\over 3} \phi_0}\right) = 1\pm \bar{A}_4 t_4, \label{back4} \] where we have chosen $\bar{C}_4= 4 \bar{A}_4$, so that the brane scale factors are unity at the brane collision. As emphasised in \cite{TTS}, the result (\ref{back4}) is actually exact for the induced brane metrics, when $t_4$ is identified with the conformal time on the branes. From this correspondence, one can read off the five-dimensional meaning of the parameter $\bar{A}_4$: it equals $L^{-1} \tanh{y_0}$ (our definition of $y_0$ differs from that of \cite{TTS} by a factor of 2).
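The algebra leading from the map (\ref{map4}) and the solution (\ref{back4i}) to the brane scale factors (\ref{back4}) is compact enough to verify numerically. A quick sketch (sample values; units with $L=1$, so that $\bar{A}_4=\tanh{y_0}$):

```python
import numpy as np

A4 = np.tanh(0.1)          # A4 = tanh(y_0)/L in units L = 1, with y_0 = 0.1
for t4 in [0.1, 0.5, 0.9]:
    phi0 = np.sqrt(1.5) * np.log(A4 * t4)    # so that e^{sqrt(2/3) phi_0} = A4 t4
    b4 = 2.0 * np.sqrt(A4 * t4)              # from b4^2 = C4 t4 with C4 = 4 A4
    pref = 0.5 * b4 * np.exp(-phi0 / np.sqrt(6.0))
    bp = pref * (1 + np.exp(np.sqrt(2.0 / 3.0) * phi0))
    bm = pref * (1 - np.exp(np.sqrt(2.0 / 3.0) * phi0))
    assert abs(bp - (1 + A4 * t4)) < 1e-12   # matches (back4), + sign
    assert abs(bm - (1 - A4 * t4)) < 1e-12   # matches (back4), - sign
```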
With regard to the perturbations, in longitudinal gauge (see \eg \cite{mukhanov,bardeen}) the perturbed line element of the four-dimensional effective theory reads \[ \d s_4^2= b_4^2(t_4)\left[-(1+2\Phi_4)\d t_4^2+ (1-2\Phi_4)\d\vec{x}^2\right], \label{4dmet} \] and the general solution to the perturbation equations is \cite{TTS, Khoury} \bea \label{phi4eft} \Phi_4 &=& \frac{1}{t_4}\,\left(\tA_0 J_1(k t_4)+\tB_0 Y_1(k t_4)\right),\qquad\\ \label{deltaradioneft} \frac{\delta \phi}{\sqrt{6}} &=&\frac{2}{3}\,k\,\left(\tA_0 J_0(kt_4)+\tB_0Y_0(kt_4)\right) -\frac{1}{t_4}\,\left(\tA_0 J_1(kt_4)+\tB_0 Y_1(kt_4)\right),\qquad \eea with $\tA_0$ and $\tB_0$ being the amplitudes of the two linearly independent perturbation modes. \subsection{Background} In the case of the background, we require only the result that the scale factors on the positive- and negative-tension branes are given by \[ b_\pm = 1\pm \bar{A}_4 t_4, \] where the constant $\bar{A}_4=L^{-1}\tanh{y_0}$ and $t_4$ denotes conformal time in the four-dimensional effective theory. (Note this solution has been normalised so as to set the brane scale factors at the collision to unity). Consequently, the four-dimensional effective theory predicts that $b_+ + b_-=2$. In comparison, our results from the expansion about the scaling solution (\ref{ad_b}) give \[ b_+ + b_- = 2 + \frac{2\,x_4^4\,\left( x_4^2-3 \right) \,y_0^2}{3\,{\left(1- x_4^2 \right) }^3} + O(y_0^4). \] Thus, the four-dimensional effective theory captures the behaviour of the full theory only in the limit in which the $y_0^2$ corrections are small, \iec when the scaling solution is an accurate description of the higher-dimensional dynamics. At small times such that $x_4\ll 1$, the $y_0^2$ corrections will additionally be suppressed by $O(x_4^2)$, and so the effective theory becomes increasingly accurate in the Kaluza-Klein limit near to the collision.
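To get a feel for the size of this discrepancy, the correction term can be evaluated numerically. A short sketch (with an illustrative value $y_0=0.1$) confirming that the deviation of $b_+ + b_-$ from the four-dimensional prediction behaves as $-2x_4^4 y_0^2$ near the collision and grows without bound as $x_4$ approaches 1:

```python
y0 = 0.1   # illustrative collision rapidity

def delta(x4):   # deviation of b_+ + b_- from the four-dimensional value 2
    return 2 * x4**4 * (x4**2 - 3) * y0**2 / (3 * (1 - x4**2)**3)

# Near the collision the deviation is tiny, with leading behaviour -2 x_4^4 y_0^2:
ratio = delta(0.01) / (-2 * 0.01**4 * y0**2)
assert abs(ratio - 1) < 0.01
assert abs(delta(0.1)) < 1e-5

# ...but it blows up as x_4 -> 1, where the small-y_0 expansion fails:
assert abs(delta(0.99)) > 1
```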
Close to $x_4=1$, the small-$y_0$ expansion fails, and hence our results for the bulk geometry are no longer reliable. For late times $x_4>1$, the negative-tension brane no longer exists and the above expression is not defined. We can also ask what the physical counterpart of $t_4$, conformal time in the four-dimensional effective theory, is: from (\ref{ad_b}), to $O(y_0^3)$, we find \[ t_4 = \frac{b_+-b_-}{2 \bar{A}_4} = \frac{x_4}{y_0}- \frac{y_0}{30 {( 1 - x_4^2) }^3}\,\Big( x_4^3 ( 5 - 14 x_4^2 + 5 x_4^4 ) - 5 x_4{( 1 - x_4^2 ) }^2 \ln (1 - x_4^2)\Big). \] In comparison, the physical conformal times on the positive- and negative-tension branes, defined via $b\, \d t_\pm = n\,\d t = (n/y_0)(1-x_4^2)\,e^{-x_4^2/2}\,\d x_4 $, are, to $O(y_0^3)$, \[ t_+ = \frac{x_4}{y_0} + \frac{y_0}{30 {( 1 - x_4^2 ) }^3}\Big( 10 - 30x_4^2- x_4^3( 5 - 14 x_4^2 + 5 x_4^4 ) + 5 x_4 {( 1 - x_4^2 ) }^2 \ln (1 - x_4^2) \Big) \] and \[ t_- = \frac{x_4}{y_0} - \frac{y_0}{30 {( 1 - x_4^2 ) }^3}\Big( 10 -30 x_4^2+x_4^3(5 - 14 x_4^2 + 5 x_4^4 ) - 5 x_4 {( 1 - x_4^2 ) }^2 \ln (1 - x_4^2) \Big), \] where we have used (\ref{ad_n}) and (\ref{ad_b}). Remarkably, to lowest order in $y_0$, the two brane conformal times are in agreement not only with each other, but also with the four-dimensional effective theory conformal time. Hence, in the limit in which $y_0^2$ corrections are negligible, there exists a universal four-dimensional time. In this limit, $t_4=x_4/y_0$ and the brane scale factors are simply given by $b_\pm = 1\pm\bar{A}_4 t_4 = 1\pm x_4$. The four-dimensional effective scale factor, $b_4$, is given by \[ \label{b4def} (b_4)^2=b_+^2-b_-^2=4 \bar{A}_4 t_4 = 4 x_4 \qquad \Rightarrow b_4 = 2 x_4^{1/2}. \] In order to describe the full five-dimensional geometry, one must specify the distance between the branes $d$ as well as the metrics induced upon them.
The distance between the branes is of particular interest in the cyclic scenario, where an interbrane force depending on the interbrane distance $d$ is postulated. In the lowest approximation, where the branes are static, the four-dimensional effective theory predicts that \[ \label{lncoth} d =L \ln \coth\left({|\phi|\over\sqrt{6}}\right)= L\ln\left(\frac{b_+}{b_-}\right) = L \ln \left({1+\bar{A}_4 t_4\over 1-\bar{A}_4 t_4} \right). \] Substituting our scaling solution and evaluating to leading order in $y_0$, we find \[ \label{4d_pred} d = L \ln \left(\frac{1+x_4}{1-x_4}\right)+O(y_0^2). \] (Again, this quantity is ill-defined for $x_4>1$). In the full five-dimensional setup, a number of different measures of the interbrane distance are conceivable, and the interbrane force could depend upon each of these, according to the precise higher-dimensional physics. One option would be to take the metric distance along the extra dimension \[ d_m =L \int_{-y_0}^{y_0} \sqrt{g_{yy}} \d y = L \int_{-y_0}^{y_0} nt \d y = L \int_{-1}^{1} n x_4 e^{-\frac{1}{2}x_4^2}\d \w. \] Using (\ref{ad_n}), we obtain \[ \label{d_m} d_m = L \int_{-1}^{1}\frac{x_4}{1-\w x_4}\d \w+ O(y_0^2) = L \ln \left(\frac{1+x_4}{1-x_4}\right)+O(y_0^2), \] in agreement with (\ref{4d_pred}). An alternative measure of the interbrane distance is provided by considering affinely parameterised spacelike geodesics running from one brane to the other at constant Birkhoff-frame time and noncompact coordinates $x^i$. The background interbrane distance is just the affine parameter distance along the geodesic, and the fluctuation in distance is obtained by integrating the metric fluctuations along the geodesic, as discussed in Appendix \ref{appE}. One finds that, to leading order in $y_0$ only, the geodesic trajectories lie purely in the $y$-direction. 
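Before turning to the affine distance, the leading-order metric-distance integral (\ref{d_m}) is easy to confirm numerically; this sketch is ours and not part of the original derivation:

```python
import numpy as np
from scipy.integrate import quad

# Numerical spot check (ours) of the leading-order metric-distance integral:
# integral_{-1}^{1} x4/(1 - w*x4) dw = ln[(1 + x4)/(1 - x4)] for 0 < x4 < 1.
for x4 in (0.1, 0.5, 0.9):
    val, err = quad(lambda w: x4/(1.0 - w*x4), -1.0, 1.0)
    assert np.isclose(val, np.log((1.0 + x4)/(1.0 - x4)), rtol=1e-10)
```

The antiderivative is $-\ln(1-\w x_4)$, so the agreement with the four-dimensional prediction (\ref{4d_pred}) is exact at this order.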
Hence the affine distance $d_a$ is trivially equal to the metric distance $d_m$ at leading order, since \[ d_a = L \int \sqrt{g_{ab}\dot{x}^a\dot{x}^b}\,\d \lambda = L \int nt\dot{y} \d \lambda = d_m, \] where the dots denote differentiation with respect to the affine parameter $\lambda$. Both measures of the interbrane distance therefore coincide and are moreover in agreement with the four-dimensional effective theory prediction, but only at leading order in $y_0$. \subsection{Perturbations} Since the four-dimensional Newtonian potential $\Phi_4$ represents the \textit{anticonformal} part of the perturbed four-dimensional effective metric (see (\ref{4dmet})), it is unaffected by the conformal factors in (\ref{map4}) relating the four-dimensional effective metric to the induced brane metrics. Hence we can directly compare the anticonformal part of the perturbations of the induced metric on the branes, as calculated in five dimensions, with $2\Phi_4$ in the four-dimensional effective theory. The induced metric on the branes is given by \bea \d s^2 &=& b^2\, \left( -(1+2 \Phi_L)\, \d t_\pm^2 + (1-2 \Psi_L )\, \d \vec{x}^2 \right) \nonumber \\ &=& b^2\, (1+\Phi_L-\Psi_L) \left( -(1+\Phi_L+\Psi_L)\, \d t_\pm^2 + (1- \Psi_L-\Phi_L )\, \d \vec{x}^2 \right),\qquad \eea where the background brane conformal time, $t_\pm$, is related to the bulk time via $b\, \d t_\pm = n\,\d t$. The anticonformal part of the metric perturbation is thus simply $\Phi_L+\Psi_L$. It is this quantity, evaluated on the branes to leading order in $y_0$, that we expect to correspond to $2\Phi_4$ in the four-dimensional effective theory. Using our results (\ref{adperts}) and (\ref{phi40soln}) from expanding about the scaling solution, we have to $O(y_0^2)$, \[ \label{thiseqn} \frac{1}{2}(\Phi_L+\Psi_L)_+= \frac{1}{2}(\Phi_L+\Psi_L)_-=\phi_{40}(x_4) = \frac{1}{x_4}\left(A_0 J_1(\tk x_4)+B_0 Y_1(\tk x_4)\right), \] with $A_0$ and $B_0$ as given in (\ref{A0B0}).
On the other hand, the Newtonian potential of the four-dimensional effective theory is given by (\ref{phi4eft}). Since $t_4$, the conformal time in the four-dimensional effective theory, is related to the physical dimensionless brane conformal time $x_4$ by $t_4=x_4/y_0$ (to lowest order in $y_0$), and moreover $\tk=k/y_0$, we have $\tk x_4 = k t_4$. Hence, the four-dimensional effective theory prediction for the Newtonian potential is in exact agreement with the scaling solution holding at leading order in $y_0$, upon identifying $\tA_0$ with $A_0/y_0$ and $\tB_0$ with $B_0/y_0$. The behaviour of the Newtonian potential is illustrated in Figure \ref{phifourplots}. \begin{figure}[p] \begin{center} \begin{minipage}{10cm} \includegraphics[width=10cm]{Phi4A.eps} \vspace{1cm} \end{minipage} \begin{minipage}{10cm} \hspace{-0.32cm} \includegraphics[width=10cm]{Phi4B.eps} \vspace{1cm} \end{minipage} \caption{ The four-dimensional Newtonian potential $\Phi_4$ on the positive-tension brane, plotted to zeroth order in $y_0$ as a function of the time $x_4$ for wavenumber $\tk = 1$. The upper plot illustrates the mode with $A=1$ and $\tB=0$, while the lower plot has $A=0$ and $\tB=1$. } \label{phifourplots} \end{center} \end{figure} Turning our attention now to the radion perturbation, $\delta \phi$, we know from our earlier considerations that this quantity is related to the perturbation $\delta d$ in the interbrane separation. Specifically, from varying (\ref{lncoth}), we find \[ \delta d = 2 L\, \cosech\Big(\sqrt{\frac{2}{3}}\,\phi\Big)\,\frac{\delta \phi}{\sqrt{6}}. \] Inserting the four-dimensional effective theory predictions for $\phi$ and $\delta \phi$, we obtain \[ \label{deltad} \frac{\delta d}{L}=\Big(\frac{4 \bar{A}_4 t_4}{\bar{A}^2_4 t^2_4 -1}\Big)\Big(\frac{2}{3}\,k\,\big(\tA_0 J_0(kt_4)+\tB_0Y_0(kt_4)\big) -\frac{1}{t_4}\,\big(\tA_0 J_1(kt_4)+\tB_0 Y_1(kt_4)\big) \Big), \] where to lowest order $\bar{A}_4 = y_0+O(y_0^3)$.
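The variation of (\ref{lncoth}) can be checked symbolically; the sketch below is ours, and we work on the branch $\phi > 0$, where the derivative carries an explicit minus sign (before the collision one has $e^{\sqrt{2/3}\phi} = \bar{A}_4 t_4 < 1$, i.e. $\phi < 0$, and $\cosech(\sqrt{2/3}\,\phi)$ flips sign, recovering the quoted formula):

```python
import sympy as sp

# Symbolic check (ours) of the variation of d = L ln coth(phi/sqrt(6)):
# for phi > 0 one finds d(d)/d(phi) = -2 L csch(sqrt(2/3) phi)/sqrt(6),
# so |delta d| = 2 L |csch(sqrt(2/3) phi)| |delta phi|/sqrt(6), with the
# overall sign set by the branch of |phi|.
phi, L = sp.symbols('phi L', positive=True)
d = L*sp.log(sp.coth(phi/sp.sqrt(6)))
deriv = sp.diff(d, phi)
target = -2*L*sp.csch(sp.sqrt(sp.Rational(2, 3))*phi)/sp.sqrt(6)
# evaluate the difference numerically at sample points; it should vanish
vals = [(deriv - target).subs({phi: p, L: 1}).evalf() for p in (sp.Rational(1, 2), 2)]
assert all(abs(v) < 1e-12 for v in vals)
```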
In comparison, the perturbation in the metric distance between the branes is \[ \label{deltad_m} \frac{\delta d_m}{L} = \int_{-y_0}^{y_0}nt\Gamma_L \d y = \int_{-1}^{1} \frac{n x_4 \xi_4}{b^2}\, e^{-\frac{1}{2}x_4^2} \d \w, \] where we have used (\ref{phi4xi4}). Evaluating the integral using (\ref{xi4soln}), to an accuracy of $O(y_0^2)$ we obtain \bea \frac{\delta d_m}{L}&=&\int_{-1}^{1} \frac{n x_4 \xi_{40}(x_4)}{b^2}\, e^{-\frac{1}{2}x_4^2} \d \w = \frac{2 x_4 \xi_{40}(x_4)}{(1-x_4^2)^2} \qquad\qquad\qquad\qquad \nonumber \\ &=& \frac{1}{(x_4^2-1)}\Big(\frac{8}{3}\,\tk x_4 (A_0 J_0(\tk x_4)+B_0 Y_0(\tk x_4)) \qquad \qquad\qquad\qquad \nonumber \\ && \qquad \qquad\qquad\qquad \qquad -4 (A_0 J_1(\tk x_4)+B_0 Y_1(\tk x_4))\Big), \eea which is in agreement with (\ref{deltad}) when we set $\bar{A}_4 t_4 \sim y_0 t_4 = x_4$, along with $\tA_0=A_0/y_0$, $\tB_0=B_0/y_0$ and $k=\tk y_0$. The calculations in Appendix \ref{appE} show moreover that the perturbation in the affine distance between the branes, $\delta d_a$, is identical to the perturbation in the metric distance $\delta d_m$, to lowest order in $y_0$. The four-dimensional effective theory thus correctly predicts the Newtonian potential $\Phi_4$ and the radion perturbation $\delta\phi$, but only in the limit in which the $y_0^2$ corrections are negligible and the bulk geometry is described by the scaling solution. While these corrections are automatically small at very early or very late times, at intermediate times they cannot be ignored and introduce effects that cannot be described by the four-dimensional effective theory. The only five-dimensional longitudinal gauge metric perturbation we have not used in any of the above is $W_L$: this component is effectively invisible to the four-dimensional effective theory, since it vanishes on both branes and has no effect on the interbrane separation.
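As a final cross-check of this subsection (ours, not from the original text), the $\w$-integral for $\delta d_m$ can be confirmed numerically. The leading-order profiles used below, $n\,e^{-x_4^2/2} \approx (1-\w x_4)^{-1}$ and $b \approx 1-\w x_4$, are our reading of the scaling solution, chosen so that the $d_m$ integrand and the brane values $b_\pm = 1 \pm x_4$ are reproduced:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check (ours) of the delta d_m integral at leading order in y0.
# Assumed leading-order profiles (our reading of the scaling solution):
#   n*exp(-x4^2/2) ~ 1/(1 - w*x4)  and  b ~ 1 - w*x4,
# so the integrand is x4/(1 - w*x4)^3 times the w-independent xi_40(x4).
for x4 in (0.2, 0.6, 0.95):
    val, err = quad(lambda w: x4/(1.0 - w*x4)**3, -1.0, 1.0)
    assert np.isclose(val, 2.0*x4/(1.0 - x4**2)**2, rtol=1e-10)
```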
\section{Mixing of growing and decaying modes} \label{mixingsection} Regardless of the rapidity of the brane collision $y_0$, one expects a four-dimensional effective description to hold both near to the collision, when the brane separation is much less than an AdS length, and also when the branes are widely separated over many AdS lengths. In the former case, the warping of the bulk geometry is negligible and a Kaluza-Klein type reduction is feasible, and in the latter case, one expects to recover brane-localised gravity. At the transition between these two regions, when the brane separation is of order one AdS length, one might anticipate a breakdown of the four-dimensional effective description. When the brane separation is of order a few AdS lengths, however, the negative-tension brane reaches the horizon of the bulk black hole and the small-$y_0$ expansion fails. This failure hampers any efforts to probe the breakdown of the four-dimensional effective theory at $x_4=1$ directly; instead, we will look for evidence of mixing between the four-dimensional perturbation modes in the transition from Kaluza-Klein to brane-localised gravity. To see this in action we have to compare the behaviour of the perturbations at very small times with that at very late times: in both of these limits a four-dimensional effective description should apply, regardless of the collision rapidity $y_0$, in which the four-dimensional Newtonian potential $\Phi_4$ satisfies \[ \Phi_4 = \frac{1}{x_4}\left(A_0 J_1(\tk x_4)+B_0 Y_1(\tk x_4)\right). 
\] Expanding this out on long wavelengths $\tk\ll 1$, taking in addition $\tk x_4 \ll 1$, we find \[ \label{phi4exp} \Phi_4 = -\,\frac{\tB_4^0}{x_4^2}+A_4^0+\frac{1}{2}\,\tB_4^0 \tk^2 \ln{\tk x_4}-\frac{1}{8}\,A_4^0 \tk^2 x_4^2+O(\tk^4), \] where the dimensionless constants $A_4^0$ and $\tB_4^0$ are given in terms of the five-dimensional perturbation amplitudes $A$ and $\tB$ by \[ \label{A4B4} A_4^0 = \frac{3}{2}\,A-\frac{9}{16}\,\tB-\frac{1}{8}\tB\tk^2, \qquad \tB_4^0 = \frac{1}{2}\tB, \] where we have used (\ref{A0B0}) and (\ref{thiseqn}), recalling that $\Phi_4 = \phi_{40}$ at leading order in $y_0$. In comparison, using our results from expanding about the scaling solution, we find the Newtonian potential on the positive-tension brane at small times $x_4 \ll 1$ is given by \bea \label{phi4early} \Phi_4 &= \(-\,\frac{\tB}{2 x_4^2}+ \frac{3}{2}\,A-\frac{9}{16}\,\tB-\frac{1}{8}\, \tB \tk^2+\frac{1}{4}\, \tB \tk^2 \ln{\tk x_4} -\frac{3}{16}\,A\tk^2 x_4^2+\frac{9}{128}\,\tB \tk^2 x_4^2\) \nonumber \\ && +y_0^2 \,\Bigg(\frac{11}{120}\,\tB-3 A x_4-\frac{47}{8}\,\tB x_4-\frac{1}{2}\,\tB\tk^2 x_4 \ln{\tk x_4}+6Ax_4^2\nonumber \\ && \qquad +\frac{1084}{105}\,\tB x_4^2 -\frac{211}{960}\,\tB \tk^2x_4^2+\tB\tk^2 x_4^2 \ln{\tk x_4}\Bigg) +O(x_4^3)+O(y_0^4). \eea Examining this, we see that to zeroth order in $y_0$ the result is in exact agreement with (\ref{phi4exp}) and (\ref{A4B4}). At $y_0^2$ order, however, extra terms appear that are not present in (\ref{phi4exp}). Nonetheless, at sufficiently small times the effective theory is still valid as these `extra' terms are subleading in $x_4$: in this limit we find \[ \Phi_4 = -\,\frac{\tB^E_4}{x_4^2}+A_4^E+\frac{1}{2}\,\tB_4^E\,\tk^2\ln{\tk x_4}+O(x_4) +O(\tk^4) \] (the superscript $E$ indicating early times), in accordance with the four-dimensional effective theory, where \[ \label{A4B4E} A_4^E = A_4^0 + \frac{11}{120}\tB y_0^2, \qquad \tB_4^E =\tB_4^0.
\] At late times such that $x_4\gg 1$ (but still on sufficiently long wavelengths that $\tk x_4 \ll 1$), we find on the positive-tension brane \bea \label{phi4late} \Phi_4 &= \left(-\,\frac{\tB}{2 x_4^2}+ \frac{3}{2}\,A-\frac{9}{16}\,\tB-\frac{1}{8}\, \tB \tk^2+\frac{1}{4}\, \tB \tk^2 \ln{\tk x_4} -\frac{3}{16}\,A\tk^2 x_4^2+\frac{9}{128}\,\tB \tk^2 x_4^2\right) \nonumber \\ && +y_0^2 \,\Bigg(-\,\frac{A}{3 x_4^2}-\frac{\tB}{24 x_4^2}-\frac{A\tk^2}{8x_4^2} +\frac{173\tB\tk^2}{960 x_4^2}-\frac{\tB \tk^2 \ln{\tk}}{18x_4^2}-\frac{\tB \tk^2 \ln{ x_4}}{12 x_4^2}\nonumber \\ &&\qquad -\frac{3}{8}\,\tB+\frac{2}{9}\tB\tk^2+\frac{1}{6}\tB\tk^2\ln{x_4} +\frac{3}{64}\tB\tk^2 x_4^2\Bigg)+O\(\frac{1}{x_4^3}\)+O(y_0^4). \eea To zeroth order in $y_0$, the results again coincide with the effective theory prediction (\ref{phi4exp}) and (\ref{A4B4}). At $y_0^2$ order, however, extra terms not present in the four-dimensional effective description once more appear. In spite of this, at sufficiently late times the effective description still holds as these `extra' terms are suppressed by inverse powers of $x_4$ relative to the leading terms, which are \[ \Phi_4 = A_4^L-\,\frac{\tB^L_4}{x_4^2} -\,\frac{1}{8}\,A_4^L\, \tk^2 x_4^2 + O(\tk^2 \ln{\tk x_4}) \] (where the superscript $L$ indicates late times), in agreement with the four-dimen\-sional effective theory. (Since $x_4\gg 1$, we find $\tk \ll \tk x_4 \ll 1$, and we have chosen to retain terms of $O(\tk^2 x_4^2)$ but to drop terms of $O(\tk^2)$. The term of $O(x_4^{-2})$ is much larger than $O(\tk^2)$ and so is similarly retained). Fitting this to (\ref{phi4late}), we find \[ \label{A4Ldef} A_4^L = A_4^0 -\frac{3}{8}\,\tB y_0^2, \qquad \tB_4^L = \tB_4^0+\Big(\frac{A}{3}+\frac{\tB}{24}\Big)y_0^2. \] Comparing the amplitudes of the two four-dimensional modes at early times, $A_4^E$ and $\tB_4^E$, with their counterparts $A_4^L$ and $\tB_4^L$ at late times, we see clearly that the amplitudes differ at $y_0^2$ order. 
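The $O(y_0^2)$ shifts between the early- and late-time amplitudes can be verified algebraically from the relations just quoted; the following sketch (ours) runs the combination of (\ref{A4B4}), (\ref{A4B4E}) and (\ref{A4Ldef}) through a computer algebra system:

```python
import sympy as sp

# Algebraic check (ours) of the early/late amplitude shifts, combining the
# quoted A_4^0, B_4^0 with the early- and late-time fits; O(y0^4) is dropped.
A, B, k, y0 = sp.symbols('A B k y0')
A40 = sp.Rational(3, 2)*A - sp.Rational(9, 16)*B - B*k**2/8
B40 = B/2
A4E, B4E = A40 + sp.Rational(11, 120)*B*y0**2, B40
A4L, B4L = A40 - sp.Rational(3, 8)*B*y0**2, B40 + (A/3 + B/24)*y0**2
# shift of the A-amplitude: A4L = A4E - (14/15) y0^2 B4E (exact here)
assert sp.expand(A4L - A4E + sp.Rational(14, 15)*y0**2*B4E) == 0
# shift of the B-amplitude: B4L = B4E + [(2/9) A4E + (1/3 + k^2/18) B4E] y0^2
resid = sp.expand(B4L - B4E - sp.Rational(2, 9)*y0**2*A4E
                  - (sp.Rational(1, 3) + k**2/18)*y0**2*B4E)
assert sp.series(resid, y0, 0, 4).removeO() == 0
```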
Using (\ref{A4B4}), we find: \[ \label{mixingmatrix} \left(\begin{array}{c} A_4^L \\[1ex] \tB_4^L \end{array}\right) = \left(\begin{array}{cc} 1 \ \ \ \ & -\frac{14}{15} y_0^2 \\[1ex] \frac{2}{9}y_0^2 \ \ \ \ & 1+\big(\frac{1}{3}+\frac{\tk^2}{18}\big)y_0^2\end{array}\right)\left(\begin{array}{c} A_4^E \\[1ex] \tB_4^E \end{array}\right) . \] Hence the four-dimensional perturbation modes (as defined at very early or very late times) undergo mixing. \section{Summary} In this chapter we have developed a set of powerful analytical methods which, we believe, render braneworld cosmological perturbation theory solvable. Considering the simplest possible cosmological scenario, consisting of slowly moving, flat, empty branes emerging from a collision, we have found a striking example of how the four-dimensional effective theory breaks down at first nontrivial order in the brane speed. As the branes separate, a qualitative change in the nature of the low energy modes occurs, from being nearly uniform across the extra dimension when the brane separation is small, to being exponentially localised on the positive-tension brane when the branes are widely separated. If the branes separate at finite speed, the localisation process fails to keep up with the brane separation and the low energy modes do not evolve adiabatically. Instead, a given Kaluza-Klein zero mode at early times will generically evolve into a mixture of both brane-localised zero modes and excited modes in the late-time theory. From the perspective of the four-dimensional theory, this is manifested in the mixing of the four-dimensional effective perturbation modes between early and late times, as we have calculated explicitly. Such a mixing would be impossible were a local four-dimensional effective theory to remain valid throughout cosmic history: mode-mixing is literally a signature of higher-dimensional physics, writ large across the sky. 
The strength of our expansion about the scaling solution lies in its ability to interpolate between very early and very late time behaviours, spanning the gap in which the effective theory fails. Not only can we solve for the full five-dimensional background and perturbations of a colliding braneworld, but our solution takes us beyond the four-dimensional effective theory and into the domain of intrinsically higher-dimensional physics. \chapter{Generating brane curvature} \label{zetachapter} \begin{flushright} \begin{minipage}{8.5cm} \small {\it \noindent If you can look into the seeds of time, \\ And say which grain will grow, and which will not... } \begin{flushright} \noindent Macbeth, Act I. \end{flushright} \end{minipage} \end{flushright} \section{Introduction} A key quantity of cosmological interest is $\zeta$, the curvature perturbation on comoving slices. Well away from the collision, $\zeta_4$, the curvature perturbation in the four-dimensional effective theory, agrees well with the same quantity $\zeta$ defined on the branes. Yet whereas the brane curvature perturbation is precisely conserved (in the absence of additional bulk stresses), the curvature perturbation in the four-dimensional effective theory changes at order $(V/c)^2$ as the branes approach, due to the mixing of four-dimensional perturbation modes. In light of this failure of the four-dimensional effective theory, it is interesting to revisit the generation of curvature on the branes from a five-dimensional perspective. In the ekpyrotic and cyclic models, this takes place through the action of an additional bulk stress, $\Delta T^5_5$, on top of the background negative cosmological constant \cite{Ekpyrotic, Steinhardt:2002ih, Cyclicevo}. In this chapter, we show how $\zeta$ on the branes is generated as the two branes approach by an effect proportional to $\Delta T_5^5$ times an `entropy' perturbation.
The latter measures the relative time delay between the interbrane distance and the metric perturbations on the branes. The entropy perturbation is nonzero even in the model with no $\Delta T_5^5$, for which we have solved the five-dimensional Einstein equations. We are thus able to give an expression for the final brane curvature which is accurate to leading order in $\Delta T_5^5$. The $(V/c)^2$ violations of the four-dimensional effective theory cause the entropy perturbation to acquire a scale-invariant spectrum, which is then converted, under the influence of $\Delta T_5^5$, into a scale-invariant brane curvature perturbation. \subsection{$\zeta$ in the four-dimensional effective theory} Working in longitudinal gauge, from (\ref{zetadef}) the comoving curvature perturbation in the four-dimensional effective theory is given by \[ \zeta_4 = \Phi_4 - \frac{\cH(\Phi_4'+\cH \Phi_4)}{\cH'-\cH^2} = \frac{4}{3}\,A_4, \] where primes denote differentiation with respect to the four-dimensional conformal time $t_4$. (Recall from (\ref{b4def}) that the comoving Hubble parameter $\cH = b_4'/b_4 = 1/(2 t_4)$, and that $\Phi_4 = A_4 -B_4/t_4^2$ on long wavelengths). From the four-dimensional effective theory, we expect that $\zeta_4$ is constant on long wavelengths. As a consequence of the mode-mixing in (\ref{mixingmatrix}), however, the value of $A_4$ (and hence $\zeta_4$) differs at $O(y_0^2)$ between early and late times. \subsection{$\zeta$ on the branes} To find a quantity that is exactly conserved on long wavelengths, let us instead define the comoving curvature perturbation on the branes directly in five dimensions. The following linear combination of the five-dimensional isotropic spatial metric perturbation $\Psi$ and the lapse perturbation $\Phi$ is gauge-invariant, when evaluated on either brane: \[ \zeta \equiv \Psi - {H\over H_{,T}}\left( \Psi_{,T}+H\Phi\right), \label{zeta} \] where $T$ is now the proper time on the brane and $H=b_{,T}/b$.
In four-dimensional cosmological perturbation theory, the three-curvature of spatial slices is proportional to $k^2 \Psi$, and the second term in (\ref{zeta}) is proportional to the matter velocity with respect to those slices \cite{mukhanov}. The latter may be gauged away in a suitable comoving time-slicing. Thus, $k^2 \zeta$ may be interpreted as the curvature perturbation of spatial slices which are comoving with respect to the matter. In our case, no matter is present on the branes. Nevertheless, $\zeta$ is still a well-defined gauge-invariant quantity. For empty branes and an empty bulk (apart from the background negative cosmological constant), as we shall see below, $\zeta$ is a conserved quantity at long wavelengths. We can therefore calculate $\zeta$ at small $t$ from the results (\ref{tseriesbgd}) and (\ref{tseriesperts}) from the previous chapter. Evaluating (\ref{zeta}) on each brane, we find that, to order $y_0^4$, \[ \zeta_+=\zeta_- = 2A-\frac{3}{4}\,B\,y_0^2-\frac{1}{2}\,B\,y_0^4 = \frac{4}{3}\,A_4^L = \frac{4}{3}\,A_4^E-\frac{56}{45}\,y_0^2 \tB_4^E , \label{qzeta} \] where the latter two equalities follow from (\ref{A4Ldef}), (\ref{A4B4}) and (\ref{mixingmatrix}). Thus, in the limit of late times, $\zeta_\pm$ and $\zeta_4$ coincide. At early times, however, $\zeta_4$ differs from $\zeta_\pm$ at $O(y_0^2)$. \section{Inducing brane curvature from a five-dimen\-sional perspective} In the ekpyrotic and cyclic models, the primordial ripples on the branes are generated by a force between the branes \cite{Ekpyrotic, Khoury}. In the four-dimensional effective theory, this is described by an assumed potential $V(\phi)$ for the radion field $\phi$. In the five-dimensional description, the force between the branes is associated with an additional bulk stress $\Delta T^5_5$, on top of the background negative cosmological constant. This extra stress enters the background evolution equations as follows.
The background $G_5^5$ Einstein equation, when evaluated on each brane with the appropriate boundary conditions \cite{Carsten}, yields an equation for the Hubble rate $H$ on each brane \cite{Binetruy}: \[ H_{,T}+ 2H^2 = -{1 \over 3 m_5^3}\, \Delta T^5_5, \label{back5} \] where $\Delta T^5_5$ is the five-dimensional stress evaluated on the brane, and the five-dimensional Planck mass $m_5$ is defined so that the coefficient of the Ricci scalar in the five-dimensional Einstein-Hilbert action is $m_5^3/2$. Likewise for the linearised perturbations, the $G_5^5$ equation, when supplemented with the appropriate boundary conditions, yields \cite{Carsten} \[ \Psi_{,TT}+ H\,(\Phi_{,T} +4 \Psi_{,T}) +2 \Phi\,(H_{,T} +2 H^2) = {k^2 \over 3 b^2}\, (\Phi-2 \Psi) +{1\over 3 m_5^3} \,\delta T^5_5, \label{psieq} \] where $\delta T^5_5$ is the linearised perturbation of the bulk stress \cite{Carsten}. From equations (\ref{zeta}), (\ref{back5}) and (\ref{psieq}), one can derive the following evolution equation for $\zeta$: \[ \zeta_{,T} = -{1\over 3 m_5^3}\, {1\over H_{,T}}\, \left[(\Delta T^5_5)_{,T} \, {\cal S} - k^2\, b^{-2} H\, (\Phi-2 \Psi) \right], \label{zetaeq} \] where we define a gauge-invariant `entropy' perturbation \[ {\cal S}\equiv {H\over H_{,T}}\,\left( \Psi_{,T}+H\Phi\right) +H \,{\delta T^5_5\over (\Delta T^5_5)_{,T}}, \label{seq} \] which measures the difference between the dimensionless `time delay' in $\delta T^5_5$ and a quantity transforming in an identical manner constructed from the four-dimensional metric perturbations. Therefore ${\cal S}$ can be thought of as a measure of the extent to which $\delta T^5_5$ is synchronised with the metric perturbations on the brane. For adiabatic perturbations in four dimensions, ${\cal S}$ is zero on long wavelengths for all scalar variables such as $\Delta T^5_5$.
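The long-wavelength content of (\ref{zetaeq}) follows mechanically from (\ref{zeta}), (\ref{back5}) and (\ref{psieq}); the algebra can be delegated to a computer algebra system. The sketch below is ours and checks only the $k \to 0$ terms:

```python
import sympy as sp

# Check (ours) of the k -> 0 limit of the zeta evolution equation:
# differentiate zeta, eliminate Psi_TT via the perturbed G^5_5 equation and
# H_TT via the T-derivative of the background equation, and compare with
# -(Delta T^5_5)_T * S / (3 m5^3 H_T), with S as defined in the text.
T, m5 = sp.symbols('T m5', positive=True)
H, Phi, Psi, DT, dT = [sp.Function(f)(T) for f in ('H', 'Phi', 'Psi', 'DT5', 'dT5')]
Hp = sp.Derivative(H, T)
F = Psi.diff(T) + H*Phi                      # Psi_T + H Phi
zeta = Psi - H/Hp*F
zeta_T = zeta.diff(T).subs({
    # perturbed G^5_5 equation with k = 0:
    sp.Derivative(Psi, (T, 2)): -H*Phi.diff(T) - 4*H*Psi.diff(T)
                                - 2*Phi*(Hp + 2*H**2) + dT/(3*m5**3),
    # background equation differentiated: H_TT + 4 H H_T = -(Delta T)_T/(3 m5^3)
    sp.Derivative(H, (T, 2)): -4*H*Hp - DT.diff(T)/(3*m5**3),
})
S = (H/Hp)*F + H*dT/DT.diff(T)
target = -DT.diff(T)*S/(3*m5**3*Hp)
assert sp.simplify(zeta_T - target) == 0
```

At $k=0$ this makes the conservation statement explicit: $\zeta$ is constant whenever $(\Delta T^5_5)_{,T}\,{\cal S}$ vanishes.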
In a fully five-dimensional description of the attractive interbrane force, (\ref{zetaeq}) and (\ref{seq}) could be used to study how $\zeta$ is generated on the branes. Such a calculation is beyond the scope of the present work. Instead, we will content ourselves with an estimate of the induced $\zeta$ on the branes, under the assumption that $\Delta T_5^5$ is a function of the interbrane separation $d$. Since ${\cal S}$ does not depend on the magnitude of $\Delta T_5^5$ (nor indeed on its functional dependence upon $d$), as long as $\Delta T_5^5$ is small we can still use our solution from the preceding chapter, in which the effects of $\Delta T_5^5$ have been ignored, to reliably calculate ${\cal S}$ to lowest order in $\Delta T_5^5$. Nonetheless, some ambiguity remains in our result because it is not obvious {\it a priori} precisely which measure of the interbrane distance $d$ should be used. In fact, this will depend on the detailed microphysics generating the interbrane force, which will be model-dependent. We will investigate two possible cases here and use the difference between the answers as a measure of the uncertainty in the final result. As a first choice, let us assume that the bulk stress $\Delta T_5^5$ is a function of the geodesic separation between the two branes, along spacelike geodesics chosen to be orthogonal to the four translational Killing vectors of the static background (corresponding to shifts in $\vec{x}$ and Birkhoff-frame time). The relevant distance $d$ and its perturbation are calculated in the final section of Appendix D. 
We find that, as the collision approaches, ${\cal S}$ tends to a constant on both branes: \[ {\cal S}\equiv {H\over H_{,T}}\,\left( \Psi_{,T}+H\Phi\right) +H \,{\delta d\over d_{,T}} \approx C_S B_4\,(V^2/L^2)\, (V/c)^2 +O(t^2), \label{sdeq} \] where $V$ is the relative speed of the branes as they collide, $B_4$ is the growing mode amplitude in the four-dimensional effective theory\footnote{ From (\ref{A4B4}) and (\ref{A4B4E}), to leading order in $(V/c)^2$ we have $B_4= B/2$ as the collision approaches, where $B$ is the five-dimensional growing mode amplitude.}, and $C_S=7$ for this choice of $d$. We have also calculated ${\cal S}$ under the assumption that the relevant $d$ is just the naive metric separation between the two branes, $\int dy \sqrt{g_{55}}$, in which case we find $C_S=1$. The absence of any $k$-dependence in this formula means that in the cyclic model, where the coefficient $B_4$ has a scale-invariant spectrum in the incoming state, the entropy perturbation is scale-invariant as the two branes collide. We emphasise that ${\cal S}$ is a fully gauge-invariant quantity, measuring the relative time delay between the perturbations on the branes, and the time to the collision as measured by the distance $d$, upon which the interbrane force depends. We can see from (\ref{zetaeq}) how a bulk stress $\Delta T_5^5$, acting on the entropy perturbation ${\cal S}$, leads directly to a scale-invariant $\zeta$ on the branes. If we work perturbatively in $\Delta T_5^5$, then using (\ref{back5}) and (\ref{zetaeq}), to leading order in $k$ we find \[ \zeta \approx {1\over 6 m_5^3} \,\int dT \,{(\Delta T_5^5)_{,T}\over H^2}\, {\cal S}. \label{pertz} \] When the branes are close, ${\cal S}$ is approximately constant. Since the integrand is not a total derivative, and therefore does not cancel, it follows that $\zeta$ acquires the same $k$-dependence as ${\cal S}$. 
For example, taking $\Delta T_5^5$ to be a narrow negative square well of strength $\Delta$ and width $\tau$, we find \[ \zeta \approx \,\frac{{\cal S}_0 \tau \Delta }{3 m_5^3 H} \] to leading order in $\tau$, where ${\cal S}_0$ is the constant value in (\ref{sdeq}). In the cyclic model, however, $\Delta T_5^5$ is not small. Its presence, as discussed in \cite{Cyclicevo}, qualitatively alters the evolution of the background solution so that the negative-tension brane, initially contracting after the collision, turns around and starts expanding, with the result that the interbrane distance never becomes large. An accurate calculation of the amplitude of the final perturbation spectrum will therefore require the effect of $\Delta T_5^5$ on the background, and on the perturbations, to be included. It is nevertheless not hard to estimate the parametric dependence of the result. In the scaling solution of the four-dimensional effective theory, one has $H \sim T^{-1}$ and $m_5^{-3} \Delta T_5^5 \sim - T^{-2}$ from (\ref{back5}). Equation (\ref{zetaeq}) then reads $T (d \zeta /dT) \sim {\cal S}$. Well before the collision, when the branes are moving slowly with respect both to each other and to the bulk, one expects the four-dimensional effective description to be accurate and hence ${\cal S}$ should be zero. The brane curvature is then directly related to the four-dimensional effective theory curvature perturbation $\zeta_4$, which is zero on long wavelengths (hence $A_4=0$). As the branes approach, their speed picks up and ${\cal S}$ rises to a value of order $B_4(V^2/L^2) (V/c)^2$. Finally, as the branes become very close, $\Delta T_5^5$ turns off and $\zeta$ is, from (\ref{zetaeq}), conserved once more. On long wavelengths, therefore, we obtain the following equation for the brane curvature as the collision approaches: up to a factor of order unity, we have \[ \zeta_+(0^-) \approx \zeta_-(0^-) \sim B_4 (V^2/L^2) (V/c)^2.
\label{zap} \] In the ekpyrotic and cyclic models $B_4$ is scale-invariant in the incoming state, and so the final-state spectrum of curvature perturbations on the brane is also scale-invariant. \chapter{Colliding branes in heterotic M-theory} \label{Mthchapter} \begin{flushright} \begin{minipage}{7.5cm} \small {\it \noindent What is it that breathes fire into the equations \\ and makes a universe for them to describe? } \begin{flushright} \noindent Hawking, A Brief History of Time. \end{flushright} \end{minipage} \end{flushright} \vspace{0.2cm} Our goal in this chapter is to derive a cosmological solution of the Ho{\v r}ava-Witten model with colliding branes, in which the five-dimensional geometry about the collision is that of a compactified Milne spacetime, and the Calabi-Yau volume at the collision is finite and nonzero. We construct this solution as a perturbation expansion in the rapidity of the brane collision, applying the formalism developed in Chapter \S\,\ref{5dchapter}. As this work is currently still in progress, we will present here only the leading terms in this expansion, corresponding to the scaling solution for the background geometry. \section{Heterotic M-theory} One of the most successful models relating M-theory to phenomenology is that of Ho{\v r}ava and Witten \cite{HW}, in which an eleven-dimensional spacetime is the product of a compact Calabi-Yau manifold with a five-dimensional spacetime consisting of two parallel 3-branes or domain walls, one with negative and the other with positive tension. As in the case of the Randall-Sundrum model, the dimension normal to the 3-branes is compactified on an $S^1/\Z_2$ orbifold. 
Performing a generalised dimensional reduction of the eleven-dimensional theory, such that the 4-form has non-vanishing flux on the Calabi-Yau internal space \cite{Lavrinenko}, one obtains a five-dimensional supergravity theory as shown in \cite{Lukas}\footnote{Note in particular that the truncation from eleven to five dimensions is \textit{consistent}, \iec a solution of the five-dimensional theory is an exact solution of the full eleven-dimensional theory.}. The five-dimensional action for the bulk metric and the scalar field $\phi$, representing the volume of the internal Calabi-Yau manifold, then takes the form \[ S = \int_{5} \sqrt{-g}\, [R - \half (\D \phi)^2 - 6\a^2 e^{-2 \phi}] -\sum_{\pm}\int_\pm \sqrt{g^\pm}\,\sigma^\pm e^{-\phi}, \] where the sum runs over the two branes, with induced metrics $g^\pm_{\mu\nu}$, and we have dropped the Gibbons-Hawking term. The constants $\sigma^\pm$ are given by $\sigma^\pm = \pm 12 \a$, where $\a$ is an arbitrary constant parameterising the amount of 4-form flux threading the Calabi-Yau. The exponential potential in the bulk action is the remnant of the 4-form field strength, and plays a role analogous to that of the bulk cosmological constant in the Randall-Sundrum model. In the above, and for the remainder of this chapter, we have adopted units in which the five-dimensional Planck mass is set to unity. A static solution of this model \cite{Lukas}, comprising two domain walls located at $y=\pm y_0$, is given by \bea \label{domainwall} \d s^2 &=& H(y)\eta_{\mu\nu}\dxdx + H^4(y)\d y^2, \\ e^\phi &=& H^3(y), \\ H(y) &=& 1+2\a|y+y_0| , \eea where, to create the second domain wall, we must additionally impose a $\Z_2$ reflection symmetry about $y=+y_0$. Recently, exact solutions of the Ho{\v r}ava-Witten model with colliding branes were found in \cite{gibbons}. An analysis of these solutions from a four-dimensional effective perspective was given in \cite{Koyama}. 
In these colliding-brane solutions, however, the volume of the internal Calabi-Yau manifold shrinks to zero at the collision. For the purposes of constructing an M-theory model of a big crunch/big bang transition, it seems preferable to seek solutions in which the volume of the Calabi-Yau manifold at the collision is nonzero. This way, the only singular behaviour at the collision is that of the five-dimensional spacetime geometry, which we will take to be Milne about the collision. \section{A cosmological solution with colliding branes} \subsection{Equations of motion} As in the case of the Randall-Sundrum model\footnote{See Section \S\,\ref{solnmethods}.}, cosmological symmetry on the branes permits the bulk line element to be written in the form \[ \label{goodoldmetric} \d s^2 = n^2(t,y)(-\d t^2 + t^2 \d y^2)+b^2(t,y)\d \vec{x}^2, \] where the $x^i$ coordinates span the flat, three-dimensional spatial worldvolume of the branes. Moreover, in this coordinate system, the branes are held fixed at $y=\pm y_0$. (Recall also that, physically, $y_0$ represents the rapidity of the brane collision occurring at $t=0$). As in Chapter \S\,\ref{5dchapter}, it is useful to introduce the variables $\w=y/y_0$ and $x=y_0 t$, along with the metric functions $\beta(x, \w) = 3\ln b$ and $\nu(x, \w) = \ln (nx)$. The junction conditions, evaluated at $\w=\pm 1$, are then \[ \label{Mjn} \nu' = \a\, e^{\nu-\phi}, \qquad \beta' = 3\a\, e^{\nu-\phi}, \qquad \phi' = 6\a\, e^{\nu-\phi}, \] where primes denote differentiation with respect to $\w$.
Evaluating the $\phi$, $G^0_0+G^5_5$, $G^5_5$, $G^0_0+G^5_5-(1/2)G^i_i$, and $G_{05}$ equations in the bulk, we obtain the following set of equations: \bea \label{Mphi} \phi''+\beta'\phi'+12\a^2 e^{2\nu-2\phi} &=& y_0^2\,(\ddot{\phi}+\dot{\beta}\dot{\phi}), \\ \label{MG00p55} \beta''+\beta'^2+6\a^2 e^{2\nu-2\phi} &=& y_0^2 \,(\ddot{\beta}+\dot{\beta}^2), \\ \label{MG55} \frac{1}{3}\,\beta'^2+\beta'\nu'-\frac{1}{4}\,\phi'^2+3\a^2 e^{2\nu-2\phi} &=& y_0^2\,(\ddot{\beta}+\frac{2}{3}\dot{\beta}^2 -\dot{\beta}\dot{\nu}+\frac{1}{4}\,\dot{\phi}^2), \\ \label{MG0055ii} \nu''-\frac{1}{3}\,\beta'^2+\frac{1}{4}\,\phi'^2-\a^2 e^{2\nu-2\phi} &=& y_0^2 \,(\ddot{\nu}-\frac{1}{3}\,\dot{\beta}^2+\frac{1}{4}\,\dot{\phi}^2), \\ \label{MG05} \dot{\beta}'+\frac{1}{3}\,\dot{\beta}\beta'-\dot{\nu}\beta'-\nu'\dot{\beta}+\half\,\dot{\phi}\phi' &=& 0, \eea where, throughout this chapter, we will use a dot as a shorthand notation for $x\D_x$. In the above, both the $G^5_5$ equation (\ref{MG55}) and the $G_{05}$ equation (\ref{MG05}) involve only single derivatives with respect to $\w$. Applying the junction conditions, we find that both left-hand sides vanish when evaluated on the branes. The $G_{05}$ equation is then trivially satisfied, while the $G^5_5$ equation yields the relation \[ \frac{b_{,xx}}{b}+\frac{b_{,x}^2}{b^2}-\frac{b_{,x}\,n_{,x}}{bn}+\frac{1}{12}\,\phi_{,x}^2 = 0, \] valid on both branes. Introducing the brane conformal time $\tau$, defined on either brane via $b\,\d \tau = n\, \d x$, this relation can be re-expressed as \[ \label{Mbraneeq} \frac{b_{,\tau\tau}}{b}+\frac{1}{12}\,\phi_{,\tau}^2=0. \] \subsection{Initial conditions} Through a suitable rescaling of coordinates, we can always arrange for the metric functions $n$ and $b$ to tend to unity as $t\tt 0$, so that the geometry about the collision is that of a Milne spacetime. Furthermore, we are seeking a solution in which the Calabi-Yau volume $e^\phi$ tends to a finite, nonzero value at the collision. 
Since a constant shift in $\phi$ is a symmetry of the system (given that $\a$ is arbitrary), without loss of generality we can set $e^\phi \tt 1$ as $t\tt 0$. To check that such a solution is possible, and to help ourselves later with fixing the initial conditions, it is useful to solve for the bulk geometry about the collision as a series expansion in $t$. Up to terms of order $t^3$, the solution corresponding to the Kaluza-Klein zero mode is: \bea n &=& 1 + \a\,(\sech{y_0}\sinh{y})\,t + \frac{\a^2}{8}\,\sech^2 y_0\,(9-\cosh{2y_0}-8\cosh{2y})\,t^2, \qquad\\ b &=& 1 + \a\,(\sech{y_0}\sinh{y})\,t+\frac{\a^2}{4}\,\sech^2 y_0 \,(3+\cosh{2y_0}-4\cosh{2y})\, t^2, \qquad\\ e^\phi &=& 1+6\a\,(\sech{y_0}\sinh{y})\,t-\frac{3\a^2}{2}\,\sech^2 y_0 \,(2-\cosh{2y_0}-\cosh{2y})\, t^2, \qquad \eea where we have used the junction conditions to fix the arbitrary constants arising in the integration of the bulk equations with respect to $y$. The brane conformal times are then given by \[ \tau_\pm = \int \frac{n_\pm}{b_\pm}\,\d x =x + O(x^3), \] in terms of which the brane scale factors $b_\pm$ are \[ \label{btau} b_\pm = 1 \pm \a\tanh{y_0}\,\Big(\frac{\tau_\pm}{y_0}\Big)-\frac{3}{2}\,\a^2 \tanh^2{y_0}\,\Big(\frac{\tau_\pm}{y_0}\Big)^2 + O\Big(\frac{\tau_\pm}{y_0}\Big)^3. \] \subsection{A conserved quantity} In the case of the Randall-Sundrum model, where there were only two time-dependent moduli, the $G^5_5$ equation evaluated on both branes provided sufficient information to determine both moduli. In the present setup, however, we have an extra time-dependent modulus due to the presence of the scalar field $\phi$. We therefore require a further equation to fix this. Introducing the variable $\chi=\phi-2\beta$, upon subtracting twice (\ref{MG00p55}) from (\ref{Mphi}) we find \[ \label{chiwave} (\chi'e^\beta)' = y_0^2 (\dot{\chi}e^\beta)\dot{\,}, \] which is simply the two-dimensional massless wave equation $\bo\chi=0$. 
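Two of the algebraic reductions above lend themselves to a symbolic check: the passage from the $G^5_5$ brane relation to (\ref{Mbraneeq}) under $b\,\d\tau = n\,\d x$, and the assembly of (\ref{Mphi})$\,-\,2\times$(\ref{MG00p55}) into the wave equation (\ref{chiwave}). A minimal sympy sketch of both (illustrative only):

```python
import sympy as sp

x, w, alpha, y0 = sp.symbols('x w alpha y_0')

# --- Check 1: the G^5_5 brane relation reduces to (Mbraneeq) ---
b, n, p = (sp.Function(s)(x) for s in ('b', 'n', 'phi'))

# On the brane d tau = (n/b) dx, so d/dtau = (b/n) d/dx
Dtau = lambda f: (b/n)*sp.diff(f, x)

conformal = Dtau(Dtau(b))/b + Dtau(p)**2/12
xform = (sp.diff(b, x, 2)/b + sp.diff(b, x)**2/b**2
         - sp.diff(b, x)*sp.diff(n, x)/(b*n) + sp.diff(p, x)**2/12)
check1 = sp.simplify(conformal - (b/n)**2*xform)

# --- Check 2: (Mphi) - 2 (MG00p55) assembles into the chi wave equation ---
P, B, N = (sp.Function(s)(x, w) for s in ('phi', 'beta', 'nu'))
prime = lambda f: sp.diff(f, w)       # prime = d/dw
dot = lambda f: x*sp.diff(f, x)       # dot = x d/dx
pot = sp.exp(2*N - 2*P)

eq_phi = (prime(prime(P)) + prime(B)*prime(P) + 12*alpha**2*pot
          - y0**2*(dot(dot(P)) + dot(B)*dot(P)))
eq_beta = (prime(prime(B)) + prime(B)**2 + 6*alpha**2*pot
           - y0**2*(dot(dot(B)) + dot(B)**2))

chi = P - 2*B
wave = sp.exp(-B)*(prime(prime(chi)*sp.exp(B)) - y0**2*dot(dot(chi)*sp.exp(B)))
check2 = sp.simplify(sp.expand(eq_phi - 2*eq_beta - wave))

print(check1, check2)  # prints 0 0
```

In both cases the potential terms cancel identically, so the checks are pure chain-rule algebra.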
Since the junction conditions imply $\chi'=0$ on the branes, the left-hand side of (\ref{chiwave}) vanishes upon integrating over $\w$. A second integration over $x$ then yields \[ \label{charge} \int_{-1}^{+1} \d \w \dot{\chi}e^\beta = \gamma, \] for some constant $\gamma$, which we can set to zero since our initial conditions are such that $\dot{\chi}e^\beta \tt 0$ as $x\tt 0$. Let us now consider solving (\ref{chiwave}) for $\chi$ as a perturbation expansion in $y_0^2$. Setting $\chi=\chi_0+y_0^2 \chi_1+O(y_0^4)$, and similarly for $\beta$, at zeroth order we have $(\chi_0'e^{\beta_0})'=0$. Integrating with respect to $\w$ introduces an arbitrary function of $x$ which we can immediately set to zero using the boundary condition on the branes, which, when evaluated to this order, reads $\chi_0'=0$. This tells us that $\chi_0'=0$ throughout the bulk; $\chi_0$ is then a function of $x$ only, and can be taken outside the integral in (\ref{charge}). Since $\gamma=0$, yet the integral of $e^\beta$ across the bulk cannot vanish, it follows that $\chi_0$ must be a constant. At order $y_0^2$, the right-hand side of (\ref{chiwave}) evaluates to $y_0^2\, (\dot{\chi}_0e^{\beta_0})\dot{\,}$, which vanishes. Evaluating the left-hand side, we have $(\chi_1'e^{\beta_0})'=0$, and hence, by a sequence of steps analogous to those above, we find that $\chi_1$ must also be constant. It is easy to see that this behaviour continues to all orders in $y_0^2$. We therefore deduce that $\chi = \phi-2\beta$ is exactly constant. Since both $\phi$ and $\beta$ tend to zero as $x\tt 0$, this constant must be zero, and so we find $\phi=2\beta$. The essence of this result is that the scaling solution, and consequently any expansion about it in powers of $y_0^2$, exists only when $\chi$ is in the Kaluza-Klein zero mode. This may be seen from (\ref{chiwave}), which reduces, in the limit where $x\tt 0$ and $\beta\tt 0$, to $\chi''=y_0^2\, \ddot{\chi}$.
For a perturbative expansion in $y_0^2$ to exist, the right-hand side of this equation must vanish at leading order. This is only the case, however, for the Kaluza-Klein zero mode: all the higher modes have a rapid oscillatory time dependence such that the right-hand side does contribute at leading order. This means that for the higher Kaluza-Klein modes a gradient expansion does not exist. Setting $\phi=2\beta$ from now on, returning to (\ref{Mbraneeq}) and recalling that $\beta=3\ln b$, we immediately obtain the equivalent of the Friedmann equation on the branes: \[ \Big(\frac{b_{,\tau}}{b}\Big)_{,\tau}+4\,\Big(\frac{b_{,\tau}}{b}\Big)^2=0. \] Integrating, the brane scale factors are given by \[ b_\pm = \bar{A}_\pm (\tau_\pm - c_\pm)^{1/4}. \] To fix the arbitrary constants $\bar{A}_\pm$ and $c_\pm$, we need only expand the above in powers of $\tau_\pm$ and compare with (\ref{btau}). We find \[ \label{Mb} b_\pm = (1\pm A \tau_\pm)^{1/4}, \] where $A = (4\a/y_0)\tanh{y_0}$. This equation determines the brane scale factors to all orders in $y_0^2$ in terms of the conformal time on each brane, and is one of our main results. (As a straightforward check, it is easy to confirm that the $O(\tau_\pm^2)$ terms in (\ref{btau}) are correctly reproduced). \subsection{The scaling solution} Having set up the necessary prerequisites, we are now in a position to solve the bulk equations of motion perturbatively in $y_0^2$. Setting \[ \beta = \beta_0+y_0^2 \beta_1 + O(y_0^4), \qquad \nu = \nu_0+y_0^2\nu_1+O(y_0^4), \] the scaling solution corresponds to the leading terms in this expansion, \iec to $\beta_0$ and $\nu_0$. To determine the $\w$-dependence of these functions, we must solve the bulk equations of motion at zeroth order in $y_0^2$. Evaluating the linear combination $(\ref{MG00p55})-3[(\ref{MG55})+(\ref{MG0055ii})]$, noting that at this order the right-hand sides all vanish automatically, we find \[ ((\beta'_0-3\nu'_0)e^{\beta_0})'=0.
\] Using the boundary condition $\beta'_0-3\nu'_0=0$ on the branes, we then obtain $\beta_0=3\nu_0+f(x)$, for some arbitrary function $f(x)$. Backsubstituting into (\ref{MG55}) and taking the square root then yields \[ \nu'_0=\a\,e^{-5\nu_0-2f}, \] where consistency with the junction conditions (\ref{Mjn}) forced us to take the positive root. Integrating a second time, and re-writing $e^f=B^{-5}$, we find \[ \label{wdep} e^{\nu_0} = B^2(x)h^{1/5}, \qquad e^{\beta_0} = B(x)h^{3/5}, \] where $h = 5 \a\w+C(x)$ and $C(x)$ is arbitrary. In the special case where $B$ and $C$ are both constant, we recover the exact static domain wall solution (\ref{domainwall}), up to a coordinate transformation. In general, however, these two moduli will be time dependent. Inverting the relation $b_\pm^5 = B^{5/3}(\pm 5\a+C)$ to re-express $B$ and $C$ in terms of the brane scale factors $b_\pm(x)$, we find \[ B^{5/3} = \frac{1}{10\a}\,(b_+^5-b_-^5), \qquad C=5\a\( \frac{b_+^5+b_-^5}{b_+^5-b_-^5}\). \] Furthermore, at zeroth order in $y_0^2$, the conformal times on both branes are equal, since \[ \d \tau = \frac{n}{b}\,\d x = e^{\nu_0-\beta_0/3}\,\frac{\d x}{x}=B^{5/3}(x)\frac{\d x}{x} \] is independent of $\w$. To this order then, (\ref{Mb}) reduces to \[ b_\pm = (1\pm 4\a \tau)^{1/4}, \] allowing us to express the moduli in terms of $\tau$ as \bea \label{Bmod} B^{5/3}(\tau) &=& \frac{1}{10\a}\,[(1+4\a\tau)^{5/4}-(1-4\a\tau)^{5/4}], \\\nn \\ \label{Cmod} C(\tau) &=& 5\a\left[\frac{(1+4\a\tau)^{5/4}+(1-4\a\tau)^{5/4}}{(1+4\a\tau)^{5/4}-(1-4\a\tau)^{5/4}}\right]. \eea The brane conformal time $\tau$ and the Milne time $x$ are then related by \[ \ln x = 10 \a \int [(1+4\a\tau)^{5/4}-(1-4\a\tau)^{5/4}]^{-1}\d\tau. \] As we do not know how to evaluate this integral analytically, it seems that the best approach is simply to adopt $\tau$ as our time coordinate\footnote{In support of this, note that the relation between $x$ and $\tau$ is monotonic. 
At small times $x = \tau+(\a^2/4)\tau^3+O(\tau^5)$.} in place of $x$. The complete scaling solution is then given by (\ref{wdep}), (\ref{Bmod}) and (\ref{Cmod}). \section{Discussion} In this chapter we have taken some first steps towards a cosmological solution of the Ho{\v r}ava-Witten model with colliding branes, in which the Calabi-Yau volume at the collision is finite and nonzero, and the five-dimensional spacetime geometry about the collision is Milne. Employing the techniques developed in Chapter \S\,\ref{5dchapter}, we have shown how to construct the scaling solution for the background geometry. All the relevant tools are now in place to compute the corrections at $O(y_0^2)$. Only two steps are required: firstly, integrating the bulk equations of motion at $O(y_0^2)$ to find the $\w$-dependence of the solution; and secondly, fixing the time dependence using our explicit knowledge of the scale factors on the branes, according to (\ref{Mb}). With this accomplished, it is then feasible to consider solving for cosmological perturbations about this background, along the lines of the work presented in Chapter \S\,\ref{5dchapter}. An interesting feature of our solution is the vanishing of the scale factor (and also the volume of the Calabi-Yau manifold) on the negative-tension brane at $\tau=1/(4\a)$. In more general models incorporating matter on the negative-tension brane, four-dimensional effective theory arguments suggest that under similar circumstances the scale factor on the negative-tension brane, rather than shrinking to zero, would instead undergo a bounce at some small nonzero value. Since this behaviour persists even in the limit of only infinitesimal amounts of matter on the negative-tension brane, one might consider implementing a bounce in the present model by replacing the factors of $(1-4\a\tau)$ in (\ref{Bmod}) and (\ref{Cmod}) with $|1-4\a\tau|$.
The scale factor on the positive-tension brane, along with the bulk time $x$, would then continue smoothly across the bounce and out to late times as $\tau\tt \infty$. The implications of this continuation remain to be explored. \chapter{Conclusions} \def\baselinestretch{1.66} \begin{flushright} \begin{minipage}{10cm} \small {\it \noindent All of this danced up and down, like a company of gnats, each separate, but all marvellously controlled in an invisible elastic net - danced up and down in Lily's mind... } \begin{flushright} \noindent Virginia Woolf, To the Lighthouse. \end{flushright} \end{minipage} \end{flushright} \vspace{0.2cm} In this thesis we have explored the impact of higher-dimensional physics in the vicinity of the cosmic singularity. Taking a simple Randall-Sundrum braneworld with colliding branes as our model of a big crunch/big bang universe, we have explicitly calculated the full five-dimensional gravitational dynamics, both for the background solution, and for cosmological perturbations. Our solution method, a perturbative expansion in $(V/c)^2$ (or equivalently, in the collision rapidity), provides a powerful analytical tool enabling us to assess the validity of the low energy four-dimensional effective theory. While this four-dimensional effective theory holds good at leading order in our expansion of the bulk geometry, at order $(V/c)^2$ new effects appear that cannot be accounted for within any local four-dimensional effective theory. Our principal example is the mixing of four-dimensional effective perturbation modes in the transition from early to late times (and vice versa). Despite the small size of this effect in the case of a highly non-relativistic collision, mode-mixing is nonetheless an important piece of the puzzle concerning the origin of the primordial density perturbations.
If these primordial fluctuations were indeed generated via the ekpyrotic mechanism in a prior contracting phase of the universe, then mode-mixing suggests a resolution to the problem of matching the scale-invariant spectrum of growing mode time-delay perturbations in the collapsing phase, to the growing mode curvature perturbations post-bang. In such a scenario, higher-dimensional physics, rather than being insignificant, is instead responsible for all the structure we see in the universe today. The failure of the four-dimensional effective theory, however, reminds us that in order to address this issue satisfactorily, a fully five-dimensional description of the generation of curvature perturbations on the brane is required. Initiating this programme, we have shown how brane curvature is generated through the action of an additional five-dimensional bulk stress pulling the branes together. Our understanding is sufficient to provide a quantitative estimate of the final brane curvature prior to the collision, but to progress further it will be necessary to construct a specific five-dimensional model of the additional bulk stress. Work along these lines is already underway. Another direction of active research lies in the adaptation of our solution methods to other braneworld scenarios. Of particular interest are the cosmological colliding-brane solutions of the Ho{\v r}ava-Witten model. Here, we have constructed the leading terms of a background solution in which the five-dimensional geometry about the collision is Milne, and the Calabi-Yau volume tends to unity at the collision. Evaluating the corrections to the background at subleading order, as well as solving for cosmological perturbations, are both the subject of current research. Our methods should also extend to scenarios in which matter is present on the brane. 
A more challenging extension would be to probe the evolution of a black string \cite{BS} in an expanding cosmological background consisting of two separating branes. As the branes separate, the gradual stretching of the black string will eventually trigger the onset of the Gregory-Laflamme instability \cite{GL}. We conclude in the only fashion possible -- with a call to arms. Higher-dimensional physics, beyond the four-dimensional effective theory, necessarily holds the key to the decisive experimental signature on which braneworld cosmologies will one day stand or fall. \chapter{Introduction} \begin{flushright} \begin{minipage}{7cm} \small {\it \noindent There is no excellent beauty that hath not \\ some strangeness in the proportion. } \begin{flushright} \noindent Francis Bacon \end{flushright} \end{minipage} \end{flushright} \vspace{0.3cm} One of the most striking implications of string theory and M-theory is that there are extra spatial dimensions whose size and shape determine the particle spectrum and couplings of the low energy world. If the extra dimensions are compact and of fixed size $R$, their existence results in a tower of Kaluza-Klein massive modes whose mass scale is set by $R^{-1}$. Unfortunately, this prediction is hard to test if the only energy scales accessible to experiment are much lower than $R^{-1}$. At low energies, the massive modes decouple from the low energy effective theory and are, for all practical purposes, invisible. Therefore, we have no means of checking whether the four-dimensional effective theory observed to fit particle physics experiments is actually the outcome of a simpler higher-dimensional theory. The one situation where the extra dimensions seem bound to reveal themselves is in cosmology. At the big bang, the four-dimensional effective theory (Einstein gravity or its string-theoretic generalisation) breaks down, indicating that it must be replaced by an improved description.
Already, there are suggestions of improved behaviour in higher-dimensional string theory and M-theory. If matter is localised on two branes bounding a higher-dimensional bulk, the matter density remains finite at a brane collision even though this moment is, from the perspective of the four-dimensional effective theory, the big bang singularity \cite{Cyclicevo,Ekpyrotic,Seiberg}. Likewise, the equations of motion for fundamental strings are actually regular at the collision in string theory, in the relevant background solutions \cite{perry, Gustavo}. We will adopt this scenario of colliding branes as our model of the big bang. Our focus, however, will not be the initial singularity itself, but rather, the dynamics of higher-dimensional gravity as the universe emerges from a brane collision. The model we study -- the Randall-Sundrum model \cite{RSI} -- is the simplest possible model of braneworld gravity, consisting of two empty $\Z_2$-branes (or orbifold planes) of opposite tension, separated by a five-dimensional bulk with a negative cosmological constant. We develop a solution method for the bulk geometry, for both the background and cosmological perturbations, in the form of a perturbative expansion in $(V/c)^2$, where $V$ is the speed of the brane collision and $c$ is the speed of light \cite{long}. Our solution allows us to track the evolution of the background and cosmological perturbations from very early times right out to very late times, providing a benchmark against which the predictions of the four-dimensional effective theory can be tested. We will find that the four-dimensional effective theory is accurate in two limits: that of early times, for which the brane separation is significantly less than the anti-de Sitter (AdS) radius $L$; and that of late times, for which the brane separation is significantly greater than $L$. 
In the former limit, the brane tensions and the warping of the bulk become negligible, and a simple Kaluza-Klein description consisting of four-dimensional gravity and a scalar field applies (the gauge field zero mode having been eliminated by the $\Z_2$ projections). In the latter limit, however, in which the branes are both widely separated and slowly-moving, the physics is qualitatively very different. Rather than being uniform across the extra dimension, the low energy gravitational zero modes are now localised around the positive-tension brane, as shown by Randall and Sundrum \cite{RSII}. Nevertheless, the four-dimensional effective theory describing this limit is identical, consisting of Einstein gravity and a scalar field, the radion, parameterising the separation of the branes. Surprisingly, however, our five-dimensional solution reveals that in the transition between these two limits -- from Kaluza-Klein to Randall-Sundrum gravity -- the four-dimensional effective theory fails at first nontrivial order in $(V/c)^2$. In effect, the separation of the branes at finite velocity serves to excite massive bulk modes, which curb the accuracy of the four-dimensional effective theory until their decay in the late-time asymptotic region. This process generates a striking signature impossible to forge within any local four-dimensional effective theory; namely, the mixing of four-dimensional cosmological perturbation modes between early and late times. This mode-mixing is conveniently described in four-dimensional longitudinal gauge, in which the sole physical degree of freedom associated with adiabatic\footnote{\ie perturbations which do not locally alter the matter history.} scalar perturbations is encoded in the Newtonian potential $\Phi_4$. 
For sufficiently long wavelengths such that $|k t_4| \ll 1$, the four-dimensional effective theory predicts that \[ \label{4dpred} \Phi_4=A_4-\frac{B_4}{t_4^2}, \] where $t_4$ is four-dimensional conformal time, and $A_4$ and $B_4$ are constants parameterising the amplitudes of the two perturbation modes. The first mode, with amplitude $A_4$, represents a curvature perturbation on constant energy density or comoving spatial slices, while the second mode, with amplitude $B_4$, corresponds to a local variation in the time elapsed since the big bang. (In an expanding universe, the curvature perturbation comes to dominate over the time-delay perturbation at late times, and hence is often referred to as the `growing' mode. In a collapsing universe, however, the two roles are reversed and the growing mode corresponds instead to the time-delay perturbation). Now, if the dynamics were truly governed by a local four-dimensional effective theory, the perturbation amplitudes $A_4$ and $B_4$ would be constants of the motion. From our five-dimensional solution, however, we can compute the {\it actual} asymptotic behaviour of the four-dimensional effective Newtonian potential, at early and at late times, by evaluating the five-dimensional metric perturbations on the positive-tension brane\footnote{Explicitly, the four-dimensional effective metric is related to the metric on the positive-tension brane via a conformal transformation, and so the anti-conformal part of the four-dimensional effective metric perturbation - namely, $\Phi_4$ - is equal to the anti-conformal part of the induced metric perturbation on the brane.}. This allows us to identify the four-dimensional effective mode amplitudes $A_4$ and $B_4$ in terms of the underlying {\it five-dimensional} mode amplitudes, which are truly constant. 
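As an aside, the prediction (\ref{4dpred}) can be checked against the standard long-wavelength equation for the Newtonian potential, $\Phi'' + 3(1+c_s^2)\mathcal{H}\Phi' + [2\mathcal{H}' + (1+3c_s^2)\mathcal{H}^2]\Phi = 0$. The sympy sketch below assumes a massless-scalar dominated universe, so that $w=c_s^2=1$ and $\mathcal{H}=1/(2t_4)$ in conformal time; this background is assumed here for the purposes of illustration:

```python
import sympy as sp

t4 = sp.symbols('t_4', positive=True)
A4, B4 = sp.symbols('A_4 B_4')

# Conformal Hubble rate for a massless-scalar (w = c_s^2 = 1) universe,
# with a proportional to t_4^(1/2) in conformal time (assumed background)
H = 1/(2*t4)

# Long-wavelength (k -> 0) Newtonian potential equation with c_s^2 = 1:
#   Phi'' + 6 H Phi' + (2 H' + 4 H^2) Phi = 0
Phi = A4 - B4/t4**2
residual = sp.simplify(sp.diff(Phi, t4, 2) + 6*H*sp.diff(Phi, t4)
                       + (2*sp.diff(H, t4) + 4*H**2)*Phi)
print(residual)  # prints 0
```

The mass term $2\mathcal{H}'+4\mathcal{H}^2$ vanishes identically for this background, leaving exactly the two modes $t_4^0$ and $t_4^{-2}$ of (\ref{4dpred}).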
We find that, while the four-dimensional effective theory prediction (\ref{4dpred}) does indeed hold in the limit of both early and late times, the four-dimensional effective mode amplitudes $A_4$ and $B_4$ are mixed in the transition from early to late times. (For example, if the system starts out purely in the time-delay mode at small $t_4$, then one ends up in a mixture of both the time-delay and the curvature perturbation modes as $t_4\tt\infty$). This mixing first occurs at order $(V/c)^2$, reflecting the fact that the four-dimensional effective description holds good at leading (zeroth) order. Equivalently, parameterising the mixing by a matrix relating the dimensionless mode amplitudes $A_4$ and $B_4(L^2/V^2)$ at early times to their counterparts at late times, we find that this matrix differs from the identity at order $(V/c)^2$. Although this mixing is very small in the limit of a highly non-relativistic brane collision, it is nonetheless of great significance when considering the origin of the primordial density perturbations. Inflation provides one possible explanation for the origin of these primordial fluctuations by postulating an early phase of quasi-de Sitter expansion, in which quantum fluctuations of the inflaton field are exponentially stretched and amplified into a classical, scale-invariant pattern of large-scale curvature perturbations. In a scenario such as the present, however, in which the universe undergoes a big crunch to big bang transition described by the collision of two branes, an alternative mechanism is feasible. In the ekpyrotic model \cite{Ekpyrotic}, and its cyclic extension \cite{Steinhardt:2002ih, Cyclicevo}, the observed scale-invariant perturbations first arise as ripples on the two branes, produced as they attract one another over cosmological time scales. These ripples are later imprinted on the hot radiation generated as the branes collide.
So far, this process has only been described from a four-dimensional effective point of view, in which the perturbations are generated during a pre-big bang phase in which the four-dimensional Einstein-frame scale factor is contracting \cite{Khoury, Boyle, Gratton2, models}. A key difficulty faced by the ekpyrotic and cyclic scenarios is that a scale-invariant spectrum of perturbations is generated only in the time-delay mode parameterised by $B_4$, which is the growing mode in a contracting universe. Naively, this mode is then orthogonal to the growing mode in the present expanding phase of the universe, which is the curvature perturbation mode parameterised by $A_4$. (In an expanding universe the time-delay perturbation mode $B_4$ decays rapidly to zero). In order to seed the formation of structure in the present universe, therefore, it is essential that some component of the growing mode time-delay perturbation in the collapsing phase be transmitted, with nonzero amplitude, to the growing mode curvature perturbation post-bang\footnote{This paradox is a recurring theme in the literature, see \eg \cite{gmode4, gmode1, gmode2, gmode3, gmode5, gmode6, gmode7, gmode8, jch2, gmode10, gmode11, gmode12, gmode13, Durrer, Copeland, Bozza&Veneziano}.}. Clearly, our discovery that the four-dimensional perturbation modes mix in the full five-dimensional setup sheds new light on this problem. The most simple-minded resolution would be for the ekpyrotic mechanism to generate a scale-invariant spectrum in $B_4$ long before the collision (when the branes are relatively far apart and the four-dimensional effective theory is still valid), with a piece of this subsequently being mixed into the curvature perturbation $A_4$ shortly before the collision.
Yet the failure of the four-dimensional effective theory at order $(V/c)^2$, coupled with the likely five-dimensional nature of any matching rule used to propagate perturbations across the singularity, suggests that instead a reformulation of the problem in terms of purely five-dimensional quantities is necessary. To this end, we define the curvature perturbation on the brane directly in five dimensions, showing how it differs from the curvature perturbation in the four-dimensional effective theory at order $(V/c)^2$. We then proceed to analyse the generation of brane curvature through the action of an additional five-dimensional bulk stress (over and above the negative bulk cosmological constant) serving to pull the branes together. Under the assumption that this additional bulk stress is small, a quantitative estimate of the final brane curvature perturbation prior to the collision can be obtained, although further progress is dependent upon the introduction of a specific five-dimensional model for the additional bulk stress. Finally, it is worth bearing in mind that there is good hope of experimentally distinguishing between inflation and the ekpyrotic mechanism over the coming decades through their very different predictions for the long-wavelength spectrum of primordial gravitational waves \cite{Boyleinflation}. In addition to the cosmological ramifications of the work in this thesis, another line of enquiry being actively pursued is the application of our methods to more sophisticated braneworlds stemming from fundamental theory, such as the Ho{\v r}ava-Witten model of heterotic M-theory. Our efforts to date form the basis of the concluding chapter of the present work.
\begin{center} *** \end{center} The plan of this thesis is as follows: we begin in Chapter \S\,\ref{earlyunivchapter} with an introduction to the early universe, encompassing cosmological perturbation theory and the generation of the primordial density perturbations, via both inflation and the ekpyrotic mechanism. In Chapter \S\,\ref{branegravitychapter}, we review the Randall-Sundrum model and braneworld cosmology, before proceeding to introduce the low energy four-dimensional effective theory. The subsequent chapters present original research material: in Chapter \S\,\ref{confsymmchapter}, we investigate the role of conformal invariance in the braneworld construction, showing how the form of the effective action up to quadratic order in derivatives is fully constrained by this symmetry \cite{Conf_sym}. We also consider how to improve the effective action for a pair of branes of opposite tension through application of the AdS/CFT correspondence. Chapter \S\,\ref{5dchapter} forms the heart of the present work. In this chapter, we present our solution methods for the bulk geometry, and apply them to calculate the behaviour of both the background and perturbations in a big crunch/big bang cosmology \cite{long}. We highlight the failure of the four-dimensional effective theory, and compute explicitly the mixing of four-dimensional effective perturbation modes. In Chapter \S\,\ref{zetachapter}, we revisit the generation of curvature perturbations from a five-dimensional perspective, considering the action of an additional five-dimensional bulk stress. Finally, in Chapter \S\,\ref{Mthchapter}, we present work in progress seeking a cosmological solution of the Ho{\v r}ava-Witten model with colliding branes, in which the five-dimensional geometry about the collision is that of a compactified Milne spacetime, and the Calabi-Yau volume at the collision is finite and nonzero. 
\chapter{Detailed results} \label{detailedresults} \begin{flushright} \begin{minipage}{5.7cm} \small \noindent {\it By the pricking of my thumbs, \\ Something wicked this way comes.} \begin{flushright} \noindent MacBeth, Act IV. \end{flushright} \end{minipage} \end{flushright} \vspace{0.2cm} In this appendix, we list the results of detailed calculations of the background geometry and cosmological perturbations in a big crunch/big bang universe, obtained using a variety of methods. For clarity, the AdS radius $L$ has been set to unity throughout. (To restore $L$, simply replace $t \tt t/L$ and $k \tt kL$). \section{Polynomial expansion for background} \label{appB} Using the expansion in Dirichlet/Neumann polynomials presented in Section \S\,\ref{polysection} to solve for the background geometry, we find {\allowdisplaybreaks \bea N_0 &=& {1\over t}-{1\over 2}\,t y_0^2+{1\over 24}\,t \(8-9 \,t^2\) y_0^4-{1\over 720}\, t \(136+900 \,t^2+375 \,t^4\) y_0^6 \nonumber \\ && +{1\over 40320}\,t \(3968+354816 \,t^2-348544 \,t^4-36015 \,t^6\) y_0^8+O(y_0^{10}) \qquad\\ N_3 &=& -\frac{1}{6} + \left(\frac{5}{72} - 2 \,t^2 \right)y_0^2 - {1\over 2160}\,\( 61 - 20880 \,t^2 + 19440 \,t^4 \) y_0^4 \nonumber \\ && + \left( \frac{277}{24192} - \frac{743 \,t^2}{20} + \frac{677 \,t^4}{6} - \frac{101 \,t^6}{3}\right)y_0^6+O(y_0^{8}) \\ N_4 &=& \frac{3}{2}\, t^3 y_0^2 + \frac{1}{4}\,t^3\( -28 + 33 \,t^2\) y_0^4 \nonumber \\ && + \frac{1}{80}\,t^3 \( 1984 - 6776 \,t^2 + 2715 \,t^4 \) y_0^6 +O(y_0^8) \\ N_5 &=& -\frac{1}{120} + \frac{1}{1800}\,\( 7 - 1800 \,t^2 - 540 \,t^4\) y_0^2 \nonumber \\ && - \frac{1}{201600}\,\( 323 - 990528 \,t^2 + 2207520 \,t^4 + 362880 \,t^6 \) y_0^4+O(y_0^6) \qquad\\ N_6 &=& t^3 y_0^2 + \frac{1}{30}\,t^3 \( -142 + 371 \,t^2 \) y_0^4+O(y_0^6) \\ N_7 &=& -\frac{1}{5040}- \frac{1}{94080}\,\( -9 + 20384 \,t^2 + 23520 \,t^4\) y_0^2+O(y_0^4) \\ N_8 &=& \frac{3}{10}\,t^3 y_0^2+O(y_0^4) \\ N_9 &=& \frac{1}{362880} +O(y_0^2) \\ N_{10} &=& O(y_0^2) \eea} and \bea q_0 &=& 1 
- \frac{3}{2}\,t^2 y_0^2 + \left( t^2 - \frac{7\,t^4}{8} \right) y_0^4 + \left( \frac{-17\,t^2}{30} + \frac{17\,t^4}{12} - \frac{55\,t^6}{48} \right) y_0^6\qquad \qquad \nonumber \\ && + \left( \frac{31\,t^2}{105} - \frac{9\,t^4}{5} + \frac{233\,t^6}{90} - \frac{245\,t^8}{128} \right) y_0^8 + O(y_0^{10}) \qquad \\ q_3 &=&-2\,t^3 y_0^2 + \left( \frac{29\, t^3}{3} - 8\, t^5 \right) y_0^4 \nonumber \\ && - \left( \frac{743\, t^3}{20} - \frac{322\, t^5}{3} + 27\, t^7 \right) y_0^6 +O(y_0^8) \\ q_4 &=& \frac{1}{2}\,t^4 y_0^2 + \left( \frac{-5\,t^4}{3} + \frac{9\,t^6}{4} \right) y_0^4+O(y_0^6) \\ q_5&=& - t^3 y_0^2 + \left( \frac{737\,t^3}{150} - \frac{58\,t^5}{5} \right) y_0^4+O(y_0^6) \\ q_6 &=& \frac{1}{3}\,t^4 y_0^2+O(y_0^4) \\ q_7 &=& -\frac{13}{60}\,t^3 y_0^2+O(y_0^4) \\ q_8 &=& O(y_0^2) \\ q_9 &=& O(y_0^2). \eea This solution explicitly satisfies all the Einstein equations up to $O(y_0^{10})$. \section{Polynomial expansion for perturbations} \label{appC} \subsection{All wavelengths} Using the Dirichlet/Neumann polynomial expansion to solve for the perturbations, the solution may be expressed in terms of the original longitudinal gauge variables as \[ \Phi_L = \mathcal{P}_\Phi^{(0)}(y,t)F^{(0)}(t)+ \mathcal{P}_\Phi^{(1)}(y,t)F^{(1)}(t), \] where \[ F^{(n)}(t) = \bar{A} J_n(kt)+\bar{B} Y_n(kt), \] for $n=0,\,1$; here $\gamma = 0.577\dots$, appearing in $\bar{A}$ below, is the Euler-Mascheroni constant. The constants $\bar{A}$ and $\bar{B}$ are arbitrary functions of $k$. In order to be consistent with the series expansion in $t$ presented in Section \S\,\ref{seriessoln}, we must set \bea \bar{A} &=& 12\,A+2\,B\,k^2\,(\ln{2}-\gamma ) - \frac{9\,B\,y_0^2}{2} + \frac{233\,B\,y_0^4}{45}+O(y_0^6), \\ \bar{B} &=& B\, k^2\,\pi+O(y_0^6).
\eea The polynomials $\mathcal{P}_\Phi^{(n)}$ are then given (for all $k$ and $t$) by \bea \mathcal{P}_\Phi^{(0)}(t,y) &= & -\frac{1}{6} + \frac{1}{3}\,t y + \frac{1}{12}\,t^2 \left(-2 y^2 + y_0^2 \right) + \frac{1}{36}\,t y \left(11 y^2 + 3 (-11 + 3 t^2) y_0^2 \right) \nonumber \\ && +\frac{t^2}{2160}\, \left(-525 y^4 + 90 (19 - 5t^2 ) y^2 y_0^2 + (511 - 180 t^2 + 45 k^2 t^4 ) y_0^4 \right) \nonumber \\ && - \frac{t\,y}{2160}\,\Bigg(3\(-92 + (9 + 4 k^2) t^2 \) y^4 \nonumber \\ && + 30 \(92 - (219 + 4 k^2) t^2 + k^2 t^4 \) y^2 y_0^2 \nonumber \\ && - \big(6900 - (20087 + 300 k^2) t^2 + 90\, (13 + k^2)\, t^4 - 90\, k^2\, t^6 \big)\, y_0^4\Bigg) \nonumber \\ && +O(y_0^6),\nonumber \\ \eea and by \bea \mathcal{P}_\Phi^{(1)}(t,y) &=& \frac{1}{k t}\,\Bigg[\frac{1}{2} - \frac{t y}{2} + \frac{t^2}{12}\left(3 y^2 + (-3 + k^2 t^2)\, y_0^2 \right) \nonumber \\ && -\frac{t y}{36} \left( (3 + k^2 t^2 ) \, y^2 + 3 \left(-3 + (3 - k^2 )\, t^2 + 2 k^2 t^4 \right) y_0^2 \right) \nonumber \\ &&+\frac{t^2}{2160}\,\Bigg(75 \, (6 + k^2 t^2 ) \, y^4 + 90 \left(-12 + 3\, (2 - k^2)\, t^2 + 2 k^2 t^4 \right) y^2 y_0^2 \nonumber \\ && + \left(718 + k^2 t^2 \,(-101 + 225 t^2) \right) y_0^4 \Bigg) \nonumber \\ && -\frac{t y}{2160}\, \Bigg(3 \left(3 + 2 \,(-9 + 31 k^2)\, t^2 + 2 k^2 t^4 \right) y^4 \nonumber \\ && +30 \left(-3 + (219 - 62 k^2) \, t^2 + 16 k^2 t^4 \right) y^2 y_0^2 \nonumber \\ && + \left(225 -(20104 - 4650 k^2)\, t^2 + (1215 - 1822 k^2) \, t^4 + 765 k^2 t^6 \right) y_0^4 \Bigg) \Bigg]\nonumber \\ && +O(y_0^6) . \eea Since the $F^{(n)}$ are of zeroth order in $y_0$, the solution for $\Phi_L$ to a given order less than $O(y_0^6)$ is found simply by truncating the polynomials above. (Should they be needed, results up to $O(y_0^{14})$ can in addition be found at \cite{Website}). 
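The time dependence $F^{(n)}(t)=\bar{A}\,J_n(kt)+\bar{B}\,Y_n(kt)$ is straightforward to evaluate numerically. The following is a minimal Python sketch using scipy; the parameter values are illustrative, and only the leading terms of the polynomials $\mathcal{P}_\Phi^{(n)}$ are retained:

```python
import numpy as np
from scipy.special import jv, yv  # Bessel functions J_n and Y_n

GAMMA = np.euler_gamma  # Euler-Mascheroni constant, 0.577...

def F(n, t, k, A, B, y0):
    """F^(n)(t) = Abar J_n(kt) + Bbar Y_n(kt), with Abar and Bbar fixed
    (through O(y0^4)) by matching to the small-t series solution."""
    Abar = 12*A + 2*B*k**2*(np.log(2) - GAMMA) - 9*B*y0**2/2 + 233*B*y0**4/45
    Bbar = B*k**2*np.pi
    return Abar*jv(n, k*t) + Bbar*yv(n, k*t)

def Phi_L(t, y, k, A, B, y0):
    """Longitudinal-gauge Phi_L, keeping only the leading terms of the
    polynomials P_Phi^(0) and P_Phi^(1) for illustration."""
    P0 = -1/6 + t*y/3           # leading terms of P_Phi^(0)
    P1 = (1/2 - t*y/2)/(k*t)    # leading terms of P_Phi^(1)
    return P0*F(0, t, k, A, B, y0) + P1*F(1, t, k, A, B, y0)

print(Phi_L(t=0.1, y=0.05, k=1.0, A=1.0, B=0.5, y0=0.1))
```

Higher accuracy is obtained simply by appending further terms of the truncated polynomials listed above.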
In a similar fashion we may express the solution for $\Psi_L$ as \[ \Psi_L = \mathcal{P}_\Psi^{(0)}(y,t)F^{(0)}(t)+ \mathcal{P}_\Psi^{(1)}(y,t)F^{(1)}(t), \] where $F^{(n)}$ is defined as above and \bea \mathcal{P}_\Psi^{(0)}(t,y) &=& \frac{1}{6} - \frac{t y}{3} + \frac{t^2}{6}\, (y^2 + y_0^2 ) + \frac{t y}{36} \left(-2 y^2 + 3 \,(2 - 3 t^2 )\, y_0^2 \right) \nonumber \\ && + \frac{t^2}{2160}\, \left(120 y^4 + 450 \,(-2 + t^2 )\, y^2 y_0^2 + (644 + 450 t^2 - 45 k^2 t^4 ) \, y_0^4 \right) \nonumber \\ && + \frac{t y}{2160} \Bigg(3 \left(-2 + (9 + 4 k^2)\, t^2 \right) y^4 + 30 \left(2 + 4 \,(9 - k^2) \,t^2 + k^2 t^4 \right) y^2 y_0^2 \nonumber \\ && - \left(150 + (2863 - 300 k^2 ) \, t^2 + 90 \, (13 + k^2) \, t^4 - 90 k^2 t^6 \right) y_0^4 \Bigg)\nonumber \\&& +O(y_0^6), \eea \bea \mathcal{P}_\Psi^{(1)}(t,y) &=& \frac{1}{k t}\Bigg[ \frac{t y}{2} - \frac{t^2}{12} \left(3 y^2 + (3 + k^2 t^2)\, y_0^2 \right) \nonumber \\ && +\frac{t y}{36} \left( (3 + k^2 t^2 )\, y^2 + 3 \left(-3 + (3 - k^2 )\, t^2 + 2 k^2 t^4 \right) y_0^2 \right) \nonumber \\ &&- \frac{t^2}{2160}\,\Bigg(15\, (12 + 5 k^2 t^2 ) \, y^4 - 90 \left(6 - 3\,(2 - k^2)\, t^2 - 2 k^2 t^4 \right) y^2 y_0^2 \nonumber \\ && -\left(752 - (540 - 101 k^2) \, t^2 - 360 k^2 t^4 \right) y_0^4 \Bigg) \nonumber \\ && +\frac{t y}{2160}\,\Bigg( \left(9 + 6 \,(-9 + k^2 )\, t^2 + 6 k^2 t^4 \right) y^4 \nonumber \\ && -30 \left(3 + (33 + 2 k^2 )\, t^2 - 7 k^2 t^4 \right) y^2 y_0^2 \nonumber \\ && + \left(225 + 2 \,(1288 + 75 k^2 )\, t^2 + (1215 - 1012 k^2 )\, t^4 + 765 k^2 t^6 \right) y_0^4 \Bigg) \Bigg]\nonumber \\ && +O(y_0^6) . 
\nonumber \\ \eea Finally, writing \[ W_L = \mathcal{P}_W^{(0)}(y,t)F^{(0)}(t)+ \mathcal{P}_W^{(1)}(y,t)F^{(1)}(t), \] we find \bea \mathcal{P}_W^{(0)}(t,y) &=&-\frac{1}{60}\,t^2\, (y^2 - y_0^2)\, \Big(-30 + 30 t y - 25 \left(y^2 + (-5 + 3 t^2)\, y_0^2 \right) \nonumber \\ && +t y \left(21 y^2 +(-149 + 75 t^2)\, y_0^2 \right) \Big)+O(y_0^6), \\ \nonumber \\ \mathcal{P}_W^{(1)}(t,y) &=&-\frac{1}{60 k}\,t^2\, (y^2 - y_0^2)\,\Big(30 y - 5 k^2 t \left(2 y^2 + (-10 + 3 t^2)\, y_0^2 \right) \nonumber \\ && + y\left( (12 + 11 k^2 t^2)\, y^2 + \left(-38 + (60 - 69 k^2)\, t^2 + 15 k^2 t^4 \right)y_0^2\right)\Big) \nonumber \\ && +O(y_0^6) .\nonumber \\ \eea \subsection{Long wavelengths} On long wavelengths, $F^{(n)}$ reduces to \bea F^{(0)}(t) &=& 12\,A - \frac{9\,B\,y_0^2}{2} + \frac{233\,B\,y_0^4}{45} +O(k^2)+O(y_0^6), \\ F^{(1)}(t) &=& \left( 6At - \left( \frac{2}{t} + \frac{9t y_0^2}{4} - \frac{233t y_0^4}{90}\right)B \right)k +O(k^2)+O(y_0^6).\qquad \eea For convenience, we list below the metric perturbations truncated at $O(k^2)$. 
\bea \Phi_L &=& \big(A - \frac{B}{t^2}\big) + \big( \frac{B y}{t} + A t y \big) + \frac{1}{8}\left( B ( y_0^2 - 4 y^2 ) - 4 A t^2 ( y_0^2 + y^2 ) \right) \nonumber \\ && + \frac{y}{24 t}\big( B ( 3 y_0^2 ( -4 + t^2 ) + 4 y^2 ) + 4 A t^2 ( 3 y_0^2 ( -19 + 3 t^2 ) + 19 y^2 \big) \nonumber \\ && + \Big( \frac{1}{6} A t^2 ( y_0^4 ( 29 - 6 t^2 ) + 3 y_0^2 ( 13 - 2 t^2 ) y^2 - 10 y^4 )\nonumber \\ && - \frac{1}{240}(B ( y_0^4 ( 56 - 45 t^2 ) + 15 y_0^2 ( -16 + 5 t^2 ) y^2 + 100 y^4 ) ) \Big) \nonumber \\ && + \frac{y}{240 t} \big( 2 A t^2 ( 5 y_0^4 ( 905 - 1338 t^2 + 75 t^4 ) + 10 y_0^2 ( -181 + 219 t^2 ) y^2 + 181 y^4 ) \nonumber \\ && + B ( y_0^4 ( 50 - 3509 t^2 + 135 t^4 ) + 5 y_0^2 ( -4 + 235 t^2 ) y^2 + ( 2 - 12 t^2 ) y^4 )\big) \nonumber \\ && + O(y_0^6), \\ \Psi_L &=& 2 A - \frac{y}{t}( B + A t^2 ) + \frac{1}{4}\big( 2 A t^2 ( y_0^2 + y^2 ) + B ( -y_0^2 + 2 y^2 ) \big) \nonumber \\ && - \frac{y}{24 t} \big( 4 A t^2 ( 3 y_0^2 ( -1 + 3 t^2 ) + y^2 ) + B ( 3 y_0^2 ( -4 + t^2 ) + 4 y^2 ) \big) \nonumber \\ && + \frac{1}{48}\Big( 8 A t^2 ( 2 y_0^4 ( 17 + 3 t^2 ) + 3 y_0^2 ( -7 + 2 t^2 ) y^2 + y^4 ) \nonumber \\ && + B ( y_0^4 ( 8 + 15 t^2 ) + 3 y_0^2 ( -8 + 5 t^2 ) y^2 + 8 y^4 ) \Big)\nonumber \\ && - \frac{y}{240 t} \Big( 2 A t^2 ( 25 y_0^4 ( 1 + 42 t^2 + 15 t^4 ) - 10 y_0^2 ( 1 + 39 t^2 ) y^2 + y^4 ) \nonumber \\ && + B ( y_0^4 ( 50 + 721 t^2 + 135 t^4 ) - 5 y_0^2 ( 4 + 47 t^2 ) y^2 + ( 2 - 12 t^2 ) y^4 ) \Big) \nonumber \\ &&+ O(y_0^6), \\ W_L &=& 6 A t^2 ( -y_0^2 + y^2 ) - t ( B + 3 A t^2 ) y ( -y_0^2 + y^2 ) \nonumber \\ && + \frac{1}{4}t^2 ( -y_0^2 + y^2 ) ( -9 B y_0^2 + 20 A ( y_0^2 ( -5 + 3 t^2 ) + y^2 ) )\nonumber \\ && - \frac{1}{120} t y ( -y_0^2 + y^2 ) ( 120 A t^2 ( y_0^2 ( -26 + 9 t^2 ) + 3 y^2 ) \nonumber \\ && + B ( y_0^2 ( -152 + 105 t^2 ) + 48 y^2 ) ) + O(y_0^6). 
\eea \section{Perturbations from expansion about the scaling solution} \label{appD} Following the method of expanding about the scaling solution presented in Section \S\,\ref{expaboutscalingsoln}, we have computed the perturbations to $O(y_0^4)$. On long wavelengths, the five-dimensional longitudinal gauge variables take the form \bea \Phi_L &=& f^\Phi_0+y_0^2 (f^\Phi_1+f^\Phi_2 \ln{(1+x_4)} + f^\Phi_3 \ln{(1-x_4)}+f^\phi_4 \ln{(1-\w x_4)}),\qquad \qquad \\ \Psi_L &=& f^\Psi_0+y_0^2 (f^\Psi_1+f^\Psi_2 \ln{(1+x_4)} + f^\Psi_3 \ln{(1-x_4)}+f^\Psi_4 \ln{(1-\w x_4)}), \qquad\qquad\\ W_L&=& e^{-\frac{1}{2}x_4^2}\,\big( f^W_0+y_0^2 (f^W_1+f^W_2 \ln{(1+x_4)} + f^W_3 \ln{(1-x_4)} \nonumber \\ && \qquad +f^W_4 \ln{(1-\w x_4)})\big), \eea where the $f$ are rational functions of $x_4$ and $\w$. For $\Phi_L$, we have \bea f^\Phi_0 &=& \frac{1}{16 x_4^2 ( -1 + x_4^2 ) }\Big(16 \tB - 16 \tB \w x_4 - 2 ( 8 A + \tB - 4 \tB \w^2 ) x_4^2 \nonumber \\ && \qquad + 2 ( -8 A + 3 \tB ) \w x_4^3 + ( 8 A - 3 \tB ) ( 3 + \w^2 ) x_4^4\Big),\\ f^\Phi_1 &=& \frac{1}{960 x_4^4 {( -1 + x_4^2 ) }^5}\Big(8 A x_4^5 \big( -580 x_4 + 95 x_4^3 - 576 \w^5 x_4^4 + 281 x_4^5 \nonumber \\ && + 96 \w^6 x_4^5 - 60 x_4^7 + 5 \w^4 x_4 ( 40 + 167 x_4^2 - 39 x_4^4 ) + 20 \w^3 ( -19 - 5 x_4^2 + 48 x_4^4 ) \nonumber \\ && - 10 \w^2 x_4 ( 78 + 117 x_4^2 - x_4^4 - 2 x_4^6 ) + 4 \w ( 285 + 100 x_4^2 - 25 x_4^4 - 29 x_4^6 + 5 x_4^8 ) \big) \nonumber \\ && + \tB \big( -1920 + x_4 ( 480 \w + 80 ( 91 + 15 \w^2 ) x_4 \nonumber \\ && - 160 \w ( 5 + 4 \w^2 ) x_4^2 + 40 ( -273 - 116 \w^2 + 7 \w^4 ) x_4^3 + 20 \w ( 649 - 127 \w^2 ) x_4^4 \nonumber \\ && + 4 ( 1231 - 3285 \w^2 + 2060 \w^4 ) x_4^5 - 4 \w ( 2036 - 2115 \w^2 + 1152 \w^4 ) x_4^6 \nonumber \\ && + ( 1107 + 8102 \w^2 - 4905 \w^4 + 768 \w^6 ) x_4^7 + 12 \w ( 233 + 48 \w^2 ( -5 + 3 \w^2 ) ) x_4^8 \nonumber \\ && - ( 3131 + 1582 \w^2 - 585 \w^4 + 288 \w^6 ) x_4^9 - 772 \w x_4^{10} + 20 ( 85 + 29 \w^2 ) x_4^{11} \nonumber \\ && + 180 \w x_4^{12} - 120 ( 3 + \w^2 )
x_4^{13} ) \big) \Big), \\ f^\Phi_2 &=& \frac{1}{48 x_4^5 {( -1 + x_4^2 ) }^3}\Big(\tB\big( 48 + ( 36 - 48 \w ) x_4 + 12 ( -7 - 6 \w + 2 \w^2 ) x_4^2 \nonumber \\ && + 4 ( -13 + 6 \w + 9 \w^2 ) x_4^3 + ( 60 + 80 \w - 12 \w^2 ) x_4^4 + 4 ( 8 + 6 \w - 9 \w^2 ) x_4^5 \nonumber \\ && - 3 ( 8 + 7 \w + 4 \w^2 ) x_4^6 + ( -11 + 5 \w^2 ) x_4^7 + 3 \w x_4^8 \big) + 8 A x_4^6 ( x_4 + \w^2 x_4 \nonumber \\ && - \w ( 1 + x_4^2 ) ) \Big), \eea \bea f^\Phi_3 &=& \frac{1}{48 x_4^5 {( -1 + x_4^2 ) }^3}\Big(\tB\big( -48 + 12( 3 + 4\w ) x_4 + 12 ( 7 - 6 \w - 2 \w^2 ) x_4^2 \nonumber \\ && - 4 ( 13 + 6 \w - 9 \w^2 ) x_4^3 - 4 ( 15 - 20 \w - 3 \w^2 ) x_4^4 + 4 ( 8 - 6 \w - 9 \w^2 ) x_4^5 \nonumber \\ && + 3 ( 8 - 7 \w + 4 \w^2 ) x_4^6 + ( -11 + 5 \w^2 ) x_4^7 + 3 \w x_4^8 ) \nonumber \\ && + 8 A x_4^6 ( x_4 + \w^2 x_4 - \w ( 1 + x_4^2 ) ) \Big), \\ f^\phi_4 &=& \frac{3 \tB {( -1 + \w x_4 ) }^2}{2 x_4^4 {( -1 + x_4^2 ) }^2} . \eea For $\Psi_L$, we find {\allowdisplaybreaks \bea f^\Psi_0 &=& \frac{1}{16 x_4 ( -1 + x_4^2 ) }\Big(16 \tB \w - 4 ( 8 A + \tB ( -1 + 2 \w^2 ) ) x_4 + 2 ( 8 A - 3 \tB ) \w x_4^2 \nonumber \\ && \qquad + ( -8 A + 3 \tB ) ( -3 + \w^2 ) x_4^3\Big), \\ f^\Psi_1 &=& \frac{1}{960 x_4^3 {( -1 + x_4^2 ) }^5}\Big(-480 \tB \w - 240 \tB ( -7 + 5 \w^2 ) x_4 + 160 \tB \w ( 5 + 4 \w^2 ) x_4^2 \nonumber \\ && - 40 \tB ( 143 - 104 \w^2 + \w^4 ) x_4^3 + 20 \w ( \tB ( 197 - 155 \w^2 ) + 8 A ( -3 + \w^2 ) ) x_4^4 \nonumber \\ && + 20 ( -8 A ( 34 - 21 \w^2 + \w^4 ) + \tB ( 98 - 369 \w^2 + 101 \w^4 ) ) x_4^5\nonumber \\ && - 4 \w ( 200 A ( -14 + 5 \w^2 ) + \tB ( 34 - 975 \w^2 + 288 \w^4 ) ) x_4^6 \nonumber \\ && + ( 40 A ( 55 - 234 \w^2 + 67 \w^4 ) + \tB ( 2455 + 838 \w^2 - 1005 \w^4 + 192 \w^6 ) ) x_4^7 \nonumber \\ && - 4 \w ( 3 \tB ( 53 + 120 \w^2 - 36 \w^4 ) + 8 A ( 155 - 120 \w^2 + 36 \w^4 ) ) x_4^8 \nonumber \\ && - ( \tB ( 3515 - 1042 \w^2 - 225 \w^4 + 72 \w^6 ) - 8 A ( 89 + 170 \w^2 - 75 \w^4 + 24 \w^6 ) ) x_4^9 \nonumber \\ && + 4 ( 232 A + 193 \tB ) \w x_4^{10} - 20 ( 8 A 
( 1 + \w^2 ) \nonumber \\ && + \tB ( -91 + 29 \w^2 ) ) x_4^{11} - 20 ( 8 A + 9 \tB ) \w x_4^{12} + 120 \tB ( -3 + \w^2 ) x_4^{13}\Big), \\ f^\Psi_2 &=& \frac{-1}{48 x_4^4 {( -1 + x_4^2 ) }^3}\Big( 8 A x_4^5 ( x_4 + \w^2 x_4 - \w ( 1 + x_4^2 ) ) + \tB ( 36 + 60 x_4 - 36 x_4^2 \nonumber \\ && - 84 x_4^3 + 24 x_4^5 + 5 x_4^6 + \w^2 x_4 ( 24 + 36 x_4 - 12 x_4^2 - 36 x_4^3 - 12 x_4^4 + 5 x_4^5 ) \nonumber \\ && + \w ( -48 - 72 x_4 + 24 x_4^2 + 80 x_4^3 + 24 x_4^4 - 21 x_4^5 + 3 x_4^7 ) ) \Big), \\ f^\Psi_3 &=& \frac{1}{48 x_4^4 {( -1 + x_4^2 ) }^3}\Big(8 A x_4^5 ( - x_4 - \w^2 x_4 + \w (1+x_4^2) ) + \tB ( -36 + 60 x_4 + 36 x_4^2 \nonumber \\ && - 84 x_4^3 + 24 x_4^5 - 5 x_4^6 - \w^2 x_4 ( -24 + 36 x_4 + 12 x_4^2 - 36 x_4^3 + 12 x_4^4 + 5 x_4^5 ) \nonumber \\ && + \w ( -48 + 72 x_4 + 24 x_4^2 - 80 x_4^3 + 24 x_4^4 + 21 x_4^5 - 3 x_4^7 ) ) \Big), \\ f^\Psi_4 &=& \frac{-3 \tB {( -1 + \w x_4 ) }^2}{2 x_4^4 {( -1 + x_4^2 ) }^2} . \eea } Finally, for $W_L$, we have \bea f^W_0 &=&\frac{( -1 + \w^2 ) x_4 ( -24 A x_4 ( -2 + \w x_4 ) + \tB ( -18 x_4 + \w ( -8 + 9 x_4^2 ) ) ) }{8 {( -1 + x_4^2 ) }^2},\\ f^W_1 &=&\frac{(-1 + \w^2)}{480 x_4^2 {( -1 + x_4^2 ) }^7} \Big( 8 A x_4^4 \big( 1500 + 84 \w^4 x_4^4 ( -12 + x_4^2 ) \nonumber \\ && - 6 \w^2 ( 50 + 100 x_4^2 - 427 x_4^4 + x_4^6 ) + 3 \w^3 x_4 ( 60 + 585 x_4^2 - 160 x_4^4 + 3 x_4^6 ) \nonumber \\ &&+ \w^5 ( 168 x_4^5 - 36 x_4^7 ) - 6 x_4^2 ( -590 + 243 x_4^2 + 201 x_4^4 - 4 x_4^6 + 10 x_4^8 ) \nonumber \\ && + \w x_4 ( -1560 - 4935 x_4^2 + 2000 x_4^4 - 265 x_4^6 + 92 x_4^8 ) \big) \nonumber \\ && + \tB \big( 1440 + x_4 ( -84 \w^4 x_4^5 ( 24 + 28 x_4^2 + 3 x_4^4 ) \nonumber \\ && + 12 \w^5 x_4^6 ( 40 + 6 x_4^2 + 9 x_4^4 ) + 6 \w^2 x_4^3 ( -450 + 964 x_4^2 + 863 x_4^4 + 3 x_4^6 ) \nonumber \\ && - 3 \w^3 x_4^2 ( -40 - 748 x_4^2 - 2333 x_4^4 + 672 x_4^6 + 9 x_4^8 ) \nonumber \\ && - 6 x_4 ( 2160 - 5490 x_4^2 + 3770 x_4^4 - 2249 x_4^6 + 597 x_4^8 - 588 x_4^{10} + 90 x_4^{12} ) \nonumber \\ && + \w ( -2160 + 18920 x_4^2 - 53216 x_4^4 
+ 20629 x_4^6 - 11216 x_4^8 + 5579 x_4^{10} \nonumber \\ && - 2236 x_4^{12} + 360 x_4^{14} ) ) \big) \Big), \\ f^W_2 &=& \frac{( 1 - \w )}{24 {( x_4 - x_4^3 ) }^4} \Big( \tB \big( -144 + 108 ( -1 + 2 \w ) x_4 - 24 ( -5 + \w ) ( 3 + 2 \w ) x_4^2 \nonumber \\ && - 36 ( -11 + \w ( 16 + \w ) ) x_4^3 + 24 \w ( -29 + 7 \w ) x_4^4 \nonumber \\ && + 36 ( -2 + \w ( -2 + 7 \w ) ) x_4^5 + 3 ( 1 + \w ) ( 3 + 32 \w ) x_4^6 + 7 \w ( 1 + \w ) x_4^7 \nonumber \\ &&+ 27 ( 1 + \w ) x_4^8 - 9 \w ( 1 + \w ) x_4^9 \big) + 24 A ( 1 + \w ) x_4^6 ( -1 - 3 x_4^2 + \w ( x_4 + x_4^3 ) ) \Big), \nonumber \\ && \\ f^W_3 &=& \frac{( 1 + \w )}{24 x_4^4 {( -1 + x_4^2 ) }^4} \Big( \tB \big( -144 + 108 ( 1 + 2 \w ) x_4 - 24 ( 5 + \w ) ( -3 + 2 \w ) x_4^2 \nonumber \\ && + 36 ( -11 + ( -16 + \w ) \w ) x_4^3 + 24 \w ( 29 + 7 \w ) x_4^4 - 36 ( -2 + \w ( 2 + 7 \w ) ) x_4^5 \nonumber \\ && + 3 ( -1 + \w ) ( -3 + 32 \w ) x_4^6 - 7 ( -1 + \w ) \w x_4^7 - 27 ( -1 + \w ) x_4^8 \nonumber \\ &&+ 9 ( -1 + \w ) \w x_4^9 \big) - 24 A ( -1 + \w ) x_4^6 ( -1 - 3 x_4^2 + \w ( x_4 + x_4^3 ) ) \Big), \\ f^W_4 &=& \frac{3 \tB {( -1 + \w x_4 ) }^2 ( 4 - 10 x_4^2 + \w x_4 ( -1 + 7 x_4^2 ) ) }{x_4^4 {( -1 + x_4^2 ) }^4} . \eea Results including the corrections at $O(\tk^2)$ can be found at \cite{Website}. \section{Bulk geodesics} \label{appE} To calculate the affine distance between the branes along a spacelike geodesic we must solve the geodesic equations in the bulk. Let us first consider the situation in Birkhoff-frame coordinates for which the bulk metric is static and the branes are moving. The Birkhoff-frame metric takes the form (see Chapter \S\,\ref{branegravitychapter}) \[ \d s^2 = \d Y^2 - N^2(Y)\, \d T^2 + A^2(Y)\, \d \vec{x}^2, \] where for AdS-Schwarzschild with a horizon at $Y=0$, \[ A^2(Y) = \frac{\cosh(2 Y/L)}{\cosh(2Y_0/L)}, \qquad N^2(Y) = \frac{\cosh(2 Y_0/L)}{\cosh(2Y/L)}\left(\frac{\sinh{(2Y/L)}}{\sinh{(2Y_0/L)}}\right)^2. 
\] At $T=0$, the $Y$-coordinate of the branes is represented by the parameter $Y_0$. The subsequent brane trajectories $Y_\pm(T)$ can then be determined by integrating the Israel matching conditions, which read $\tanh{(2Y_\pm/L)}= \pm \sqrt{1-V_\pm^2}\,$, where $V_\pm = (\d Y_\pm/\d T)/N(Y_\pm)$ are the proper speeds of the positive- and negative-tension branes respectively. From this, it further follows that $Y_0$ is related to the rapidity $y_0$ of the collision by $\tanh y_0 =\sech(2Y_0/L)$. For the purpose of measuring the distance between the branes, a natural choice is to use spacelike geodesics that are orthogonal to the four translational Killing vectors of the static bulk, corresponding to shifts in $\vec{x}$ and $T$. Taking the $\vec{x}$ and $T$ coordinates to be fixed along the geodesic, we find that $Y_{,\lambda}$ is constant for an affine parameter $\lambda$ along the geodesic. To make the connection to our original brane-static coordinate system, recall that the metric function $b^2(t,y) = A^2(Y)$, and thus \[ Y_{,\lambda}^2 = \frac{(bb_{,t}t_{,\lambda}+b b_{,y} y_{,\lambda})^2}{b^4 - \theta^2} = n^2 (-t_{,\lambda}^2+t^2 y_{,\lambda}^2), \] where we have introduced the constant $\theta=\tanh{y_0}=V/c$. Adopting $y$ now as the affine parameter, we have \[ 0 = (b_{,t}^2 b^2+n^2(b^4-\theta^2))t_{,y}^2 + 2 b_{,t}b_{,y}b^2 t_{,y}+(b_{,y}^2 b^2-n^2t^2(b^4-\theta^2)), \] where $t$ is to be regarded now as a function of $y$. We can solve this equation order by order in $y_0$ using the series ansatz \[ t(y) = \sum_{n=0}^\infty c_n y^n, \] where the constants $c_n$ are themselves series in $y_0$.
Using the series solution for the background geometry given in Section \ref{appB}, and imposing the boundary condition that $t(y_0)=t_0$, we obtain \bea c_0 &=& t_0 + \frac{t_0\,y_0^2}{2} - 2\,t_0^2\,y_0^3 + \frac{\left( t_0 + 36\,t_0^3 \right) y_0^4}{24} - t_0^2\left( 1 + 5\,t_0^2 \right) y_0^5 \nonumber \\ && + \left( \frac{t_0}{720} + \frac{17\,t_0^3}{4} + 4\,t_0^5 \right) y_0^6 - \frac{t_0^2\left( 13 + 250\,t_0^2 + 795\,t_0^4 \right) y_0^7}{60}\qquad \nonumber \\ && + O(y_0^8), \\ c_1 &=& 2\,t_0^2\,y_0^2 + \left( \frac{5\,t_0^2}{3} + 5\,t_0^4 \right) \,y_0^4 - 8\,t_0^3\,y_0^5 \nonumber \\ && + \left( \frac{91\,t_0^2}{180} + \frac{23\,t_0^4}{6} + \frac{53\,t_0^6}{4} \right) \,y_0^6 + O(y_0^7), \\ c_2 &=& -\frac{t_0}{2} - \frac{t_0\left( 1 + 6\,t_0^2 \right) y_0^2}{4} + t_0^2\,y_0^3 - \left( \frac{t_0}{48} - 2t_0^3 + 4t_0^5 \right)y_0^4 \nonumber \\ &&+ \frac{\left( t_0^2 + 23\,t_0^4 \right) y_0^5}{2} + O(y_0^6), \\ c_3 &=& -\frac{5\,t_0^2\,y_0^2}{3} - \frac{t_0^2\,\left( 25 + 201\,t_0^2 \right) y_0^4}{18} + O(y_0^5), \\ c_4 &=& \frac{5\,t_0}{24} + \left( \frac{5\,t_0}{48} + \frac{7\,t_0^3}{4} \right) y_0^2 - \frac{5\,t_0^2\,y_0^3}{12} + O(y_0^4), \\ c_5 &=& \frac{61\,t_0^2\,y_0^2}{60} + O(y_0^3), \\ c_6 &=& -\frac{61\,t_0}{720} + O(y_0^2), \\ c_7 &=& 0 + O(y_0) . \eea Substituting $t_0=x_0/y_0$ and $y=\w y_0$, we find $x(\w)=x_0/y_0+O(y_0)$, \ie to lowest order in $y_0$, the geodesics are trajectories of constant time lying solely along the $\w$ direction. Hence in this limit, the affine and metric separation of the branes (defined in (\ref{d_m})) must necessarily agree. 
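As a consistency check, the boundary condition $t(y_0)=t_0$ imposed above can be verified symbolically from the listed coefficients; the residual cancels order by order in $y_0$. A short sympy sketch:

```python
import sympy as sp

t0, y0 = sp.symbols('t0 y0', positive=True)

# Coefficients c_n of the geodesic series t(y) = sum_n c_n y^n,
# transcribed from the text to the orders quoted there.
c = [
    t0 + t0*y0**2/2 - 2*t0**2*y0**3 + (t0 + 36*t0**3)*y0**4/24
    - t0**2*(1 + 5*t0**2)*y0**5 + (t0/720 + 17*t0**3/4 + 4*t0**5)*y0**6
    - t0**2*(13 + 250*t0**2 + 795*t0**4)*y0**7/60,                      # c_0
    2*t0**2*y0**2 + (5*t0**2/3 + 5*t0**4)*y0**4 - 8*t0**3*y0**5
    + (91*t0**2/180 + 23*t0**4/6 + 53*t0**6/4)*y0**6,                   # c_1
    -t0/2 - t0*(1 + 6*t0**2)*y0**2/4 + t0**2*y0**3
    - (t0/48 - 2*t0**3 + 4*t0**5)*y0**4 + (t0**2 + 23*t0**4)*y0**5/2,   # c_2
    -5*t0**2*y0**2/3 - t0**2*(25 + 201*t0**2)*y0**4/18,                 # c_3
    5*t0/24 + (5*t0/48 + 7*t0**3/4)*y0**2 - 5*t0**2*y0**3/12,           # c_4
    61*t0**2*y0**2/60,                                                  # c_5
    -61*t0/720,                                                         # c_6
    0,                                                                  # c_7
]

# Boundary condition t(y_0) = t_0: the residual vanishes to the quoted orders.
residual = sp.expand(sum(cn*y0**n for n, cn in enumerate(c)) - t0)
print(residual)  # -> 0
```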
To check this, the affine distance between the branes is given by \bea \frac{d_a}{L} &=& \int_{-y_0}^{y_0}n\sqrt{t^2-t'^2}\,\d y \nonumber \\ &=& 2\,t_0\,y_0 + \frac{\left( t_0 + 5\,t_0^3 \right) \,y_0^3}{3} - 4\,t_0^2\,y_0^4 + \frac{\left( t_0 - 10\,t_0^3 + 159\,t_0^5 \right) \,y_0^5}{60} \nonumber \\[1ex] && - \frac{2\,\left( t_0^2 + 30\,t_0^4 \right) \,y_0^6}{3} + \frac{\left( t_0 + 31115\,t_0^3 - 5523\,t_0^5 + 12795\,t_0^7 \right) \,y_0^7}{2520} \nonumber \\ && + O(y_0^8), \eea which to lowest order in $y_0$ reduces to \[ \frac{d_a}{L} = 2\,x_0 + \frac{5\,x_0^3}{3} + \frac{53\,x_0^5}{20} + \frac{853\,x_0^7}{168} + O(x_0^8) + O(y_0^2), \] in agreement with the series expansion of (\ref{d_m}). (Note however that the two distance measures differ nontrivially at order $y_0^2$). To evaluate the perturbation $\delta d_a$ in the affine distance between the branes, consider \bea \delta \int \sqrt{\g \dot{x}^\mu \dot{x}^\nu} \d \lambda &=& \frac{1}{2}\int \frac{\d \lambda}{\sqrt{g_{\rho\sigma} \dot{x}^\rho \dot{x}^\sigma}} \left(\delta\g \dot{x}^\mu\dot{x}^\nu+g_{\mu\nu ,\kappa}\delta x^\kappa\dot{x}^\mu \dot{x}^\nu +2 \g \dot{x}^\mu \delta \dot{x}^\nu\right) \nonumber \\ &=&\left[\frac{\dot{x}_\nu\delta x^\nu}{\sqrt{g_{\rho\sigma} \dot{x}^\rho \dot{x}^\sigma}}\right]+\frac{1}{2}\int\frac{\delta\g\dot{x}^\mu \dot{x}^\nu}{\sqrt{g_{\rho\sigma} \dot{x}^\rho \dot{x}^\sigma}}\,\d \lambda, \eea where dots indicate differentiation with respect to the affine parameter $\lambda$, and in going to the second line we have integrated by parts and made use of the background geodesic equation $\ddot{x}_\sigma=\frac{1}{2} g_{\mu\nu ,\sigma}\dot{x}^\mu \dot{x}^\nu$ and the constraint $\g \dot{x}^\mu \dot{x}^\nu=1$. If the endpoints of the geodesics on the branes are unperturbed, this expression is further simplified by the vanishing of the surface term. 
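The leading-order reduction of $d_a/L$ quoted above can be reproduced symbolically by substituting $t_0=x_0/y_0$ and discarding all terms of $O(y_0^2)$; a short sympy sketch:

```python
import sympy as sp

t0, y0, x0 = sp.symbols('t0 y0 x0', positive=True)

# Affine distance series in y0, transcribed from the text
da = (2*t0*y0 + (t0 + 5*t0**3)*y0**3/3 - 4*t0**2*y0**4
      + (t0 - 10*t0**3 + 159*t0**5)*y0**5/60
      - 2*(t0**2 + 30*t0**4)*y0**6/3
      + (t0 + 31115*t0**3 - 5523*t0**5 + 12795*t0**7)*y0**7/2520)

# Substitute t0 = x0/y0 and keep only the O(y0^0) piece
leading = sp.expand(da.subs(t0, x0/y0)).subs(y0, 0)
print(leading)  # 2*x0 + 5*x0**3/3 + 53*x0**5/20 + 853*x0**7/168
```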
Converting to coordinates where $t_0= x_0/y_0$ and $y= \w y_0$, to lowest order in $y_0$ the unperturbed geodesics lie purely in the $\w$ direction, and so the perturbed affine distance is once again identical to the perturbed metric distance (\ref{deltad_m}). Explicitly, we find \bea \frac{\delta d_a}{L} &=& -\frac{2\,\left( B + A\,t_0^2 \right) \,y_0}{t_0} \nonumber \\ && - \left( \frac{B\,\left( 4 + 3\,t_0^2 \right)}{12\,t_0} + \frac{A\,\left( t_0+ 9\,t_0^3 \right) }{3} \right) y_0^3 + \left( -4\,B + 4\,A\,t_0^2 \right) y_0^4\nonumber \\ && - \left(\frac{B\,\left( 2 + 2169\,t_0^2 + 135\,t_0^4 \right) + 2\,A\,t_0^2\,\left( 1 + 1110\,t_0^2 + 375\,t_0^4 \right)} {120\,t_0}\right)y_0^5 \nonumber \\ && + \left(\frac{4\,A\,t_0^2\,\left( 1 + 42\,t_0^2 \right) - B\,\left( 4 + 57\,t_0^2 \right)}{6}\right) y_0^6 \nonumber \\ && - \frac{1}{10080 t_0}\, \Big(B\left( 4 + 88885t_0^2 + 952866t_0^4 + 28875t_0^6 \right) \nonumber \\ && + 4At_0^2\left( 1 - 152481 t_0^2 + 293517 t_0^4 + 36015 t_0^6 \right)\Big)\,y_0^7 \nonumber \\ && + O(y_0^8), \eea which, substituting $t_0=x_0/y_0$ and dropping terms of $O(y_0^2)$, reduces to \bea \frac{\delta d_a}{L} &=& -\frac{2\,\tB}{x_0} - 2\,A\,x_0 - \frac{\tB}{4}\,x_0 - 3\,A\,x_0^3 - \frac{9}{8}\,\tB\,x_0^3 - \frac{25}{4}\,A\,x_0^5 \nonumber \\ && - \frac{275}{96}\,\tB\,x_0^5 - \frac{343}{24}\,A\,x_0^7 + O(x_0^8),\qquad \eea where $\tB=B y_0^2$. Once again, this expression is in accordance with the series expansion of (\ref{deltad_m}). At $O(y_0^2)$, however, the perturbed affine and metric distances do not agree.
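The corresponding reduction of $\delta d_a/L$ can be verified in the same way, substituting $t_0=x_0/y_0$ and $B=\tB/y_0^2$ and dropping $O(y_0^2)$ terms; a short sympy sketch:

```python
import sympy as sp

t0, y0, x0 = sp.symbols('t0 y0 x0', positive=True)
A, B, tB = sp.symbols('A B tB')

# delta d_a series in y0, transcribed from the text
dda = (-2*(B + A*t0**2)*y0/t0
       - (B*(4 + 3*t0**2)/(12*t0) + A*(t0 + 9*t0**3)/3)*y0**3
       + (-4*B + 4*A*t0**2)*y0**4
       - (B*(2 + 2169*t0**2 + 135*t0**4)
          + 2*A*t0**2*(1 + 1110*t0**2 + 375*t0**4))/(120*t0)*y0**5
       + (4*A*t0**2*(1 + 42*t0**2) - B*(4 + 57*t0**2))/6*y0**6
       - (B*(4 + 88885*t0**2 + 952866*t0**4 + 28875*t0**6)
          + 4*A*t0**2*(1 - 152481*t0**2 + 293517*t0**4 + 36015*t0**6))
         /(10080*t0)*y0**7)

# Substitute t0 = x0/y0 and B = tB/y0^2, then drop O(y0^2)
leading = sp.expand(dda.subs([(t0, x0/y0), (B, tB/y0**2)])).subs(y0, 0)
print(leading)
```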
\section{Introduction} In recent years, the use of wavefunction-based post-Kohn--Sham or post-Hartree--Fock methods to solve problems in materials science has proliferated. \cite{muller_wavefunction-based_2012} This is in part driven by an interest in obtaining precise energies (accurate to within 1 mHa) for complex systems using hierarchies of methods found in quantum chemistry such as coupled cluster theory. While growing in popularity, wavefunction methods have yet to see widespread adoption, in large part due to their significant computational cost scaling with system size. This is especially of note in coupled cluster theory using a plane wave basis, and as a result, some authors are seeking methods to control finite size errors in order to run calculations using smaller system sizes.~\cite{gruber_applying_2018} Finite size errors arise when attempts are made to simulate an infinite system Hamiltonian with a periodic supercell containing a necessarily finite particle number.\cite{fraser_finite-size_1996,drummond_finite-size_2008} The finite size of a supercell places a limitation on the minimum momenta in Fourier sums (e.g., with a cubic box of length $L$, the smallest momentum transfer is $2\pi/L$). These limitations ultimately lead to errors in the correlation energy; \cite{gruber_applying_2018,ruggeri_correlation_2018} this has been attributed to long range van der Waals forces. \cite{gruber_applying_2018,gruber_ab_2018} Since these finite size errors are large and slowly converging with increasing supercell size, {which has been analyzed in detail for coupled cluster theory, \cite{mcclain_gaussian-based_2017}} there has been significant interest in developing wavefunction methods with reduced computational cost to circumvent finite size error and allow the treatment of larger supercells.
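The restriction on the minimum momentum is easy to make concrete: enumerating the plane-wave momenta of a cubic supercell shows that the smallest nonzero momentum transfer is $2\pi/L$. A minimal numpy sketch (the box size is illustrative):

```python
import itertools
import numpy as np

def momentum_grid(L, nmax):
    """Plane-wave momenta k = (2*pi/L)*n for integer vectors n, |n_i| <= nmax."""
    ns = range(-nmax, nmax + 1)
    return np.array([(2*np.pi/L)*np.array(n)
                     for n in itertools.product(ns, repeat=3)])

L = 5.0                               # illustrative box length
ks = momentum_grid(L, 2)
norms = np.linalg.norm(ks, axis=1)
k_min = norms[norms > 0].min()
print(k_min, 2*np.pi/L)               # smallest nonzero momentum is 2*pi/L
```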
These include embedding methods,\cite{sun_quantum_2016} such as density matrix embedding,\cite{knizia_density_2012,knizia_density_2013,bulik_electron_2014,bulik_density_2014,ricke_performance_2017,zheng_cluster_2017,pham_can_2018} wavefunction-in-DFT embedding,\cite{henderson_embedding_2006,tuma_treating_2006,gomes_calculation_2008,sharifzadeh_all-electron_2009,huang_quantum_2011,libisch_embedded_2014, manby_simple_2012,goodpaster_accurate_2014,chulhai_projection-based_2018} electrostatic embedding,\cite{hirata_fast_2005,dahlke_electrostatically_2007,hirata_fast_2008,leverentz_electrostatically_2009,bygrave_embedded_2012} QM/MM-inspired schemes,\cite{shoemaker_simomm:_1999,sherwood_quasi:_2003,herschend_combined_2004,beran_predicting_2010,chung_oniom_2015} and others.\cite{eskridge_local_2018,lan_communication:_2015,rusakov_self-energy_2019,voloshina_embedding_2007,masur_fragment-based_2016} Local correlation methods\cite{collins_energy-based_2015,usvyat_periodic_2018} such as fragment-based schemes,\cite{gordon_fragmentation_2012,li_generalized_2007,li_cluster--molecule_2016,rolik_general-order_2011,li_divide-and-conquer_2004,kobayashi_alternative_2007,kristensen_locality_2011,ghosh_noncovalent_2010,kitaura_fragment_1999,fedorov_extending_2007,netzloff_ab_2007,ziolkowski_linear_2010} incremental methods,\cite{stoll_correlation_1992,paulus_method_2006,friedrich_fully_2007,stoll_approaching_2012,friedrich_incremental_2013,voloshina_first_2014,kallay_linear-scaling_2015,fertitta_towards_2018} and hierarchical methods,\cite{deev_approximate_2005,manby_extension_2006,nolan_calculation_2009,collins_ab_2011} break the system into smaller subsystems, then extrapolate or stitch together the energies.
Some methods take advantage of range separation\cite{toulouse_adiabatic-connection_2009,bruneval_range-separated_2012,shepherd_range-separated_2014} or other distance-based schemes\cite{spencer_efficient_2008,maurer_efficient_2013,kats_sparse_2013,kats_speeding_2016,ayala_extrapolating_1999} to reduce computational cost. In addition to work on developing or modifying electronic structure methods, much work on reducing the cost of wavefunction methods has been focused on modifying basis sets in order to accelerate convergence and decrease computation time. Local orbital methods have been popular,\cite{pisani_local-mp2_2005,ayala_atomic_2001,usvyat_periodic_2015,werner_fast_2003,flocke_natural_2004,werner_scalable_2015,rolik_efficient_2013,forner_coupled-cluster_1985,schutz_low-order_2000,neese_efficient_2009,sun_gaussian_2017,booth_plane_2016,blum_ab_2009,subotnik_local_2005} often based on the local ansatz of Pulay and Saebo\cite{saebo_local_1993} or Stollhoff and Fulde.\cite{stollhoff_local_1977} Other common methods include progressive downsampling,\cite{shimazaki_brillouin-zone_2009,hirata_fast_2009,ohnishi_logarithm_2010} downfolding,\cite{purwanto_frozen-orbital_2013} use of explicitly-correlated basis sets\cite{adler_local_2009,shiozaki_communications:_2010,gruneis_explicitly_2013,usvyat_linear-scaling_2013,gruneis_efficient_2015} or natural orbitals,\cite{gruneis_natural_2011} and tensor manipulations.\cite{hohenstein_tensor_2012,benedikt_tensor_2013,hummel_low_2017,peng_highly_2017,motta_efficient_2018} Discussion of the details and relative merits of these methods is beyond the scope of this paper; for a review, we direct the interested reader to Refs. \onlinecite{huang_advances_2008,muller_wavefunction-based_2012,beran_modeling_2016,andreoni_coupled_2018}. 
However, there has been some work on developing corrections for finite size errors.\cite{fraser_finite-size_1996,kent_finite-size_1999,kwee_finite-size_2008,drummond_finite-size_2008,holzmann_theory_2016,liao_communication:_2016} Many-body methods can sometimes be integrated to the thermodynamic limit (TDL),\cite{gell-mann_correlation_1957,nozieres_correlation_1958,onsager_integrals_1966,bishop_electron_1982,bishop_electron_1978,bishop_overview_1991,ziesche_selfenergy_2007} allowing for the derivation of analytic finite-size correction expressions.\cite{chiesa_finite-size_2006} Several studies from the last year have particular relevance to our work here. Gr{\"u}neis \emph{et al.}\cite{gruber_applying_2018,gruber_ab_2018} employed a grid integration within periodic coupled cluster for \emph{ab initio} Hamiltonians with applications to various solids. In another study, Alavi \emph{et al.}\cite{ruggeri_correlation_2018} devised a novel extrapolation relationship that links different electron gas calculations through the density parameter. Both of these papers use a technique known as twist averaging to try to remove finite size error. Twist averaging is a method that attempts to control finite size errors by first offsetting the $k$-point grid by a small amount, ${\bf k}_s$, and then averaging over all possible offsets.\cite{lin_twist-averaged_2001} { We refer to ${\bf k}_s$ here as a twist angle. One of the main purposes of twist averaging is to provide for a smoother extrapolation to the thermodynamic limit by reducing severe energy fluctuations as the particle number varies. } When performed with a fixed particle number and box length, this process is referred to as twist averaging in the canonical ensemble, which is what we study here.
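Canonical-ensemble twist averaging can be illustrated with a toy model: offset the $k$-point grid of a finite electron gas by ${\bf k}_s$, fill the lowest $N/2$ orbitals, and average the energy over twists drawn from the supercell Brillouin zone. The sketch below uses only the kinetic energy as the per-twist quantity (an illustrative stand-in, not a correlated calculation), with illustrative parameter values:

```python
import itertools
import numpy as np

def kinetic_energy(N, L, ks, nmax=3):
    """Total kinetic energy of N electrons (closed shell, double occupancy)
    in a cubic box of side L, with the k-grid offset by the twist ks."""
    ns = np.array(list(itertools.product(range(-nmax, nmax + 1), repeat=3)))
    k = 2*np.pi/L*ns + ks              # twisted plane-wave momenta
    e = 0.5*np.sum(k**2, axis=1)       # single-particle kinetic energies
    return 2.0*np.sort(e)[:N//2].sum() # doubly occupy the lowest N/2 states

N, L = 14, 5.0                         # illustrative system
rng = np.random.default_rng(0)
# twists drawn uniformly from the supercell Brillouin zone
twists = rng.uniform(-np.pi/L, np.pi/L, size=(100, 3))
e_gamma = kinetic_energy(N, L, np.zeros(3))
e_ta = np.mean([kinetic_energy(N, L, ks) for ks in twists])
print(e_gamma, e_ta)
```

In a production calculation the per-twist quantity would be the full correlated energy, which is exactly why the linear scaling with the number of twists is so costly.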
When employed in stochastic methods, such as variational Monte Carlo,\cite{lin_twist-averaged_2001} diffusion Monte Carlo\cite{drummond_finite-size_2008} or full configuration interaction quantum Monte Carlo,\cite{ruggeri_correlation_2018,shepherd_quantum_2013} the grid can be stochastically sampled at the same time as the main stochastic algorithm, and both stochastic error and error in twist-averaging related to approximate integration can be removed at the same time. As a result, the scaling with the number of twist angles sampled is extremely modest. Unfortunately, the same cost savings cannot be realized for deterministic methods. In this case, in order to achieve a reasonable estimate for the average, one must use a large number of individual energy calculations. This results in the cost scaling linearly with the number of twist angles used, {although the lessening of finite size effects with rising electron number would alleviate this scaling to some extent.\cite{mcclain_gaussian-based_2017}} Here, we seek to remedy the linear scaling of twist averaging for deterministic methods by devising a way to provide an energy that is as accurate as twist-averaging, but with single-calculation cost. {\color{black} In principle, it is possible to find a single twist angle which exactly reproduces the total twist-averaged energy by recognizing that it is an integral of the energy over the twist angles for a system. 
This was the same logic used in analysis by Baldereschi to find a special $k$-point\cite{baldereschi_mean-value_1973} and has been used by others in the QMC community to find a special twist angle.\cite{dagrada_exact_2016,Rajagopal_quantum_1994,Rajagopal_variational_1995} We are motivated similarly and wish to find a single twist angle that yields an energy approximately equal to the full twist-averaged energy for CCD and related wavefunction methods.} { We take advantage of the similarity between the MP2 and CCD correlation energy expressions, using the much cheaper MP2 method to find a single twist angle that produces a system with the most similar number of allowed excitations to the twist-averaged system. We refer to this set of allowed excitations as the `connectivity'. We then use this twist angle to calculate the CCD energy, which is in good agreement with the fully twist-averaged CCD energy. Finally, we compare our energies to those obtained using one twist angle at the Baldereschi point.\cite{baldereschi_mean-value_1973} } { We do not seek to completely remedy the whole of the finite size error, instead noting that other authors have come up with corrections or extrapolations that can be used after twist-averaging is applied.\cite{chiesa_finite-size_2006,drummond_quantum_2009,gruber_ab_2018}} \section{Twist averaging \& Connectivity} Both continuum/real-space and basis-set twist averaging have been used effectively in quantum Monte Carlo calculations;\cite{lin_twist-averaged_2001,drummond_finite-size_2008,ruggeri_correlation_2018,shepherd_quantum_2013} however, twist averaging remains relatively rare in coupled cluster calculations. In \reffig{fig:TADemonstration}, the total $\Gamma$-point CCD energy ($N=38$ to $N=922$) and twist-averaged CCD energy ($N=38$ to $N=294$) are plotted alongside the extrapolation to the TDL for the uniform electron gas ($0.609(3)$ Ha/electron, where the error in the last digit is in parentheses). 
The CCD calculation is performed in a finite basis that is analogous to a minimal basis.\cite{shepherd_many-body_2013} The $\Gamma$-point energy is highly non-monotonic; it does not fit well with the extrapolation. The twist-averaged data shows a much better fit with the extrapolation, resulting in a better estimate of the TDL. The drawback of twist averaging, however, is that it costs $N_\mathrm{s}\,\mathcal{O}\mathrm{[CCD]}$ for $N_\mathrm{s}$ twist angles (here, 100). The twist-averaged energy becomes too costly to calculate with CCD for system sizes above 294 electrons. \begin{figure} \includegraphics[width=0.49\textwidth,height=\textheight,keepaspectratio]{./Figure1.pdf} \caption{Comparison between the twist-averaged (TA) CCD energy and the $\Gamma$-point CCD energy for a uniform electron gas with $r_s=1.0$ as the system size changes (up to $N=294$ and $N=922$, respectively). In general, an extrapolation (here, red line) is performed to calculate the TDL energy. Twist averaging makes this extrapolation easier, because the noise around the extrapolation is smaller, leading to a smaller extrapolation error. Twist averaging is performed over 100 twist angles. Standard errors are calculated in the normal fashion for twist averaging, $\sigma\approx\sqrt{\mathrm{Var}(E_{\mathrm{CCD}}({{\bf k}_s})) / N_s}$ (these are too small to be shown on the graph; on average, 0.2 mHa/electron).} \label{fig:TADemonstration} \end{figure} Figure \ref{fig:TADemonstration} is a clear statement of the problem we wish to resolve here. { Twist averaging resolves some finite size errors that are present at an individual particle number $N$, and allows for improved extrapolation to the thermodynamic limit. } That said, the scaling with the number of twist angles is cost-prohibitive. We aim to develop an approximation to twist averaging that gives comparable accuracy at a fraction of the cost.
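As an aside, the standard error quoted in the caption is straightforward to compute. A minimal sketch, where the array of per-twist CCD energies and its values are invented for illustration (in a real calculation each entry would be one full CCD run at a random twist angle ${\bf k}_s$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-twist CCD energies (Ha/electron); invented for illustration.
e_ccd = -0.034 + 0.002 * rng.standard_normal(100)   # N_s = 100 twist angles

e_ta = e_ccd.mean()                                  # twist-averaged energy
sigma = np.sqrt(e_ccd.var(ddof=1) / e_ccd.size)      # standard error from the caption
print(f"TA energy = {e_ta:.4f} +/- {sigma:.4f} Ha/electron")
```

With 100 twists, the standard error is one tenth of the per-twist spread, which is why the error bars in \reffig{fig:TADemonstration} are too small to see.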
We begin by analyzing how the Hartree-Fock energy and the MP2 correlation energy are modified by twist averaging. This analysis then allows us to build an algorithm that produces CCD results of twist-averaged accuracy for only MP2 cost. \subsection{Hartree-Fock and single-particle eigenvalues} { \begin{figure} \includegraphics[width=0.49\textwidth,height=\textheight,keepaspectratio]{./Figure2.pdf} \caption{{The degeneracy pattern in the energy levels of the $\Gamma$-point calculation can be identified by plotting the HF eigenvalues in ascending order. Here, we show $N=14$ and $N=54$, two systems that are closed shell at the $\Gamma$-point. Averaging the eigenvalues in the manner described in the text removes these degeneracies. The gaps between the eigenvalues, including across the band gap, go to zero as the TDL is approached, giving rise to the metallic character of the gas.}} \label{fig:AverageEigenvalues} \end{figure} } A finite-sized electron gas at the $\Gamma$-point is only closed-shell at certain so-called magic numbers, which are determined by the symmetry of the lattice (for example $N=2$, 14, 38, and 54). { One of the reasons that the $\Gamma$-point calculations are so noisy (\reffig{fig:TADemonstration}) is that there are degeneracies in the HF eigenvalues, which can be seen in \reffig{fig:AverageEigenvalues} and have long been recognized.\cite{drummond_quantum_2009} This can be partially remedied by modifying the Hartree--Fock eigenvalues. The starting-point for this is} writing the HF energy as follows: \begin{equation} E_\mathrm{HF}({\bf k}_s)= \sum_{i} T_i ({\bf k}_s)- \frac{1}{2} \sum_{ij} v_{ijji}({\bf k}_s) \label{eq:fullHF} \end{equation} where $T_i$ is the kinetic energy of orbital $i$ and $v_{ijji}$ is the exchange integral between electrons in orbitals $i$ and $j$. Here, we have included the explicit form of the dependence on the twist angle, ${\bf k}_s$.
The twist-averaged energy is found by summing \refeq{eq:fullHF} over all possible ${\bf k}_s$: \begin{equation} \langle E_\mathrm{HF} \rangle_{\bf{k}_s} = \frac{1}{N_\mathrm{s}}\sum_{{\bf k}_s}^{N_\mathrm{s}} \sum_{i} T_i ({\bf k}_s) - \frac{1}{N_\mathrm{s}}\sum_{{\bf k}_s}^{N_\mathrm{s}}\frac{1}{2} \sum_{ij} v_{ijji}({\bf k}_s) \end{equation} where $N_s$ indicates the number of twist angles used. Swapping the sums yields: \begin{equation} \langle E_\mathrm{HF} \rangle_{\bf{k}_s} = \sum_{i} \left[\frac{1}{N_\mathrm{s}} \sum_{{\bf k}_s}^{N_\mathrm{s}} T_i ({\bf k}_s) \right]- \frac{1}{2} \sum_{ij} \left[ \frac{1}{N_\mathrm{s}}\sum_{{\bf k}_s}^{N_\mathrm{s}} v_{ijji}({\bf k}_s) \right] . \label{eq:ks_sums} \end{equation} Therefore, twist averaging the HF energy is numerically identical to twist averaging the individual matrix elements: \begin{equation} \langle E_\mathrm{HF} \rangle_{\bf{k}_s} = \sum_{i} \langle T_i \rangle_{\bf{k}_s} - \frac{1}{2} \sum_{ij} \langle v_{ijji} \rangle_{\bf{k}_s} \end{equation} Overall, then, we can use twist-averaged HF eigenvalues in place of twist-averaging the HF energy, obtaining a more reasonable density of states \reffig{fig:AverageEigenvalues}. We will use this in our subsequent scheme. \subsection{Beyond Hartree--Fock} { The above approach does not generalize to correlated theories because they have more complex energy expressions. For example, averaging the second-order M{\o}ller-Plesset theory (MP2) correlation energy over all possible twist angles can be written: \begin{equation} \langle E_\mathrm{corr} \rangle_{\bf{k}_s}=\frac{1}{N_\mathrm{s}}\sum_{{\bf k}_s}^{N_\mathrm{s}} \frac{1}{4}\sum_{ijab} \bar{t}_{ijab}({\bf k}_s) \bar{v}_{ijab} ({\bf k}_s), \label{eq:Mp2} \end{equation} where $i$ and $j$ refer to occupied orbitals and $a$ and $b$ refer to unoccupied orbitals. The symbols $\bar{v}$ and $\bar{t}$ refer to the antisymmetrized electron-repulsion integral and amplitude respectively. 
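The sum-swapping identity for the HF energy of the previous subsection follows directly from linearity and is easy to check numerically. A minimal sketch with random stand-in matrix elements (all sizes and values here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_orb = 8, 6                       # toy numbers of twists and orbitals
T = rng.random((n_s, n_orb))            # stand-ins for T_i(k_s)
v = rng.random((n_s, n_orb, n_orb))     # stand-ins for v_ijji(k_s)

# Average the total HF energy over twist angles ...
e_hf = T.sum(axis=1) - 0.5 * v.sum(axis=(1, 2))
lhs = e_hf.mean()

# ... and compare with the energy assembled from twist-averaged matrix elements.
rhs = T.mean(axis=0).sum() - 0.5 * v.mean(axis=0).sum()
print(np.isclose(lhs, rhs))  # True: the two orderings of the sums agree
```

The same linearity argument is exactly what fails for correlated theories, where the energy is not a linear functional of the matrix elements.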
For MP2: \begin{equation} \bar{t}_{ijab} ({\bf k}_s) \bar{v}_{ijab} ({\bf k}_s)= \frac{ |\bar{v}_{ijab}({\bf k}_s)|^2 }{\epsilon_i({\bf k}_s)+\epsilon_j({\bf k}_s)-\epsilon_a({\bf k}_s)-\epsilon_b({\bf k}_s)} \end{equation}} { Even though MP2 diverges in the thermodynamic limit, the energy expression (\refeq{eq:Mp2}) has a similar structure to coupled cluster theory, the random phase approximation, and even full configuration interaction quantum Monte Carlo. As such, we can make generalized observations using the MP2 energy expression, and then use these observations to derive a scheme to find an optimal $k_s$ twist angle that works for all of these methods.} \subsection{The connectivity approach} The MP2 correlation energy can vary substantially as the twist angle is changed. For example, in the $N=14$ electron system with a basis set of $M=38$ orbitals, the MP2 energy can vary from $-0.0171$ Ha/electron to $-0.0001$ Ha/electron. This arises, in particular, because the number of low-momentum excitations (minimum $|{\bf k}_i-{\bf k}_a|$) will vary significantly. Since the contribution of each excitation to the MP2 sum scales as $|{\bf k}_i-{\bf k}_a|^{-4}$, contributions decay rapidly beyond the minimum vector. { This effect arises because, when the twist angle is changed,} different orbitals now fall into the occupied ($ij$) space, and different orbitals fall into the virtual ($ab$) space. This changes the value of the sum over both occupied and virtual orbitals, since many individual terms in the sum are now substantively different. We illustrate this using a diagram in the Supplementary Information. By contrast, the integrals themselves do not change; { to show this, the integral can be written: {\color{black} \begin{equation} v_{ijab}=\frac{4\pi}{L^3} \frac{1}{({\bf k}_i-{\bf k}_a)^2} \delta_{{\bf k}_i-{\bf k}_a , {\bf k}_b-{\bf k}_j} \delta_{\sigma_i \sigma_a}\delta_{\sigma_j \sigma_b} .
\label{eq:ERIs} \end{equation} } The Kronecker deltas, $\delta$, ensure that momentum and spin symmetry (denoted $\sigma$) are conserved.} On changing ${\bf k}_p \rightarrow {\bf k}_p+{\bf k}_s$ for all ${\bf k}$'s, the momentum difference in the denominator here does not change, since $({\bf k}_i+{\bf k}_s-{\bf k}_a-{\bf k}_s)^2=({\bf k}_i-{\bf k}_a)^2$. {\color{black} In general, our calculations were set up following the details given in our prior work, e.g., Ref. \onlinecite{shepherd_convergence_2012}.} At this stage, we conjecture that \emph{if} one of the mechanisms by which twist averaging is affecting the MP2 energy (and other correlation energies) is to smooth out the inconsistent contributions between different momenta, \emph{then} it might be possible for us to find a `special twist angle' where the number of low-momentum states for that single twist angle is a good match to the average number of momentum states across all twist angles. { Further, we will show this special twist angle is transferable to other, more sophisticated methods such as coupled cluster doubles theory. } To find this special twist angle, we proceed as follows: \begin{enumerate} \item For a given twist angle ${\bf k}_s$, loop over the same $ijab$ as the MP2 sum $\sum_{ijab}$. For each $ijab$ set: \begin{enumerate} \item Determine the momentum transfer $x=|{\bf n}_i-{\bf n}_a|^2$ where ${\bf n}_a$ is the integer equivalent of the quantum number: ${\bf k}_a=\frac{2\pi}{L}{\bf n}_a$. \item Increment a histogram element $h_x$ by one. \end{enumerate} \item Create a vector ${\bf h}$, whose elements $h_x$ correspond to the number of $v_{ijab}$ matrix elements with magnitude $\frac{1}{\pi L}\frac{1}{x}$ that are encountered during the MP2 sum.
\item Average ${\bf h}$ over all twist angles, yielding $\langle {\bf h} \rangle_{\bf{k}_s}$. \item Loop over the twist angles again, and find the single ${\bf h}$ (and corresponding twist angle) that best matches $\langle {\bf h} \rangle_{\bf{k}_s}$ using: \begin{equation} \min_{\bf{k}_s} \sum_x \frac{1}{x^2} \left( h_x - \langle h_x \rangle_{\bf{k}_s} \right)^2 \end{equation} The weight term $1/x^2$ was chosen empirically to diminish the contribution of the large number of high-momentum elements that contribute relatively little to the energy. \end{enumerate} { Looking at \refeq{eq:Mp2}, there are two ways to proceed. We could either use this special ${\bf k}_s$ for all aspects of the calculation (e.g. for both the integral evaluation and the eigenvalue difference), or we could use the special ${\bf k}_s$ for the integral only, and twist-average the eigenvalues before performing the CCD calculation. We found that the latter was more numerically effective for $N=14$ and decided to use this approach to generate the results presented here. In general, though, for larger systems it does not make a large difference.} In practice, we implemented this algorithm within an MP2 and CCD code; we call the MP2 calculation at each twist angle and then the CCD calculation once at the end. For the remainder of this work, we will call this application of the above algorithm the ``connectivity scheme,'' referencing the idea that the pattern of non-zero matrix elements $v_{ijab}$ resembles a connected network. \section{Results} {We demonstrate the effectiveness of this algorithm for coupled cluster calculations on the uniform electron gas in \reffig{fig:results}. In general, our results show that the connectivity scheme works for different electron numbers, basis sets, and $r_s$ values. Furthermore, evaluation of the connectivity scheme is approximately 100x cheaper than twist averaging.
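The histogram-matching steps above can be sketched in a few lines. This is a deliberately simplified illustration, not our implementation: it uses a hypothetical one-dimensional toy model (invented grid, occupation number, and twist values) and histograms single-particle $i\rightarrow a$ transfers rather than the full $ijab$ quadruples, but the bookkeeping is the same:

```python
import numpy as np
from itertools import product

def transfer_histogram(ns_occ, ns_virt):
    # Histogram h_x of momentum transfers x = |n_i - n_a|^2 over occ -> virt pairs.
    h = {}
    for ni, na in product(ns_occ, ns_virt):
        x = int((ni - na) ** 2)
        h[x] = h.get(x, 0) + 1
    return h

def special_twist(twists, occ_virt):
    # Pick the twist whose histogram best matches the twist-averaged histogram,
    # using the empirical 1/x^2 weight from the text.
    hists = [transfer_histogram(*occ_virt(ks)) for ks in twists]
    keys = sorted(set().union(*hists))
    H = np.array([[h.get(x, 0) for x in keys] for h in hists], float)
    h_avg = H.mean(axis=0)
    w = 1.0 / np.asarray(keys, float) ** 2
    scores = ((H - h_avg) ** 2 * w).sum(axis=1)
    return int(np.argmin(scores))

# Toy 1D "gas": integer grid n in [-4, 4]; the 4 lowest states by (n + ks)^2
# are occupied, so the occupied set itself depends on the twist ks.
grid = np.arange(-4, 5)
def occ_virt(ks):
    order = np.argsort((grid + ks) ** 2, kind="stable")
    return grid[order[:4]], grid[order[4:]]

twists = np.linspace(-0.5, 0.5, 11)
idx = special_twist(twists, occ_virt)
print("special twist angle:", twists[idx])
```

The key point the sketch captures is that the integrals depend only on integer momentum differences, while the occupied/virtual partition shifts with the twist; the minimization then selects the partition most representative of the average.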
} \begin{figure} \begin{center} \vspace{-1cm} \subfigure[\mbox{}]{% \includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{./Figure3a.pdf} \label{subfig:diffN} } \subfigure[\mbox{}]{% \includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{./Figure3b.pdf} \label{subfig:diffM} } \subfigure[\mbox{}]{% \includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{./Figure3c.pdf} \label{subfig:diffRs} } \caption{All energies shown represent the difference in correlation energy between the $\Gamma$-point and the relevant calculation, since, by design, the Hartree-Fock energy is identical between the connectivity scheme and standard twist averaging (TA). The connectivity scheme delivers comparable corrections to the correlation energy (relative to the $\Gamma$-point) when compared with twist averaging across a wide range of (a) electron numbers (using a minimal basis set, where $M \approx 2N$, as mentioned in Ref. \onlinecite{shepherd_many-body_2013} and tabulated in the Supplementary Information), (b) different basis sets ($M=36-2838$ orbitals, with $N=54$ electrons), and (c) $r_s$ values (0.01 -- 50.0 a.u., with $N=54$ electrons). Twist averaging is performed over 100 twist angles. Standard errors are calculated in the normal fashion for twist averaging, $\sigma\approx\sqrt{\mathrm{Var}(E_{\mathrm{CCD}}({{\bf k}_s})) / N_s}$. }\label{fig:results} \end{center} \end{figure} In \reffig{subfig:diffN}, we compare the connectivity scheme to full twist-averaging for CCD calculations on the uniform electron gas. Energy differences from the $\Gamma$-point energy are plotted for each electron number. Our results show that the connectivity scheme delivers comparable accuracy (mean absolute deviation = 0.3 mHa/electron) to twist averaging, with the benefit of being much faster to compute. 
The connectivity scheme is substantially cheaper than the twist-averaging scheme: the $N=294$ twist-averaged calculation, for example, costs $58$ hours, which is about the same time it takes to run the $N=922$ connectivity scheme calculation. A complete set of timings is provided in the Supplementary Information. In \reffig{subfig:diffM}, we compare our connectivity scheme to full twist-averaging over a range of basis set sizes ($M=36$--$2838$ orbitals) for 54 electrons. In \reffig{subfig:diffRs}, we compare the connectivity scheme to full twist-averaging over a range of $r_s$ values ($0.01$--$50.0$ a.u.) for 54 electrons. In both cases there is good agreement between the two methods for all system sizes, showing that the connectivity scheme delivers good accuracy when compared with twist averaging for a range of both basis set sizes (mean absolute deviation $<$ 0.35 mHa/electron) and $r_s$ values (mean absolute deviation $<$ 0.25 mHa/electron) at a decreased cost. \begin{figure} \includegraphics[width=0.5\textwidth,height=\textheight,keepaspectratio]{./Figure4.pdf} \caption{Connectivity scheme CCD correlation energies for electron numbers up to $N=922$ for $r_s=1.0$ in the uniform electron gas (yellow triangles). We fit 10 points (dotted red line) to the function $E=a+bN^{-1}$, as proposed by other authors;\cite{drummond_finite-size_2008} we then use this fit to extrapolate to the thermodynamic limit.} \label{fig:TDLresults} \end{figure} In \reffig{fig:TDLresults}, we show the extrapolation of our connectivity scheme CCD correlation energy to the thermodynamic limit for the $r_s=1.0$ uniform electron gas. We perform calculations up to $N=922$ electrons, and fit these results to the equation $E=a+bN^{-1}$, as proposed by other authors.\cite{drummond_finite-size_2008} We then use this fit to extrapolate the correlation energy to the thermodynamic limit.
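The $E=a+bN^{-1}$ extrapolation is an ordinary linear least-squares fit in the variable $N^{-1}$. A sketch with synthetic energies, where the values of $a$, $b$, the noise level, and the electron numbers are invented for illustration (they are not our data; the real values are tabulated in the Supplementary Information):

```python
import numpy as np

# Synthetic finite-size energies following E = a + b/N plus small noise
# (a, b, and the noise scale are invented for this illustration).
N = np.array([114, 162, 246, 294, 358, 406, 514, 610, 730, 922], float)
rng = np.random.default_rng(2)
E = -0.0340 + 0.8 / N + 1e-4 * rng.standard_normal(N.size)

# Linear least squares: design-matrix columns [1, 1/N] -> coefficients [a, b].
A = np.column_stack([np.ones_like(N), 1.0 / N])
(a, b), *_ = np.linalg.lstsq(A, E, rcond=None)
print(f"TDL estimate a = {a:.4f} Ha/electron, slope b = {b:.2f}")
```

The intercept $a$ is the TDL estimate; extending the data to larger $N$ (as the connectivity scheme makes affordable) tightens the fit of the intercept.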
We also performed the same extrapolation for the twist-averaged data set up to $N=294$ electrons (not shown). The extrapolations predict the TDL energy to be $-0.0340(8)$ Ha/electron for the connectivity scheme and $-0.033(4)$ Ha/electron for the twist-averaged scheme, a difference of $0.001(4)$ Ha/electron. The numbers in parentheses are errors in the final digit. These agree within error, and the connectivity scheme has an improved error due to having more data points. {Next, we demonstrate how to use this method to obtain a complete basis set and thermodynamic limit estimate for the uniform electron gas. Connectivity scheme CCD energies were collected for the $N=54$ electron system with basis sets varying from $M=922$ to $M=2838$ orbitals, and for systems with electron numbers varying from $N=162$ to $610$, with $M\approx 4N$. These data allow us to extrapolate to both the complete basis set limit and the thermodynamic limit by using the numerical approach set out in our previous work.\cite{shepherd_communication:_2016} This yields an energy of 0.0566(6), with the error in parentheses resulting from the extrapolations; this is in good agreement with our prior estimate, with significantly less error.\cite{shepherd_communication:_2016} For more details the reader is referred to the Supplementary Information. } { Finally, in \reffig{fig:BPCompresults}, we compare the CCD energies from full twist-averaging, our connectivity scheme, and a single calculation using the Baldereschi point as the twist angle. This point, first developed for insulators, is well known for the role it played in developing efficient Brillouin-zone integrations\cite{baldereschi_mean-value_1973,chadi_special_1973,cunningham_special_1974,monkhorst_special_1976} and was subsequently used as the center point of uniform-grid twist averaging by Drummond \emph{et al.}\cite{drummond_quantum_2009}.
At higher electron numbers ($N\geq 162$) the difference between the Baldereschi-point (BP) and the twist-averaged energies falls below 1 mHa/electron as all of the approaches converge to the same energy. At small electron numbers, however, the Baldereschi point significantly deviates from the twist-averaged energy, while the connectivity scheme is a much better approximation.} \begin{figure} \includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{./Figure5.pdf} \caption{{All energies shown reflect the difference in correlation energy between the $\Gamma$-point and the relevant calculation. The connectivity algorithm delivers comparable corrections to the correlation energy (relative to the $\Gamma$-point) when compared with twist averaging across a wide range of electron numbers. The Baldereschi point only delivers comparable corrections to the correlation energy (relative to the $\Gamma$-point) at higher electron numbers ($N \geq 162$) when compared with twist averaging. Twist averaging is performed over 100 twist angles. Standard errors are calculated in the normal fashion for twist averaging, $\sigma\approx\sqrt{\mathrm{Var}(E_{\mathrm{CCD}}({{\bf k}_s})) / N_s}$.}} \label{fig:BPCompresults} \end{figure} \section{Discussion \& concluding remarks} Our results show that a finite electron gas is best able to reproduce the twist-averaged total and correlation energies when a special $\bf{k}_s$-point is chosen to minimize the differences between the momentum connectivity of the finite system and a reference (here, a twist-averaged finite system). {Our interpretation of the connectivity-derived special $\bf{k}_s$-point's utility is that the low-momentum two-particle excitations from HF often suffer from finite size errors due to the shape of the Fermi surface in $k$-space. By finding a particularly representative $k_s$-point, we aim to take the `best case' of a representative shape--or, at least, as best as can be managed by a truly finite system.
} When we examine the occupied orbitals in $k$-space at the special $\bf{k}_s$-point, they adopt low-symmetry patterns that tend more toward the shape of a sphere than the $\Gamma$-point distribution. Though we have made significant progress here towards ameliorating finite size error, {there are still two open questions. First, could our method be modified in order to minimize the energy difference to the thermodynamic limit rather than just to the twist-averaged energy? The second open question surrounds the extrapolation -- in particular, what is the \emph{actual} form of the energy as the system size tends to infinity?} We could investigate this source of error by comparing with the known high-density limit of RPA, which CCD is expected to be able to capture. We leave both of these investigations for future work. Overall, the results here should improve our ability to understand infinite-sized model systems that are necessarily represented as finite systems, such as the electron gas with varying dimensions, the Hubbard model, and the models of nuclear matter we previously studied. \cite{shepherd_communication:_2016,Baardsen2,Baardsen1} This communication is timely due to a resurgence of interest in the uniform electron gas~\cite{neufeld_study_2017,white_time-dependent_2018,spencer_large_2018,mcclain_spectral_2016,spencer_hande-qmc_2019,malone_accurate_2016,shepherd_many-body_2013,shepherd_range-separated_2014,gruneis_explicitly_2013} and of twist-averaged coupled cluster calculations.~\cite{gruber_applying_2018,hagen_coupledcluster_2014} We expect this work can immediately be applied to improve calculations. { Our long-term goals are to use this approach to study realistic systems. Though calculations are left for future manuscripts, we expect to follow a similar approach to our prior work in this area. 
In particular, we start by observing that twist averaging works in the same way in plane-wave ab initio calculations, where the energy is still obtained as a sum over matrix elements $v_{ijab}$ (as in \refeq{eq:ERIs}) that are offset by a twist angle. It should therefore be possible to choose the twist angle in the way we propose here: for a cubic system with $N$ electrons and a box length of $L$, the same twist angle used here should work. As such, we will soon be applying this approach to real solids and leave this for a future study. } {\bf \emph{Supplementary Material.--} } The reader is directed to the supplementary material for raw data tables and illustrations mentioned in the text. {\bf \emph{Acknowledgements.--} } JJS and TM acknowledge the University of Iowa for funding. JJS thanks the University of Iowa for an Old Gold Award. ARM was supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374. The code used throughout this work is a locally modified version of a github repository used in previous work~\cite{shepherd_range-separated_2014,shepherd_coupled_2014}: https://github.com/jamesjshepherd/uegccd.
\section{Introduction} This communication is addressed to readers with no prior knowledge of the interdisciplinary crossroads between linguistics and statistics. It has the twofold purpose of contributing to theoretical linguistics while also being useful for documentation and terminology. Consequently, it includes examples of how this theoretical knowledge can be applied to solving practical problems. The intention is to introduce the subject, but also to raise awareness and to clarify. To raise awareness, because quantitative linguistics is unfortunately not only a marginal area within present-day linguistics, but one whose very existence is often ignored by linguists and statisticians alike. To clarify, because the relationship between statistics and language is neither a novelty nor part of the world of the ``new technologies.'' We are talking about a tradition that has been disseminating concepts and methods for more than sixty years, methods that have no intrinsic relationship with computer science. Computers are obviously necessary for carrying out studies in quantitative linguistics, but talking about these topics does not mean talking about software, because that would be to confuse the observed phenomenon with the instrument of observation. The means are certainly decisive since, as Saussure said, the point of view defines the object. This, however, must not lead to the error of reifying ideas in the form of a piece of software. In short, what matters is to know which studies have been done or can be done, and to become aware that this discipline is not limited to counting the number of times two words appear together in a corpus. As for my legitimacy as a speaker, I am here because of my role at the IULA\footnote{Institut Universitari de Lingüística Aplicada (http://www.iula.upf.edu). The contractual relationship was in force at the time of the talk (2009).}, which consists of assimilating the existing knowledge on quantitative linguistics, applying this knowledge to the solution of practical problems and, at the same time, trying to propose new knowledge in scientific forums. I present nothing new in this communication. Instead, I will revisit some ideas that I have already dealt with in other works. It is important to note that I do not necessarily represent the opinion of my colleagues. I refer in particular to a protocol that includes a commitment to language independence, that is, first finding out to what extent useful conclusions can be drawn without introducing explicit knowledge about any particular language. This communication is organized as follows: in Section 2, we analyze the existing confrontation between two very different ways of approaching the study of language, with respect to which linguistics finds itself in an ambivalent position: the humanistic world, to call it something even if the label is slightly imprecise, and the scientific world, in particular the world of the ``hard sciences'' as opposed to the social sciences, where quantitative thinking is sometimes still viewed with suspicion. Next, in Section 3, we turn to linguistic analysis approached from a statistical perspective. We specifically analyze the concept of word combinatorics from three different perspectives: in Subsection 3.1, studies of the association between the units that combine; in Subsection 3.2, the way this combination of units is distributed in a corpus and the conclusions we can derive from it; and finally, in Subsection 3.3, ways of computing the similarity between units according to their combinatorial possibilities.
As an example, we will analyze the bigram and establish the meaning of this unit beyond its formal definition, in order to understand in depth what kind of information it encodes. We will see that, surprising as it may seem, our individual and collective identity is contained in the bigram. As examples of practical applications, in Section 4 we will look at document classification in its different variants (Subsections 4.1 and 4.2), as well as elements for the characterization of meaning and the disambiguation of terminology. Finally, in Subsection 4.3 we will address the discovery of neologisms. Other possible applications exist, among them ongoing lines of research such as the automatic extraction of specialized terminology or the extraction of bilingual terminology from non-parallel corpora, but these lines, despite their interest, will not be treated here because of space limitations. \section{The clash between two cultures} Wilhelm Dilthey (1883) had already noted the epistemological differences between the natural sciences, on the one hand, and the social sciences and humanities (or sciences of the spirit), on the other, continuing a line of thought initiated by Kant. While a mechanistic way of thinking prevails in the natural sciences, with which the consequences of certain events can be predicted, in the sciences of the spirit such determinism is not possible. The response of a human being to a given event is ultimately unpredictable. Even under these circumstances, the sciences of the spirit at least allow us to understand (\textit{verstehen}) the historical and individual circumstances that surround the human.
A milestone, however, in the history of the awareness of the division of culture into scientific knowledge and humanistic knowledge --a division that still structures the curricula of secondary and higher education-- is probably a lecture given by C.P. Snow (1959), in which he describes the mutual suspicion and incomprehension between scientists and intellectuals. Although they belong to the most educated strata of the population, the two groups are ignorant of each other. Even if he later moderated his discourse, on that occasion Snow argued that people with a technical cast of mind are in general uncultured, while intellectuals, for their part, hostile to technical thinking, are generally incapable of grasping the most elementary scientific concepts. This separation of knowledge is particularly interesting within the social sciences, considered ``soft sciences'' as opposed to the rigor of the natural sciences, the ``hard sciences.'' The inclination of social scientists toward one branch of thought or the other will depend on their personal ideological orientation or on that of each faculty or department, but among intellectuals in the social sciences it is common to notice an \textit{a priori} reluctance toward any kind of technical thinking in the study of what is human. This reluctance is represented in the idea of Castoriadis (1975) that with a language reduced to the instrumental one can operate and calculate, but one cannot think, an idea resonating with the polemical observation made by Heidegger that ``science does not think.''
In sociology, this difference was clearly represented by the opposition between the critical thinking and philosophical-historical reflection of the Frankfurt School and the habit of the American sociologists of Mass Communication Research of promoting the application of quantitative methods over theoretical reflection, a confrontation that continued despite the collaboration between some of the leading exponents of both camps, such as Theodor Adorno and Paul Lazarsfeld. The case is particularly interesting in linguistics --if you will, ``the hardest of the soft sciences.'' Even experienced linguists express surprise upon realizing that a quantitative linguistics exists. Those who are ``humanities people'' do not know about ``numbers.'' Mandelbrot (1961) was still in time to revive the question of what linguistics is and to draw a distinction between grammarians and linguists. For the former, what prevails is knowledge of a particular language and of what can and cannot be grammatically correct; whereas, according to this author, linguistics belongs to the world of the hard sciences, and therefore what matters to linguists is not so much the particular characteristics of each language, which are of infinite diversity, but the structural properties of language (an attitude to which Saussure would surely have no objection). The study of these properties makes possible scientific statements whose validity transcends the knowledge one may have of any particular language, in accordance with the scientific spirit, which is inclined toward generalization, since there is not, or should not be, a science of the particular. Interdisciplinary crossover, however, is difficult.
Those of us who come from fields closer to linguistics are in general poorly informed about the most elementary mathematical concepts, and starting from scratch in this field is laborious, especially for those who lack the habits of thought of the hard sciences. Nevertheless, this is, without a doubt, a field of study that justifies the challenge. For this reason, through this presentation I hope to spread this interest and to contribute arguments to the blurring of the barriers between hard and soft sciences, or between scientific and humanistic knowledge in general. These barriers are already blurring, and linguistics is not the only example. Literary theory, the humanistic world par excellence, is also beginning to feel the siege of statistics. One example is the contribution statistics is making to disputes over the authorship of literary works, in cases involving figures as prominent as Shakespeare (Vickers, 2002). \section{Information as probability} Following Shannon (1948), we can estimate the amount of information as the probability of occurrence of a sign in a message, a measure of the amount of surprise a given event can cause us. To explain it in simple words, in certain contexts we know that some events are more or less normal while others are unexpected. In language, certain concatenations are more predictable than others. If every day, on leaving work, the boss says ``see you tomorrow'' to the employee, after a series of events of this kind the utterance carries little information. But if one day the text changes to ``this company will no longer be requiring your services,'' we will say that this second utterance is comparatively more informative, that is, that it causes greater surprise.
This surprise is directly related to the probability of the message appearing (the surprise will not be so great if the employee is used to being dismissed from different jobs). The criterion of frequency as an estimate of probability is the same one we apply when drawing balls from an urn. If we assume every ball has the same probability of being drawn, and if, drawing them one by one, we observe that some balls are black and others white, and after drawing one hundred balls we find that ninety-five were black, this circumstance, even if only intuitively, will lead us to suspect that the next ball, number 101, has a 95\% probability of being black. We can apply this intuition to the study of language and thus assign each sign a value representing its amount of information according to its probability of appearing in a message. In Equation [1], the probability of occurrence of a given word $i$ is expressed as $p(i)$, $f(i)$ is the frequency of $i$ in a given corpus and $N$ is the total number of words in that corpus. \begin{equation} p(i) = f(i) / N \label{Eq1} \end{equation} In the lexicon, some words are more informative than others. The appearance of words such as \textit{el, de} or \textit{que} in a Spanish text surprises us little, and that is why we say they carry little information. If we order all the words of a corpus by decreasing frequency, we observe that the frequency of a unit is a function of its position in the ranking ($r$); thus Equation [2] holds, approximately: \begin{equation} f(r) \propto 1 / r \label{Eq2} \end{equation} If we multiply the frequency of a unit by its rank (Equation [3]), we obtain an approximately constant value. \begin{equation} c = f \cdot r \label{Eq3} \end{equation} The curve of function [2] also describes the distribution of income in capitalist societies, known as the Pareto law after Vilfredo Pareto, who described it in 1906. Ordered from highest to lowest income, a few individuals turn out to hold most of the wealth, while the great majority receives a minimal share. Among linguists, its discovery is attributed to J. Estoup, who described it in 1916, although it was popularized by G. Zipf in 1949. Interest in Zipf's law declined, however, after the study by Mandelbrot (1961), who reformulated it to fit the observed data better (Equation [4]), particularly at the highest and lowest ranks of the curve. \begin{equation} f(r) = P (r + p)^{-B} \label{Eq4} \end{equation} In Mandelbrot's formula, $f$ is the frequency and $r$ the rank, while $P$, $p$ and $B$ are constant parameters. Herdan (1964), however, objects that these three parameters are not constant but depend on corpus size. The consequence is that the formula could not be applied to the comparison of samples of different sizes in order, for example, to compare their lexical richness. Vocabulary richness is directly related to the amount of information of the signs, which determines how difficult or dense a text is to read. This is what Mandelbrot calls the ``temperature of discourse''. In his case, he considered the relation between the length and the vocabulary of a text, that is, the number of different words divided by the total number of words. But we can define different measures of vocabulary richness for an author or a text not only in this way, but also by relating the text under analysis to whatever prior knowledge we may have of the language it is written in. 
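The estimates in Equations [1] to [3] are straightforward to reproduce. A minimal sketch in Python, using an invented toy token list rather than a real corpus:

```python
from collections import Counter

def relative_frequencies(tokens):
    """Equation [1]: p(i) = f(i) / N for every word i in the corpus."""
    n = len(tokens)
    return {word: freq / n for word, freq in Counter(tokens).items()}

def rank_frequency_table(tokens):
    """Words by decreasing frequency, with their rank r and the product f * r.

    Under Zipf's law (Equations [2] and [3]) the product f * r should
    remain roughly constant across ranks.
    """
    ordered = Counter(tokens).most_common()
    return [(rank, word, freq, freq * rank)
            for rank, (word, freq) in enumerate(ordered, start=1)]

# Invented toy corpus: 'de' is the most frequent form, so it gets rank 1.
tokens = ["de", "la", "de", "casa", "de", "la", "perro"]
probs = relative_frequencies(tokens)
table = rank_frequency_table(tokens)
```

On a real corpus, plotting the fourth column against rank would show how far the sample deviates from the ideal constant.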
This prior knowledge can take the form of a language model built on a corpus of $n$ million words, a corpus we might call a reference corpus of a language, made up of press texts or other genres belonging to a given language or dialectal variety. "Reference corpus" is in fact a misnomer, since such a corpus, however large, will always carry some bias and will never truly be a reference for the language. Used as a model, however, it lets us gauge the rarity of the words used by a text or an author, since for us it represents a standard of what can be considered "normal" language. \subsection{Association} However interesting the assignment of information to individual signs may be, it is much more interesting to estimate their combinatorial probabilities. If signs combined randomly in language, their probabilities of combination would equal the product of their individual probabilities of occurrence. The random combination of words $i$ and $j$ (Equation [5]) is defined by the joint probability of $i$ and $j$ (expressed here as an intersection) being equal to the probability of $i$ multiplied by that of $j$. \begin{equation} p (i \cap j) = p (i) \cdot p (j) \label{Eq5} \end{equation} There is an overwhelming number and diversity of measures for estimating the combination probabilities of words, or of events in general (Muller, 1973; Manning and Schütze, 1999; Evert, 2004; among others). In linguistics we see these measures applied to the extraction of multi-word specialized terminology or to the study of collocations, an entire chapter in the study of language. Word combinations are not given by grammar alone, and this undoubtedly has its correlate in co-occurrence frequencies. 
For example, in English one says \textit{strong coffee} but not \textit{powerful coffee}; conversely, we can speak of a \textit{powerful computer} but not of a \textit{strong computer}\footnote{This last example is interesting because at present both word sequences have practically the same frequency in Google, something that can mislead the unwary user: the second form, \textit{strong computer}, always appears as part of larger structures such as \textit{strong computer password}. That is, the head on which \textit{strong} depends is in this case not \textit{computer} but \textit{password}, or \textit{skills}, or \textit{science, background}, and so on.}. In every language, and even in every domain of specialization, there are certain preferences in how words combine across grammatical categories (verb-noun, adjective-noun, noun-noun, etc.). For pragmatic reasons, things tend to be said in a particular way, and although grammar would allow us to phrase the text differently, doing so would risk confusing the receiver if the language, domain or register already has a typical or idiosyncratic way of saying what we want to say. Association statistics can tell us about the typical way the words of a language combine, because they answer the question of how probable it is that two events occur together in the same situation, or, more precisely, whether the frequency with which two events appear in the same situation can be attributed to chance. An event can be the appearance of a word, and the situation can be a text, a paragraph, a sentence, a "window" of $n$ words, etc. The words may also appear concatenated, or not. A sequence of two words is a bigram, a sequence of three units a trigram, and one of $n$ units an $n$-gram. 
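The notion of a word $n$-gram can be made concrete in a few lines of Python; the token sequence below is an invented example:

```python
def ngrams(tokens, n):
    """All contiguous sequences of n tokens: bigrams for n = 2, trigrams for n = 3."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Invented token sequence.
tokens = ["the", "platypus", "has", "four", "legs"]
bigrams = ngrams(tokens, 2)
trigrams = ngrams(tokens, 3)
```

The same function works unchanged over sequences of letters or of grammatical categories, since it only assumes a list of symbols.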
Bear in mind, though, that an $n$-gram could be defined differently, as a sequence of letters or of grammatical categories. Co-occurrence, moreover, can be defined other than sequentially: we can define it as the appearance of two words within a context window, regardless of the order in which they appear. Figures 1 and 2 illustrate, for example, a co-occurrence criterion that consists of counting how many times two words appear, at different distances and in different orders, within a context window of twenty words. In both cases we are analyzing the words that co-occur with the English form \textit{platypus} in a corpus downloaded from the Internet. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig1.png} \caption{\label{fig1}Histogram characterizing the co-occurrence of the form \textit{platypus} and the form \textit{anatinus} (part of its scientific name). Example 1: \textit{...the platypus ornithorhynchus anatinus is a semiaquatic mammal endemic to eastern Australia, including...}} \end{figure} In Figure 1 we observe that the occurrences of the form \textit{anatinus}, one of the words it is associated with, are spread to the left and right of \textit{platypus}. We see that the occurrences of \textit{anatinus} concentrate at position +2, that is, most of the time the form \textit{anatinus} appears two positions after the form \textit{platypus}, as in Example 1. In Figure 2 we observe the same for the form \textit{has}, although now the form concentrates at position +1, as in Example 2. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig2.png} \caption{\label{fig2}Histogram for the forms \textit{platypus} and \textit{has}. Example 2: \textit{...the platypus has four legs which extend horizontally from its body...}} \end{figure} In corpus linguistics it is common to use association measures, not so much to falsify a null hypothesis according to which the elements under study combine by chance, but rather to rank combinations of elements by the weight these measures assign them. Association measures can be divided into types according to whether they are symmetric or asymmetric. Among the symmetric measures we find mutual information (Equation [6]), derived from Shannon's (1948) information theory. It represents the amount of information that the occurrence of event $i$ gives us about the occurrence of event $j$ (Church and Hanks, 1991; Manning and Schütze, 1999). With this equation we measure, in bits, how predictable an event $i$ is once $j$ has occurred, that is, how much surprise $i$ causes us once $j$ has appeared. At one extreme, high mutual information would mean that $i$ occurs only when $j$ has occurred; at the opposite extreme, after $i$ occurs, $j$ or any other event is equally likely. The measure is symmetric by definition, since it assigns the same value to $i$ given $j$ as to $j$ given $i$. It is not applicable to low-frequency events, since it would attribute a high association to those that appear together by mere chance. \begin{equation} MI(i, j) = \log_2 \frac{p(i,j)} { p(i) p(j) } \label{Eq6} \end{equation} Among the asymmetric association measures, in turn, we find conditional probability (Equation [7]). It is asymmetric because the probability of $i$ given $j$ need not equal the probability of $j$ given $i$. 
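Both the symmetric and the asymmetric measure can be estimated directly from corpus counts. A minimal sketch in Python, with invented counts for illustration only:

```python
import math

def mutual_information(p_ij, p_i, p_j):
    """Equation [6]: MI(i, j) = log2( p(i, j) / (p(i) * p(j)) ), in bits."""
    return math.log2(p_ij / (p_i * p_j))

def conditional_probability(p_ij, p_j):
    """Equation [7]: p(i | j) = p(i, j) / p(j)."""
    return p_ij / p_j

# Invented counts over N contexts.
N = 10_000
f_i, f_j, f_ij = 50, 40, 30            # occurrences of i, of j, and of both together
p_i, p_j, p_ij = f_i / N, f_j / N, f_ij / N

mi = mutual_information(p_ij, p_i, p_j)            # well above 0: more co-occurrence than chance
p_i_given_j = conditional_probability(p_ij, p_j)   # j strongly predicts i
p_j_given_i = conditional_probability(p_ij, p_i)   # the asymmetry: a different value
```

Note that under independence, $p(i,j) = p(i)\,p(j)$, the mutual information is exactly zero.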
For example, if $j$ is the Spanish word \textit{augurio} and $i$ is \textit{mal} (or \textit{buen}), the word \textit{augurio} predicts \textit{mal}, but \textit{mal} does not predict \textit{augurio} at all. \begin{equation} p(i | j) = \frac{p(i \cap j)} { p(j) } \label{Eq7} \end{equation} So far we have seen examples with bigrams, that is, sequences of two words. If we are estimating the probability of occurrence of a bigram, we can also return to Equation [1] and define the probability as the bigram's frequency of appearance divided by the total number of bigrams observed in the corpus. We will see in Section 4 that it is possible, by studying only the frequencies of bigrams, to recognize the writing of individual authors. This is possible because language is a system of options and choices. Language offers the speaker or author different combinatorial possibilities, and through successive choices the author constructs himself or herself, producing combinations that are recurrent or typical of that author in comparison with others. Nor does this apply only to authors: the dialectal varieties of different communities or nations also have a particular way of combining words, forming patterns that a computer can recognize by applying a simple statistical calculation. These patterns, needless to say, are completely imperceptible to the human eye, and whoever produces them is unaware of doing so. \subsection{Distribution} The previous section offered a view of the corpus as a continuous space in which event-words can co-occur, relying on the notion of a context window to define when two words appear together. This section, in contrast, offers a different perspective: the corpus divided according to some criterion. 
First we will comment on some examples of how to study or visualize the distribution of units, or of combinations of units, in corpora divided in different ways. Then we will consider how to rank the units of a corpus according to the behavior of their distribution curves. The first example is the analysis of the distribution of terms in a specific document. For various purposes, whether theoretical discourse analysis or the construction of indexing systems for information retrieval, we may want to find out how the occurrences of certain terms are distributed across an author's work. Certain key terms of a work may recur regularly throughout the text; other terms may concentrate in particular chapters. They may appear in the introduction, for example, since its function is to introduce the reader to the concepts the text will later develop, linked to the knowledge the reader is assumed to have; yet such introductory terms need not be fundamental to the work. Figure 3, for example, shows that three key terms in an English version of Kant's Critique of Pure Reason, \textit{concepts}, \textit{empirical} and \textit{intuition}, are distributed regularly across the work, although \textit{intuition} concentrates in the chapter devoted to aesthetics. It is also possible for a large number of words to be distributed regularly throughout a work not because they are important to its content, but because they belong to the system of the language. For this reason, studies of the distribution within a specific work must also take into account the distribution of the units in a corpus. Figure 4 shows an example of the distribution of units, this time in a diachronic corpus. 
These are the frequencies of words in the archives of the newspaper El País\footnote{See http://www.elpais.es.}. Each division on the horizontal axis represents all the editions of a single year; the vertical axis represents the relative frequency of a given word or word combination in each year. We can observe that while some words are in continuous use over time, since they belong to the core vocabulary of the language (Figure 4), other units fluctuate in use, since they refer to extralinguistic concepts whose currency depends on the thematic agenda of the media (Figure 5). \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig3.png} \caption{\label{fig3}Distribution of the forms \textit{concepts}, \textit{empirical} and \textit{intuition} across the chapters of an English version of Kant's ``Critique of Pure Reason''. The horizontal axis represents the chapters and the vertical axis the relative frequency.} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig4.png} \caption{\label{fig4}Distribution of the forms \textit{después} and \textit{entonces}, two words of the core vocabulary of Spanish, in the archives of El País for the period 1976-2007. The horizontal axis represents time and the vertical axis the relative frequency.} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig5.png} \caption{\label{fig5}Distribution of the forms \textit{demencia} and \textit{Alzheimer} in the same corpus, two units that refer to extralinguistic knowledge.} \end{figure} A different case is that of two units which, though also well established, oscillate as a result of the evolution of the semantic system of the language. 
We exemplify this in Figure 6 with the units \textit{hombre} and \textit{mujer}, which reflect the ideological development of a society becoming conscious of sexist language. While in 1976 the word \textit{hombre} was much more frequent than \textit{mujer}, the difference narrows over time until the two reach the same frequency of use in 2007. Based on the behavior of the frequency distribution curves of units in these partitioned corpora, several coefficients interest us for different purposes. In some cases we will be interested in units, or combinations of units, with a rising frequency of use, as in neology extraction (Subsection 4.3). In other cases we will want to identify the consolidated vocabulary of a language, in contrast with referential units, that is, those that point to extralinguistic knowledge; here we look for the units with the flattest curves. For the opposite case, we can characterize the irregularity of a distribution with Formula [8] (Nazar, 2008), which measures the dispersion $D$ of a unit $t$ as the product of the maximum frequency of $t$, $\max f(t)$, i.e. the frequency of $t$ in the partition where it is most frequent, and $Cr(t)$, the number of partitions in which $t$ has frequency 0 or a frequency below a given parameter $k$. \begin{equation} D(t) = \max f(t) \cdot Cr(t) \label{Eq8} \end{equation} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig6.png} \caption{\label{fig6}Distribution of the forms \textit{hombre} and \textit{mujer} in the same corpus.} \end{figure} \subsection{Similarity} In this section we treat the concept of similarity in a broad sense. 
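Returning to the dispersion coefficient of Equation [8], a minimal sketch in Python; the frequency lists are invented, and reading the threshold as $f < k$ with $k = 1$ is one possible interpretation of the parameter:

```python
def dispersion(freqs, k=1):
    """Equation [8]: D(t) = max f(t) * Cr(t).

    `freqs` holds the frequency of unit t in each partition of the corpus;
    Cr(t) counts the partitions where the frequency is 0 or below the
    parameter k (read here as f < k, one possible interpretation).
    Units concentrated in a few partitions get high dispersion scores."""
    return max(freqs) * sum(1 for f in freqs if f < k)

# Invented yearly frequencies:
bursty = [0, 0, 0, 12, 0, 0, 9, 0]   # an 'Alzheimer'-like referential unit
steady = [3, 4, 3, 3, 4, 3, 3, 4]    # an 'entonces'-like core-vocabulary unit
```

Units of the core vocabulary, present in every partition, score zero; bursty referential units score high.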
We could speak exclusively of similarity between linguistic entities, but it is worth knowing that the similarity between different complex objects can also be computed, provided we are able to encode them as vectors. We can group different objects by their similarity, defined by the attributes they share, with the attributes of each object given in the form of a vector. A vector can represent many things: a document, the co-occurrence profile of a term, the predicates a noun tends to appear with, etc. The number of values in a vector determines its dimensionality $n$, where the $x$ are its components (Equation [9]). \begin{equation} \vec{x}= ( x_1, x_2, x_3, ..., x_n ) \label{Eq9} \end{equation} A vector is easily pictured as a row of a matrix. Table 1, for example, shows a document-by-term matrix, while Table 2 shows a term-by-term matrix. \begin{table} \centering \begin{tabular}{c|c|c|c|c} \hline & $Term_1$ & $Term_2$ & $Term_3$ & ... \\ \hline $Doc_1$ & 1 & 0 & 1 \\ $Doc_2$ & 0 & 1 & 1 \\ $Doc_3$ & 0 & 1 & 0 \\ \hline ... & ... & ... & ... & ... \\ \hline \end{tabular} \caption{\label{docxterm}Document-by-term matrix.} \end{table} \begin{table} \centering \begin{tabular}{c|c|c|c|c} \hline & $Term_1$ & $Term_2$ & $Term_3$ & ... \\ \hline $Term_1$ & 1 & 0 & 1 \\ $Term_2$ & -- & 0 & 1 \\ $Term_3$ & -- & -- & 0 \\ \hline ... & ... & ... & ... & ... \\ \hline \end{tabular} \caption{\label{termxterm}Term-by-term matrix.} \end{table} If the objects being compared were terms, and the components of their vectors represented the letter $n$-grams that make them up, we could use string similarity measures to obtain, among other things, a form of pseudo-lemmatization when working with untagged texts, since this methodology can detect the similarity between strings such as \textit{enfermedad} and \textit{enfermedades}, or identify terminological variants, as in the case of \textit{superficie pulmonar} and \textit{superficie de los pulmones}. With similarity measures like these we can build, for example, a program that, given an input term, returns a list of terms in a corpus that are morphologically similar to it. The same can be done with documents: given a particular document, the program ranks the remaining documents of the corpus by their similarity to it. The possibilities, however, go further still. For example, with a collaborator, Vanesa Vidal, we ran an experiment comparing different specialized verbs according to the nouns these verbs tend to appear with. \begin{table} \centering \begin{tabular}{c|c|c|c|c|c} \hline & $Nom_1$ & $Nom_2$ & $Nom_3$ & $Nom_4$ &... \\ \hline $Verb_1$ & 0 & 0 & 0 & 1 & ...\\ $Verb_2$ & 1 & 0 & 0 & 0 & ...\\ $Verb_3$ & 0 & 0 & 1 & 0 & ...\\ \hline ... & ... & ... & ... & ... & ...\\ \hline \end{tabular} \caption{\label{verbxnom}Verb-by-noun matrix.} \end{table} Table 3 shows a fragment of a matrix with hundreds of rows and columns, crossing the co-occurrence information of verbs (rows) and nouns (columns) in a genomics corpus. It is a binary matrix, since each cell encodes the appearance or non-appearance of the verb-noun combination. 
The automatic comparison of all verbs against each other yields a list of the most similar verb groups, that is, those related to the same, or nearly the same, group of nouns. In this way, without using any information about morphological or orthographic similarity, we find that in Spanish, in the domain of genomics, the verbs \textit{enrollar} and \textit{desenrollar} are highly similar because they appear with the nouns \textit{hélice, cadena, adn, hebra}, etc.; likewise, the verbs \textit{beber}, \textit{ingerir} and \textit{reabsorber} resemble one another because they share the nouns \textit{agua}, \textit{cantidad}, \textit{cola}, \textit{célula} and \textit{glucosa}, among others. Various authors have adopted broadly similar strategies, not for the study of verb-noun combinations, but for the discovery of synonyms, near-synonyms or equivalents across languages, relating elements that share the same neighbors (Nazar, 2010\footnote{This research was in progress at the time of the lecture.}). Among other similarity measures, the Dice measure is appropriate for comparing vectors with binary values. What this measure does is count the number of dimensions in which both vectors have the value 1, relative to their total number of active components. If X and Y are two such vectors, seen as sets of active dimensions, the measure is expressed in Equation [10], where $|X|$ is the cardinality of X, that is, its number of components. It is multiplied by 2 to obtain a scale ranging from 0.0 to 1.0, the latter being total similarity. \begin{equation} Dice(X,Y) = \frac{ 2 |X \cap Y| } { |X| + |Y|} \label{Dice} \end{equation} The Jaccard measure (Equation [11]) is similar to the previous one, but it normalizes by the total number of dimensions involved, thereby penalizing pairs that share few dimensions in proportion to the total. 
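A minimal sketch of the Dice and Jaccard coefficients on binary vectors, encoded here as sets of active dimensions; the noun sets are shortened, invented versions of the genomics examples above:

```python
def dice(x, y):
    """Equation [10]: 2 |X ∩ Y| / (|X| + |Y|), on sets of active dimensions."""
    return 2 * len(x & y) / (len(x) + len(y))

def jaccard(x, y):
    """Equation [11]: |X ∩ Y| / |X ∪ Y|."""
    return len(x & y) / len(x | y)

# Each verb represented as the set of nouns it co-occurs with.
enrollar = {"helice", "cadena", "adn", "hebra"}
desenrollar = {"helice", "cadena", "adn", "hebra", "doble"}
beber = {"agua", "cantidad", "cola"}
```

Verbs with nearly identical noun sets score close to 1.0 under both measures; verbs with disjoint sets score 0.0.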
\begin{equation} Jaccard(X,Y) = \frac{| X \cap Y | } { |X \cup Y |} \label{Jaccard} \end{equation} \section{Practical applications} Although the previous section already suggests some practical applications, this section presents a broader range. We will analyze the application of similarity and co-occurrence measures to automatic document classification in the two forms this practice currently takes: classification with supervised and with unsupervised learning. Finally, we will briefly comment on the application of distribution measures to the discovery of neology. Lack of space forces us to leave aside topics that would have been well worth discussing, such as the application of statistical methodologies to the extraction of specialized terminology, or the methodologies for extracting bilingual terminology from non-parallel corpora, both ongoing lines of research. \subsection{Document classification} As is well known, automatic document classification algorithms are divided into supervised and unsupervised (Manning and Schütze, 1999; Sebastiani, 2002). In both cases we are grouping objects (documents, in this context); the difference is that in the first, the classification algorithm has prior knowledge about the objects to be classified, having gone through a "training" process in which a user has shown it examples of objects classified by some criterion. In the second case, the classification is done without this knowledge: the algorithm does not know how many categories there are, or which ones, and the classification is therefore an emergent property of the similarities among the objects themselves. 
\subsubsection{Classification with supervised learning} In 2004 I joined two research groups working in areas that may at first seem dissimilar. One group was working on authorship attribution with a view to applying it in forensic linguistics. The other, closer to terminology, was interested in a systematic way of classifying documents both by topic and by degree of specialization. The working philosophy in both groups was the same: to design strategies grounded in linguistic knowledge, understood as the manual examination of cases and the identification, guided by the researcher's intuition, of features that might discriminate between the categories. In both cases this is work of enormous complexity, rooted in the researcher's knowledge of the particular language in which the texts are written. In forensic linguistics the features may be, to cite a few examples, idiosyncratic turns of phrase that betray membership of a geographical area or a social condition, or particularities such as the spelling or grammar errors shared by the texts of disputed authorship and texts of undisputed authorship (see Turell, 2005, for an introduction). In classification by topic or degree of specialization, the strategy consisted of finding linguistic features of a thematic domain (the density of specialized terminology in the text, for example) or other morphological and lexical features characteristic of specialized literature (Cabré et al., 2009). In this context the Poppins software was born\footnote{The Poppins program can be run over the Internet at http://www.poppinsweb.com} (Figure 7). 
\begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig7.png} \caption{\label{fig7}Screenshot of the web interface of the Poppins software} \end{figure} This program represents a different approach to classification, since it can be applied to authorship attribution as well as to classification by topic, by degree of specialization, and even to other classification problems the algorithm is trained on, independently of the language of the documents, the thematic domain or the classification criterion. As noted above for supervised algorithms, the logic of the program comprises two main phases. In the first, the training phase, a user "presents" the program with examples of documents arranged into classes. Once this stage is complete, the classification stage consists of taking a new set of documents and arranging them according to the classification learned during training. Its operation is quite basic, since the texts to be classified undergo no processing at all: the only operation performed is computing the frequencies of the different word bigrams in the corpus. Each training class thus becomes a vector whose attributes are bigrams and whose values are their frequencies. Then, given a new document, we compute a similarity measure that consists of summing the frequencies of the bigrams shared by the document to be classified and each of the classes. The comparison yielding the largest sum determines the class chosen for the document. With another collaborator, Marta Sánchez Pol (Nazar and Sánchez Pol, 2006), we found that this program could determine the authorship of a text correctly with a probability of 90\%. 
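The scheme just described can be sketched in a few lines. This is a simplified reconstruction for illustration, not the actual Poppins code, and the tiny training "classes" are invented:

```python
from collections import Counter

def bigram_profile(text):
    """Frequency vector of word bigrams: the only features used."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

def train(examples_by_class):
    """Training phase: merge the bigram profiles of each class's documents."""
    profiles = {}
    for label, docs in examples_by_class.items():
        profile = Counter()
        for doc in docs:
            profile.update(bigram_profile(doc))
        profiles[label] = profile
    return profiles

def classify(document, profiles):
    """Assign the class maximizing the summed frequency of shared bigrams."""
    doc_bigrams = bigram_profile(document)
    return max(profiles,
               key=lambda label: sum(profiles[label][b] for b in doc_bigrams))

# Invented two-class training set.
training = {
    "A": ["el perro come en la casa", "el perro duerme en la casa"],
    "B": ["the platypus has four legs", "the platypus is a mammal"],
}
profiles = train(training)
```

With realistic training sets the profiles would be built from whole documents per class, but the mechanics, bigram counting and a summed-frequency score, are the same.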
The program's interface shows experiments with other cases, such as the Federalist Papers, a famous case of disputed authorship; it attributes the disputed texts to James Madison (Figure 8), which coincides with other studies carried out on the same case (Mosteller and Wallace, 1984). As for classification by topic and by degree of specialization, classification experiments on documents of the IULA Technical Corpus (Vivaldi, 2009) showed similar levels of precision. The experiment can still be repeated in various ways, classifying the documents by language, by dialectal variant, by genre or by other criteria. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig8.png} \caption{\label{fig8}Results of document classification with the Poppins software} \end{figure} \subsubsection{Classification with unsupervised learning} As stated in the introduction to this section, classification with unsupervised learning is the scenario in which the algorithm has not gone through a training stage and therefore does not know which categories the documents must be classified into, or how many there are. If in the previous case we linked document classification to concrete applications such as authorship attribution, here document classification with unsupervised learning can be linked to the disambiguation of terminology, because we pose classification as a disambiguation problem. In this experiment we gather a collection of documents containing an ambiguous form, for example by downloading documents from the Internet, and classify them according to the different senses the form shows within the collection. This classification is carried out by means of lexical co-occurrence graphs. 
Let us first take an example with an ambiguous form such as \textit{ratón} (`mouse') in Spanish, which, in the IULA Technical Corpus --containing documents on computing and on genomics-- can be used to refer either to the computer's peripheral device or to the laboratory animal. In the co-occurrence graphs there is a main node, placed at the top centre, corresponding to the unit under analysis: \textit{ratón} in this case. All the other nodes depend on it. Each node represents a word or a combination of words, and the connections between nodes express that the words those nodes represent appear together in the same contexts in which the analysed unit appears. Figure 9 shows the existence of two regions in the graph, one on the right and one on the left. These two regions --attractors or clusters of nodes-- correspond to each of the senses the form presents. In one case, the units with which \textit{ratón} appears are \textit{cromosoma, mamífero, rata, genoma, laboratorio, bacteria}, among others; whereas, in the other case, the units related to \textit{ratón} are \textit{usuario, pantalla, teclado, clic}, and so on. In my thesis (Nazar, 2010) I present, among other things, the application of this method to the disambiguation of acronyms, since these are ambiguous forms by nature. Thus, given a collection of documents downloaded from the Internet containing an ambiguous form such as NLP, for example, a computer program is able to obtain two clusters representing the two senses of this word: on the one hand, documents referring to the expanded form \textit{natural language processing} and, on the other, documents about \textit{neurolinguistic programming}.
In the first case, NLP is related to units such as \textit{knowledge representation, language technology, functional grammar, machine translation, statistical NLP, computational linguistics}, among others; whereas the second cluster includes units such as \textit{practitioner training, practitioner NLP, gestalt therapy, John Grinder, Richard Bandler, Robert Dilts}, and so on. \subsection{Detection of neology} In this section we analyse the application of some of the distribution measures seen in Section 3.2 for the specific purpose of an experiment in automatic neologism extraction. The results of applying these techniques to neologism extraction, as well as the automatic disambiguation techniques presented in the previous section (4.1.2), were reported in earlier work (Nazar and Vidal, 2008). \begin{figure} \centering \includegraphics[width=1\textwidth]{fig9.png} \caption{\label{fig9}Co-occurrence graph of the polysemous form \textit{ratón}, with one sense referring to the computer's peripheral device in the computing documents and the other used in the genomics documents to refer to the animal, frequently used in laboratories.} \end{figure} Figures 10 and 11 offer plots that are already familiar, since we saw similar curves in Subsection 3.2: trajectories of particular lexical units across the diachronic corpus of El País. They show examples of the behaviour of units we consider neologisms, such as \textit{teléfono móvil, teléfono fijo} and \textit{cambio climático}, units whose frequency of use shows a marked increase over the timeline.
\begin{equation} f(x) = x^k \label{Ideal} \end{equation} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig10.png} \caption{\label{fig10}Distribution of the forms \textit{teléfono móvil} (upper curve) and \textit{teléfono fijo} (lower curve) in the diachronic corpus of El País.} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig11.png} \caption{\label{fig11}Distribution of the form \textit{cambio climático} in the same corpus.} \end{figure} In that work on neologism extraction we defined the behaviour curve of an ideal or theoretical neologism, shown in Figure 12 and defined in Equation (\ref{Ideal}). It is a power curve over the interval of years studied (in our experiments we used $k = 10$). The experiment consisted of taking a sample of $n$ units from the corpus (the units were both isolated words and sequences of up to five words in length) and ranking them according to the Euclidean distance between their frequency curves and the curve of this ideal neologism. In this way we can obtain the units that have entered the language in recent years, units which must then be filtered, since they include forms that are not neologisms, such as proper names or referents that have gained notoriety in recent years. Naturally, this simple method was not effective for semantic neologisms, units which, while formally identical to other forms of the language, start being used with a different meaning. Such forms are a challenge for automatic extraction with traditional methods, but this is the very scenario we encountered in Subsection 4.1.2, where we classified contexts of occurrence of polysemous units.
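The ranking step described above can be sketched as follows. This is a minimal illustration with invented toy frequencies; the normalisation of the yearly series before comparison is a stabilising assumption of the sketch, not a detail taken from the original experiment:

```python
import math

def ideal_curve(n_years, k=10):
    """Ideal neologism f(x) = x^k, normalised over the studied interval."""
    raw = [(i / (n_years - 1)) ** k for i in range(n_years)]
    total = sum(raw)
    return [v / total for v in raw]

def distance_to_ideal(freq_series, k=10):
    """Euclidean distance between a unit's normalised yearly frequency
    curve and the ideal-neologism curve (smaller = more neologism-like)."""
    total = sum(freq_series) or 1
    norm = [v / total for v in freq_series]
    ideal = ideal_curve(len(freq_series), k)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(norm, ideal)))

# Toy yearly frequencies for three units over 6 "years":
units = {
    "telefono movil": [0, 0, 1, 3, 10, 30],     # sharp late rise
    "telefono":       [50, 52, 51, 50, 49, 50],  # stable usage
    "mesa":           [20, 21, 19, 20, 20, 21],  # stable usage
}
ranked = sorted(units, key=lambda u: distance_to_ideal(units[u]))
print(ranked[0])  # the most neologism-like unit comes first
```

Units closest to the ideal curve head the list and are then filtered by hand, exactly as described for proper names and newly prominent referents.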
Such is the case, for example, of the form \textit{palabra de honor}, which, although it has a literal use, in the sense of `giving one's word', has in recent years increasingly been used to designate a certain type of neckline. Although its status as a neologism in this second sense is debatable, since this type of neckline is not new, what is new is the widespread adoption of this use of the term, and the example therefore remains useful. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig12.png} \caption{\label{fig12}Plot of the ideal neologism.} \end{figure} A clustering algorithm similar to the one described in Subsection 3.2 is able to classify all the contexts of occurrence of the form \textit{palabra de honor} in the archives of El País and offer two clusters, with a name for each. Cluster 1 is called ``empeñar'' and cluster 2 is called ``escotes''. Each of these clusters contains a series of lexical units that make up the typical environment of the occurrences of the expression in one sense or the other. Thus, in cluster 1 we have units such as \textit{Astarloa, Barrionuevo, confederal, consentidas, credulidad, empeño, esclarece, Escudero, Fusté, Herrero, incite, inocencia, proclamó, quebrantamiento, reiterada}, etc. These forms relate to the literal sense, as they include proper names of public figures, for whom credibility should not be irrelevant. In the second cluster, by contrast, the typical neighbours belong to the world of fashion: \textit{cubres, drapeados, escotes, Gucci, marrón, modista, ojito, organiza, Swarovski, tonos}, and so on. \section{Conclusions} We have presented here a broad view of the intersection between linguistics and statistics, with some examples of techniques that can be used for the study of language.
These techniques have been accompanied, moreover, by examples of concrete application, such as document classification with or without supervision, as well as the disambiguation of polysemous signs and the detection of neology. It would have been interesting to mention other examples of practical application of these techniques, such as the use of similarity measures for comparing lexical units across different languages, that is, the extraction of bilingual terminology from non-parallel corpora, or for comparing lexical units across different dialectal varieties. A priori, these may appear to be completely different areas of application, especially for those accustomed to approaching such tasks by incorporating explicit rules that encode knowledge of the language or of the thematic domain, as well as semantic information extracted from dictionaries and ontologies, in the case of terminology extraction, or lexicographic exclusion corpora, in the case of neologism extraction. Statistics, by contrast, makes possible a different way of conceiving language: an investigation of complexity, but from an integrative and simplifying perspective. This is because, from the statistical point of view, dissimilar tasks and data begin to appear related. At times, the same methods or the same ways of thinking can be applied to problems that at first seemed completely different. We therefore conceive of statistics as a «trans-discipline». To close this presentation, I would like to stress that the theoretical aspect must not be lost from view. We are not talking merely of «engineering tricks» for solving practical problems with no intrinsic relation to linguistics, as if these solutions were devoid of theory.
It remains to be seen whether statistics and linguistics constitute separate disciplines, or whether there can be something we might call a «statistical sensibility» in linguistic analysis: a way of approaching the data, of noticing patterns, regularities or tendencies in the mass of individual cases where the human eye can see nothing but quantity and diversity. \section*{References} {\small \begin{itemize}[label={},leftmargin=*] \item CABRÉ, M. T.; BACH, C.; DA CUNHA, I.; MORALES, A.; VIVALDI, J. (2009). Comparación de algunas características lingüísticas del discurso especializado frente al discurso general: el caso del discurso económico. XXVII Congreso de AESLA (Ciudad Real, 26--28 March 2009). \item CASTORIADIS, C. (1975). La institución imaginaria de la sociedad. Buenos Aires: Tusquets. \item CHURCH, K.; HANKS, P. (1990). «Word Association Norms, Mutual Information and Lexicography». Computational Linguistics, vol. 16, no. 1, p. 22--29. \item DILTHEY, W. (1986). Introducción a las Ciencias del Espíritu. Madrid: Alianza. \item EVERT, S. (2004). The Statistics of Word Cooccurrences. Doctoral thesis. Stuttgart: University of Stuttgart, Institut für Maschinelle Sprachverarbeitung. \item HERDAN, G. (1964). Quantitative Linguistics. Washington: Butterworths. \item MANDELBROT, B. (1961). «On the theory of word frequencies and Markovian models of discourse». In: Structure of Language and its Mathematical Aspects. Symposia on Applied Mathematics. American Mathematical Society. Vol. 12, p. 190--219. \item MANNING, C.; SCHÜTZE, H. (1999). Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press. \item MOSTELLER, F.; WALLACE, D. (1984). Applied Bayesian and Classical Inference: the Case of the Federalist Papers. New York: Springer. \item MULLER, C. (1973). Estadística Lingüística. Madrid: Gredos. \item NAZAR, R. (2008). Diferencias cuantitativas entre referencia y sentido. Actas del XXVI Congreso de AESLA (Universitat d’Almeria, 3--5 April 2008).
\item NAZAR, R. (2010). A Quantitative Approach to Concept Analysis. Doctoral thesis. Barcelona: Universitat Pompeu Fabra, Institut Universitari de Lingüística Aplicada. \item NAZAR, R.; SÁNCHEZ POL, M. (2006). An Extremely Simple Authorship Attribution System. Second European IAFL Conference on Forensic Linguistics / Language and the Law (Barcelona, 2006). \item NAZAR, R.; VIDAL, V. (2008). Aproximación cuantitativa a la neología. I Congreso Internacional de Neología en las lenguas románicas (Barcelona, 7--10 May 2008). \item SEBASTIANI, F. (2000). «Machine learning in automated text categorization». ACM Press, vol. 34, no. 1. \item SHANNON, C. E. (1948). «A mathematical theory of communication». Bell System Technical Journal, vol. 27 (July), p. 379--423. \item SNOW, C. P. (1959 [1993]). The Two Cultures. Cambridge: Cambridge University Press. \item TURELL, M. (2005). «Presentación». In: Lingüística forense, lengua y derecho: conceptos, métodos y aplicaciones. Barcelona: Universitat Pompeu Fabra, Institut Universitari de Lingüística Aplicada, p. 13--16. \item VICKERS, B. (2002). Counterfeiting Shakespeare. Cambridge: Cambridge University Press. \item VIVALDI, J. (2009). Corpus and exploitation tool: IULACT and bwanaNet. I Congreso Internacional de Lingüística de Corpus (Murcia, 7--9 May 2009). \end{itemize} } \end{document}
\section{Introduction} Magnetic resonance imaging (MRI) is an important non-invasive imaging technique, which enables excellent assessment of structural and functional conditions with no radiation in a reproducible manner. Essentially, MRI aims to reconstruct images from observed signals, whose degradation process can be formulated as follows: \begin{align}\label{formula:mri_forword} y = \mathcal{F}x + n, \end{align} \noindent where $x,y \in \mathbb{C}^{N}$ are the vectors denoting the latent image to reconstruct in the image domain and the observed measurements in \textit{k}-space, $\mathcal{F} \in \mathbb{C}^{N \times N}$ is the 2D discrete Fourier transform (DFT) and $n$ is the noise that inevitably appears in the signal acquisition process. However, acquiring the full measurements $y$ needed to construct a high-quality MR image $x$ is highly time-consuming. Moreover, the long scanning time brings about artefacts arising from the voluntary movements of patients and involuntary physiological movements~\citep{Zbontar2018}. In order to mitigate the long acquisition time of MRI and alleviate aliasing artefacts, a range of methods has been developed to accelerate MRI while obtaining accurate reconstructions. Traditionally, gradient refocusing~\citep{Stehling1991} and a multiple-radio-frequency mediated approach~\citep{Hennig1986} were proposed. Constrained by the Nyquist-Shannon sampling theorem, they did reduce the scanning time, although only by a limited factor. With the development of parallel imaging (PI) and the compressed sensing (CS) theory, fast MRI based on these two frameworks has attracted substantial research and advancement.
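The forward model above, and the aliasing that motivates everything that follows, can be illustrated numerically with a minimal NumPy sketch (a toy phantom, not real MR data; the $4\times$ Cartesian mask is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent image x and its fully sampled k-space y = F x + n:
x = np.zeros((64, 64))
x[24:40, 24:40] = 1.0
y = np.fft.fft2(x) + 0.01 * (rng.standard_normal((64, 64))
                             + 1j * rng.standard_normal((64, 64)))

# With full measurements, the inverse DFT recovers x up to the noise level:
x_full = np.fft.ifft2(y)

# Keeping only every 4th phase-encoding line (4x Cartesian acceleration)
# and zero-filling the rest produces a strongly aliased reconstruction:
mask = np.zeros((64, 64))
mask[::4, :] = 1.0
x_zf = np.fft.ifft2(mask * y)

print(np.abs(x_full - x).max())  # tiny: noise only
print(np.abs(x_zf - x).max())    # large: ghost copies of the square
```

The zero-filled reconstruction replicates the object at one quarter of the field of view, which is precisely the artefact that PI, CS and learning-based methods are designed to remove.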
Parallel imaging was introduced to exploit the spatial sensitivity distributions of an array of carefully placed receiver surface coils, reducing the number of measurements required from each coil and thereby alleviating the need to enhance gradient performance while shortening the acquisition time~\citep{Blaimer2004}. The undersampled \textit{k}-space signal in PI MRI can be represented by a general model as: \begin{align}\label{formula:pi_mri_forword} y^q = \mathcal{F}_{u}(\mathcal{S}^q \otimes x) + n^q, \quad q = 1,...,S, \end{align} \noindent where $\mathcal{S}^q$ is the sensitivity map and $\mathcal{F}_{u} \in \mathbb{C}^{M \times N}$ is the undersampled 2D DFT matrix with $M\ll N$ to reduce the measurements of each $y^q$. With $S$ coils applied in parallel, one can obtain $y^{1}, ..., y^{S}$ simultaneously to reconstruct the latent image $x$. Great progress has been made in developing reconstruction techniques for such PI-acquired images, including popular methods such as simultaneous acquisition of spatial harmonics (SMASH)~\citep{Sodickson1997}, sensitivity encoding (SENSE)~\citep{Pruessmann1999} and generalized auto-calibrating partially parallel acquisition (GRAPPA)~\citep{Griswold2002}. The invention of the CS theory~\citep{Donoho2006} further advanced the sampling efficiency of MRI. CS-MRI utilises non-linear reconstruction and sparse transformations to recover the latent image from only a small portion of the \textit{k}-space measurements, at a sampling rate far below the Nyquist rate. The general CS-MRI problem is to find the image minimising the following problem: \begin{align}\label{formula:cs_mri_1} \argmin_{x} \mid\mid\Phi x\mid\mid_{1}, \text{ s.t. } y=\mathcal{F}_{u} x, \end{align} \noindent where $\Phi$ is the sparsifying transform, $\mathcal{F}_{u} \in \mathbb{C}^{M \times N}$ is the undersampled 2D DFT with $M\ll N$, and $y \in \mathbb{C}^{M}$ is the vector of observed undersampled measurements in \textit{k}-space. A range of non-linear reconstruction methods has demonstrated success in solving this problem, including fixed sparsifying transforms such as total variation~\citep{Block2007}, curvelets~\citep{Beladgham2008} and the double-density complex wavelet~\citep{Zhu2013}, as well as adaptive sparsifying models taking advantage of dictionary learning~\citep{Ravishankar2011}. While both CS-MRI and PI-MRI can significantly reduce the required number of measurements in \textit{k}-space, the iterative algorithms required to derive the image prolong the reconstruction time and hence raise concerns when translated to actual clinical use. As a modern and popular method for general image analysis, deep learning has been very successful by exploiting the non-linear and complex nature of deep networks with supervised or unsupervised learning. Convolutional neural networks (CNNs), a special type of deep learning network, enable enhanced latent feature extraction through their very deep hierarchical structure. CNNs have demonstrated their superiority in multiple tasks, including detection~\citep{Girshick2014}, classification~\citep{Szegedy2015}, segmentation~\citep{Long2015} and super-resolution~\citep{Dong2016}. Wang et al.~\citep{Wang2016} pioneered the use of CNNs by extracting latent correlations between undersampled and fully sampled \textit{k}-space data for MRI reconstruction. Yang et al.~\citep{Yang2016} further improved the network structure by re-applying the alternating direction method of multipliers (ADMM), which was originally used in CS-based MR reconstruction methods.
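The iterative CS algorithms that such unrolled networks mimic can be illustrated with a minimal ISTA-style sketch for an $\ell_1$-regularised variant of the problem above, taking the sparsifying transform $\Phi$ to be the identity for brevity (an illustrative toy, not the solver used in any of the cited works):

```python
import numpy as np

def ista_mri(y, mask, lam=0.02, n_iter=200):
    """Proximal-gradient (ISTA-style) iterations for
    min_x 0.5 * ||M F x - y||^2 + lam * ||x||_1,
    with the sparsifying transform taken as the identity
    (i.e. the image itself is assumed sparse)."""
    x = np.zeros(y.shape, dtype=complex)
    for _ in range(n_iter):
        # Data-consistency gradient step (F is unitary with norm="ortho"):
        grad = np.fft.ifft2(mask * (np.fft.fft2(x, norm="ortho") - y),
                            norm="ortho")
        z = x - grad
        # Soft-thresholding: the proximal operator of lam * ||.||_1.
        mag = np.abs(z)
        x = z / np.maximum(mag, 1e-12) * np.maximum(mag - lam, 0.0)
    return x

# Toy example: a 2-sparse image sampled at ~50% of k-space.
rng = np.random.default_rng(1)
x_true = np.zeros((32, 32))
x_true[5, 7] = 1.0
x_true[20, 12] = 1.0
mask = (rng.random((32, 32)) < 0.5).astype(float)
y = mask * np.fft.fft2(x_true, norm="ortho")

x_rec = ista_mri(y, mask)
x_zf = np.fft.ifft2(y, norm="ortho")
print(np.abs(x_rec - x_true).max())  # well below the zero-filled error
```

Unrolled networks such as Deep ADMM-Net keep this alternation between data consistency and a learned proximal step, but replace the hand-crafted thresholding with trainable blocks.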
A cascaded structure was developed by Schlemper et al.~\citep{Schlemper2018} for the more targeted reconstruction of dynamic sequences in cardiac MRI. To enable further latent mapping in the reconstruction model, Zhu et al.~\citep{Zhu2018} developed a novel framework providing denser mapping across domains via their proposed automated transform by manifold approximation. For a long time, CNNs have held a dominant position in the field of computer vision (CV), since convolutions are effective feature extractors. Most deep learning-based MRI reconstruction methods are based on CNNs, including GAN-based models. As Figure~\ref{fig:FIG_Overview}(A) shows, the feature extraction of CNNs is based on convolution, which is locally sensitive and lacks long-range dependency. The receptive field of a CNN is limited by the convolutional kernel size and the network depth: an oversized convolutional kernel incurs a huge computational cost, while an overly deep network can suffer from vanishing gradients. A novel structure, the transformer, which takes advantage of even deeper mapping, a sequence-to-sequence model design~\citep{Sutskever2014} and an adaptive self-attention mechanism~\citep{Vaswani2017,Parikh2016,Cheng2016,Matsoukas2021} with expanding receptive fields (Figure~\ref{fig:FIG_Overview}(A))~\citep{Parmar2018,Salimans2017}, has been proposed recently and was initially popularised in natural language processing (NLP)~\citep{Qiu2020}. It has since been applied to object detection~\citep{Carion2020} and image recognition~\citep{Dosovitskiy2020}, and extended to super-resolution~\citep{Parmar2018} for general image analysis. \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_INTRO/FIG_Overview.pdf} \caption{ Overview of the proposed SwinMR. (A) and (B) are schematic diagrams of the receptive field for 2D convolution (Conv2D), multi-head self-attention (MSA) and shifted windows based multi-head self-attention (W-MSA/SW-MSA). Conv2D is locally sensitive and lacks long-range dependency.
Compared with Conv2D, MSA and (S)W-MSA are globally sensitive and have larger receptive fields. MSA is performed over the whole image space, while (S)W-MSA is performed in shifted windows. (Red box: the receptive field of the operation; green box: the pixel; blue box: the patch in self-attention.) (C) is the overview of SwinMR. (D) shows the results of the proposed SwinMR compared with GT, ZF and another method, DAGAN~\citep{Yang2018}. (IM: the input module; FEM: the feature extraction module; OM: the output module. ZF: undersampled zero-filled MR images; Recon: reconstructed MR images; Multi-Channel GT: multi-channel ground truth MR images; GT: single-channel ground truth MR images; MASK: the undersampling mask; SM: sensitivity maps.) } \label{fig:FIG_Overview} \end{figure} Given its superior ability in image reconstruction and synthesis demonstrated on natural images, the transformer has been applied to MRI in many different ways. For synthesis, it has greatly enhanced cross-modality image synthesis (PET-to-MR by a directional encoder~\citep{Shin2020}, T1-to-T2 by a pyramid structure~\citep{Zhang2021}, and MR-to-CT and T1/T2/PD by a novel aggregated residual transformer block~\citep{Dalmaz2021}). Variants of the transformer have also enabled improved performance in reconstruction and super-resolution tasks. It was first applied to the reconstruction of brain MR imaging~\citep{Korkmaz2021_1}. Korkmaz et al.~\citep{Korkmaz2021_2} developed an unsupervised adversarial method to alleviate the scarcity of training samples. To further improve imaging quality, Feng et al.~\citep{Feng2021_1} enabled end-to-end joint reconstruction and super-resolution, and Feng et al.~\citep{Feng2021_2} further advanced the model for these dual tasks by incorporating task-specific novel cross-attention modules.
However, the shift from NLP tasks to CV tasks leads to two challenges: (1) difference in scale: visual elements (e.g., pixels) in CV tasks tend to vary substantially in scale, unlike language elements (e.g., word tokens) in NLP tasks; (2) higher resolution: the resolution of pixels in images (or frames) tends to be much higher than that of words in sentences~\citep{Liu2021}. Therefore, limiting the scope of self-attention to a local window, as Figure~\ref{fig:FIG_Overview}(A) and (B) shows, trades a degree of global context for lower computational complexity. The \textbf{S}hifted \textbf{win}dows (Swin) transformer~\citep{Liu2021} replaced the traditional multi-head self-attention (MSA) with shifted windows based multi-head self-attention (W-MSA/SW-MSA). Based on the Swin transformer module, Liang et al.~\citep{Liang2021} proposed SwinIR for image restoration tasks. In this work, we introduced SwinMR, a novel parallel imaging coupled Swin transformer based model for fast CS-MRI reconstruction, as Figure~\ref{fig:FIG_Overview}(C) shows. The main contributions can be summarised as follows: \begin{itemize} \item[$\bullet$] A novel parallel imaging coupled Swin transformer based model for fast MRI reconstruction was proposed, as Figure~\ref{fig:FIG_Overview}(C) shows. \item[$\bullet$] A novel multi-channel loss using the sensitivity maps was proposed, which was shown to preserve more texture and detail in the reconstruction results. \item[$\bullet$] A series of ablation studies and comparison experiments was conducted. Experimental studies using different undersampling trajectories with various noise levels were performed to validate the robustness of the proposed SwinMR. \item[$\bullet$] A downstream segmentation experiment was conducted, in which a pre-trained segmentation network was applied to measure the segmentation scores of the reconstructed images.
\end{itemize} \section{Method} \subsection{Classic Model-Based CS-MRI Reconstruction} To recover better spatial information with fewer artefacts from the undersampled \textit{k}-space data, traditional CS-MRI methods usually consider solving the following optimisation problem: \begin{align}\label{formula:cs_mri_2} \min _{x} \frac{1}{2} \mid\mid \mathcal{F}_{u} x-y \mid\mid_{2}^{2}+ \lambda R(\Phi x), \end{align} \noindent where $\Phi$ is the sparsifying transform, e.g., the discrete wavelet transform~\citep{qu2012undersampled}, the gradient operator~\citep{Block2007, wu2017solving} or a dictionary-based transform~\citep{ravishankar2010mr}. $R(\cdot)$ is the regularisation function imposing sparsity, e.g., the $l_1$-norm or the $l_0$-norm, and $\lambda$ is the weight parameter balancing the two terms. The solution of the above problem can be derived by non-linear optimisation solvers such as gradient-based algorithms~\citep{lustig2007sparse} and variable splitting methods~\citep{wang2014compressed, yang2010fast}. Depending on the manually designed regularisation, some models may suffer from long reconstruction times in exchange for better reconstruction quality. Additionally, the manually selected sparsifying transform $\Phi$ can itself introduce undesirable artefacts: TV-based regularisation, which is well known for removing noise while preserving sharp edges, can introduce staircase artefacts~\citep{Beladgham2008}, and the tight wavelet frame transform increases reconstruction efficiency but may lead to blocky artefacts~\citep{cai2020data}. \subsection{CNN-based Fast MRI Reconstruction} To relieve the artefacts caused by hand-crafted regularisation and the long reconstruction times of classic models, deep neural networks, well known as powerful feature extractors, were first applied to CS-MRI in~\citep{Wang2016}.
In that work, a deep convolutional network was applied to directly learn the mapping from undersampled reconstructions to fully sampled reconstructions. Following that, several networks have been proposed to further improve the reconstruction quality. Some works attempted to bridge classic models and deep CNNs by mimicking iterative algorithms in their network architectures. Deep ADMM-Net~\citep{Yang2016} was the first to be trained by unfolding the ADMM optimisation algorithm into network blocks to derive the solution of the general model in Equation (\ref{formula:cs_mri_2}). In~\citep{Schlemper2018}, the deep CNN reconstruction from the lower-quality image was adopted as prior information in a classic CS model as follows: \begin{align}\label{formula:cnn_mri} \min_{x} \frac{1}{2} \mid\mid y - \mathcal{F}_u x\mid\mid^2_2 + \lambda \mid\mid x - f_{\mathrm{CNN}}(x_u\mid\theta) \mid\mid^2_2, \end{align} \noindent where the solution of the above function was adopted into the network architecture iteratively to improve the reconstruction of $f_{\mathrm{CNN}}$, which takes the zero-filled reconstruction $x_u$ as input. On top of CNNs, conditional generative adversarial networks (cGANs) exploited the advantages of deep learning further and proved to enhance the quality of MR image reconstruction to a large extent~\citep{Yang2021, Lv2021_1}.
Such a competitive network introduces a two-player generator-discriminator training mechanism that improves reconstruction performance by alternately optimising the parameters $\theta_{G}$ and $\theta_{D}$ of the generator $G$ and the discriminator $D$, in the general form: \begin{align}\label{formula:gan1} \min_{\theta_{G}} \max_{\theta_{D}} {\mathbb{E}}_{x \sim p_{\text{gt}}} \left[\log D_{\theta_{D}}(x)\right] + {\mathbb{E}}_{{x_u \sim p_{\text{u}}}}\left[\log \left(1-D_{\theta_{D}}\left(G_{\theta_{G}}(x_u)\right)\right)\right], \end{align} \noindent where $G_{\theta_{G}}$ and $D_{\theta_{D}}$ denote the generator and the discriminator with parameters $\theta_{G}$ and $\theta_{D}$, respectively. $x$ and $x_u$ denote the ground truth MR images and the undersampled zero-filled MR images with aliasing artefacts. After training, the generator can yield the corresponding reconstructions $G_{\theta_{G}}(x_u)$ from $x_u$. Variants of generators and discriminators have been developed to cope with various flaws of the original GAN architecture -- improved generators~\citep{Shaul2020}, improved discriminators~\citep{Huang2021}, loss functions~\citep{Quan2018}, regularisation~\citep{Ma2021}, training stability via the Wasserstein GAN~\citep{Arjovsky2017,Guo2020} and attention mechanisms~\citep{Jiang2021}. DAGAN~\citep{Yang2018}, by substituting the residual networks with a U-Net~\citep{Ronneberger2015}, combined the advantage of the U-Net in latent information extraction with competitive training and pre-trained VGG based transfer learning. Furthermore, PIDDGAN~\citep{Huang2021} incorporated edge information into the model and further enhanced the edges in the reconstruction, which is clinically important when interpreting MR images. The utilisation of transfer learning improved the generalisability of networks trained with small datasets~\citep{Lv2021_3}.
CNN-based MR reconstruction methods showed their superiority in both reconstruction quality and efficiency compared to classic MR reconstruction methods. However, the performance of those CNN-based methods was limited by the local sensitivity of the convolution operation. Motivated by this limitation, we proposed a Swin transformer based MR reconstruction method, SwinMR. \subsection{SwinMR: Swin Transformer for MRI Reconstruction} \subsubsection{Overall Architecture} The overall architecture is shown in Figure~\ref{fig:FIG_Overview}(C) and the data flow of SwinMR is shown in Figure~\ref{fig:FIG_DataFlow}. The root sum of squares (RSS) is applied to combine the multi-channel ground truth MR images $x^q$ into the single-channel ground truth MR image $x$ ($q$ denotes the $q^{\text{th}}$ coil). Sensitivity maps $\mathcal{S}^q$ are estimated by ESPIRiT~\citep{Uecker2014} from the multi-channel ground truth MR images $x^q$. Undersampling and noise corruption are performed in \textit{k}-space using the fast Fourier transform (FFT) and inverse fast Fourier transform (iFFT) (Gaussian noise is added in the noise experiments), which converts the single-channel ground truth MR image $x$ into the undersampled zero-filled MR image $x_u$. \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_INTRO/FIG_DataFlow.pdf} \caption{The data flow of the proposed SwinMR. The root sum of squares (RSS) is applied to combine the multi-channel ground truth MR images (Multi-Channel GT) into the single-channel ground truth MR images (GT). Undersampling and noise corruption are performed in \textit{k}-space using the fast Fourier transform (FFT) and inverse fast Fourier transform (iFFT) to convert the GT into the undersampled zero-filled MR images (ZF) used as the input of our proposed SwinMR.
Multi-channel reconstructed MR images (Multi-Channel Recon) are calculated by the pixel-wise multiplication of the single-channel reconstructed MR images (Recon), which are the output of the proposed SwinMR, with the sensitivity maps, which are estimated by ESPIRiT from the Multi-Channel GT. } \label{fig:FIG_DataFlow} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_INTRO/FIG_Structure.pdf} \caption{The structure of the proposed SwinMR. (A) shows the overall structure of SwinMR. In the SwinMR architecture, two Conv2Ds are placed at the beginning and the end. A cascade of RSTBs and a Conv2D with a residual connection are placed between the two Conv2Ds. (B) shows the structure of the RSTB. The RSTB consists of a patch embedding operator, $N$ cascaded STLs, a patch unembedding operator, a Conv2D, and a residual connection between the input and output of the RSTB. An STL consists of an LN, an (S)W-MSA, another LN and an MLP, with two residual connections. (RSTB: the residual Swin transformer block; STL: the Swin transformer layer; Conv2D: the 2D convolutional layer; LN: the layer normalisation layer; (S)W-MSA: the (shifted) windows multi-head self-attention; MLP: the multi-layer perceptron.) } \label{fig:FIG_Structure} \end{figure} The proposed SwinMR model produces reconstructed MR images $\hat x_u$ from undersampled zero-filled MR images $x_u$, where a residual connection is applied to accelerate convergence and stabilise the training process. It can be expressed as \begin{align}\label{formula:5} \hat x_u = \text{SwinMR}(x_u \mid \theta) + x_u, \end{align} \noindent where the SwinMR network is parameterised by $\theta$. Figure~\ref{fig:FIG_Structure}(A) shows the structure of SwinMR, which is composed of an input module (IM), a feature extraction module (FEM) and an output module (OM). The IM and OM are at the beginning and the end of the whole structure, and the FEM is placed between the IM and OM with a residual connection.
The structure can be expressed by \begin{align}\label{formula:6} & F_{\text{IM}} = \text{H}_{\text{IM}}(x_u),\\ & F_{\text{FEM}} = \text{H}_{\text{FEM}}(F_{\text{IM}}),\\ & F_{\text{OM}} = \text{H}_{\text{OM}}(F_{\text{FEM}}+F_{\text{IM}}), \end{align} \noindent where $\text{H}_{\text{IM}}(\cdot)$, $\text{H}_{\text{FEM}}(\cdot)$ and $\text{H}_{\text{OM}}(\cdot)$ denote the IM, FEM and OM, respectively. $F_{\text{IM}}$, $F_{\text{FEM}}$ and $F_{\text{OM}}$ denote the output of the IM, FEM and OM, respectively. \subsubsection{Input Module and Output Module} The IM is used for early visual processing and for mapping from the input image space to a higher-dimensional feature space for the following FEM. The IM applies a 2D convolutional layer (Conv2D) mapping $x_u \in \mathbb{R}^{H \times W \times 1}$ to $F_{\text{IM}} \in \mathbb{R}^{H \times W \times C}$. In contrast, the OM is used to map the higher-dimensional feature space back to the output image space by a Conv2D mapping $F_{\text{FEM}} \in \mathbb{R}^{H \times W \times C}$ to $F_{\text{OM}} \in \mathbb{R}^{H \times W \times 1}$. In the training stage, the input image is randomly cropped to a fixed size $H \times W$ ($H=W$). In the inference stage, $H$ and $W$ denote the height and width of the input image. Here we define $H$ (or $W$) as the patch number and $C$ as the channel number for the self-attention processing. \subsubsection{Feature Extraction Module} The FEM is composed of a cascade of residual Swin transformer blocks (RSTBs) and a Conv2D at the end. It can be expressed as \begin{align}\label{formula:7} & F_0 = F_{\text{IM}}, \\ & F_i = {\text{H}}_{\text{RSTB}_i} (F_{i-1}), \quad i = 1,2,...,P, \\ & F_{\text{FEM}} = {\text{H}}_{\text{CONV}}(F_P), \end{align} \noindent where $F_{\text{IM}}$ and $F_{\text{FEM}}$ are the input and output of the FEM. ${\text{H}}_{\text{RSTB}_i}(\cdot)$ denotes the $i^{\text{th}}$ RSTB ($P$ RSTBs in total) in the FEM.
${\text{H}}_{\text{CONV}}(\cdot)$ denotes the Conv2D after the series of RSTBs. Figure~\ref{fig:FIG_Structure}(B) shows the structure of the RSTB. An RSTB consists of $Q$ Swin transformer layers (STLs) and a Conv2D, and a residual connection is linked between the input and output of the RSTB. It can be expressed as \begin{align}\label{formula:8} & F_{i,0} = {\text{H}}_{\text{Emb}_{i}}(F_{i-1}), \\ & F_{i,j} = {\text{H}}_{\text{STL}_{i,j}} (F_{i,j-1}), \quad j = 1,2,...,Q, \\ & F_{i} = {\text{H}}_{\text{CONV}_{i}}({\text{H}}_{\text{Unemb}_{i}}(F_{i,Q}) + F_{i-1}), \end{align} \noindent where ${\text{H}}_{\text{Emb}_{i}}(\cdot)$ is the patch embedding from $F_{i-1} \in \mathbb{R}^{H \times W \times C}$ to $F_{i,0} \in \mathbb{R}^{HW \times C}$, and ${\text{H}}_{\text{Unemb}_{i}}(\cdot)$ is the patch unembedding from $F_{i,Q} \in \mathbb{R}^{HW \times C}$ to $\mathbb{R}^{H \times W \times C}$. ${\text{H}}_{\text{STL}_{i,j}}(\cdot)$ and ${\text{H}}_{\text{CONV}_{i}}(\cdot)$ denote the $j^{\text{th}}$ STL and the Conv2D in the $i^{\text{th}}$ RSTB, respectively. \subsubsection{Swin Transformer Layer} The whole process of the STL can be expressed as \begin{align}\label{formula:9} &X^{\prime}={\text{H}}_{\text{(S)W-MSA}}({\text{H}}_{\text{LN}}(X))+X,\\ &X^{\prime\prime}={\text{H}}_{\text{MLP}}({\text{H}}_{\text{LN}}(X^{\prime}))+X^{\prime}, \end{align} \noindent where $X$ and $X^{\prime\prime}$ are the input and output of the STL. ${\text{H}}_{\text{MLP}}(\cdot)$ and ${\text{H}}_{\text{LN}}(\cdot)$ denote the multilayer perceptron and the layer normalisation layer. Windows multi-head self-attention (W-MSA) and shifted windows multi-head self-attention (SW-MSA) ${\text{H}}_{\text{(S)W-MSA}}(\cdot)$ are applied alternately in the STLs. Compared with the original transformer, spatial constraints are added in the Swin transformer layer. Figure~\ref{fig:FIG_Structure}(B) shows the W-MSA and the SW-MSA compared with the original MSA.
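To make the windowed self-attention inside the STL concrete, the following is a minimal NumPy sketch of window partitioning and per-window attention with a relative position bias. It is an illustrative toy with assumed small dimensions ($H{=}W{=}8$, $M{=}4$, a single head) and random weights, not the actual PyTorch implementation of SwinMR.

```python
import numpy as np

def window_partition(x, M):
    """Split an (H, W, C) feature map into HW/M^2 non-overlapping M*M windows."""
    H, W, C = x.shape
    x = x.reshape(H // M, M, W // M, M, C)
    # reorder so each window's M*M tokens are contiguous -> (num_windows, M*M, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, M * M, C)

def window_attention(X, P_q, P_k, P_v, B):
    """Self-attention inside one window: SoftMax(Q K^T / sqrt(d) + B) V."""
    Q, K, V = X @ P_q, X @ P_k, X @ P_v
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + B
    A = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    A = A / A.sum(axis=-1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
H = W = 8; C = 4; M = 4                       # toy sizes for illustration only
feat = rng.standard_normal((H, W, C))
windows = window_partition(feat, M)           # shape (4, 16, 4)
P_q, P_k, P_v = (rng.standard_normal((C, C)) for _ in range(3))
B = rng.standard_normal((M * M, M * M))       # relative position bias (random here)
out = np.stack([window_attention(w, P_q, P_k, P_v, B) for w in windows])
print(windows.shape, out.shape)               # (4, 16, 4) (4, 16, 4)
```

Because attention is restricted to $M^2$ tokens per window, the quadratic term in the complexity drops from $(HW)^2$ to $M^2 HW$, which is the reduction quantified in the complexity comparison that follows.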
The original MSA performs self-attention over the whole image space. Although the information of the entire image is involved in each attention calculation, this incurs heavy computational costs and redundant connections. The computational complexity of the original MSA is as follows: \begin{align}\label{formula:10} \Omega({\text{H}}_{\text{MSA}})=4HWC^{2}+2(HW)^{2}C. \end{align} In Swin transformer layers, an $\mathbb{R}^{H \times W \times C}$ feature map is divided into $\frac{HW}{M^2}$ non-overlapping windows of size $M^2 \times C$. (S)W-MSA is calculated in each window, instead of over the whole image space. The computational complexity of (S)W-MSA is as follows: \begin{align}\label{formula:11} \Omega({\text{H}}_{\text{(S)W-MSA}})=4HWC^{2}+2M^{2}HWC, \end{align} \noindent which is significantly reduced compared to the original MSA. However, if the separation of windows were fixed in every STL, the network would lose the links between different windows. Normal windows and shifted windows are therefore utilised alternately in consecutive STLs to enable information communication across different windows. (S)W-MSA for each non-overlapping window $X$ can be expressed by \begin{align}\label{formula:12} Q=X P_{Q}, \quad K=X P_{K}, \quad V=X P_{V}, \end{align} \noindent where $P_{Q}$, $P_{K}$, $P_{V}$ are projection matrices shared over all the windows. The query $Q$, key $K$, value $V$ and learnable relative position encoding $B$ ($\mathbb{R}^{M^2 \times M^2}$) are used in the calculation of the self-attention mechanism in a local window, which can be expressed by \begin{align}\label{formula:13} \operatorname{Attention}(Q, K, V)=\operatorname{SoftMax}\left(Q K^{T} / \sqrt{d}+B\right) V. \end{align} Such self-attention calculations are performed $h$ times in parallel and the results are concatenated to form the (S)W-MSA output. \subsubsection{Loss Function} A novel multi-channel loss using the sensitivity maps was introduced to achieve better reconstruction quality with richer textures and details.
Charbonnier loss~\citep{Lai2019} was utilised for both the pixel-wise loss and the frequency loss since it is more robust and handles outliers better. The total loss $\mathcal{L}_{\mathrm{TOTAL}}(\theta)$ consists of the pixel-wise Charbonnier loss $\mathcal{L}_{\mathrm{pixel}}(\theta)$, the frequency Charbonnier loss $\mathcal{L}_{\mathrm{freq}}(\theta)$ and the perceptual loss $\mathcal{L}_{\mathrm{VGG}}(\theta)$. The pixel-wise Charbonnier loss can be expressed by \begin{align}\label{formula:14} \mathop{\text{min}}\limits_{\theta} \mathcal{L}_{\mathrm{pixel}}(\theta) = \frac{1}{S} \sum_{q=1}^{S} \sqrt{\mid\mid x^q - \mathcal{S}^q \hat x_u \mid\mid^2_2 + \epsilon^2}, \end{align} \noindent where $\epsilon$ is a constant which is set to $10^{-9}$ empirically and $\mathcal{S}^q$ is the sensitivity map of the $q^{\text{th}}$ coil ($S$ coils in total). The frequency Charbonnier loss can be expressed by \begin{align}\label{formula:15} \mathop{\text{min}}\limits_{\theta} \mathcal{L}_{\mathrm{freq}}(\theta) = \frac{1}{S} \sum_{q=1}^{S} \sqrt{\mid\mid y^q - \mathcal{F}\mathcal{S}^q \hat x_u \mid\mid^2_2 + \epsilon^2}, \end{align} \noindent where $y^q = \mathcal{F} x^q$ denotes the ground truth \textit{k}-space data of the $q^{\text{th}}$ coil. The perceptual VGG loss can be expressed by \begin{align}\label{formula:16} \mathop{\text{min}}\limits_{\theta} \mathcal{L}_{\mathrm{VGG}}(\theta) = \mid\mid f_{\mathrm{VGG}}(x) - f_{\mathrm{VGG}}(\hat x_u) \mid\mid_1, \end{align} \noindent where $f_{\mathrm{VGG}}(\cdot)$ denotes the VGG network, and $\mid\mid \cdot \mid\mid_1$ denotes the $l_1$ norm. The utilisation of $\mathcal{L}_{\mathrm{VGG}}$ helps optimise the perceptual quality of the reconstructed results. The total loss can be expressed by \begin{align}\label{formula:17} \mathcal{L}_{\mathrm{TOTAL}}(\theta) = \alpha \mathcal{L}_{\mathrm{pixel}}(\theta) + \beta \mathcal{L}_{\mathrm{freq}}(\theta) + \gamma \mathcal{L}_{\mathrm{VGG}}(\theta), \end{align} \noindent where $\alpha$, $\beta$ and $\gamma$ are coefficients controlling the balance of each term in the loss function.
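As a compact NumPy sketch of the multi-channel loss above: the pixel-wise and frequency Charbonnier terms are computed per coil using the sensitivity maps, while the VGG term is passed in as a placeholder scalar because the pretrained network is not reproduced here. The coil images, sensitivity maps and sizes below are random stand-ins for illustration.

```python
import numpy as np

def charbonnier(a, b, eps=1e-9):
    """Charbonnier distance: sqrt(||a - b||_2^2 + eps^2)."""
    return np.sqrt(np.sum(np.abs(a - b) ** 2) + eps ** 2)

def total_loss(x_coils, y_coils, s_maps, x_hat,
               alpha=15.0, beta=0.1, gamma=0.0025, vgg_term=0.0):
    """L = alpha * L_pixel + beta * L_freq + gamma * L_VGG (VGG term supplied)."""
    l_pixel = np.mean([charbonnier(xq, sq * x_hat)
                       for xq, sq in zip(x_coils, s_maps)])
    l_freq = np.mean([charbonnier(yq, np.fft.fft2(sq * x_hat))
                      for yq, sq in zip(y_coils, s_maps)])
    return alpha * l_pixel + beta * l_freq + gamma * vgg_term

rng = np.random.default_rng(0)
x = rng.random((8, 8))                              # stand-in ground truth image
s_maps = [rng.random((8, 8)) + 1j * rng.random((8, 8)) for _ in range(2)]
x_coils = [sq * x for sq in s_maps]                 # per-coil images
y_coils = [np.fft.fft2(xq) for xq in x_coils]       # per-coil k-space
perfect = total_loss(x_coils, y_coils, s_maps, x)   # x_hat == x -> near-zero loss
print(perfect < 1e-6, total_loss(x_coils, y_coils, s_maps, x + 0.1) > perfect)
```

A perfect reconstruction drives both Charbonnier terms down to roughly $\epsilon$, while any deviation increases the total, matching the behaviour expected of the combined objective.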
\section{Experiments and Results} \subsection{Datasets} In this work, the Calgary Campinas multi-channel (CC) dataset~\citep{Souza2018} and the Multi-modal Brain Tumour Segmentation Challenge 2017 (BraTS17)~\citep{BraTS17_1,BraTS17_2,BraTS17_3} dataset were used for the experiments. For the CC dataset, 15360 slices of 12-channel T1-weighted brain 2D MR images were randomly divided into training, validation and testing sets (7680, 3072 and 4608 slices, respectively), according to the ratio of 5:2:3. For the BraTS17 dataset, we applied the brain data with reference segmentation results (280 3D brains in the BraTS17 official training dataset), including both higher and lower grade glioma. The 3D brain data were divided into training, validation and testing sets (235, 20 and 30 3D brains, respectively), and cropped to $152 \times 192 \times 144$ volumes (slices, height and width, respectively). \subsection{Implementation Details} The proposed SwinMR was implemented using PyTorch, trained on two NVIDIA RTX 3090 GPUs with 24GB GPU memory, and tested on an NVIDIA RTX 3090 GPU or an Intel Core i9-10980XE CPU. We set the RSTB number, the STL number, the window size and the attention head number to 6, 6, 8 and 6 respectively, which are the default settings in the original SwinIR~\citep{Liang2021}. The patch number and channel number were empirically set to 96 and 180, according to our ablation studies. For the parameters of the loss function, $\alpha$, $\beta$ and $\gamma$ were set to 15, 0.1 and 0.0025 to balance each term, according to our ablation studies. We used SwinMR (PI) to denote the proposed model trained with multi-channel data and sensitivity maps, and SwinMR (nPI) to indicate the proposed model trained with single-channel data without sensitivity maps. \subsection{Evaluation Methods} Structural similarity index (SSIM), Peak signal-to-noise ratio (PSNR) and Fr\'echet inception distance (FID)~\citep{Heusel2017} were utilised for evaluation.
SSIM quantifies the structural similarity between two images based on luminance, contrast and structure. PSNR is the ratio between maximum signal power and noise power, which measures the fidelity of the representation. Both metrics are based on simple, shallow functions and direct comparisons between images, which do not necessarily reflect visual quality for human observers~\citep{Zhang2018}. FID is calculated by computing the Fr\'echet distance between two multivariate Gaussians, which measures the similarity between two sets of images. FID correlates well with visual quality for human observers, and a lower FID indicates perceptually better results. Both Intersection over Union (IoU) and Dice scores were applied to measure the segmentation quality in the brain tumour segmentation experiment. \subsection{Comparisons with Other Methods} In this experimental study, we compared our proposed SwinMR (nPI and PI) with other benchmarked MR reconstruction methods, including Deep ADMM Net~\citep{Yang2016}, DAGAN~\citep{Yang2018}, PIDDGAN~\citep{Huang2021}, as well as ground truth MR images (GT) and undersampled zero-filled MR images (ZF) using the Gaussian 1D 30\% mask. Among them, PIDDGAN and SwinMR (PI) were parallel imaging-coupled, i.e., trained with multi-channel MR images. This experiment was conducted using the CC dataset. The quantitative results of the comparison are shown in Table~\ref{tab:comparison}. Our proposed SwinMR (nPI) achieved the highest SSIM and PSNR, and SwinMR (PI) achieved the best FID score. The time in Table~\ref{tab:comparison} indicates the average time for one inference, averaged over ten inference runs, on an Intel Core i9-10980XE CPU or an NVIDIA RTX 3090 GPU. The computational cost of SwinMR was higher than that of the CNN-based models.
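For reference, the PSNR, IoU and Dice metrics described above reduce to a few lines of NumPy; SSIM and FID involve more machinery and are typically taken from existing implementations (e.g., scikit-image and pytorch-fid). The toy inputs below are illustrative stand-ins.

```python
import numpy as np

def psnr(gt, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((gt - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def iou(a, b):
    """Intersection over Union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

def dice(a, b):
    """Dice score: 2|A n B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

gt = np.zeros((4, 4)); pred = gt + 0.5   # constant 0.5 error with data range 1
print(round(psnr(gt, pred), 2))          # 6.02 (dB)
m1 = np.array([1, 1, 0, 0]); m2 = np.array([1, 0, 1, 0])
print(iou(m1, m2), dice(m1, m2))         # ~0.333 and 0.5
```

Higher PSNR/SSIM indicate better fidelity, whereas IoU and Dice are used later for the downstream segmentation experiment; a lower FID indicates better perceptual quality.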
Figure~\ref{fig:FIG_EXP_IMAGE_Comparison} shows the reconstructed MR images, edge information extracted by the Sobel operator and absolute differences of standardised pixel intensities ($10 \times$) between reconstructed MR images and GT MR images from top to bottom, respectively. The proposed SwinMR shows superiority over other methods in terms of overall reconstruction quality and edge information. \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_Comparison.pdf} \caption{ Samples of the comparison experiment with ground truth images (GT), undersampled zero-filled images (ZF) and reconstructed images by other methods. Row 1: GT, ZF and reconstructed images by different methods; Row 2: Edge information extracted by the Sobel operator; Row 3: Gaussian 1D 30\% mask and the absolute differences between reconstructed (or ZF) images and GT images ($10 \times$). } \label{fig:FIG_EXP_IMAGE_Comparison} \end{figure} \begin{table}[H] \centering \caption{ Quantitative results of the comparison experiment with other methods using Gaussian 1D 30\% mask (mean (std)). $^{\star}$: $p \textless 0.05$; $^{\star\star}$: $p \textless 0.01$ (compared with SwinMR (PI) by paired t-Test). $^{\dagger}$: $p \textless 0.05$; $^{\dagger\dagger}$: $p \textless 0.01$ (compared with SwinMR (nPI) by paired t-Test).
PSNR: Peak signal-to-noise ratio; SSIM: Structural similarity index; FID: Fr\'echet inception distance; Time: The average time for one inference in an Intel Core i9-10980XE CPU or an NVIDIA RTX 3090 GPU.\\ } \scalebox{0.75}{ \begin{tabular}{cccccc} \toprule \multirow{2}[4]{*}{Methods} & \multirow{2}[4]{*}{PSNR} & \multirow{2}[4]{*}{SSIM} & \multirow{2}[4]{*}{FID} & \multicolumn{2}{c}{Time} \\ \cmidrule{5-6} & & & & CPU (s) & GPU (s) \\ \midrule ZF & 27.81 (0.83)$^{\star\star\dagger\dagger}$ & 0.884 (0.012)$^{\star\star\dagger\dagger}$ & 156.39 & - & - \\ Deep ADMM Net & 29.24 (0.99)$^{\star\star\dagger\dagger}$ & 0.922 (0.012)$^{\star\star\dagger\dagger}$ & 54.56 & 0.459 (0.052) & - \\ DAGAN & 30.41 (0.83)$^{\star\star\dagger\dagger}$ & 0.924 (0.010)$^{\star\star\dagger\dagger}$ & 56.05 & 0.089 (0.003) & 0.003 (0.000) \\ PIDDGAN & 31.23 (0.93)$^{\star\star\dagger\dagger}$ & 0.936 (0.010)$^{\star\star\dagger\dagger}$ & 17.55 & 0.166 (0.007) & 0.006 (0.000) \\ SwinMR (nPI) & \textbf{32.83 (1.10)$^{\star\star}$} & \textbf{0.954 (0.009)$^{\star\star}$} & 27.67 & 19.341 (0.060) & 0.388 (0.001) \\ SwinMR (PI) & 31.88 (1.03)$^{\dagger\dagger}$ & 0.943 (0.010)$^{\dagger\dagger}$ & \textbf{13.17} & 19.779 (0.038) & 0.388 (0.001) \\ \bottomrule \end{tabular}}% \label{tab:comparison}% \end{table}% \subsection{Experiments on Masks} This experimental study aimed to evaluate the performance of SwinMR using different undersampling trajectories. Three 1D Cartesian undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%) and Gaussian 1D 50\% (G1D50\%), as well as two 2D non-Cartesian undersampling trajectories including radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment. This experiment compared the SSIM, PSNR and FID of SwinMR (PI), DAGAN and ZF, and was conducted using the CC dataset. The quantitative results of the experiment on masks are shown in Figure~\ref{fig:FIG_EXP_GRAPH_Mask} and Table~\ref{tab:mask_fid}. 
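For illustration, a Gaussian 1D Cartesian mask of the kind listed above can be built by keeping a fully sampled centre band of \textit{k}-space and drawing the remaining phase-encoding lines from a Gaussian density around the centre. The construction below (centre fraction, $\sigma$, seed) is an assumed toy recipe, not necessarily the exact trajectory generation used in these experiments.

```python
import numpy as np

def gaussian_1d_mask(n_lines, rate, centre_frac=0.08, sigma_frac=0.15, seed=0):
    """Toy Gaussian 1D mask: fully sampled centre band plus Gaussian-density lines."""
    rng = np.random.default_rng(seed)
    n_keep = int(round(n_lines * rate))
    mask = np.zeros(n_lines, dtype=bool)
    centre = n_lines // 2
    half = max(1, int(n_lines * centre_frac) // 2)
    mask[centre - half:centre + half] = True          # auto-calibration region
    while mask.sum() < n_keep:                        # fill remaining lines
        idx = int(rng.normal(centre, sigma_frac * n_lines))
        if 0 <= idx < n_lines:
            mask[idx] = True
    return mask

mask = gaussian_1d_mask(256, 0.30)
print(round(mask.sum() / mask.size, 2))   # ~0.3 of the phase-encoding lines kept
```

Radial and spiral trajectories are 2D non-Cartesian patterns and would be generated differently; the point here is only that the sampling rate (10\%, 30\%, 50\%) controls how many \textit{k}-space lines survive zero-filling.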
Samples of the reconstructed images, edge information and absolute differences of standardised pixel intensities ($10 \times$) between reconstructed images and GT images are shown in Figure~\ref{fig:FIG_EXP_IMAGE_Mask}, Figure~\ref{fig:FIG_EXP_EDGE_Mask} and Figure~\ref{fig:FIG_EXP_ERROR_Mask}, respectively. According to the results, the proposed SwinMR achieved a higher reconstruction quality compared to DAGAN using different undersampling trajectories, especially when masks with a low undersampling rate (10\%) were applied. \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Graph/FIG_EXP_GRAPH_Mask.pdf} \caption{ Peak signal-to-noise ratio (PSNR) and Structural similarity index (SSIM) of the experiment on different masks. Five undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment. (Box range: interquartile range; $\times$: 1\% and 99\% confidence interval; $-$: maximum and minimum; $\square$: mean; $\shortmid$: median.) The SwinMR (PI) outperforms the DAGAN using different undersampling masks with significantly higher PSNR and SSIM ($p < 0.05$ by paired t-Test).} \label{fig:FIG_EXP_GRAPH_Mask} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_Mask.pdf} \caption{ Samples of the experiment on different masks. Five undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment. Row 1: Undersampled zero-filled MR images (ZF) using different masks; Row 2: Ground truth MR images (GT); Row 3: Reconstructed MR images by DAGAN; Row 4: Reconstructed MR images by SwinMR (PI); Row 5: Undersampling masks. The Peak signal-to-noise ratio (PSNR) and Structural similarity index (SSIM) of reconstructed and ZF images are shown in the top-left corner.
} \label{fig:FIG_EXP_IMAGE_Mask} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_EDGE_Mask.pdf} \caption{ Edge information of the experiment on different masks. Five undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment. Row 1: Edge information of undersampled zero-filled MR images (ZF) using different masks; Row 2: Edge information of ground truth MR images (GT); Row 3: Edge information of reconstructed MR images by DAGAN; Row 4: Edge information of reconstructed MR images by SwinMR (PI). The edge information was extracted by the Sobel operator. } \label{fig:FIG_EXP_EDGE_Mask} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_ERROR_Mask.pdf} \caption{ Absolute differences of standardised pixel intensities ($10 \times$) of the experiment on different masks. Five undersampling trajectories including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment. Row 1: Absolute differences between undersampled zero-filled MR images (ZF) using different masks and ground truth MR images (GT); Row 2: Absolute differences between reconstructed MR images by DAGAN and GT; Row 3: Absolute differences between reconstructed MR images by SwinMR (PI) and GT. } \label{fig:FIG_EXP_ERROR_Mask} \end{figure} \begin{table}[H] \centering \caption{ Fr\'echet inception distance (FID) of the experiment on different masks. 
Five undersampling masks including Gaussian 1D 10\% (G1D10\%), Gaussian 1D 30\% (G1D30\%), Gaussian 1D 50\% (G1D50\%), radial 10\% (R10\%) and spiral 10\% (S10\%) were applied in this experiment.\\ } \setlength{\tabcolsep}{8mm}{ \begin{tabular}{cccc} \toprule Mask & SwinMR (PI) & DAGAN & ZF \\ \midrule G1D10\% & \textbf{35.23} & 169.83 & 325.99 \\ G1D30\% & \textbf{13.17} & 56.04 & 156.38 \\ G1D50\% & \textbf{7.92} & 19.26 & 86.25 \\ R10\% & \textbf{43.86} & 132.58 & 319.45 \\ S10\% & \textbf{35.88} & 115.98 & 333.40 \\ \bottomrule \end{tabular}}% \label{tab:mask_fid}% \end{table}% \subsection{Experiments on Noise} This experimental study aimed to evaluate the robustness of SwinMR under the influence of noise. The noise in MRI is imposed on the \textit{k}-space data and can be assumed to follow a Gaussian distribution~\citep{Hansen2015}. In our experiments, different noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested after undersampling (Gaussian 1D 30\% mask) in \textit{k}-space. The noise level is defined as: \begin{align}\label{formula:18} \text{NL} = \frac{N^{\prime}}{S^{\prime}+N^{\prime}}, \end{align} \noindent where $N^{\prime}$ and $S^{\prime}$ denote the power of noise and signal, respectively. This experiment compared the SSIM, PSNR and FID of SwinMR (PI), DAGAN and ZF, and was conducted using the CC dataset. The quantitative results of the noise experiments are shown in Figure~\ref{fig:FIG_EXP_GRAPH_Noise} and Table~\ref{tab:noise_fid}. Samples of the reconstructed images, edge information and absolute differences of standardised pixel intensities ($10 \times$) between reconstructed images and GT images are shown in Figure~\ref{fig:FIG_EXP_IMAGE_Noise}, Figure~\ref{fig:FIG_EXP_EDGE_Noise} and Figure~\ref{fig:FIG_EXP_ERROR_Noise}, respectively. According to the results, in the presence of noise, SwinMR maintains better reconstruction quality compared to DAGAN. The quality improvement becomes more pronounced at higher noise levels.
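The noise-level definition in Equation (\ref{formula:18}) can be inverted to synthesise noise at a prescribed NL: given the measured signal power $S^{\prime}$, the required noise power is $N^{\prime} = S^{\prime}\,\text{NL}/(1-\text{NL})$. The NumPy sketch below (complex Gaussian noise split evenly over the real and imaginary parts) is an assumed, illustrative procedure rather than the exact experimental code.

```python
import numpy as np

def noise_power_for(signal_power, nl):
    """Invert NL = N' / (S' + N') for the required noise power N'."""
    return signal_power * nl / (1.0 - nl)

def add_kspace_noise(k, nl, seed=0):
    """Add complex Gaussian noise to k-space so the noise level equals nl."""
    rng = np.random.default_rng(seed)
    n_power = noise_power_for(np.mean(np.abs(k) ** 2), nl)
    sigma = np.sqrt(n_power / 2.0)   # half the power in each of real/imag parts
    noise = sigma * (rng.standard_normal(k.shape) + 1j * rng.standard_normal(k.shape))
    return k + noise

k = np.ones((256, 256), dtype=complex)      # stand-in for undersampled k-space
k_noisy = add_kspace_noise(k, nl=0.30)
n_hat = np.mean(np.abs(k_noisy - k) ** 2)   # realised noise power
nl_hat = n_hat / (np.mean(np.abs(k) ** 2) + n_hat)
print(round(float(nl_hat), 2))              # ~0.3, the requested noise level
```

With this convention NL20\% through NL80\% correspond to noise powers from a quarter of the signal power up to four times it, which is why the ZF baselines degrade so sharply in Table~\ref{tab:noise_fid}.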
\begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Graph/FIG_EXP_GRAPH_Noise.pdf} \caption{ Peak signal-to-noise ratio (PSNR) and Structural similarity index (SSIM) of the experiment on different noise levels using the Gaussian 1D 30\% mask. Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested in this experiment. (Box range: interquartile range; $\times$: 1\% and 99\% confidence interval; $-$: maximum and minimum; $\square$: mean; $\shortmid$: median.) The SwinMR (PI) outperforms the DAGAN under different noise levels with significantly higher PSNR and SSIM ($p < 0.05$ by paired t-Test). } \label{fig:FIG_EXP_GRAPH_Noise} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_Noise.pdf} \caption{ Samples of the experiment on different noise levels using the Gaussian 1D 30\% mask. Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested in this experiment. Row 1: Undersampled zero-filled MR images (ZF) with different noise levels; Row 2: Ground truth MR images (GT); Row 3: Reconstructed MR images by DAGAN; Row 4: Reconstructed MR images by SwinMR (PI). The Peak signal-to-noise ratio (PSNR) and Structural similarity index (SSIM) of reconstructed and ZF images are shown in the top-left corner. } \label{fig:FIG_EXP_IMAGE_Noise} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_EDGE_Noise.pdf} \caption{ Edge information of the experiment on different noise levels using the Gaussian 1D 30\% mask. Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested in this experiment. Row 1: Edge information of undersampled zero-filled MR images (ZF) with different noise levels; Row 2: Edge information of ground truth images (GT); Row 3: Edge information of reconstructed MR images by DAGAN; Row 4: Edge information of reconstructed MR images by SwinMR (PI). The edge information was extracted by the Sobel operator.
} \label{fig:FIG_EXP_EDGE_Noise} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_ERROR_Noise.pdf} \caption{ Absolute differences of standardised pixel intensities ($10 \times$) of the experiment on different noise levels using the Gaussian 1D 30\% mask. Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were tested in this experiment. Row 1: Absolute differences between undersampled zero-filled MR images (ZF) with different noise levels and ground truth MR images (GT); Row 2: Absolute differences between reconstructed MR images by DAGAN and GT; Row 3: Absolute differences between reconstructed MR images by SwinMR (PI) and GT. } \label{fig:FIG_EXP_ERROR_Noise} \end{figure} \begin{table}[H] \centering \caption{ Fr\'echet inception distance (FID) of the experiment on different noise levels using the Gaussian 1D 30\% mask. Five noise levels (NL20\%, NL30\%, NL50\%, NL70\% and NL80\%) were applied in this experiment.\\ } \setlength{\tabcolsep}{8mm}{ \begin{tabular}{cccc} \toprule Noise Level & SwinMR (PI) & DAGAN & ZF \\ \midrule NL20\% & \textbf{21.11} & 66.89 & 156.55 \\ NL30\% & \textbf{21.84} & 71.44 & 168.57 \\ NL50\% & \textbf{29.66} & 75.77 & 203.46 \\ NL70\% & \textbf{35.92} & 78.24 & 225.41 \\ NL80\% & \textbf{40.81} & 73.12 & 250.99 \\ \bottomrule \end{tabular}}% \label{tab:noise_fid}% \end{table}% \subsection{Ablation Experiments on the Patch Number and Channel Number} The patch number $H$ (or $W$) and the channel number $C$ determine the input size of the STLs in SwinMR. Ablation studies for different patch numbers and channel numbers were conducted to study their impact on the reconstruction results. Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (A) and Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (C) show the SSIM, PSNR and FID of SwinMR with different patch numbers. Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (E) shows the training loss of SwinMR.
Figure~\ref{fig:FIG_AEXP_Patch} displays samples of the reconstructed images of SwinMR with different patch numbers. Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (B) and Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (D) show the SSIM, PSNR and FID of SwinMR with different channel numbers. Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (F) shows the training loss of SwinMR. Figure~\ref{fig:FIG_AEXP_Channel} displays samples of the reconstructed images of SwinMR with different channel numbers. For the patch number, from Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (A) and Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (C), the results demonstrate that reconstruction quality becomes better as the patch number grows. According to Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (E), the training loss converges faster and to a lower value as the patch number grows. However, increasing the patch number aggravates the computational cost. Empirically, we applied a patch number of 96 for training. For the channel number, from Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (B) and Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (D), the results did not resemble the trend presented in the ablation experiment on the patch number. There were no significant differences for the three indicators (SSIM, PSNR and FID) as the channel number changed. According to Figure~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch} (F), the training loss converges faster and to a lower value as the channel number grows. Empirically, we applied a channel number of 180 for training. For the comparison of multi-channel data (PI) and single-channel data (nPI), SwinMR (PI) tends to have a better (lower) FID, but worse (lower) SSIM/PSNR than SwinMR (nPI). \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Graph/FIG_AEXP_GRAPH_ChannelPatch.pdf} \caption{ Structural similarity index (SSIM), Peak signal-to-noise ratio (PSNR) and Fr\'echet inception distance (FID) of the ablation experiments on the patch number and channel number.
(A), (C) and (E) are the SSIM/PSNR, FID and training loss of the ablation experiment on the patch number. (B), (D) and (F) are the SSIM/PSNR, FID and training loss of the ablation experiment on the channel number. } \label{fig:FIG_AEXP_GRAPH_ChannelPatch} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_AEXP_Patch.pdf} \caption{ Samples of the ablation experiment on the patch number using Gaussian 1D 30\% mask. Row 1: Reconstructed MR images by SwinMR (nPI) with different patch numbers and zero-filled MR images (ZF); Row 2: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (nPI) and ground truth MR images (GT), and absolute differences ($10 \times$) between ZF and GT; Row 3: Reconstructed MR images by SwinMR (PI) with different patch numbers and GT; Row 4: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (PI) and GT, and the Gaussian 1D 30\% mask. } \label{fig:FIG_AEXP_Patch} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_AEXP_Channel.pdf} \caption{ Samples of the ablation experiment on the channel number using Gaussian 1D 30\% mask. Row 1: Reconstructed MR images by SwinMR (nPI) with different channel numbers and zero-filled MR images (ZF); Row 2: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (nPI) and ground truth MR images (GT), and absolute differences ($10 \times$) between ZF and GT; Row 3: Reconstructed MR images by SwinMR (PI) with different channel numbers and GT; Row 4: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (PI) and GT, and the Gaussian 1D 30\% mask. } \label{fig:FIG_AEXP_Channel} \end{figure} \subsection{Ablation Experiments on the Loss Function} This ablation study aimed to investigate the effect of each term in the loss function.
According to Equation (\ref{formula:17}), the loss function of SwinMR consists of pixel-wise loss, frequency loss and perceptual loss. Four experiments were performed in this ablation study: (1) PFP: \textbf{P}ixel-wise, \textbf{F}requency and \textbf{P}erceptual loss; (2) PP: \textbf{P}ixel-wise and \textbf{P}erceptual loss; (3) PF: \textbf{P}ixel-wise and \textbf{F}requency loss; (4) P: only \textbf{P}ixel-wise loss. Figure~\ref{fig:FIG_AEXP_GRAPH_Loss} shows the SSIM, PSNR and FID of SwinMR trained with different loss functions. Figure~\ref{fig:FIG_AEXP_Loss_edge} displays the samples of reconstructed images of SwinMR trained with different loss functions. According to Figure~\ref{fig:FIG_AEXP_GRAPH_Loss}, for SwinMR (PI), the utilisation of frequency loss tends to improve SSIM/PSNR and decrease the FID (PFP vs PP; PF vs P). For SwinMR (nPI), the utilisation of frequency loss leads to improvement only on SSIM and PSNR, but scarcely on FID. In most cases, the utilisation of the frequency loss has a positive impact on reconstruction quality indicators -- both SSIM/PSNR and FID. For SwinMR (PI), the utilisation of perceptual loss tends to slightly decrease SSIM and PSNR, but substantially decrease the FID (PFP vs PF; PP vs P). For SwinMR (nPI), the utilisation of perceptual loss tends to achieve a better FID but scarcely change SSIM and PSNR (PFP vs PF; PP vs P). In most cases, the utilisation of the perceptual loss has a positive impact on FID, but a negative impact on SSIM/PSNR when using multi-channel data. \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Graph/FIG_AEXP_GRAPH_Loss.pdf} \caption{ Structural similarity index (SSIM), Peak signal-to-noise ratio (PSNR) and Fr\'echet inception distance (FID) of the ablation experiment on the loss function using Gaussian 1D 30\% mask. PFP: pixel-wise, frequency and perceptual loss; PP: pixel-wise and perceptual loss; PF: pixel-wise and frequency loss; P: only pixel-wise loss. 
} \label{fig:FIG_AEXP_GRAPH_Loss} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_AEXP_Loss_edge.pdf} \caption{ Samples of the ablation experiment on the loss function using Gaussian 1D 30\% mask. PFP: pixel-wise, frequency and perceptual loss; PP: pixel-wise and perceptual loss; PF: pixel-wise and frequency loss; P: only pixel-wise loss. Row 1: Reconstructed MR images by SwinMR (nPI) and zero-filled MR images (ZF); Row 2: Edge information of reconstructed MR images by SwinMR (nPI) and edge information of ZF; Row 3: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (nPI) and ground truth MR images (GT), and absolute differences ($10 \times$) between ZF and GT; Row 4: Reconstructed MR images by SwinMR (PI) and GT; Row 5: Edge information of reconstructed MR images by SwinMR (PI) and edge information of GT; Row 6: Absolute differences ($10 \times$) between reconstructed MR images by SwinMR (PI) and GT, and the Gaussian 1D 30\% mask. } \label{fig:FIG_AEXP_Loss_edge} \end{figure} \subsection{Downstream Task Experiments: Brain Segmentation Experiments on BraTS17} In this experiment, we performed a downstream task using reconstructed MR images, in order to measure the reconstruction quality. Specifically, an open-access multi-modality brain tumour segmentation network~\footnote{https://github.com/Mehrdad-Noori/Brain-Tumor-Segmentation}~\citep{Noori2019} was trained on the BraTS17 dataset (four modalities are required, including FLAIR, T1, T1CE and T2). Then, we trained four sets of SwinMR weights using BraTS17 FLAIR, T1, T1CE and T2 data, respectively. After that, segmentation tasks were conducted on GT MR images, SwinMR reconstructed MR images and ZF MR images directly using the pre-trained segmentation network. Ideally, the segmentation scores of the reconstructed images should be as close as possible to those of the GT images.
Table~\ref{tab:BraTS17_Recon} shows the results of SwinMR trained with BraTS17 FLAIR, T1, T1CE and T2, respectively. Figure~\ref{fig:FIG_EXP_IMAGE_BraTS17_Reconstruction} displays samples of the reconstruction of different modalities. Table~\ref{tab:BraTS17_IoU} and Table~\ref{tab:BraTS17_Dice} show the IoU and Dice scores of the segmentation task. Figure~\ref{fig:FIG_EXP_IMAGE_BraTS17_Segmentation} displays samples of the segmentation task. According to Table~\ref{tab:BraTS17_IoU} and Table~\ref{tab:BraTS17_Dice}, the IoU and Dice scores of reconstructed MR images are improved compared with ZF MR images and are much closer to the scores of GT MR images. According to the Mann-Whitney Test, the IoU and Dice score distributions of the reconstructed MR images using the Gaussian 1D 30\% mask are not significantly different from the distributions of the GT MR images ($p > 0.05$). \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_BraTS17_Reconstruction.pdf} \caption{ Samples of reconstruction results for SwinMR on BraTS17 dataset including FLAIR, T1, T1CE and T2 MR images. Row 1: Ground truth MR images (GT); Row 2: Zero-filled MR images (ZF) undersampled by Gaussian 1D 10\% mask (G1D10\%); Row 3: Reconstructed MR images undersampled by G1D10\%; Row 4: ZF undersampled by Gaussian 1D 30\% mask (G1D30\%); Row 5: Reconstructed MR images undersampled by G1D30\%. } \label{fig:FIG_EXP_IMAGE_BraTS17_Reconstruction} \end{figure} \begin{figure}[H] \centering \includegraphics[width=5in]{FIG_EXP_Sample/FIG_EXP_IMAGE_BraTS17_Segmentation.pdf} \caption{ Samples of segmentation results for SwinMR on the BraTS17 dataset.
Col 1: Segmentation reference; Col 2: Segmentation prediction using GT images; Col 3: Segmentation prediction using zero-filled MR images (ZF) undersampled by Gaussian 1D 10\% mask (G1D10\%); Col 4: Segmentation prediction using reconstructed MR images undersampled by G1D10\%; Col 5: Segmentation prediction using ZF undersampled by Gaussian 1D 30\% mask (G1D30\%); Col 6: Segmentation prediction using reconstructed MR images undersampled by G1D30\%. Blue area: Whole tumour (WT); Red area: Enhancing tumour (ET); Green area: Tumour core (TC). } \label{fig:FIG_EXP_IMAGE_BraTS17_Segmentation} \end{figure} \begin{table}[H] \centering \caption{ Quantitative results of reconstructed images by SwinMR (Recon) and zero-filled images (ZF) on BraTS17 dataset (mean (std)). PSNR: Peak signal-to-noise ratio; SSIM: Structural similarity index; FID: Fr\'echet inception distance. G1D10\%: Gaussian 1D 10\% mask; G1D30\%: Gaussian 1D 30\% mask.\\ } \scalebox{0.75}{ \begin{tabular}{cccccc} \toprule \multirow{2}[4]{*}{Mask} & \multirow{2}[4]{*}{Indicator} & \multicolumn{4}{c}{Recon} \\ \cmidrule{3-6} & & FLAIR & T1 & T1CE & T2 \\ \midrule \multirow{3}[1]{*}{G1D10\%} & PSNR & 30.07 (1.99) & 33.80 (2.30) & 33.80 (1.84) & 32.20 (1.81) \\ & SSIM & 0.751 (0.043) & 0.760 (0.046) & 0.797 (0.049) & 0.745 (0.039) \\ & FID & 38.02 & 32.97 & 31.46 & 21.84 \\ \multirow{3}[1]{*}{G1D30\%} & PSNR & 37.97 (2.42) & 41.08 (3.36) & 42.29 (2.12) & 38.37 (2.02) \\ & SSIM & 0.942 (0.013) & 0.953 (0.012) & 0.953 (0.015) & 0.937 (0.016) \\ & FID & 5.94 & 4.80 & 4.39 & 8.95 \\ \midrule \multirow{2}[4]{*}{Mask} & \multirow{2}[4]{*}{Indicator} & \multicolumn{4}{c}{ZF} \\ \cmidrule{3-6} & & FLAIR & T1 & T1CE & T2 \\ \midrule \multirow{3}[1]{*}{G1D10\%} & PSNR & 23.87 (1.64) & 25.92 (1.48) & 25.92 (1.70) & 23.92 (1.79) \\ & SSIM & 0.388 (0.070) & 0.414 (0.061) & 0.414 (0.068) & 0.431 (0.057) \\ & FID & 225.70 & 234.52 & 227.51 & 219.09 \\ \multirow{3}[1]{*}{G1D30\%} & PSNR & 28.74 (1.78) & 28.82 (1.60) & 30.60 
(1.82) & 29.46 (2.01) \\ & SSIM & 0.597 (0.046) & 0.602 (0.051) & 0.602 (0.051) & 0.632 (0.038) \\ & FID & 91.18 & 100.98 & 106.28 & 85.49 \\ \bottomrule \end{tabular}}% \label{tab:BraTS17_Recon}% \end{table}% \begin{table}[H] \centering \caption{ Intersection over union (IoU) of the segmentation experiment (median/mean [Q1,Q3]). $^{\star}$: $p \textless 0.05$; $^{\star\star}$: $p \textless 0.01$ (compared with GT by Mann-Whitney Test). GT: ground truth MR images; Recon: reconstructed MR images by SwinMR; ZF: undersampled zero-filled MR images. G1D10\%: Gaussian 1D 10\% mask; G1D30\%: Gaussian 1D 30\% mask. WT: Whole tumour; TC: Tumour core; ET: Enhancing tumour.\\ } \scalebox{0.75}{ \begin{tabular}{ccccc} \toprule \multicolumn{2}{c}{IoU} & GT & Recon & ZF \\ \midrule \multirow{3}[2]{*}{G1D10\%} & WT & 0.930/0.924 [0.900,0.954] & 0.898/0.899 [0.868,0.940]$^{\star\star}$ & 0.838/0.836 [0.795,0.881]$^{\star\star}$ \\ & TC & 0.821/0.771 [0.726,0.903] & 0.758/0.722 [0.661,0.890]$^{\star\star}$ & 0.617/0.539 [0.393,0.733]$^{\star\star}$ \\ & ET & 0.772/0.735 [0.625,0.889] & 0.740/0.652 [0.471,0.846]$^{\star\star}$ & 0.570/0.527 [0.336,0.694]$^{\star\star}$ \\ \midrule \multirow{3}[2]{*}{G1D30\%} & WT & 0.930/0.924 [0.900,0.954] & 0.924/0.921 [0.895,0.953] & 0.897/0.897 [0.862,0.945]$^{\star\star}$ \\ & TC & 0.821/0.771 [0.726,0.903] & 0.811/0.766 [0.719,0.904] & 0.763/0.728 [0.669,0.895]$^{\star\star}$ \\ & ET & 0.772/0.735 [0.625,0.889] & 0.770/0.725 [0.616,0.883] & 0.748/0.697 [0.573,0.859]$^{\star\star}$ \\ \bottomrule \end{tabular}}% \label{tab:BraTS17_IoU}% \end{table}% \begin{table}[H] \centering \caption{ Dice score of the segmentation experiment (median/mean [Q1,Q3]). $^{\star}$: $p \textless 0.05$; $^{\star\star}$: $p \textless 0.01$ (compared with GT by Mann-Whitney Test). GT: ground truth MR images; Recon: reconstructed MR images by SwinMR; ZF: undersampled zero-filled MR images. G1D10\%: Gaussian 1D 10\% mask; G1D30\%: Gaussian 1D 30\% mask.
WT: Whole tumour; TC: Tumour core; ET: Enhancing tumour.\\ } \scalebox{0.75}{ \begin{tabular}{ccccc} \toprule \multicolumn{2}{c}{Dice} & GT & Recon & ZF \\ \midrule \multirow{3}[2]{*}{G1D10\%} & WT & 0.968/0.965 [0.952,0.981] & 0.950/0.950 [0.933,0.974]$^{\star\star}$ & 0.916/0.914 [0.892,0.940]$^{\star\star}$ \\ & TC & 0.904/0.857 [0.845,0.951] & 0.863/0.819 [0.800,0.944]$^{\star\star}$ & 0.767/0.653 [0.566,0.847]$^{\star\star}$ \\ & ET & 0.874/0.835 [0.777,0.941] & 0.852/0.766 [0.640,0.917]$^{\star\star}$ & 0.725/0.665 [0.503,0.820]$^{\star\star}$ \\ \midrule \multirow{3}[2]{*}{G1D30\%} & WT & 0.968/0.965 [0.952,0.981] & 0.964/0.963 [0.948,0.980] & 0.949/0.949 [0.930,0.975]$^{\star\star}$ \\ & TC & 0.904/0.857 [0.845,0.951] & 0.897/0.854 [0.838,0.951] & 0.868/0.826 [0.803,0.947]$^{\star\star}$ \\ & ET & 0.874/0.835 [0.777,0.941] & 0.871/0.827 [0.765,0.939] & 0.857/0.808 [0.729,0.925]$^{\star\star}$ \\ \bottomrule \end{tabular}}% \label{tab:BraTS17_Dice}% \end{table}% \section{Discussion} In this work, a novel Swin transformer based model, i.e., SwinMR, for fast MRI reconstruction has been proposed. Most existing deep learning based image restoration methods, including MRI reconstruction approaches, are based on CNNs. The convolution is a very effective feature extractor but lacks long-range dependency. The receptive field of CNNs is limited by the size of the kernel and the depth of the network. To tackle this problem, researchers have developed transformer-based image restoration methods, transformers having originally been used for solving NLP tasks. The core of the transformer is MSA, which has global sensitivity. In the MSA operation, each patch can attend to any other patch in the whole image space, but this also aggravates the computational burden. However, we believe that in MRI reconstruction, the MSA, which is operated in the whole image space, is redundant and unnecessary.
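The computational burden of full-image MSA mentioned above can be made concrete with the FLOP counts commonly used in the Swin transformer literature: global attention scales quadratically in the number of tokens $HW$, whereas window-restricted attention scales linearly. A rough sketch (the formulas follow the usual Swin-style complexity analysis; exact constants vary by implementation):

```python
def msa_flops(h, w, dim):
    """Global MSA: 4*n*C^2 for the projections plus 2*n^2*C for the
    attention matmuls, with n = h*w tokens and C = dim channels
    (quadratic in n)."""
    n = h * w
    return 4 * n * dim**2 + 2 * n**2 * dim

def wmsa_flops(h, w, dim, window=8):
    """Window MSA: attention is restricted to M x M windows, so the
    attention term becomes 2*M^2*n*C (linear in n)."""
    n = h * w
    return 4 * n * dim**2 + 2 * window**2 * n * dim

# For a 256 x 256 feature map, windowed attention is orders of
# magnitude cheaper than global attention.
```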
In NLP tasks, the first and the last words of a sentence may have a strong connection; however, this may not be applicable to CV tasks. Visual elements (e.g., pixels) in CV tasks can vary substantially in scale, unlike language elements (e.g., word tokens) in NLP tasks~\citep{Liu2021}. In most cases, for example, the top-left corner patch of an image has no relationship with the bottom-right corner patch. Moreover, for MRI reconstruction, the biggest difficulty is the recovery of detailed information and texture information. Focusing too much on global information and ignoring the detailed (local) information may make the image smoother and lose more details. The utilisation of a Swin transformer can achieve a trade-off for CV tasks. In the Swin transformer, operations are conducted within shifted windows instead of over the whole image. It has a larger receptive field compared to CNNs but is not overly concerned with global information. This is the reason why we have developed a Swin transformer for MRI reconstruction. To evaluate our proposed methods, several comparison experiments and ablation studies have been conducted. In this study, we have compared our proposed SwinMR with benchmark MRI reconstruction methods. The results in Table~\ref{tab:comparison} have demonstrated that our SwinMR has achieved the highest SSIM/PSNR and lowest FID compared to CNN-based and GAN-based models. Figure~\ref{fig:FIG_EXP_IMAGE_Comparison} clearly shows that our SwinMR has obtained better reconstruction quality, especially in the zoom-in area, where the details of the cerebellum have been well-preserved. In this study, we have also compared SwinMR (PI), which has been trained with multi-channel brain data, with SwinMR (nPI), which has been trained with single-channel brain data.
The results have led to a similar conclusion to our previous study~\citep{Huang2021}, where the FID of the model trained with multi-channel data has been better compared to the model trained with single-channel data, and the SSIM/PSNR has shown the opposite (i.e., SSIM/PSNR: nPI \textgreater PI; FID: PI \textless nPI). This phenomenon can also be observed in the subsequent ablation experiments. From Figure~\ref{fig:FIG_EXP_IMAGE_Comparison}, we can find that the reconstructed images of SwinMR (PI) have shown more details and texture information, whereas the reconstructed images of SwinMR (nPI) appear smoother. The experimental results have demonstrated that the three metrics comparing PI and nPI gave different answers. We have speculated that this might be due to the different principles of these metrics. PSNR is a classic metric based on per-pixel comparisons, which cannot reflect the structural information of images. SSIM is a perceptual metric that measures structural similarity. However, both of them are based on simple and shallow functions and direct comparisons between images, which is insufficient to account for many nuances of human perception~\citep{Zhang2018}. For FID, the comparison is based on perception and performed on two sets of images. Images are mapped to high-dimensional representations by a pre-trained InceptionV3 network, which is well-related to human visual perception. The SwinMR (PI) reconstructed images have demonstrated more details and texture information. Even though these details and texture information may not be so \emph{accurate}, they make the reconstructed images more \emph{visually similar} to the ground truth images. However, the SwinMR (nPI) reconstructed images appear smoother at the pixel level, at the cost of less detail and texture information. Therefore, SwinMR (PI) has tended to have better FID and worse SSIM/PSNR compared to SwinMR (nPI), due to the principle differences of the evaluation methods.
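The distinction drawn above between per-pixel and perceptual metrics can be illustrated with PSNR, which sees only the mean squared error: two error patterns that a human would judge very differently can receive identical scores. A small sketch with toy images (not data from the paper):

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """PSNR = 10 log10(peak^2 / MSE): a purely per-pixel criterion."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

gt = np.zeros((64, 64))
a = gt + 0.01          # faint error spread over every pixel
b = gt.copy()
b[0, :] = 0.08         # strong error concentrated on one row
# Both have MSE = 1e-4, hence the same PSNR of 40 dB,
# although the two artifact patterns look completely different.
```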
From Table~\ref{tab:comparison}, we can find a common problem of transformer-based methods, which is the higher computational cost compared to other CNN-based and GAN-based methods. Equation (\ref{formula:11}) has shown that the computational complexity is proportional to the size $HW$ of the input of (S)W-MSA. The time shown in Table~\ref{tab:comparison} has been the inference time, where the original height and width have been treated as $H$ and $W$ ($256 \times 256$ here). For training, random cropping has been applied to ease the long processing time. Experiments using different undersampling masks with various noise levels have demonstrated that our proposed method SwinMR has shown superiority to DAGAN in all the tests. The evaluation indicators change as expected when the condition changes (different masks and noise levels). Ablation studies on the patch number and the channel number have demonstrated that reconstruction quality has improved as the patch number has increased, and has gradually saturated, according to Figures~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch}(A) and (C). However, according to Equation (\ref{formula:11}), the computational complexity also increases with the patch number. As a trade-off, we have set the patch number to 96. Beyond our expectations, the channel number has not been positively correlated with the evaluation indicators in this experiment, according to Figures~\ref{fig:FIG_AEXP_GRAPH_ChannelPatch}(B) and (D). We have assumed that the evaluation indicators have saturated within the range of channel numbers tested in this experiment. Empirically, we have set the channel number to 180 according to the default setting of SwinIR. Ablation studies on different loss functions have been conducted.
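The loss ablation above varies the three terms of the composite objective. A minimal sketch of such a combined loss (the Charbonnier pixel term, FFT-magnitude frequency term, the weights, and the stand-in perceptual features are illustrative assumptions, not the paper's Equation (17)):

```python
import numpy as np

def pixel_loss(x, y, eps=1e-9):
    # Charbonnier-style smooth L1 on pixel values
    return np.mean(np.sqrt((x - y) ** 2 + eps))

def frequency_loss(x, y):
    # L1 distance between k-space magnitudes (2-D FFT)
    return np.mean(np.abs(np.abs(np.fft.fft2(x)) - np.abs(np.fft.fft2(y))))

def total_loss(x, y, fx, fy, a=1.0, b=0.1, c=0.01):
    # fx, fy stand in for VGG feature maps of x and y (hypothetical)
    perceptual = np.mean(np.abs(fx - fy))
    return a * pixel_loss(x, y) + b * frequency_loss(x, y) + c * perceptual

x = np.full((8, 8), 0.5)
f = np.zeros(4)
# total_loss(x, x, f, f) is essentially zero; any mismatch in pixels,
# k-space, or features raises the corresponding term.
```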
As expected, the utilisation of the pixel-wise loss and the frequency loss has mainly constrained the fidelity of reconstruction, and the utilisation of the perceptual VGG loss has focused on perception, which is well-related to the human visual system. Therefore, the utilisation of frequency loss has had a positive impact on SSIM and PSNR, which are more sensitive to the fidelity of reconstruction. The utilisation of perceptual loss has had a positive impact on FID, which is based on perception. There are still some limitations of our work. First, in the (S)W-MSA operation, the size of windows is fixed. Inspired by GoogLeNet, multi-scale windows could be incorporated and results from different scales could be merged in the (S)W-MSA. Second, the heavy computational cost is still an obstacle to the development of transformers. The improvement that transformers bring comes at the sacrifice of increased computational cost. A lightweight transformer model could be a potential future research direction. \section{Conclusion} In this work, we have developed the SwinMR, a novel parallel imaging coupled Swin transformer-based model for fast multi-channel MRI reconstruction. The proposed method has outperformed other benchmark CNN-based and GAN-based MRI reconstruction methods. It has also shown excellent robustness using different undersampling trajectories with various noise levels.
\section*{Acknowledgement} This work was supported in part by the UK Research and Innovation Future Leaders Fellowship [MR/V023799/1], in part by the Medical Research Council [MC/PC/21013], in part by the European Research Council Innovative Medicines Initiative [DRAGON, H2020-JTI-IMI2 101005122], in part by the AI for Health Imaging Award [CHAIMELEON, H2020-SC1-FA-DTS-2019-1 952172], in part by the British Heart Foundation [Project Number: TG/18/5/34111, PG/16/78/32402], in part by the Project of Shenzhen International Cooperation Foundation [GJHZ20180926165402083], in part by the Basque Government through the ELKARTEK funding program [KK-2020/00049], and in part by the consolidated research group MATHMODE [IT1294-19]. \newpage \bibliographystyle{elsarticle-num}
\section{Introduction} \label{sec:intro} Over the past decade, as smartphone cameras started to rival and even beat DSLR cameras in resolution, a large number of ultra high resolution (UHR) images, each of more than ten million pixels, are produced, stored and transmitted every day. However, these images are almost always downsampled from their original resolution to fit the small displays of mobile end devices. Also in web applications, users often want to browse through a set of images quickly in relatively low resolution to save time and communication bandwidth. All these practices raise a question: why not keep UHR images in a downsampled version for routine uses, and only upsample to the original ultra high resolution when needed, say to be exhibited on very high-resolution displays? Such a two-layer image representation, in our opinion, improves both operability and bandwidth economy. One can appreciate the savings in storage and bandwidth for internet content providers if UHR images are coded in this new representation. These observations motivate our research on jointly optimizing operation pairs of downsampling and upsampling that are spatially adaptive to image contents for maximal rate-distortion performance. Image downsampling is one of the most common image processing operations, aiming to reduce the resolution of the high-resolution (HR) image while retaining the maximum amount of information. According to the Nyquist-Shannon sampling theorem~\cite{shannon1949communication}, high-frequency contents will inevitably get lost after downsampling. Opposite to image downsampling is image upsampling, also known as super-resolution (SR), with the goal of recovering the underlying HR image from the given LR input. Image SR is an ill-posed inverse problem because an undersampled image can be the result of downsampling many different HR images.
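The ill-posedness noted above is easy to demonstrate: a fixed downsampling kernel maps many distinct HR images onto the same LR image, so no upsampler can invert it uniquely. A toy sketch using plain 2x2 average pooling (the simplest fixed kernel, chosen here for illustration only):

```python
import numpy as np

def downsample2x(x):
    """2x2 average pooling: a fixed, content-independent downsampling kernel."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Two different HR images ...
hr_a = np.array([[0.0, 4.0], [4.0, 0.0]])   # checkerboard edge
hr_b = np.array([[2.0, 2.0], [2.0, 2.0]])   # flat patch
# ... collapse to the same 1x1 LR image, so the high-frequency
# content of hr_a is unrecoverable from the LR layer alone.
assert np.allclose(downsample2x(hr_a), downsample2x(hr_b))
```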
Due to this uncertainty, the image SR methods are bound to erase or distort high-frequency features~\cite{yang2017image,yang2010image,dong2011image,dong2010super}. Previously, image downsampling and image SR are studied as separate problems. In this paper, we jointly design image down- and up-sampling operators and propose a new methodology of adaptive downsampled dual-layer image compression (ADDL). The paired down- and up-sampling tasks are fulfilled by neural networks. In the proposed ADDL image compression system, an image is downsampled by a bank of learned content-adaptive downsampling kernels and then compressed to get the low-resolution (LR) layer. The LR layer can be decompressed and used as is, and it can also be upconverted, if so wished, to the original high resolution using a deep upsampling neural network. The latter task is aided by the prior knowledge of the learned adaptive downsampling kernels. Unlike the existing method~\cite{sun2020learned} that directly optimizes the weights of the downsampling kernels using a deep neural network, we restrict the downsampling kernels to the form of Gabor filters and optimize the filter parameters at each pixel location. This allows us to greatly reduce the complexity of the downsampling kernel optimization network. In addition, downsampling kernels contain the information of the high-frequency content which is lost in the downsampling process, such as the orientation of the edges, so they are useful for reconstructing the high-frequency textures in the upconversion. However, transmitting the downsampling kernels costs extra bits and hence reduces coding efficiency, even though we only need to transmit the parameters of the downsampling Gabor filters. Instead, we propose to predict the learned Gabor filter parameters from the compressed low-resolution layer using a lightweight network, and then further quantize the prediction residues for transmission. 
In synchronization, the decoder uses the same network to predict the downsampling filter parameters from the received low-resolution layer and adds the quantized residues back to estimate the Gabor filter parameters used by the encoder. To optimize the upsampling process, we design an upsampling network which incorporates the compressed low-resolution layer and the content-adaptive Gabor filter parameters to reconstruct the high-resolution image. Specifically, we modulate the standard convolution by the Gabor filter parameters to obtain spatially variant convolution and build the upsampling network using the so-called Gabor-filter-induced spatially adaptive convolution (GSAC). A highly desirable property of the ADDL image compression strategy is its scalability, which is important to omnipresent wireless visual communications practised at homes and offices. In wireless networks the bandwidth is always at a premium, and end devices have diverse display capabilities, ranging from small screens of cell phones to regular screens of laptops, and even to very large displays and projection screens. In such heterogeneous wireless environments, existing scalable or layered image compression methods (e.g., JPEG 2000) are less efficient than ADDL, because the refinement portion of the scalable code stream still consumes significant bandwidth and yet generates no benefits for low-resolution devices. Furthermore, because the down-sampled image is only a small fraction of the original size, ADDL greatly reduces the encoder complexity, regardless of what third-party codec is used in conjunction. This property allows the system to shift the computation burden from the encoder to the decoder, making ADDL an attractive asymmetric compression solution when the encoder is resource-deprived. The paper is organized as follows. Section~\ref{sec:related} describes the related works.
Section~\ref{sec:addl} presents the main contribution of this paper: the design of the proposed ADDL image compression system. We report and discuss the experimental results in Section~\ref{sec:exps}. Section~\ref{sec:Conclusion} concludes the paper. \section{Related Work} \label{sec:related} \subsection{Image super resolution and rescaling} Image super resolution aims to reconstruct the underlying high-resolution image given the downsampled low-resolution image. After SRCNN~\cite{dong2014srcnn}, the first CNN-based super-resolution method, many other CNN-based models have been proposed in recent years~\cite{dong2014srcnn,kim2016vdsr,ledig2017SRGAN,wang2018esrgan,zhang2018rcan,zhang2018RDN,liu2020RFANet,zhang2020usrnet,liang21swinir,kai2021bsrgan,liang21manet,zhang2021DPIR}. However, these newly proposed methods tend to produce over-smoothed results when trained with the pixel loss. To solve this problem, the perceptual loss~\cite{johnson2016perceptual,wang2018esrgan} and GAN loss~\cite{goodfellow2014generative,ledig2017SRGAN,wang2018esrgan,zhang2019ranksrgan} are proposed to enhance the details of the generated results and improve the perceptual quality. A related problem is image rescaling. It is to downsample the HR image to a visually meaningful LR image, while facilitating upscaling back to the original HR image. Different from image super resolution that works on a given downsampling scheme (e.g. bicubic downsampling), image rescaling tries to retain as much information in the LR image as possible for a better subsequent HR reconstruction. Image rescaling is mainly used to support resolution conversions between high- and low-resolution displays. In general, in image rescaling, the downsampling and upsampling processes are jointly modelled and optimized by an encoder-decoder framework~\cite{kim2018task,li2018learning,sun2020learned,xiao2020IRN}, so that the downsampling model is optimized for the later upsampling operation.
\subsection{Image compression artifacts reduction} There is a large body of literature on removing compression artifacts in images~\cite{rw_foi,rw_zhang,rw_li,rw_chang,rw_dar,rw_liu,zhou2011,zhou2012,shu2017,calic,davd,mmsd,mdvd}. The majority of the studies on the subject focus on post-processing JPEG images to alleviate compression noises, apparently because JPEG is the most widely used lossy compression standard. Inspired by successes of deep learning in image restoration, a number of CNN-based compression artifacts removal methods were developed~\cite{ARCNN,rw_svoboda,CAR_guo,CAR_galteri}. Borrowing the CNN designs for super-resolution (SRCNN), Dong~\textit{et~al.}~\cite{ARCNN} proposed an artifact reduction CNN (ARCNN). The ARCNN has a three-layer structure: a feature extraction layer, a feature enhancement layer, and a reconstruction layer. This CNN structure is designed in the principle of sparse coding. It was improved by Svoboda~\textit{et~al.}~\cite{rw_svoboda} who combined residual learning and symmetric weight initialization. Guo~\textit{et~al.}~\cite{CAR_guo} and Galteri~\textit{et~al.}~\cite{CAR_galteri} proposed to reduce compression artifacts by Generative Adversarial Network (GAN), as GAN is able to generate sharper image details. Zhang~\textit{et~al.}~\cite{ultra} proposed to incorporate an $\ell_\infty$ fidelity criterion in the design of networks to protect small, distinctive structures in the framework of near-lossless image compression. Mukati~\textit{et~al.}~\cite{mukati2022deep} proposed a novel $\ell_\infty$-constrained light-field image compression system that has a very low-complexity DPCM encoder and a CNN-based deep decoder. \subsection{Learning based image compression} Much progress has been made on learning based image compression after the pioneering work of Toderici~\textit{et~al.} \cite{toderici2015} to exploit recurrent neural networks for learned image compression. 
To make the network end-to-end trainable, the non-differential quantization is shown to be approximated by a differentiable process, so is context modeling in entropy coding~\cite{balle2016,theis2017,agustsson2017}. After that, a number of methods focusing on the network design are proposed. Johnston~\textit{et~al.} \cite{johnston2018} published a spatially adaptive bit allocation algorithm that efficiently uses a limited number of bits to encode visually complex image regions. Rippel~\textit{et~al.} \cite{rippel2017,agustsson2019} proposed to learn the distribution of images using adversarial training to achieve better perceptual quality at extremely low bit rate. Li~\textit{et~al.} \cite{li2018} developed a method to allocate the content-aware bit rate under the guidance of a content-weighted importance map. Some recent papers focused on investigating the adaptive context model for entropy estimation to achieve a better trade-off between reconstruction errors and required bits (entropy)~\cite{mentzer2018,balle2018,minnen2018,nonlinear,lee2018}, among which the CNN methods of \cite{minnen2018,lee2018} are the first to outperform BPG in PSNR. Choi~\textit{et~al.} \cite{choi2019} published a novel variable-rate learned image compression framework with a conditional auto-encoder. Cheng~\textit{et~al.} \cite{cheng2020} proposed to use discretized Gaussian Mixture Likelihoods to parameterize the distributions of latent codes and achieved a more accurate and flexible entropy model. Zhang~\textit{et~al.} \cite{agdl} proposed a deep learning system for attention-guided dual-layer image compression (AGDL) by introducing a novel idea of critical pixel set. \begin{figure*}[t] \centering \includegraphics[width=0.99\textwidth]{figure/addl_framework.pdf} \caption{The overall framework of the proposed ADDL image compression system. 
It consists of a content-adaptive downsampling encoder and a spatially-varying upsampling decoder.} \label{addl} \end{figure*} \section{ADDL Compression System} \label{sec:addl} \subsection{Overview} In this section, we design the proposed ADDL image compression system and present the following two key technical developments, which are also the main contributions of this work. 1. learning content-adaptive downsampling kernels in the form of Gabor filters; 2. upconverting the downsampled and compressed image to its original resolution using a deep upsampling neural network, aided by the prior knowledge of the learned adaptive downsampling kernels. The overall framework of the proposed ADDL image compression system is shown in Fig.~\ref{addl}. The system consists of a content-adaptive downsampling encoder and an upsampling decoder. Given an HR image $X$ to be compressed, the ADDL compression system first estimates the optimal downsampling kernels at each spatial location, then applies the estimated downsampling kernels to $X$ to produce the downsampled image $Y$. In our design, the downsampling kernels are restricted to the form of parametric Gabor filters to simplify the task and the architecture of the downsampling kernel optimization network. The parametric filter representation also reduces the side information needed to be transmitted to aid the upsampling at the decoder. Next, the downsampled image $Y$ is compressed by any traditional image compressor (e.g. JPEG, JPEG 2000, WebP, BPG) to get the base layer $\hat{Y}$ for transmission and storage. To achieve a better upconversion result, the ADDL system not only transmits the base layer (downsampled and compressed image) $\hat{Y}$, but also the learned Gabor filter parameters.
The upsampling decoder takes the base layer (compressed downsampled image) $\hat{Y}$ and the Gabor filter parameters as input to produce the reconstructed HR image $\hat{X}$ by a network designed for joint super resolution and compression artifact reduction. As illustrated by Fig.~\ref{addl}, the encoder and decoder networks, together with the spatially varying Gabor filters, are jointly optimized, via an end-to-end deep learning process, for the objective of minimizing the final reconstruction error $\| X-\hat{X} \|$. \subsection{Content adaptive downsampling} Traditional downsampling methods, such as bilinear or bicubic, all have fixed downsampling kernels. This content-independent approach is obviously not optimal and prone to aliasing artifacts. In the ADDL system, spatially adaptive downsampling kernels are used for maximum information preservation. The downsampling kernel is optimized for each pixel using the corresponding context in the HR image. Different from the existing method~\cite{sun2020learned} that directly optimizes the weights of the downsampling kernels, we restrict the downsampling kernels to the form of Gabor filters and optimize only the parameters. In the spatial domain, a 2-D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave: \begin{equation} \begin{split} G (x, y; \lambda, \theta, \psi, \sigma, \gamma) & = e^{-\frac{x'^2 + \gamma^{2}y'^2} {2\sigma^2}} \cos\left(\frac{2\pi}{\lambda} x' + \psi\right) \\ x' & = x \cos\theta + y \sin\theta \\ y' & = -x \sin\theta + y \cos\theta \end{split} \label{eq:gabor} \end{equation} where $\lambda$ represents the wavelength of the sinusoidal factor, $\theta$ represents the orientation of the normal to the parallel stripes of the Gabor function, $\psi$ is the phase offset, $\sigma$ is the standard deviation of the Gaussian envelope and $\gamma$ is the spatial aspect ratio, which specifies the ellipticity of the support of the Gabor function.
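Sampling the Gabor function above on a discrete grid yields a concrete downsampling kernel for a given parameter vector. A minimal sketch (kernel size and parameter values are illustrative, not the paper's settings):

```python
import numpy as np

def gabor_kernel(size, lam, theta, psi, sigma, gamma):
    """Sample the 2-D Gabor filter G(x, y; lam, theta, psi, sigma, gamma)
    on a size x size grid centred at the origin."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)    # x'
    yp = -x * np.sin(theta) + y * np.cos(theta)   # y'
    envelope = np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xp / lam + psi)
    return envelope * carrier

# A 5x5 kernel whose stripes are oriented by theta = 0:
k = gabor_kernel(5, lam=4.0, theta=0.0, psi=0.0, sigma=2.0, gamma=1.0)
# centre tap: envelope = exp(0) = 1 and carrier = cos(psi) = 1
```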
The parametric representation of the Gabor filter reduces the problem dimensionality to five from the size of the convolution kernel, and hence can lead to a simpler CNN model, called Gabor-Net, for the task of optimizing downsampling filter kernels. Another reason for our choice of the filter type is that the frequency and orientation characteristics of Gabor filters are similar to those of the human visual system~\cite{olshausen1996emergence}. The architecture of Gabor-Net $\mathcal{G}$ is shown in Fig.~\ref{gabornet}. It is a U-Net-like encoder-decoder network, trained to optimize the five parameters $\{\lambda, \theta, \psi, \sigma, \gamma\}$ of the Gabor filters for each pixel. The encoder part has an input convolution layer and five stages comprised of a max-pooling layer followed by two convolutional layers. The input layer has 32 convolution filters with a size of 3$\times$3 and a stride of 1. The first stage is size-invariant and the other four stages gradually reduce the feature map resolution by max-pooling to obtain a larger receptive field. The decoder is almost symmetrical to the encoder. Each stage consists of a bilinear upsampling layer followed by two convolution layers and a ReLU activation function. The input of each layer is the concatenated feature maps of the up-sampled output from its previous layer and its corresponding layer in the encoder. For a 2-D input image $X$ of size $H \times W$, since Gabor-Net learns an optimal Gabor filter at each position for downsampling, the output of Gabor-Net will be five 2-D maps $I_G = \{I_\lambda, I_\theta, I_\psi, I_\sigma, I_\gamma \}$ of total size $5 \times \frac{H}{2} \times \frac{W}{2}$, representing the five Gabor filter parameters, respectively. The learned optimal Gabor filters are used to downsample the original image $X$ to produce the low-resolution image $Y$. The resulting low-resolution $Y$ can be further compressed by any traditional image compressor, e.g. JPEG, WebP, BPG, etc., and transmitted to the receiver.
\begin{figure*}[t] \centering \includegraphics[width=0.98\linewidth]{figure/addl_gabornet.pdf} \caption{The architecture of the proposed Gabor-Net. It is a U-Net-like Encoder-Decoder network, trained to optimize the five parameters $\{\lambda, \theta, \psi, \sigma, \gamma\}$ of the Gabor filters for each pixel.} \label{gabornet} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{figure/addl_prednet.pdf} \caption{The architecture of the lightweight prediction network designed for predicting the Gabor filter parameters from the downsampled and compressed base layer image.} \label{prednet} \end{figure} \subsection{Predictive coding of Gabor parameters} The learned downsampling kernels contain information about the high-frequency content that is lost in the downsampling process, such as the orientations of the edges, so they are useful for reconstructing the high-frequency textures in the upconversion. However, this benefits compression only if we have a way of efficiently coding the Gabor filter parameters. Instead of directly transmitting the Gabor filter parameters $I_G$, we first predict $I_G$ from the compressed low-resolution layer $\hat{Y}$ using a lightweight network, and further quantize the prediction residues for transmission. The use of the predictive coding strategy~\cite{calic} can greatly reduce the bit budget for transmitting the filter parameters while controlling the compression distortion to be below a threshold. Specifically, we design a lightweight prediction network $\mathcal{L}$ (see Fig.~\ref{prednet}) to predict the Gabor filter parameters $I_G$ from $\hat{Y}$. The prediction network $\mathcal{L}$ has a similar architecture to the Gabor-Net, except that the number of convolution layers and convolution kernels per layer are decreased to reduce the complexity. Let the output of the prediction network $\mathcal{L}$ be $I_G' = \mathcal{L}(\hat{Y})$.
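The encode/decode round trip of this predictive coding step can be sketched in a few lines; a minimal sketch, assuming a uniform quantizer with step $q$ (the step value and array shapes are illustrative):

```python
import numpy as np

def encode_residue(I_G, I_G_pred, q):
    """Quantize the prediction residue E_G = I_G - I_G' with step q; only
    these integer indices need to be entropy coded and transmitted."""
    return np.round((I_G - I_G_pred) / q).astype(np.int64)

def decode_params(E_hat, I_G_pred, q):
    """Decoder side: run the same predictor on the decoded base layer to
    obtain I_G', then add back the dequantized residue."""
    return I_G_pred + q * E_hat
```

Because both sides run the same predictor on the same decoded $\hat{Y}$, the reconstruction error per parameter is bounded by half the quantization step.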
The prediction residue $E_G = I_G - I_G'$ has a lower entropy than $I_G$. We next quantize the prediction residues $E_G$ to further reduce the required transmission bit rate. Finally, we only need to transmit the quantized prediction residues $\hat{E_G}$, that is: \begin{equation} \hat{E_G} = \mathcal{Q}[ I_G - \mathcal{L}(\hat{Y}) ] \label{quan} \end{equation} where $\mathcal{Q}$ represents the quantization function. By adjusting the quantization step in $\mathcal{Q}$, we can control the bit rate of the transmitted quantized prediction residues so that it does not exceed 20\% of the bit rate of the downsampled and coded base layer. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{figure/addl_pac.pdf} \caption{Comparison between the standard convolution and the proposed Gabor-filter-induced spatially adaptive convolution.} \label{pac} \end{figure} \subsection{Upsampling with spatially adaptive convolution} The upsampling decoder aims to upconvert the compressed downsampled image $\hat{Y}$ to the original resolution and reduce the compression artifacts. This task is closely related to existing image super-resolution techniques~\cite{liu2020RFANet,zhang2020usrnet,liang21swinir,kai2021bsrgan,liang21manet,zhang2021DPIR}. However, the existing super-resolution methods all adopt spatially invariant convolution as the basic unit to construct the deep neural network. This is not the optimal choice for our task. Instead, the ADDL decoder performs spatially varying upsampling to match the content-adaptive downsampling at the encoder. It adopts pixel-adaptive convolution~\cite{pac} as the basic unit to construct the upsampling neural network. A standard convolution operation can be defined as: \begin{equation} y_i = W \cdot x_i + b \end{equation} where $x_i$ is the $i$-th convolution window, and $W$ and $b$ are the convolution weight and bias.
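To make the distinction concrete, here is a minimal sketch contrasting such a spatially invariant convolution with a per-window modulated variant of the kind used in the next subsection (single channel, valid padding; the shapes are illustrative, and in the paper the modulation acts on feature maps rather than raw pixels):

```python
import numpy as np

def standard_conv(x, W, b):
    """y_i = W . x_i + b: one shared k x k kernel, valid padding, 1 channel."""
    k = W.shape[0]
    out = np.empty((x.shape[0] - k + 1, x.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (W * x[i:i + k, j:j + k]).sum() + b
    return out

def modulated_conv(x, W, b, F):
    """y'_i = (W * f_i) . x_i + b: the shared kernel W is rescaled at every
    position by the window f_i of the guidance features F (same size as x),
    making the convolution spatially adaptive."""
    k = W.shape[0]
    out = np.empty((x.shape[0] - k + 1, x.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = x[i:i + k, j:j + k]
            f = F[i:i + k, j:j + k]
            out[i, j] = ((W * f) * win).sum() + b
    return out
```

With constant guidance ($F \equiv 1$) the two operators coincide, which is a convenient sanity check.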
Departing from this tradition, we propose a Gabor-filter-induced spatially adaptive convolution (GSAC), as shown in Fig.~\ref{pac}. In GSAC, the Gabor filter parameters $I_G$ are fed into two convolutional layers to extract features $F_G$, and the resulting $F_G$ are used to modulate the standard convolution to make it spatially adaptive. This process can be formulated as: \begin{equation} y'_i = (W \cdot f_i) \cdot x_i + b \end{equation} where $f_i$ represents the $i$-th convolution window in $F_G$. We design an upsampling network with the proposed GSAC method, as shown in Fig.~\ref{upsampling}. It consists of 16 residual blocks with GSAC, in each of which there are two GSAC layers and two standard convolution layers. To avoid interference of different textures in one image, we restrict the receptive field of the GSAC by using $1 \times 1$ convolution kernels. The standard convolution layers all adopt $3 \times 3$ convolution kernels. Skip connections are used to ease the training of the deep CNN. The upsampling layer is implemented by transposed convolution. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{figure/addl_upsampling.pdf} \caption{Architecture of the proposed upsampling neural network.} \label{upsampling} \end{figure} We now summarize the overall pipeline of the ADDL compression system in Algorithm~\ref{alg_agdl}. \begin{algorithm}[!t] \caption{Framework of the ADDL compression system.} \label{alg_agdl} \hspace*{0.02in} {\bf Input:} The original image, $X$; \\ \hspace*{0.02in} {\bf Output:} The decoded image, $\hat{X}$; \\ \hspace*{0.02in} {\bf Encoding:} \begin{algorithmic}[1] \STATE Learn content adaptive Gabor parameters $I_G$ from $X$; \STATE Calculate the Gabor downsampling kernels according to the learned parameters $I_G$; \STATE Downsample $X$ using the Gabor filter kernels to produce the low-resolution $Y$; \STATE Compress $Y$ using a traditional image compressor (e.g.
JPEG) to $\hat{Y}$; \STATE Predict $I_G'$ from $\hat{Y}$ and quantize the prediction residues to get $\hat{E_G} = \mathcal{Q}[ I_G - I_G']$; \STATE Transmit $\hat{Y}$ and $\hat{E_G}$. \end{algorithmic} \hspace*{0.02in} {\bf Decoding:} \begin{algorithmic}[1] \STATE Decode $\hat{Y}$ and $\hat{E_G}$ from the bit stream; \STATE Predict $I_G'$ from $\hat{Y}$ and reconstruct the Gabor filter parameters by $\hat{I_G} = I_G'+\hat{E_G}$; \STATE Construct the GSAC layer using $\hat{I_G}$ and build the upsampling network; \STATE Reconstruct the high-resolution image $\hat{X}$. \end{algorithmic} \end{algorithm} \begin{figure*}[t] \centering \includegraphics[width=0.49\linewidth]{figure/rd_classic5.pdf} \hfill \includegraphics[width=0.49\linewidth]{figure/rd_live1.pdf} \\ \includegraphics[width=0.49\linewidth]{figure/rd_bsd.pdf} \hfill \includegraphics[width=0.49\linewidth]{figure/rd_icb.pdf} \caption{RD curves of JPEG + ADDL scheme and other competing methods on Classic5, LIVE1, BSD500 and ICB datasets.} \label{fig:rd_jpeg} \end{figure*} \section{Experiments} \label{sec:exps} In this section, we present the implementation details of the proposed ADDL image compression system. To systematically evaluate and analyze the performance of the ADDL compression system, we conduct extensive experiments and compare our results with several state-of-the-art methods. \subsection{Data preparation and network training} For training the proposed ADDL compression system, we use the widely used high-quality 2K-resolution image dataset DIV2K~\cite{DIV2K} as our training data. The set consists of 800 training images and 100 validation images. For testing, we evaluate the trained model on the four commonly used benchmarks: Classic5~\cite{classic5}, LIVE1~\cite{LIVE1}, BSD500~\cite{BSD100} and ICB~\cite{icb}, and report the performances. The whole pipeline of the proposed ADDL system is too complex to be trained end to end in a single stage.
For this reason, we first train the Gabor-Net and the upsampling module without the side information of Gabor filter parameters in an end-to-end manner. After that, we train the prediction network $\mathcal{L}$ to predict the Gabor filter parameters from the downsampled and JPEG-compressed images. Finally, we fine-tune the upsampling network by incorporating the transmitted Gabor filter parameters. During training, we randomly extract patches of size $256 \times 256$. All training processes use the Adam~\cite{adam} optimizer by setting $\beta_1 = 0.9$ and $\beta_2 = 0.999$, with a batch size of $64$. The learning rate starts from $1 \times 10^{-4}$ and decays by a factor of 0.5 every $4 \times 10^{4}$ iterations and finally ends with $1.25 \times 10^{-5}$. The $\mathcal{L}_1$ loss is adopted to optimize all networks in the ADDL compression system. We train our model with PyTorch on four NVIDIA GeForce GTX 2080Ti GPUs. It takes about two days to converge. All the training and evaluation processes are performed on the luminance channel (in YCbCr color space). We choose JPEG as the traditional image compressor in ADDL as it is by far the most common image compression method. During training the JPEG quality factor is randomly sampled from 10 to 90. However, the JPEG compression algorithm contains a quantization/rounding operation $\lfloor \cdot \rceil$ whose derivative is 0 nearly everywhere, which is incompatible with gradient-based optimization, so it cannot be directly embedded into the training process. To solve this problem, following the solution in~\cite{shin2017jpeg}, we instead use the approximation $\lfloor x \rceil _{approx}= \lfloor x \rceil + (x - \lfloor x \rceil)^{3}$, which has non-zero derivatives nearly everywhere and stays close to $\lfloor x \rceil$. We also build the ADDL system with other traditional image compressors (such as BPG) and report the experimental results in the following subsections.
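The rounding surrogate used above is easy to check numerically; a minimal sketch (not the authors' code):

```python
import numpy as np

def soft_round(x):
    """Differentiable surrogate for round-to-nearest: true rounding plus a
    cubic correction whose derivative 3 * (x - round(x))**2 is non-zero
    except exactly at integer points, so gradients can flow through the
    JPEG quantization step."""
    r = np.round(x)
    return r + (x - r) ** 3
```

Since $|x - \lfloor x \rceil| \le 0.5$, the surrogate never deviates from true rounding by more than $0.5^3 = 0.125$.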
\begin{figure*}[t] \centering \includegraphics[width=0.88\linewidth]{figure/demo_church_down.png} \caption{Visual comparisons of JPEG + ADDL scheme and other competing methods on the image 'LIVE1: church'. BD means bicubic downsampling.} \label{church} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.88\linewidth]{figure/demo_bikes_down.png} \caption{Visual comparisons of JPEG + ADDL scheme and other competing methods on the image 'LIVE1: bikes'. BD means bicubic downsampling.} \label{bikes} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.88\linewidth]{figure/demo_carnivaldolls_down.png} \caption{Visual comparisons of JPEG + ADDL scheme and other competing methods on the image 'LIVE1: carnivaldolls'. BD means bicubic downsampling.} \label{carnivaldolls} \end{figure*} \begin{figure*}[!h] \centering \includegraphics[width=0.88\linewidth]{figure/demo_sailing3_down.png} \caption{Visual comparisons of JPEG + ADDL scheme and other competing methods on the image 'LIVE1: sailing3'. BD means bicubic downsampling.} \label{sailing3} \end{figure*} \subsection{ADDL with JPEG} To demonstrate the advantages of the proposed ADDL compression system, we compare ADDL with several other compression systems, in which JPEG is also used as the image compressor and different learning-based post-processing algorithms (super-resolution, compression artifact reduction) are used for restoring/enhancing the compressed images. We divide these competing compression systems into two categories:\\ \textbf{1. JPEG + Deblocking}. In this category, several learning-based deblocking (also called compression artifact reduction) methods: DnCNN~\cite{dncnn}, DCSC~\cite{dcsc}, QGAC~\cite{qgac}, FBCNN~\cite{fbcnn} are combined with the JPEG compressor to form the competing image compression systems.\\ \textbf{2. Downsampling + JPEG + Deblocking + SR}.
This competing system consists of: downsampling by fixed kernels, traditional compression, and joint CNN-based deblocking and super-resolution. We choose bicubic downsampling, FBCNN~\cite{fbcnn}, and RCAN~\cite{rcan} as the downsampling, deblocking, and super-resolution methods, respectively. In the ADDL system, the total bit rate to be transmitted is the sum of the rates of the JPEG-coded low-resolution layer and the quantized prediction residues of the learned content adaptive Gabor filter parameters. To facilitate fair rate-distortion performance evaluations, for each test image, the rates of the competing compression systems are adjusted to match or be slightly higher than that of the ADDL compression system. It is noteworthy that by adjusting the quantization step of $\mathcal{Q}$ in Eq.~\ref{quan}, for different bit rates (or compression ratios), the bit rates of the quantized prediction residues of the Gabor filter parameters are controlled to be about 20\% of the bit rates of the JPEG-coded base layer. \textbf{Quantitative evaluation.} We present rate-distortion (RD) curves of the competing methods in Fig.~\ref{fig:rd_jpeg}. As shown in the figure, the proposed ADDL compression system outperforms all the competing image compression methods consistently in PSNR measure at low to medium bit rates. For the Classic5, LIVE1 and BSD500 datasets, the proposed ADDL achieves rate-distortion performance superior to the best of the other methods (JPEG + FBCNN) for bit rates lower than 0.5 bpp. For the ICB dataset, ADDL beats all other methods for bit rates lower than 0.9 bpp. \textbf{Perceptual quality comparison.} The perceptual qualities of competing methods, given the same bit rate, are compared in Figs.~\ref{church}, ~\ref{bikes}, ~\ref{carnivaldolls} and ~\ref{sailing3}. It can be seen that ADDL preserves high-frequency textures, such as meshes and letters, much better than the other compression methods.
At modest bit rates, ADDL achieves visually transparent quality compared with the ground truth, while the other methods still suffer from highly noticeable distortions. These figures clearly demonstrate the advantage of the content-dependent downsampling of ADDL (exhibit (g)) over spatially-invariant bicubic downsampling (exhibit (f)). \subsection{ADDL with BPG} In addition to JPEG, we also build the ADDL system with BPG, the most powerful traditional image compressor, and compare the BPG + ADDL scheme with end-to-end optimized image compression methods. We conduct experiments on the Kodak and CLIC Pro datasets and present the rate-distortion curves in Fig.~\ref{fig:rd_bpg}. It can be seen that the BPG + ADDL compression system outperforms all the end-to-end optimized image compression methods consistently in PSNR measure at low bit rates. \begin{figure*}[t] \centering \includegraphics[width=0.49\linewidth]{figure/addl_bpg_kodak.png} \hfill \includegraphics[width=0.49\linewidth]{figure/addl_bpg_clicpro.png} \caption{RD curves of BPG + ADDL scheme and other end-to-end optimized image compression methods on Kodak and CLIC Pro datasets.} \label{fig:rd_bpg} \end{figure*} \subsection{Visualization of Gabor filter parameters} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figure/bikes_params.png} \caption{Visualization of the learned Gabor filter parameters.} \label{params_vis} \end{figure} To have a more intuitive understanding of the learned Gabor downsampling kernels, let us visualize the learned Gabor filter parameters in Fig.~\ref{params_vis}. It can be observed that the learned Gabor filter parameters are highly correlated with the image features and structures, especially for parameter $\theta$, which determines the orientation of the Gabor downsampling kernels. This explains why spatially adaptive Gabor downsampling kernels can preserve high-frequency information, and also why predictive coding of Gabor filter parameters works.
\begin{figure}[!h] \centering \includegraphics[width=0.95\linewidth]{figure/rd_ablation.pdf} \caption{Ablation RD curves on LIVE1 dataset.} \label{rd_ablation} \end{figure} \subsection{Ablation Study} In this subsection, we test various ablations of our full architecture to evaluate the effects of each component of the proposed ADDL compression system. We first evaluate whether the Gabor filter parameters should be transmitted and how to transmit them to the receiver. We build an ablation architecture that does not transmit the Gabor filter parameters at all (called ADDL\_w/o\_transmitting\_params) and another ablation architecture that directly transmits the Gabor filter parameters without predictive coding (called ADDL\_w/o\_predictive\_coding). The performances of these two architectures are shown in Fig.~\ref{rd_ablation}. We can see that these two ablation architectures perform much worse than the full ADDL architecture in rate-distortion measure. We also build another ablation architecture (ADDL\_w/o\_GSAC), which transmits the Gabor filter parameters using the proposed predictive coding method, but whose decoder uses only the received Gabor filter parameters without the GSAC module in the HR reconstruction. We present the rate-distortion curve of this case in Fig.~\ref{rd_ablation}. It can be seen that without the GSAC module, the performance of the ADDL compression system drops slightly, which shows the effectiveness of the proposed GSAC module. \section{Conclusion} \label{sec:Conclusion} We propose, implement and evaluate the new deep learning based ADDL image compression system. The key idea is to code an image into a compact two-layer representation: a base layer that is generated by learned content-adaptive downsampling, and a refinement layer that is generated by a deep upsampling network.
The ADDL encoder and decoder collaborate through the sharing of information on spatially varying Gabor downsampling filters. \bibliographystyle{IEEEtran}
\section{Introduction} The great majority of phase transitions are characterized by spontaneous symmetry breaking and can be described by the qualitative changes in the order parameter behavior associated with long-range order \cite{Landau_1,Bogolubov_2}. This concerns both first-order and second-order phase transitions. There are also so-called topological phase transitions \cite{Fradkin_3} that are not necessarily accompanied by symmetry breaking, but exhibit changes in the behavior of correlation functions and of reduced density matrices, connected with a kind of quasi-long-range or mid-range order \cite{Coleman_4}. In all cases, even when order parameters cannot be defined, different phases can be classified by order indices of density matrices \cite{Coleman_5,Coleman_6,Coleman_7,Coleman_8} quantifying all types of order, be they long-range or mid-range. Phase transitions between equilibrium states of matter are induced by the variation in thermodynamic parameters or static external fields. Similarly, the appearance of new properties in a nonequilibrium system can be induced by alternating fields, especially when some resonances occur. In the present paper, we advocate the point of view that resonance phenomena can be treated as nonequilibrium phase transitions. As in the case of equilibrium phase transitions, resonance phenomena are accompanied by qualitative changes in their macroscopic properties, which makes it possible to introduce related order parameters. In many cases, resonance phenomena exhibit a kind of symmetry breaking. We illustrate these properties by considering several resonances for which we define the related order parameters and show that the associated symmetry can become broken. As examples, we consider spin-wave resonance, helicon resonance, and spin-reversal resonance. Let us emphasize that we do not claim that resonance phenomena are exactly the same as equilibrium phase transitions.
This is evidently not so, since resonance phenomena describe nonequilibrium processes. However, we show that both these phenomena can share two, probably the most important, properties: First, there can occur some symmetry breaking in a region around the resonance. Second, it is possible to introduce order parameters distinguishing between qualitatively different states of the considered system. These similarities justify the comparison between resonance phenomena and phase transitions. We do not prescribe in advance what would be the order of transition in particular resonance phenomena. As always, this is defined by the behavior of the related order parameters. The realistic description of resonance phenomena usually deals with finite systems, since, to realize a resonance, one always needs an alternating external field, created outside of and acting on the system. For finite systems, the problem of the thermodynamic limit does not arise at all. Dealing with finite systems, it is natural to expect that the related resonance transitions will be of continuous type, analogous to transformations in finite equilibrium systems. In the majority of cases, transitions caused by resonances are in fact crossover transitions. An exact resonance, as such, happens at a single point of a varied parameter, usually of frequency. However, this does not mean that the transition is localized at that single point. As we shall see from the examples below, there is a region around the point of the exact resonance where new properties, compared to those in the regions far from the resonance point, arise. Such a region, which can be called the {\it resonance region}, reminds us of the critical region in equilibrium phase transitions. Let us stress again that resonance phenomena are nonequilibrium, and symmetry breaking, generally, drives systems to nonuniform states.
The most convenient way of describing such systems is by considering the dynamics of observable quantities that are defined through the corresponding statistical averages. All the examples we consider below are based on exactly this approach of studying the dynamic behavior of observable quantities. The dynamical equations in all cases are derived from related microscopic theories. Of course, it would be unreasonable to reproduce all these rather complicated derivations in the present paper; this could take hundreds of pages, especially for the realistic problems we deal with. Instead, it is sufficient to give the appropriate references where the reader can find all details. Throughout the paper, we set the Planck constant to unity. \section{Spin-Helicon Waves} First, we consider resonances that occur in a paramagnetic metal subject to the irradiation of an external electromagnetic field. The metal is assumed to have the geometry of a plate in the region $0 < z < L$. There is an external static magnetic field along the $z$ axis, \be \label{1} \bB_0 = B_0 \bfe_z \; \ee and, perpendicular to its surface, the metal is irradiated by an alternating electromagnetic field of frequency $\omega$ much lower than the plasma frequency, \be \label{2} \om \ll \om_p \qquad \left ( \om_p^2 \equiv 4\pi\; \frac{\rho e_0^2}{m} \right ) \; \ee where $\rho\sim 10^{22}$ cm$^{-3}$, $e_0$, and $m$ are the electron density, charge, and mass, respectively. The plasma frequency is $\omega_p \sim 10^{15}$ s$^{-1}$. The static paramagnetic susceptibility in paramagnetic metals is small, so~that \be \label{3} 4\pi\chi \ll 1 \; . \ee Usually, the susceptibility is of order $\chi \sim 10^{-6}$.
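As a quick sanity check on the numbers quoted above, the plasma frequency of Eq. (2) can be evaluated in CGS units (the charge and mass are the standard electron constants; the density is the representative value from the text):

```python
import math

# representative values in CGS units
rho = 1e22       # electron density, cm^-3 (from the text)
e0 = 4.803e-10   # electron charge, esu
m = 9.109e-28    # electron mass, g

# plasma frequency: omega_p^2 = 4 * pi * rho * e0^2 / m
omega_p = math.sqrt(4 * math.pi * rho * e0 ** 2 / m)
```

The result is a few times $10^{15}$ s$^{-1}$, consistent with the order-of-magnitude estimate $\omega_p \sim 10^{15}$ s$^{-1}$ in the text.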
The excitation of waves inside a metallic plate is best achieved by circularly polarized electromagnetic waves \cite{Kaner_9,Falko_10}, and so we consider the electromagnetic fields and magnetic moment in the form of the combinations \be \label{4} H = H_x - i H_y \; , \qquad E = E_x - i E_y \; , \qquad M = M_x - i M_y \; . \ee The temporal behavior is described by $\exp(-i \omega t)$. The coupled Maxwell--Bloch equations for linear field deviations in a paramagnetic metal with weak dispersion and isotropic Fermi surface can be written \cite{Silin_11,Platzmann_12,Yukalov_13,Yukalov_14,Yukalov_15} as $$ \frac{dH}{dz} \; - \; \ep k E = 0 \; , \qquad \frac{dE}{dz} \; + \; k (H + 4\pi M ) = 0 \; $$ \be \label{5} \left (D\; \frac{d^2}{dz^2} \; - \; \om_s + i \nu_s \right ) ( M - \chi H ) + \om M = 0 \; . \ee Here, $k \equiv \omega/c$, the effective dielectric permeability is \be \label{6} \ep = - \; \frac{\om_p^2}{\om(\om-\om_s+ i\nu_0) } \; \ee and the diffusion coefficient reads as \be \label{7} D = \frac{v_F^2(1+\bt_0)(1+\bt_1)}{3(\om-\om_0+ i\nu_0) } \; \ee where \be \label{8} \om_s = \frac{e_0B_0}{mc} \ee is the Larmor spin frequency, and the cyclotron frequency \be \label{9} \om_0 = \frac{1+\bt_1}{1+\bt_0} \; \om_s \ee is renormalized by the Landau Fermi-liquid interaction parameters $\beta_0$ and $\beta_1$. The attenuations \be \label{10} \nu_s = \frac{1+\bt_0}{\tau_s} \; , \qquad \nu_0 = \frac{1+\bt_1}{\tau_0} \ee are defined by the momentum relaxation time, $\tau_0 \sim 10^{-9}$ s, and the spin relaxation time, $\tau_s \sim 10^{-6}$ s. The Fermi velocity of conduction electrons is $v_F \sim 10^8$ cm s$^{-1}$. Equations (\ref{5}) describe the coupled spin-helicon waves with the dispersion relation \be \label{11} q^2 = \ep \mu(q,\om) \; \ee where the effective magnetic permeability is \be \label{12} \mu(q,\om) = \frac{\om-\om_s+i\nu_s-Dq^2}{\om+(1-4\pi\chi)(-\om_s+i\nu_s-Dq^2)} \; .
\ee This gives us two solutions for the characteristic wave vectors, associated with the spin waves, $k_s$, and helicon waves, $k_h$. Taking into account the smallness of the static paramagnetic susceptibility, we~can write \be \label{13} k_s^2 = \frac{\om-\om_s+i\nu_s}{D} \; , \qquad k_h^2 = \ep k^2 \qquad (4\pi\chi \ll 1 ) \ee to zero order in $\chi$, and $$ k_s^2 = \frac{\om-\om_s+i\nu_s}{D} \; \left ( 1 + \frac{4\pi\chi\om}{\om-\om_s+i\nu_s-D\ep k^2} \right ) \; $$ \be \label{14} k_h^2 = \ep k^2 \left ( 1 + 4\pi\chi\; \frac{-\om_s + i\nu_s - D\ep k^2}{\om-\om_s+i\nu_s-D\ep k^2} \right ) \; \ee to first order in $\chi$. The incident and reflected fields are plane waves, with the magnetic components \be \label{15} H_0(z) = H_0 e^{ikz} \; , \qquad H_1(z) = H_1 e^{-ikz} \qquad ( z \leq 0 ) \; , \ee which defines the total magnetic field $H_0(z) + H_1(z)$. Correspondingly, the electric components yield the electric field \be \label{16} E_0(z) + E_1(z) = i [ H_0(z) - H_1(z) ] \qquad ( z\leq 0) \; . \ee Inside the metallic plate, the magnetic field consists of four parts, including two running waves and two waves reflected from the second surface of the plate, \be \label{17} H(z) = H_2 e^{ik_sz} + H_3 e^{-ik_sz} + H_4 e^{ik_hz} + H_5 e^{-ik_hz} \qquad ( 0 \leq z \leq L) \; . \ee The field transmitted through the second surface is \be \label{18} H_6(z) = H_6 e^{ikz} \; , \qquad E_6(z) = i H_6(z) \qquad ( z \geq L ) \; . \ee Equations (\ref{5}) are to be complemented by the boundary conditions. For magnetic and electric fields, there are the standard continuity conditions for their tangential components on each of the surfaces. The spatial structure of the metallic surface influences the spatial distribution function of conducting electrons \cite{Askerov_16}. This can result in the appearance of a magnetic anisotropy on the surface. Several boundary conditions for the magnetization have been studied \cite{Walker_17,Janossy_18,Menard_19,Flesner_20,Silsbee_21,Graham_22}.
A simple boundary condition for the magnetization was proposed by Dyson \cite{Dyson_23}, which reads as \be \label{19} \frac{dM}{d{\bf n}} + \zeta M = 0 \qquad ( z = 0, L ) \; \ee where the first term denotes the normal derivative at the boundary and $\zeta$ is a surface anisotropy parameter connected to the probability of a spin flip when scattering at the surface. The Dyson boundary condition has been used in several papers \cite{Janossy_18,Yukalov_24,Yukalov_25,Yukalov_26}. Experiments have not been able to single out a preferred type of condition \cite{Magno_27,Vander_28}. Hence, without loss of generality, we can employ the Dyson condition (\ref{19}). The six boundary conditions at two surfaces, for the fields and for the magnetic moment, define~all six amplitudes $H_i$, with $i = 1,2,3,4,5,6$, as functions of the incident-field amplitude $H_0$. A convenient observable quantity is the transparency coefficient \be \label{20} C_T \equiv \left | \; \frac{H_6}{H_0} \; \right |^2 \ee showing how the incident electromagnetic field is transmitted through the metallic plate. \section{Spin-Wave Resonance} When the frequency $\omega$ of the incident field is close to the spin frequency $\omega_s$, the helicon amplitudes $H_4$ and $H_5$ are small compared to the spin-wave amplitudes $H_2$ and $H_3$, and the spin wave forms a standing wave, so that the magnetic field inside the plate becomes practically periodic, slightly~perturbed by attenuations. Correspondingly, the magnetic moment is also practically periodic: \be \label{21} M(z) \; \propto \; \cos(k_s z) \; . \ee This looks like a kind of magnetic crystallization of spin-wave collective excitations, whereby the system becomes spatially periodic if a small attenuation is neglected.
For typical paramagnetic metals, the spin frequency is $\omega_s \sim 10^{11}$ s$^{-1}$, the wave vectors are $k_s \sim 10^2$ cm$^{-1}$ and $k_h \sim 10^6$ cm$^{-1}$, and the magnetic anisotropy parameter is $\zeta \sim 10^3$ cm$^{-1}$. The plate width is typically $L \sim 10^{-2}$ cm. Hence the inequalities \be \label{22} |\; k_s \; | \ll \zeta \ll |\; k_h \; | \; \qquad |\; k_h L \; | \gg 1 \ee are valid, which will be used in what follows. Employing these inequalities, we obtain the transparency coefficient \be \label{23} C_T = \frac{64\pi^2\chi^2\om^2|\;k_s \;|^2}{c^2\zeta^6 |\; \sin(k_sL) \; |^2} \qquad ( \om \sim \om_s) \; . \ee In the denominator, $$ |\; \sin(k_sL) \; |^2 = \sin^2({\rm Re}k_s L ) + \sinh^2({\rm Im}k_sL ) \; $$ where $$ {\rm Re}k_s = |\;k_s\;| \cos\vp \; , \qquad {\rm Im}k_s = |\;k_s\;| \sin\vp \; , $$ and $\varphi$ is the argument of $k_s$. Thus, we have \be \label{24} C_T \; \propto \; \frac{\om^2|\;k_s\;|^2 L^2} {\sin^2(|\;k_sL\;| \cos\vp) + \sinh^2(|\;k_sL\;|\sin\vp)} \; \ee where \be \label{25} |\;k_sL\;| = \frac{\sqrt{3}\, L}{v_F} \; \left \{ \frac{[\; (\om-\om_s)^2+\nu_s^2\;][\;(\om-\om_0)^2+\nu_0^2\;]}{(1+\bt_0)^2(1+\bt_1)^2} \right \}^{1/4} \ee and the phase is \be \label{26} \vp = \frac{1}{2} \; \arctan \left [ \; \frac{\nu_0(\om-\om_s)+\nu_s(\om-\om_0)}{(\om-\om_s)(\om-\om_0) -\nu_0\nu_s} \; \right ] \; . \ee The spin-wave resonance occurs under the condition \be \label{27} {\rm Re}\, k_s L = |\;k_s L\;| \cos\vp = \pi n \qquad ( n = 1,2,\ldots ) \; . \ee To simplify the formulas, we can take into account that the Fermi-liquid interaction parameters $\beta_0$ and $\beta_1$ are small, and introduce the dimensionless attenuations \be \label{28} \nu_1 \equiv \frac{\nu_0}{\om_s} \; , \qquad \nu_2 \equiv \frac{\nu_s}{\om_s} \; . \ee In typical paramagnetic metals, $\nu_0 \sim 10^9$ s$^{-1}$, $\nu_s \sim 10^6$ s$^{-1}$, and $\omega_s \sim 10^{11}$ s$^{-1}$. Therefore, $\nu_1 \sim 10^{-2}$ and $\nu_2 \sim 10^{-5}$.
In view of these small parameters, we have $$ |\;k_s\;| \simeq \frac{\sqrt{3}}{v_F} \; ( \om - \om_s ) \; . $$ Thus, the resonance condition (\ref{27}) yields the spin-resonance frequencies \be \label{29} \om_n = \om_s \left ( 1 + \frac{An}{\cos\vp} \right ) \qquad ( n = 1,2,\ldots ) \; \ee in which \be \label{30} A \equiv \frac{\pi v_F}{\sqrt{3} \; L\om_s} \; . \ee For the typical values $v_F \sim 10^8$ cm s$^{-1}$ and $L \sim 10^{-2}$ cm, the parameter $A \sim 0.1$ and $\varphi \sim 10^{-2}$. Hence, \be \label{31} \vp \simeq \frac{1}{2} \; \arctan \left ( \frac{\nu_1+\nu_2}{An} \right ) \; . \ee In what follows, we consider the first resonance, with $n = 1$, whose frequency is \be \label{32} \om_1 = \om_s ( 1 + A ) \; \ee where we take into account that, because of the smallness of $\varphi$, $\cos \varphi \approx 1$. Introducing the relative detuning \be \label{33} \dlt \equiv \frac{\om-\om_1}{\om_1} \; \ee we can write $$ |\;k_s L\;| = \pi ( 1 + b\dlt ) \qquad \left ( b \equiv \frac{1+A}{A} \right ) \; . $$ The occurrence of spin-wave resonance manifests itself in a well-observable property: the transparency of the metallic plate to electromagnetic waves. The order parameter can be defined as the normalized transparency coefficient \be \label{34} \eta \equiv \frac{C_T(\dlt)}{C_T(0)} \; . \ee For the latter, we obtain \be \label{35} \eta = \frac{(1+\dlt)^2(1+b\dlt)^2\sinh^2(\pi\sin\vp)}{\sin^2[\pi(1+b\dlt)\cos\vp] + \sinh^2[\pi(1+b\dlt)\sin\vp] } \; . \ee The order parameter (Equation (\ref{35})), as a function of the relative detuning, is shown in Figure \ref{f1}. In the vicinity of the resonance frequency, where the detuning is close to zero, the order parameter is close to one, and it diminishes as the detuning increases.
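Equation (35) is straightforward to evaluate numerically; the following sketch uses the parameter values of Figure 1 and reproduces the sharp transparency peak at zero detuning (illustrative code, not from the original work):

```python
import math

A, nu1, nu2 = 0.1, 1e-2, 1e-5           # parameters used in Figure 1
b = (1 + A) / A                         # b = (1 + A)/A
phi = 0.5 * math.atan((nu1 + nu2) / A)  # Eq. (31) with n = 1

def eta(delta):
    """Order parameter of Eq. (35) for the spin-wave resonance."""
    u = math.pi * (1 + b * delta)       # |k_s L| at relative detuning delta
    num = (1 + delta) ** 2 * (1 + b * delta) ** 2 \
        * math.sinh(math.pi * math.sin(phi)) ** 2
    den = math.sin(u * math.cos(phi)) ** 2 + math.sinh(u * math.sin(phi)) ** 2
    return num / den
```

Evaluating this function shows $\eta(0) \approx 1$ while $\eta$ drops by more than an order of magnitude already at $|\delta| = 0.05$, in line with the narrow resonance region of Figure 1.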
\begin{figure}[H] \centering \includegraphics[width=6cm]{Fig1_ResPhen.eps} \caption{Order parameter (\ref{35}) characterizing the plate transparency under a spin-wave resonance, with the parameters $A = 0.1$, $\nu_1 = 10^{-2}$, and $\nu_2 = 10^{-5}$.} \label{f1} \end{figure} The state at small detuning is qualitatively different from the state far from $\delta = 0$. Under spin-wave resonance, the sample magnetization becomes periodic due to the developed standing wave (\ref{21}), and the plate exhibits the macroscopic property of transparency. These features disappear outside of the resonance. Such behavior reminds us of a phase transition. \section{Helicon Resonance} At a frequency $\omega$ much lower than the spin frequency $\omega_s$, \be \label{36} \om \ll \om_s \; \ee spin waves strongly attenuate, but helicon waves can persist \cite{Petrashov_29,Kotelnikov_30}. At such frequencies, the transparency coefficient (\ref{20}) becomes \be \label{37} C_T = \frac{4\om\om_s}{\om_p^2|\;\sin(k_hL)\;|^2} \; \ee where \be \label{38} k_h = \sqrt{\ep} \; \frac{\om}{c} \; , \qquad \ep = \frac{\om_p^2(\om_s+i\nu_0)}{\om(\om_s^2+\nu_0^2)} \; . \ee The real and imaginary parts of the helicon wave vector are \be \label{39} {\rm Re}k_h = \frac{\om_p}{c} \; \sqrt{ \frac{\om}{\om_s}} \; \cos\vp \; , \qquad {\rm Im}k_h = \frac{\om_p}{c} \; \sqrt{ \frac{\om}{\om_s}} \; \sin\vp \; \ee with the argument \be \label{40} \vp = \frac{1}{2}\; \arctan\; \frac{\nu_0}{\om_s} \; . \ee From the transparency coefficient \be \label{41} C_T = \frac{4\om\om_s}{\om_p^2[\sin^2({\rm Re}k_hL)+\sinh^2({\rm Im}k_h L)]} \ee it follows that the helicon resonance happens when \be \label{42} {\rm Re}k_h L = \pi n \qquad ( n = 1,2,\ldots ) \; . \ee The helicon resonance frequencies, keeping in mind that $\nu_0/\omega_s \sim 10^{-2}$, read \be \label{43} \om_n = \frac{\pi^2 c^2\om_s n^2}{\om_p^2 L^2 \cos^2\vp} \qquad ( n = 1,2,\ldots ) \;
\ee We shall consider the first resonance with the frequency \be \label{44} \om_1 = \left ( \frac{\pi c}{\om_p L} \right )^2 \; \om_s \; . \ee This frequency is of order $\omega_1 \sim 10^6$ s$^{-1}$. The order parameter can again be defined as the normalized transparency coefficient $$ \eta \equiv \frac{C_T(\dlt)}{C_T(0)} \; , \qquad \dlt \equiv \frac{\om-\om_1}{\om_1} \; $$ being a function of the relative detuning. Using the quantities $$ {\rm Re}k_h L = \pi \; \sqrt{1+\dlt} \; , \qquad {\rm Im}k_h L = \pi\; \sqrt{1+\dlt} \; \tan \vp $$ results in the order parameter \be \label{45} \eta = \frac{(1+\dlt)\sinh^2(\pi\tan\vp)} {\sin^2(\pi\sqrt{1+\dlt})+\sinh^2(\pi\sqrt{1+\dlt}\tan\vp)}\; . \ee The order parameter (Equation (\ref{45})) is shown in Figure \ref{f2}. The situation is similar to the case of spin-wave resonance. At small detuning, a standing periodic magnetic field develops, and the state is characterized by a large transparency. Outside of the resonance, the plate is not transparent. \begin{figure}[H] \centering \includegraphics[width=6cm]{Fig2_ResPhen.eps} \caption{Order parameter (\ref{45}) describing the plate transparency under helicon resonance, with the parameter $\nu_1 = 10^{-2}$.} \label{f2} \end{figure} In a similar way, we could describe other magnetic resonance phenomena, e.g., ferromagnetic resonance \cite{Akhiezer_31}. Some quantum mechanical scattering problems, dealing with finite-width systems, also lead to equations exhibiting resonances analogous to those considered above \cite{Zakhariev_32,Zakhariev_33,Zakhariev_34}. For such problems, it is also possible to introduce order parameters as normalized transparency coefficients. \section{Spin-Rotation Symmetry} Let us consider a lattice of $N$ sites, with a spin operator ${\bf S}_j$ at the $j$-th site, where $j = 1, 2, \ldots , N$. The system is placed in a magnetic field ${\bf B}_0 = B_0 {\bf e}_z$ along the $z$-axis.
The Hamiltonian is \be \label{46} \hat H_0 = -\mu_0 B_0 \sum_{j=1}^N S_j^z + \frac{1}{2} \sum_{i\neq j} \hat H_{ij} \; \ee in which $\mu_0 =-g_S \mu_B$, where $g_S$ is the $g$-factor and $\mu_B$ is the Bohr magneton. The exchange spin~interactions \be \label{47} \hat H_{ij} = - J_{ij} \left ( S_i^x S_j^x + S_i^y S_j^y \right ) - I_{ij} S_i^z S_j^z \ee correspond to the so-called $XXZ$ model. The Hamiltonian is invariant with respect to spin rotations around the $z$-axis. The rotation operator~is \be \label{48} \hat R_z = \exp \left ( -i\vp S^z \right ) \; \ee where $\varphi$ is a rotation angle and \be \label{49} S^z \equiv \sum_{j=1}^N S_j^z \ee is the $z$-component of the total lattice spin. Because of the commutation relations $$ \left [ S^z , \; S_i^x S_j^x + S_i^y S_j^y \right ] = \left [ S^z , \; S_i^z S_j^z \right ] = 0 \; $$ the $z$-component of the total spin is conserved: $$ \left [ S^z , \; \hat H_0 \right ] = 0 \; . $$ Therefore, the Hamiltonian is invariant under the rotation transformation, \be \label{50} \hat R_z^+ \hat H_0 \hat R_z = \hat H_0 \; \ee which implies the symmetry with respect to the spin rotation around the $z$-axis. Thus, the Hamiltonian symmetry is $U(1)$. Because of this symmetry, the transverse components of the average spin of an equilibrium system are zero: \be \label{51} \lgl S_j^x \rgl = \lgl S_j^y \rgl = 0 \; . \ee Correspondingly, the average values of the ladder operators $$ S_j^\pm \equiv S_j^x \pm i S_j^y $$ are also zero: \be \label{52} \lgl S_j^+ \rgl = \lgl S_j^- \rgl = 0 \; . \ee Suppose that, at the initial moment of time, the system is prepared as described above. However, if the system is made nonequilibrium, the average spin can start changing, breaking the $U(1)$ symmetry. This can also be accompanied by a reversal of the total average spin.
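For two spin-1/2 sites, the commutation relations underlying this $U(1)$ invariance can be verified directly with Pauli matrices; the following sketch is our own illustration of that check.

```python
import numpy as np

# Spin-1/2 operators S = sigma/2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# Two-site operators built with Kronecker products
Sz_tot = np.kron(sz, I2) + np.kron(I2, sz)   # S^z of Eq. (49) for N = 2
xx_yy = np.kron(sx, sx) + np.kron(sy, sy)    # transverse part of Eq. (47)
zz = np.kron(sz, sz)                         # longitudinal part of Eq. (47)

def comm(a, b):
    return a @ b - b @ a

# Both commutators vanish, hence [S^z, H_0] = 0 and the symmetry is U(1)
assert np.allclose(comm(Sz_tot, xx_yy), 0)
assert np.allclose(comm(Sz_tot, zz), 0)
```

Since the couplings $J_{ij}$ and $I_{ij}$ multiply exactly these two operator structures, the conservation of $S^z$ follows for arbitrary coupling constants.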
A nonequilibrium spin system is characterized by the temporal behavior of the following quantities: the transition function \be \label{53} u \equiv \frac{1}{NS} \sum_{j=1}^N \lgl S_j^- \rgl \; \ee the coherence intensity \be \label{54} w \equiv \frac{1}{N(N-1)S^2} \sum_{i\neq j}^N \lgl S_i^+ S_j^- \rgl \; \ee and the average spin projection \be \label{55} s \equiv \frac{1}{NS} \sum_{j=1}^N \lgl S_j^z \rgl \; . \ee The details of the temporal behavior of a nonequilibrium system depend on the type of conditions transforming the system to a nonequilibrium state. \section{Spin-Reversal Resonance} The system described in the previous section is then connected to a resonator electric circuit, and an additional transverse magnetic field $H$ starts acting on the sample. Thus, the total magnetic field~becomes \be \label{56} \bB = B_0 \bfe_z + H \bfe_x \; . \ee The important point is that this additional field $H$ is not just an external field, but a feedback field created by the moving spins of the system. The equation for the feedback field can be derived \cite{Yukalov_35,Yukalov_36,Yukalov_37} from the Kirchhoff equation, yielding \be \label{57} \frac{dH}{dt} + 2\gm H + \om^2 \int_0^t H(t') \; dt' = - 4\pi\; \frac{dm_x}{dt} \; \ee where $\omega$ is the resonator natural frequency, $\gamma$ is the resonator attenuation, and the electromotive force is caused by moving spins with the magnetization density \be \label{58} m_x = \frac{\mu_0}{V_{res}} \sum_{j=1}^N \lgl S_j^x \rgl \; \ee with $V_{res}$ being the resonator coil volume. Switching on the additional feedback field leads to the Hamiltonian $$ \hat H = \hat H_0 - \mu_0 H \sum_{j=1}^N S_j^x \; . $$ Thus, the total Hamiltonian becomes \be \label{59} \hat H = -\mu_0 \sum_{j=1}^N \bB \cdot \bS_j \; + \; \frac{1}{2} \sum_{i\neq j}^N \hat H_{ij} \; . \ee It should be mentioned here that spins formed by electrons cause the negative magnetic moment $\mu_0 < 0$.
When~$B_0 > 0$, a positive value of the Zeeman frequency results: \be \label{60} \om_0 = -\mu_0 B_0 > 0 \; . \ee The coupling of the spin system to a resonator producing a feedback field defines the feedback~attenuation \be \label{61} \gm_0 \equiv \pi \mu_0^2 S \; \frac{N}{V_{res}} \; . \ee The effective coupling parameter, characterizing the interaction of the system with the resonator,~is \be \label{62} g \equiv \frac{\gm_0 \om_0}{\gm\gm_2} \qquad ( \gm_2 \equiv \rho \mu_0^2 S ) \; \ee where $\rho$ is the spin density. An efficient interaction between the system and the resonator can develop only when the Zeeman frequency (Equation (\ref{60})) is close to the resonator natural frequency $\omega$ and hence when the detuning \be \label{63} \dlt \equiv \frac{\Dlt}{\om_0} = \frac{\om-\om_0}{\om_0} \ee is small. Additionally, all attenuations in the system need to be small compared to $\omega$ or $\omega_0$, and these attenuations include the resonator attenuation $\gamma$, feedback attenuation $\gamma_0$, longitudinal attenuation $\gamma_1$, transverse attenuation $\gamma_2$, and the spin-wave attenuation $\gamma_3$: \be \label{64} \frac{\gm}{\om} \ll 1 \; , \qquad \frac{\gm_0}{\om_0} \ll 1 \; , \qquad \frac{\gm_1}{\om_0} \ll 1 \; , \qquad \frac{\gm_2}{\om_0} \ll 1 \; , \qquad \frac{\gm_3}{\om_0} \ll 1 \; . \ee Finally, the relative anisotropy parameter \be \label{65} A \equiv \frac{S\Dlt J}{\om_0} \; , \qquad \Dlt J \equiv \frac{1}{N} \sum_{i\neq j}^N (I_{ij} - J_{ij} ) \; \ee should also be small so that the initial spin polarization is not blocked by the anisotropy field and the latter does not create an essential dynamical shift of the frequency \be \label{66} \om_s \equiv \om_0 ( 1 - As ) \; \ee thus producing a large effective detuning \be \label{67} \Dlt_s \equiv \om - \om_s = \Dlt + \om_0 A s \; . 
\ee The existence of the above small parameters makes it possible to analyze the evolution equations for the functional variables (Equations (\ref{53})--(\ref{55})) by the scale separation approach \cite{Yukalov_38,Yukalov_39}, since the functional variable $u$ can be treated as fast, while $w$ and $s$ can be treated as slow. In the frame of this approach, with the use of the stochastic mean-field approximation, we come to the equations for the guiding centers of the slow functional variables $w$ and $s$: $$ \frac{dw}{dt} = - 2\gm_2 ( 1 - \al s) w + 2\gm_3 s^2 \; $$ \be \label{68} \frac{ds}{dt} = - \gm_2 \al w - \gm_3 s - \gm_1 ( s - s_\infty ) \; \ee in which $s_{\infty}$ is the equilibrium average spin and the effective interaction between the sample and resonator is described by the coupling function \be \label{69} \al = g \; \frac{\gm^2}{\gm^2+\Dlt^2_s} \; ( 1 - As) \left \{ 1 - \left [ \cos(\Dlt_s t)\; - \; \frac{\Dlt_s}{\gm} \; \sin(\Dlt_s t) \right ] e^{-\gm t} \right \} \; . \ee In accordance with the spin rotation symmetry of the system at the initial time, we impose the zero initial conditions $u(0) = 0$ and $w(0) = 0$. However, the spin polarization is assumed to be non-zero and aligned along the static magnetic field, such that $s(0) = 1$. Under these initial conditions, the system at~$t = 0$ is in a nonequilibrium state. As soon as it starts at least slightly fluctuating due to spin waves, the feedback field forces the total average spin to reverse, aligning opposite to the static field $B_0$. In~the process of the reversal, the transverse magnetization $u$ becomes non-zero, which implies spin rotation symmetry breaking when \be \label{70} \lgl S_j^\pm \rgl \neq 0 \; . \ee The maximal absolute value of $u(t_0)$, occurring at time $t_0$, corresponds to the maximal value of the coherence intensity $w(t_0)$. The latter also depends on the detuning, $w = w(t_0, \delta)$.
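The qualitative behavior encoded in Equations (\ref{68}) and (\ref{69}), namely the growth of the coherence intensity accompanied by a reversal of the average spin, can be illustrated by a simple explicit integration. All numerical parameter values below, including the choice $s_\infty = -1$, are illustrative assumptions of ours, not values taken from a particular experiment.

```python
import numpy as np

# Illustrative dimensionless parameters (assumed values, not fitted)
omega0, A, gamma = 100.0, 0.1, 10.0
g, gamma1, gamma2, gamma3 = 10.0, 0.1, 1.0, 0.01
s_inf, Delta = -1.0, 0.0            # resonance case: omega = omega_0

def alpha(s, t):
    """Coupling function of Eq. (69)."""
    Ds = Delta + omega0 * A * s     # effective detuning, Eq. (67)
    transient = 1 - (np.cos(Ds * t) - (Ds / gamma) * np.sin(Ds * t)) * np.exp(-gamma * t)
    return g * gamma**2 / (gamma**2 + Ds**2) * (1 - A * s) * transient

# Euler integration of Eqs. (68) with w(0) = 0, s(0) = 1
dt, T = 1e-3, 20.0
w, s, w_max = 0.0, 1.0, 0.0
for i in range(int(T / dt)):
    a = alpha(s, i * dt)
    dw = -2 * gamma2 * (1 - a * s) * w + 2 * gamma3 * s**2
    ds = -gamma2 * a * w - gamma3 * s - gamma1 * (s - s_inf)
    w, s = w + dt * dw, s + dt * ds
    w_max = max(w_max, w)

# s has reversed from +1 to a negative value; w grew and then decayed
print(s, w_max)
```

The coherence intensity first grows exponentially while $1 - \al s < 0$, which drives $s$ downward; once the spin has reversed, the term $1 - \al s$ becomes large and positive and $w$ decays again.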
Thus, the maximal spin rotation symmetry breaking happens simultaneously with the maximal rotation coherence when \be \label{71} w(t_0,\dlt) = \max_t w(t,\dlt) \; . \ee In this way, the effective order parameter can be defined as the normalized maximal coherence~intensity \be \label{72} \eta \equiv \frac{w(t_0,\dlt)}{w(t_0,0)} \; . \ee From Equations (\ref{68}), we find the maximal coherence intensity \be \label{73} w(t_0,\dlt) = \left ( 1 \; - \; \frac{\gm\gm_2}{\gm_0\om_0} \; - \; \frac{\gm_2\om_0}{\gm\gm_0}\; \dlt^2 \right )^2 \; . \ee Thus, introducing the critical detuning \be \label{74} \dlt_c \equiv \sqrt{ \frac{\gm}{\om_0} \left ( \frac{\gm_0}{\gm_2} \; - \; \frac{\gm}{\om_0} \right ) } \; \ee we obtain the order parameter \be \label{75} \eta = \left ( 1 \; - \; \frac{\dlt^2}{\dlt_c^2} \right )^2 \; \ee whose behavior as a function of the relative detuning $\delta$ is demonstrated in Figure \ref{f3}. \begin{figure}[H] \centering \includegraphics[width=6cm]{Fig3_ResPhen.eps} \caption{Order parameter (\ref{75}) characterizing maximal coherence intensity under spin-reversal resonance, with the parameters $\gamma_0/ \gamma_2 = 1$ and $\gamma/ \omega_0 = 0.1$ (dashed-dotted line) and $\gamma/ \omega_0 = 0.01$ (solid line).} \label{f3} \end{figure} It is worth noting that the non-zero transverse magnetization does not imply magnon condensation but merely means that the average magnetic moment of the system rotates around the $z$-axis, so that the total magnetization is not directed along this axis \cite{Yukalov_40}. The correct introduction of magnons in a nonequilibrium picture requires employing the Holstein--Primakoff transformation with respect to the local-in-time axis defined by the instantaneous time-dependent direction of the total average magnetization \cite{Ruckriegel_41}. \section{Conclusions} We have demonstrated that resonance phenomena can be treated as a kind of nonequilibrium phase transition.
Resonance phenomena, similar to equilibrium phase transitions, are accompanied by symmetry breaking and can be described by order parameters. Thus, under spin-wave resonance and helicon resonance, the magnetization inside a metallic plate, induced by an incident electromagnetic field, becomes periodic, with a slight perturbation caused by attenuation. In the case of spin-reversal resonance, spin rotation symmetry breaks, and the role of the order parameter is played by the coherence intensity. Experimental study of the behavior of the order parameters can yield information about the properties of the considered materials. It is worth emphasizing that resonance phenomena are usually observed in finite systems. Finite-width metallic plates were considered in the case of spin-wave and helicon resonances. For the spin-reversal resonance, it was a finite sample that could be inserted in a resonator electric coil of a finite volume. Symmetry breaking in finite systems is a delicate topic, as it is quite different from the symmetry breaking in infinite systems exhibiting equilibrium phase transitions. The effects of symmetry breaking in finite quantum systems have recently been described in the review article \cite{Birman_42}. The present paper can be considered as an additional chapter for this review. \vskip 5mm \authorcontributions{ The authors equally contributed to the paper.} \vskip 5mm {\parindent=0pt \small {\bf Conflicts of Interest:} The authors declare no conflict of interest. } \vskip 1cm
\section{Introduction}\label{sec:intro} General Relativity (GR) has undergone brilliant successes since its inception 100 years ago (see, e.g., the review \cite{2015Univ....1...38I} and references therein). Einstein's theory is the standard paradigm for describing the gravitational interaction, verified by a large body of experimental evidence \cite{Will:2014bqa}, even though, with the possible exception of binary-pulsar systems, at least to a certain extent, these tests are probes of weak-field gravity, or, differently speaking, they probe gravity up to intermediate scales ($\simeq 1-10$ au). Nevertheless, one of the current challenges in theoretical physics and cosmology is the description of gravitation at large scales. In particular, evidence from astrophysics and cosmology \cite{Perlmutter:1997zf,Riess:1998cb,Tonry:2003zg, Knop:2003iy, Barris:2003dq, Riess:2004nr, Astier:2005qq,Eisenstein:2005su,Spergel:2006hy,Hinshaw:2012aka} suggests that the content of the Universe is 76\% \textrm{dark energy}, 20\% \textrm{dark matter}, 4\% ordinary baryonic matter. This implies that in order to reconcile the observations with GR we are led to assume that the Universe is dominated by \textrm{dark entities}, with peculiar characteristics. The dark energy is an exotic cosmic fluid, which has not yet been detected directly, and which does not cluster as ordinary matter; indeed, its behaviour closely resembles that of the cosmological constant $\Lambda$, which, in turn, brings about other problems, concerning its nature and origin \cite{Peebles:2002gy,Martin:2012bt}. On the other hand, the dark matter is an unknown type of matter, which has the clustering properties of ordinary matter; since 1933 it has been related to the problem of missing matter in astrophysical scenarios \cite{Zwicky:1933gu}.
Moreover, some kind of cold and pressureless dark matter (whose distribution is that of a spherical halo around the galaxies) is also required to explain the rotation curves of spiral galaxies \cite{Binney87}. Hence, the best answer we have today for these cosmic puzzles is the so-called concordance model or $\Lambda$CDM, which provides the simplest description of the available data concerning the large-scale structure of the Universe. For a recent review, see e.g. \cite{2016PDU....12...56B}. This picture is completed with the inflationary scenario, which solves the horizon, flatness and monopole problems \cite{kolb1990early}.\\ \indent Besides these difficulties in explaining observations, there are theoretical motivations suggesting that a theory of gravity more fundamental than GR should be formulated: Einstein's theory is not renormalizable, and thus it cannot be quantized as is. In a recent paper by Berti et al. \cite{2015CQGra..32x3001B}, a thorough review of the motivations to consider extensions of GR can be found, together with a discussion of some modified theories of gravity (see also the recent reviews \cite{2011PhR...509..167C, 2015Univ....1...92H, 2015Univ....1..186B, 2015Univ....1..199C, 2015Univ....1..446Z} and references therein). \\ \indent A possible strategy towards a new theory of gravity is, in some sense, a natural generalization of Einstein's approach, according to which \textrm{gravity is geometry}.
Accordingly, a new theory is obtained extending GR on a purely geometric basis: in other words, the required ingredients to match the observations or to solve the theoretical conundrums derive from a geometric structure richer than that of GR.\\ \indent As a prototype of this strategy, which has gained increasing attention during the last decade, we mention the $f(R)$ theories, where the gravitational Lagrangian depends on a function of the scalar curvature $R$; extensive reviews can be found in \cite{Capozziello:2007ec, 2007Geom, Sotiriou:2008rp, defelice, 2011PhR...505...59N}. These theories are also referred to as ``extended theories of gravity'', since they naturally generalize GR: in fact, when $f(R)=R$ the action reduces to the usual Einstein-Hilbert action, and Einstein's theory is obtained.\\ \indent Motivations for studying these theories can be different but, as clearly synthesized by Sotiriou and Faraoni \cite{Sotiriou:2008rp}, they can be considered as toy-theories that are relatively simple to handle and that allow one to study the effects of the deviations from Einstein's theory with sufficient generality. For instance, $f(R)$ theories provide cosmologically viable models, where both the inflation phase and the late-time accelerated expansion are reproduced; furthermore, they have been used to explain the rotation curves of galaxies without the need for dark matter (see \cite{Capozziello:2007ec, Sotiriou:2008rp, defelice} and references therein). These theories can be studied in the metric formalism, where the action is varied with respect to the metric tensor, and in the Palatini formalism, where the action is varied with respect to the metric and the affine connection, which are supposed to be independent of one another (there is also the metric-affine formalism, in which the matter part of the action depends on the affine connection, and is then varied with respect to it).
In general, the two approaches are not equivalent: the solutions of the Palatini field equations are a subset of the solutions of the metric field equations \cite{magnano}.\\ \indent A different approach to the extension of GR derives from a generalization of Teleparallel Gravity (TEGR) \cite{pereira,Maluf:2013gaa}: this theory is based on a Riemann-Cartan space-time, endowed with the non-symmetric Weitzenb\"ock connection which, unlike the Levi-Civita connection of GR, gives rise to torsion but is curvature-free. In TEGR torsion determines the geometry, while the tetrad field is the dynamical one; the field equations are obtained from a Lagrangian containing the torsion scalar $T$, arising from contractions of the torsion tensor. Although GR and TEGR have different geometric structures, they have the same dynamics: in other words, every solution of GR is also a solution of TEGR and vice versa. Hence, one could start from TEGR and extend its Lagrangian from $T$ to an arbitrary function $f(T)$, resulting in the so-called $f(T)$ gravity \cite{Ferraro:2008ey,Linder:2010py} (for a review see \cite{Cai:2015emx}). Since $f(T)$ gravity is different from TEGR, $f(T)$ theories have been considered as potential candidates to describe the cosmological behavior \cite{cardone12,Chen:2010va,sari11,Myrzakulov:2010vz,Yang:2010hw,bengo,kazu11, Karami:2013rda, cai11,capoz11,Bamba:2013jqa,Camera:2013bwa}. Additionally, various aspects of $f(T)$ gravity have been considered, such as, for instance, exact solutions and stellar models \cite{Wang:2011xf,Ferraro:2011ks,Gonzalez:2011dr, Capozziello:2012zj,Rodrigues:2013ifa,Nashed:uja,Nashed:2015qza, Junior:2015fya, Bejarano:2014bca,ss3, ss4,ss6,ss7}. \\ \indent Another possible new theory of gravity can be obtained by a massive deformation of GR. Endowing the graviton with a mass yields a plausible modified theory of gravity that is both phenomenologically and theoretically intriguing.
From the theoretical point of view, a small non-vanishing graviton mass is an open issue. The idea was originally introduced in the work of Fierz and Pauli \cite{fierz39}, who constructed a massive theory of gravity in a flat background that is ghost-free at the linearized level. Since then, a great effort has been put into extending the result to the nonlinear level and constructing a consistent theory. A few years ago a covariant massive gravity model was proposed in \cite{derham11}. Since the linearization of the mass term breaks the gauge invariance of GR, in order to construct a consistent theory non-linear terms should be tuned to remove, order by order, the negative energy state in the spectrum \cite{boulware72}. The theoretical model under investigation follows from a procedure originally outlined in \cite{arkani03,creminelli05} and has been found not to show ghosts at least up to quartic order in the nonlinearities \cite{derham11,hassan11}. The resulting theory exhibits several remarkable features. Indeed, the graviton mass typically manifests itself on cosmological scales at late times, thus providing a natural explanation of the presently observed accelerating phase \cite{cardone12b}. Moreover, the theory allows for exotic solutions in which the contribution of the graviton mass affects the dynamics at early times. It actually allows for models in which the Universe oscillates indefinitely about an initial static state, ameliorating the fine-tuning problem suffered by the emergent Universe scenario in GR \cite{parisi12}. Another approach that could lead towards a formulation of a quantum theory of gravity is the Ho\v{r}ava formulation of a model that is power-counting renormalizable due to an anisotropic scaling of space and time \cite{horava09}. This is reminiscent of Lifshitz scalars in condensed matter physics \cite{lifshitz41a, lifshitz41b}, hence the theory is often referred to as the Ho\v{r}ava-Lifshitz gravity.
This theory has attracted a lot of attention, due to its several remarkable features in cosmology. Unfortunately, the original model suffers from instabilities, ghosts, and strong-coupling problems, and it has been modified along different lines \cite{Bogdanos:2009uj}. \indent On the other hand, if we consider the excellent agreement of GR with Solar System and binary pulsar observations, it is apparent that any modified theory of gravity should reproduce GR at the Solar System scale, i.e. in a suitable weak-field limit. In other words, these theories must have correct Newtonian and post-Newtonian limits and, up to intermediate scales, the deviations from the GR predictions can be considered as perturbations. This agreement should be obtained for all the above gravitational modifications. In particular, all these theories have the same spherically symmetric solution that describes the gravitational field around a point-like source: the Schwarzschild-de Sitter space-time (SdS). Interestingly enough, this is a solution of the GR field equations with a cosmological constant. However, for these modified gravities the cosmological term is not added by hand, but it naturally originates from the modified Lagrangian.\\ \indent In this paper, we assume that the SdS solution can be used to model the gravitational field of an isolated source like the Sun, and we examine the impact that the gravitational modifications have on the Solar System dynamics. Additionally, we explore the possibility of constraining $\Lambda$ in the distant peripheries of the Solar System by means of the currently ongoing spacecraft-based mission New Horizons. For a recently proposed long-range mission aimed at testing long-distance modifications of gravity in the Solar System, see \cite{2015PhRvD..92j4048B}.
\\ \indent This work is organized as follows: In Section \ref{sec:SdS} we describe the main features of the SdS space-time, focusing on $f(R)$ and $f(T)$ theories, massive gravity and Ho\v{r}ava-Lifshitz gravity. Section \ref{track} is devoted to a preliminary exposition of the experimental constraints which might be posed by using accurate tracking of distant man-made objects traveling to the remote outskirts of the Solar System; the case of the New Horizons probe is considered. Finally, Section \ref{theend} summarizes our results. \section{Schwarzschild-de Sitter space-time as a vacuum solution of modified gravities} \label{sec:SdS} The SdS metric (see, e.g., \cite{2001rsgc.book.....R}) \beq ds^{2}=\left(1-\frac{2GM}{r}-\frac{1}{3}\Lambda r^{2} \right)dt^{2}-\frac{1}{\left(1-\frac{2GM}{r}-\frac{1}{3}\Lambda r^{2} \right)}dr^{2}-r^{2}d\Omega^{2} \label{eq:SdSmetric} \eeq where $d\Omega^{2}= d\theta^2+ \sin^2 \theta d\phi^2$, is a spherically symmetric solution of the Einstein field equations with cosmological constant $\Lambda$ in vacuum, namely \beq R_{\mu\nu}-\frac 1 2 g_{\mu \nu}R+\Lambda g_{\mu\nu}=0, \label{eq:einsteinLambda} \eeq or equivalently \beq R_{\mu\nu}=\Lambda g_{\mu\nu}, \label{eq:einsteinLambdabis} \eeq around the mass $M$. The SdS space-time has been studied in connection with the constraints arising from Solar System data \cite{Iorio:2005vw,Kagramanova:2006ax} and, moreover, with a focus on its effects on gravitational lensing \cite{Rindler:2007zz,Sereno:2008kk,Ruggiero:2007jr}. In the following subsections, we are going to show that the metric (\ref{eq:SdSmetric}) is a solution of various gravitational modifications, under certain considerations.
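That the metric (\ref{eq:SdSmetric}) solves Equation (\ref{eq:einsteinLambdabis}) can be verified by direct symbolic computation. The sketch below is our own check; it uses the $(+,-,-,-)$ signature and adopts the Ricci-tensor sign convention for which the vacuum equation takes exactly the form $R_{\mu\nu}=\Lambda g_{\mu\nu}$ (with the opposite convention the right-hand side changes sign).

```python
import sympy as sp

t, ph = sp.symbols('t phi')
r, th = sp.symbols('r theta', positive=True)
G, M, Lam = sp.symbols('G M Lambda', positive=True)
x = [t, r, th, ph]

f = 1 - 2*G*M/r - Lam*r**2/3
# SdS metric of Eq. (eq:SdSmetric), signature (+,-,-,-)
g = sp.diag(f, -1/f, -r**2, -r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
            + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
            for d in range(4))/2) for c in range(4)] for b in range(4)]
         for a in range(4)]

def ricci(m, n):
    # Sign convention chosen so that the vacuum equation reads
    # R_mn = Lambda g_mn in the (+,-,-,-) signature
    expr = sum(sp.diff(Gamma[a][m][a], x[n]) - sp.diff(Gamma[a][m][n], x[a])
               for a in range(4))
    expr += sum(Gamma[a][n][d]*Gamma[d][m][a] - Gamma[a][a][d]*Gamma[d][m][n]
                for a in range(4) for d in range(4))
    return sp.simplify(expr)

# Componentwise check of R_mn = Lambda g_mn
for m in range(4):
    for n in range(4):
        assert sp.simplify(ricci(m, n) - Lam*g[m, n]) == 0
```

Setting $\Lambda = 0$ in the same computation reproduces the Ricci-flatness of the Schwarzschild solution, so the cosmological term in (\ref{eq:SdSmetric}) is precisely what sources the right-hand side of (\ref{eq:einsteinLambdabis}).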
\subsection{$f(R)$ theories} \label{ssec:theofR} Let us start by summarizing the theoretical framework of the $f(R)$ theories, in order to obtain the field equations, both in the metric and in the Palatini approach (see \cite{Capozziello:2007ec, Sotiriou:2008rp, defelice} for an exhaustive discussion), and to show that the SdS space-time is a solution.\\ \indent The field equations can be obtained by a variational principle, starting from the action\footnote{Let the signature of the $4$-dimensional Lorentzian manifold $\mathcal M$ be $(+,-,-,-)$. Furthermore, if not otherwise stated, we use units such that $c=1$.} \begin{equation} S=\frac{1}{16\pi G}\int d^{4}x \sqrt{-\mathrm{det}(g_{\mu\nu})} f(R)+S_{M}. \label{eq:actionf(R)} \end{equation} As we mentioned above, in these theories the gravitational part of the Lagrangian is represented by a function $f(R)$ of the scalar curvature $R$, while $S_{M}$ is the action for the matter sector, which functionally depends on the matter fields together with their first derivatives. In the metric formalism, $\Gamma$ is supposed to be the Levi-Civita connection of the metric $g$ and, consequently, the scalar curvature $R$ is to be understood as $R\equiv R(g) =g^{\alpha\beta}R_{\alpha \beta}(g)$.
On the contrary, in the Palatini formalism the metric $g$ and the affine connection $\Gamma$ are supposed to be independent, so that the scalar curvature $R$ is to be understood as $R\equiv R( g,\Gamma) =g^{\alpha\beta}R_{\alpha \beta}(\Gamma )$, where $R_{\mu \nu }(\Gamma )$ is the Ricci-like tensor of the connection $\Gamma$.\\ \indent In the metric formalism the action (\ref{eq:actionf(R)}) is varied with respect to the metric $g$, and one obtains the following field equations \begin{align} f'(R) R_{\mu \nu }-\frac{1}{2}f(R)g_{\mu \nu }-\left(\nabla _{\mu }\nabla _{\nu }-g_{\mu \nu }\square \right)f'(R)= {8\pi G} \,\mathcal{T}_{\mu \nu}, \label{eq:fieldmetric1} \end{align} where $f'(R)=df(R)/d R$, $\nabla_{\mu}$ is the covariant derivative associated with $\Gamma$, $\square \equiv \nabla_{\mu} \nabla^{\mu}$, and $\displaystyle {\mathcal{T}}^{\mu\nu} = -\frac{2}{\sqrt{-g}}\frac{\delta S_{M}}{\delta g_{\mu\nu}}$ is the standard minimally coupled matter energy-momentum tensor. The contraction of the field equations (\ref{eq:fieldmetric1}) with the metric tensor leads to the scalar equation \begin{align} \label{eq:fieldmetricscalar1} 3\square f'(R)+f'(R)R-2f(R)={8\pi G} \mathcal{T}, \end{align} where $\mathcal{T}$ is the trace of the energy-momentum tensor. Note that Eq.
(\ref{eq:fieldmetricscalar1}) is a differential equation for the scalar curvature $R$, while in GR the scalar curvature is algebraically related to $\mathcal{T}$ through $\displaystyle R=-{8\pi G} \mathcal{T}$.\\ \indent In the Palatini formalism, by independent variations with respect to the metric $g$ and the connection $\Gamma$, we obtain the following equations of motion: \begin{eqnarray} f^{\prime }(R) R_{(\mu\nu)}(\Gamma)-\frac{1}{2} f(R) g_{\mu \nu }&=&{8\pi G}{} \mathcal{T}_{\mu \nu }, \label{ffv1}\\ \nabla _{\alpha }^{\Gamma }( \sqrt{-g} f^\prime (R) g^{\mu \nu })&=&0, \label{ffv2} \end{eqnarray} where $\nabla^{\Gamma}$ denotes the covariant derivative with respect to the connection $\Gamma$. Actually, it is possible to show \cite{FFVa,FFVb} that the manifold $\mathcal{M}$, which is the model of the space-time, can be a posteriori endowed with a bi-metric structure $(\mathcal{M},g,h)$ equivalent to the original metric-affine structure $(\mathcal{M},g,\Gamma)$, where $\Gamma$ is assumed to be the Levi-Civita connection of $h$. The two metrics are conformally related by \begin{equation}\label{h_met2} h_{\mu \nu }=f^\prime (R) \; g_{\mu \nu }. \end{equation} The equation of motion (\ref{ffv1}) can be supplemented by the scalar-valued equation obtained by taking the contraction of (\ref{ffv1}) with the metric tensor: \begin{equation} f^{\prime} (R) R-2 f(R)= {8\pi G}{} \mathcal{T}. \label{ss} \end{equation} Equation (\ref{ss}) is an algebraic equation for the scalar curvature $R$, thus slightly generalizing the GR case where $R$ is proportional to $\mathcal{T}$. In order to compare the predictions of $f(R)$ gravity with Solar System dynamics data, we have to consider the solutions of the field equations in vacuum.
As a consequence, in the metric approach the field equations read \begin{align} f'(R) R_{\mu \nu }-\frac{1}{2}f(R)g_{\mu \nu }-\left(\nabla _{\mu }\nabla _{\nu }-g_{\mu \nu }\square \right)f'(R)= 0, \label{eq:fieldmetric1vac} \end{align} supplemented with the scalar equation \begin{align} \label{eq:fieldmetricscalar1vac} 3\square f'(R)+f'(R)R-2f(R)=0. \end{align} In the Palatini approach, the field equations in vacuum become \begin{eqnarray} f^{\prime }(R) R_{(\mu\nu)}(\Gamma)-\frac{1}{2} f(R) g_{\mu \nu }&=&0, \label{ffv1vac}\\ \nabla _{\alpha }^{\Gamma }( \sqrt{-g} f^\prime (R) g^{\mu \nu })&=&0, \label{ffv2vac} \end{eqnarray} and they are supplemented by the scalar equation \begin{equation} f^{\prime} (R) R-2 f(R)=0. \label{ssvac} \end{equation} It is useful to emphasize some features of the scalar equations (\ref{eq:fieldmetricscalar1vac}) and (\ref{ssvac}), which can help to understand the differences between the vacuum solutions in the two formalisms. In Palatini $f(R)$ gravity, the trace equation (\ref{ssvac}) is an algebraic equation for $R$, which admits constant solutions $R=c_{i}$ \cite{FFVa}, and it is identically satisfied if $f(R)$ is proportional to $R^2$. As a consequence, it is easy to verify that (if $f'(R) \neq 0$) the field equations become \begin{equation} R_{\mu\nu}=\frac 1 4 R g_{\mu\nu}, \label{eq:gralambda1} \end{equation} which are the same as the GR field equations with a cosmological constant (\ref{eq:einsteinLambdabis}). In particular, we now have \begin{equation} \Lambda_{fR}=\frac {1}{ 4} R. \label{LambdafR} \end{equation} In other words, in the Palatini formalism, in vacuum, we can obtain only solutions that describe space-times with constant scalar curvature $R$. Summarizing, Eq. (\ref{eq:gralambda1}) suggests that all GR solutions with a cosmological constant are solutions of the vacuum Palatini field equations: the function $f(R)$ only determines the solutions of the algebraic equation (\ref{ssvac}).
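To illustrate how the analytical form of $f(R)$ fixes the constant curvature through the algebraic equation (\ref{ssvac}), one can solve it symbolically for a toy model; the choice $f(R) = R - \mu^4/R$ below is ours, purely for illustration.

```python
import sympy as sp

R, mu = sp.symbols('R mu', positive=True)

# Toy model f(R) = R - mu^4/R (illustrative choice)
f = R - mu**4/R

# Vacuum trace equation f'(R) R - 2 f(R) = 0, Eq. (ssvac)
trace_eq = sp.simplify(sp.diff(f, R)*R - 2*f)   # reduces to -R + 3 mu^4 / R
roots = sp.solve(sp.Eq(trace_eq, 0), R)

# The positive root R = sqrt(3) mu^2 then gives Lambda_fR = R/4, Eq. (LambdafR)
Lam_fR = [sp.simplify(rt/4) for rt in roots]
print(roots, Lam_fR)
```

The single scale $\mu$ of the toy model thus determines the effective cosmological constant, in line with the statement that $f(R)$ only selects the constant-curvature solutions of (\ref{ssvac}).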
\\ \indent In metric $f(R)$ gravity the trace equation (\ref{eq:fieldmetricscalar1vac}) is a differential equation for $R$: this implies that, in general, it admits more solutions than the corresponding Palatini equation. In particular, we notice that if $R=\mathrm{constant}$ we recover the Palatini case. Hence, for a given $f(R)$ function, in vacuum, the solutions of the field equations of Palatini $f(R)$ gravity are a subset of the solutions of the field equations of metric $f(R)$ gravity \cite{magnano}. However, in metric $f(R)$ gravity vacuum solutions with variable $R$ are allowed too (see, e.g., \cite{metricspherically}).\\ \indent Therefore, if we confine ourselves to constant scalar curvature, we have shown that in $f(R)$ gravity the SdS space-time (\ref{eq:SdSmetric}) is a solution of the field equations, and in particular the ``effective'' cosmological constant term depends on the analytical expression of $f(R)$.\\ \indent As for the reliability of these solutions in describing, without conceptual drawbacks, the gravitational field of a star like the Sun, the issue has been actively debated in the literature (see e.g. \cite{Sotiriou:2008rp,defelice}). In the Palatini formalism, the possibility of constructing vacuum solutions that match an internal solution has been discussed, and it has been shown that divergences arise even for a model as simple as a polytropic star. However, things are different for non-analytical $f(R)$, and the role of the conformal metric $h_{\mu\nu}$ can also help to avoid these singularities (see \cite{lorenzof2015} and references therein).
On the other hand, metric $f(R)$ gravity is in agreement with Solar System tests only if the chameleon mechanism is considered, according to which the additional scalar degree of freedom of the theory is a function of the curvature: the mass of the scalar field is large at Solar System scales, in order not to affect the dynamics, while it is small at cosmological scales, in order to drive the accelerated expansion. For a thorough discussion of the reliability of $f(R)$ gravity see the reviews \cite{Sotiriou:2008rp, defelice}, where it is discussed that Palatini $f(R)$ gravity, beyond the above-mentioned difficulty with polytropic stars, suffers from other problems, which make acceptable models practically indistinguishable from $\Lambda$CDM. On the other hand, in metric $f(R)$ gravity it is possible to obtain models that are in agreement with observations, having peculiarities that make it possible, at least in principle, to distinguish them from $\Lambda$CDM. However, we are not going to get into the details of the above debate, since for the purpose of the present work it is sufficient that $f(R)$ gravity admits the SdS solution. Finally, we recall that some properties of SdS and Reissner-Nordstr\"{o}m (SdS generalised) black holes in $f(R)$ modified gravity were investigated in \cite{2013CQGra..30l5003N, 2014PhLB..735..376N}. \subsection{$f(T)$ theories} \label{ssec:theoT} In this subsection we outline the theoretical framework of $f(T)$ gravity and we obtain the field equations that admit the SdS space-time as a solution \cite{Cai:2015emx}. In $f(T)$ gravity the tetrads are the dynamical fields. Given a coordinate basis, the components $e^a_\mu$ of the tetrads are related to the metric tensor through $g_{\mu \nu}(x) = \eta_{a b} e^a_\mu(x) e^b_\nu(x)$, with $\eta_{a b} = \text{diag}(1,-1,-1,-1)$. We point out that, in our notation, latin indices refer to the tangent space, while greek indices label coordinates on the manifold.
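The relation $g_{\mu \nu} = \eta_{a b} e^a_\mu e^b_\nu$ can be checked numerically for the simplest case of a diagonal tetrad; the tetrad and the sample values below are arbitrary choices of ours, used only for the illustration.

```python
import numpy as np

# Minkowski tangent-space metric, signature (+,-,-,-) as in the text.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A hypothetical diagonal tetrad e^a_mu for a static spherically
# symmetric metric ds^2 = e^A dt^2 - e^B dr^2 - r^2 dOmega^2,
# evaluated at sample values of A, B, r, theta (illustrative numbers only).
A, B, r, theta = 0.3, -0.2, 2.0, 0.7
e = np.diag([np.exp(A / 2), np.exp(B / 2), r, r * np.sin(theta)])

# g_{mu nu} = eta_{ab} e^a_mu e^b_nu
g = np.einsum('ab,am,bn->mn', eta, e, e)

expected = np.diag([np.exp(A), -np.exp(B), -r**2, -(r * np.sin(theta))**2])
print(np.allclose(g, expected))  # True
```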
The field equations can be obtained by varying the action \begin{equation} S = \frac{1}{16 \pi G} \int{ f(T)\, e \, d^4x} + S_M \label{eq:action} \end{equation} with respect to the tetrads, where $e = \text{det} \ e^a_\mu = \sqrt{-\text{det}(g_{\mu \nu})}$ and $S_M$ is the action for the matter fields. In the action (\ref{eq:action}), $f$ is a differentiable function of the torsion scalar $T$: in particular, if $f(T)=T$, the action is the same as in TEGR, and the theory is equivalent to GR. In terms of the tetrads one defines the torsion tensor as \beq T^\lambda_{\ \mu \nu} = e^\lambda_a \left( \partial_\nu e^a_\mu - \partial_\mu e^a_\nu \right ), \ \label{eq:deftorsiont} \eeq and the ``super-potential'' tensor \beq S^\rho_{\ \mu \nu} = \frac{1}{4} \left ( T^{\rho}_{\ \ \mu \nu} - T_{\mu \nu}^{\ \ \rho}+T_{\nu \mu} ^{\ \ \rho} \right ) + \frac{1}{2} \delta^\rho_\mu T_{\sigma \nu}^{\ \ \sigma} - \frac{1}{2} \delta^\rho_\nu T_{\sigma \mu} ^{\ \ \sigma}, \label{eq:defcontorsion} \eeq from which one obtains the torsion scalar \beq T = S^\rho_{\ \mu \nu} T_\rho ^{\ \mu \nu}. \label{eq:deftorsions} \eeq By variation of the action (\ref{eq:action}) with respect to the tetrad field $e^a_\mu$, we obtain the field equations \beq e^{-1}\partial_\mu(e\ e_a^{\ \rho} S_{\rho}^{\ \mu\nu})f_T-e_{a}^{\ \lambda} S_{\rho}^{\ \nu\mu} T^{\rho}_{\ \mu\lambda} f_T + e_a^{\ \rho} S_{\rho}^{\ \mu\nu}\partial_\mu (T) f_{TT}+\frac{1}{4}e_a^{\nu} f = 4\pi G e_a^{\ \mu} {\mathcal{T}}_\mu^\nu, \label{eq: fieldeqs} \eeq where ${\mathcal{T}}^\nu_\mu$ is the matter energy-momentum tensor, and where the subscripts $T$ denote differentiation with respect to $T$.\\ \indent We are interested in static spherically symmetric solutions that can be used to describe the gravitational field of a point-like source, e.g. of the Sun. To this end, we write the space-time metric in the form \begin{equation} ds^2=e^{A(r)}dt^2-e^{B(r)}dr^2-r^2 d\Omega^2 \ .
\label{metric} \end{equation} In the usual, ``pure-tetrad'' formulation of $f(T)$ gravity, the above metric is produced by the non-diagonal tetrad (\cite{tamanini12,Krssak:2015oua}) \footnotesize \begin{multline} \!\!\!\!\!\!\!\!\!\!\!{e_\mu}^a= \left( \begin{array}{cccc} e^{A/2} & 0 & 0 & 0\\ 0 & e^{B/2} \sin \theta \cos \phi & e^{B/2} \sin \theta \sin \phi & e^{B/2} \cos \theta \\ 0 & -r \left(\cos \theta \cos \phi \sin \gamma+\sin \phi \cos\gamma \right) & r \left(\cos \phi \cos\gamma -\cos \theta \sin \phi \sin\gamma \right) & r \sin \theta \sin\gamma \\ 0 & r \sin \theta \left(\sin \phi \sin\gamma -\cos \theta \cos \phi \cos\gamma \right) & -r \sin \theta \left(\cos \theta \sin \phi \cos\gamma +\cos \phi \sin\gamma \right) & r \sin ^2\theta \cos\gamma \end{array} \right), \label{eq:rotatedtetrad} \end{multline}\normalsize where $\theta$, $\phi$ are rotation angles, and $\gamma(r)$ is a general function of $r$. The expression of the torsion scalar for the above tetrad turns out to be \begin{equation} T(r) = \frac{2\, e^{-B}}{r^2} \left[ 1 + e^B + 2\,e^{B/2} \sin\gamma + 2\, e^{B/2}\, r\, \gamma' \cos\gamma \\+ r\, A' \left(1+e^{B/2}\sin\gamma\right) \right]\,. \label{eq:torsionscalar} \end{equation} We are interested in extracting a static vacuum solution with constant torsion scalar $T=T_{0}$ (i.e. $T'=0$). The field equations (\ref{eq: fieldeqs}) become \begin{eqnarray} \frac{f_0}{4} -\frac{f_{T_0}\,e^{-B}}{4r^2}\left( 2-2\,e^B+r^2e^B T_0-2r\,B' \right)&=&0 \label{eq:fieldqT1} \,,\\ -\frac{f_0}{4} +\frac{f_{T_0}\,e^{-B}}{4r^2} \left( 2-2\,e^B+r^2e^B T_0-2r\,A' \right)&=&0 \label{eq:fieldqT2} \,,\\ 4-4\,e^B -r^2A'^2 +2r\,B'+r\,A'\left(2+r\,B'\right) -2r^2A'' &=&0 \,, \label{eq:fieldqT3} \end{eqnarray} where $f_{0}\equiv f(T_{0})$, $f_{T_{0}}\equiv f_{T}(T_{0})$ and the prime denotes differentiation with respect to $r$.
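As a consistency check (a sketch we add here; the solution itself is quoted next), one can verify symbolically that the SdS form $e^{A}=1-2M/r-\Lambda r^{2}/3$ with $B=-A$ satisfies the last of the field equations identically:

```python
import sympy as sp

r, M, Lam = sp.symbols('r M Lambda', positive=True)

# SdS ansatz: e^{A(r)} = 1 - 2M/r - Lambda r^2 / 3, with B = -A.
u = 1 - 2*M/r - Lam*r**2/3
A = sp.log(u)
B = -A

Ap = sp.diff(A, r)
App = sp.diff(A, r, 2)
Bp = sp.diff(B, r)

# Left-hand side of the third field equation:
# 4 - 4 e^B - r^2 A'^2 + 2 r B' + r A' (2 + r B') - 2 r^2 A''
lhs = 4 - 4*sp.exp(B) - r**2*Ap**2 + 2*r*Bp + r*Ap*(2 + r*Bp) - 2*r**2*App

print(sp.simplify(lhs))  # 0
```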
We point out that spherically symmetric solutions with non-constant torsion scalar $T' \neq 0$ have already been investigated \cite{Iorio:2012cm,Ruggiero:2015oka} and Solar System constraints have been discussed \cite{Iorio:2015rla,2013MNRAS.433.3584X}.\\ \indent It is possible to show (see \cite{tamanini12}) that the unique solution of the equations (\ref{eq:fieldqT1})-(\ref{eq:fieldqT3}) is given by \begin{eqnarray} &&e^{A(r)}= 1- \frac{2\,M}{r}-\frac{\Lambda_{fT}}{3}r^2,\nonumber\\ &&e^{B(r)}=e^{-A(r)}, \label{eq:solAfT} \end{eqnarray} with \beq \Lambda_{fT} = \frac{1}{2} \left(\frac{f_0}{f_{T_0}}-T_0\right) \,. \label{eq:defLambdaf} \eeq Thus, in the theory at hand one obtains an ``effective'' cosmological constant, determined by the functional form of $f(T)$, and thus a SdS solution. Notice however that, because of the presence of the arbitrary function $\gamma(r)$ in the definition of the torsion scalar (\ref{eq:torsionscalar}), a measurement of $\Lambda_{fT}$ cannot constrain $f(T)$, since an arbitrary value of $\Lambda$ can be achieved by fine-tuning $T_{0}$ with a suitable choice of $\gamma(r)$. In other words, when the torsion scalar is constant, any $f(T)$ model admits the solution in the form of (\ref{eq:solAfT}), with given values of $M$ and $\Lambda_{fT}$. On the contrary, in the case of $f(R)$ theories, the value of the scalar curvature $R$, which is proportional to $\Lambda_{fR}$, strictly depends on the function $f(R)$, since it is obtained from the trace equation (\ref{ssvac}). \subsection{Massive gravity}\label{spec:massive} In this subsection we summarize the parts of the massive gravity formulation relevant to the present analysis.
Specifically, we are interested in static spherically symmetric solutions in which the mass term becomes identical to the cosmological constant term.\\ \indent The possibility of endowing the graviton with a mass goes back to 1939, when Fierz and Pauli constructed the linearized theory of non-interacting massive gravitons in a flat background \cite{fierz39}. Unfortunately, the solutions of this theory do not continuously connect with those of GR in the limit of zero graviton mass, and this is the famous van Dam-Veltman-Zakharov (vDVZ) discontinuity \cite{vanDam70, zakharov70}. The vDVZ discontinuity can be alleviated at the nonlinear level through the Vainshtein mechanism \cite{vainshtein72}; however, these nonlinearities produce the so-called Boulware-Deser (BD) ghost degree of freedom \cite{boulware72}.\\ \indent In 2010 a ghost-free theory was proposed by de Rham, Gabadadze and Tolley (dRGT) \cite{derham11}. In the standard formalism of dRGT theory, the dynamics is determined by a modified action written in terms of a dynamical metric $g_{\mu\nu}$ and an arbitrary fiducial metric $f_{\mu\nu}$ needed to construct the gravitational self-interacting potential $\mathcal{U}$. The corresponding action reads: \begin{equation}\label{totaction} S=-\frac{1}{8\pi G}\int \left(\frac{1}{2} R+m^2 \mathcal{U}\right) \sqrt{-g} \ d^{4}x \ + \ S_{M}, \end{equation} where $S_M$ describes ordinary matter, which is supposed to interact directly only with $g_{\mu\nu}$.
The potential term, coupled through the graviton mass $m$, is defined by \cite{nieuwenhuizen11} \begin{align}\label{potential} \mathcal{U}\nonumber &= \frac{1}{2} (K_1^2-K_2)+\frac{c_3}{6}(K_1^3 -3 K_1K_2+2K_3)+\\ \nonumber \\ & + \frac{c_4}{12}(K_1^4 -6K_1^2 K_ 2+3 K_2^2 +8K_1 K_3 -6 K_4), \end{align} with $K_n\equiv \text{tr}\, K^n$ denoting the traces of the tensor $K^{\mu}_{\nu}$ constructed from the inverse metric $g^{\mu\nu}$ and the fiducial one through $$ K^\mu_\nu=\delta^\mu_\nu-\sqrt{g^{\mu\rho} f_{ab}\partial_\rho \phi^a\partial_\nu \phi^b}. $$ The four fields $\phi^a$ are the St\"uckelberg fields\footnote{St\"uckelberg fields were originally introduced by St\"uckelberg in 1938 to restore gauge-invariance in electromagnetism, but the method works equally well for spin-2 fields.}, which transform as scalars under coordinate transformations, such that the fixed metric $f_{\mu\nu}$, $$ f_{\mu\nu}=f_{ab}\partial_\mu \phi^a\partial_\nu \phi^b, $$ as well as the quantity $g^{\mu\alpha}f_{\alpha\nu}$, are promoted to tensor fields, while the potential $\mathcal{U}(g,f)$ is promoted to a scalar. The potential (\ref{potential}) has been shown to be the most general potential for a ghost-free theory of massive gravity in four dimensions \cite{hassan11}.\\ \indent Apart from interesting cosmological features, dRGT massive gravity admits the SdS solution, where the ``effective'' cosmological constant arises due to the graviton mass. In particular, if one considers the choice $$ c_4=1+c_3+c_3^2 $$ in (\ref{potential}), then the mass term of the theory behaves exactly as the cosmological constant term in GR for a spherically symmetric ansatz \cite{berezhiani12}, and the resulting expression for the metric reads as follows: \begin{equation} ds^2=-\left(1-\frac{2 G M}{r} -\frac{\Lambda}{3} r^2\right)\ dt^2+\frac{1}{1-\frac{2 G M}{r} -\frac{\Lambda}{3} r^2} \ dr^2+r^2\ d\Omega^2, \end{equation} that is, the standard SdS solution of GR in static coordinates.
The difference here is that it is accompanied by a nontrivial background of the St\"uckelberg fields. In terms of the parameters of the theory the effective cosmological constant reads \begin{equation}\label{lambdamassive} \Lambda_{mg}=\frac{2m^2}{1+c_3}. \end{equation} Finally, note that this solution allows one to recover GR when $c_3+c_4>0$ \cite{koyama11}, below the so-called Vainshtein radius $r_V=\left(GM/m^2\right)^{1/3}$. \textcolor{black}{For extended dRGT models, see, e.g., \cite{2015PhRvD..92l4063T, 2016arXiv160104399W}.} \subsection{Ho\v{r}ava-Lifshitz gravity}\label{spec:horava lifshitz} Let us summarize Ho\v{r}ava-Lifshitz gravity \cite{horava09}, in order to extract its spherically symmetric solutions. As we mentioned in the Introduction, Ho\v{r}ava-Lifshitz gravity is a power-counting renormalizable theory, obtained through an anisotropic scaling of space and time in the Ultraviolet limit. This feature allows for the inclusion of higher-dimensional spatial derivative operators that dominate in the high energy limit, while in the Infrared lower-dimensional operators take over, presumably providing a healthy low-energy limit, namely GR. Additionally, the absence of higher order time derivative terms prevents ghost instabilities. However, as it becomes obvious, the anisotropic scaling breaks Lorentz invariance, and the breaking of general covariance has been shown to introduce a dynamical scalar mode that may lead to a strong coupling problem and instabilities \cite{Bogdanos:2009uj,wang11}.\\ \indent Recently, a new covariant version of Ho\v{r}ava-Lifshitz gravity has been formulated by Ho\v{r}ava and Melby-Thompson \cite{HMT10} in which, in order to heal the scalar graviton problem, two auxiliary scalar fields have been introduced: the Newtonian pre-potential $\phi(t,x)$ and the gauge field $A(t,x)$. The latter eliminates the new scalar degree of freedom, thus curing the strong coupling problem in the Infrared limit, and general covariance is restored.
In the following we refer to the covariant version of Ho\v{r}ava and Melby-Thompson, and the running coupling $\lambda$ in the extrinsic curvature term of the action is not set to $1$ \cite{daSilva11}.\\ \indent With the perspective of Lorentz symmetry breaking, the suitable variables in Ho\v{r}ava-Lifshitz theory are the lapse function, the shift vector and the spatial metric, $N$, $N_i$, $g_{ij}$ respectively, according to the Hamiltonian formulation of General Relativity developed by Dirac \cite{Dirac58} and Arnowitt, Deser and Misner \cite{adm59}. Then the line element can be rewritten as $$ ds^2=-N^2 dt^2+g_{ij} \left(dx^i+N^i dt\right)\left(dx^j+N^j dt\right). $$ The theory can be assumed to satisfy the projectability condition, i.e. the lapse function only depends on time, $N=N(t)$, while the total gravitational action is given by \begin{equation}\label{HLaction} S_g=\zeta^2\int dt\ d^3x\ N\sqrt{g}\left(\mathcal{L}_K-\mathcal{L}_V+\mathcal{L}_\phi+\mathcal{L}_ A+\mathcal{L}_{\lambda}\right), \end{equation} where $g=\text{det}(g_{ij})$ and \begin{eqnarray*} \mathcal{L}_K&=&K_{ij} K^{ij} -\lambda K^2,\\ \mathcal{L}_\phi&=&\phi\ \mathcal{G}^{ij}\left(2K_{ij}+\nabla_i\nabla_j \phi\right),\\ \mathcal{L}_A&=&\frac{A}{N}\left(2 \Lambda_g-R\right),\\ \mathcal{L}_\lambda&=&(1-\lambda)\left[(\nabla \phi)^2+2 K \nabla^2\phi\right]. \end{eqnarray*} Note that in this subsection covariant derivatives and Ricci terms refer to the $3$-metric $g_{ij}$. $K_{ij}$ represents the extrinsic curvature $$ K_{ij}=g_i^k\nabla_k n_j, $$ $n_j$ being a unit normal vector of the spatial hypersurface, and $\mathcal{G}_{ij}$ is the $3$-dimensional generalised Einstein tensor $$ \mathcal{G}_{ij}=R_{ij}-\frac{1}{2}g_{ij} R+\Lambda_g g_{ij}. $$ We mention that the parameter $\lambda$ characterizes deviations of the kinetic part of the action from GR.
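The ADM line element above can be unpacked with a short symbolic computation; a single spatial dimension (a simplification of ours) is enough to exhibit the standard identifications $g_{00}=-N^2+g_{ij}N^iN^j$ and $g_{0i}=g_{ij}N^j$:

```python
import sympy as sp

# One-spatial-dimension toy version of the ADM line element
# ds^2 = -N^2 dt^2 + g_ij (dx^i + N^i dt)(dx^j + N^j dt).
N, N1, g11 = sp.symbols('N N1 g11', positive=True)
dt, dx = sp.symbols('dt dx')

ds2 = sp.expand(-N**2*dt**2 + g11*(dx + N1*dt)**2)

# Read off the 4-metric components from the quadratic form.
g_tt = ds2.coeff(dt, 2)                   # -N^2 + g11 N1^2
g_tx = ds2.coeff(dt, 1).coeff(dx, 1) / 2  # g11 N1 (half the cross term)
g_xx = ds2.coeff(dx, 2)                   # g11

print(g_tt, g_tx, g_xx)
```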
The most general parity-invariant Lagrangian density up to sixth order in spatial derivatives reads \cite{sotiriou09} \begin{align} \mathcal{L}_{V} \nonumber &= 2 \zeta^2 g_0 +g_1R+\frac{1}{\zeta^2}\left(g_2\ R^2+g_3\ R_{ij} R^{ij}\right)+\\ \nonumber \\ \nonumber & + \frac{1}{\zeta^4}\left[g_4\ R^3+g_5\ R R_{ij} R^{ij}+g_6\ R^i_j R^j_k R_i^{k} + \right. \\ \nonumber \\ & +\left. g_7\ R \nabla^2 R+ g_8\ (\nabla_i R_{jk}) (\nabla^i R^{jk})\right], \end{align} where in physical units $\zeta^2=(16 \pi G)^{-1}$, $G$ being the Newtonian constant, and the couplings $g_s\ (s=0,1,\dots,8)$ are all dimensionless.\\ \indent We are here interested in vacuum static spherically symmetric solutions. These have been derived in \cite{greenwald10} and \cite{lin12} for the cases $\lambda=1$ and $\lambda\neq1$, respectively (see also \cite{Kiritsis:2009sh,Kehagias:2009is}). Omitting the details of the derivation, and despite the large class of solutions, we mention that in both cases the SdS solution can be extracted. Various constraints on the parameters and functions of the theory are then derived, from both the equations of motion and Solar System tests \cite{greenwald10,lin12}. When $\lambda=1$, which is the GR value, and similarly to what happens in the original version presented in \cite{HMT10}, the SdS solution is recovered with the choice $\phi=A=0$, and the effective cosmological constant arises from the $g_0$ coupling, namely \begin{equation}\label{LambdaHL} \Lambda_{HL}=\frac{1}{2}\zeta^2 g_0. \end{equation} On the other hand, if one desires to consider $\lambda$ as a free parameter, one has to consider the Newtonian pre-potential $\phi$, as well as the gauge field $A$, as part of the metric to which matter fields couple, as shown in \cite{lin14}.
\section{Preliminary sensitivity analysis on the possibility of constraining $\Lambda$ with New Horizons}\label{track} \subsection{Suggested data analysis} New Horizons \cite{2008SSRv..140....3S, 2008SSRv..140...49G} is a spacecraft which, launched in 2006, flew by Pluto on the 14th of July 2015 without entering orbit around it. Orbital maneuvers were recently implemented\footnote{See http://pluto.jhuapl.edu/News-Center/News-Article.php?page=20151105 on the Internet.} to target the spacecraft towards the Trans-Neptunian Object (TNO) $2014~\textrm{MU}_{69}$ of the Kuiper Belt in an extended mission scenario. New Horizons is spin-stabilized and therefore it will be possible to perform radio-science experiments \cite{2010LRR....13....4T} thanks to the dedicated Radio Science Experiment (REX) apparatus \cite{2008SSRv..140..217T} carried on board and the innovative regenerative tracking technique \cite{2013aero.confE.170J}. The precision in Doppler measurements will be better than $\sigma_{\dot\rho} = 0.1$ mm s$^{-1}$ throughout the entire mission \cite{2008SSRv..140...23F}, while ranging will be accurate to better than $\sigma_{\rho} = 10$ m (1$\sigma$) over 6 years after 2015, i.e. at geocentric distances to beyond 50 au \cite{2008SSRv..140...23F}.\\ \indent It is interesting to preliminarily investigate the potential ability of New Horizons' tracking to improve the currently existing bounds on, e.g., the cosmological constant $\Lambda$. To this aim, we will numerically simulate the range and range-rate signatures of the extra-acceleration caused by a cosmological constant in the Solar System, comparing their magnitudes with the previously quoted figures for New Horizons.
However, it should be stressed that this is just a preliminary sensitivity analysis based on the expected precision of the probe's measurements: the actual overall accuracy will ultimately be set by several sources of systematic uncertainty such as, e.g., the heat dissipation from the Radioisotope Thermoelectric Generator (RTG) and the ability to accurately model the orbital maneuvers. In this respect, the extensive modeling of such non-gravitational perturbations for the Pioneer spacecraft, recently made in the framework of the Pioneer Anomaly investigations \cite{ 2008PhRvD..78j3001B, 2010AcAau..66..467R, 2010SSRv..151..123R, 2010SSRv..151...75B, 2011AnP...523..439R, 2012JSpRo..49..212S, 2012PhRvL.108x1101T, 2012PhLB..711..337F, 2014PhRvD..90b2004M} should be helpful. \\ \indent We numerically integrate the barycentric equations of motion of the major bodies of the Solar System and of New Horizons, with and without $\Lambda$, in Cartesian rectangular coordinates. Both integrations share the same initial conditions, retrieved from the HORIZONS Web interface run by JPL, NASA, and the time interval is set to 10 yr, starting from a date posterior to the Pluto flyby. Then, from the solutions of the perturbed and unperturbed equations of motion, we numerically produce differential time series $\Delta\rho(t),\Delta\dot\rho(t)$ of the Earth-New Horizons range $\rho$ and range-rate $\dot \rho$. The amplitudes of such simulated signatures can be compared to $\sigma_{\rho},\sigma_{\dot\rho}$ in order to preliminarily infer the value of $\Lambda$ which makes them compatible. It turns out that the range allows for tighter constraints than the range-rate. \\ \indent In Fig. \ref{figura} we present our results. In particular, we depict the simulated time series $\Delta\rho(t)$ for $\Lambda = 10^{-45}$ m$^{-2}$.
\begin{figure}[ht] \centering \centerline{ \vbox{ \begin{tabular}{cc} \epsfysize= 8.0 cm\epsfbox{NH_Lambda_range.eps} \\ \end{tabular} }} \caption{Simulated signature $\Delta\rho$ induced by $\Lambda = 10^{-45}$ m$^{-2}$ on the geocentric range of New Horizons over the decade-long time span 2015-2025. It was obtained by taking the difference \textcolor{black}{$\Delta\rho(t)$} between \textcolor{black}{two time series of $\rho(t)=\sqrt{\ton{x_{\textrm{NH}}(t) - x_\oplus(t)}^2 + \ton{y_{\textrm{NH}}(t) - y_\oplus(t)}^2 + \ton{z_{\textrm{NH}}(t) - z_\oplus(t)}^2}$ calculated by numerically integrating the barycentric equations of motion of New Horizons and the major bodies of the Solar System in Cartesian rectangular coordinates with and without the $\Lambda-$induced acceleration. All the standard Newton-Einstein dynamics for pointlike bodies was modeled in both integrations, which also shared the same initial conditions for August 5, 2015, retrieved from the HORIZONS Web interface maintained by JPL, NASA. The range-rate signature $\Delta\dot\rho(t)$, not shown here, was obtained by numerically differentiating the time series for $\Delta\rho(t)$.}} \label{figura} \end{figure} It can be noticed that the size of the $\Lambda$-induced signature is about 20 m. Thus, the possibility of constraining $\Lambda$ to a $\simeq 10^{-45}$ m$^{-2}$ level over the next ten years by means of New Horizons does not seem implausible. If this is indeed realized in practice, it would represent an improvement by one to two orders of magnitude with respect to the latest results in the literature \textcolor{black}{\cite{2013MNRAS.433.3584X,2014RAA....14..527L}}. However, it must be stressed once again that the analysis presented here should be regarded as a preliminary one, intended just to explore the potential opportunity offered by New Horizons; suffice it to say that it assumes an unperturbed trajectory over the years, without accounting for orbital maneuvers and corrections.
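A crude back-of-the-envelope estimate, which we add here only as a plausibility check of the simulated $\simeq 20$ m signature (the distance, time span and neglect of the Earth's own perturbation are simplifying assumptions of ours), uses the weak-field SdS acceleration $a_\Lambda=\Lambda c^2 r/3$ accumulated over a decade:

```python
import math

# Order-of-magnitude check of the Lambda-induced range signature.
# This ignores the Earth's own Lambda perturbation and all orbital
# dynamics, so only the order of magnitude is meaningful.
Lam = 1e-45              # m^-2, the value used in the simulation above
c = 2.998e8              # m/s
au = 1.496e11            # m
r = 40 * au              # rough New Horizons heliocentric distance, m
t = 10 * 365.25 * 86400  # 10 yr in s

a_Lambda = Lam * c**2 * r / 3      # outward acceleration, m/s^2
delta_rho = 0.5 * a_Lambda * t**2  # accumulated displacement, m

print(a_Lambda)   # ~1.8e-16 m/s^2
print(delta_rho)  # ~9 m, same order as the simulated signature
```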
\textcolor{black}{Finally, it should be remarked that the present analysis is based only on the orbital dynamics of the major bodies of the Solar System and of the probe itself. In fact, range and range-rate are not directly observable, since they are calculated from the actually measured round-trip time of flight of the photons and their frequency shift, respectively. Thus, in principle, the impact of $\Lambda$ on the propagation of the electromagnetic waves connecting the spacecraft and the Earth \cite{2008PhRvD..77d3004S, 2008A&A...484..103S, 2012PhRvD..85f7302C, 2016PhRvD..93d4013D, 2016Univ....2....5A} should be taken into account as well. A detailed calculation of such an aspect of the measurement modeling is beyond the scope of the present work.} \subsection{Induced constraints on the models} Having elaborated the observational constraints on the cosmological constant $\Lambda$, we may proceed to constraining the various gravitational modifications. In particular, we will use the SdS solution and the expression of the obtained effective $\Lambda$ in terms of the model parameters of each case, extracted in Section \ref{sec:SdS}, in order to provide constraints and bounds on these model parameters. \\ \indent In the case of $f(R)$ gravity, from the expression of the effective cosmological constant $\Lambda_{fR}$ of (\ref{LambdafR}) we obtain a constraint on the curvature scalar $R$, which must be constant in both the metric and Palatini approaches in order to have a SdS solution; from the numerical estimation of $\Lambda$, we obtain $R\sim 10^{-46}~\textrm{m}^{-2}$. Moreover, since through the scalar equation (\ref{eq:fieldmetricscalar1vac}) the Ricci scalar is related to the analytical expression of the Lagrangian, or at least to the ratio $f(R)/f^{\prime}(R)$, and thus to the parameters of the specific model, we can easily extract constraints on them too.
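As an illustration of the last point, for the toy model $f(R)=R-\mu^4/R$ (our illustrative choice, not a model adopted in the text) the vacuum trace equation gives the constant curvature $R=\sqrt{3}\,\mu^2$, so the quoted estimate of $R$ bounds the parameter $\mu$ directly:

```python
import math

# Toy model f(R) = R - mu^4 / R (an assumption of this example):
# its vacuum trace equation gives the constant curvature R = sqrt(3) mu^2,
# so a bound on R translates into a bound on mu^2.
R_bound = 1e-46               # m^-2, the estimate quoted above
mu2 = R_bound / math.sqrt(3)  # mu^2, in m^-2

print(mu2)  # ~5.8e-47 m^-2
```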
\\ \indent In the case of $f(T)$ gravity, as already remarked, the function $\gamma(r)$ can be chosen to achieve the desired constant value of the torsion scalar through (\ref{eq:torsionscalar}); thus the expression (\ref{eq:defLambdaf}) for $\Lambda_{fT}$ does not allow us to break this degeneracy and impose constraints on the Lagrangian. \\ \indent In the case of massive gravity, the effective $\Lambda_{mg}$ (\ref{lambdamassive}) allows us to infer upper limits on the graviton mass. Assuming $c_3 \sim O(1)$, numerical bounds on $\Lambda_{mg}$ directly constrain $m$. Restoring SI units, i.e. replacing $m$ with $m_g=\hbar m/c$, the observational constraints on the cosmological constant translate into $$ m_g\sim10^{-69}~\textrm{g}=0.56\times10^{-36} \textrm{eV~c}^{-2}. $$ We stress here that, as expected, our Solar System analysis can infer more stringent constraints on the graviton mass than the analysis of the same model using cosmological data \cite{cardone12b}, in which $m$ is related to the present value of the Hubble parameter. Moreover, we can compare our result with the upper limit $m_g < 7.68 \cdot 10^{-55}$ g from the dynamics of the Solar System \cite{talmadge88} and the more stringent limit, namely $m_g <10^{-59}~\textrm{g}$, derived by requiring the dynamical properties of a galactic disk to be consistent with observations \cite{alves10} (see also \cite{Goldhaber08} for a comprehensive review on the phenomenology of graviton mass and experimental limits). The improvement in the obtained bounds is obvious. \\ \indent Finally, for the case of Ho\v{r}ava-Lifshitz gravity, using the expression (\ref{LambdaHL}) for the effective cosmological constant $\Lambda_{HL}$ in terms of the coupling constant associated with the zeroth-order spatial derivative term, namely $g_0$, we can extract its corresponding bound.
It proves more convenient to rescale $g_0$ through the Planck mass (or equivalently the gravitational constant $\zeta^2=(16 \pi G)^{-1}$) in order to obtain a dimensionless quantity $\tilde{g}_0$. Hence, we finally obtain $$\tilde{g}_0\sim 10^{-113}. $$ Similarly to the case of massive gravity, the above bound is stricter than the corresponding cosmological ones \cite{Dutta:2009jn}. \section{Summary and conclusions}\label{theend} In this work we have considered the possibility that the gravitational field of an isolated source, such as the Sun, can be described by the Schwarzschild-de Sitter (SdS) geometry. Such a solution exists in the large majority of modified gravity theories, as expected, and in particular the effective cosmological constant is determined by the specific parameters of the given theory. Hence, one can use Solar System data to constrain the SdS solution, and thus eventually to extract constraints on the parameters of the gravitational modification.\\ \indent We have considered some of the most studied recent modified gravities, namely $f(R)$ and $f(T)$ theories, dRGT massive gravity, and Ho\v{r}ava-Lifshitz gravity, and after giving their SdS solutions we have explored the possibility of using future extended radio-tracking data from the currently ongoing New Horizons mission in the outer regions of the Solar System, in order to constrain the effective cosmological constant, and thus the modified gravity parameters. In particular, we showed that an improvement of one to two orders of magnitude may be possible, provided that steady trajectory arcs several years long are processed, and that orbital maneuvers are accurately modeled. Despite its necessarily tentative and incomplete character, such an idea is worthy of further and more detailed consideration, especially concerning the model-building of gravitational modifications.
\subsection{} Given smooth proper schemes $X_1, X_2$ over a field $k$ and an object $E\in D^b(X_1\times X_2)$ of the bounded derived category of coherent sheaves on $X_1\times X_2$, define a triangulated functor \begin{equation} \Phi_E: D^b(X_1) \to D^b(X_2) \end{equation} sending a bounded complex $M$ of coherent sheaves on $X_1$ to $ Rp_{2 *}(E\stackrel{L}{\otimes} p^*_1 M)$, where $p_i: X_1\times X_2 \to X_i$ are the projections. Recall that a triangulated functor $D^b(X_1) \to D^b(X_2)$ is said to be of the Fourier-Mukai type if it is isomorphic to $\Phi_E$ for some $E$. Let $Y$ be a smooth projective scheme over $\Spec {\mathbb Z}_p$ and let $X$ be its special fiber, $i: X \hookrightarrow Y$ the closed embedding. Consider the triangulated functor $G: D^b(X) \to D^b(X)$ given by the formula $$ G = L i^* \circ i_* $$ We shall see that in general $G$ is not of the Fourier-Mukai type. \begin{Th} Let $Z$ be a smooth projective scheme over $\Spec {\mathbb Z}_p$, $Y= Z\times Z$, $X= Y\times _{\Spec {\mathbb Z}_p} \Spec {\mathbb F}_p$. Assume that \begin{enumerate} \item The Frobenius morphism $Fr: \overline{Z} \to \overline{Z}$, where $\overline{Z}= Z \times \Spec {\mathbb F}_p$, does not lift modulo $p^2$. \item $H^1(X, T_X)=0$, where $T_X$ is the tangent sheaf on $X$. \end{enumerate} Then $G = L i^* \circ i_*: D^b(X) \to D^b(X)$ is not of the Fourier-Mukai type. \end{Th} For example, let $GL_n$ be the general linear group over $\Spec {\mathbb Z}_p$ and $B\subset GL_n$ a Borel subgroup. Then, by Theorem 6 from \cite{btlm}, for any $n>2$, the flag variety $Z= GL_n/B$ satisfies the first assumption of the Theorem, {\it i.e.}, the Frobenius $Fr: \overline{Z} \to \overline{Z}$ does not lift to $Z \times \Spec {\mathbb Z}/p^2 {\mathbb Z}$. By (\cite{klt}, Theorem 2), we have that $H^1(\overline{Z}, T_{\overline{Z}})= H^1(\overline{Z}, {\mathcal O}_{\overline{Z}})=0$. It follows that $H^1(X, T_X)=0$.
Hence, by the Theorem, for $n>2$, $G: D^b(X) \to D^b(X)$ is not of the Fourier-Mukai type. \begin{proof} Assume the contrary and let $E\in D^b(X\times X)$ be the Fourier-Mukai kernel. By definition, for every $M\in D^b(X)$ we have a functorial isomorphism \begin{equation}\label{pr1} G(M)\buildrel{\sim}\over{\longrightarrow} Rp_{2 *}(E\stackrel{L}{\otimes} p^*_1 M). \end{equation} By the projection formula (\cite{h}, Chapter \rm{II}, Prop. 5.6) we have that $$i_* \circ L i^* \circ i_*(M) \buildrel{\sim}\over{\longrightarrow} i_*(M) \stackrel{L}{\otimes} i_*({\mathcal O}_X) \buildrel{\sim}\over{\longrightarrow} i_*(M) \otimes ({\mathcal O}_Y \rar{p} {\mathcal O}_Y)\buildrel{\sim}\over{\longrightarrow} i_*(M) \oplus i_*(M)[1]$$ In particular, if $M $ is a coherent sheaf then $\underline{H}^i(G(M)) \simeq M$ for $i=0, -1$ and $\underline{H}^i(G(M))=0$ otherwise. Applying this observation and formula (\ref{pr1}) to skyscraper sheaves, $M= \delta_x$, $x\in X(\overline {\mathbb F}_p)$, we conclude that the coherent sheaves $\underline{H}^i(E)$ are set-theoretically supported on the diagonal $\Delta_X \subset X\times X$. Applying the same formulas to $M= {\mathcal O}_X$, we see that $ p_{2 *}(\underline{H}^i(E))={\mathcal O}_X$ for $i=0, -1$ and $ p_{2 *}(\underline{H}^i(E))=0$ otherwise. In fact, every coherent sheaf $F$ on $X\times X$ which is set-theoretically supported on the diagonal and such that $p_{2 *} F = {\mathcal O}_X$ is isomorphic to ${\mathcal O}_{\Delta_X}$. It follows that $\underline{H}^0(E)= \underline{H}^{-1}(E)= {\mathcal O}_{\Delta_X}$.
In other words, $E$ fits into an exact triangle in $ D^b(X\times X)$ \begin{equation}\label{pr5} {\mathcal O}_{\Delta_X}[1] \rar{\alpha} E \rar{} {\mathcal O}_{\Delta_X} \rar{\beta} {\mathcal O}_{\Delta_X}[2] \end{equation} for some $\beta \in Ext^2_{{\mathcal O}_{X\times X}}( {\mathcal O}_{\Delta_X}, {\mathcal O}_{\Delta_X}).$ We wish to show that the second assumption in the Theorem implies that $\beta = 0$, while the first one implies that $\beta \ne 0$. For every $M \in D^b(X)$, (\ref{pr5}) gives rise to an exact triangle \begin{equation}\label{pr3} M[1] \rar{\alpha_M} G(M)\rar{} M \rar{\beta_M} M[2] \end{equation} Our main tool is the following result. \begin{lm} For a coherent sheaf $M$ the following conditions are equivalent. \begin{enumerate} \item $\beta_M=0$. \item $G(M)\buildrel{\sim}\over{\longrightarrow} M\oplus M[1]$. \item There exists a morphism $\lambda: G(M) \to M[1]$ such that $\lambda \circ \alpha_M$ is an isomorphism. \item $M$ admits a lift modulo $p^2$, {\it i.e.}, there is a coherent sheaf $\tilde M$ on $Y$ flat over ${\mathbb Z}/p^2 {\mathbb Z}$ such that $i^*\tilde M \simeq M$. \end{enumerate} \end{lm} \begin{proof} The equivalence of (1), (2) and (3) is immediate. Let us check that (3) is equivalent to (4). A morphism $\lambda: G(M) \to M[1]$ gives rise by adjunction to a morphism $\gamma: i_*M \to i_* M[1]$. Note that $\tilde M = \cone \gamma [1]$ is a coherent sheaf on $Y$ which is an extension of $i_*M$ by itself. It suffices to prove that $\lambda \circ \alpha_M: M[1] \to M[1]$ is an isomorphism if and only if $\tilde M$ is flat over ${\mathbb Z}/p^2 {\mathbb Z}$. 
Indeed, from the exact triangle $$ Li^*i_* M \to Li^* (\tilde M) \to Li^*i_*M \to Li^*i_* M[1]$$ we get a long exact sequence of cohomology sheaves $$ 0\to M\to L_1i^* (\tilde M) \to M\rar{\lambda \circ \alpha_M} M\to i^* (\tilde M) \to M \to 0 $$ Thus $\lambda \circ \alpha_M$ is an isomorphism if and only if, in the exact sequence $$0\to i_*M \to \tilde M \to i_*M \to 0,$$ the image of the second map is the kernel of multiplication by $p$ on $\tilde M$ and also the image of this map. The latter is equivalent to flatness of $\tilde M$ over ${\mathbb Z}/p^2 {\mathbb Z}$. \end{proof} We have a spectral sequence converging to $Ext^*_{{\mathcal O}_{X\times X}}( {\mathcal O}_{\Delta_X}, {\mathcal O}_{\Delta_X}) $ whose second page is $ H^*(X, {\mathcal E} xt^* _{{\mathcal O}_{X\times X}}( {\mathcal O}_{\Delta_X}, {\mathcal O}_{\Delta_X}))$. In particular, we have a homomorphism $$Ext^2_{{\mathcal O}_{X\times X}}( {\mathcal O}_{\Delta_X}, {\mathcal O}_{\Delta_X}) \to H^0(X, {\mathcal E} xt^2 _{{\mathcal O}_{X\times X}}( {\mathcal O}_{\Delta_X}, {\mathcal O}_{\Delta_X})) \buildrel{\sim}\over{\longrightarrow} H^0(X, \wedge^2 T_X).$$ Let us check that the image $\mu$ of $\beta$ under this map is $0$. To do this we apply the Lemma to skyscraper sheaves $\delta_x$, where $x$ runs over closed points of $X$. On the one hand, the evaluation of the bivector field $\mu$ at $x$ is equal to the class of $\beta_{\delta_x}$ in $Ext^2_{{\mathcal O}_{X}}( \delta_x, \delta_x) \buildrel{\sim}\over{\longrightarrow} \wedge^2 T_{x, X}.$ On the other hand, by the Lemma, $\beta_{\delta_x}=0$ since $\delta_x$ is liftable modulo $p^2$. 
Next, the assumption that $H^1(X, T_X)=0$ implies that $\beta$ lies in the image of the map \begin{equation}\label{pr4} v: H^2(X, {\mathcal O}_X) \buildrel{\sim}\over{\longrightarrow} H^2 (X, {\mathcal E} xt^0 _{{\mathcal O}_{X\times X}}( {\mathcal O}_{\Delta_X}, {\mathcal O}_{\Delta_X}) ) \to Ext^2_{{\mathcal O}_{X\times X}}( {\mathcal O}_{\Delta_X}, {\mathcal O}_{\Delta_X}) . \end{equation} The map (\ref{pr4}) has a left inverse $u: Ext^2_{{\mathcal O}_{X\times X}}( {\mathcal O}_{\Delta_X}, {\mathcal O}_{\Delta_X}) \to H^2(X, {\mathcal O}_X)$ which takes $\beta$ to $\beta_{{\mathcal O}_X}$. But, by the Lemma, the latter class is equal to $0$ since ${\mathcal O}_X$ is liftable modulo $p^2$. It follows that $\beta$ is $0$. On the other hand, let $\Gamma \subset X= \overline{Z} \times \overline{Z}$ be the graph of the Frobenius morphism $Fr: \overline{Z} \to \overline{Z}$ and ${\mathcal O}_{\Gamma}$ the structure sheaf of $\Gamma$ viewed as a coherent sheaf on $X$. Then, by our first assumption, the sheaf ${\mathcal O}_{\Gamma}$ is not liftable modulo $p^2$. Hence, by the Lemma, $\beta_{{\mathcal O}_{\Gamma}}$ is not $0$. This contradiction completes the proof. \end{proof} {\bf Acknowledgements.} I would like to thank Alberto Canonaco and Paolo Stellari: their interest prompted the writing of this note. I am also grateful to Alexander Samokhin for stimulating discussions and references.
\subsection{Fibre Assignment}\label{sec:tiling_code} In eBOSS the selection of targets for spectroscopic observation was performed through the survey `tiling' process, with the goal of maximising the number of targets that receive a fibre with a given number of tiles \citep{blanton03}. The fraction of targets of a given type that receive a fibre defines the tiling completeness. Collision groups are defined as sets of targets where each member of the group is in a fibre collision (i.e. with separation below $62\arcsec$) with at least one other member, such that they cannot all be observed within a single exposure. The tiling is performed to maximise the tiling efficiency (i.e., the fraction of science fibres assigned to targets) while reaching the desired densities and decollided completeness\footnote{The decollided set of targets consists of targets not in collision groups combined with colliding targets that can be assigned a fibre on a single plate \citep[see][]{dawson16}.} of the various targets. As described in detail in \cite{blanton03}, the tiling was performed in three steps: \begin{enumerate} \item first, the full survey footprint was divided into multiple chunks. In eBOSS different chunks were observed independently even if they partially overlap, i.e. targets in the chunk-overlap regions can potentially be targeted in more than one chunk (see \cite{ross20} for details on how these areas are treated in the LSS catalogue creation); \item given the angular distribution of targets, tile centers in each chunk were initially drawn from a uniform covering of the celestial sphere and were then perturbed with respect to these initial positions to maximise the tiling completeness; \item finally, given the tiling solution from (ii), fibres were assigned to targets, resolving collisions between targets where necessary. \end{enumerate} Steps (i)-(iii) were adopted for sufficiently large chunks. 
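As an illustration of the definition above, collision groups can be found by linking every pair of targets closer than the $62\arcsec$ collision radius and taking connected components. The following Python sketch (not part of the eBOSS pipeline; the flat-sky small-angle approximation and all names are ours) uses a simple union-find:

```python
import numpy as np

COLLISION_RADIUS = 62.0 / 3600.0  # fibre-collision radius, 62 arcsec in degrees

def collision_groups(ra, dec):
    """Return the collision groups: connected components of the graph that
    links every pair of targets separated by less than the collision radius.
    Angular separations use the flat-sky approximation, adequate on
    arcsecond scales."""
    n = len(ra)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dra = (ra[i] - ra[j]) * np.cos(np.radians(0.5 * (dec[i] + dec[j])))
            sep = np.hypot(dra, dec[i] - dec[j])
            if sep < COLLISION_RADIUS:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    # Only sets with more than one member are genuine collision groups
    return [g for g in groups.values() if len(g) > 1]
```

A production implementation would replace the $O(n^2)$ pair loop with a spatial tree search, but the grouping logic is the same.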
For chunks with narrow or small geometry, step (ii) in the procedure above was not able to produce an optimal configuration of tiles to reach the desired fibre efficiency and target density. For these chunks we manually placed evenly-spaced tiles in step (ii). Given that different tracers were observed simultaneously, eBOSS observations adopted a tiered-priority system for assigning fibres to targets. A collision between two targets with different priorities is referred to as a \textit{`knockout'}, as opposed to fibre collisions that occur between targets of the same type. For the chunks with luminous red galaxies and quasar targets, fibres were assigned in two rounds: a) all non-LRG targets get maximum priority and receive fibres first, requiring a $100\%$ tiling completeness for their decollided set; b) in the second round, the remaining fibres are assigned to LRG targets at a lower priority. The requirement of $100\%$ decollided completeness for LRG targets was lifted since they were selected with a higher number density than the available fibres and they collide with higher-priority targets.\footnote{In fact, the tile centers of these LRG/QSO chunks were chosen using a downsampled set of the main LRG sample (by 15\%) in order to decrease the density of tiles; once the tile centers were decided, the fibre assignment pipeline was applied again to the entire sample.} In the first round, collisions between non-LRG targets were resolved with the following decreasing priorities: SPIDERS, TDSS, known quasars selected for re-observation, clustering quasars, variability-selected quasars, quasars from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey and white dwarfs. The tiling density is on average one tile per 5 square degrees for chunks observed as part of eBOSS, while it is approximately one tile per 4 square degrees for the two chunks observed as part of SEQUELS. 
For the chunks dedicated to ELGs, the target densities were dominated by ELG targets in the first round, with a small number of TDSS Few-Epoch Spectroscopy (FES) objects targeted at a density of $\sim 1\deg^{-2}$ at the same priority as ELG targets\footnote{Additional TDSS targets were targeted on the same plates at lower priorities. The total density of TDSS targets that share plates with emission-line galaxies is $\sim 15 \deg^{-2}$.}. The tiling density of these chunks is on average approximately one tile per 4 square degrees. \subsection{Veto Masks} The portion of the survey area where spectroscopic observations are impossible is accounted for by different veto masks. Under the assumption that veto masks do not correlate with the target sample, their effect is to change the survey window function. As such, these regions are masked from both the data and random catalogues used for clustering measurements \citep{ross20,anand20}. In particular, for the LRG and QSO samples, areas contaminated by bright stars are masked using the `bright-star' veto mask. Other bright objects, such as bright local galaxies and bright stars missed by the bright-star mask, were also visually identified and masked using the `bright-object' veto mask \citep{reid16}. Regions with bad or no photometric observations are removed using the `bad-field' veto mask. No fibre can be placed within $92\arcsec$ of the center of each plate, where a hole is drilled for the centerpost. These regions are part of the `centerpost' veto mask. A mask around infrared-bright stars was also applied to the LRG target file before tiling. Knockouts are also assumed to be uncorrelated with the target sample and are accounted for by applying the `collision-priority' veto mask. In regions covered by a single tile, the collision-priority mask removes any QSO within $62\arcsec$ of a TDSS or SPIDERS target. 
The LRG collision-priority mask removes any LRG within $62\arcsec$ of a non-LRG target, regardless of whether it lies in single-pass or multi-pass regions. This is a conservative approach motivated by the fact that, being targeted at the lowest priority, not all knocked-out LRGs are targeted in areas covered by more than one tile. For further details on the construction and properties of the different veto masks we refer the reader to \cite{ross20}. The sample of emission-line galaxies presents similar effects. However, the veto masks for the ELG sample were more complicated and were built in the form of pixelized masks. They account for issues related to different bright stars, systematics in the target selection and defects in the photometry. As anticipated in Sec. \ref{sec:tiling_code}, ELG targets share plates with some TDSS targets. In particular, TDSS FES-type targets have the same targeting priority as ELGs. To account for this, a veto mask, similar to the collision-priority mask for LRG and QSO targets, is created around each TDSS-FES target. We refer the reader to \cite{anand20} for a detailed description of the veto masks for emission-line galaxies. \subsection{Weights} In redshift surveys the target selection is affected by different systematic effects that can alter the observed target density with respect to the true one \citep{ross12, delatorre13, bautista18, ross20}. Target selection from photometric data, fibre assignment and inaccurate redshift estimates are typical systematic effects that affect the observed target density in spectroscopic redshift surveys. Each systematic effect is corrected by applying a weight to each target. 
In eBOSS, targets in the LSS catalogues are assigned the following weights to correct for systematics or optimise the clustering measurements: \begin{itemize} \item $w_{\rm{sys}}$ corrects for spurious fluctuations in the photometric target selection; \item $w_{\rm{cp}}$ is the standard correction for fibre collisions adopted in eBOSS cosmological analyses. In the following part of this paper we will refer to the $w_{\rm{cp}}$ weighting as the `CP' correction (or weighting) method; \item $w_{\rm{noz}}$ accounts for redshift failures; \item $w_{\rm{FKP}}$ are the standard FKP weights \citep{feldman94} used to minimise the variance of the measurement. \end{itemize} The CP correction is a variant of the standard nearest-neighbour (NN) method (see \cite{ross20}). In particular, $w_{\rm{cp}}$ weights are computed for collision groups where the weight of each target missed due to a fibre collision is equally distributed among the observed members of the group, rather than being assigned only to its nearest neighbour as in the standard NN method. In this work we use the PIP weights as an alternative to $w_{\rm{cp}}$, keeping all other weights as in eq. \eqref{eq:weights}. The overall standard weight is then: \begin{equation} w_{\rm{tot}} = w_{\rm{sys}}\times w_{\rm{noz}}\times w_{\rm{FKP}}\times w_{\rm{cp}}. \label{eq:weights} \end{equation} The same weights listed in this section are also assigned to the objects in the random catalogues used to perform clustering measurements. However, for random points the purpose of these weights is to match the radial selection function of eBOSS targets rather than to correct for the related systematic effects. An exception are the $w_{\rm{sys}}$ weights for random points, which are used to correct for survey completeness in each sector when the CP correction is adopted to correct for fibre collisions. We report in Sec. \ref{sec:randoms} the procedure used in \cite{ross20} and \cite{anand20} to assign weights to objects in the random catalogues. 
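In array form, the combined weight of eq. \eqref{eq:weights} is a simple per-object product; a minimal numpy sketch with illustrative (not real eBOSS) values:

```python
import numpy as np

# Illustrative per-target correction weights (values are made up)
w_sys = np.array([1.02, 0.98, 1.00])  # photometric-systematics weights
w_cp  = np.array([1.0, 2.0, 1.0])     # close-pair (fibre-collision) weights
w_noz = np.array([1.0, 1.0, 1.1])     # redshift-failure weights
w_fkp = np.array([0.7, 0.7, 0.8])     # FKP variance-minimising weights

# eq. (weights): w_tot = w_sys * w_noz * w_FKP * w_cp
w_tot = w_sys * w_noz * w_fkp * w_cp
```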
\subsection{Measurement of the correlation function} We adopt the widely used Landy-Szalay estimator \citep{landy93}, which minimises both bias and variance, to measure the two-point correlation functions: \begin{equation} \xi(\mathbf{s}) = \frac{DD(\mathbf{s})-2DR(\mathbf{s})}{RR(\mathbf{s})}+1, \label{eq:LS} \end{equation} where $DD$, $DR$ and $RR$ are the data-data, data-random and random-random pair counts normalised to the total number of corresponding weighted pairs, and $\mathbf{s}$ is the pair separation vector in redshift space, i.e. when distances are inferred through the observed redshifts. We measure two types of correlation functions: \begin{itemize} \item[-] the projected correlation function $w_p(r_p)$, commonly used in the literature to constrain Halo Occupation Distribution (HOD) models, \begin{equation} w_p(r_p) = 2\int_0^{\pi_{\rm{max}}}\xi\left(r_p,\pi\right)d\pi\ . \label{eq:wp} \end{equation} \item[-] The multipoles $\xi^{(\ell)}$ of the redshift-space 3D correlation function, widely used to measure and model anisotropic 3D clustering in redshift space, \begin{equation} \xi^{(\ell)}(s) = \left(2\ell+1\right)\int_{0}^{+1}\xi(s,\mu)L_\ell(\mu)d\mu. \label{eq:mps} \end{equation} \end{itemize} In eq. \eqref{eq:wp}, $r_p$ and $\pi$ are the components of the pair separation $\mathbf{s}$ transverse and parallel to the line of sight. The integral in eq. \eqref{eq:wp} is truncated at $\pi_{\rm{max}}=80\,h^{-1}{\rm Mpc}$ as measurements at larger scales are noise dominated. Given the angular coordinates $\left(\rm{RA},\rm{DEC}\right)$ and redshifts $z$, we compute $r_p$ and $\pi$ following \cite{fisher94}. In eq. \eqref{eq:mps}, the pair separation modulus satisfies $s^2=r_p^2+\pi^2$ and the cosine of the angle between the pair separation and the line of sight is $\mu=\pi/s$. 
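A schematic implementation of the pair decomposition and of the estimator of eq. \eqref{eq:LS} (our own minimal sketch: the mid-point line-of-sight convention is assumed, as in \cite{fisher94}, and the pair counts are assumed already normalised):

```python
import numpy as np

def pair_decomposition(x1, x2):
    """Split the separation of two comoving positions into components
    parallel (pi) and transverse (r_p) to the line of sight, using the
    line of sight to the pair mid-point."""
    s = x2 - x1                   # redshift-space separation vector
    l = 0.5 * (x1 + x2)           # line of sight to the pair mid-point
    pi = np.abs(np.dot(s, l)) / np.linalg.norm(l)  # parallel component
    rp = np.sqrt(np.dot(s, s) - pi**2)             # transverse component
    return rp, pi

def landy_szalay(dd, dr, rr):
    """Landy-Szalay estimator from pair counts already normalised to the
    total number of weighted pairs: xi = (DD - 2 DR)/RR + 1."""
    return (dd - 2.0 * dr) / rr + 1.0
```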
In order to resolve the clustering at sub-$\,h^{-1}{\rm Mpc}$ scales we use a logarithmic binning for the pair separation $s$ and its component transverse to the line of sight, $r_p$, \begin{equation} \log s_{i+1} = \log s_i+\Delta s_{\log}, \label{eq:log_binning} \end{equation} where $\Delta s_{\log}=0.1$. The logarithmic mean of the bin edges is used as the sampling point \citep{mohammad18}. The line-of-sight component $\pi$ of the pair separation is binned using a $1\,h^{-1}{\rm Mpc}$ linear binning. When computing the multipoles of the two-point correlation function we divide $\mu$ over the range $\left[0,1\right]$ into 200 linear bins. \subsection{Pairwise-Inverse Probability (PIP) Weighting} The PIP weight for a given pair is defined as the inverse of the probability of it being targeted within an ensemble set of possible realisations of the survey, which includes all possible pairs of targets, and from which the actual realisation of the survey undertaken can be considered to be drawn at random. The selection probabilities depend strongly on the particular fibre assignment algorithm adopted to select targets from a parent photometric catalogue for the spectroscopic follow-up, and are therefore difficult to model except by rerunning the actual algorithm adopted. We thus rely on inferring the selection probabilities by generating multiple replicas of the survey target selection, changing the ``random seed'' for each run such that different choices are made as to which target to select for follow-up spectroscopy (see Sec. \ref{sec:survey_realizations}). The inverse probability is then simply estimated as the total number of realisations $N_{runs}$ divided by the number of realisations in which a given pair was actually targeted \citep[see][for a discussion about inverse-probability estimators; specifically, following the nomenclature introduced in that work, we adopt the zero-truncated estimator]{bianchi19}. 
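As an aside on implementation, the logarithmic binning with $\Delta s_{\log}=0.1$ and logarithmic-mean sampling points described above can be written in a few lines of numpy (the separation range used here is illustrative, not the one adopted in the paper):

```python
import numpy as np

# Logarithmic bin edges with Delta log10(s) = 0.1; the range limits
# (0.1 to 100 h^-1 Mpc) are illustrative
dlog = 0.1
edges = 10.0 ** np.arange(np.log10(0.1), np.log10(100.0) + dlog, dlog)

# The sampling point of each bin is the logarithmic mean of its edges,
# i.e. their geometric mean
centres = np.sqrt(edges[:-1] * edges[1:])
```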
The PIP correction gives unbiased measurements of the two-point correlation function provided that there are no pairs with zero probability of being targeted in the ensemble of survey realisations. Following \cite{bianchi17}, rather than storing pairwise weights for individual pairs we store what are referred to as bitwise weights $w_i^{(b)}$ for each target. These are simply binary arrays of length $N_{runs}$ where each bit (either one or zero) represents the outcome of the corresponding fibre assignment run for target $i$ (this target either is, or is not, included in run $b$). Bitwise weights are then combined \emph{``on the fly''} to compute the pairwise weights between target $m$ and target $n$ as: \begin{equation} w_{mn} = \frac{N_{runs}}{\popcnt \left[w_m^{(b)}\&w_n^{(b)}\right]}.\label{eq:pip} \end{equation} In eq. \eqref{eq:pip} $\popcnt$ and $\&$ are standard bitwise operators. In particular, $\popcnt$ is the `population count' operator that, given an array, in this case a bit sequence of 0 and 1, returns the number of elements different from 0. In eq. \eqref{eq:pip} $\&$ is the bitwise `\verb and ' that, given two arrays of equal length, performs the logical `AND' operation on each pair of the corresponding bits and returns the result as an array with length equal to that of the input arrays. The weights $w_m$ for individual targets, called individual-inverse-probability (IIP) weights, can be calculated simply by setting $n=m$ in eq. \eqref{eq:pip}. The same random catalogue is valid for all fibre assignment runs, and so the pair counts in eq. \eqref{eq:LS} are now: \begin{equation}\label{eq:pip_pairs} \left.\begin{aligned} DD(\vec{s}) &= \sum_{\vec{x}_m - \vec{x}_n \approx \vec{s}} \mathrm{w}_{mn}w^{'}_{\rm{tot,m}}w^{'}_{\rm{tot,n}} \ ,\\ DR(\vec{s}) &= \sum_{\vec{x}_m - \vec{y}_n \approx \vec{s}} \mathrm{w}_{m}w^{'}_{\rm{tot,m}}w_{\rm{tot,n}} \ , \end{aligned}\right. \end{equation} In eq. 
\eqref{eq:pip_pairs} $w^{'}_{\rm{tot}}=w_{\rm{sys}}\times w_{\rm{noz}}\times w_{\rm{FKP}}$ and $w_{mn}$ and $w_m$ are the PIP and IIP weights, respectively. The $RR$ pairs are computed using the overall weights in eq. \eqref{eq:weights}. \subsection{Angular Up-weighting} The PIP weighting scheme is unbiased only if there are no pairs with zero selection probability. This is not the case for pairs with separations below the fibre-collision scale that fall in the single-pass regions of the survey. Indeed, these pairs are systematically missed regardless of the number of survey realisations used to infer the selection probabilities, since at least one of the two targets cannot be observed. However, a fraction of colliding pairs are targeted in regions where two or more tiles overlap. Under the assumption that the set of un-observed pairs is statistically equivalent to the set of observed pairs, we can use the angular up-weighting scheme proposed in \cite{percival17} to recover the small-scale clustering. Although this is a reasonable assumption, survey designs may produce scenarios where the ansatz is not valid. In eBOSS, to maximise the targeting efficiency, areas where tiles overlap correlate to some extent with the regions of high targeting density. This can introduce a bias in the measurements from the eBOSS DR16 catalogues shown in Fig. \ref{fig:wp_lrg_data}-\ref{fig:mps_elg_data} that cannot be corrected using methods available in the literature. The angular up-weighting is performed on both the $DD$ and $DR$ pair counts. Eq. 
\eqref{eq:pip_pairs} then becomes, \begin{equation}\label{eq:pip+ang} \left.\begin{aligned} DD(\vec{s}) &= \sum_{\substack{\vec{x}_m - \vec{x}_n \approx \vec{s}\\\vec{u}_m\cdot \vec{u}_n\approx\cos{\theta}}} \mathrm{w}_{mn}w^{'}_{\rm{tot,m}}w^{'}_{\rm{tot,n}}\times \wdd \ ,\\ DR(\vec{s}) &= \sum_{\substack{\vec{x}_m - \vec{y}_n \approx \vec{s}\\\vec{u}_m\cdot \vec{v}_n\approx\cos{\theta}}} \mathrm{w}_{m}w^{'}_{\rm{tot,m}}w_{\rm{tot,n}}\times\wdr \ , \end{aligned}\right. \end{equation} where $\vec{u}=\vec{x}/x$. The angular weights $\wdd$ and $\wdr$, used to up-weight the $DD$ and $DR$ pair counts respectively, are defined as \begin{equation}\label{eq:ang_weights} \left.\begin{aligned} w_{\rm{ang}}^{\rm{DD}}(\theta) &= \frac{DD^{\rm{par}}\left(\theta\right)}{DD^{\rm{fib}}_{\rm{PIP}}\left(\theta\right)} \ ,\\ w_{\rm{ang}}^{\rm{DR}}(\theta) &= \frac{DR^{\rm{par}}\left(\theta\right)}{DR^{\rm{fib}}_{\rm{IIP}}\left(\theta\right)} \ . \end{aligned}\right. \end{equation} The superscripts -$\rm{par}$ and -$\rm{fib}$ in eq. \eqref{eq:ang_weights} denote pairs of targets from the reference parent sample and pairs of targets that receive fibres, respectively. The subscripts $\rm{PIP}$ and $\rm{IIP}$ indicate that the pair counts are up-weighted using the pairwise-inverse-probability (PIP) weights or their counterpart, the individual-inverse-probability (IIP) weights, for the $DR$ pairs. In the following part of this paper we will use the abbreviation PIP+ANG to refer to the overall weighting outlined in eq. \eqref{eq:pip+ang}. The angular weights derived in \cite{percival17} correct for the geometrical selection given by the survey targeting strategy. Because fibre assignment is independent of the properties upon which one normally selects sub-samples, eq. \eqref{eq:ang_weights} can therefore be applied to any sub-sample of the parent catalogue selected e.g. by colour or redshift. 
This is demonstrated by \cite{mohammad18}, where angular weights derived using the VIPERS parent sample were successfully applied to correct for fibre assignment for sub-samples selected in two different redshift bins. Angular up-weighting in the framework of the generalized inverse-probability weighting is discussed in \cite{bianchi19}. \begin{figure} \centering \includegraphics[scale=0.09]{plots/tiling_stats_LRG.pdf} \caption{The distribution of the fraction of targets that get a fibre among 1860 fibre assignment runs on the LRG DR16 catalogue. The vertical red dashed line shows the mean of the distribution while the vertical shaded band represents the standard deviation. The vertical blue line shows the fraction of targets that received a fibre for the actual eBOSS observation.} \label{fig:tiling_stat_lrg} \end{figure} \begin{figure} \centering \includegraphics[scale=0.09]{plots/tiling_stats_QSO.pdf} \caption{As in Fig. \ref{fig:tiling_stat_lrg}, here for the catalogue of quasars.} \label{fig:tiling_stat_qso} \end{figure} \begin{figure} \centering \includegraphics[scale=0.09]{plots/tiling_stats_ELG.pdf} \caption{As in Fig. \ref{fig:tiling_stat_lrg}, here for the catalogue of emission-line galaxies.} \label{fig:tiling_stat_elg} \end{figure} \subsection{Survey Realisations}\label{sec:survey_realizations} To infer realistic selection probabilities, we generate multiple random realisations of the survey that are statistically equivalent to the actual observations. The eBOSS tiling algorithm resolves collisions between targets in a random fashion by means of a random number generator, except for targets in collision groups with more than two objects. For targets within these collision groups, the algorithm performs a procedure designed to optimise the number of fibres allocated to targets \citep{blanton03}. As we will see later, this causes some problems for the PIP weights close to the fibre-collision scale, as a result of there being zero-probability pairs. 
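In code, the pairwise weight of eq. \eqref{eq:pip} amounts to a population count of the bitwise AND of the two bitwise weights; a minimal Python sketch, with plain integers standing in for the packed bit arrays used in practice:

```python
N_RUNS = 8  # number of fibre-assignment realisations (illustrative)

def pip_weight(wb_m, wb_n, n_runs=N_RUNS):
    """Pairwise-inverse-probability weight, eq. (pip):
    N_runs / popcnt(w_m & w_n). Each bitwise weight is an integer whose
    b-th bit is 1 if the target received a fibre in realisation b."""
    return n_runs / bin(wb_m & wb_n).count("1")

def iip_weight(wb_m, n_runs=N_RUNS):
    """Individual-inverse-probability weight: the n = m case of eq. (pip)."""
    return pip_weight(wb_m, wb_m, n_runs)
```

Note that `bin(x).count("1")` is the population count; on Python 3.10+ one can use `x.bit_count()` instead. The zero-truncated estimator guarantees the denominator is non-zero for every observed pair, since the actual observation is one of the realisations.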
However, these effects are significantly below the noise level for any single realisation of the survey. We generate multiple survey realisations by re-running the eBOSS fibre-assignment algorithm many times, changing the random seed used to initiate the random number generator each time. However, the random seed is hard-coded in a large IDL-based software package that consists of several codes that optimise the fibre assignment and run sanity tests. Furthermore, the default setup only allows a single survey realisation to be generated at a time. With a processing time of $\sim1$-$2\,\mathrm{hr}$ per realisation, this presents the main limitation for our purposes, where thousands of survey realisations are needed for the real datasets and for a relatively large number of mock samples. We have thus modified the relevant components of the eBOSS tiling software package in order to generate multiple survey realisations simultaneously by running it in parallel on a multi-processor computer cluster. In order to ease this task for future surveys we suggest implementing target selection algorithms in well-documented packages written in a fast and widely used programming language. In re-running the fibre assignment, we keep the spatial distribution of tiles across all fibre-assignment runs fixed, matching the distribution used in eBOSS observations. The accuracy of the selection probabilities calculated in this way depends on the number of survey realisations used. We use a total of 1860 fibre assignment runs to infer the selection probabilities when using eBOSS DR16 catalogues. The first of these runs corresponds to that used for the actual eBOSS observations. The outcome of these runs is stored in bitwise weights that are then used to compute the PIP weights and correct the pair counts following eq. \eqref{eq:pip}-\eqref{eq:pip+ang}. 
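The bookkeeping behind the seeded runs can be sketched as follows: each run contributes one bit per target, and the bits are packed into the bitwise weights. In this Python illustration, `assign_fibres` is a stand-in for the actual eBOSS IDL tiling code, not a model of it:

```python
import random

N_RUNS = 16  # illustrative; 1860 runs were used for the DR16 catalogues

def assign_fibres(n_targets, seed):
    """Stand-in for one run of the fibre-assignment pipeline: returns one
    boolean per target (got a fibre or not) for the given random seed."""
    rng = random.Random(seed)
    return [rng.random() < 0.8 for _ in range(n_targets)]

def bitwise_weights(n_targets, n_runs=N_RUNS):
    """Pack the outcomes of n_runs seeded realisations into one integer
    bitmask per target: bit b is 1 if the target got a fibre in run b."""
    wb = [0] * n_targets
    for b in range(n_runs):
        got_fibre = assign_fibres(n_targets, seed=b)
        for i, ok in enumerate(got_fibre):
            if ok:
                wb[i] |= 1 << b
    return wb
```

Fixing the seed per run makes the whole ensemble reproducible, which is what allows the runs to be distributed across a cluster and merged afterwards.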
Figures \ref{fig:tiling_stat_lrg}, \ref{fig:tiling_stat_qso} and \ref{fig:tiling_stat_elg} show the distribution of the fraction of targets that receive a fibre among these 1860 survey realisations (red histograms) and highlight the specific fibre assignment realisation that was used for eBOSS observations (vertical blue lines). The run used for eBOSS spectroscopic observations (vertical blue line) shows a fraction of targets with a fibre that is higher than the mean of the distribution, but within 1-$\sigma$ of it. For individual mocks the number of fibre assignment runs is limited to 310, a fair compromise between the accuracy of the selection probabilities and the computation time. As shown in Fig. \ref{fig:tiling_qso}, a small fraction of plates were not observed even though they were included in the process of fibre assignment. Targets that were assigned a fibre from one of the unobserved plates thus contribute to lowering the survey completeness on the edges of the observed area. To account for this correctly, we run the fibre assignment using the full set of tiles and, once the fibres are assigned, we flag all targets that get a fibre from one of the unobserved plates as missed. \subsection{Adding Contaminants}\label{sec:contaminants} Raw mocks were built to reproduce the clustering of the eBOSS DR16 LSS catalogues. However, they significantly differ from the real data catalogues, used in the survey tiling, in the radial selection function and number density. We modify the raw mocks to make them as realistic as possible and match the features of the data catalogues as follows: \begin{enumerate} \item all objects in the raw mocks are flagged as eBOSS clustering targets. 
In particular, LRGs and QSOs are assigned \verb EBOSS_TARGET1 \ bits \verb LRG1_WISE \ and \verb QSO1_EBOSS_CORE, respectively, while ELGs are flagged with \verb EBOSS_TARGET2 \ bits \verb ELG1_NGC \ (in NGC) and \verb ELG1_SGC \ (in SGC); \item mocks are then down-sampled to match the radial distribution of eBOSS clustering targets. In order to replicate the number density of targets we match the number of mock targets to the weighted number of eBOSS targets in narrow redshift bins; \item eBOSS data targets with unknown redshifts or redshifts outside the mock redshift range are added to each mock; \item spectroscopically confirmed stars initially misidentified as eBOSS targets in the data are added as contaminants to the mock catalogues; \item the lack of mock targets in \textit{a posteriori} vetoed regions, i.e. vetoed after survey tiling and spectroscopic observations, is compensated for by adding targets from the eBOSS data in these regions to the mocks; \item a set of ancillary targets were observed simultaneously with eBOSS targets. When running the fibre assignment on mocks we supplement the mock catalogues resulting from steps (i)-(v) with the same ancillary target sample used in real observations. However, in the data catalogues there is a strong correlation between ancillary targets and the QSO catalogue. Namely, $\sim 50\%$ of ancillary targets are within less than $2\arcsec$ of a quasar. These cases were treated as duplicates in eBOSS observations and they lower the number of fibres required to target the quasar and ancillary samples. In mocks this fraction decreases to below $\sim 10\%$ since the correlation occurs mainly between ancillary targets and the data targets that are added to mocks as contaminants in (iii)-(v). This has the effect of leaving too few fibres for mock LRG targets, thereby degrading the completeness of the LRG sample, which is targeted at a lower priority. 
To improve the completeness of the mock LRG samples we randomly down-sample $\sim50\%$ of ancillary targets in chunks used for targeting LRGs and QSOs; \item finally, eBOSS `clustering' quasars that have known redshifts are not re-observed in eBOSS. These constitute $\sim 21\%\ (13\%)$ of the eBOSS \verb QSO1_EBOSS_CORE sample in the NGC (SGC) over the redshift range covered by mocks. We thus flag the same fractions of mock quasars as known. As such, these objects are not candidates to receive a fibre. \end{enumerate}\label{en:mock_mod} \begin{figure} \centering \includegraphics[scale=0.09]{plots/Nz_emock_data_ELG-LRG-QSO.pdf} \caption{Weighted number of eBOSS DR16 LRG (red histogram), QSO (green histogram) and ELG (blue histogram) targets as a function of redshift $z$. The same quantities from 100 mocks are shown by the grey lines, although they are not visually distinguishable from the coloured lines due to the tiny difference between the data and mock $N(z)$. Mock and data $N(z)$ histograms perfectly overlap at redshifts not covered by the mock catalogues, as all targets at these redshifts are taken from the data catalogues (see Sec. \ref{sec:tests}).}\label{fig:Nz_lrg-qso} \end{figure} The redshift distribution of the modified mocks (not accounting for the contaminants added in steps (iii)-(v)) is shown in Fig. \ref{fig:Nz_lrg-qso} along with the same quantity from the eBOSS DR16 catalogues. The total number of targets in mocks differs, on average, from that in the data catalogues by less than $\sim 5\%$. We implement steps (i)-(vii) in order to make the fibre collisions in the mocks as close as possible to those in the real data. However, we are limited by the fact that EZmock catalogues are built using approximate prescriptions and are not based on the outcome of a full N-body simulation. As such, it is inevitable that mocks and data present different small-scale clustering and consequently different collision rates. 
Nevertheless, this does not affect our conclusions, as our tests are designed to demonstrate that we can recover the correct small-scale clustering without bias from the fibre collision issue. We therefore need to compare the clustering recovered from the mocks to the true clustering of the mocks, rather than to that of the data. It is worth noting that the three tracers show different small-scale clustering from each other as well as from the data, providing breadth to the tests performed. Finally, we process the mocks obtained from steps i-vii through the eBOSS tiling code to implement a realistic fibre assignment. In running the fibre assignment on mock samples we follow the same procedure described in Sec. \ref{sec:tiling_code}. In particular, the full survey area is split into multiple chunks, identical to those used for the eBOSS data catalogues, and we use the eBOSS tiered-priority system \citep{dawson16} to solve collisions between different types of targets. In the following we will refer to mocks resulting from steps i-ii as ``parent'' mocks, while we will use the term ``spectroscopic'' mocks to refer to the corresponding sub-samples of mock targets that were assigned a fibre. We do not include systematics other than fibre collisions in our parent and spectroscopic mocks. This choice is motivated by the fact that issues such as systematics in the photometric target selection and redshift failures are correlated with different galaxy properties \citep{ross12,scodeggio18,ross20,anand20} and are extremely difficult to reproduce accurately in simulated datasets. We therefore set $w_{\rm{sys}}=w_{\rm{noz}}=1$. The FKP weights are computed as $w_{\rm{FKP}} = 1/(1+\bar{n}(z)P_0)$, where $\bar{n}(z)$ is the mean number density of mock targets and $P_0=4000\,h^{-3}{\rm Mpc^3}$ for ELGs \citep{anand20}, $P_0=6000\,h^{-3}{\rm Mpc^3}$ for QSOs and $P_0=10000\,h^{-3}{\rm Mpc^3}$ for LRGs \citep{ross20}. We infer the PIP weights through 310 fibre assignment runs on each mock.
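The per-pair PIP weights built from the 310 realisations can be illustrated schematically: each object carries a bit field with one bit per fibre-assignment run, and a pair is up-weighted by the inverse of the fraction of runs in which both members received a fibre. The function below is an illustrative sketch, not the production implementation.

```python
N_RUNS = 310  # independent fibre-assignment realisations per mock

def pip_pair_weight(mask_i, mask_j):
    """PIP weight of a pair of objects: the total number of runs divided
    by the number of runs in which both received a fibre. The masks are
    Python integers used as bit fields, one bit per realisation."""
    n_both = bin(mask_i & mask_j).count("1")
    if n_both == 0:
        return None  # zero-probability pair: PIP alone cannot recover it
    return N_RUNS / n_both

# toy bit fields: object a selected in every run, object b in half of them
mask_a = (1 << N_RUNS) - 1
mask_b = int("01" * (N_RUNS // 2), 2)
```

Pairs that are never observed together in any realisation, the zero-probability pairs of single-pass regions, are exactly the ones the angular up-weighting is designed to recover.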
The difference in the small-scale clustering between mocks and data also affects the implementation of the angular up-weighting for the mock catalogues. The key requirement for the angular up-weighting to be unbiased is that the sample of observed targets (labelled -fib in eq. \eqref{eq:pip+ang}), used for the clustering measurements, is statistically equivalent to the parent catalogue (labelled -par in eq. \eqref{eq:pip+ang}). In the case of mocks the `spectroscopic' mock sample is significantly different, in terms of small-scale clustering, from the mock catalogue used for fibre assignment, since the latter contains contaminants from eBOSS target catalogues added in steps iii-v. We therefore use mocks obtained from steps i-ii as the reference parent samples (discarding any target added from the eBOSS target catalogue) to compute quantities labelled with -$\rm{par}$ and the corresponding spectroscopic mock samples as the targeted samples to compute quantities labelled with -$\rm{fib}$ in eq. \eqref{eq:pip+ang}. The goal of this section is to perform robustness tests for the novel PIP and PIP+ANG up-weighting scheme. We present an indirect test of the validity of the standard CP correction for fibre collisions adopted in eBOSS in Sec. \ref{sec:results}, where a comparison between the PIP, PIP+ANG and CP correction schemes is presented on the eBOSS DR16 LSS catalogues. \subsection{Effect of PIP and Angular Up-Weighting} \label{sec:ang_discussion} \begin{figure} \centering \includegraphics[scale=0.10]{plots/w_ang_mocks.pdf} \caption{Angular weights used to up-weight $DD$ pair counts (eq. \ref{eq:pip+ang}) in the clustering measurements for luminous red galaxies (red), quasars (green) and emission-line galaxies (blue). Left and right panels display the weights for north (NGC) and south (SGC) galactic caps. Markers show the mean estimate from 100 EZmocks with error-bars being the error on the mean.
Lines show the counterparts from DR16 catalogues with shaded bands showing the related errors on a single realisation obtained from 100 EZmocks.}\label{fig:w_ang} \end{figure} \begin{figure} \centering \includegraphics[scale=0.095]{plots/dd-rr_ratio_wp_LRG_100.pdf} \caption{Ratio between $DD$ and $RR$ pair counts, averaged over 100 LRG EZmocks in the SGC. Each panel displays the ratio as a function of the radial component $\pi$ of the pair separation, in a single bin of the transverse component $r_p$ of the pair separation. The $DD/RR$ ratio at two $r_p$ scales below the fibre collision scale is shown in the top panels, while the same quantity at two values of $r_p$ larger than the fibre collision scale is plotted in the bottom panels. Blue lines show the mean estimate from 100 parent mocks while blue filled and red empty markers show the result of the PIP and PIP+ANG corrections applied to the corresponding mocks affected by fibre collisions. Shaded bands and error bars show the errors on the mean estimates.}\label{fig:ratio_dd-rr} \end{figure} We discuss here the impact that the PIP and angular up-weighting are expected to have on the clustering measurements presented in Sec. \ref{sec:tests} and Sec. \ref{sec:results} for mocks and eBOSS DR16 catalogues, respectively. The PIP up-weighting provides un-biased pair counts only if there are no pairs with zero probability of being targeted in a random realisation of the survey. In eBOSS, zero-probability pairs do originate from fibre collisions in single-pass regions. These zero-probability pairs, however, are confined to angular separations below the fibre-collision angle $\theta^{\mathrm{fc}}=62\arcsec$. At these separations the PIP weighting properly up-weights pairs in the overlaps between multiple tiles but misses those in areas covered by a single tile.
At angular separations $\theta<\theta^{\mathrm{fc}}$ we therefore expect the PIP up-weighting to under-estimate the pair counts inferred from the spectroscopic sample with respect to those from the parent catalogue. The PIP up-weighting provides virtually un-biased pair counts at separations $\theta>\theta^{\mathrm{fc}}$, where no or very few zero-probability pairs (those resulting from an optimisation of the fibre assignment in collision groups with more than two objects) are expected. The effect is quantified by means of the angular weights $\wdd$ in eq. \eqref{eq:pip+ang} used to correct the $DD$ pair counts and shown in Fig. \ref{fig:w_ang} for luminous red galaxies (red), quasars (green) and emission-line galaxies (blue). Indeed, in all cases $\wdd$ is greater than unity for separations below $\theta^{\mathrm{fc}}$ and sharply reaches unity at larger angular separations $\theta$. A difference is noticeable between the NGC (left panel in Fig. \ref{fig:w_ang}) and the SGC (right panel in Fig. \ref{fig:w_ang}) due to the different tiling density of the two caps. For LRGs and QSOs, for example, tiles in the SEQUELS chunks in the NGC are more tightly packed than those in the eBOSS chunks. This increases the fraction of the area covered by more than one tile, decreasing the fraction of zero-probability pairs, which in turn decreases $\wdd$. The angular weights $\wdd$ for LRGs tend to have significantly lower amplitudes in the EZmocks (markers with error-bars) than in the eBOSS DR16 catalogue. This results from the lower intrinsic clustering of targets in the mocks with respect to the eBOSS data, combined with the fact that LRGs are targeted at the lowest priority. For quasars and emission-line galaxies the difference between mocks and data in Fig. \ref{fig:w_ang} is significantly smaller.
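The behaviour of $\wdd$ described above can be mimicked with a toy calculation: the weight in each angular bin is the ratio of parent-sample pair counts to PIP-weighted pair counts from the fibre-assigned sample. The names and the numbers below are illustrative only, not eBOSS measurements.

```python
import numpy as np

def angular_upweight(theta_par, theta_fib, w_fib, theta_edges):
    """w_DD(theta): parent pair counts divided by PIP-weighted pair counts
    of the fibre-assigned sample, in bins of angular separation (deg)."""
    dd_par, _ = np.histogram(theta_par, bins=theta_edges)
    dd_fib, _ = np.histogram(theta_fib, bins=theta_edges, weights=w_fib)
    # bins with no fibre-assigned pairs get a weight of 1 by convention
    return np.where(dd_fib > 0, dd_par / np.where(dd_fib > 0, dd_fib, 1.0), 1.0)

# toy pairs: half of the sub-collision pairs are lost to fibre collisions
theta_fc = 62.0 / 3600.0                        # 62 arcsec in degrees
edges = np.array([0.0, theta_fc, 1.0])
theta_par = np.array([0.005, 0.010, 0.5, 0.7])  # parent-sample separations
theta_fib = np.array([0.005, 0.5, 0.7])         # one close pair is missing
w_fib = np.ones_like(theta_fib)
w_dd = angular_upweight(theta_par, theta_fib, w_fib, edges)
```

In this toy case $\wdd=2$ below $\theta^{\mathrm{fc}}$ and 1 above it, qualitatively reproducing the shape in Fig. \ref{fig:w_ang}.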
In order to assess how the PIP and PIP+ANG corrections impact the measurements of the projected correlation function or the multipole moments $\xi^{(\ell)}$ at scales below the fibre collision scale, it is useful to work with the anisotropic two-point correlation function $\xi^s(r_p,\pi)$, measured as a function of the parallel ($\pi$) and transverse ($r_p$) components of the pair separation with respect to the line of sight, in terms of its natural estimator, \begin{equation} \xi(\mathbf{s}) = \frac{DD(\mathbf{s})}{RR(\mathbf{s})}-1. \label{eq:Nt} \end{equation} Unlike the angular separation, the transverse scale that corresponds to the fibre-collision angle $\theta^{\mathrm{fc}}$ in the fiducial cosmology varies with redshift. We denote with $r_p^{\mathrm{fc}}$ the transverse scale spanned by $\theta^{\rm{fc}}$ at the maximum redshift of the sample. The PIP-corrected $DD(r_p,\pi)$ pair counts are then expected to be negatively biased at any $\pi$ for $r_p<r_p^{\rm{fc}}$, which in turn results in an under-estimation of the ratio $DD/RR$, and thus of the two-point correlation function, with respect to the reference one. At scales $r_p>r_p^{\rm{fc}}$ all pairs are at angular separations $\theta>\theta^{\rm{fc}}$, a regime where PIP corrections are un-biased, and we therefore expect $DD(r_p,\pi)$ corrected using PIP weights, and thus also the anisotropic two-point correlation function $\xi^s(r_p,\pi)$, to match its value from the reference parent sample. This is illustrated in Fig. \ref{fig:ratio_dd-rr} for luminous red galaxies using 100 EZmocks, where we can compare the PIP up-weighted measurements with the ones from the parent samples. In Fig. \ref{fig:ratio_dd-rr} we show the ratio $DD(r_p,\pi)/RR(r_p,\pi)$ from eq. \eqref{eq:Nt} at two transverse scales smaller (top panels) and two larger (bottom panels) than $r_p^{\rm{fc}}\sim0.7\,h^{-1}{\rm Mpc}$.
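The mechanism at play in Fig. \ref{fig:ratio_dd-rr} can be reproduced with toy pair counts: deflating the $DD$ counts below $r_p^{\rm{fc}}$ biases the natural estimator low, and multiplying back by $\wdd$ restores it. All numbers here are illustrative, not taken from the mocks.

```python
import numpy as np

RP_FC = 0.7  # h^-1 Mpc, approximate transverse fibre-collision scale

def xi_natural(dd, rr):
    """Natural estimator xi = DD/RR - 1 (counts already normalised)."""
    return dd / rr - 1.0

rp = np.array([0.3, 0.5, 2.0, 5.0])         # transverse bin centres
dd_true = np.array([40.0, 30.0, 8.0, 4.0])  # parent-sample DD counts
rr = np.array([4.0, 5.0, 4.0, 4.0])
# PIP-only counts miss the zero-probability pairs below the collision scale
dd_pip = np.where(rp < RP_FC, 0.5 * dd_true, dd_true)
w_dd = dd_true / dd_pip                     # angular up-weight per bin
xi_pip = xi_natural(dd_pip, rr)
xi_pipang = xi_natural(dd_pip * w_dd, rr)
```

Below $r_p^{\rm{fc}}$ the PIP-only estimate sits below the truth, while the up-weighted counts recover it exactly; above $r_p^{\rm{fc}}$ the two corrections coincide.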
As expected, the PIP correction strongly underestimates the $DD/RR$ ratio, or equivalently the $DD$ pair counts, for $\bar{r_p}<0.7\,h^{-1}{\rm Mpc}$, where the angular up-weighting is needed to properly account for zero-probability pairs. At transverse scales $\bar{r_p}>0.7\,h^{-1}{\rm Mpc}$, on the other hand, angular up-weighting has a negligible effect as $\wdd\sim1$ and the PIP and PIP+ANG up-weightings are both un-biased. \subsection{Projected Correlation Function} \label{sec:wp_mocks} \begin{figure} \centering \includegraphics[scale=0.09]{plots/wp_ang_input_ezmocks_eBOSS-SEQUELS_LRG_100.pdf} \caption{Projected correlation function of LRG EZmocks built as described in Sec. \ref{sec:tests} in the two caps, NGC (left panels) and SGC (right panels). Top panels: mean estimate from 100 parent mocks (continuous lines), from catalogues affected by fibre collisions corrected using the PIP-only weighting (empty markers with dashed error-bars) and corrected using the combined PIP and angular up-weighting (PIP+ANG, filled points with continuous error-bars). The blue shaded bands and error bars show the error on the mean. The vertical red shaded bands show the transverse scales in the EZmocks fiducial cosmology corresponding to the fibre-collision angle between the minimum and maximum redshift of the sample. For separations $r_p$ larger than the fibre-collision scale the PIP-only and the joint PIP+ANG corrections provide almost identical results, so the empty (PIP) and filled (PIP+ANG) markers at these scales cannot be easily distinguished. Empty markers at scales smaller than the fibre-collision scale are not visible in the plot because they are well below the minimum limit set on the y axis. Bottom panels: mean of the differences between the corrected measurements from mocks affected by fibre collisions and the corresponding parent mock. To reduce the range of variation, each quantity in the bottom panel is multiplied by $r_p$.
For comparison, in the bottom panels that show the differences, the red continuous lines and hatched regions (RAW) show the mean measurements from spectroscopic mocks and the related errors in the case where no correction for fibre collisions is applied.}\label{fig:wp_lrg_mocks} \end{figure} \begin{figure} \centering \includegraphics[scale=0.10]{plots/wp_ang_input_ezmocks_eBOSS-SEQUELS_QSO_100.pdf} \caption{Same as in Fig. \ref{fig:wp_lrg_mocks}, here for the QSO EZmocks.}\label{fig:wp_qso_mocks} \end{figure} \begin{figure} \centering \includegraphics[scale=0.10]{plots/wp_ang_input_ezmocks_ELG_100.pdf} \caption{Same as in Fig. \ref{fig:wp_lrg_mocks}, here for the ELG EZmocks.}\label{fig:wp_elg_mocks} \end{figure} Mean estimates of the projected correlation function $w_p(r_p)$ and the corresponding statistical errors from a set of 100 mocks are shown in the top panels of Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks} for the three tracers. The bottom panels of Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks} show the difference between measurements from the spectroscopic (see Sec. \ref{sec:contaminants}) and the parent mocks. In the bottom panels of Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks} we also report the case where no correction for fibre collisions or survey completeness is applied to the measurement from the spectroscopic mocks (red continuous lines with hatched error regions). Given the sparsity of the quasar sample and the fact that it is targeted at higher priority, missing observations have a negligible effect on the clustering of quasars (see Fig. \ref{fig:wp_qso_mocks}) on scales larger than the fibre-collision scale, where missing targets resemble a random depletion. The raw estimate of the correlation function of luminous red galaxies (see Fig.
\ref{fig:wp_lrg_mocks}), on the other hand, shows a negligible offset with respect to the reference one on scales $\sim1$-$10\,h^{-1}{\rm Mpc}$ and scales $\gtrsim100\,h^{-1}{\rm Mpc}$, while an offset at the $\sim2\sigma$ level is clearly visible on scales between $\sim30$-$100\,h^{-1}{\rm Mpc}$. Raw measurements of the projected correlation function of the emission-line galaxy mocks in Fig. \ref{fig:wp_elg_mocks} show an overall agreement with the clustering of the parent samples, with a marginal offset at scales of $\sim10\,h^{-1}{\rm Mpc}$. These raw measurements show that, although fibre collisions have a limited impact compared to the statistical errors, their strength strongly depends on the intrinsic clustering of the tracers and on the features of the particular fibre-assignment algorithm adopted. Measurements from spectroscopic mocks, corrected using the PIP technique and averaged over 100 mocks, are shown as empty markers in the top panels of Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks} along with the reference measurements from the parent mocks (continuous lines). For all three tracers the agreement between the PIP-only corrected and reference measurements is remarkable for transverse scales larger than $\sim 1\,h^{-1}{\rm Mpc}$ (bottom panels in Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks}). However, PIP weighting fails to recover the input clustering at transverse scales $r_p$ below the fibre-collision scale (vertical red shaded bands in Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks}). The interpretation of the systematic offset at these scales, when PIP-only up-weighting is applied, follows from the discussion in Sec. \ref{sec:ang_discussion}. In particular, in the limit of small $r_p$ and small $\pi$, where targets are strongly clustered, the anisotropic correlation function $\xi^s(r_p,\pi)$ is well approximated by $DD/RR$ and the PIP-corrected $\xi^s(r_p,\pi)$ is offset by a factor of $\wdd$ with respect to the reference.
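For reference, the projected correlation function is obtained by integrating $\xi^s(r_p,\pi)$ along the line of sight (eq. \ref{eq:wp}); a schematic discrete version with toy inputs, where the upper integration limit of $80\,h^{-1}{\rm Mpc}$ matches the one quoted in the text:

```python
import numpy as np

PI_MAX = 80.0  # h^-1 Mpc, upper limit of the line-of-sight integration

def projected_cf(xi_rp_pi, dpi):
    """w_p(r_p) = 2 * sum over pi bins of xi(r_p, pi) * dpi, a discrete
    version of the line-of-sight integral up to PI_MAX."""
    return 2.0 * xi_rp_pi.sum(axis=1) * dpi

# toy xi(r_p, pi): two r_p bins, pi bins of unit width up to PI_MAX
pi = np.arange(int(PI_MAX)) + 0.5
xi = np.vstack([np.exp(-pi / 5.0), np.exp(-pi / 10.0)])
wp = projected_cf(xi, dpi=1.0)
```

Because the integrand tends to zero at large $\pi$, any small residual bias in $\xi^s(r_p,\pi)$ there still propagates linearly into $w_p(r_p)$, which is the effect discussed next.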
At small $r_p$ and large $\pi$, specifically between $\pi\sim50\,h^{-1}{\rm Mpc}$ and the upper limit of the integral in eq. \eqref{eq:wp}, fixed at $80\,h^{-1}{\rm Mpc}$, i.e. in the regime of weak intrinsic clustering, $DD/RR$ approaches unity and $\xi^s(r_p,\pi)$ tends to 0. At these scales, the underestimation of the PIP up-weighted $DD$ pair counts reduces the corresponding $\xi^s(r_p,\pi)$ to very small or negative values, and the scaling by $\wdd$ between the PIP-corrected and reference $\xi^s(r_p,\pi)$ is no longer valid. This drives the PIP-corrected projected correlation function $w_p(r_p)$, obtained by integrating $\xi^s(r_p,\pi)$ in eq. \eqref{eq:wp}, to values below the lower limit of the y-axis shown in the top panels of Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks}, enhancing the relative difference between the PIP up-weighted and reference measurements well above the factor of $\wdd$. The angular up-weighting uses the fraction of close pairs lost in single-pass regions to restore the $DD/RR$ ratio at small transverse scales $r_p$ to its expected value. As a result, when PIP weights are supplemented with the angular up-weighting (filled markers with error-bars in Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks}), we are able to successfully recover the clustering signal down to very small scales of $\sim0.1\,h^{-1}{\rm Mpc}$ without altering the large-scale measurements. It is important to stress that, as opposed to the raw measurements, the performance of both the PIP and the joint PIP and angular up-weighting does not vary with the type of tracer. As anticipated in Sec. \ref{sec:survey_realizations}, the optimisation performed by the fibre assignment algorithm within the collision groups can give rise to `zero-probability' pairs at the scale of fibre collisions. This is likely to be the source of the small deviation in the PIP+ANG corrected measurements at the close-pair scales (vertical red shaded bands) seen in Fig.
\ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks} with respect to the reference. The effect, stronger for the LRGs in Fig. \ref{fig:wp_lrg_mocks}, is well within the statistical error for a single realisation and becomes evident only when averaged over a large number of samples. We therefore do not consider this further: as demonstrated in the plots it is small and limited to a narrow range of scales close to the collision scale. \subsection{Multipoles}\label{sec:mps_mocks} \begin{figure} \centering \includegraphics[scale=0.10]{plots/mps_ang_ezmocks_eBOSS-SEQUELS_LRG_100.pdf} \caption{Multipoles of the redshift-space two-point correlation functions from 100 EZmocks of luminous red galaxies. In all panels, measurements are averaged over 100 mocks and the error-bars correspond to the error on the mean. Top row: mean measurements from the reference parent mock catalogues (lines with shaded bands) and from the corresponding samples of targeted objects that are corrected using PIP-only (empty markers with dashed error-bars) and combined PIP and angular (PIP+ANG, filled markers with thick error-bars) weighting schemes. Bottom three panels: mean difference between the raw measurements (grey dotted lines and hatches), PIP (empty markers), PIP+ANG (filled markers) corrections and the reference measurements.}\label{fig:mps_lrg_mocks} \end{figure} \begin{figure} \centering \includegraphics[scale=0.10]{plots/mps_ang_ezmocks_eBOSS-SEQUELS_QSO_100.pdf} \caption{Same as in Fig. \ref{fig:mps_lrg_mocks}, here for the quasar mock samples.}\label{fig:mps_qso_mocks} \end{figure} \begin{figure} \centering \includegraphics[scale=0.10]{plots/mps_ang_ezmocks_ELG_100.pdf} \caption{Same as in Fig. \ref{fig:mps_lrg_mocks}, here for the mock samples of emission-line galaxies.}\label{fig:mps_elg_mocks} \end{figure} We now test the corrections for the multipoles of the redshift-space two-point correlation function.
We limit the tests to the first three even multipoles, namely the monopole, quadrupole and hexadecapole, which are most commonly used to detect and model redshift-space distortions and baryon acoustic oscillations in large galaxy redshift surveys. Results are shown in Fig. \ref{fig:mps_lrg_mocks}-\ref{fig:mps_elg_mocks}, and match the results presented for the projected correlation functions in Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks}. Differences between `raw' measurements from spectroscopic mocks, not corrected for fibre collisions or survey completeness, and the reference measurements from the parent mocks are shown in the bottom panels of Fig. \ref{fig:mps_lrg_mocks}-\ref{fig:mps_elg_mocks} (dashed lines with hatched error regions). Raw measurements tend to under-estimate the reference clustering up to scales of $\sim10$-$20\,h^{-1}{\rm Mpc}$ for all tracers, although the bias is more severe for luminous red galaxies due to their higher clustering and to their being targeted at the lowest priority. At larger scales a mild bias appears for luminous red galaxies, while it is not clearly visible for quasars and emission-line galaxies due to the relatively large statistical errors at these scales. The PIP corrections are un-biased on scales larger than $\sim10\rm{-}20\,h^{-1}{\rm Mpc}$, recovering the input clustering signal to within the $\sim1\sigma$ errors. However, they systematically under-estimate the clustering at smaller scales. The nature of the systematic bias in the PIP-corrected measurements of the multipole moments $\xi^{(\ell)}$ is the same as that discussed in Sec. \ref{sec:ang_discussion} and in Sec. \ref{sec:wp_mocks} for the projected correlation function. However, the effect, confined to small transverse scales $r_p$ in the anisotropic $\xi^s(r_p,\pi)$ and projected $w_p(r_p)$ correlation functions, is now spread to all angle-averaged separations $s$ in the multipoles $\xi^{(\ell)}$.
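The multipole moments discussed here are the Legendre projections of $\xi(s,\mu)$; a minimal numerical sketch, using a synthetic $\xi(s,\mu)$ at a single scale $s$ that contains only monopole and quadrupole power, illustrates the projection (the grid and amplitudes are illustrative only):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def multipole(xi_s_mu, mu, ell):
    """xi_ell(s) = (2*ell+1)/2 * integral over mu in [-1, 1] of
    xi(s, mu) * L_ell(mu), evaluated with the trapezoidal rule."""
    f = xi_s_mu * Legendre.basis(ell)(mu)
    return 0.5 * (2 * ell + 1) * np.sum((f[:-1] + f[1:]) * np.diff(mu)) / 2.0

# synthetic xi(s, mu) at one scale s: xi0 = 3, xi2 = 0.5, xi4 = 0
mu = np.linspace(-1.0, 1.0, 2001)
xi = 3.0 + 0.5 * Legendre.basis(2)(mu)
```

By orthogonality of the Legendre polynomials, the projection recovers the input amplitudes; a bias in $\xi(s,\mu)$ restricted to a narrow $\mu$ interval therefore leaks into every multipole of the same scale $s$.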
At angle-averaged scales smaller than the transverse fibre-collision scale, $s<r_p^{\rm{fc}}$, PIP weighting underestimates the reference $DD(s,\mu)$ pair counts at any value of $\mu$, resulting in a strong negative bias in the measured multipoles. At scales $s\gtrsim r_p^{\rm{fc}}$ the bias is reduced due to the fact that the PIP-corrected $DD(s,\mu)$ pair counts are underestimated, with respect to the reference, only for $\mu^{\rm{fc}}<\mu<1$, with \begin{equation} \mu^{\rm{fc}} = \left[1-\left({r_p^{\rm{fc}}}/{s}\right)^2\right]^{1/2}. \label{eq:mu_fc} \end{equation} At scales $s$ significantly larger than $r_p^{\rm{fc}}$, $\mu^{\rm{fc}}$ approaches unity and the underestimate in the $DD$ pair counts (see Sec. \ref{sec:ang_discussion}) for $\mu\in[\mu^{\rm{fc}},1]$ has a negligible effect on the multipole moments $\xi^{(\ell)}$. The systematic bias at a given scale $s$ is also higher for higher order multipoles. This follows from the $\mu$ dependence of the Legendre polynomials in eq. \eqref{eq:mps}. As for the projected correlation function $w_p(r_p)$, the angular weights properly up-weight the $DD$ pair counts below the fibre collision scale, providing un-biased measurements of the anisotropic correlation function $\xi(s,\mu)$ and its multipoles. The combined PIP+ANG correction is un-biased at all scales explored in this work at a level well below the statistical errors, as shown in the bottom panels of Fig. \ref{fig:mps_lrg_mocks}-\ref{fig:mps_elg_mocks}. \subsection{Projected Correlation Function} \begin{figure} \centering \includegraphics[scale=0.10]{plots/wp_ang_eBOSS-SEQUELS_LRG_v7_2.pdf} \caption{Top panel: measurements of the projected correlation function of the eBOSS DR16 LRG sample using different correction schemes: close-pair weighting (CP, dashed red lines), PIP weighting (PIP, black dashed-dotted line), combined PIP and angular up-weighting (PIP+ANG, blue continuous lines).
The shaded band shows the 1-$\sigma$ errors estimated using 100 mock samples. Bottom panel: difference between the measurements obtained using CP and PIP weighting with respect to measurements using combined PIP and angular up-weighting. The shaded band shows the 1-$\sigma$ statistical error. To reduce the range of variation, each quantity in the bottom panel is multiplied by $r_p$. As in Fig. \ref{fig:wp_lrg_mocks} the vertical shaded red bands show the transverse scales corresponding to the fibre-collision angle between the minimum and maximum redshift of the sample in the eBOSS fiducial cosmology.}\label{fig:wp_lrg_data} \end{figure} \begin{figure} \centering \includegraphics[scale=0.10]{plots/wp_ang_eBOSS-SEQUELS_QSO_v7_2.pdf} \caption{Equivalent of Fig. \ref{fig:wp_lrg_data} for the eBOSS DR16 quasar catalogue.}\label{fig:wp_qso_data} \end{figure} \begin{figure} \centering \includegraphics[scale=0.10]{plots/wp_ang_eBOSS_ELG_v7_2.pdf} \caption{Equivalent of Fig. \ref{fig:wp_lrg_data} for the eBOSS DR16 catalogue of emission-line galaxies.}\label{fig:wp_elg_data} \end{figure} Figures \ref{fig:wp_lrg_data}, \ref{fig:wp_qso_data} and \ref{fig:wp_elg_data} show the measured projected correlation functions from the eBOSS DR16 samples of luminous red galaxies, quasars and emission-line galaxies respectively. Measurements (top panels) are corrected using the standard CP (red dashed lines), PIP (black dash-dotted lines) and the joint PIP and angular up-weighting (PIP+ANG, blue thick lines). In the bottom panels of the same figures we show the difference of the standard CP and PIP corrections with respect to the joint PIP+ANG up-weighting taken as the reference since it is found to be un-biased within data precision over all scales in the tests performed on mock catalogues. 
The PIP up-weighting matches the PIP+ANG correction for scales larger than the fibre-collision scale (vertical red shaded bands), while it significantly underestimates the clustering at smaller scales. The CP corrections perform very similarly to the PIP ones, with deviations consistent with the statistical noise on scales larger than the collision scale. The agreement between the CP and PIP corrections at scales larger than the fibre-collision scale shows that the selection probabilities are largely un-correlated on these scales and can be well approximated using an empirical prescription such as the nearest-neighbour method. This directly follows from the features of the eBOSS fibre assignment algorithm, which uses a random seed to resolve collisions. The strong difference of the CP and PIP corrections with respect to the PIP+ANG weighting below the fibre collision scale (vertical red shaded bands in Fig. \ref{fig:wp_lrg_data}-\ref{fig:wp_elg_data}) is due to the effect discussed in Sec. \ref{sec:ang_discussion} and reflects the trend observed for mocks in Sec. \ref{sec:wp_mocks}. Comparing Fig. \ref{fig:wp_lrg_data}-\ref{fig:wp_elg_data} for the eBOSS DR16 samples with their equivalent for mocks in Fig. \ref{fig:wp_lrg_mocks}-\ref{fig:wp_elg_mocks}, it is clear that the eBOSS DR16 targets show a higher intrinsic clustering at scales below $\sim1\,h^{-1}{\rm Mpc}$. Therefore, collisions are expected to occur at a higher rate in the eBOSS catalogues than in the mocks. This is the source of the larger absolute size of the small-scale difference between the PIP/CP and PIP+ANG weighting schemes in the DR16 data compared to the mocks. \subsection{Multipoles} \begin{figure} \centering \includegraphics[scale=0.10]{plots/mps_ang_eBOSS-SEQUELS_LRG_v7_2.pdf} \caption{Top panel: measurements of the redshift-space multipole correlation functions of the eBOSS DR16 LRG sample corrected using the combined PIP and angular up-weighting.
The shaded band shows 1-$\sigma$ errors estimated using 100 mock samples. Bottom panels: difference of the measurements obtained using CP and PIP weighting with respect to the one using combined PIP and angular up-weighting. The shaded bands show the 1-$\sigma$ statistical error from the top panels.}\label{fig:mps_lrg_data} \end{figure} \begin{figure} \centering \includegraphics[scale=0.10]{plots/mps_ang_eBOSS-SEQUELS_QSO_v7_2.pdf} \caption{Same as in Fig. \ref{fig:mps_lrg_data}, here for the eBOSS DR16 quasar catalogue.}\label{fig:mps_qso_data} \end{figure} \begin{figure} \centering \includegraphics[scale=0.10]{plots/mps_ang_eBOSS_ELG_v7_2.pdf} \caption{Same as in Fig. \ref{fig:mps_lrg_data}, here for the eBOSS DR16 catalogue of emission-line galaxies.}\label{fig:mps_elg_data} \end{figure} In Fig. \ref{fig:mps_lrg_data}, \ref{fig:mps_qso_data} and \ref{fig:mps_elg_data} we show the measurements of the redshift-space multipole correlation functions for the eBOSS DR16 samples of luminous red galaxies, quasars and emission-line galaxies, respectively. The top panels show only the measurements performed using our reference PIP+ANG up-weighting. In the bottom panels we plot the deviations of the CP and PIP corrections with respect to the measurements that use the joint PIP and angular up-weighting. As for the projected correlation functions shown in Fig. \ref{fig:wp_lrg_data}-\ref{fig:wp_elg_data}, CP and PIP corrections provide similar results with discrepancies consistent with statistical uncertainties. The difference of the CP and PIP up-weighting with respect to the joint PIP and angular corrections appears to be significant at scales smaller than $\sim10\,h^{-1}{\rm Mpc}$. Such a difference is due to the presence of zero-probability pairs at transverse separations below the fibre-collision scale as discussed in Sec. \ref{sec:ang_discussion}. These results are consistent with the measurements from the mock catalogues discussed in Sec. \ref{sec:mps_mocks}. 
At scales below $\sim10\,h^{-1}{\rm Mpc}$ the CP and PIP up-weighting perform better for the quasar catalogue and the emission-line galaxies than for the sample of luminous red galaxies. This is expected, since the luminous red galaxies show a higher small-scale clustering compared to the quasars and emission-line galaxies (see Fig. \ref{fig:wp_lrg_data}-\ref{fig:wp_elg_data}). \section{Introduction} \input{1-introduction.tex} \section{Survey}\label{sec:survey} \input{2-survey} \section{Random Catalogues}\label{sec:randoms} \input{3-randoms} \section{Methodology}\label{sec:method} \input{4-methodology} \section{Validation using mock catalogues}\label{sec:tests} \input{5-tests_mocks} \section{Results}\label{sec:results} \input{6-results} \section{Summary and Conclusions}\label{sec:conclusions} \input{7-conclusions} \section*{Acknowledgements} \addcontentsline{toc}{section}{Acknowledgements} This research was supported by the Centre for the Universe at the Perimeter Institute. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. We acknowledge support provided by Compute Ontario (www.computeontario.ca) and Compute Canada (www.computecanada.ca). H.~J.~S. is supported by the U.S.~Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0014329. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 693024). \bibliographystyle{mnras}
2,877,628,090,040
arxiv
\section{Introduction} Electron bunches of femtosecond-to-attosecond-scale duration are useful tools for studying ultrafast atomic-scale processes, including structural phase transitions in condensed matter~\cite{Siwick2003An-Atomic-Level,Baum20074D-Visualizatio,Morrison2014A-photoinduced-,Gedik2007Nonequilibrium-,Musumec2010Capturing-ultra,ScianiMiller2011}, sub-cycle changes in oscillating electromagnetic waveforms~\cite{Ryabov2016Electron-micros}, and the dynamics of biological structures~\cite{Fitzpatrick2013Exceptional-rig,Anthony-W.-P.-Fitzpatrick20134D-Cryo-Electro}. High-density electron bunches of sub-femtosecond durations are potentially useful in high-resolution, time-resolved atomic diffraction~\cite{Baum2017NatPhys}, as sources of extreme-ultraviolet radiation through inverse Compton scattering~\cite{PhysRevLett.104.234801,PhysRevSTAB.14.070702,Kiefer_rel_elec_mirrors,PhysRevLett.119.254801}, and as injection bunches for compact charged-particle accelerators~\cite{RJEngland_DLA,FerrariE_ACHIP}. Existing schemes for electron bunch compression include the use of electrostatic elements~\cite{WangGedikIEEE2012}, time-varying fields within radio-frequency (RF) cavities~\cite{Gao12,vanOudheusden2010PRL,Chatelain2012,Kassier2012,Gliserin2015Sub-phonon-peri}, electromagnetic transients~\cite{Baum2007Attosecond-elec,Hilbert2009Temporal-lenses,WLJ2015NJP,PriebeNatPhoton,KozakNatPhys2017,PhysRevLett.120.103203,KealhoferSci2016,EhbergerOSA2017}, and a combination of optical laser pulses and dielectric membranes~\cite{Baum2017NatPhys}. In all of these schemes, space charge effects and velocity spread enforce a tradeoff between electron bunch charge and pulse duration. 
Consequently, whereas an electron bunch of pulse duration 0.1-1 ps may contain $\sim$250 fC~\cite{vanOudheusden2010PRL}, electron bunches of attosecond-scale durations (attobunches) are typically realized with single or very few electrons~\cite{Baum2017NatPhys,PriebeNatPhoton,KozakNatPhys2017,PhysRevLett.120.103203,KealhoferSci2016,EhbergerOSA2017,Zewail187,Aidelsburger2010PNAS}. Here, we use \textit{ab initio} numerical simulations and complementary analytical theory to show that high-charge electron bunches of attosecond-scale durations can be produced by interfering coherent terahertz and optical pulses. We study two regimes of operation: in the first regime, 5 MeV electrons are compressed into attobunches of about $20$ as duration, each containing $\sim 240$ electrons. In the second regime, 5 MeV electrons are compressed into bunches of $<400$ as duration, each containing $\sim 1$ fC of charge. By comparison, theoretical predictions of electron bunch compression using realistic bunches have so far been limited to about 200 as~\cite{KozakNatPhys2017} in the single-electron regime. Experimentally, the shortest electron bunches produced to date lie in the single-electron regime, with durations of 655 as~\cite{PriebeNatPhoton} and 820 as~\cite{Baum2017NatPhys}, and indirect measurements indicating durations as short as 260 as~\cite{PhysRevLett.120.103203}. In addition, we obtain fully closed-form expressions for the dynamics of electrons subject to a general combination of counter-propagating pulses. Given a specific initial electron bunch configuration, these analytical tools enable us to predict various key features of our compression scheme, such as the bunch duration at focus (maximum compression), and the final kinetic energy (KE) spread. Our analytical predictions agree well with our \textit{ab initio} numerical simulation results in regimes where space charge effects are negligible. 
Our work complements existing theoretical formulations for the behavior of charged particles in counter-propagating electromagnetic fields, which have been confined to the sub-relativistic regime~\cite{Hilbert2009Temporal-lenses,PhysRevA.98.013407}. In the proposed scheme, shown in figure~\ref{Fig_01}(a), the counter-propagating terahertz and optical pulses interfere to form an intensity grating, which is velocity-matched to the relativistic (few MeV-scale) electrons by choosing the proper carrier frequency for each pulse. The ponderomotive force, which is proportional to the negative intensity gradient, compresses the electrons into a train of attobunches. Bunch compression schemes based on intensity gratings have previously been studied only in the regime where both electromagnetic pulses are at optical/infrared frequencies for applications like electron acceleration~\cite{Hafizi1997Vacuum-beat-wav,Kozak2015Electron-accele,Esarey1995PRE}, and the compression of non-relativistic, single and few-electron bunches~\cite{Baum2007Attosecond-elec,Hilbert2009Temporal-lenses,KozakNatPhys2017,PhysRevLett.120.103203}. Here, we show that combining terahertz frequencies with optical frequencies creates an intensity grating that can be used to compress relativistic electron pulses achievable in lab-scale setups~\cite{Maxson2017Direct-Measurem,fs_time_res_diffraction,SLAC_MeV_rev_sci_instr,C4FD00204K} to attosecond scale durations with as much as 1 fC of charge per attobunch. We use counter-propagating terahertz pulses of $<100~\mathrm{\mu}$J and optical pulses of $<100$ mJ, which are readily obtained with today's technology~\cite{Yeh2007Generation-of-1,single_cycle_1THz,OR_organic_crystals,THz_DFG,HuangOptLett2013,Dhillon2017a,Fulop_THz_OR,Fulop_mJ_THz,THz_0.4mJ,THz_0.9mJ}. 
The absence of material structures in the interaction region of this scheme removes the possibility of material damage, allowing the intensity of our lasers to be scaled to arbitrarily high values for rapid focusing and strong compression of relativistic electron bunches. Due to the suppression of space charge effects at relativistic energies~\cite{HastingsApplPhysLett2006,MusumeciApplPhysLett2010}, the resulting attobunches can hold substantially higher charge than existing attobunches in the non-relativistic, single-electron regime~\cite{Baum2017NatPhys,KozakNatPhys2017,PhysRevLett.120.103203}. Our \textit{ab initio} simulations (as described in the next section) exactly model the interactions of electrons with each other as well as with external laser fields. In particular, our simulations account for both near-field and far-field space charge effects, where near-field refers to fields responsible for the Coulomb force, and far-field refers to fields associated with radiation from the electron. We model the external laser fields using exact, finite-energy, non-paraxial solutions to Maxwell's equations. This is critical for accuracy since terahertz pulses from compact sources usually operate in the near-single-cycle limit and have beam waists tightly focused down to wavelength-scale dimensions~\cite{single_cycle_1THz} in order to achieve desired on-axis field strengths. \begin{centering} \begin{figure}[ht!] \centering \includegraphics[width = 160mm]{Fig_01} \caption{High-charge, relativistic (5 MeV) attosecond electron pulses formed by a terahertz-optical intensity grating. The scheme we study is illustrated in (a): (i) A co-propagating optical pulse (blue waveform) and a counter-propagating few-cycle terahertz pulse (green waveform) are incident on a relativistic electron bunch (yellow ellipse) of mean velocity $v_{0}$. (ii) The pulses overlap, forming a sub-luminal intensity grating (solid turquoise profile) which co-propagates with the electron bunch. 
(iii) After the interaction, the impulse imparted by the grating compresses the electrons into a train of attobunches. The heatmaps in (b) and (c) show the electron density time-evolution using a centered coordinate system $z - \langle z \rangle$. The electron density spatial distributions at the focal times are shown in (d) and (e). In (b) and (d), electrons with $10^{-3} \%$ initial relative kinetic energy (KE) spread, and an average of 1250 electrons per grating period, interact with a 90.3 mJ optical pulse and a 39.0 $\mu$J terahertz pulse, resulting in bunches containing 246 electrons in $\sim20$ as durations (FWHM). In panels (c) and (e), a 20 fC bunch with a (FWHM) duration of about 16.5 fs and relative KE spread of 0.146\% interacts with a 6.66 mJ optical pulse and a 16.9 $\mu$J terahertz pulse, resulting in electron bunches of $< 400$ as (FWHM) containing approximately 1 fC of charge within the FWHM. Only the two central grating periods are plotted. The data in (b)-(e) are the result of electrodynamic simulations in which non-paraxial laser fields as well as near- and far-field space charge effects are exactly taken into account.} \label{Fig_01} \end{figure} \end{centering} \section{Results} \subsection{High-charge attosecond electron bunches} Figure~\ref{Fig_01} presents results from 2 regimes of our study: (i) a regime where $\sim 20$ as electron bunch durations containing 246 electrons are realized and (ii) a regime where $<400$ as electron bunch durations are realized with fC-scale charge per bunch. The durations of the compressed bunches are stated using full width at half maximum (FWHM) values. In all simulation results presented in this section, the optical (co-propagating) and terahertz (counter-propagating) pulses have central wavelengths of $\lambda_{1} = 0.65~\mathrm{\mu m}$ and $\lambda_{2} = 300~\mathrm{\mu{m}}$ respectively, are linearly-polarized in $x$, and propagate in the $\pm z$ direction. 
The electron bunches have a mean KE of $\langle \mathrm{KE}\rangle =5$ MeV. The velocity of the intensity grating, $v_{gr}$, is matched to the mean velocity of the electrons, $v_{0}$, by choosing wavelengths, $\lambda_{1}$ and $\lambda_{2}$, such that~\cite{Baum2007Attosecond-elec,Esarey1995PRE}: \begin{equation} v_{gr} = v_{0} = c\bigg{(} \frac{\lambda_{2} - \lambda_{1}}{\lambda_{1}+\lambda_{2}} \bigg{)} \label{eqn_standing_wave_condition} \end{equation} where $c$ is the speed of light in free space. In the lab frame, the grating period is given by \begin{equation} \lambda_{gr} = \frac{\lambda_{1}}{2\gamma_{0}^{2}(1 - \beta_{0})} = \frac{\lambda_{2}}{2\gamma_{0}^{2}(1 + \beta_{0})} \end{equation} where $\vec{\beta}=\beta_{0}\hat{z}=(v_{0}/c)\hat{z}$ is the normalized mean velocity of the electron bunch propagating in $\hat{z}$. The corresponding Lorentz factor is $\gamma_{0} = 1/\sqrt{1 - \beta_{0}^{2}}$. The mean KE of the electrons is $\langle \mathrm{KE} \rangle = (\gamma_{0} - 1)m_{e}c^{2}$, where $m_{e}$ is the electron rest mass. (\ref{eqn_standing_wave_condition}) shows the necessity of combining very disparate counter-propagating laser wavelengths where relativistic electrons are concerned: for $v_{gr}$ close to the speed of light, $v_{0} \sim c$, $\lambda_{2} \gg \lambda_{1}$ is necessary. The use of relativistic electrons takes us into a regime beyond what has been studied for compressing non-relativistic electrons and gives us an opportunity to leverage the developments of high-intensity terahertz pulses in combination with optical pulses in our scheme. Figures \ref{Fig_01}(b) and \ref{Fig_01}(d) show the electron density distribution obtained by averaging over 300 sets of \textit{ab initio} simulations using an initial 5 MeV electron bunch containing 2 fC of charge uniformly distributed across 10 grating periods. 
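As a numerical sanity check of the velocity-matching condition (\ref{eqn_standing_wave_condition}) above, the required counter-propagating wavelength and the lab-frame grating period can be evaluated directly. The following Python sketch (the constants and function names are our own) recovers $\lambda_{2} \approx 300~\mathrm{\mu{m}}$ for $\lambda_{1} = 0.65~\mathrm{\mu m}$ and 5 MeV electrons:

```python
import math

ME_C2_MEV = 0.5109989  # electron rest energy [MeV]

def lorentz(ke_mev):
    """Return (gamma, beta) for a given kinetic energy in MeV."""
    gamma = 1.0 + ke_mev / ME_C2_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma, beta

def counter_wavelength(ke_mev, lam1_um):
    """Counter-propagating wavelength lambda_2 needed to velocity-match
    the grating, from beta_0 = (lam2 - lam1)/(lam2 + lam1)."""
    _, beta = lorentz(ke_mev)
    return lam1_um * (1.0 + beta) / (1.0 - beta)

def grating_period(ke_mev, lam1_um):
    """Lab-frame grating period lambda_gr = lam1 / (2 gamma^2 (1 - beta))."""
    gamma, beta = lorentz(ke_mev)
    return lam1_um / (2.0 * gamma**2 * (1.0 - beta))

lam2_um = counter_wavelength(5.0, 0.65)  # ~301 um, i.e. a ~1 THz pulse
lam_gr_um = grating_period(5.0, 0.65)    # ~0.65 um: lambda_gr -> lambda_1 as v0 -> c
```

Note that for $v_{0} \rightarrow c$, $2\gamma_{0}^{2}(1-\beta_{0}) = 2/(1+\beta_{0}) \rightarrow 1$, so the grating period approaches $\lambda_{1}$ itself.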
After the laser-electron interaction, each resulting attobunch contains about 1250 electrons per grating period $\lambda_{gr}$, of which 246 lie within the FWHM duration of 20 as. The non-paraxial optical and terahertz pulses have energies of 90.3 mJ and 39.0 $\mu$J respectively. The optical pulse has a duration of 80 fs (intensity FWHM) and a peak on-axis field strength $E_{01} \approx 4.96\times 10^{10}$ V/m. The terahertz pulse has a 1 ps duration (intensity FWHM) and a peak on-axis field strength $E_{02}\approx 2.95\times10^{8}$ V/m. Both laser pulses have the same waist radius, $w_{0}=450~\mathrm{\mu{m}}$. During interaction, the bunch has a radius of about $15~\mathrm{\mu{m}}$. The initial relative KE spread is $\sigma_{\mathrm{KE}}/\langle\mathrm{KE}\rangle = 10^{-3} \%$. While this value is small, relative KE spreads as low as $\sigma_{\mathrm{KE}}/\langle\mathrm{KE}\rangle = 4\times10^{-4} \%$ have been predicted for existing RF gun set-ups~\cite{PhysRevSTAB.18.120102}. The full set of electron bunch and laser pulse parameters, as well as a plot of the non-paraxial terahertz pulse electric field spatial profile, can be found in Supporting Information Section S.5(iv). The second scenario, shown in figures \ref{Fig_01}(c) and \ref{Fig_01}(e), involves compressing an electron bunch of $\langle \mathrm{KE} \rangle=5$ MeV, 20 fC (total charge), 16.5 fs FWHM duration, relative energy spread $\sigma_{\mathrm{KE}}/\langle \mathrm{KE} \rangle\approx 0.146 \%$ and $8~\mathrm{\mu{m}}$ bunch radius, into a train of sub-400 as duration, fC-scale electron bunches. The electron density heatmap and distribution are averaged over 200 sets of \textit{ab initio} simulation results. The initial electron bunch was modelled after the bunch experimentally demonstrated in \cite{Maxson2017Direct-Measurem} (see Supporting Information Section S.5(v)). Both pulsed lasers have the same beam waist: $w_{0} = 200~\mathrm{\mu{m}}$. 
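The pulse energies quoted above for the first regime can be cross-checked from the peak on-axis field, waist radius, and duration. The sketch below is our own back-of-envelope estimate, assuming paraxial Gaussian spatial and temporal profiles (it therefore slightly underestimates the non-paraxial 90.3 mJ value, but agrees to within a few percent):

```python
import math

EPS0 = 8.8541878e-12  # vacuum permittivity [F/m]
C = 2.99792458e8      # speed of light [m/s]

def gaussian_pulse_energy(e0, w0, t_fwhm):
    """Energy of a pulse with Gaussian spatial and temporal profiles:
    peak on-axis intensity x effective area x effective duration."""
    i0 = 0.5 * EPS0 * C * e0**2   # peak intensity [W/m^2]
    area = math.pi * w0**2 / 2.0  # integral of exp(-2 r^2 / w0^2) over the plane
    t_eff = t_fwhm * math.sqrt(math.pi / (4.0 * math.log(2.0)))  # Gaussian time integral
    return i0 * area * t_eff

u_opt = gaussian_pulse_energy(4.96e10, 450e-6, 80e-15)  # ~88 mJ (quoted: 90.3 mJ)
u_thz = gaussian_pulse_energy(2.95e8, 450e-6, 1e-12)    # ~39 uJ (quoted: 39.0 uJ)
```

For the tightly focused, near-single-cycle terahertz pulse of the second regime, a paraxial estimate of this kind is less reliable, so we check only the many-cycle first-regime values here.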
The optical pulse has a duration of 30 fs (intensity FWHM) and an on-axis peak field strength $E_{01} \approx 5\times10^{10}$ V/m, corresponding to a pulse energy of 6.66 mJ. The terahertz pulse (figure S10 in Supporting Information Section S.5(v)) has a duration of 1 ps (intensity FWHM) and an on-axis peak field strength $E_{02}\approx4.18\times10^8$ V/m, corresponding to a pulse energy of 16.9 $\mu$J. Such optical and terahertz pulses are readily achievable today in a table-top setup~\cite{Yeh2007Generation-of-1,single_cycle_1THz,OR_organic_crystals,THz_DFG,HuangOptLett2013,Dhillon2017a,Fulop_THz_OR,Fulop_mJ_THz,THz_0.4mJ,THz_0.9mJ}. At the focus, we observe the formation of electron bunches with about 1 fC of charge in a FWHM duration of 367 as (figure \ref{Fig_01}(e)). \begin{figure}[ht!] \centering \includegraphics[width = \textwidth]{Fig_02} \caption{Transverse dynamics of a 5 MeV multi-electron bunch and dependence of bunch duration (FWHM) and charge on initial KE spread. (a) and (c) show the evolving transverse dynamics of the bunch plotted in figures \ref{Fig_01}(b) and \ref{Fig_01}(d); (b) and (d) show the transverse dynamics of the bunch plotted in figures~\ref{Fig_01}(c) and \ref{Fig_01}(e). (a) and (b) show the evolution of the bunch radius, $\sigma_{x}$, while (c) and (d) show the evolution of the transverse normalized momentum, $\sigma_{\gamma\beta_{x}}$. The vertical dotted line labeled ``OL'' shows the time when the laser pulse peaks overlap and the vertical dashes labeled ``MC'' correspond to the time of maximum compression as shown in figure~\ref{Fig_01}. The cases with no electron-intensity grating interaction are plotted using the red dotted lines. The intensity grating imparts a significant momentum spread only during interaction. (e) The bunch FWHM duration at the focus increases with increasing initial KE spread. 
Durations of about $40$ as can be achieved at spreads of $10^{-2}\%$, and durations of $\leq20$ as can be achieved for spreads of $10^{-3}\%$ and lower. (g) shows the electron density distribution at the time of maximum compression of the central well (red distribution; these are the values used to plot the 0.2 fC case in (e)) for different values of initial electron KE spread. The bunch durations in attoseconds are indicated above each peak.} \label{Fig_02} \end{figure} Figures \ref{Fig_02}(a) and \ref{Fig_02}(c) show the transverse dynamics induced by the intensity grating for the case studied in figures \ref{Fig_01}(b) and \ref{Fig_01}(d), while those in figures \ref{Fig_02}(b) and \ref{Fig_02}(d) correspond to the case studied in figures \ref{Fig_01}(c) and \ref{Fig_01}(e). The evolution of $\sigma_{x}$ and $\sigma_{\gamma\beta_{x}}$ with space-charge effects, but without laser-electron interaction, is plotted using red dotted lines. The laser interaction imparts large transverse momenta in $x$ only during the time of interaction (vertical dotted line labeled ``OL''); long after interaction, the transverse dynamics are practically indistinguishable from the case with no electron-intensity grating interaction. For the case shown in figure \ref{Fig_02}(c), the compression is strong enough that maximum compression (vertical dashes labeled ``MC'') occurs before the grating has completely faded. Thus, the transverse momentum spread is still significant ($\sigma_{\gamma\beta_{x}} = 1.28\times10^{-3}$) compared to the case without the grating ($\sigma_{\gamma\beta_{x}} = 0.11\times10^{-3}$). For the case shown in figure~\ref{Fig_02}(d), maximum compression is attained just after the intensity grating has faded. 
Hence, the transverse momentum spread at maximum compression ($\sigma_{\gamma\beta_{x}} = 2.11\times10^{-3}$) is similar to the case where there is no electron-grating interaction ($\sigma_{\gamma\beta_{x}} = 2.01\times10^{-3}$). Therefore, when low transverse momentum spread and minimal bunch expansion are desired, care should be taken to ensure that maximum compression is attained long after the intensity grating has faded. Figures~\ref{Fig_02}(e) and \ref{Fig_02}(f) show the achievable electron bunch duration at the focus and the amount of charge contained within the FWHM duration as a function of initial electron KE spread for a fixed amount of total charge. The laser pulse and electron bunch parameters used (except for the charge amount and initial KE spread) are the same as those used to produce figures~\ref{Fig_01}(b) and \ref{Fig_01}(d). Figure~\ref{Fig_02}(e) indicates that with initial relative KE spreads on the order of $0.1\%$, which is achievable with current state-of-the-art few-MeV beamlines~\cite{Maxson2017Direct-Measurem}, compressed bunches with durations on the order of hundreds of attoseconds can already be realized. For initial KE spreads on the order of about $10^{-2}\%$, sub-100 as bunches can be attained, and $\lesssim 10^{-3}\%$ initial KE spread yields bunches with durations of 20 as and below at the focus. It should be noted that the charge contained within the attobunches can be enhanced by increasing the initial charge values (even up to 10 fC) without increasing the attobunch durations significantly, due to the relativistic suppression of space charge effects at few-MeV electron energies. Figure~\ref{Fig_02}(g) shows the electron density distribution for all attobunches at the time of maximum compression of the attobunch closest to the grating center (red distribution, values used to plot figure~\ref{Fig_02}(e)) for the 0.2 fC case. 
Our results indicate that despite the attobunches having differing focal times, which depend on their relative distance from the center of the intensity grating, the final bunch durations across the entire macrobunch are similar; moreover, by appropriate selection of laser pulse durations, the focal times for each attobunch can be controlled (Supporting Information Section S.2). Our results show that a combination of terahertz and optical technologies can enable the realization of high-charge electron bunches of sub-fs durations. \subsection{Theoretical predictions of key bunch parameters} We now present fully closed-form expressions for the behavior of charged particles subject to a pair of counter-propagating electromagnetic pulses. These expressions, which neglect space charge effects, have been used to predict various key properties of our bunch compression scheme -- including the focal time (maximum compression), the bunch duration at focus, and the final KE spread -- and show excellent agreement with the results of our \textit{ab initio} simulations in regimes where space charge and non-paraxial laser pulse effects are small (see figure \ref{Fig_03}). \begin{figure}[ht!] \centering \includegraphics[width = \textwidth]{Fig_03} \caption{Dependence of compressed electron bunch properties on laser field amplitudes and initial relative KE spread. $E_{01}E_{02}$ denotes the product of the peak field strengths (on-axis values for the non-paraxial case), which is varied as a parameter in (a)-(c). $\sigma_{\mathrm{KE}}/\langle \mathrm{KE}\rangle$ is the initial relative KE spread and is varied as a parameter in (d)-(f). The FWHM duration at focus is plotted in (a) and (d), the focal time in (b) and (e), and the final KE spread in (c) and (f). 
Circles indicate theory, crosses indicate simulations where the laser fields are modelled as plane wave pulses, and the triangles indicate simulations where the laser fields are modelled as exact non-paraxial pulses ($w_{0} = 300~\mathrm{\mu{m}}$). The terahertz and optical pulses have FWHM durations of 1 ps and 30 fs respectively. The electron bunch has a mean KE of 5 MeV and a radius of 10 $\mu$m. Space charge effects are neglected in this comparison. We note the excellent agreement between the theoretical predictions and the plane wave simulation results. Discrepancies between plane wave and non-paraxial simulation results show the non-trivial influence of the transverse laser pulse profile in our scenarios.} \label{Fig_03} \end{figure} We start from the Newton-Lorentz equations of motion, which describe the dynamics of electrons in arbitrary electromagnetic fields. Treating the counter-propagating laser pulses as pulsed plane waves and considering an electron moving in an arbitrary direction such that the transverse ($x,y$-direction) momenta are small compared to the longitudinal ($z$-direction) momentum, we obtain the normalized electron velocity long after interaction as (see Supporting Information Sections S.1 to S.3 for detailed derivations): \begin{equation} \beta_{z,f}' \approx \Bigg{(} \sqrt{\frac{\pi}{\alpha_{a}}}\frac{T_{1}'}{\omega'}\cos(\Delta\theta)\frac{e^{2}E_{01}'E_{02}'}{m_{e}^{2}c^{2}}\sin(2k'z_{OLe}' + \phi_{0})\exp\Bigg{\{} \frac{-4(z_{OLe}' - z_{OL}')^{2}}{c^{2}[T_{1}'^{2}(1+\beta_{z,i}')^{2} + T_{2}'^{2}(1 - \beta_{z,i}')^{2}]} \Bigg{\}} \Bigg{)} + \beta_{z,i}' \label{eqn_betafp} \end{equation} where the primes on the variables indicate that they are evaluated in the frame moving at normalized velocity $\vec{\beta}=\beta_{0}\hat{z}$. We define this to be the primed frame. For our electron bunch compression scheme, we take $\beta_{0}$ as the mean normalized velocity of the electron bunch being compressed. 
$E_{0j}'$ and $T_{j}'$ respectively refer to the electric field amplitude and pulse duration of the laser pulse labelled by subscript $j$, where $j=1$ ($j=2$) refers to the laser pulse which co-propagates (counter-propagates) with respect to the electron bunch. $\omega' = k'c$ is the central angular frequency of the laser pulses (which have the same frequency in the primed frame), $\Delta\theta$ is the relative angle between the polarization vectors associated with the two laser pulses (which we set to $0$ here for the strongest compression), $\phi_{0}$ is a phase constant that depends on the carrier envelope phase of each laser pulse, and $\beta_{z,i}'$ is the initial normalized electron speed. The intensity peaks of the counter-propagating laser pulses overlap at position $z' = z_{OL}'$ and time $t' = t_{OL}'$, and we define the longitudinal electron position at the time $t' = t_{OL}'$ to be $z_{OLe}'$ in the limit where the laser field strengths go to zero. $\alpha_{a}$ is defined as \begin{equation} \alpha_{a} \equiv (1 - \beta_{z,i}')^{2} + \frac{T_{1}'^{2}}{T_{2}'^{2}}(1+\beta_{z,i}')^{2}. \end{equation} We also obtain the corresponding electron position long after interaction as \begin{equation} \begin{split} z_{f}'(t') &= \beta_{z,f}'ct' + z_{OLe}' - \beta_{z,i}'ct_{OL}'\\ &+\Bigg{(} \frac{\alpha_{b}}{\alpha_{a}}\sqrt{\frac{\pi}{\alpha_{a}}}\frac{T_{1}'}{2\omega'}\cos(\Delta\theta)\frac{e^{2}E_{01}'E_{02}'}{m_{e}^{2}c}\sin(2k'z_{OLe}' + \phi_{0}) \exp\Bigg{\{} \frac{-4(z_{OLe}' - z_{OL}')^{2}}{c^{2}[T_{1}'^{2}(1 + \beta_{z,i}')^{2} + T_{2}'^{2}(1 - \beta_{z,i}')^{2}]} \Bigg{\}} \Bigg{)} \end{split} \label{eqn_zfp} \end{equation} where \begin{equation} \alpha_{b} = \frac{2}{c}\bigg{\{} \frac{T_{1}'^{2}}{T_{2}'^{2}}(1+\beta_{z,i}')[z_{OLe}' - z_{OL}' - (1+\beta_{z,i}')ct_{OL}'] -(1 - \beta_{z,i}')[z_{OLe}' - z_{OL}' + (1 - \beta_{z,i}')ct_{OL}'] \bigg{\}}. \end{equation} When the bunch has vanishing longitudinal velocity spread, i.e. 
$\beta_{z,i}'=0$, the general expression for the focal time, defined as the time between $t_{OL}'$ and the electrons reaching maximum compression, $t_{comp}'$, is: \begin{equation} t_{comp}' - t_{OL}' = \frac{m_{e}}{K_{0}'\sqrt{\pi}}\sqrt{\frac{1}{T_{1}'^{2}} + \frac{1}{T_{2}'^{2}}}\exp\Bigg{[} \frac{4(z_{OLe}' - z_{OL}')^{2}}{c^{2}(T_{1}'^{2} + T_{2}'^{2})} \Bigg{]} + \frac{z_{OLe}' - z_{OL}'}{c}\Bigg{(} \frac{T_{2}'^{2} - T_{1}'^{2}}{T_{1}'^{2}+T_{2}'^{2}} \Bigg{)}. \label{eqn_foc_times} \end{equation} Here, $K_{0}' = (2e^{2}E_{01}'E_{02}'\cos\Delta\theta)/(m_{e}c^{2})$. In the special case where we consider the electrons near the center of the intensity grating ($z_{OLe}' \approx z_{OL}'$) and $T_{1}' = T_{2}' = T'$ and $E_{01}' = E_{02}' = E_{0}'$, (\ref{eqn_foc_times}) reduces to \begin{equation} t_{comp}' - t_{OL}' = \Bigg{(} \frac{m_{e}}{K_{0}'}\sqrt{\frac{2}{\pi}}\Bigg{)}\frac{1}{T'} \end{equation} which agrees with the analytical result obtained in \cite{Hilbert2009Temporal-lenses}, modulo a factor of $\sqrt{2/\pi}$ which comes from our choice of a Gaussian pulse profile. The overlap of the optical and terahertz pulses results in a finite-length intensity grating in which electrons farther from the center of the intensity grating generally experience a weaker compressive force. This effect is taken into account through the exponential factors in (\ref{eqn_betafp}) and (\ref{eqn_zfp}), as well as through $\alpha_{b}$. The results in figure \ref{Fig_03} show the excellent agreement between our analytical predictions (circles) and numerical results when the laser pulses are modelled as pulsed plane waves (crosses). The discrepancy between the plane wave simulations and the exact numerical results using non-paraxial pulses (triangles) shows the importance of taking into account the transverse profiles of the focused optical and terahertz pulses in our simulations. 
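The reduction of (\ref{eqn_foc_times}) to the central-electron special case can also be checked numerically. A minimal Python sketch follows; the primed-frame quantities are passed as plain numbers, and the field and duration values used are illustrative, not taken from the regimes above:

```python
import math

ME = 9.1093837e-31  # electron mass [kg]
QE = 1.6021766e-19  # elementary charge [C]
C = 2.99792458e8    # speed of light [m/s]

def k0(e01, e02, dtheta=0.0):
    """Coupling constant K0' = 2 e^2 E01' E02' cos(dtheta) / (m_e c^2)."""
    return 2.0 * QE**2 * e01 * e02 * math.cos(dtheta) / (ME * C**2)

def focal_time_general(e01, e02, t1, t2, dz, dtheta=0.0):
    """General focal time t_comp' - t_OL' for an electron starting a
    distance dz = z_OLe' - z_OL' from the grating center."""
    pre = ME / (k0(e01, e02, dtheta) * math.sqrt(math.pi))
    gauss = math.exp(4.0 * dz**2 / (C**2 * (t1**2 + t2**2)))
    return (pre * math.sqrt(1.0 / t1**2 + 1.0 / t2**2) * gauss
            + (dz / C) * (t2**2 - t1**2) / (t1**2 + t2**2))

def focal_time_central(e0, t, dtheta=0.0):
    """Special case: equal fields and durations, electron at grating center."""
    return (ME / k0(e0, e0, dtheta)) * math.sqrt(2.0 / math.pi) / t

t_gen = focal_time_general(1.0e8, 1.0e8, 1.0e-12, 1.0e-12, 0.0)
t_cen = focal_time_central(1.0e8, 1.0e-12)
```

The general expression also makes explicit that, for equal pulse durations, off-center electrons ($z_{OLe}' \neq z_{OL}'$) focus later, since the Gaussian factor exceeds unity.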
Nevertheless, we also note that these exact results follow the trend predicted by our theory relatively well in the regime considered in figure \ref{Fig_03}. In figure \ref{Fig_03}, the 5 MeV, $10~\mathrm{\mu{m}}$-radius electron bunch was modelled using $3.75\times10^{5}$ particles, and has a uniform random distribution in $z$ over a length of $\lambda_{gr}$. The initial bunch is normally-distributed in $x$ and $y$. The initial momentum spread for all cases is normally-distributed in all directions and isotropic: $\sigma_{\gamma\beta_{x}} = \sigma_{\gamma\beta_{y}} = \sigma_{\gamma\beta_{z}}$. We used the following initial relative KE spreads: $\sigma_{\mathrm{KE}}/\langle\mathrm{KE}\rangle= $ 0.02\%, 0.06\%, 0.10\%, and 0.14\%. The corresponding momentum spreads are $\sigma_{\gamma\beta_{i}}=1.9615\times10^{-3}$, $5.8848\times10^{-3}$, $9.8075\times10^{-3}$, and $1.3731\times10^{-2}$ respectively ($i\in\{x,y,z\}$). All electron bunch and laser pulse parameters are listed in Supporting Information Section S.5(vi). Figures \ref{Fig_03}(a) and \ref{Fig_03}(d) show that a larger initial electron bunch KE spread makes it more difficult to compress the bunch unless higher laser field strengths are used. Figure \ref{Fig_03} thus highlights the importance of low energy spread in realizing attosecond bunches. As seen in figure \ref{Fig_03}(a), a change in relative initial KE spread from 0.02\% to 0.14\% can cause the electron bunch durations at the focus to increase by almost an order of magnitude. In the limit where $E_{01}E_{02}$ is small, we see from figure \ref{Fig_03}(a)-(c) that it is possible to obtain fs-scale electron bunches with a very small (practically negligible) change in energy spread, at the cost of a longer focal time. In figure \ref{Fig_03}(b), the focal time decreases approximately as $1/E_{01}'E_{02}' = 1/E_{01}E_{02}$, in agreement with the trend predicted by (\ref{eqn_foc_times}). 
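The quoted momentum spreads follow from the relative KE spreads via $\gamma\beta = \sqrt{\gamma^{2}-1}$, which gives $\mathrm{d}(\gamma\beta)/\mathrm{d}\gamma = 1/\beta$ and hence $\sigma_{\gamma\beta} \approx \sigma_{\gamma}/\beta_{0}$ to first order. A small conversion helper (our own; it reproduces the listed values to within rounding):

```python
import math

ME_C2_MEV = 0.5109989  # electron rest energy [MeV]

def momentum_spread(ke_mev, rel_ke_spread):
    """Convert a relative KE spread into a spread of the normalized
    momentum gamma*beta, using d(gamma*beta)/d(gamma) = 1/beta
    (exact, since gamma*beta = sqrt(gamma^2 - 1))."""
    gamma = 1.0 + ke_mev / ME_C2_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    sigma_gamma = rel_ke_spread * ke_mev / ME_C2_MEV
    return sigma_gamma / beta

# 5 MeV bunch: 0.02% relative KE spread -> sigma_(gamma beta) ~ 1.96e-3
sigma_low = momentum_spread(5.0, 0.02e-2)
sigma_high = momentum_spread(5.0, 0.14e-2)
```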
Using the formalism described here, the predicted durations and focal times for the cases shown in figures~\ref{Fig_01}(b)-\ref{Fig_01}(e) are also in good agreement with our \textit{ab initio} simulations. For the case shown in figures~\ref{Fig_01}(b) and \ref{Fig_01}(d), the predicted FWHM duration for both the left and right attobunches is $9$ as, which is of the same order as the numerically computed values of $21$ as and $20$ as; the theoretical time of maximum compression is $0.123$ ns, which is very close to the actual value of $0.127$ ns. For the case shown in figures~\ref{Fig_01}(c) and \ref{Fig_01}(e), the theoretically predicted durations of the left and right attobunches are $352$ as and $338$ as respectively, while the numerically computed durations are $391$ as and $367$ as. The theoretical time of maximum compression is $0.032$ ns, which is very close to the actual value of $0.033$ ns. \section{Discussion} \begin{figure}[ht!] \centering \includegraphics[width = 100mm]{Fig_04} \caption{Operating regimes for electron pulse compression. The colormap shows the electron bunch kinetic energies which can be matched by a range of co-propagating and counter-propagating wavelengths ($\lambda_{1}$ and $\lambda_{2}$ respectively). Only the region corresponding to $\lambda_{1} < \lambda_{2}$ is plotted. The region in which $\lambda_{1} > \lambda_{2}$ corresponds to a counter-propagating grating. The region bounded by the black dashed lines corresponds to the terahertz regime~\cite{Dhillon2017a} for $\lambda_{2}$, and the black dotted lines within further divide this frequency range into bandwidths that are currently attainable through optical rectification of LiNbO$_3$ and organic crystals, difference frequency generation (DFG), and plasma ionization. 
The yellow star marks the 31 keV, non-relativistic case, studied in~\cite{Baum2007Attosecond-elec}, whereas the blue star marks the 5 MeV, relativistic case which we study in this paper.} \label{Fig_04} \end{figure} Here, we present an overview of the electron kinetic energies which can be matched using sources of coherent light at various wavelengths, as well as a brief comparison between our scheme and existing electron bunch compression schemes. The interest in working with electrons of larger kinetic energies is due to the relativistic suppression of space charge effects, which allows shorter bunch durations to be achieved in this compression scheme. The development of intense, coherent terahertz sources on a table-top scale~\cite{Yeh2007Generation-of-1,single_cycle_1THz,OR_organic_crystals,THz_DFG,HuangOptLett2013,Dhillon2017a,Fulop_THz_OR,Fulop_mJ_THz,THz_0.4mJ,THz_0.9mJ}, as figure \ref{Fig_04} shows, unlocks a range of electron kinetic energies spanning 4 orders of magnitude (keV to 10 MeV). By contrast, using only wavelengths falling in the optical to near-infrared regime (0.4 $\mathrm{\mu{m}}$ to 1.4 $\mathrm{\mu{m}}$) would limit us to electron kinetic energies of 100 keV or less. Although the mechanism here can be extended to electron kinetic energies on the order of $10^{2}$ MeV and higher, much larger laser intensities would be involved for effective compression. The study of the use of this mechanism for such ultrarelativistic electrons is beyond the scope of this work. We note that alternative techniques for producing highly-compressed electron bunches of kinetic energies from tens-of-MeV to GeV include compact inverse free electron laser systems~\cite{PRL_hightrapping_efficiency} and undulator modulators~\cite{PRL_single_cycle_XUV}, which have already been demonstrated or proposed. These methods may be more practical when larger dedicated accelerator facilities are available. 
However, for compact acceleration schemes such as dielectric laser acceleration~\cite{RJEngland_DLA}, the ability to produce fC-scale, few-MeV electron bunches modulated to sub-fs scales as injection sources, like those presented in this work, is of interest. A number of laser-based sources of intense terahertz radiation, suitable for use in the present compression scheme as well as for other forms of charged particle manipulation, have already been reported in the literature. Single-cycle and quasi-single-cycle terahertz radiation centered at 1 to 2 THz with peak field strengths on the order of 1 MV/cm has been achieved using optical rectification of LiNbO$_3$ with tilted-pulse-front pumping~\cite{Yeh2007Generation-of-1,single_cycle_1THz} and optical rectification of organic crystals with high non-linear constants~\cite{OR_organic_crystals}. Semiconductors have also been shown to be a promising alternative for generating high-energy THz pulses using this technique~\cite{Fulop_THz_OR}. Terahertz pulse energies on the order of tens of $\mu$J are already routinely produced~\cite{Dhillon2017a} from these compact sources, and energies up to 1 mJ~\cite{THz_0.9mJ,THz_0.4mJ} have already been demonstrated. With the high field strengths accompanying these high pulse energies, shorter attobunch durations and focusing times can be achieved, as our results in figure~\ref{Fig_03} predict. The development of compact THz sources of higher energies would alleviate the need for extremely tight focusing of the THz pulse in order to achieve the desired field strengths. Difference frequency generation (DFG) in optical parametric amplifiers has been used to produce narrow-band, multi-cycle pulses at mid-infrared frequencies (15-30 terahertz) with higher field strengths of 100 MV/cm~\cite{THz_DFG}. 
While ultra-broadband terahertz radiation can be produced using plasma ionization~\cite{THz_plasma_ionization}, the field strengths are typically lower than those achieved using optical rectification. However, they could potentially be used for the compression of low-charge or single-electron bunches with small energy spreads over longer focal distances. We note that greater flexibility in our choice of wavelength for matching a given electron kinetic energy can be achieved by tilting the counter-propagating pulses~\cite{Hilbert2009Temporal-lenses,KozakNatPhys2017,PhysRevLett.120.103203,Kozak2015Electron-accele}. In this case, however, too large a tilt angle will lead to restrictions on the transverse size of the electron bunch. Nevertheless, the concept of tilting laser pulses could be implemented in the terahertz-optical scheme to accommodate an even wider range of electron kinetic energies. Dielectric membranes, in combination with an optical laser pulse, have been used to compress non-relativistic (70 keV), single-electron bunches to attosecond-scale durations~\cite{Baum2017NatPhys}. When non-relativistic electrons are considered, the laser field strength required to modulate the bunch remains low enough to avoid material damage. However, relativistic electron bunches require much higher intensities for compression to attosecond time-scales, making material damage more likely. The scheme studied in the present paper allows high-intensity lasers to be used without the risk of material damage. \section{Conclusion} We presented a scheme in which counter-propagating terahertz and optical pulses are used to compress relativistic electrons into a train of attosecond-duration bunches. Due to the space-charge suppression at few MeV-scale energies, significant amounts of charge can be contained within each attobunch, compared to previously realized attobunches that have only single or very few electrons. 
Our \textit{ab initio} simulations take near- and far-field space charge effects (associated with the Coulomb force and the electron radiation respectively) into account, and use exact, non-paraxial pulse profiles to model single-cycle, tightly-focused terahertz pulses; this is a significant advance over previous numerical studies of similar intensity grating compression schemes, which assumed non-interacting electrons and planar or paraxial electromagnetic waves. We presented results for attosecond electron bunch compression in two regimes. The first case involved the compression of a lower-charge electron cloud into attobunches with durations of about $20$ as (FWHM), containing about 246 electrons. Such short-duration bunches could be used, for instance, as sources of high-quality coherent radiation through processes like inverse Compton scattering~\cite{Kiefer_rel_elec_mirrors}, Smith-Purcell radiation~\cite{Sergeeva2017Smith-Purcell-r}, transition radiation~\cite{Zhang2017Transition-radi}, and through electron-plasmon scattering~\cite{WLJ_nat_photon_2016,Rosolen_LightSci_2018}. We find that the realization of this scenario depends on having kinetic energy spreads which are extremely low but feasible~\cite{PhysRevSTAB.18.120102}. In the second case, the initial electron bunch contains 20 fC of charge and is comparable to the bunches that can be produced by existing few-MeV scale electron sources. In this case, we showed that the electrons can be compressed into smaller bunches of sub-400 as durations (FWHM), each containing up to 1 fC of charge. Besides electron diffraction applications (e.g. time-resolved atomic diffraction in~\cite{Baum2017NatPhys}), these bunches could potentially serve as pre-accelerated injection sources for compact dielectric laser acceleration (DLA) schemes, in which fC-scale, few-MeV electron bunches are desirable as input~\cite{RJEngland_DLA}.
The modulated sub-fs bunches generated by our scheme can fit into the phase space acceleration buckets -- typically also of sub-laser wavelength length-scales -- which could improve the accelerated beam quality~\cite{RJEngland_DLA}. The sub-micron transverse bunch dimensions required for injection into typical optical DLA schemes can be achieved through the use of electron beam focusing optics. Our results indicate that attosecond-scale electron bunches are not inherently limited to the few-to-single-electron regime, which has been the focus of other studies. \begin{acknowledgements} We thank the National Supercomputing Center (NSCC) Singapore for the use of their computing resources. LJW acknowledges support from the Science and Engineering Research Council (SERC; grant no. 1426500054) of the Agency for Science, Technology and Research (A*STAR), Singapore. All authors made critical contributions to this manuscript and declare no competing financial interests. \end{acknowledgements}
\section{Introduction} Free boundary problems (FBP) are of great importance, both physically and mathematically. FBP are boundary value problems for partial differential equations in which an unknown moving boundary must be determined \cite{AlSo,Ca,Ru,Ta}. In this paper, we formulate a FBP for a nonlinear diffusion-convection equation, namely the Rosen-Fokas-Yortsos equation \cite{FoYo,Ro}. This equation describes fluid diffusion with convective effects in porous media and has multiple applications, for example, to ground water hydrology, oil reservoir engineering and biological problems such as drug propagation in arterial tissues. In \cite{BrTa2019,BuDeLiFi2018} a FBP on a finite interval is formulated and solved for a nonlinear diffusion-convection equation which describes drug diffusion in arterial tissues after the drug is released by an arterial stent, and the problem is reduced to a system of nonlinear integral equations. We will study a one-dimensional FBP for the diffusion-convection equation with a variable Dirichlet condition (which is the main novelty with respect to \cite{BrTa2019,BuDeLiFi2018}) at the fixed face $x=0$ and a Stefan-like condition, which has a convective term, on the free boundary. The present paper is organized as follows: In Section 2, we introduce the FBP and, through several transformations, we map the FBP for the nonlinear diffusion-convection equation into an equivalent FBP for the linear heat-diffusion equation. In Section 3, we give an equivalent integral formulation of the problem, which requires solving a system of three coupled nonlinear Volterra integral equations. Section 4 is subdivided into two subsections: in Subsection 4.1, with one unknown fixed, we prove existence and uniqueness of the solution, local in time, by using the Banach fixed point theorem; in Subsection 4.2 we use the Schauder fixed point theorem to prove that there exists at least one solution for this unknown.
We remark that the sequential transformations used in Section 2 have previously been used in different physical contexts modelled, in particular, by moving boundary problems; see for example \cite{Br1990,BrTa1998,Fo,FoRoSc,Ro,Ro2019,RoBr,RoStCl}. \section{Free boundary problem} We consider the free boundary $s=s(t)>0$, defined for $t>0$, and $u(x,t)$, which satisfy a diffusion-convection equation with the following conditions: \begin{equation} u_{t}=u^{2}(Du_{xx}-u_{x})\;\;\;,\;\;0<x<s(t)\;\;,\;\;t>0\;\;, \label{calor1} \end{equation} \begin{equation} u(0,t)=f(t)\;,\;\;t>0\;\;, \label{calor110} \end{equation} \begin{equation} u(s(t),t)=\beta>0\;,\;\;t>0\;\;, \label{tf} \end{equation} \begin{equation} Du_{x}(s(t),t)-u(s(t),t)=-\dot{s}(t)\;,\;\;t>0\;\;,\label{stefan} \end{equation} \begin{equation} u(x,0)=u_{0}(x)>\beta \;,\;\;0\leq x \leq b\;\;, \label{t2} \end{equation} \begin{equation} s(0)=b\; \label{tempborde} \end{equation} where $D$ is the diffusivity, $u_{0}$ is the initial concentration and $f=f(t)$ is the concentration at the fixed face $x=0$. We assume that: \begin{equation} f\in C^{1}[0,\sigma],\quad u_{0}\in C^{1}[0,b],\quad u_{0}(0)=f(0),\quad u_{0}(b)=\beta, \quad f(t)>\frac{3\beta}{2}\label{hip} \end{equation} Following \cite{BrTa2019,BuDeLiFi2018,Ro} we will transform this problem into the one governed by the Burgers equation.
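The chain of transformations announced above (nonlinear diffusion-convection equation $\to$ Burgers equation $\to$ linear heat equation) ultimately rests on a Hopf-Cole substitution, which will appear below in a generalized form with a time-dependent factor $C(t)$. As an independent symbolic sanity check (a sketch only, not part of the proofs), one can verify that the classical substitution $v=-D\,w_{z}/w$ maps an explicit solution of the heat equation $w_{t}=Dw_{zz}$ to a solution of the Burgers equation $v_{t}=Dv_{zz}-2vv_{z}$:

```python
import sympy as sp

z, t = sp.symbols('z t')
D, a, b = sp.symbols('D a b', positive=True)

# Explicit two-exponential solution of the heat equation w_t = D w_zz,
# with a and b arbitrary parameters of the illustrative solution
w = sp.exp(a*z + D*a**2*t) + sp.exp(b*z + D*b**2*t)
heat_residual = sp.diff(w, t) - D*sp.diff(w, z, 2)

# Classical Hopf-Cole substitution v = -D w_z / w
v = -D*sp.diff(w, z)/w
burgers_residual = sp.diff(v, t) - D*sp.diff(v, z, 2) + 2*v*sp.diff(v, z)

print(sp.simplify(heat_residual), sp.simplify(burgers_residual))  # 0 0
```

The coefficient conventions here are those of the Burgers equation obtained below; the generalized transformation of Section 2 differs from this classical form only through the factor $C(t)$ and the moving domain.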
We have: \begin{lemma} A) If $u=u(x,t)$, $s=s(t)$ is a solution to the problem (\ref{calor1})-(\ref{tempborde}) then $v=v(z,t)$, $z_{0}(t)$, $z_{1}(t)$ defined by: \begin{equation} v(z,t)=u(x,t),\label{firsttrans} \end{equation}where \begin{equation} z(x,t)= C_{1}+\int_{0}^{t}\left(u(0,\tau)-Du_{x}(0,\tau)\right)d\tau + \int_{0}^{x}\frac{1}{u(\eta,t)}d\eta\label{zeta} \end{equation} \begin{equation} z_{0}(t)=z(0,t)=C_{1}+\int^{t}_{0} \left(f(\tau)-D\frac{v_{z}(z_{0}(\tau),\tau)}{f(\tau)}\right)d\tau\label{zeta0} \end{equation} \begin{equation} z_{1}(t)=z(s(t),t)=C_{2}+(\beta+1)t-\frac{D(\beta +1)}{\beta^{2}}\int^{t}_{0} v_{z}(z_{1}(\tau),\tau)d\tau\label{zeta1} \end{equation} with $C_{1}$ an arbitrary constant, is a solution to the problem given by the Burgers equation \begin{equation} v_{t}=Dv_{zz}-2vv_{z}\;\;\;,\;\;z_{0}(t)<z<z_{1}(t)\;\;,\;\;t>0\;\;, \label{calor11} \end{equation} with the following initial and boundary conditions: \begin{equation} v(z_{0}(t),t)=f(t)\;,\;\;t>0\;\;, \label{calor111} \end{equation} \begin{equation} v(z_{1}(t),t)=\beta\;,\;\;t>0\;\;, \label{tf1} \end{equation} \begin{equation} D\frac{v_{z}(z_{1}(t),t)}{v(z_{1}(t),t)}-v(z_{1}(t),t)=-\frac{\beta}{\beta +1}\dot{z}_{1}(t)\;,\;\;t>0\;\;,\label{stefan1} \end{equation} \begin{equation} v(z,0)=v_{0}(z)\;,\;\;C_{1}\leq z\leq C_{2}\;\;, \label{t21} \end{equation} \begin{equation} z_{0}(0)=C_{1}\;,\;\;z_{1}(0)=C_{2} \label{tempborde1} \end{equation} where \begin{equation} v_{0}(z)=u_{0}(g^{-1}(z)),\quad\quad g(x)=C_{1}+\int^{x}_{0}\frac{1}{u_{0}(\eta)}d\eta \end{equation} \begin{equation} C_{2}=C_{1}+U_{0}=C_{1}+\int^{b}_{0}\frac{1}{u_{0}(\eta)}d\eta\label{c2} \end{equation} and the constants $b$, $C_{1}$ and $C_{2}$ satisfy the following relation \begin{equation} b=\int^{C_{2}}_{C_{1}}v_{0}(z) dz \end{equation} B) Conversely if $v=v(z,t)$, $z_{0}(t)$, $z_{1}(t)$ is the solution to the problem $(\ref{calor11})-(\ref{tempborde1})$ then $u=u(x,t)$, $s=s(t)$ given by \begin{equation} 
u(x,t)=v(z,t),\label{secondtrans} \end{equation} with \begin{equation} x(z,t)= \int_{z_{0}(t)}^{z} v(\eta,t)d\eta,\label{equis} \end{equation} \begin{equation} s(t)=x(z_{1}(t),t)=\int_{z_{0}(t)}^{z_{1}(t)} v(\eta,t)d\eta\ \label{yy}\end{equation} is a solution to the problem $(\ref{calor1})-(\ref{tempborde})$. \end{lemma} \begin{proof} A) From $(\ref{firsttrans})$, $(\ref{zeta})$ and by $(\ref{calor1})$ we have \[ z_{x}=\tfrac{1}{u(x,t)},\quad z_{t}=u(x,t)-Du_{x}(x,t)=v(z,t)-D\tfrac{v_{z}(z,t)}{v(z,t)}, \] and \[ u_{x}(x,t)=\tfrac{v_{z}(z,t)}{v(z,t)},\quad u_{xx}(x,t)=\tfrac{v_{zz}(z,t)}{v^{2}(z,t)}-\tfrac{v^{2}_{z}(z,t)}{v^{3}(z,t)}, \] \[ u_{t}(x,t)=v_{t}(z,t)+v_{z}\left(v(z,t)-D\tfrac{v_{z}(z,t)}{v(z,t)}\right). \] Then, from (\ref{calor1}) we get (\ref{calor11}), which is the Burgers equation for the dependent variable $v(z,t)$. Taking into account $(\ref{zeta})$, the domain $D=\left\lbrace(x,t)/0<x<s(t), t>0\right\rbrace$ for $u(x,t)$ is transformed into the domain $D^{*}=\left\lbrace(z,t)/z_{0}(t)<z<z_{1}(t),t>0\right\rbrace$ for $v(z,t)$, where $z_{0}(t)$ and $z_{1}(t)$ are given by \[ z_{0}(t)=z(0,t)=C_{1}+\int_{0}^{t}\left(u(0,\tau)-Du_{x}(0,\tau)\right)d\tau \] \[ z_{1}(t)=z(s(t),t)=C_{1}+\int_{0}^{t}\left(u(0,\tau)-Du_{x}(0,\tau)\right)d\tau + \int_{0}^{s(t)}\frac{1}{u(\eta,t)}d\eta\] If we differentiate $z_{1}$ with respect to $t$ and use $(\ref{calor1})$ and the conditions $(\ref{calor110})$-$(\ref{tempborde})$, we obtain the following relation \[ \dot{z}_{1}(t)=\tfrac{\beta+1}{\beta}\dot{s}(t). \] Then, from $(\ref{stefan})$ we have $(\ref{stefan1})$ and the expression $(\ref{zeta1})$ for $z_{1}(t)$, where $z_{1}(0)=C_{1}+\int_{0}^{b}\frac{1}{u_{0}(\eta)}d\eta=C_{2}$. Equations $(\ref{calor111})$ and $(\ref{tf1})$ follow immediately from $(\ref{calor110})$ and $(\ref{tf})$ respectively.
For $t=0$ we have that \[z=C_{1}+ \int_{0}^{x}\frac{1}{u_{0}(\eta)}d\eta=g(x), \] then $(\ref{t2})$ is equivalent to $v(z,0)=u_{0}\left(g^{-1}(z)\right)$ for $C_{1}\leq z \leq C_{2}$ where $C_{2}=C_{1}+ \int_{0}^{b}\frac{1}{u_{0}(\eta)}d\eta$. Therefore $(\ref{t21})$ holds. To prove B) we consider $(\ref{secondtrans})$, $(\ref{equis})$ and conditions $(\ref{calor11})$-$(\ref{tempborde1})$, which are satisfied by $v=v(z,t)$, $z_{0}(t)$, $z_{1}(t)$. We have \[x_{z}=v(z,t), \quad x_{t}=Dv_{z}-v^{2}(z,t). \] Moreover, for $z=z_{0}(t)$ we have $x=0$ and for $z=z_{1}(t)$ we have $x=\int_{z_{0}(t)}^{z_{1}(t)} v(\eta,t)d\eta=s(t).$ Since \[ v_{t}=Du^{2}_{x}u-u_{x}u^{2}+u_{t}, \quad v_{z}=u_{x}u,\quad v_{zz}=u_{xx}u^{2}+u^{2}_{x} u \] then $(\ref{calor11})$ yields $(\ref{calor1}).$ The conditions $(\ref{calor110})$, $(\ref{tf})$ and $(\ref{t2})$ follow immediately from $(\ref{calor111})$, $(\ref{tf1})$ and $(\ref{t21})$ respectively. To prove $(\ref{stefan})$, from $(\ref{yy})$ we calculate $\dot{s}(t)$ and use $(\ref{calor11})$ and $(\ref{tf1})$. We have \[ \dot{s}(t)=v(z_{1}(t),t)\dot{z}_{1}(t)-v(z_{0}(t),t)\dot{z}_{0}(t)+\int_{z_{0}(t)}^{z_{1}(t)} v_t(\eta,t)d\eta\] \[=\beta-D\frac{v_{z}(z_{1}(t),t)}{\beta}= \beta-Du_{x}(s(t),t) \] and $(\ref{stefan})$ holds. \end{proof} \begin{remark}Eq. $(\ref{zeta})$ is equivalent to the relations \begin{equation} z_{x}=\frac{1}{u(x,t)},\quad\quad\quad z_{t}=u(x,t)-Du_{x}(x,t). \end{equation} Eq. $(\ref{equis})$ is equivalent to \begin{equation} x_{z}=v(z,t), \quad x_{t}=Dv_{z}-v^{2}(z,t).
\end{equation} \end{remark} Now we introduce the Galilean transformation given by \begin{equation} V(y,t)=v(z,t)-\beta, \quad\quad\quad y=z-2\beta t\quad\quad t>0\label{2} \end{equation} to obtain the following result: \begin{lemma} Under the transformation $(\ref{2})$ the problem (\ref{calor11})-(\ref{c2}) is equivalent to the following FBP: \begin{equation} V_{t}=DV_{yy}-2VV_{y}\;\;\;,\;\;y_{0}(t)<y<y_{1}(t)\;\;,\;\;t>0\;\;, \label{cal} \end{equation} \begin{equation} V(y_{0}(t),t)=f(t)-\beta\;,\;\;t>0\;\;, \label{cal1} \end{equation} \begin{equation} V(y_{1}(t),t)=0\;,\;\;t>0\;\;, \label{t1} \end{equation} \begin{equation} D\frac{V_{y}(y_{1}(t),t)}{\beta}=\frac{\beta(1-\beta)-\beta\dot{y}_{1}(t)}{\beta +1}\;,\;\;t>0\;\;,\label{ste1} \end{equation} \begin{equation} V(y,0)=V_{0}(y)\;,\;\;C_{1}\leq y \leq C_{2}\;\;, \label{tt21} \end{equation} \begin{equation} y_{0}(0)=C_{1}\;,\;\;y_{1}(0)=C_{2} \label{temp} \end{equation} where \begin{equation} V_{0}(y)=v_{0}(y)-\beta \end{equation} \begin{equation} y_{0}(t)=C_{1}-2\beta t+\int^{t}_{0} \left(f(\tau)-D\frac{V_{y}(y_{0}(\tau),\tau)}{f(\tau)}\right)d\tau\label{f} \end{equation} \begin{equation} y_{1}(t)=C_{2}+(1-\beta)t-\frac{D(\beta +1)}{\beta^{2}}\int^{t}_{0} V_{y}(y_{1}(\tau),\tau)d\tau.\label{fr} \end{equation} \end{lemma} \begin{proof} The Galilean transformation $(\ref{2})$ leaves the Burgers equation $(\ref{calor11})$ invariant. The free boundaries $y_{0}(t)$ and $y_{1}(t)$ given by $(\ref{f})$-$(\ref{fr})$ are obtained from $(\ref{zeta0})$-$(\ref{zeta1})$. The conditions $(\ref{cal1})$-$(\ref{temp})$ follow from $(\ref{calor111})$-$(\ref{tempborde1})$. Conversely, if we define \[v(z,t)=V(y,t)+\beta, \quad\quad\quad z=y+2\beta t\quad\quad t>0, \] from (\ref{cal})-(\ref{fr}) we obtain (\ref{calor11})-(\ref{c2}) with $z_{0}(t)$ and $z_{1}(t)$ given by $(\ref{zeta0})$ and $(\ref{zeta1})$ respectively.
\end{proof} Let us now transform problem $(\ref{cal})-(\ref{fr})$ into the one governed by a heat-diffusion equation using the Hopf-Cole transformation given by \begin{equation} w(y,t)=C(t)V(y,t)\eta(y,t), \quad y_{0}(t)\leq y \leq y_{1}(t)\;\;,\;\;t>0,\label{tres} \end{equation} with \begin{equation} C(t)=1-\int^{t}_{0} w_{y}(y_{1}(\tau),\tau)d\tau,\label{ce} \end{equation} and \begin{equation} \eta(y,t)=\exp\left(\tfrac{1}{D}\int_{y}^{y_{1}(t)}V(\xi,t)d\xi\right).\label{eta} \end{equation} We have the following result: \begin{theorem} Under transformation $(\ref{tres})-(\ref{eta})$ problem $(\ref{cal})-(\ref{fr})$ is equivalent to the free boundary problem $(\ref{calu})-(\ref{free})$ given by: \begin{equation} w_{t}=Dw_{yy}\;\;\;,\;\;y_{0}(t)<y<y_{1}(t)\;\;,\;\;t>0\;\;, \label{calu} \end{equation} \begin{equation} w(y_{0}(t),t)=(f(t)-\beta)\left(C(t)+\frac{1}{D}\int_{y_{0}(t)}^{y_{1}(t)}w(\xi,t)d\xi\right)\;,\;\;t>0\;\;, \label{cal1.} \end{equation} \begin{equation} w(y_{1}(t),t)=0\;,\;\;t>0\;\;, \label{t1.} \end{equation} \begin{equation} \frac{Dw_{y}(y_{1}(t),t)}{\beta C(t)}=\frac{\beta(1-\beta)-\beta\dot{y}_{1}(t)}{\beta +1}\;,\;\;t>0\;\;,\label{ste1.} \end{equation} \begin{equation} w(y,0)=F(y)\;,\;\;C_{1}\leq y \leq C_{2}\;\;, \label{tt21.} \end{equation} \begin{equation} y_{0}(0)=C_{1}\;,\;\;y_{1}(0)=C_{2} \label{tempu.} \end{equation} where \begin{equation} F(y)=V_{0}(y)\exp\left(\tfrac{1}{D}\int_{y}^{C_{2}}V_{0}(\xi)d\xi\right)=V_{0}(y)\left(1-\frac{1}{D}\int_{y}^{C_{2}}w(\xi,0)d\xi\right)\label{efemay} \end{equation} and the free boundaries $y_{0}=y_{0}(t)$ and $y_{1}=y_{1}(t)$ are given by: \begin{equation} y_{0}(t)=C_{1}-\beta^{2}\int^{t}_{0} \frac{1}{f(\tau)}d\tau-D\int^{t}_{0}\frac{w_{y}(y_{0}(\tau),\tau)}{w(y_{0}(\tau),\tau)}\left(1-\tfrac{\beta}{f(\tau)}\right)d\tau,\label{freee} \end{equation} \begin{equation} y_{1}(t)=C_{2}+(1-\beta)t+\frac{D(\beta +1)}{\beta^{2}}\ln\left(1-\int^{t}_{0} w_{y}(y_{1}(\tau),\tau)d\tau\right).\label{free}
\end{equation} \end{theorem} \begin{proof} To prove the equivalence of the two problems we will deduce the inverse transformation to the relation $(\ref{tres})$. Considering the definition $(\ref{eta})$, we have \[ \ln\left(\eta(y,t)\right)=\tfrac{1}{D}\int_{y}^{y_{1}(t)}V(\xi,t)d\xi \] then \[ \eta_{y}(y,t)=-\tfrac{1}{D}V(y,t)\eta(y,t)=-\tfrac{1}{D}\frac{w(y,t)}{C(t)} .\] Integrating with respect to $y$, it follows that \[ \eta(y,t)=\frac{C(t)+\frac{1}{D}\int_{y}^{y_{1}(t)}w(\xi,t)d\xi}{C(t)}. \] Therefore, we have that the inverse relation to the generalized Hopf-Cole transformation $(\ref{tres})$ is expressed by: \begin{equation} V(y,t)=\frac{w(y,t)}{C(t)+\frac{1}{D}\int_{y}^{y_{1}(t)}w(\xi,t)d\xi}.\label{inverse} \end{equation} Under transformation $(\ref{inverse})$ the Burgers equation $(\ref{cal})$ is mapped into the linear heat-diffusion equation $(\ref{calu}).$ The initial and boundary conditions $(\ref{cal1.})-(\ref{tempu.})$ are easily obtained from $(\ref{cal1})-(\ref{temp})$. The expressions $(\ref{freee})$ and $(\ref{free})$ for the free boundaries are obtained from $(\ref{f})$ and $(\ref{fr})$ respectively. The converse is proved analogously. \end{proof} \section{Integral formulation} In this section, we give an integral formulation of the free boundary problem $(\ref{calu})-(\ref{free})$. We have the following equivalence theorem. \begin{theorem} Assume that $(\ref{hip})$ holds and that $0<D<2$.
The solution to the free boundary problem $(\ref{calu})-(\ref{free})$ has the following integral representation \begin{equation} w(y,t)=\int\nolimits_{C_{1}}^{C_{2}}G(y,t;\xi ,0)F(\xi )d\xi +D \int_{0}^{t}\phi_{1}(\tau )G(y,t;y_{1}(\tau ),\tau )d\tau \label{z} \end{equation} \[ +\beta^{2}\int_{0}^{t} \frac{h(\tau)}{f(\tau)} G(y,t;y_{0}(\tau ),\tau )d\tau -D\beta\int_{0}^{t} \frac{\phi_{2}(\tau)}{f(\tau)} G(y,t;y_{0}(\tau ),\tau ) d\tau \] \[-D\int_{0}^{t} h(\tau) N_{y}(y,t;y_{0}(\tau ),\tau ) d\tau, \] with \begin{equation} h(t)=(f(t)-\beta)\left(C(t)+\frac{1}{D}\int_{y_{0}(t)}^{y_{1}(t)}w(\xi,t)d\xi\right),\label{ache} \end{equation} \begin{equation} y_{0}(t)=C_{1}-\beta^{2}\int^{t}_{0} \frac{1}{f(\tau)}d\tau-D\int^{t}_{0}\tfrac{\phi_{2}(\tau)}{h(\tau)}\left(1-\tfrac{\beta}{f(\tau)}\right)d\tau,\label{ycero} \end{equation} \begin{equation} y_{1}(t)=C_{2}+(1-\beta)t+\tfrac{D(\beta +1)}{\beta^{2}}ln\left(1-\int^{t}_{0} \phi_{1}(\tau)d\tau\right) \label{ese} \end{equation} and $\phi_{1}$, $\phi_{2}$ are defined by \begin{equation} \phi_{1}\left(t\right) =\frac{\partial w }{\partial y}\left( y_{1}(t),t\right) \;\;,\;\;\phi_{2}\left( t\right) = \frac{\partial w }{\partial y}\left( y_{0}(t),t\right) \label{def} \end{equation} if and only if it satisfies the following system of two Volterra integral equations: \[ \phi_{1}\left(t\right) =\frac{2}{2-D }\left\{ \int\nolimits_{C_{1}}^{C_{2}}N(y_{1}(t),t;\xi ,0)F^{\prime }(\xi )d\xi+ D \int_{0}^{t}\phi_{1}(\tau )G_{y}(y_{1}(t),t;y_{1}(\tau ),\tau )d\tau \right. 
\] \[ +\beta^{2}\int_{0}^{t} \frac{h(\tau)}{f(\tau)} G_{y}(y_{1}(t),t;y_{0}(\tau ),\tau )d\tau -D\beta\int_{0}^{t} \frac{\phi_{2}(\tau)}{f(\tau)} G_{y}(y_{1}(t),t;y_{0}(\tau ),\tau ) d\tau , \] \begin{equation}\left.-\int_{0}^{t} h'(\tau) N(y_{1}(t),t;y_{0}(\tau ),\tau ) d\tau \right\rbrace, \label{ecintegralf} \end{equation} \[ \phi_{2}\left(t\right) =\frac{2f(t)}{2f(t)-D\beta}\left\{-\beta^{2}\frac{h(t)}{f(t)} +\int\nolimits_{C_{1}}^{C_{2}}N(y_{0}(t),t;\xi ,0)F^{\prime }(\xi )d\xi \right. \] \[ + D \int_{0}^{t}G_{y} (y_{0}(t),t;y_{1}(\tau ),\tau )\phi_{1}(\tau )d\tau +\beta^{2}\int_{0}^{t}\frac{h(\tau)}{f(\tau)}G_{y}(y_{0}(t),t;y_{0}(\tau ),\tau )d\tau \] \begin{equation} \left. - D\beta \int_{0}^{t}\frac{\phi_{2}(\tau)}{f(\tau)}G_{y}(y_{0}(t),t;y_{0}(\tau ),\tau )d\tau -\int_{0}^{t}h'(\tau)N(y_{0}(t),t;y_{0}(\tau ),\tau )d\tau \right\}, \label{ecintegralf1} \end{equation} where $G$, $N$ are the Green and Neumann functions respectively, and $K$ is the fundamental solution to the heat equation, defined by \begin{equation} G\left( x,t,\xi ,\tau \right) =K\left( x,t,\xi ,\tau \right) -K\left( -x,t,\xi ,\tau \right), \label{defG} \end{equation} \begin{equation} N\left( x,t,\xi ,\tau \right) =K\left( x,t,\xi ,\tau \right) +K\left( -x,t,\xi ,\tau \right), \label{defN} \end{equation} \begin{equation} K\left( x,t,\xi ,\tau \right) =\left\{ \begin{array}{ll} \frac{1}{2\sqrt{\pi D\left( t-\tau \right) }}\exp \left( -\frac{\left( x-\xi \right) ^{2}}{4D\left( t-\tau \right) }\right) & t>\tau \\ 0 & t\leq \tau \end{array} \right. \label{defK} \end{equation} and $y_{0}$, $y_{1}$ are given by $\left( \ref{ycero}\right) $ and $\left( \ref{ese}\right) $ respectively.
Moreover, the function $h(t)=w(y_{0}(t),t)$ must satisfy the integral relation \begin{equation} h(t)= (f(t)-\beta)\left(1-\int_{0}^{t}\phi_{1}(\tau)d\tau+\frac{1}{D}\int_{y_{0}(t)}^{y_{1}(t)}w(y,t)dy\right).\label{eqache} \end{equation} \end{theorem} \begin{proof} Let $w(y,t)$, $y_0(t)$, $y_1(t)$ be the solution to the problem $(\ref{calu})-(\ref{free})$. We integrate the Green identity \begin{equation} D\left( Gw_{\xi}-wG_{\xi }\right) _{\xi }-\left( Gw \right) _{\tau }=0\;\; \label{ngreen} \end{equation} over the domain \[ D_{t,\epsilon}=\left\{ \left( \xi ,\tau \right) \;/\; y_{0}(\tau )<\xi <y_{1}\left( \tau \right) ,\; \epsilon <\tau <t-\epsilon \right\} \;(\epsilon >0), \] and we let $\epsilon \rightarrow 0$ to obtain the integral representation for $w(y,t)$ \cite{Fr1959,Ru} \begin{equation} w(y,t)=\int\nolimits_{C_{1}}^{C_{2}}G(y,t;\xi ,0)w(\xi,0) d\xi +D \int_{0}^{t}w_{\xi}(y_{1}(\tau),\tau)G(y,t;y_{1}(\tau ),\tau )d\tau \label{now} \end{equation} \[ +\beta^{2}\int_{0}^{t} \frac{w(y_{0}(\tau),\tau)}{f(\tau)} G(y,t;y_{0}(\tau ),\tau )d\tau -D\beta\int_{0}^{t} \frac{w_{\xi}(y_{0}(\tau),\tau)}{f(\tau)} G(y,t;y_{0}(\tau ),\tau ) d\tau \] \[+D\int_{0}^{t} w(y_{0}(\tau),\tau) G_{\xi}(y,t;y_{0}(\tau ),\tau ) d\tau . \] By using the definitions of $\phi_{1}$ and $\phi_{2}$ given by $(\ref{def})$, the definition of $h$ and the boundary conditions, we obtain $(\ref{z}).$ If we differentiate (\ref{now}) with respect to $y$ and let $y\rightarrow y_{0}^{+}(t)$ and $y\rightarrow y_{1}^{-}(t),$ by using the jump relations \cite{Fr1959} we obtain the system of integral equations $\left( \ref{ecintegralf}\right) $ and $\left( \ref{ecintegralf1}\right) $ for $\phi_{1}$ and $\phi_{2}.$ Moreover, from (\ref{ce}) and (\ref{ache}) we have the equation (\ref{eqache}).
Conversely, the function $w(y,t)$ defined by (\ref{z}), where $\phi_{1}$ and $\phi_{2}$ are the solutions of $\left( \ref{ecintegralf}\right) $ and $\left( \ref{ecintegralf1}\right) ,$ satisfies the conditions $(\ref{calu})$, $(\ref{ste1.})$-$(\ref{tempu.})$. In order to prove the conditions $(\ref{cal1.})$ and $(\ref{t1.})$ we define \[ \mu _{1}\left( t\right) =w(y_{1}(t),t)\;\text{ and }\;\mu _{2}\left( t\right) =h(t)-w(y_{0}(t),t).\; \] \\ If we integrate the Green identity (\ref{ngreen}) over the domain $D_{t,\varepsilon }$ $\left( \varepsilon >0\right) $ and we let $\varepsilon \rightarrow 0,$ we obtain that \[ w(y,t)=\int\nolimits_{C_{1}}^{C_{2}}G(y,t;\xi ,0)w(\xi,0)d\xi +D\int_{0}^{t} G(y,t;y_{1}(\tau ),\tau )\phi_{1}(\tau)d\tau \] \[-D\int_{0}^{t} G_{y}(y,t;y_{1}(\tau ),\tau )w (y_{1}(\tau ),\tau )d\tau +\int_{0}^{t} G(y,t;y_{1}(\tau ),\tau )w(y_{1}(\tau),\tau)y^{'}_{1}(\tau)d\tau \] \[ -\int_{0}^{t}G(y,t;y_{0}(\tau ),\tau )\left[w(y_{0}(\tau),\tau)y^{'}_{0}(\tau) -D\phi_{2}(\tau)\right]d\tau \] \begin{equation} +D\int_{0}^{t} G_{\xi}(y,t;y_{0}(\tau ),\tau )w(y_{0}(\tau),\tau) d\tau. \label{zbis} \end{equation} \\ Then, if we compare this last expression (\ref{zbis}) with (\ref{z}) we deduce that \[ \int_{0}^{t}G(y,t;y_{0}(\tau ),\tau )\left[\frac{\beta^{2}}{f(\tau)}\mu_{2}(\tau)-\frac{D\phi_{2}(\tau)}{f(\tau)}\left(\beta+\frac{w(y_{0}(\tau),\tau)(f(\tau)-\beta)}{h(\tau)}\right)+D\phi_{2}(\tau)\right]d\tau \] \[ +D\int_{0}^{t} G_{y}(y,t;y_{0}(\tau ),\tau )\mu_{2}(\tau) d\tau+D\int_{0}^{t} G_{y}(y,t;y_{1}(\tau ),\tau )\mu_{1}(\tau)d\tau \] \begin{equation} -\int_{0}^{t} G(y,t;y_{1}(\tau ),\tau )\mu_{1}(\tau)\left[(1-\beta)-\frac{D\phi_{1}(\tau)(\beta+1)}{\beta^{2}C(\tau)}\right] d\tau =0.
\label{des} \end{equation} \\ By letting $y\rightarrow y_{1}^{-}(t)$ and $y\rightarrow y_{0}^{+}(t)$ in (\ref{des}) and using the jump relations, we obtain that $\mu _{1}$ and $\mu _{2}$ must satisfy the following system of Volterra integral equations: \begin{equation} \mu _{1}(t)=\frac{-2}{D}\int_{0}^{t} \left\lbrace DG_{y}(y_{1}(t),t;y_{1}(\tau ),\tau )-G(y_{1}(t),t;y_{1}(\tau ),\tau )\left[(1-\beta)-\tfrac{D\phi_{1}(\tau)(\beta+1)}{\beta^{2}C(\tau)}\right]\right\rbrace\mu_{1}(\tau) \label{fiuno} \end{equation} \[ + \left\lbrace DG_{y }(y_{1}(t),t;y_{0}(\tau ),\tau )+G(y_{1}(t),t;y_{0}(\tau ),\tau )\left[\tfrac{\beta}{f(\tau)}-D\phi_{2}(\tau)\left(\tfrac{\beta}{f(\tau)}-\tfrac{1}{h(\tau)}\right)\right]\right\rbrace\mu_{2}(\tau)d\tau \] \\ \begin{equation} \mu _{2}(t)=\frac{2}{D}\int_{0}^{t} \left\lbrace DG_{y}(y_{0}(t),t;y_{0}(\tau ),\tau )-G(y_{0}(t),t;y_{0}(\tau ),\tau )\right.\label{fidos} \end{equation} \[ \left.\left[\tfrac{\beta}{f(\tau)}-D\phi_{2}(\tau)\left(\tfrac{\beta}{f(\tau)}-\tfrac{1}{h(\tau)}\right)\right]\right\rbrace \mu_{2}(\tau) \] \[ +\left\lbrace D G_{y}(y_{0}(t),t;y_{1}(\tau ),\tau )- G(y_{0}(t),t;y_{1}(\tau ),\tau )\left[(1-\beta)-\frac{D\phi_{1}(\tau)(\beta+1)}{\beta^{2}C(\tau)}\right]\right\rbrace\mu_{1}(\tau) d\tau . \] \\ Following \cite{Mi}, it is easy to see that there exists a unique solution $\mu _{1}\equiv \mu _{2}\equiv 0$ to the system of Volterra integral equations (\ref{fiuno})-(\ref{fidos}). Then $(\ref{cal1.})$ and $(\ref{t1.})$ are verified and the result holds.\medskip \end{proof} \begin{section}{Existence of the solution} In order to prove the existence of a solution $w=w(y,t)$, $y=y_{0}(t)$ and $y=y_{1}(t)$ of $(\ref{calu})-(\ref{free})$, taking into account the result of Theorem 3.1, we will demonstrate that there exists at least a local solution $\phi_{1}$, $\phi_{2}$ and $h$ to the coupled nonlinear integral equations $\left( \ref{ecintegralf}\right) $, $\left( \ref{ecintegralf1}\right)$ and $(\ref{eqache})$.
\\ We will proceed in the following way: for fixed positive constants $H$, $R$, $S$ and $\sigma$ we define the set $\Pi=\Pi(H,R,S,\sigma)$ given by \begin{equation} \Pi :=\left\lbrace h\in C^{1}[0,\sigma]/h(t)\geq H , \left\|h\right\|\leq R,\left\|h'\right\|\leq S\right\rbrace \end{equation} where $\left\|h\right\|=\max \limits_{ t\in[0,\sigma]}\left|h(t)\right|$. Clearly $\Pi$ is a compact and convex set in $C^{1}[0,\sigma]$. \\ For each fixed function $h\in\Pi_1=\left\lbrace h\in C^{1}[0,1]/h(t)\geq H , \left\|h\right\|\leq R,\left\|h'\right\|\leq S\right\rbrace$ we will use the Banach fixed point theorem in order to prove that there exist unique solutions $\phi_{1},\;\phi_{2}\;\in C^{0}\left[ 0,\sigma \right]$ to the system of two Volterra integral equations (\ref{ecintegralf}) and (\ref{ecintegralf1}). Then, for suitable $H$, $R$, $S$ and $\sigma$, by using Schauder's fixed point theorem we will demonstrate that there exists at least one solution $h \in \Pi_{1}$ of (\ref{eqache}). \begin{subsection}{Existence and uniqueness of $\phi_{1},\;\phi_{2}$} We consider the Banach space \[ \textbf{C}[0,\sigma]=\left\{ \stackrel{\longrightarrow }{\phi^{*}}=\binom{\phi_{1}}{\phi_{2}}/\;\phi_{i}:\left[ 0,\sigma \right] \rightarrow {\Bbb R} ,\quad i=1,2,\;\text{continuous} \right\} \] with the norm \[ \left\| \stackrel{\longrightarrow }{\phi^{*}}\right\| _{\sigma }:=\max \limits_{ t\in \left[ 0,\sigma \right] }\left| \phi_{1}(t)\right| +\max\limits_{ t\in \left[ 0,\sigma \right] }\left| \phi_{2}(t)\right| \] and the subset \[ C_{M,\sigma}=\left\{ \stackrel{\longrightarrow }{\phi^{*}}\in \textbf{C}[0,\sigma] /\left\| \stackrel{\longrightarrow }{\phi^{*}}\right\|_{\sigma }\leq M\right\} \] with $\sigma$ and $M$ positive numbers to be determined.
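Before defining the map, the contraction machinery can be illustrated numerically. The following sketch is illustrative only: it applies Picard (successive approximation) iteration with trapezoidal quadrature to the toy scalar Volterra equation $\phi(t)=1+\int_{0}^{t}\phi(\tau)\,d\tau$, whose exact solution is $e^{t}$, rather than to the system $(\ref{ecintegralf})$-$(\ref{ecintegralf1})$:

```python
import numpy as np

# Toy Volterra equation of the second kind: phi(t) = 1 + int_0^t phi(tau) dtau,
# with exact solution exp(t). We approximate its fixed point by Picard
# iteration with trapezoidal quadrature on [0, sigma].
sigma, n = 1.0, 201
t = np.linspace(0.0, sigma, n)
dt = t[1] - t[0]

def picard_map(phi):
    """One application of the integral operator (trapezoid rule)."""
    cum = np.concatenate(([0.0], np.cumsum((phi[1:] + phi[:-1]) * dt / 2.0)))
    return 1.0 + cum

phi = np.zeros(n)
for _ in range(40):          # iterates converge geometrically on a short interval
    phi = picard_map(phi)

print(np.max(np.abs(phi - np.exp(t))))  # only the O(dt^2) quadrature error remains
```

The same principle underlies the argument below: on a sufficiently short time interval the integral operator is a contraction, so the iterates converge to the unique fixed point.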
\\ We define the map $\chi:C_{M,\sigma }\longrightarrow C_{M,\sigma }$ such that \[ \chi\left( \stackrel{\longrightarrow }{\phi^{*}}\right)(t) =\binom{\chi_{1}(\phi_{1}(t),\phi_{2}(t))}{\chi_{2}(\phi_{1}(t),\phi_{2}(t))} \] where \[ \chi_{1}(\phi_{1}(t),\phi_{2}(t))=\tfrac{2}{2-D }\left\{ \int\nolimits_{C_{1}}^{C_{2}}N(y_{1}(t),t;\xi ,0)F^{\prime }(\xi )d\xi+ D \int_{0}^{t}\phi_{1}(\tau )G_{y}(y_{1}(t),t;y_{1}(\tau ),\tau )d\tau \right. \] \[ +\beta^{2}\int_{0}^{t} \frac{h(\tau)}{f(\tau)} G_{y}(y_{1}(t),t;y_{0}(\tau ),\tau )d\tau -D\beta\int_{0}^{t} \frac{\phi_{2}(\tau)}{f(\tau)} G_{y}(y_{1}(t),t;y_{0}(\tau ),\tau ) d\tau , \] \begin{equation}\left.-\int_{0}^{t} h'(\tau) N(y_{1}(t),t;y_{0}(\tau ),\tau ) d\tau \right\rbrace, \label{F1} \end{equation} \[ \chi_{2}(\phi_{1}(t),\phi_{2}(t))= \frac{2f(t)}{2f(t)-D\beta}\left\{-\beta^{2}\frac{h(t)}{f(t)} +\int\nolimits_{C_{1}}^{C_{2}}N(y_{0}(t),t;\xi ,0)F^{\prime }(\xi )d\xi \right. \] \[ + D \int_{0}^{t}G_{y} (y_{0}(t),t;y_{1}(\tau ),\tau )\phi_{1}(\tau )d\tau +\beta^{2}\int_{0}^{t}\frac{h(\tau)}{f(\tau)}G_{y}(y_{0}(t),t;y_{0}(\tau ),\tau )d\tau \] \begin{equation} \left. - D\beta \int_{0}^{t}\frac{\phi_{2}(\tau)}{f(\tau)}G_{y}(y_{0}(t),t;y_{0}(\tau ),\tau )d\tau -\int_{0}^{t}h'(\tau)N(y_{0}(t),t;y_{0}(\tau ),\tau )d\tau \right\}. \label{F2} \end{equation} We will prove that, for suitable $M$ and $\sigma$, the map $\chi$ is well defined and is a contraction; therefore, by the Banach fixed point theorem, it has a unique fixed point. Firstly, we give some preliminary results. \begin{lemma}\label{cotasy} Let $f(t)>\frac{3\beta}{2}$, $0<D<2$ and $\phi_{i}\in C^{0}\left[ 0,\sigma \right]$, $\max \limits_{t\in \left[ 0,\sigma \right]}\left| \phi_{i}(t)\right| \leq M$, $(i=1,2)$.
If $$2(1+\beta)\left(1+\frac{M}{\beta^{2}}\right)\sigma \leq C_{2},\quad\quad 2\left(\beta+2\frac{MD}{H}\right)\sigma\leq C_{1}\;$$ then $y_{0}$ and $y_{1}$ defined by (\ref{ycero}) and (\ref{ese}) satisfy \begin{equation} \left| y_{0}(t)-y_{0}(\tau) \right| \leq \left(\beta+2\tfrac{DM}{H}\right)\left| t-\tau \right| \text{ \ ,\ }\forall \tau ,t\in \left[ 0,\sigma \right], \label{io} \end{equation} \begin{equation} \tfrac{C_{1}}{2}\leq y_{0}(t) \leq 3\tfrac{C_{1}}{2},\forall t\in \left[ 0,\sigma \right], \label{io1} \end{equation} \begin{equation} \left| y_{1}(t)-y_{1}(\tau) \right| \leq (1+\beta)\left(1+\tfrac{M}{\beta^{2}}\right)\left|t-\tau \right| \text{ \ ,\ }\forall \tau ,t\in \left[ 0,\sigma \right],\label{io2} \end{equation} \begin{equation} \tfrac{C_{2}}{2}\leq y_{1}(t)\leq 3\tfrac{C_{2}}{2},\text{ }\forall t\in \left[ 0,\sigma \right].\label{io3} \end{equation} \end{lemma} \begin{proof} The result follows immediately from the definitions (\ref{ycero})-(\ref{ese}) and the assumptions on the data.
\end{proof} To prove the following lemmas we need to use the classical inequality \begin{equation} \dfrac{\exp \left( \frac{-x^{2}}{\alpha \left( t-\tau \right) }\right) }{\left( t-\tau \right) ^{\frac{n}{2}}}\leq \left( \frac{n\alpha }{2ex^{2}}\right) ^{^{\frac{n}{2}}}\;,\;\alpha ,x>0\;,\;t>\tau \;,\;n\in {\Bbb N.} \label{exp} \end{equation} \begin{lemma}\label{cotasint} Let $\sigma \leq 1.$ For each function $h\in \Pi_{1}$, under the hypothesis of Lemma $\ref{cotasy}$ and $C_{1}<\frac{U_{0}}{2}$, the following properties are satisfied \begin{equation} \int\nolimits_{C_{1}}^{C_{2}}\left| F^{\prime }(\xi )\right| \left| N(y_{1}(t),t;\xi ,0)\right| d\xi \leq \left\| F^{\prime }\right\| \leq A_{1}(u_{0},U_{0},\beta, D), \label{i} \end{equation} \begin{equation} D\int_{0}^{t}\left| G_{y}(y_{1}(t),t;y_{1}(\tau ),\tau )\phi_{1}(\tau )\right| d\tau \leq A_{2}(M,D,U_{0},C_{2})\;\sqrt{\sigma}, \label{olvidada} \end{equation} \begin{equation} \beta^{2}\int_{0}^{t}\left| G_{y}(y_{1}(t),t;y_{0}(\tau ),\tau )\frac{h(\tau )}{f(\tau)}\right| d\tau \leq A_{3}(R,D,\beta,C_{2},C_{1})\;\sqrt{\sigma} ,\label{olv} \end{equation} \begin{equation} \beta D\int_{0}^{t}\left| G_{y}(y_{1}(t),t;y_{0}(\tau ),\tau )\frac{\phi_{2}(\tau )}{f(\tau)}\right| d\tau \leq A_{4}(M,D,C_{2},C_{1})\;\sqrt{\sigma},\label{olv1} \end{equation} \medskip \begin{equation} \int_{0}^{t}\left| h'(\tau )\right| \left| N(y_{1}(t),t;y_{0}(\tau ),\tau )\right| d\tau \leq A_{5}(S,D)\sqrt{\sigma} ,\label{ii} \end{equation} \medskip \begin{equation} \int\nolimits_{C_{1}}^{C_{2}}\left| F^{\prime }(\xi )\right| \left| N(y_{0}(t),t;\xi ,0)\right| d\xi \leq \left\| F' \right\| \leq A_{1}(u_{0},U_{0},\beta, D), \label{iii} \end{equation} \medskip \begin{equation} D\int_{0}^{t}\left| G_{y}(y_{0}(t),t;y_{1}(\tau ),\tau )\phi_{1}(\tau )\right| d\tau \leq A_{4}(D,M,C_{1},C_{2})\sqrt{\sigma}, \label{olvidada1} \end{equation} \begin{equation} \beta^{2}\int_{0}^{t}\left| G_{y}(y_{0}(t),t;y_{0}(\tau
)\frac{h(\tau )}{f(\tau)}\right| d\tau \leq A_{6}(R,M,D,\beta,C_{1})\;\sqrt{\sigma} ,\label{olv2} \end{equation} \begin{equation} \beta D\int_{0}^{t}\left| G_{y}(y_{0}(t),t;y_{0}(\tau ),\tau )\frac{\phi_{2}(\tau )}{f(\tau)}\right| d\tau \leq A_{7}(M,D,C_{1})\;\sqrt{\sigma},\label{delhoy} \end{equation} \medskip \begin{equation} \int_{0}^{t}\left| h'(\tau )\right| \left| N(y_{0}(t),t;y_{0}(\tau ),\tau )\right| d\tau \leq A_{5}(S,D)\sqrt{\sigma} ,\label{iibis} \end{equation} \begin{equation} \beta^{2}\frac{h(t)}{f(t)}\leq \beta R, \end{equation} \medskip where \[ A_{1}(u_{0},U_{0},\beta, D)=\exp\left(\tfrac{(\left\|u_{0}\right\|+\beta) U_{0}}{D}\right) \left[\left\|\frac{u_{0}^{'}}{u_{0}}\right\|+\frac{\left\|u_{0}\right\|+\beta}{D} \right], \] \[ A_{2}(M,D,U_{0},C_{2})=\frac{M\sqrt{D}}{2\sqrt{\pi }}\left[ 2M+\frac{3}{C_{2}^{2}}\left( \frac{2D}{3e}\right) ^{3/2}\right] , \] \[ A_{3}(R,D,\beta,C_{2},C_{1})=\frac{R\beta}{2\sqrt{D\pi}}(A_{31}+A_{32}), \] \[A_{31}=\frac{3C_{2}-C_{1}}{2}\left(\frac{24D}{e(C_{2}-3C_{1})^{2}}\right)^{\frac{3}{2}},\quad\quad A_{32}=\frac{18\sqrt{6}}{e^{3/2}(C_{1}+C_{2})^{2}}, \] \[ A_{4}(M,D,C_{2},C_{1})=\frac{M\sqrt{D}}{2\sqrt{\pi }}(A_{31}+A_{32}), \] \[ A_{5}(S,D)=2{\frac{S}{\sqrt{D\pi }}}, \] \[ A_{6}(R,M,D,\beta,C_{1})=\frac{\beta R M\sqrt{D}}{2\sqrt{\pi }}\left[ 2M+\frac{3}{C_{1}^{2}}\left( \frac{2D}{3e}\right) ^{3/2}\right], \]\[ A_{7}(M,D,C_{1})=DM\left[ 2M+\frac{3}{C_{1}^{2}}\left( \frac{2D}{3e}\right) ^{3/2}\right] , \] \medskip \end{lemma} \begin{proof} To prove (\ref{i}) we consider \[ \int\nolimits_{C_{1}}^{C_{2}}\left| F^{\prime }(\xi )\right| \left| N(y_{1}(t),t;\xi ,0)\right| d\xi \leq \left\| F^{\prime }\right\| \int_{0}^{\infty }\left| N(y_{1}(t),t;\xi ,0)\right| d\xi \leq \left\| F^{\prime}\right\| .
\] From $(\ref{efemay})$ we have \[ F'(y)=\exp\left(\tfrac{1}{D}\int_{y}^{C_{2}}V_{0}(\xi)d\xi\right) \left[V^{'}_{0}(y)-\frac{1}{D}V_{0}^{2}(y)\right] \] then \[ \left\|F'\right\|\leq \exp\left(\tfrac{\left\|V_{0}\right\|(C_{2}-C_{1})}{D}\right) \left[\left\|V_{0}^{'}\right\|+\tfrac{1}{D}\left\|V_{0}\right\|\right]\] \[\leq \exp\left(\tfrac{(\left\|u_{0}\right\|+\beta) U_{0}}{D}\right) \left[\left\|\tfrac{u_{0}^{'}}{u_{0}}\right\|+\tfrac{\left\|u_{0}\right\|+\beta}{D} \right]=A_{1}(u_{0},U_{0},\beta, D). \] Following the proof given in \cite{BrTa2006,BrNa2012} and taking $C_{1}<\frac{U_{0}}{2}$ we obtain $(\ref{olvidada})$, $(\ref{olv})$, $(\ref{olv1})$, $(\ref{olvidada1})$, $(\ref{olv2})$ and $(\ref{delhoy})$. To prove (\ref{ii}) we take into account that \[ \left| N(y_{1}(t),t;y_{0}(\tau ),\tau )\right| \leq \frac{1}{\sqrt{\pi \left( t-\tau \right) }} \] so we obtain \[ \int_{0}^{t}\left| {h^{'}}(\tau )\right| \left| N(y_{1}(t),t;y_{0}(\tau ),\tau )\right| d\tau \leq 2\sqrt{\frac{t}{D\pi }} S. \] The inequalities (\ref{iii}) and (\ref{iibis}) are proved in the same way as (\ref{i}) and (\ref{ii}) respectively. \end{proof} \begin{lemma}\label{cotasyi} Let $y_{01}$ and $y_{02}$ be the functions corresponding to $\phi_{21}$ and $\phi_{22}$ in $C^{0}[0,\sigma ]$ respectively, and $y_{11}$ and $y_{12}$ be the functions corresponding to $\phi_{11}$ and $\phi_{12}$ in $C^{0}[0,\sigma ]$ respectively, with $ \max\limits_{t\in \left[ 0,\sigma \right]} \left| \phi_{ij}(t)\right| \leq M,\,\quad i,j=1,2$. Under the hypothesis of Lemma \ref{cotasy} we have \begin{equation} \left\{ \begin{array}{c} \left| y_{01}(t)-y_{02}(t) \right| \leq \frac{2D}{H}\sigma\left\| \phi_{21}-\phi_{22}\right\| _{\sigma } ,\\ \\ \left| y_{0i}(t)-y_{0i}(\tau) \right| \leq \left(\beta+2\tfrac{DM}{H}\right) \left| t-\tau \right| ,\text{ }i=1,2, \\ \\ \frac{C_{1}}{2}\leq y_{0i}(t) \leq \frac{3C_{1}}{2},\text{ }\forall t\in \left[ 0,\sigma \right] ,\text{ }i=1,2, \end{array} \right.
\label{desy} \end{equation} and \begin{equation} \left\{ \begin{array}{c} \left| y_{11}(t)-y_{12}(t) \right| \leq \frac{\beta +1}{\beta^{2}}\sigma \left\| \phi_{11}-\phi_{12}\right\| _{\sigma } ,\\ \\ \left| y_{1i}(t)-y_{1i}(\tau) \right| \leq (1+\beta)\left(1+\tfrac{M}{\beta^{2}}\right)\left| t-\tau \right| ,\text{ }i=1,2, \\ \\ \frac{C_{2}}{2}\leq y_{1i}(t) \leq \frac{3C_{2}}{2},\text{ }\forall t\in \left[ 0,\sigma \right] ,\text{ }i=1,2. \end{array} \right. \label{desyi} \end{equation} \end{lemma} \begin{proof} It follows immediately from definitions (\ref{ycero})-(\ref{ese}) and the assumptions on the data. \end{proof} \begin{lemma}\label{cotasintyi} If we take $\sigma \leq 1$, $ \tfrac{4M}{H}\left(\beta + \tfrac{2DM}{H}\right)\sigma \leq 1$ and we assume the hypothesis of Lemma \ref{cotasyi}, then we have \begin{equation} \int_{C_{1}}^{C_{2}}\left| F^{^{\prime }}(\xi )\right| \left| N(y_{11}(t),t;\xi ,0)-N(y_{12}(t),t;\xi ,0)\right| d\xi \label{1iiprimero} \end{equation} \[ \leq \frac{2\left\|F^{\prime }\right\|_{[C_{1},C_{2}]}}{D\sqrt{\pi}}\left\| \phi_{11}-\phi_{12}\right\|_{\sigma} \sqrt{\sigma } \leq P_{1}(u_{0},D,\beta,U_{0}) \left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\sqrt{\sigma}, \] \begin{equation} D\int_{0}^{t}\left| \phi_{11}(\tau )G_{y}(y_{11}(t),t;y_{11}(\tau ),\tau )-\phi_{12}(\tau )G_{y}(y_{12}(t),t;y_{12}(\tau ),\tau )\right| d\tau \label{1vprimero} \end{equation} \[ \leq P_{2}(M,D,C_{2})\sigma \left\| \phi_{11}-\phi_{12}\right\|\leq P_{2}(M,D,C_{2})\ \left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\| \;\sqrt{\sigma} , \] \begin{equation} \beta^{2}\int_{0}^{t} \tfrac{h(\tau)}{f(\tau)}\left|G_{y}(y_{11}(t),t;y_{01}(\tau ),\tau )-G_{y}(y_{12}(t),t;y_{02}(\tau ),\tau )\right| d\tau \label{1iiii} \end{equation} \[\leq P_{3}(R,\beta,C_{1},C_{2})\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\sqrt{\sigma} , \nonumber \] \begin{equation} D\beta\int_{0}^{t} \tfrac{1}{f(\tau)}\left| \phi_{21}(\tau )G_{y}(y_{11}(t),t;y_{01}(\tau ),\tau )-\phi_{22}(\tau
)G_{y}(y_{12}(t),t;y_{02}(\tau ),\tau )\right| d\tau \label{1iiiiii} \end{equation} \[\leq P_{4}(D,M,C_{1},C_{2})\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\sqrt{\sigma}, \nonumber \] \begin{equation} \int_{0}^{t}\left|h'(\tau )\right| \left| N(y_{11}(t),t;y_{01}(\tau ),\tau)-N(y_{12}(t),t;y_{02}(\tau ),\tau )\right| d\tau \label{1iii} \end{equation} \[ \leq P_{5}(S,D,C_{1},C_{2})\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\sqrt{\sigma} , \] \begin{equation} \int_{C_{1}}^{C_{2}}\left| F^{^{\prime }}(\xi )\right| \left| N(y_{01}(t),t;\xi ,0)-N(y_{02}(t),t;\xi ,0)\right| d\xi \label{1ii} \end{equation} \[ \leq \frac{2\left\|F^{\prime }\right\|_{[C_{1},C_{2}]}}{D\sqrt{\pi}}\left\| \phi_{21}-\phi_{22}\right\|_{\sigma} \sqrt{\sigma } \leq P_{1}(u_{0},D,\beta,U_{0}) \left\|\vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\sqrt{\sigma}, \] \begin{equation} D\int_{0}^{t}\left| \phi_{11}(\tau )G_{y}(y_{01}(t),t;y_{11}(\tau ),\tau )-\phi_{12}(\tau )G_{y}(y_{02}(t),t;y_{12}(\tau ),\tau )\right| d\tau \label{1iibis} \end{equation} \[ \leq P_{4}(D,M,C_{1},C_{2})\ \left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\| \;\sqrt{\sigma} , \] \begin{equation} \beta^{2}\int_{0}^{t}\frac{h(\tau)}{f(\tau)}\left|G_{y}(y_{01}(t),t;y_{01}(\tau ),\tau )-G_{y}(y_{02}(t),t;y_{02}(\tau ),\tau )\right| d\tau \label{1v} \end{equation} \[ \leq P_{6}(D,H,R,M,C_{1})\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\| \;\sqrt{\sigma}, \] \begin{equation} D\int_{0}^{t}\left| \phi_{21}(\tau )G_{y}(y_{01}(t),t;y_{01}(\tau ),\tau )-\phi_{22}(\tau )G_{y}(y_{02}(t),t;y_{02}(\tau ),\tau )\right| d\tau \label{1vbis} \end{equation} \[ \leq P_{7}(M,D,C_{1})\ \left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\| \;\sqrt{\sigma}, \] \begin{equation} \int_{0}^{t}\left| h'(\tau )\right| \left| N(y_{11}(t),t;y_{01}(\tau ),\tau )-N(y_{12}(t),t;y_{02}(\tau ),\tau )\right| d\tau \label{1vi} \end{equation} \[ \leq P_{5}(S,D,C_{1},C_{2})\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|
\sqrt{\sigma} , \] where \begin{equation} P_{1}(u_{0},D,\beta,U_{0})=\tfrac{2}{D\sqrt{\pi}}\left[\left(\left\| u_{0}\right\|+\beta\right)\exp\left(\tfrac{U_{0}}{\beta}\left(\left\| u_{0}\right\|+\beta\right)\right)+\tfrac{\left(\left\| u_{0}\right\|+\beta\right)^{2}}{D}\right], \label{defp1} \end{equation} \begin{equation} P_{2}(M,D,C_{2})=\tfrac{\sqrt{D}}{4\sqrt{\pi }}\left[ 6M+\tfrac{3}{C_{2}^{2}}\left( \tfrac{2}{3e}\right) ^{3/2}+\tfrac{6M}{C_{2}^{2}}\left( \tfrac{6}{e}\right) ^{3/2}\right], \label{defp2} \end{equation} \begin{equation} P_{3}(R,\beta,C_{1},C_{2})=R\beta(P_{31}+P_{32}), \end{equation} with \begin{equation} P_{31}(C_{1},C_{2})=\tfrac{1}{\sqrt{\pi }e^{3/2}}\left[ \tfrac{\sqrt{6}\left( 3C_{2}-C_{1}\right) ^{2}}{16(C_{2}-3C_{1})^{3}}+\tfrac{27\sqrt{3}}{4}+\tfrac{12\sqrt{6}}{(C_{2}-3C_{1})^{3}}+\tfrac{6\sqrt{3}}{(C_{2}+C_{1})^{3}}\right], \label{defp31} \end{equation} \begin{equation} P_{32}(C_{1},C_{2})=\tfrac{12\sqrt{6}}{\sqrt{\pi }e^{3/2}}\left[ \tfrac{1}{(C_{2}-3C_{1})^{3}}+\tfrac{9}{8}+\tfrac{\left( 3C_{2}-C_{1}\right) ^{2}}{8(C_{2}-3C_{1})^{3}}+\tfrac{1}{(C_{2}+C_{1})^{2}}\right], \label{defp32} \end{equation} \begin{equation} P_{4}(D,M,C_{1},C_{2})=D\left[M(P_{31}+P_{32})+P_{41}\right], \end{equation} where \begin{equation} P_{41}(C_{1},C_{2})=\tfrac{\sqrt{6}}{\sqrt{\pi e}}\left[ \tfrac{1}{(C_{2}-3C_{1})^{2}}+\tfrac{1}{(C_{2}+C_{1})^{2}}\right] ,\label{defp3} \end{equation} \begin{equation} P_{5}(S,D,C_{1},C_{2})=\tfrac{6^{3/2}SD}{\sqrt{\pi }e^{3/2}}\left[ \frac{3C_{2}-C_{1}}{(C_{2}-3C_{1})^{3}}+\frac{3}{(C_{2}+C_{1})^{2}}\right], \label{defp5} \end{equation} \begin{equation} P_{6}(D,H,R,M,C_{1})= \beta R\left\lbrace(2D)^{-1} (D\pi)^{-1/2}\left[\tfrac{2D}{H}+\tfrac{2}{H}\left(\beta + \tfrac{2DM}{H}\right)^{2}\right]\right. \end{equation} \[ \left.
+\left(\tfrac{6}{eC_{1}^{2}}\right)^{3/2}\tfrac{18C_{1}^{2}+1}{4\sqrt{\pi}}\tfrac{4D}{H}\right\rbrace, \] \begin{equation} P_{7}(M,D,C_{1})=\tfrac{\sqrt{D}}{4\sqrt{\pi }}\left[ 6M+\tfrac{3}{C_{1}^{2}}\left( \tfrac{2}{3e}\right) ^{3/2}+\tfrac{6M}{C_{1}^{2}}\left( \tfrac{6}{e}\right) ^{3/2}\right]. \label{defp7} \end{equation} \end{lemma} \begin{proof} The inequalities (\ref{1iiprimero})-(\ref{1iibis}) and (\ref{1vbis})-(\ref{1vi}) are obtained following \cite{BrNa2012}. We will show the proof of (\ref{1v}), following \cite{Sh}. We write \[ \left| G_{y}(y_{01}(t),t;y_{01}(\tau ),\tau )-G_{y}(y_{02}(t),t;y_{02}(\tau ),\tau )\right| \] \[ \leq \left| K_{y}(y_{01}(t),t;y_{01}(\tau ),\tau )-K_{y}(y_{02}(t),t;y_{02}(\tau ),\tau )\right|\] \[+\left| K_{y}(-y_{01}(t),t;y_{01}(\tau ),\tau )-K_{y}(-y_{02}(t),t;y_{02}(\tau ),\tau )\right|. \] Taking into account that \[ \left| K_{y}(y_{01}(t),t;y_{01}(\tau ),\tau )-K_{y}(y_{02}(t),t;y_{02}(\tau ),\tau )\right|\] \[ \leq(2D(t-\tau))^{-1} \left|K(y_{01}(t),t;y_{01}(\tau ),\tau )\left[\left(y_{01}(t)-y_{01}(\tau )\right)-\left(y_{02}(t)-y_{02}(\tau )\right)\right]\right. \] \[ +\left. \left[K(y_{01}(t),t;y_{01}(\tau ),\tau )-K(y_{02}(t),t;y_{02}(\tau ),\tau )\right]\left(y_{02}(t)-y_{02}(\tau )\right)\right|\] \[ \leq (2D(t-\tau))^{-1} K(y_{01}(t),t;y_{01}(\tau ),\tau ) \left|\left[\left(y_{01}(t)-y_{01}(\tau )\right)-\left(y_{02}(t)-y_{02}(\tau )\right)\right]\right. \] \[ \left.+\left[1-\exp(m(t,\tau))\right]\left(y_{02}(t)-y_{02}(\tau )\right) \right|, \] where \[ m(t,\tau)=\frac{\left(y_{01}(t)-y_{01}(\tau )\right)^{2}-\left(y_{02}(t)-y_{02}(\tau )\right)^{2}}{4D(t-\tau)} \] \[=\frac{\left[\left(y_{01}(t)-y_{01}(\tau )\right)-\left(y_{02}(t)-y_{02}(\tau )\right)\right]\left[\left(y_{01}(t)-y_{01}(\tau )\right)+\left(y_{02}(t)-y_{02}(\tau )\right)\right]}{4D(t-\tau)}.
\] We have \[ \left|\left(y_{01}(t)-y_{01}(\tau )\right)-\left(y_{02}(t)-y_{02}(\tau )\right)\right| \leq D\int_{\tau}^{t}\tfrac{1}{h(\eta)} \left| 1- \tfrac{\beta}{f(\eta)}\right|\left|\phi_{21}(\eta)-\phi_{22}(\eta)\right| d\eta \] \[ \leq \tfrac{2D}{H}\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|(t-\tau), \] and \[ \left|\left(y_{01}(t)-y_{01}(\tau )\right)+\left(y_{02}(t)-y_{02}(\tau )\right)\right| \leq 2\left(\beta + \tfrac{2DM}{H}\right)(t-\tau)\leq 4\left(\beta + \tfrac{2DM}{H}\right)\sigma, \] then \[ \left| m(t,\tau)\right|\leq\tfrac{2}{H}\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\left(\beta + \tfrac{2DM}{H}\right)\sigma, \] and taking into account that $\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\leq 2M$ we have \[ \left| m(t,\tau)\right|\leq\tfrac{4M}{H}\left(\beta + \tfrac{2DM}{H}\right)\sigma. \] If we assume that $\sigma$ satisfies \[\tfrac{4M}{H}\left(\beta + \tfrac{2DM}{H}\right)\sigma \leq 1, \] we obtain that \[\left|1-\exp(m(t,\tau))\right|\leq 2\left| m(t,\tau)\right|\leq \tfrac{2}{H}\left(\beta + \tfrac{2DM}{H}\right)\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\sigma. \] Therefore \[ \left| K_{y}(y_{01}(t),t;y_{01}(\tau ),\tau )-K_{y}(y_{02}(t),t;y_{02}(\tau ),\tau )\right|\leq \] \[ \leq (2D)^{-1} K(y_{01}(t),t;y_{01}(\tau ),\tau ) \left[\tfrac{2D}{H}+\tfrac{2}{H}\left(\beta + \tfrac{2DM}{H}\right)^{2}\sigma\right]\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\| \] \[ \leq (4D)^{-1} (D\pi(t-\tau))^{-1/2} \left[\tfrac{2D}{H}+\tfrac{2}{H}\left(\beta + \tfrac{2DM}{H}\right)^{2}\sigma\right]\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|.
\] Using the mean value theorem we may write \[\left| K_{y}(-y_{01}(t),t;y_{01}(\tau ),\tau )-K_{y}(-y_{02}(t),t;y_{02}(\tau ),\tau )\right| \] \[\leq \left| K(n(t,\tau),t;0,\tau)\left( \frac{n^{2}(t,\tau)}{4D^{2}(t-\tau)^{2}}-\frac{1}{2D(t-\tau)}\right)\right| \left|y_{01}(t)+y_{01}(\tau)-y_{02}(t)- y_{02}(\tau)\right| \] where $n=n\left( t,\tau\right)$ is between $y_{01}(t)+y_{01}(\tau)\;$ and $y_{02}(t)+ y_{02}(\tau)$. Since \[ \left|y_{01}(t)+y_{01}(\tau)-y_{02}(t)- y_{02}(\tau)\right|\leq \frac{4D}{H}\sigma \left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|, \] and \[C_{1}\leq n\left( t,\tau\right)\leq 6 C_{1},\] by using $(\ref{exp})$ we have \[\left| K(n(t,\tau),t;0,\tau)\left( \frac{n^{2}(t,\tau)}{4D^{2}(t-\tau)^{2}}-\frac{1}{2D(t-\tau)}\right)\right|\leq\left(\frac{6}{eC_{1}^{2}}\right)^{3/2}\frac{18C_{1}^{2}+1}{4\sqrt{\pi}}, \] then \[\left| K_{y}(-y_{01}(t),t;y_{01}(\tau ),\tau )-K_{y}(-y_{02}(t),t;y_{02}(\tau ),\tau )\right| \leq \left(\tfrac{6}{eC_{1}^{2}}\right)^{3/2}\tfrac{18C_{1}^{2}+1}{4\sqrt{\pi}}\tfrac{4D}{H}\sigma \left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|. \] Collecting the results we have \[ \left| G_{y}(y_{01}(t),t;y_{01}(\tau ),\tau )-G_{y}(y_{02}(t),t;y_{02}(\tau ),\tau )\right| \] \[ \leq \left\lbrace(4D)^{-1} (D\pi(t-\tau))^{-1/2} \left[\tfrac{2D}{H}+\tfrac{2}{H}\left(\beta + \tfrac{2DM}{H}\right)^{2}\right]+\left(\tfrac{6}{eC_{1}^{2}}\right)^{3/2}\tfrac{18C_{1}^{2}+1}{4\sqrt{\pi}}\tfrac{4D}{H}\right\rbrace \left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|, \] then \[ \beta^{2}\int_{0}^{t}\frac{h(\tau)}{f(\tau)}\left|G_{y}(y_{01}(t),t;y_{01}(\tau ),\tau )-G_{y}(y_{02}(t),t;y_{02}(\tau ),\tau )\right|d\tau \] \[\leq P_{6}(D,H,R,M,C_{1})\left\| \vec{\phi_{1}^{*}}-\vec{\phi_{2}^{*}}\right\|\sqrt{\sigma}, \] where \[P_{6}(D,H,R,M,C_{1})= \beta R\left\lbrace(2D)^{-1} (D\pi)^{-1/2}\left[\tfrac{2D}{H}+\tfrac{2}{H}\left(\beta + \tfrac{2DM}{H}\right)^{2}\right]\right. \] \[ \left.
+\left(\tfrac{6}{eC_{1}^{2}}\right)^{3/2}\tfrac{18C_{1}^{2}+1}{4\sqrt{\pi}}\tfrac{4D}{H}\right\rbrace. \] Then, (\ref{1v}) has been proved. \end{proof} \begin{theorem}\label{teocont} Assume that hypothesis $(\ref{hip})$ holds. Fix $0<C_{1}<\frac{U_{0}}{2}$ and $h\in \Pi$. If $\sigma $ satisfies the following inequalities \begin{equation} \sigma \leq 1,\,\quad 2(1+\beta)\left(1+\frac{M}{\beta^{2}}\right)\sigma \leq C_{2}, \label{nose} \end{equation} \begin{equation} \left(\beta+2\frac{MD}{H}\right)\sigma\leq C_{1},\quad\quad\tfrac{4M}{H}\left(\beta + \tfrac{2DM}{H}\right)\sigma \leq 1, \end{equation} \begin{equation} H_{1}\left( C_{1},C_{2},U_{0},M,D,\beta,R,S,\sigma \right) \leq 1, \label{ache1} \end{equation} \begin{equation} H_{2}(C_{1},C_{2},U_{0},M,D,\beta,R,S,\sigma )\leq 1, \label{ache2} \end{equation} where $M$ is given by \begin{equation} M\left(u_{0},U_{0},f,D,\beta,R\right)=1+\left(\frac{1}{2-D}+\frac{\|f\|}{\beta(3-D)}\right)2A_{1}+2\frac{\|f\|}{3-D}R, \label{erre} \end{equation} and \[ H_{1}\left( C_{1},C_{2},U_{0},M,D,\beta,R,S,\sigma\right) =\left\lbrace\left(\frac{2}{2-D}\right) \left[A_{2}+A_{3}+A_{4}+\frac{2 S}{\sqrt{\pi D}}\right]\right. \] \begin{equation} \left. +\frac{2 \|f\|}{\beta (3-D)}\left[A_{4}+A_{5}+A_{6}+ \frac{2 S}{\sqrt{\pi D}}\right]\right\rbrace\sqrt{\sigma} ,\label{defache1} \end{equation} \[ H_{2}\left(C_{1},C_{2},U_{0},M,D,\beta,R,S,\sigma \right) = \left\lbrace\frac{2}{2-D}\left[P_{1}+P_{2}+P_{3}+P_{4}+P_{5}\right]\right. \] \begin{equation} \left. +\frac{2 \|f\|}{\beta (3-D)}\left[P_{1}+P_{4}+P_{5}+P_{6} +P_{7}\right]\right\rbrace\sqrt{\sigma}, \label{defache2} \end{equation} then the map $\chi:C_{M,\sigma }\longrightarrow C_{M,\sigma }$ is well defined and is a contraction map. Therefore there exists a unique solution $\phi_{1}^{*}$, $\phi_{2}^{*}$ in $C_{M,\sigma }$ to the system of integral equations (\ref{ecintegralf}) and (\ref{ecintegralf1}).
\end{theorem} \begin{proof} Firstly, we demonstrate that $\chi $ maps $C_{M,\sigma }\;$into itself, that is \[ \left\| \chi\left( \stackrel{\longrightarrow }{\phi^{*}}\right) \right\| _{\sigma }=\max \limits_{t\in \left[ 0,\sigma \right] }\left| \chi_{1}(\phi_{1}(t),\phi_{2}(t))\right| +\max \limits_{t\in \left[ 0,\sigma \right] } \left| \chi_{2}(\phi_{1}(t),\phi_{2}(t))\right| \leq M. \] Taking into account Lemma \ref{cotasint} we have \[ \left| \chi_{1}(\phi_{1}(t),\phi_{2}(t))\right| \leq \frac{2}{2-D} \left\lbrace A_{1}+\left[A_{2}+A_{3}+A_{4}+\frac{2 S}{\sqrt{\pi D}}\right]\sqrt{\sigma}\right\rbrace, \] \[ \left| \chi_{2}(\phi_{1}(t),\phi_{2}(t))\right| \leq \frac{2 \|f\|}{\beta (3-D)}\left\lbrace\beta R+A_{1}+\left[A_{4}+A_{5}+A_{6}+ \frac{2 S}{\sqrt{\pi D}}\right]\sqrt{\sigma}\right\rbrace, \] and then \[ \left\| \chi\left( \stackrel{\longrightarrow }{\phi^{*}}\right) \right\| _{\sigma }\leq 2A_{1}\left[\frac{1}{2-D}+\frac{\|f\|}{\beta (3-D)}\right] +\frac{2 \|f\| R}{(3-D)}+H_{1}(C_{1},C_{2},U_{0},M,D,\beta,R,S,\sigma)\] where $H_{1}$ is given by $\left( \ref{defache1}\right).$ Selecting $M$ by $\left( \ref{erre}\right) \;$and $\sigma \;$such that $\left( \ref{ache1}\right) \;$holds, we obtain $\left\| \chi\left( \stackrel{\longrightarrow }{\phi^{*}}\right) \right\| _{\sigma }\leq M.$ Now, we will prove that \[ \left\| \chi\left( \stackrel{\longrightarrow }{\phi_{1}^{*}}\right) -\chi\left( \stackrel{\longrightarrow }{\phi_{2}^{*}}\right) \right\| _{\sigma }\leq H_{2}\left( C_{1},C_{2},U_{0},M,D,\beta,R,S,\sigma \right) \left\| \stackrel{\longrightarrow }{\phi_{1}^{*}}-\stackrel{\longrightarrow }{\phi_{2}^{*}}\right\| _{\sigma } \] where $\stackrel{\longrightarrow }{\phi_{1}^{*}}=\binom{\phi_{11}}{\phi_{12}}\;,\;\stackrel{\longrightarrow }{\phi_{2}^{*}}=\binom{\phi_{21}}{\phi_{22}}$ $\in C_{M,\sigma }$.
Taking into account Lemma \ref{cotasintyi} we have \[ \left\| \chi\left( \stackrel{\longrightarrow }{\phi_{1}^{*}}\right) -\chi\left( \stackrel{\longrightarrow }{\phi_{2}^{*}}\right) \right\| _{\sigma }= \max\limits_{t\in \left[ 0,\sigma \right] }\left| \chi_{1}\left( \phi_{11}\left( t\right) ,\phi_{12}\left( t\right) \right) -\chi_{1}\left( \phi_{21}\left( t\right) ,\phi_{22}\left( t\right) \right) \right| \] \[ +\max\limits_{t\in \left[ 0,\sigma \right] }\left| \chi_{2}\left( \phi_{11}\left( t\right) ,\phi_{12}\left( t\right) \right) -\chi_{2}\left( \phi_{21}\left( t\right) ,\phi_{22}\left( t\right) \right) \right| \] \[ \leq\left\lbrace\frac{2}{2-D}\left[P_{1}+P_{2}+P_{3}+P_{4}+P_{5}\right]\right. \] \begin{equation} \left. +\frac{2 \|f\|}{\beta (3-D)}\left[P_{1}+P_{4}+P_{5}+P_{6} +P_{7}\right]\right\rbrace\sqrt{\sigma}\left\| \stackrel{\longrightarrow }{\phi_{2}^{*}}-\stackrel{\longrightarrow }{\phi_{1}^{*}}\right\| _{\sigma } \end{equation} \[ = H_{2}\left( C_{1},C_{2},U_{0},M,D,\beta,R,S,\sigma \right) \left\| \stackrel{\longrightarrow }{\phi_{2}^{*}}-\stackrel{\longrightarrow }{\phi_{1}^{*}}\right\| _{\sigma }. \] By hypotheses (\ref{nose})-(\ref{erre}), $\chi$ is a contraction and therefore there exists a unique fixed point $\phi^{*}=\binom{\phi_{1}}{\phi_{2}}$ such that $\chi(\phi^{*})=\phi^{*}$, that is $$\chi_{1}(\phi_{1}(t),\phi_{2}(t))=\phi_{1}(t),\quad\quad \chi_{2}(\phi_{1}(t),\phi_{2}(t))=\phi_{2}(t).
$$ \end{proof} \begin{theorem} For each $h\in\Pi_{1}$, under the hypothesis of Theorem \ref{teocont}, there exists a unique integral representation for $w$, $y_{0}$ and $y_{1}$ given by $(\ref{z})$, $(\ref{ycero})$ and $(\ref{ese})$ respectively, where $\phi_{1}$ and $\phi_{2}$ are the unique solutions of (\ref{ecintegralf}) and (\ref{ecintegralf1}). \end{theorem} \end{subsection} \subsection{Existence of at least one solution of $w_{h}(y_{0h}(t),t)=h(t)$} In this subsection we assume that all the hypotheses of Theorem \ref{teocont} are valid, which guarantee the existence and uniqueness of $w=w_{h}$, $y_{0}=y_{0h}$ and $y_{1}=y_{1h}$ for each $h\in\Pi_{1}$. Now we will prove that for suitable values of $H,R,S$ and $\sigma$ there exists $h\in\Pi(H,R,S,\sigma)$ such that \begin{equation} w_{h}(y_{0h}(t),t)=h(t)\label{ecuache} \end{equation} for $t\in [0,\sigma]$, where $w_{h}$ and $y_{0h}$ are as established in the theorem above. We define the map $Z$ on $\Pi$ such that for each $h\in\Pi\subset \Pi_{1}$ $(\sigma\leq1)$ $$Z(h)(t)=w_{h}(y_{0h}(t),t),$$ that is \begin{equation} Z(h)(t)=(f(t)-\beta)\left(1-\int_{0}^{t}\phi_{1h}(\tau) d\tau +\frac{1}{D}\int_{y_{0h}(t)}^{y_{1h}(t)}w_{h}(\xi,t)d\xi\right),\label{defZ} \end{equation} where $\phi_{1h}$, $w_{h}$, $y_{0h}$ and $y_{1h}$ are the solutions obtained in the above section. We will use Schauder's fixed point theorem, which states: \textit{for any continuous function $L$ mapping a compact convex set into itself there is $x_{0}$ such that $L(x_{0})=x_{0}$}.
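As a purely illustrative aside (not part of the original argument), the two fixed-point principles used in this paper work rather differently: Banach's theorem, used for the contraction $\chi$ above, is constructive and can be sketched numerically by successive approximation, whereas Schauder's theorem only asserts existence. The toy contraction, starting value, and tolerance below are arbitrary choices made for illustration.

```python
import math

def banach_iterate(T, x0, tol=1e-12, max_iter=1000):
    """Successive approximation x_{k+1} = T(x_k).

    If T is a contraction with constant q < 1, Banach's fixed point
    theorem guarantees convergence to the unique fixed point of T
    from any starting value in the invariant set."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Toy contraction on [0, 1]: T(x) = cos(x), with |T'(x)| <= sin(1) < 1 there.
fixed = banach_iterate(math.cos, 0.5)
# The fixed point satisfies cos(fixed) = fixed.
assert abs(math.cos(fixed) - fixed) < 1e-10
```

The same iteration scheme, applied to the integral operator $\chi$ on $C_{M,\sigma}$, underlies the uniqueness part of Theorem \ref{teocont}.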
\begin{lemma}\label{propZh}If \begin{equation} \left(\left\|f\right\|+\beta\right)(2C_{1}+3U_{0})<2D\label{ipp} \end{equation} then, for $h\in \Pi$, the function $Z(h)\in C^{1}[0,\sigma]$ and satisfies \begin{equation} Z(h)(t)>\frac{\beta}{2} ,\label{H} \end{equation} \begin{equation} \left\|Z(h)\right\|\leq \frac{2D\left(\left\|f\right\|+\beta\right)(1+M)}{2D-\left(\left\|f\right\|+\beta\right)(2C_{1}+3U_{0})},\label{RR} \end{equation} \begin{equation} \left\|Z'(h)\right\|\leq \tfrac{2D\left(\left\|f\right\|+\beta\right)(1+M)}{2D-\left(\left\|f\right\|+\beta\right)(2C_{1}+3U_{0})}\left\lbrace\tfrac{2\left\|f'\right\|}{\beta} +\left(\left\|f\right\|+\beta\right) \left[\tfrac{\beta}{D}+M\left(\tfrac{2}{\beta}+\tfrac{1}{D}\right)\right]\right\rbrace\label{S} \end{equation} \[ +M\left(\left\|f\right\|+\beta\right).\] \end{lemma} \begin{proof} From the definition of $Z(h)$ and $(\ref{hip})$ we have $Z(h)(t)>\frac{\beta}{2}$. Taking into account $(\ref{defZ})$ and the fact that \[ w_{h}(\xi,t) \leq w_{h}(y_{0}(t),t),\quad y_{0}(t)\leq \xi \leq y_{1}(t), \] we have \[ Z(h)(t)\leq (f(t)+\beta)\left(1-\int_{0}^{t}\phi_{1h}(\tau) d\tau+\tfrac{1}{D}w_{h}(y_{0}(t),t)\left(y_{1}(t)-y_{0}(t)\right)\right). \] Since $\left\|\phi_{1h}\right\|\leq M$, $\sigma\leq 1$ and taking into account Lemma $\ref{cotasyi}$ we obtain \[\left\|Z(h)\right\| \leq\left(\left\|f\right\|+\beta\right)\left(1+M+\left\|Z(h)\right\|\frac{2C_{1}+3U_{0}}{2D}\right), \] or equivalently \[ \left\|Z(h)\right\|\leq \frac{2D\left(\left\|f\right\|+\beta\right)(1+M)}{2D-\left(\left\|f\right\|+\beta\right)(2C_{1}+3U_{0})}. \] If we differentiate $Z(h)$ with respect to $t$, we get \[ (Z(h))'(t)=f'(t)\left(1-\int_{0}^{t}\phi_{1h}(\tau) d\tau +\tfrac{1}{D}\int_{y_{0h}(t)}^{y_{1h}(t)}w_{h}(\xi,t)d\xi\right)\] \[+(f(t)-\beta)\left[-\phi_{1h}(t)+\tfrac{1}{D} w_{h}(y_{0h}(t),t) y'_{0h}(t)+\frac{1}{D}\int_{y_{0h}(t)}^{y_{1h}(t)} w_{ht}(\xi,t)d\xi\right]\, \] and using eqs.
$(\ref{calu})$, $(\ref{ycero})$ and $(\ref{def})$ we obtain \[ (Z(h))'(t)=f'(t)\left(1-\int_{0}^{t}\phi_{1h}(\tau) d\tau +\tfrac{1}{D}\int_{y_{0h}(t)}^{y_{1h}(t)}w_{h}(\xi,t)d\xi\right)\] \[+(f(t)-\beta)\left[-\phi_{1h}(t)-\tfrac{1}{D} w_{h}(y_{0h}(t),t) y'_{0h}(t)+\phi_{1h}(t)-\phi_{2h}(t)\right] \] \[ =f'(t)\frac{Z(h)(t)}{f(t)-\beta}+(f(t)-\beta)\left[\tfrac{Z(h)(t)\beta^{2}}{Df(t)} +\phi_{2h}(t)\left(\tfrac{Z(h)(t)}{h(t)}-1-\tfrac{Z(h)(t)\beta}{Df(t)}\right)\right]. \] Then we have \[ \left|(Z(h))'(t)\right|\leq \left\|f'\right\|\tfrac{2\left\|Z(h)\right\|}{\beta} +\left(\left\|f\right\|+\beta\right)\left[\tfrac{\left\|Z(h)\right\|\beta}{D}+M\left(\tfrac{2\left\|Z(h)\right\|}{\beta}+1+\tfrac{\left\|Z(h)\right\|}{D}\right)\right]\] \[ \leq\left\|Z(h)\right\|\left\lbrace\tfrac{2\left\|f'\right\|}{\beta} +\left(\left\|f\right\|+\beta\right) \left[\tfrac{\beta}{D}+M\left(\tfrac{2}{\beta}+\tfrac{1}{D}\right)\right]\right\rbrace+M\left(\left\|f\right\|+\beta\right), \] \[ \leq \tfrac{2D\left(\left\|f\right\|+\beta\right)(1+M)}{2D-\left(\left\|f\right\|+\beta\right)(2C_{1}+3U_{0})}\left\lbrace\tfrac{2\left\|f'\right\|}{\beta} +\left(\left\|f\right\|+\beta\right) \left[\tfrac{\beta}{D}+M\left(\tfrac{2}{\beta}+\tfrac{1}{D}\right)\right]\right\rbrace+M\left(\left\|f\right\|+\beta\right) \] and the lemma holds.
\end{proof} Next, we define \begin{equation} E_{1}=1+2\left(\frac{1}{2-D}+\frac{\|f\|}{\beta (3-D)}\right)A_{1},\quad\quad E_{2}=2\frac{\|f\|}{(3-D)}, \end{equation} and $$E_{3}=\frac{2D\left(\left\|f\right\|+\beta\right)}{2D-\left(\left\|f\right\|+\beta\right)(2C_{1}+3U_{0})}.$$ \begin{lemma}\label{ZhenPi} We assume $(\ref{ipp})$ and \begin{equation}E_{3}E_{2}<1.\label{ee} \end{equation} If we take \begin{equation} H=\frac{\beta}{2},\quad\quad R=\frac{E_{3}(1+E_{1})}{1-E_{3}E_{2}},\label{defHR} \end{equation} \begin{equation} S= E_{3}\left\lbrace\tfrac{2\left\|f'\right\|}{\beta} +\left(\left\|f\right\|+\beta\right) \left[\tfrac{\beta}{D}+M\left(\tfrac{2}{\beta}+\tfrac{1}{D}\right)\right]\right\rbrace \label{defS}\end{equation} where $M$ is given by \begin{equation} M=\frac{E_{1}+E_{2}E_{3}}{1-E_{3}E_{2}}\label{emee} \end{equation} then $Z(h)\in\Pi.$ \end{lemma} \begin{proof} From $(\ref{erre})$ we have \[ M=E_{1}+E_{2}R,\] then by $(\ref{RR})$ we have \[E_{3}(1+M)=R \quad \Leftrightarrow\quad E_{3}(1+E_{1})+E_{3}E_{2}R=R.\] Therefore, if we define $R=\frac{E_{3}(1+E_{1})}{1-E_{3}E_{2}}$ we have $ \left\|Z(h)\right\|\leq R.$ Moreover we have $Z(h)(t)>\frac{\beta}{2}=H$ and by Lemma \ref{propZh} we have $\left\|(Z(h))'\right\|\leq S.$ This yields $Z(h)\in\Pi$ and the proof is complete. \end{proof} \begin{remark} Assumption $(\ref{ee})$ is equivalent to \begin{equation} \frac{4D\|f\|\left(\left\|f\right\|+\beta\right)}{\left[2D-\left(\left\|f\right\|+\beta\right)(2C_{1}+3U_{0})\right](3-D)}<1. \label{ip} \end{equation} \end{remark} \begin{theorem} We assume the hypotheses of Lemma $\ref{ZhenPi}$. There exists at least one solution $h^{*}\in\Pi$ such that $Z(h^{*})=h^{*}$. \end{theorem} \begin{proof} Taking into account the above lemmas and using Schauder's fixed-point theorem, we obtain that there exists at least one solution $h^{*}\in\Pi$ such that $Z(h^{*})=h^{*}.$ \end{proof} We can now formulate our main result.
\begin{theorem} \label{teoremafinal} Fix $C_{1}<\frac{U_{0}}{2}$. Let $H$, $R$, $S$ and $M$ be given by $(\ref{defHR})$, $(\ref{defS})$ and $(\ref{emee})$ respectively. If $(\ref{ipp})$ and $(\ref{ip})$ hold, \begin{equation} \sigma \leq 1,\,\quad 2(1+\beta)\left(1+\frac{M}{\beta^{2}}\right)\sigma \leq C_{2}, \label{nose3} \end{equation} \begin{equation} \left(\beta+2\frac{MD}{H}\right)\sigma\leq C_{1},\quad\quad\tfrac{4M}{H}\left(\beta + \tfrac{2DM}{H}\right)\sigma \leq 1, \end{equation} \begin{equation} H_{1} \leq 1, \quad\quad H_{2}\leq 1, \label{ache23} \end{equation} where $H_{1}$ and $H_{2}$ are given by $(\ref{defache1})-(\ref{defache2})$, then there exists a solution to the free boundary problem $(\ref{calu})-(\ref{free})$ given by \begin{equation} w^{*}(y,t)=\int\nolimits_{C_{1}}^{C_{2}}G(y,t;\xi ,0)F(\xi )d\xi +D \int_{0}^{t}\phi^{*}_{1}(\tau )G(y,t;y^{*}_{1}(\tau ),\tau )d\tau \label{1z+} \end{equation} \[ +\beta^{2}\int_{0}^{t} \frac{h^{*}(\tau)}{f(\tau)} G(y,t;y^{*}_{0}(\tau ),\tau )d\tau -D\beta\int_{0}^{t} \frac{\phi^{*}_{2}(\tau)}{f(\tau)} G(y,t;y^{*}_{0}(\tau ),\tau ) d\tau \] \[-D\int_{0}^{t} h^{*}(\tau) N_{y}(y,t;y^{*}_{0}(\tau ),\tau ) d\tau \] and \begin{equation} y^{*}_{0}(t)=C_{1}-\beta^{2}\int^{t}_{0} \frac{1}{f(\tau)}d\tau-D\int^{t}_{0}\tfrac{1}{h^{*}(\tau)}\left(1-\tfrac{\beta}{f(\tau)}\right)\phi^{*}_{2}(\tau)d\tau,\label{ycero11} \end{equation} \begin{equation} y^{*}_{1}(t)=C_{2}+(1-\beta)t+\tfrac{D(\beta +1)}{\beta^{2}}\ln\left(1-\int^{t}_{0} \phi^{*}_{1}(\tau)d\tau\right) \label{ese11} \end{equation} where $\phi_{1}^{*}$, $\phi_{2}^{*}$ are the unique solutions to the system of two Volterra integral equations (\ref{ecintegralf}) and (\ref{ecintegralf1}) corresponding to $h^{*}$, a solution of $Z(h)=h.$ \end{theorem} \section{Parametric solution to the problem (\ref{calor1})-(\ref{tempborde}) } Assuming the hypothesis of Theorem $\ref{teoremafinal}$, if we invert the transformations given by $(\ref{firsttrans})$, $(\ref{2})$ and
$(\ref{tres})$, we obtain the explicit parametric representation of the solution to the free boundary problem $(\ref{calor1})-(\ref{tempborde})$ given by \begin{equation} u^{*}(x,t)=\frac{w^{*}(y,t)}{1-\int_{0}^{t}\phi^{*}_{1}(\tau) d\tau+\frac{1}{D}\int_{y}^{y^{*}_{1}(t)} w^{*}(\xi,t)d\xi}+\beta, \end{equation} \begin{equation} x=\int_{y^{*}_{0}(t)+2\beta t}^{y+2\beta t} \left[\frac{w^{*}(\mu,t)}{1-\int_{0}^{t}\phi^{*}_{1}(\tau) d\tau+\frac{1}{D}\int_{\mu}^{y^{*}_{1}(t)} w^{*}(\xi,t)d\xi}+\beta\right] d\mu, \end{equation} with \[ y^{*}_{0}(t)<y<y^{*}_{1}(t),\quad\quad 0<t<\sigma, \] and \begin{equation} s(t)=\int_{y^{*}_{0}(t)+2\beta t}^{y^{*}_{1}(t)+2\beta t} \left[\frac{w^{*}(\mu,t)}{1-\int_{0}^{t}\phi^{*}_{1}(\tau) d\tau+\frac{1}{D}\int_{\mu}^{y^{*}_{1}(t)} w^{*}(\xi,t)d\xi}+\beta\right] d\mu, \end{equation} where $w^{*}=w^{*}(y,t)$ is given by $(\ref{1z+})$, $y^{*}_{0}(t)$ and $y^{*}_{1}(t)$ are given by $(\ref{ycero11})$ and $(\ref{ese11})$ respectively, with $\phi_{1}^{*}$, $\phi_{2}^{*}$ the unique solutions to (\ref{ecintegralf}) and (\ref{ecintegralf1}) corresponding to the solution $h^{*}$ to equation $(\ref{ecuache}).$ \end{section} \section*{ACKNOWLEDGEMENT} The present work has been partially sponsored by the Project PIP No 0275 from CONICET-UA, Rosario, Argentina, ANPCyT PICTO Austral 2016 No 0090 and the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement 823731 CONMECH. The authors would like to thank the anonymous referees, whose insightful comments have benefited the presentation of this article.
\section{Hopf algebras} \label{sec:hopf} Let us start with a brief history of Hopf algebras: Hopf algebras were originally introduced to mathematics in 1941 to enable similar aspects of groups and algebras to be described in a unified manner \cite{Hopf}. An article by Woronowicz in 1987 \cite{Woronowicz}, which provided explicit examples of non-trivial (non-co-commutative) Hopf algebras, triggered the interest of the physics community. This led to applications of Hopf algebras in the field of integrable systems and quantum groups. In physics, Hopf algebras received a further boost in 1998, when Kreimer and Connes re-examined the renormalization of quantum field theories and showed that they can be described by a Hopf algebra structure \cite{Kreimer:1998dp,Connes:1998qv}. Since then, Hopf algebras have appeared in several facets of physics. Let us now consider the definition of a Hopf algebra. The presentation in this section closely follows that in \cite{Weinzierl:2003ub}. References for further reading are \cite{Sweedler,Kassel,Majid:1990vz,Manchon:2004,Frabetti:2008}. Let $R$ be a commutative ring with unit $1$. An algebra over the ring $R$ is an $R$-module together with a multiplication $\cdot$ and a unit $e$. We will always assume that the multiplication is associative. In physics, the ring $R$ will almost always be a field $K$ (examples are the rational numbers ${\mathbb Q}$, the real numbers ${\mathbb R}$, or the complex numbers ${\mathbb C}$). In this case the $R$-module will actually be a $K$-vector space. Note that the unit $e$ can be viewed as a map from $R$ to $A$ and that the multiplication $\cdot$ can be viewed as a map from the tensor product $A \otimes A$ to $A$ (e.g., one takes two elements from $A$, multiplies them, and obtains one element as the outcome): \begin{align} & \mbox{Multiplication:} & \cdot \; : & \;\; A \otimes A \rightarrow A, \nonumber \\ & \mbox{Unit:} & e \; : & \;\; R \rightarrow A.
\end{align} Instead of multiplication and a unit, a co-algebra has the dual structures: a co-multiplication $\Delta$ and a co-unit $\bar{e}$. The co-unit $\bar{e}$ is a map from $A$ to $R$, whereas the co-multiplication $\Delta$ is a map from $A$ to $A \otimes A$: \begin{align} & \mbox{Co-multiplication:} & \Delta \; : & \;\; A \rightarrow A \otimes A, \nonumber \\ & \mbox{Co-unit:} & \bar{e} \; : & \;\; A \rightarrow R. \end{align} Note that the co-multiplication and the co-unit map in the reverse direction compared to the multiplication and the unit. We will always assume that the co-multiplication $\Delta$ is co-associative. What does co-associativity mean? We can easily derive it from associativity as follows: For $a,b,c \in A$ associativity requires \begin{eqnarray} \label{condition_associativity} \left( a \cdot b \right) \cdot c & = & a \cdot \left( b \cdot c \right). \end{eqnarray} We can re-write condition~(\ref{condition_associativity}) in the form of a commutative diagram: \begin{eqnarray} \begin{CD} A \otimes A \otimes A @>{\mathrm{id} \otimes \cdot}>> A \otimes A \\ @VV{\cdot \otimes \mathrm{id}}V @VV{\cdot}V \\ A \otimes A @>{\cdot}>> A \\ \end{CD} \end{eqnarray} We obtain the condition for co-associativity by reversing all arrows and by exchanging multiplication with co-multiplication. We thus obtain the following commutative diagram: \begin{eqnarray} \begin{CD} A @>{\Delta}>> A \otimes A \\ @VV{\Delta}V @VV{\Delta \otimes \mathrm{id}}V \\ A \otimes A @>{\mathrm{id} \otimes \Delta}>> A \otimes A \otimes A \\ \end{CD} \end{eqnarray} The general form of the co-product is \begin{eqnarray} \Delta(a) & = & \sum\limits_i a_i^{(1)} \otimes a_i^{(2)}, \end{eqnarray} where $a_i^{(1)}$ denotes an element of $A$ appearing in the first slot of $A \otimes A$ and $a_i^{(2)}$ correspondingly denotes an element of $A$ appearing in the second slot.
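Co-associativity is concrete enough to check mechanically. As a small illustration (anticipating the deconcatenation co-product that reappears below in the shuffle algebra example; the function names and the prefix/suffix orientation are our choices, not part of the text), the following Python sketch verifies $(\Delta \otimes \mathrm{id}) \circ \Delta = (\mathrm{id} \otimes \Delta) \circ \Delta$ on words:

```python
def coproduct(w):
    """Deconcatenation co-product on words (tuples of letters):
    Delta(w) = sum over j of w[:j] (x) w[j:], every summand with coefficient 1."""
    return [(w[:j], w[j:]) for j in range(len(w) + 1)]

def delta_otimes_id(w):
    """(Delta (x) id) o Delta, returned as a sorted list of triples."""
    return sorted((a, b, c) for (x, c) in coproduct(w) for (a, b) in coproduct(x))

def id_otimes_delta(w):
    """(id (x) Delta) o Delta, returned as a sorted list of triples."""
    return sorted((a, b, c) for (a, y) in coproduct(w) for (b, c) in coproduct(y))

# co-associativity: both orders of applying Delta twice agree
w = ('l1', 'l2', 'l3')
assert delta_otimes_id(w) == id_otimes_delta(w)
```

Since every summand here has coefficient $1$, comparing sorted lists of tuples suffices; for a general co-product one would compare coefficient dictionaries instead.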
Sweedler's notation \cite{Sweedler} consists of omitting the dummy index $i$ and the summation symbol: \begin{eqnarray} \Delta(a) & = & a^{(1)} \otimes a^{(2)} \end{eqnarray} The sum is implicitly understood. This is similar to Einstein's summation convention, except that the dummy summation index $i$ is also dropped. The superscripts ${}^{(1)}$ and ${}^{(2)}$ indicate that a sum is involved. Using Sweedler's notation, co-associativity is equivalent to \begin{eqnarray} a^{(1) (1)} \otimes a^{(1) (2)} \otimes a^{(2)} & = & a^{(1)} \otimes a^{(2) (1)} \otimes a^{(2) (2)}. \end{eqnarray} As it is irrelevant whether we apply the second co-product to the first or the second factor in the tensor product, we can simply write \begin{eqnarray} \Delta^2\left(a\right) & = & a^{(1)} \otimes a^{(2)} \otimes a^{(3)}. \end{eqnarray} If the co-product of an element $a \in A$ is of the form \begin{eqnarray} \Delta\left(a\right) & = & a \otimes a, \end{eqnarray} then $a$ is referred to as a group-like element. If the co-product of $a$ is of the form \begin{eqnarray} \Delta\left(a\right) & = & a \otimes e + e \otimes a, \end{eqnarray} then $a$ is referred to as a primitive element. In an algebra we have for the unit $1$ of the underlying ring $R$ and the unit $e$ of the algebra the relation \begin{eqnarray} a \;\; = \;\; 1 \cdot a \;\; = \;\; e \cdot a \;\; = \;\; a \end{eqnarray} for any element $a \in A$ (together with the analog relation $a = a \cdot 1 = a \cdot e = a$). In terms of commutative diagrams this is expressed as \begin{eqnarray} \label{axiom_unit} \begin{CD} A \otimes A @= A \otimes A \\ @A{e \otimes \mathrm{id}}AA @VV{\cdot}V \\ R \otimes A @=^{\!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\! \cong \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\,} A \\ \end{CD} & & \hspace*{15mm} \begin{CD} A \otimes A @= A \otimes A \\ @A{\mathrm{id} \otimes e}AA @VV{\cdot}V \\ A \otimes R @=^{\!\!\!\!\!\! \!\!\!\!\!\!
\!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\! \cong \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\,} A \\ \end{CD} \end{eqnarray} In a co-algebra we have the dual relations obtained from eq.~(\ref{axiom_unit}) by reversing all arrows and by exchanging multiplication with co-multiplication as well as by exchanging the unit $e$ with the co-unit $\bar{e}$: \begin{eqnarray} \begin{CD} A \otimes A @= A \otimes A \\ @V{\bar{e} \otimes \mathrm{id}}VV @AA{\Delta}A \\ R \otimes A @=^{\!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\! \cong \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\,} A \\ \end{CD} & & \hspace*{15mm} \begin{CD} A \otimes A @= A \otimes A \\ @V{\mathrm{id} \otimes \bar{e}}VV @AA{\Delta}A \\ A \otimes R @=^{\!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\! \cong \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\,\,} A \\ \end{CD} \end{eqnarray} A bi-algebra is an algebra and a co-algebra at the same time, such that the two structures are compatible with each other. In terms of commutative diagrams, the compatibility condition between the product and the co-product is expressed as \begin{eqnarray} \begin{CD} A \otimes A @>{\cdot}>> A @>{\Delta}>> A \otimes A \\ @VV{\Delta \otimes \Delta}V & & @AA{\cdot \otimes \cdot}A \\ A \otimes A \otimes A \otimes A & @>{\mathrm{id} \otimes \tau \otimes \mathrm{id}}>> & A \otimes A \otimes A \otimes A\\ \end{CD} \end{eqnarray} where $\tau : A \otimes A \rightarrow A \otimes A$ is the map, which exchanges the entries in the two slots: $\tau(a \otimes b) = b \otimes a$. Using Sweedler's notation, the compatibility between the multiplication and co-multiplication is expressed as \begin{eqnarray} \label{bialg} \Delta\left( a \cdot b \right) & = & \left( a^{(1)} \cdot b^{(1)} \right) \otimes \left( a^{(2)} \cdot b^{(2)} \right). 
\end{eqnarray} It is common practice to write the right-hand side of eq.~(\ref{bialg}) as \begin{eqnarray} \left( a^{(1)} \cdot b^{(1)} \right) \otimes \left( a^{(2)} \cdot b^{(2)} \right) & = & \Delta\left(a\right) \Delta\left(b\right). \end{eqnarray} In addition, there is a compatibility condition between the unit and the co-product \begin{eqnarray} \label{compatibility_unit_coproduct} \begin{CD} R \otimes R \cong R @>{e}>> A \\ @V{e \otimes e}VV @VV{\Delta}V \\ A \otimes A @= A \otimes A \\ \end{CD} \end{eqnarray} as well as a compatibility condition between the co-unit and the product, which is dual to eq.~(\ref{compatibility_unit_coproduct}): \begin{eqnarray} \label{compatibility_counit_product} \begin{CD} A @>{\bar{e}}>> R \cong R \otimes R \\ @A{\cdot}AA @AA{\bar{e} \otimes \bar{e}}A \\ A \otimes A @= A \otimes A \\ \end{CD} \end{eqnarray} The commutative diagrams in eq.~(\ref{compatibility_unit_coproduct}) and eq.~(\ref{compatibility_counit_product}) are equivalent to \begin{eqnarray} \Delta e = e \otimes e, \;\;\; \mbox{and} \;\;\; \bar{e}\left(a \cdot b \right) = \bar{e}\left(a\right) \bar{e}\left(b\right), \;\;\; \mbox{respectively.} \end{eqnarray} A Hopf algebra is a bi-algebra with an additional map from $A$ to $A$, known as the antipode $S$, which fulfills \begin{eqnarray} \label{antipode_def1} \begin{CD} A @>{\bar{e}}>> R @>{e}>> A \\ @VV{\Delta}V & & @AA{\cdot}A \\ A \otimes A & @>{\mathrm{id} \otimes S}>{S \otimes \mathrm{id}}> & A \otimes A\\ \end{CD} \end{eqnarray} An equivalent formulation is \begin{eqnarray} \label{antipode_def2} a^{(1)} \cdot S\left( a^{(2)} \right) \;\; = \;\; S\left(a^{(1)}\right) \cdot a^{(2)} \;\; = \;\; e \cdot \bar{e}(a). \end{eqnarray} If a bi-algebra has an antipode (satisfying the commutative diagram~(\ref{antipode_def1}) or, equivalently, eq.~(\ref{antipode_def2})), then the antipode is unique. If a Hopf algebra $A$ is either commutative or co-commutative, then \begin{eqnarray} S^2 & = & \mathrm{id}.
\end{eqnarray} A bi-algebra $A$ is graded, if it has a decomposition \begin{eqnarray} A & = & \bigoplus\limits_{n \ge 0} A_n, \end{eqnarray} with \begin{eqnarray} A_n \cdot A_m \subseteq A_{n+m}, & & \;\;\; \Delta\left(A_n\right) \subseteq \bigoplus\limits_{k+l=n} A_k \otimes A_l. \end{eqnarray} Elements in $A_n$ are said to have degree $n$. The bi-algebra is graded connected, if in addition one has \begin{eqnarray} A_0 & = & R \cdot e. \end{eqnarray} It is useful to know that a graded connected bi-algebra is automatically a Hopf algebra \cite{Ehrenborg}. An algebra $A$ is commutative if for all $a,b \in A$ one has \begin{eqnarray} \label{def_commutative} a \cdot b & = & b \cdot a. \end{eqnarray} A co-algebra $A$ is co-commutative if for all $a \in A$ one has \begin{eqnarray} \label{def_cocommutative} a^{(1)} \otimes a^{(2)} & = & a^{(2)} \otimes a^{(1)}. \end{eqnarray} With the help of the swap map $\tau$ we may express commutativity and co-commutativity equivalently as \begin{eqnarray} \cdot \tau = \cdot, \;\;\; \mbox{and} \;\;\; \tau \Delta = \Delta, \;\;\; \mbox{respectively.} \end{eqnarray} Let us now consider a few examples of Hopf algebras. \begin{enumerate} \item The group algebra. Let $G$ be a group and denote by $KG$ the vector space with basis $G$ over the field $K$. Then $KG$ is an algebra with the multiplication given by the group multiplication. The co-unit, the co-product, and the antipode are defined for the basis elements $g \in G$ as follows: The co-unit $\bar{e}$ is given by: \begin{eqnarray} \bar{e}\left( g\right) & = & 1. \end{eqnarray} The co-product $\Delta$ is given by: \begin{eqnarray} \Delta\left( g\right) & = & g \otimes g. \end{eqnarray} The antipode $S$ is given by: \begin{eqnarray} S\left( g \right) & = & g^{-1}. \end{eqnarray} Having defined the co-unit, the co-product, and the antipode for the basis elements $g \in G$, the corresponding definitions for arbitrary vectors in $KG$ are obtained by linear extension. 
$KG$ is a co-commutative Hopf algebra; it is commutative if and only if $G$ is commutative. \item Lie algebras. A Lie algebra ${\mathfrak g}$ is not necessarily associative nor does it have a unit. To overcome this obstacle one considers the universal enveloping algebra $U({\mathfrak g})$, obtained from the tensor algebra $T({\mathfrak g})$ by factoring out the ideal generated by \begin{eqnarray} X \otimes Y - Y \otimes X - \left[ X, Y \right], \end{eqnarray} with $X, Y \in {\mathfrak g}$. The co-unit $\bar{e}$ is given by: \begin{eqnarray} \bar{e}\left( e\right) = 1, & & \bar{e}\left( X\right) = 0. \end{eqnarray} The co-product $\Delta$ is given by: \begin{eqnarray} \Delta(e) = e \otimes e, & & \Delta(X) = X \otimes e + e \otimes X. \end{eqnarray} The antipode $S$ is given by: \begin{eqnarray} S(e) = e, & & S(X) = -X. \end{eqnarray} \item Quantum SU(2). The Lie algebra $su(2)$ is generated by three generators $H$, $X_\pm$ with \begin{eqnarray} \left[ H, X_\pm \right] = \pm 2 X_\pm, & & \left[ X_+, X_- \right] = H. \end{eqnarray} To obtain the deformed algebra $U_q(su(2))$, the last relation is replaced with \cite{Majid:1990vz,Schupp:1993hn} \begin{eqnarray} \left[ X_+, X_- \right] & = & \frac{q^H - q^{-H}}{q-q^{-1}}. \end{eqnarray} The undeformed Lie algebra $su(2)$ is recovered in the limit $q \rightarrow 1$. The co-unit $\bar{e}$ is given by: \begin{eqnarray} \bar{e}\left( e\right) = 1, & & \bar{e}\left( H\right) = \bar{e}\left( X_\pm\right) = 0. \end{eqnarray} The co-product $\Delta$ is given by: \begin{eqnarray} \Delta(H) & = & H \otimes e + e \otimes H, \nonumber \\ \Delta(X_\pm) & = & X_\pm \otimes q^{H/2} + q^{-H/2} \otimes X_\pm. \end{eqnarray} The antipode $S$ is given by: \begin{eqnarray} S(H) = -H, & & S(X_\pm) = - q^{\pm 1} X_\pm . \end{eqnarray} \item Symmetric algebras. Let $V$ be a finite dimensional vector space with basis $\{v_i\}$.
The symmetric algebra $S(V)$ is the direct sum \begin{eqnarray} S(V) & = & \bigoplus\limits_{n=0}^\infty S^n(V), \end{eqnarray} where $S^n(V)$ is spanned by elements of the form $v_{i_1} v_{i_2} ... v_{i_n}$ with $i_1 \le i_2 \le ... \le i_n$. The multiplication is defined by \begin{eqnarray} \left( v_{i_1} v_{i_2} ... v_{i_m} \right) \cdot \left( v_{i_{m+1}} v_{i_{m+2}} ... v_{i_{m+n}} \right) & = & v_{i_{\sigma(1)}} v_{i_{\sigma(2)}} ... v_{i_{\sigma(m+n)}}, \end{eqnarray} where $\sigma$ is a permutation on $m+n$ elements such that $i_{\sigma(1)} \le i_{\sigma(2)} \le ... \le i_{\sigma(m+n)}$. The co-unit $\bar{e}$ is given by: \begin{eqnarray} \bar{e}\left( e\right) = 1, \;\;\; & & \bar{e}\left( v_1 v_2 ... v_n\right) = 0. \end{eqnarray} The co-product $\Delta$ is given for the basis elements $v_i$ by: \begin{eqnarray} \Delta(v_i) & = & v_i \otimes e + e \otimes v_i. \end{eqnarray} Using (\ref{bialg}) one obtains for a general element of $S(V)$ \begin{eqnarray} \Delta\left( v_1 v_2 ... v_n \right) & = & v_1 v_2 ... v_n \otimes e + e \otimes v_1 v_2 ... v_n \nonumber \\ & & + \sum\limits_{j=1}^{n-1} \sum\limits_\sigma v_{\sigma(1)} ... v_{\sigma(j)} \otimes v_{\sigma(j+1)} ... v_{\sigma(n)}, \end{eqnarray} where $\sigma$ runs over all $(j,n-j)$-shuffles. A $(j,n-j)$-shuffle is a permutation $\sigma$ of $(1,...,n)$ such that \begin{eqnarray} \sigma(1) < \sigma(2) < ... < \sigma(j) & \mbox{and} & \sigma(j+1) < ... < \sigma(n). \nonumber \end{eqnarray} The antipode $S$ is given by: \begin{eqnarray} S( v_{i_1} v_{i_2} ... v_{i_n}) & = & (-1)^n v_{i_1} v_{i_2} ... v_{i_n}. \end{eqnarray} \item Shuffle algebras. Consider a set of letters $A$. The set $A$ is known as the alphabet. A word is an ordered sequence of letters: \begin{eqnarray} w & = & l_1 l_2 ... l_k, \end{eqnarray} where $l_1, ..., l_k \in A$. The word of length zero is denoted by $e$. The shuffle algebra ${\cal A}$ on the vector space spanned by words is defined by \begin{eqnarray} \left( l_1 l_2 ...
l_k \right) \cdot \left( l_{k+1} ... l_r \right) & = & \sum\limits_{\mathrm{shuffles} \; \sigma} l_{\sigma(1)} l_{\sigma(2)} ... l_{\sigma(r)}, \end{eqnarray} where the sum runs over all permutations $\sigma$, which preserve the relative order of $1,2,...,k$ and of $k+1,...,r$. The name ``shuffle algebra'' is related to the analogy of shuffling cards: If a deck of cards is divided into two parts and then shuffled, the relative order within the two individual parts is conserved. A shuffle algebra is also known under the name ``mould symmetral'' \cite{Ecalle}. The empty word $e$ is the unit in this algebra: \begin{eqnarray} e \cdot w = w \cdot e = w. \end{eqnarray} The recursive definition of the shuffle product is given by \begin{eqnarray} \label{def_recursive_shuffle} \left( l_1 l_2 ... l_k \right) \cdot \left( l_{k+1} ... l_r \right) = l_1 \left[ \left( l_2 ... l_k \right) \cdot \left( l_{k+1} ... l_r \right) \right] + l_{k+1} \left[ \left( l_1 l_2 ... l_k \right) \cdot \left( l_{k+2} ... l_r \right) \right]. \nonumber \\ \end{eqnarray} It is a well-known fact that the shuffle algebra is actually a (non-co-commutative) Hopf algebra \cite{Reutenauer}. The co-unit $\bar{e}$ is given by: \begin{eqnarray} \bar{e}\left( e\right) = 1, \;\;\; & & \bar{e}\left( l_1 l_2 ... l_n\right) = 0. \end{eqnarray} The co-product $\Delta$ is given by: \begin{eqnarray} \Delta\left( l_1 l_2 ... l_k \right) & = & \sum\limits_{j=0}^k \left( l_{j+1} ... l_k \right) \otimes \left( l_1 ... l_j \right). \end{eqnarray} This particular co-product is also known as the deconcatenation co-product. The antipode $S$ is given by: \begin{eqnarray} S\left( l_1 l_2 ... l_k \right) & = & (-1)^k \; l_k l_{k-1} ... l_2 l_1. \end{eqnarray} The shuffle algebra is generated by the Lyndon words \cite{Reutenauer}. If one introduces a lexicographic ordering on the letters of the alphabet $A$, a Lyndon word is defined by the property $w < v$ for any decomposition $w = u v$ into non-empty words $u$ and $v$. \item Rooted trees.
An individual rooted tree is shown in Fig.~(\ref{fig15}). \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig15} \caption{\label{fig15} Illustration of a rooted tree. The root is drawn at the top and is labeled $x_0$.} \end{center} \end{figure} We consider the algebra generated by rooted trees. Elements of this algebra are sets of rooted trees, conventionally known as forests. The product of two forests is simply the disjoint union of all trees from the two forests. The empty forest, consisting of no trees, will be denoted by $e$. Before we are able to define a co-product, we first need the definition of an admissible cut. A single cut is a cut of an edge. An admissible cut of a rooted tree is any assignment of single cuts such that any path from any vertex of the tree to the root has at most one single cut. An admissible cut $C$ maps a tree $t$ to a monomial in trees $t_1 \cdot ... \cdot t_{n+1}$. Precisely one of these sub-trees $t_j$ will contain the root of $t$. We denote this distinguished tree by $R^C(t)$, and the monomial consisting of the $n$ other factors by $P^C(t)$. The co-unit $\bar{e}$ is given by: \begin{eqnarray} \bar{e}(e) = 1, \;\;\; & & \bar{e}\left(t_1 \cdot ... \cdot t_k \right) = 0 \;\;\;\mbox{for}\; k \ge 1. \end{eqnarray} The co-product $\Delta$ is given by: \begin{eqnarray} \Delta(e) & = & e \otimes e, \nonumber \\ \Delta(t) & = & t \otimes e + e \otimes t + \sum\limits_{\mathrm{adm. cuts} \; C \; \mathrm{of} \; t} P^C(t) \otimes R^C(t), \nonumber \\ \Delta\left(t_1 \cdot ... \cdot t_k\right) & = & \Delta\left(t_1\right) \; ... \; \Delta\left(t_k\right). \end{eqnarray} The antipode $S$ is given by: \begin{eqnarray} S(e) & = & e, \nonumber \\ S(t) & = & -t - \sum\limits_{\mathrm{adm. cuts} \; C \; \mathrm{of} \; t} S\left( P^C(t) \right) \cdot R^C(t), \nonumber \\ S\left( t_1 \cdot ... \cdot t_k \right) & = & S\left(t_1\right) \cdot ... \cdot S\left(t_k\right).
\end{eqnarray} \end{enumerate} It is possible to classify the examples discussed above into four groups according to whether they are commutative or co-commutative. \begin{itemize} \item Commutative and co-commutative: Examples are the group algebra of a commutative group or the symmetric algebras. \item Non-commutative and co-commutative: Examples are the group algebra of a non-commutative group or the universal enveloping algebra of a Lie algebra. \item Commutative and non-co-commutative: Examples are shuffle algebra or the algebra of rooted trees. \item Non-commutative and non-co-commutative: Examples are given by quantum groups. \end{itemize} Whereas research on quantum groups focused primarily on non-commutative and non-co-commutative Hopf algebras, it turns out that for applications in perturbative quantum field theories commutative, but not necessarily co-commutative, Hopf algebras such as shuffle algebras, symmetric algebras, and rooted trees are the most important. \section{Applications in particle physics} \label{sec:hopf_physics} Let us now discuss two important applications of Hopf algebras in perturbative quantum field theory: The combinatorics of renormalization and the Hopf algebras related to multiple polylogarithms. The former topic is related to ultraviolet divergences occurring in Feynman integrals, whereas the latter topic concerns functions to which Feynman integrals evaluate. The presentation in this section follows \cite{Weinzierl:2013yn}. We start our discussion with a short introduction to Feynman integrals. \subsection{Feynman integrals} \label{subsection:feynman} The perturbative expansion of quantum field theory can be organized in terms of Feynman graphs. Feynman graphs can be considered as a pictorial notation for mathematical expressions arising in the context of perturbative quantum field theory. Fig.~(\ref{fig16}) shows an example of a Feynman graph. 
Each part in a Feynman graph corresponds to a specific expression and the full Feynman graph corresponds to the product of these expressions. For scalar theories the correspondence is as follows: An internal edge corresponds to a propagator \begin{eqnarray} \frac{i}{q^2-m^2}, \end{eqnarray} an external edge to the factor $1$. In scalar theories, a vertex also corresponds to the factor $1$. In addition, for each internal momentum not constrained by momentum conservation, there is an integration \begin{eqnarray} \int \frac{d^Dk}{\left(2\pi\right)^D}, \end{eqnarray} where $D$ denotes the dimension of space-time. Let us now consider a Feynman graph $G$ with $m$ external edges, $n$ internal edges, and $l$ loops. With each internal edge we associate, apart from its momentum and its mass, a positive integer $\nu$, which provides the power to which the propagator occurs. (We may think of $\nu$ as the relict of neglecting vertices of valency $2$. The number $\nu>1$ corresponds to $\nu-1$ mass insertions on this edge). The momenta flowing through the internal lines can be expressed through the independent loop momenta $k_1$, ..., $k_l$ and the external momenta $p_1$, ..., $p_m$ as \begin{eqnarray} q_i & = & \sum\limits_{j=1}^l \rho_{ij} k_j + \sum\limits_{j=1}^m \sigma_{ij} p_j, \;\;\;\;\;\; \rho_{ij}, \sigma_{ij} \in \{-1,0,1\}. \end{eqnarray} We define the Feynman integral by \begin{eqnarray} \label{Feynman_integral_1} I_G & = & \left( \mu^2 \right)^{\nu-l D/2} \int \prod\limits_{r=1}^{l} \frac{d^Dk_r}{i\pi^{\frac{D}{2}}}\; \prod\limits_{j=1}^{n} \frac{1}{(-q_j^2+m_j^2)^{\nu_j}}, \end{eqnarray} with $\nu=\nu_1+...+\nu_n$. The inclusion of an arbitrary scale $\mu$, the factors $i \pi^{D/2}$ in the measure, and a minus sign for each propagator are the conventions used in these lectures. 
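To give a simple orientation (a standard textbook result, quoted here rather than derived in the text up to this point): for $l=1$, $n=1$, a single mass $m$ and no external momenta, the tadpole case of eq.~(\ref{Feynman_integral_1}) evaluates in closed form to \begin{eqnarray} I & = & \left(\mu^2\right)^{\nu-D/2} \int \frac{d^Dk}{i\pi^{\frac{D}{2}}} \frac{1}{\left(-k^2+m^2\right)^{\nu}} \; = \; \frac{\Gamma\left(\nu-D/2\right)}{\Gamma\left(\nu\right)} \left(\frac{m^2}{\mu^2}\right)^{D/2-\nu}, \end{eqnarray} which can also be read off from the Feynman-parameter representation derived below, since in this case there is a single Feynman parameter $x_1=1$ and the two graph polynomials reduce to $U=1$ and $F=m^2/\mu^2$.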
Feynman parametrization makes use of the identity \begin{eqnarray} \label{feynman_parametrisation} \prod\limits_{j=1}^{n} \frac{1}{P_{j}^{\nu_j}} & = & \frac{\Gamma\left(\nu\right)}{\prod\limits_{j=1}^n \Gamma\left(\nu_j\right)} \int\limits_\Delta \omega \left( \prod\limits_{i=1}^n x_i^{\nu_i-1} \right) \left( \sum\limits_{j=1}^{n} x_{j} P_{j} \right)^{-\nu}, \end{eqnarray} where $\omega$ is a differential $(n-1)$-form given by \begin{eqnarray} \omega & = & \sum\limits_{j=1}^n (-1)^{j-1} \; x_j \; dx_1 \wedge ... \wedge \widehat{dx_j} \wedge ... \wedge dx_n. \end{eqnarray} The hat indicates that the corresponding term is omitted. The integration is over \begin{eqnarray} \Delta & = & \left\{ \left[ x_1 : x_2 : ... : x_n \right] \in {\mathbb P}^{n-1} | x_i \ge 0, 1 \le i \le n \right\}. \end{eqnarray} We use eq.~(\ref{feynman_parametrisation}) with $P_j=-q_j^2+m_j^2$. We can write \begin{eqnarray} \label{eq_poly_calc_1} \sum\limits_{j=1}^{n} x_{j} (-q_j^2+m_j^2) & = & - \sum\limits_{r=1}^{l} \sum\limits_{s=1}^{l} k_r M_{rs} k_s + \sum\limits_{r=1}^{l} 2 k_r \cdot Q_r + J, \end{eqnarray} where $M$ is an $l \times l$ matrix with scalar entries and $Q$ is an $l$-vector with $D$-vectors as entries. After Feynman parametrization, it becomes possible to integrate over the loop momenta $k_1$, ..., $k_l$ and we obtain \begin{eqnarray} \label{Feynman_integral_2} I_G & = & \frac{\Gamma(\nu-lD/2)}{\prod\limits_{j=1}^{n}\Gamma(\nu_j)} \int\limits_{\Delta} \omega \left( \prod\limits_{j=1}^n x_j^{\nu_j-1} \right) \frac{U^{\nu-(l+1) D/2}}{F^{\nu-l D/2}}. \end{eqnarray} The functions $U$ and $F$ are given by \begin{eqnarray} \label{eq_poly_calc_2} U = \mbox{det}(M), & & F = \mbox{det}(M) \left( J + Q M^{-1} Q \right)/\mu^2, \end{eqnarray} where $U$ and $F$ are known as graph polynomials; they have an alternative definition in terms of spanning trees and spanning forests \cite{Bogner:2010kv}.
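The $n=2$, $\nu_1=\nu_2=1$ case of eq.~(\ref{feynman_parametrisation}) is the familiar identity $1/(P_1 P_2)=\int_0^1 dx\,\left[x P_1+(1-x)P_2\right]^{-2}$, which is easy to confirm numerically. A minimal Python sketch (the function names and the choice of Simpson quadrature are ours, for illustration only):

```python
def lhs(P1, P2):
    """Left-hand side 1/(P1*P2) of the n=2, nu1=nu2=1 Feynman-parameter identity."""
    return 1.0 / (P1 * P2)

def rhs(P1, P2, n=10001):
    """Composite Simpson quadrature of int_0^1 dx [x*P1 + (1-x)*P2]^(-2);
    n is the (odd) number of sample points, so n-1 intervals."""
    h = 1.0 / (n - 1)
    total = 0.0
    for i in range(n):
        x = i * h
        f = (x * P1 + (1.0 - x) * P2) ** -2
        # Simpson weights: 1 at the endpoints, 4 on odd nodes, 2 on even interior nodes
        total += f if i in (0, n - 1) else (4.0 * f if i % 2 else 2.0 * f)
    return total * h / 3.0

assert abs(rhs(2.0, 5.0) - lhs(2.0, 5.0)) < 1e-9
```

For $P_1=2$, $P_2=5$ both sides equal $1/10$, and the smooth integrand makes Simpson quadrature accurate to far below the stated tolerance.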
A few remarks are in order: The integral over the Feynman parameters is an $(n-1)$-dimensional integral in projective space ${\mathbb P}^{n-1}$, where $n$ is the number of internal edges of the graph. Singularities may arise if the zero sets of $U$ and $F$ intersect the region of integration. The dimension $D$ of space-time only appears in the exponents of the integrand and the exponents act as a regularization. A Feynman integral has an expansion as a Laurent series in the parameter $\varepsilon=(4-D)/2$ of dimensional regularization: \begin{eqnarray} \label{Feynman_integral_3} I_G & = & \sum\limits_{j=-2l}^\infty c_j \varepsilon^j. \end{eqnarray} The Laurent series of an $l$-loop integral can have poles in $\varepsilon$ up to the order $(2l)$. The poles in $\varepsilon$ correspond to ultraviolet or infrared divergences. The coefficients $c_j$ are functions of the scalar products $p_j \cdot p_k$, the masses $m_i$, and (in a trivial way) of the arbitrary scale $\mu$. Transforming a Feynman integral from the form in eq.~(\ref{Feynman_integral_1}) to the form of eq.~(\ref{Feynman_integral_2}) is straightforward and will be illustrated by an example below. The challenging part is to obtain the expansion in eq.~(\ref{Feynman_integral_3}) and to find explicit expressions for the coefficients $c_j$ in eq.~(\ref{Feynman_integral_3}). As an example for the transition from eq.~(\ref{Feynman_integral_1}) to eq.~(\ref{Feynman_integral_2}) let us consider the two-loop double box graph in fig.~(\ref{fig16}). \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig16} \end{center} \caption{\label{fig16} Illustration of a ``double box''-graph: A two-loop Feynman diagram with four external and seven internal lines. The momenta flowing out along the external lines and those flowing through the internal lines are labelled $p_1$, ..., $p_4$ and $q_1$, ..., $q_7$, respectively.} \end{figure} In fig.~(\ref{fig16}) there are two independent loop momenta. 
We can choose them to be $k_1=q_3$ and $k_2=q_6$. Then all other internal momenta are expressed in terms of $k_1$, $k_2$, and the external momenta $p_1$, ..., $p_4$: \begin{eqnarray} \begin{array}{lll} q_1 = k_1 - p_1, & q_2 = k_1 - p_1 - p_2, & q_4 = k_1 + k_2, \\ q_5 = k_2 - p_3 - p_4, & q_7 = k_2 - p_4. & \\ \end{array} \end{eqnarray} We will consider the case \begin{eqnarray} \label{specification_double_box} & & p_1^2 = 0, \;\;\; p_2^2 = 0, \;\;\; p_3^2 = 0, \;\;\; p_4^2 = 0, \nonumber \\ & & m_1 = m_2 = m_3 = m_4 = m_5 = m_6 = m_7 = 0. \end{eqnarray} We define \begin{eqnarray} s = \left(p_1+p_2\right)^2=\left(p_3+p_4\right)^2, & & t = \left(p_2+p_3\right)^2=\left(p_1+p_4\right)^2. \end{eqnarray} We have \begin{eqnarray} \lefteqn{ \sum\limits_{j=1}^7 x_j \left(-q_j^2\right) = - \left(x_1+x_2+x_3+x_4\right) k_1^2 - 2 x_4 k_1 \cdot k_2 - \left( x_4+x_5+x_6+x_7\right) k_2^2 } & & \nonumber \\ & & + 2 \left[ x_1 p_1 + x_2 \left( p_1 + p_2 \right) \right] \cdot k_1 + 2 \left[ x_5 \left( p_3 + p_4 \right) + x_7 p_4 \right] \cdot k_2 - \left( x_2 + x_5 \right) s. \end{eqnarray} In comparison with eq.~(\ref{eq_poly_calc_1}) we find \begin{eqnarray} M & = & \left( \begin{array}{cc} x_1+x_2+x_3+x_4 & x_4 \\ x_4 & x_4+x_5+x_6+x_7 \\ \end{array} \right), \nonumber \\ Q & = & \left( \begin{array}{c} x_1 p_1 + x_2 \left( p_1 + p_2 \right) \\ x_5 \left( p_3 + p_4 \right) + x_7 p_4 \\ \end{array} \right), \nonumber \\ J & = & \left( x_2 + x_5 \right) \left(-s\right). \end{eqnarray} Plugging this into eq.~(\ref{eq_poly_calc_2}) we obtain the graph polynomials as \begin{eqnarray} U & = & \left( x_1+x_2+x_3 \right) \left( x_5+x_6+x_7 \right) + x_4 \left( x_1+x_2+x_3+x_5+x_6+x_7 \right), \nonumber \\ F & = & \left[ x_2 x_3 \left( x_4+x_5+x_6+x_7 \right) + x_5 x_6 \left( x_1+x_2+x_3+x_4 \right) + x_2 x_4 x_6 + x_3 x_4 x_5 \right] \left( \frac{-s}{\mu^2} \right) \nonumber \\ & & + x_1 x_4 x_7 \left( \frac{-t}{\mu^2} \right). 
\end{eqnarray} \subsection{Renormalization} \label{subsect:renorm} Let us now consider the ultraviolet (or short-distance) singularities of Feynman integrals. These singularities are removed by renormalization \cite{Zimmermann:1969jj}. The combinatorics involved in the renormalization are governed by a Hopf algebra \cite{Kreimer:1998dp,Connes:1998qv}. The relevant Hopf algebra is that which is generated by rooted trees. We determine the relation between a Feynman graph and the corresponding rooted trees by starting from the fact that sub-graphs may give rise to sub-divergences. That is, the rooted trees encode the nested structure of sub-divergences. This is best explained by an example. Fig.~(\ref{fig1}) shows a three-loop two-point function. This Feynman integral has an overall ultraviolet divergence and two sub-divergences, corresponding to the two fermion self-energy corrections. \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig1} \caption{\label{fig1} Three-loop two-point function with an overall ultraviolet divergence and two sub-divergences. We find the corresponding rooted tree by first drawing boxes around all ultraviolet-divergent sub-graphs. The rooted tree is obtained from the nested structure of these boxes.} \end{center} \end{figure} We obtain the corresponding rooted tree by drawing boxes around all ultraviolet-divergent sub-graphs. The rooted tree is obtained from the nested structure of these boxes. \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig2} \caption{\label{fig2} Example with overlapping singularities. This graph corresponds to a sum of rooted trees} \end{center} \end{figure} Graphs with overlapping singularities correspond to a sum of rooted trees. This is illustrated for a two-loop example with an overlapping singularity in fig.~(\ref{fig2}). We recall that the co-unit applied to any non-trivial rooted tree $t\neq e$ yields zero: \begin{eqnarray} \bar{e}\left(t\right) & = & 0, \;\;\;\;\;\; t \neq e. 
\end{eqnarray} Let us further recall the recursive definition of the antipode for the Hopf algebra of rooted trees: \begin{eqnarray} S(t) & = & -t - \sum\limits_{\mathrm{adm. cuts} \; C \; \mathrm{of} \; t} S\left( P^C(t) \right) \cdot R^C(t). \end{eqnarray} The antipode satisfies (see eq.~(\ref{antipode_def2})) for any non-trivial rooted tree $t \neq e$ \begin{eqnarray} \label{untwisted} S\left(t^{(1)}\right) t^{(2)} & = & 0, \end{eqnarray} where we used Sweedler's notation. Eq.~(\ref{untwisted}) will be our starting point. However, rather than obtaining zero on the right-hand side, we are interested in a finite quantity. This can be achieved as follows: Let $R$ be an operation, which approximates a tree by another tree with the same singularity structure, and which satisfies the Rota-Baxter relation \cite{EbrahimiFard:2006iy}: \begin{eqnarray} \label{rotabaxter} R\left( t_1 t_2 \right) + R\left( t_1 \right) R\left( t_2 \right) & = & R\left( t_1 R\left( t_2 \right) \right) + R\left( R\left( t_1 \right) t_2 \right). \end{eqnarray} In physics, we may think of $R(t)$ as the appropriate counter-terms. For example, minimal subtraction ($\overline{MS}$) \begin{eqnarray} R\left( \sum\limits_{k=-L}^\infty c_k \varepsilon^k \right) & = & \sum\limits_{k=-L}^{-1} c_k \varepsilon^k \end{eqnarray} fulfills the Rota-Baxter relation. We simplify the notation by omitting the distinction between a Feynman graph and the evaluation of the graph. One can now twist the antipode with $R$ and define a new map \begin{eqnarray} S_R(t) & = & - R \left( t + \sum\limits_{\mathrm{adm. cuts} \; C \; \mathrm{of} \; t} S_R \left( P^C(t) \right) \cdot R^C(t) \right). \end{eqnarray} From the multiplicativity constraint (\ref{rotabaxter}) it follows that \begin{eqnarray} S_R\left(t_1 t_2 \right) & = & S_R\left(t_1 \right) S_R\left(t_2 \right).
\end{eqnarray} If we replace $S$ by $S_R$ in (\ref{untwisted}) we obtain \begin{eqnarray} \label{twisted} S_R\left(t^{(1)}\right) t^{(2)} & = & \mathrm{finite}, \end{eqnarray} because by definition $S_R$ differs from $S$ only by finite terms. Eq. (\ref{twisted}) is equivalent to the forest formula \cite{Zimmermann:1969jj}. It should be noted that $R$ is not unique and different choices for $R$ correspond to different renormalization prescriptions. There is certainly more that could be said on the Hopf algebra of renormalization. In this regard, we refer the reader to the original literature \cite{Krajewski:1998xi,Connes:1998qv,Kreimer:1998iv,Connes:1999yr,Connes:2000fe,vanSuijlekom:2006fk,Ebrahimi-Fard:2010,Ebrahimi-Fard:2012}. \subsection{Multiple polylogarithms} \label{subsect:polylogs} Let us now revisit eq.~(\ref{Feynman_integral_3}) and ask, which functions occur in the coefficients $c_j$. For one-loop integrals there is a satisfactory answer: If we restrict our attention to the coefficients $c_j$ with $j\le 0$ (i.e., to $c_{-2}$, $c_{-1}$ and $c_0$), then these coefficients can be expressed as a sum of algebraic functions of the scalar products of the external momenta and the mass times two transcendental functions, whose arguments are again algebraic functions of the scalar products and the mass. The two transcendental functions are the logarithm and the dilogarithm: \begin{eqnarray} \label{log_dilog} \mathrm{Li}_1(x) & = & \sum\limits_{n=1}^\infty \frac{x^n}{n} = - \ln(1-x), \nonumber \\ \mathrm{Li}_2(x) & = & \sum\limits_{n=1}^\infty \frac{x^n}{n^2}. \end{eqnarray} \subsubsection{Sum representation of multiple polylogarithms} \label{sum_repr_polylogs} Beyond one loop an answer to the above question is not yet known. 
However, we know that the following generalizations occur: From eq.~(\ref{log_dilog}) it is not too difficult to imagine that the generalization includes the classical polylogarithms defined by \begin{eqnarray} \mathrm{Li}_m(x) & = & \sum\limits_{n=1}^\infty \frac{x^n}{n^m}. \end{eqnarray} However, explicit calculations for two loops and beyond show that a wider generalization towards functions of several variables is needed and one arrives at the multiple polylogarithms defined by \cite{Goncharov_no_note,Goncharov:2001,Borwein} \begin{eqnarray} \label{def_multiple_polylogs_sum} \mathrm{Li}_{m_1,...,m_k}(x_1,...,x_k) & = & \sum\limits_{n_1>n_2>\ldots>n_k>0}^\infty \frac{x_1^{n_1}}{{n_1}^{m_1}}\ldots \frac{x_k^{n_k}}{{n_k}^{m_k}}. \end{eqnarray} The number $k$ is referred to as the depth of the sum representation of the multiple polylogarithm. Methods for the numerical evaluation of multiple polylogarithms can be found in \cite{Vollinga:2004sn}. The values of the multiple polylogarithms at $x_1=...=x_k=1$ are known as multiple $\zeta$-values: \index{multiple zeta values} \begin{eqnarray} \zeta_{m_1,...,m_k} & = & \mathrm{Li}_{m_1,m_2,...,m_k}(1,1,...,1) = \sum\limits_{n_1 > n_2 > ... > n_k > 0}^\infty \;\;\; \frac{1}{n_1^{m_1}} \cdot ... \cdot \frac{1}{n_k^{m_k}}. \end{eqnarray} Important specializations of multiple polylogarithms are the harmonic polylogarithms \cite{Remiddi:1999ew,Gehrmann:2000zt} \begin{eqnarray} H_{m_1,...,m_k}(x) & = & \mathrm{Li}_{m_1,...,m_k}(x,\underbrace{1,...,1}_{k-1}). \end{eqnarray} A further specialization leads to Nielsen's generalized polylogarithms \cite{Nielsen} \begin{eqnarray} S_{n,p}(x) & = & \mathrm{Li}_{n+1,1,...,1}(x,\underbrace{1,...,1}_{p-1}). \end{eqnarray} Although many Feynman integrals evaluate to multiple polylogarithms, it should be noted that there are Feynman integrals that cannot be expressed in terms of this class of functions. A prominent example is the two-loop sunrise integral with non-zero internal mass.
Here, elliptic generalizations of multiple polylogarithms occur \cite{Bloch:2013tra,Adams:2014vja,Adams:2015gva}. These are the focus of current research and beyond the scope of these lectures. \subsubsection{Integral representation of multiple polylogarithms} \label{integral_repr_polylogs} In eq.~(\ref{def_multiple_polylogs_sum}) we have defined multiple polylogarithms through a sum representation. In addition, multiple polylogarithms have an integral representation. To discuss the integral representation it is convenient to introduce the following functions for $z_k \neq 0$: \begin{eqnarray} \label{Gfuncdef} G(z_1,...,z_k;y) & = & \int\limits_0^y \frac{dt_1}{t_1-z_1} \int\limits_0^{t_1} \frac{dt_2}{t_2-z_2} ... \int\limits_0^{t_{k-1}} \frac{dt_k}{t_k-z_k}. \end{eqnarray} The number $k$ is referred to as the depth of the integral representation. In this definition one variable is redundant due to the following scaling relation: \begin{eqnarray} \label{G_scaling_relation} G(z_1,...,z_k;y) & = & G(x z_1, ..., x z_k; x y) \end{eqnarray} If one further defines $g(z;y) = 1/(y-z)$, then one has \begin{eqnarray} \frac{d}{dy} G(z_1,...,z_k;y) & = & g(z_1;y) G(z_2,...,z_k;y) \end{eqnarray} and \begin{eqnarray} \label{Grecursive} G(z_1,z_2,...,z_k;y) & = & \int\limits_0^y dt \; g(z_1;t) G(z_2,...,z_k;t). \end{eqnarray} One can slightly enlarge the set and define $G(0,...,0;y)$ with $k$ zeros for $z_1$ to $z_k$ to be \begin{eqnarray} \label{trailingzeros} G(0,...,0;y) & = & \frac{1}{k!} \left( \ln y \right)^k. \end{eqnarray} This permits us to allow trailing zeros in the sequence $(z_1,...,z_k)$ by defining the function $G$ with trailing zeros via eq.~(\ref{Grecursive}) and eq.~(\ref{trailingzeros}). 
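For points $z_j$ that stay away from the integration path, the recursion in eq.~(\ref{Grecursive}) can be evaluated by straightforward numerical quadrature. The following is a minimal Python sketch (not part of the standard toolkits, just an illustration) using composite Simpson's rule; it also allows a numerical check of the scaling relation in eq.~(\ref{G_scaling_relation}):

```python
def G(zs, y, n=400):
    # Numerical evaluation of G(z1,...,zk;y) through the recursion
    # G(z1,...,zk;y) = int_0^y dt g(z1;t) G(z2,...,zk;t) with g(z;t) = 1/(t-z),
    # using composite Simpson's rule with n (even) subintervals.  It is assumed
    # that all z_j are non-zero and stay away from the integration path [0,y].
    if not zs:
        return 1.0          # G(;y) = 1, the empty word
    h = y / n
    def f(t):
        return G(zs[1:], t, n) / (t - zs[0])
    s = f(0.0) + f(y)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

# Depth one in closed form: G(z;y) = ln(1 - y/z).
print(G((2.0,), 1.0))          # close to ln(1/2) = -0.6931...
# The scaling relation G(z1,...,zk;y) = G(x z1,...,x zk;x y), here with x = 2:
print(G((2.0, 3.0), 1.0), G((4.0, 6.0), 2.0))
```

Trailing zeros are not covered by this sketch; they would be handled separately through eq.~(\ref{trailingzeros}).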
The multiple polylogarithms are related to the functions $G$ by conveniently introducing the following short-hand notation: \begin{eqnarray} \label{Gshorthand} G_{m_1,...,m_k}(z_1,...,z_k;y) & = & G(\underbrace{0,...,0}_{m_1-1},z_1,...,z_{k-1},\underbrace{0,...,0}_{m_k-1},z_k;y). \end{eqnarray} Here, all $z_j$ for $j=1,...,k$ are assumed to be non-zero. One then finds \begin{eqnarray} \label{Gintrepdef} \mathrm{Li}_{m_1,...,m_k}(x_1,...,x_k) & = & (-1)^k G_{m_1,...,m_k}\left( \frac{1}{x_1}, \frac{1}{x_1 x_2}, ..., \frac{1}{x_1...x_k};1 \right). \end{eqnarray} The inverse formula reads \begin{eqnarray} G_{m_1,...,m_k}(z_1,...,z_k;y) & = & (-1)^k \; \mathrm{Li}_{m_1,...,m_k}\left(\frac{y}{z_1}, \frac{z_1}{z_2}, ..., \frac{z_{k-1}}{z_k}\right). \end{eqnarray} Eq.~(\ref{Gintrepdef}) together with eq.~(\ref{Gshorthand}) and eq.~(\ref{Gfuncdef}) defines an integral representation for the multiple polylogarithms. As an example, we obtain from eq.~(\ref{Gintrepdef}) and eq.~(\ref{G_scaling_relation}) the integral representation of harmonic polylogarithms: \begin{eqnarray} H_{m_1,...,m_k}(x) & = & \left(-1\right)^k G_{m_1,...,m_k}\left(1,...,1;x\right). \end{eqnarray} The function $G_{m_1,...,m_k}(1,...,1;x)$ is an iterated integral \cite{Chen,Brown:2013qva} in which only the two one-forms \begin{eqnarray} \omega_0 = \frac{dt}{t}, & & \omega_1 = \frac{dt}{t-1} \end{eqnarray} corresponding to $z=0$ and $z=1$ appear. If one restricts the possible values of $z$ to zero and the $n$-th roots of unity, one arrives at the class of cyclotomic harmonic polylogarithms \cite{Ablinger:2011te}. \subsubsection{Notation} \label{section_notation} Before we discuss the Hopf algebras associated with multiple polylogarithms it is worth explaining to mathematical purists the notation which we will use. Let us consider a Hopf algebra $H$ and an algebra $A$, together with a map \begin{eqnarray} f & : & H \rightarrow A.
\end{eqnarray} The map $f$ is assumed to be an algebra homomorphism; therefore, for $h_1, h_2 \in H$ \begin{eqnarray} f\left(h_1 \cdot h_2\right) & = & f\left(h_1\right) \cdot f\left(h_2\right). \end{eqnarray} On $H$ we additionally have the dual structures (co-unit $\bar{e}$, co-multiplication $\Delta$) and the antipode $S$. As $A$ is only assumed to be an algebra, these structures do not exist on $A$. It is sometimes useful to consider the images of $\Delta(h)$, $\bar{e}(h)$, and $S(h)$ under the map $f$ in $A$. By abuse of notation we will write \begin{eqnarray} \label{notation_abuse} \Delta f(h), \;\;\; \bar{e} f(h), \;\;\; S f(h) \end{eqnarray} for \begin{eqnarray} \left( f \otimes f \right) \Delta(h), \;\;\; f\left(\bar{e}(h)\right), \;\;\; f\left( S(h) \right). \end{eqnarray} Eq.~(\ref{notation_abuse}) is merely a handy notation and does not define a Hopf algebra on $A$. In the examples in the next two subsections, $H$ will be either a shuffle algebra or a quasi-shuffle algebra, $A$ the complex numbers ${\mathbb C}$, and the map $f$ will be given by the evaluation of the functions $G$ or $\mathrm{Li}$, extended linearly on the vector space of words. \subsubsection{Shuffle algebra of multiple polylogarithms} \label{section_shuffle} Multiple polylogarithms have a rich algebraic structure. The representations as iterated integrals and nested sums induce a shuffle algebra and a quasi-shuffle algebra, respectively. Shuffle and quasi-shuffle algebras are Hopf algebras. Note that the shuffle algebra of multiple polylogarithms is distinct from the quasi-shuffle algebra of multiple polylogarithms. We first discuss the shuffle algebra of multiple polylogarithms. The starting point is the integral representation given in eq.~(\ref{Gfuncdef}): \begin{eqnarray} G(z_1,...,z_k;y) & = & \int\limits_0^y \frac{dt_1}{t_1-z_1} \int\limits_0^{t_1} \frac{dt_2}{t_2-z_2} ... \int\limits_0^{t_{k-1}} \frac{dt_k}{t_k-z_k}. 
\end{eqnarray} For fixed $y$, the ordered sequence of variables $z_1, z_2, ..., z_k$ forms a word $w = z_1 z_2 ... z_k$ and we have the shuffle algebra \begin{eqnarray} \label{G_shuffle_product} G(z_1,z_2,...,z_k;y) \cdot G(z_{k+1},...,z_r; y) = \sum\limits_{\mathrm{shuffles} \; \sigma} G(z_{\sigma(1)},z_{\sigma(2)},...,z_{\sigma(r)};y), \end{eqnarray} where the sum runs over all permutations $\sigma$ that preserve the relative order of $1,2,...,k$ and of $k+1,...,r$. The unit $e$ is given by the empty word: \begin{eqnarray} e \;\; = \;\; G(;y). \end{eqnarray} An example for the multiplication is given by \begin{eqnarray} \label{example_G_product} G(z_1;y) G(z_2;y) & = & G(z_1,z_2;y) + G(z_2,z_1;y). \end{eqnarray} The proof that the integral representation of the multiple polylogarithms fulfills the shuffle product formula in eq.~(\ref{G_shuffle_product}) is sketched for the example in eq.~(\ref{example_G_product}) in fig.~(\ref{fig5}) \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig5} \caption{\label{fig5} Shuffle algebra from the integral representation: The shuffle product follows from replacing the integral over the square by an integral over the lower triangle and an integral over the upper triangle.} \end{center} \end{figure} and can easily be extended to multiple polylogarithms of higher depth by recursively replacing the two outermost integrations by integrations over the upper and lower triangle. For the co-product one has \begin{eqnarray} \Delta G(z_1,...,z_k;y) & = & \sum\limits_{j=0}^k G(z_{j+1},...,z_k;y) \otimes G(z_1,...,z_j;y) \end{eqnarray} and for the antipode one finds \begin{eqnarray} \label{Gantipodeexpli} S G(z_1,...,z_k;y) & = & (-1)^k G(z_k,...,z_1;y). \end{eqnarray} The shuffle multiplication is commutative; therefore, the antipode satisfies \begin{eqnarray} S^2 & = & \mathrm{id}. \end{eqnarray} This is evident from eq.~(\ref{Gantipodeexpli}).
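These combinatorial statements are easy to verify by computer. The following Python sketch (purely illustrative) implements the shuffle product on words, represented as tuples of letters, and checks that the antipode of eq.~(\ref{Gantipodeexpli}) together with the deconcatenation co-product satisfies the defining property of the antipode, i.e. that the analogue of eq.~(\ref{untwisted}) vanishes on any non-trivial word:

```python
from collections import Counter

def shuffle(w1, w2):
    # All interleavings of two words that preserve the relative order
    # of the letters within each word, counted with multiplicity.
    if not w1 or not w2:
        return Counter({w1 + w2: 1})
    out = Counter()
    for w, c in shuffle(w1[1:], w2).items():
        out[(w1[0],) + w] += c
    for w, c in shuffle(w1, w2[1:]).items():
        out[(w2[0],) + w] += c
    return out

def antipode_check(w):
    # m (S x id) Delta(w) with the deconcatenation co-product
    # Delta(w) = sum_j w[j:] (x) w[:j] and S(z1...zk) = (-1)^k zk...z1;
    # for a non-empty word w this must vanish identically.
    total = Counter()
    for j in range(len(w) + 1):
        left, right = w[j:], w[:j]
        sign = (-1) ** len(left)
        for word, c in shuffle(tuple(reversed(left)), right).items():
            total[word] += sign * c
    return {word: c for word, c in total.items() if c != 0}

print(shuffle(('z1',), ('z2',)))        # z1z2 + z2z1, as in the example above
print(antipode_check(('a', 'b', 'c')))  # empty dict: everything cancels
```

The first call reproduces eq.~(\ref{example_G_product}) on the level of words; the second illustrates the cancellations behind the antipode.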
\subsubsection{Quasi-shuffle algebra of multiple polylogarithms} \label{section_quasi_shuffle} Let us now consider the second Hopf algebra of multiple polylogarithms, which follows from the sum representation. This Hopf algebra is a quasi-shuffle algebra. A quasi-shuffle algebra is a slight generalization of a shuffle algebra. Assume that for the set of letters $A$ we have an additional operation \begin{eqnarray} \label{additional_operation} (.,.) & : & A \otimes A \rightarrow A, \nonumber \\ & & l_1 \otimes l_2 \rightarrow (l_1, l_2), \end{eqnarray} which is commutative and associative. Then we can define a new product of words recursively through \begin{eqnarray} \label{def_recursive_quasi_shuffle} \left( l_1 l_2 ... l_k \right) \ast \left( l_{k+1} ... l_r \right) & = & l_1 \left[ \left( l_2 ... l_k \right) \ast \left( l_{k+1} ... l_r \right) \right] + l_{k+1} \left[ \left( l_1 l_2 ... l_k \right) \ast \left( l_{k+2} ... l_r \right) \right] \nonumber \\ & & + (l_1,l_{k+1}) \left[ \left( l_2 ... l_k \right) \ast \left( l_{k+2} ... l_r \right) \right], \end{eqnarray} together with \begin{eqnarray} l \ast e \;\; = \;\; e \ast l \;\; = \;\; l. \end{eqnarray} This product is a generalization of the shuffle product and differs from the recursive definition of the shuffle product in eq.~(\ref{def_recursive_shuffle}) through the extra term in the last line of eq.~(\ref{def_recursive_quasi_shuffle}). This modified product is known under the names quasi-shuffle product \cite{Hoffman}, mixable shuffle product \cite{Guo}, stuffle product \cite{Borwein}, or mould symmetrel \cite{Ecalle}. Quasi-shuffle algebras are Hopf algebras. Co-multiplication and co-unit are defined as for the shuffle algebras. The co-unit $\bar{e}$ is given by: \begin{eqnarray} \bar{e}\left( e\right) = 1, \;\;\; & & \bar{e}\left( l_1 l_2 ... l_n\right) = 0. \end{eqnarray} The co-product $\Delta$ is given by: \begin{eqnarray} \Delta\left( l_1 l_2 ... 
l_k \right) & = & \sum\limits_{j=0}^k \left( l_{j+1} ... l_k \right) \otimes \left( l_1 ... l_j \right). \end{eqnarray} The antipode $S$ is recursively defined through \begin{eqnarray} S\left( l_1 l_2 ... l_k \right) & = & - l_1 l_2 ... l_k - \sum\limits_{j=1}^{k-1} S\left( l_{j+1} ... l_k \right) \ast \left( l_1 ... l_j \right), \;\;\;\;\;\; S(e) = e. \end{eqnarray} The sum representation of the multiple polylogarithms in eq.~(\ref{def_multiple_polylogs_sum}) gives rise to a quasi-shuffle algebra. To see this, we first introduce \cite{Moch:2001zr} \begin{eqnarray} Z(N;m_1,...,m_k;x_1,...,x_k) & = & \sum\limits_{N\ge n_1>n_2>\ldots>n_k>0} \frac{x_1^{n_1}}{{n_1}^{m_1}}\ldots \frac{x_k^{n_k}}{{n_k}^{m_k}}. \end{eqnarray} For $N=\infty$ we recover the multiple polylogarithms: \begin{eqnarray} \mathrm{Li}_{m_1,...,m_k}\left(x_1,...,x_k\right) & = & Z(\infty;m_1,...,m_k;x_1,...,x_k). \end{eqnarray} The recursive definition for the quasi-shuffle product of the $Z$-sums reads \begin{eqnarray} \label{quasi_shuffle_multiplication} \lefteqn{ Z(N;m_1,m_2,...,m_k;x_1,x_2,...,x_k) \ast Z(N;m_{k+1},...,m_r;x_{k+1},...,x_r) = } & & \\ & & \sum\limits_{i_1=1}^{N} \frac{x_1^{i_1}}{i_1^{m_1}} \; Z(i_1-1;m_2,...,m_k;x_2,...,x_k) \ast Z(i_1-1;m_{k+1},...,m_r;x_{k+1},...,x_r) \nonumber \\ & & + \sum\limits_{j_1=1}^{N} \frac{x_{k+1}^{j_1}}{j_1^{m_{k+1}}} \; Z(j_1-1;m_1,...,m_k;x_1,...,x_k) \ast Z(j_1-1;m_{k+2},...,m_r;x_{k+2},...,x_r) \nonumber \\ & & + \sum\limits_{i=1}^{N} \frac{(x_1 x_{k+1})^i}{i^{m_1+m_{k+1}}} \; Z(i-1;m_2,...,m_k;x_2,...,x_k) \ast Z(i-1;m_{k+2},...,m_r;x_{k+2},...,x_r). \nonumber \end{eqnarray} Note that a letter $l_j$ corresponds to a pair $(m_j;x_j)$. For $l_1=(m_1;x_1)$ and $l_2=(m_2;x_2)$ the additional operation in eq.~(\ref{additional_operation}) is given by \begin{eqnarray} \left(l_1,l_2\right) & = & \left( m_1+m_2; x_1 x_2 \right).
\end{eqnarray} A simple example for the quasi-shuffle multiplication is given by \begin{eqnarray} \label{example_Li_product} \mathrm{Li}_{m_1}(x_1) \mathrm{Li}_{m_2}(x_2) & = & \mathrm{Li}_{m_1,m_2}(x_1,x_2) + \mathrm{Li}_{m_2,m_1}(x_2,x_1) + \mathrm{Li}_{m_1+m_2}(x_1x_2). \end{eqnarray} The proof that the sum representation of the multiple polylogarithms \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig4} \caption{\label{fig4} Quasi-shuffle algebra from the sum representation: The quasi-shuffle product follows from replacing the sum over the square by a sum over the lower triangle, a sum over the upper triangle, and a sum over the diagonal.} \end{center} \end{figure} fulfills the quasi-shuffle product formula in eq.~(\ref{quasi_shuffle_multiplication}) is sketched for the example in eq.~(\ref{example_Li_product}) in fig.~(\ref{fig4}) and can easily be extended to multiple polylogarithms of higher depth by recursively replacing the two outermost summations by summations over the upper triangle, the lower triangle, and the diagonal. Let us provide one further example for the quasi-shuffle product. Working out the recursive definition of the quasi-shuffle product we obtain \begin{eqnarray} \lefteqn{ \mathrm{Li}_{m_1,m_2}(x_1,x_2) \cdot \mathrm{Li}_{m_3}(x_3) = } \nonumber \\ & = & \mathrm{Li}_{m_1,m_2,m_3}(x_1,x_2,x_3) + \mathrm{Li}_{m_1,m_3,m_2}(x_1,x_3,x_2) + \mathrm{Li}_{m_3,m_1,m_2}(x_3,x_1,x_2) \nonumber \\ & & + \mathrm{Li}_{m_1,m_2+m_3}(x_1,x_2x_3) + \mathrm{Li}_{m_1+m_3,m_2}(x_1 x_3,x_2) \end{eqnarray} \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig3} \caption{\label{fig3} Pictorial representation of the quasi-shuffle multiplication law. The first three terms on the right-hand side correspond to the ordinary shuffle product, whereas the two last terms are the additional ``stuffle''-terms.} \end{center} \end{figure} This is shown pictorially in fig.~(\ref{fig3}). 
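The products above can be reproduced mechanically from the recursive definition in eq.~(\ref{def_recursive_quasi_shuffle}). A Python sketch for illustration: a letter is encoded as a pair of the weight $m_j$ and a sorted tuple of $x$-variables, so that the additional operation of eq.~(\ref{additional_operation}) adds the weights and multiplies the variables:

```python
from collections import Counter

def stuffle(w1, w2):
    # Quasi-shuffle product of words.  A letter is (m, xs): weight m and a
    # sorted tuple xs of x-variables.  The extra operation merges two letters
    # into (m1+m2, x1*x2), producing the additional "stuffle" terms.
    if not w1 or not w2:
        return Counter({w1 + w2: 1})
    out = Counter()
    for w, c in stuffle(w1[1:], w2).items():
        out[(w1[0],) + w] += c
    for w, c in stuffle(w1, w2[1:]).items():
        out[(w2[0],) + w] += c
    merged = (w1[0][0] + w2[0][0], tuple(sorted(w1[0][1] + w2[0][1])))
    for w, c in stuffle(w1[1:], w2[1:]).items():
        out[(merged,) + w] += c
    return out

l1, l2, l3 = (1, ('x1',)), (2, ('x2',)), (3, ('x3',))
print(stuffle((l1,), (l2,)))        # three terms, as in the depth-one example
print(stuffle((l1, l2), (l3,)))     # the five terms of the product above
```

The two calls reproduce the three-term and five-term expansions given in the text.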
The first three terms correspond to the ordinary shuffle product, whereas the two last terms are the additional ``stuffle''-terms. In fig.~(\ref{fig3}) we show only the $x$-variables, which are multiplied in the stuffle-terms. Not shown in fig.~(\ref{fig3}) are the indices $m_j$, which are added in the stuffle-terms. \subsubsection{Hopf algebra related to the Hodge structure} \label{section_hodge} The multiple polylogarithms are periods of a mixed Hodge-Tate structure. From this Hodge structure one obtains a third Hopf algebra as follows \cite{Goncharov:2001,Goncharov:2002b}: Let $S$ be a set of pairwise distinct points in ${\mathbb C}$. We denote \begin{eqnarray} \label{def_I_iterated_integral} I\left(z_0;z_1,z_2,...,z_k;z_{k+1}\right) & = & \int\limits_{z_0}^{z_{k+1}} \frac{dt_k}{t_k-z_k} \int\limits_{z_0}^{t_k} \frac{dt_{k-1}}{t_{k-1}-z_{k-1}} ... \int\limits_{z_0}^{t_2} \frac{dt_1}{t_1-z_1}, \end{eqnarray} together with the convention \begin{eqnarray} \label{def_I_unit} I\left(z_0;z_1\right) & = & 1. \end{eqnarray} This is a slight generalization of eq.~(\ref{Gfuncdef}), allowing the starting point $z_0$ of the integration to be different from zero. We have \begin{eqnarray} G\left(z_1,...,z_k;y\right) & = & I\left(0;z_k,...,z_1;y\right), \nonumber \\ I\left(z_0;z_1,z_2,...,z_k;z_{k+1}\right) & = & G\left(z_k-z_0,...,z_1-z_0;z_{k+1}-z_0\right). \end{eqnarray} As an algebra one now considers the ${\mathbb Q}$-algebra generated by the iterated integrals of the form in eq.~(\ref{def_I_iterated_integral}) together with the relation~(\ref{def_I_unit}). The expression ``generated by'' means that there are no further relations implied, and a product such as \begin{eqnarray} I\left(z_0;z_1;z_2\right) \cdot I\left(z_3;z_4;z_5\right) \end{eqnarray} is left as it is. The co-product is more interesting. 
We define it by treating the quantities $I(z_0;z_1,...,z_k;z_{k+1})$ as abstract objects and we set \begin{eqnarray} \lefteqn{ \Delta I\left(z_0;z_1,z_2,...,z_k;z_{k+1}\right) = \sum\limits_{r=0}^k \;\; \sum\limits_{0 = i_0 < i_1 < ... < i_r < i_{r+1} = k+1} } & & \nonumber \\ & & \prod\limits_{p=0}^r I\left(z_{i_p};z_{i_p+1},z_{i_p+2},...,z_{i_{p+1}-1};z_{i_{p+1}}\right) \otimes I\left(z_0;z_{i_1},z_{i_2},...,z_{i_r};z_{k+1}\right). \end{eqnarray} In \cite{Goncharov:2002b} it is shown that this defines a Hopf algebra. We remind the reader that we use a sloppy notation, as explained in section~\ref{section_notation}. For rigorous mathematicians the co-product is defined on the algebra of strings of the form $(z_0;z_1,...,z_k;z_{k+1})$. Furthermore, one can consider this Hopf algebra modulo the following relations: For identical start and end points of the integration one can impose the shuffle relation: \begin{eqnarray} \label{I_shuffle} I(z_0;z_1,...,z_k; z_{r+1}) \cdot I(z_0;z_{k+1},...,z_r; z_{r+1}) & = & \sum\limits_{\mathrm{shuffles} \; \sigma} I(z_0; z_{\sigma(1)},...,z_{\sigma(r)}; z_{r+1}). \nonumber \\ \end{eqnarray} The second relation is the path composition formula: \begin{eqnarray} I(z_0;z_1,...,z_r;z_{r+1}) & = & \sum\limits_{k=0}^r I(z_0;z_1,...,z_k; y) \cdot I(y;z_{k+1},...,z_r; z_{r+1}). \end{eqnarray} Finally, one sets for $k \ge 1$ \begin{eqnarray} \label{I_zero} I\left(z_0;z_1,...,z_k;z_0\right) & = & 0. \end{eqnarray} Imposing the relations in eqs.~(\ref{I_shuffle})-(\ref{I_zero}) still produces a Hopf algebra. Let us emphasize that in eq.~(\ref{def_I_iterated_integral}) we assume the points $z_1$, ..., $z_k$ to be pairwise distinct and each point to be different from both $z_0$ and $z_{k+1}$. If this condition is not met, we might have to deal with divergent integrals. This point is discussed in detail in the original literature \cite{Goncharov:2001,Goncharov:2002b} and the lectures by C. Duhr \cite{Duhr:2014woa}.
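The combinatorial content of this co-product is easy to enumerate by computer. The following Python sketch (illustrative only, assuming pairwise distinct points so that the relation $I(z_0;...;z_0)=0$ never triggers) lists the terms of $\Delta I(z_0;z_1,...,z_k;z_{k+1})$, dropping trivial factors $I(a;b)=1$:

```python
from itertools import combinations

def hodge_coproduct(z):
    # Terms of Delta I(z0; z1,...,zk; z_{k+1}): sum over r and over index
    # subsets 0 = i0 < i1 < ... < ir < i_{r+1} = k+1.  The argument z is the
    # tuple (z0, z1, ..., zk, z_{k+1}), with the points pairwise distinct.
    # Each term is (tuple of left factors, right factor); a factor
    # (a, ..., b) stands for I(a; ...; b), and I(a;b) = 1 is dropped.
    k = len(z) - 2
    terms = []
    for r in range(k + 1):
        for subset in combinations(range(1, k + 1), r):
            idx = (0,) + subset + (k + 1,)
            left = tuple(
                (z[idx[p]],) + z[idx[p] + 1:idx[p + 1]] + (z[idx[p + 1]],)
                for p in range(r + 1) if idx[p + 1] - idx[p] > 1)
            right = (z[0],) + tuple(z[i] for i in subset) + (z[-1],)
            terms.append((left, right))
    return terms

for left, right in hodge_coproduct((0, 'z2', 'z1', 1)):
    print(left, '(x)', right)
```

For $(z_0;z_1,z_2;z_3) = (0;z_2,z_1;1)$ this enumeration produces exactly four terms, matching the explicit formula for $\Delta^{\mathrm{Hodge}} I(0;z_2,z_1;1)$ worked out in the comparison of the co-products below.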
\subsubsection{Comparison of the various coproducts} We have now seen three different co-products for the pre-images of the multiple polylogarithms. We recall that the multiple polylogarithms can be viewed as a map from the shuffle algebra to ${\mathbb C}$ (discussed in section~\ref{section_shuffle}), as a map from the quasi-shuffle algebra to ${\mathbb C}$ (discussed in section~\ref{section_quasi_shuffle}) or as a map from the algebra of strings of the form $(z_0;z_1,...,z_k;z_{k+1})$ to ${\mathbb C}$ (discussed in section~\ref{section_hodge}). In all three cases we have a co-product on the domain of the map (but not on the co-domain ${\mathbb C}$). We remind the reader of our notation introduced in section~\ref{section_notation}. It is instructive to discuss the differences between the various co-products. We consider the example \begin{eqnarray} G\left(z_1,z_2;1\right) \;\; = \;\; \mathrm{Li}_{1,1}\left(\frac{1}{z_1},\frac{z_1}{z_2}\right) \;\; = \;\; I\left(0;z_2,z_1;1\right). \end{eqnarray} For the shuffle algebra we have \begin{eqnarray} \Delta^{\mathrm{shuffle}} G\left(z_1,z_2;1\right) & = & G\left(z_1,z_2;1\right) \otimes e + e \otimes G\left(z_1,z_2;1\right) \nonumber \\ & & + G\left(z_2;1\right) \otimes G\left(z_1;1\right). \end{eqnarray} For the quasi-shuffle algebra we obtain \begin{eqnarray} \Delta^{\mathrm{quasi-shuffle}} \mathrm{Li}_{1,1}\left(\frac{1}{z_1},\frac{z_1}{z_2}\right) & = & \mathrm{Li}_{1,1}\left(\frac{1}{z_1},\frac{z_1}{z_2}\right) \otimes e + e \otimes \mathrm{Li}_{1,1}\left(\frac{1}{z_1},\frac{z_1}{z_2}\right) \nonumber \\ & & + \mathrm{Li}_{1}\left(\frac{z_1}{z_2}\right) \otimes \mathrm{Li}_{1}\left(\frac{1}{z_1}\right). \end{eqnarray} Translated to the $G$-notation this reads \begin{eqnarray} \Delta^{\mathrm{quasi-shuffle}} G\left(z_1,z_2;1\right) & = & G\left(z_1,z_2;1\right) \otimes e + e \otimes G\left(z_1,z_2;1\right) \nonumber \\ & & + G\left(\frac{z_2}{z_1};1\right) \otimes G\left(z_1;1\right).
\end{eqnarray} Finally, for the Hopf algebra related to the Hodge structure we obtain \begin{eqnarray} \Delta^{\mathrm{Hodge}} I\left(0;z_2,z_1;1\right) & = & I\left(0;z_2,z_1;1\right) \otimes e + e \otimes I\left(0;z_2,z_1;1\right) \\ & & + I\left(0;z_2;z_1\right) \otimes I\left(0;z_1;1\right) + I\left(z_2;z_1;1\right) \otimes I\left(0;z_2;1\right). \nonumber \end{eqnarray} Again, translating to the $G$-notation we find \begin{eqnarray} \Delta^{\mathrm{Hodge}} G\left(z_1,z_2;1\right) & = & G\left(z_1,z_2;1\right) \otimes e + e \otimes G\left(z_1,z_2;1\right) \\ & & + G\left(\frac{z_2}{z_1};1\right) \otimes G\left(z_1;1\right) + G\left(\frac{z_1-z_2}{1-z_2};1\right) \otimes G\left(z_2;1\right). \nonumber \end{eqnarray} We see that the three co-products are different. \section{Dyson-Schwinger equations} \label{sec:dyson_schwinger} We now turn our attention to Dyson-Schwinger equations. Among the fundamental concepts of quantum field theory are Green's functions. For a specified set of external particles, the Green's function can be thought of as the set of all Feynman diagrams (to all loop orders) with the specified external particles. It will be convenient to represent the set of all possible Feynman diagrams for a given set of external particles by a blob. \begin{figure} \begin{center} \includegraphics[scale=0.7]{fig6a} \includegraphics[scale=0.7]{fig6aa} \includegraphics[scale=0.7]{fig6b} \includegraphics[scale=0.7]{fig6bb} \includegraphics[scale=0.7]{fig6c} \caption{\label{fig6} Two-point and three-point functions in QED. In the two-point case the blob represents all possible Feynman diagrams with the specified set of external particles, in the three-point case the blob represents all possible one-particle irreducible Feynman diagrams with the specified set of external particles. } \end{center} \end{figure} In fig.~(\ref{fig6}) this is illustrated for the two-point and three-point functions in quantum electrodynamics (QED).
In QED we have as two-point function the electron propagator and the photon propagator. As three-point function we have the electron-photon-vertex function. The Dyson-Schwinger equations are integral equations among Green's functions. As an example, the Dyson-Schwinger equations for the electron propagator and the photon propagator in QED \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig7a} \includegraphics[scale=0.8]{fig7b} \caption{\label{fig7} Dyson-Schwinger equations for the electron propagator and the photon propagator in QED.} \end{center} \end{figure} are shown in fig.~(\ref{fig7}). Note that the Dyson-Schwinger equations for the propagators involve Green's function for the vertex. In other words, a Dyson-Schwinger equation for a two-point function involves a three-point function. Let us then look at the Dyson-Schwinger equation for the electron-photon vertex. \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig8} \caption{\label{fig8} Dyson-Schwinger equation for the electron-photon vertex.} \end{center} \end{figure} This Dyson-Schwinger equation is shown in fig.~(\ref{fig8}) and involves the electron-positron scattering kernel, depending on four external particles. This leads to a coupled system of Dyson-Schwinger equations for Green's functions involving all possible numbers of external particles. In order to solve a Dyson-Schwinger equation we have to truncate the system. We discuss this for a simple example. Consider a toy model consisting of a fermion and a scalar particle. We perform two simplifications: First, we linearize the Dyson-Schwinger equation. In our toy model this implies that the Dyson-Schwinger equation for the scalar-fermion vertex reduces \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig9} \caption{\label{fig9} Linearised Dyson-Schwinger equation for a vertex} \end{center} \end{figure} to that shown in fig.~(\ref{fig9}). 
In comparison with fig.~(\ref{fig8}) we have replaced the full fermion propagator (the two blue blobs) with the corresponding Born propagator. In this way, the unknown function (the scalar-fermion vertex function) appears linearly on the right-hand side (not multiplied by any other unknown function). Second, we truncate the kernel at a certain loop order. \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig10} \caption{\label{fig10} Truncation of the kernel at two-loop order.} \end{center} \end{figure} For example, we could truncate the kernel at two-loop order, shown in fig.~(\ref{fig10}). The coupling constant is denoted by $a$. After this truncation, the kernel can be considered a known function. In other words, with a truncated kernel the Dyson-Schwinger equation in fig.~(\ref{fig9}) is a linear integral equation for the unknown scalar-fermion vertex (the red blob). Let us make one further simplification by setting the momentum of the scalar external particle to zero, as shown in fig.~(\ref{fig17}), and let us consider massless particles. \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig17} \caption{\label{fig17} Simplified kinematics for the vertex function: The external scalar particle has zero momentum. The fermion and the anti-fermion have (outgoing) momentum $q$ and $(-q)$, respectively.} \end{center} \end{figure} The renormalized vertex function $G_R(a,L)$ then only depends on the coupling $a$ and the quantity \begin{eqnarray} L & = & \ln\left(\frac{-q^2}{\mu^2}\right), \end{eqnarray} where $\mu$ is an arbitrary scale. As renormalization condition we impose \begin{eqnarray} \label{renorm_condition} G_R\left(a,0\right) & = & 1. \end{eqnarray} From dimensional analysis it follows that $G_R(a,L)$ must be of the form \begin{eqnarray} \label{functional_form_G_R} G_R\left(a,L\right) & = & \exp\left(-\gamma_G\left(a\right) L \right). \end{eqnarray} The anomalous dimension $\gamma_G$ depends only on the coupling $a$, but not on $L$.
Plugging eq.~(\ref{functional_form_G_R}) into the truncated and linearized Dyson-Schwinger equation one obtains \begin{eqnarray} \label{Dyson_Schwinger_example} \exp\left(-\gamma_G(a)L \right) & = & 1 + \left( \exp\left(-\gamma_G(a)L\right) -1\right)\left[aF_1(\gamma_G)+a^2F_2(\gamma_G)\right], \end{eqnarray} where $F_1$ and $F_2$ are the Mellin-transforms of the one-loop and two-loop integral, respectively. Working these out, one finds \begin{eqnarray} \label{eq_gamma_G_1} 1 & = & -a\frac{1}{\gamma_G(1-\gamma_G)} \\ & & -a^2 \left\{ \frac{1}{\gamma_G^2(1-\gamma_G)^2} -4\sum_{n=1}^\infty n(1-2^{-2n})\zeta_{2n+1}\left[\gamma_G^{2n-2}+(1-\gamma_G)^{2n-2}\right] \right\}. \nonumber \end{eqnarray} The sum can be obtained: \begin{eqnarray} \label{eq_gamma_G_2} \lefteqn{ -4\sum_{n=1}^\infty n(1-2^{-2n})\zeta_{2n+1}\left[\gamma_G^{2n-2}+(1-\gamma_G)^{2n-2}\right] = } & & \\ & = & \frac{1}{\gamma_G} \left[ \psi'\left(1+\gamma_G\right) - \psi'\left(1-\gamma_G\right) \right] + \frac{1}{1-\gamma_G} \left[ \psi'\left(2-\gamma_G\right) - \psi'\left(\gamma_G\right) \right] \nonumber \\ & & - \frac{1}{2\gamma_G} \left[ \psi'\left(1+\frac{\gamma_G}{2}\right) - \psi'\left(1-\frac{\gamma_G}{2}\right) \right] \nonumber \\ & & - \frac{1}{2(1-\gamma_G)} \left[ \psi'\left(\frac{3-\gamma_G}{2}\right) - \psi'\left(\frac{1+\gamma_G}{2}\right) \right]. \nonumber \end{eqnarray} Eq.~(\ref{eq_gamma_G_1}) and eq.~(\ref{eq_gamma_G_1}) implicitly define $\gamma_G$ as a function of $a$. Given $a$, we may solve numerically for $\gamma_G$ \cite{Bierenbaum:2006gn}. Let us now consider the Hopf algebra side of this example. \begin{figure} \begin{center} \includegraphics[scale=0.7]{fig11a} \includegraphics[scale=0.7]{fig11b} \caption{\label{fig11} Feynman diagrams computed by the truncated and linearized Dyson-Schwinger equation.} \end{center} \end{figure} Fig.~(\ref{fig11}) shows some Feynman diagrams, which are computed by the truncated and linearized Dyson-Schwinger equation. 
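To make the numerical solution concrete, consider as an illustration the further truncation to one loop, i.e. keeping only the $aF_1$ term in eq.~(\ref{eq_gamma_G_1}). The gap equation then reduces to the quadratic equation $1 = -a/(\gamma_G(1-\gamma_G))$, whose physical branch behaves as $\gamma_G \to -a$ for small $a$. A minimal Python sketch (the full two-loop equation could be solved by the same bisection, with the trigamma terms of eq.~(\ref{eq_gamma_G_2}) added to the root function):

```python
import math

def gamma_closed(a):
    # One-loop truncation of the gap equation: 1 = -a / (gamma*(1-gamma)),
    # i.e. gamma^2 - gamma - a = 0, on the branch with gamma -> -a for small a.
    return 0.5 * (1.0 - math.sqrt(1.0 + 4.0 * a))

def gamma_bisect(a, lo=-1.0, hi=0.0, steps=100):
    # The same root found by bisection on g(gamma) = gamma*(1-gamma) + a,
    # which changes sign on [-1, 0] for 0 < a < 2.
    g = lambda x: x * (1.0 - x) + a
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

a = 0.1
print(gamma_closed(a), gamma_bisect(a))  # perturbatively gamma = -a + a^2 - ...
```

Both methods agree to machine precision, and the small-$a$ behaviour reproduces the leading perturbative coefficient.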
\begin{figure} \begin{center} \includegraphics[scale=0.8]{fig12} \caption{\label{fig12} Two letters, corresponding to the one-loop and two-loop contribution to the truncated kernel, respectively.} \end{center} \end{figure} It is convenient to introduce two letters $u$ and $v$, as shown in fig.~(\ref{fig12}). The two letters correspond to the one-loop and two-loop contribution to the truncated kernel in fig.~(\ref{fig10}). With the help of these two letters, we may represent each Feynman diagram in fig.~(\ref{fig11}) by a word \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig13} \caption{\label{fig13} Feynman diagram from fig.~(\ref{fig11}) as represented by a word in the two letters $u$ and $v$.} \end{center} \end{figure} in these two letters, an example is shown in fig.~(\ref{fig13}). Let us further define two insertion operators $B_+^u$ and $B_+^v$ by their action on any word $w$: \begin{eqnarray} B_+^u w = u w, & & B_+^v w = v w. \end{eqnarray} With the help of these two operators we may re-write the Dyson-Schwinger equation in fig.~(\ref{fig9}) as \begin{eqnarray} \label{combinatorial_Dyson_Schwinger} X(a) & = & 1 + a B_+^u X(a) + a^2 B_+^v X(a). \end{eqnarray} Eq.~(\ref{combinatorial_Dyson_Schwinger}) is known as a combinatorial Dyson-Schwinger equation. Comparing the combinatorial Dyson-Schwinger equation~(\ref{combinatorial_Dyson_Schwinger}) with eq.~(\ref{Dyson_Schwinger_example}), we see that inserting the letter $u$ corresponds in Mellin space to the multiplication of the Mellin-transform of the one-loop integral with the subtracted Green's function. The subtraction of the Green's function implements ultraviolet renormalization and the renormalization condition in eq.~(\ref{renorm_condition}). In a similar way, the letter $v$ corresponds to the multiplication of the Mellin-transform of the two-loop integral with the subtracted Green's function.
A solution to eq.~(\ref{combinatorial_Dyson_Schwinger}) is given by \begin{eqnarray} X(a) & = & \exp_\Sha\left( a u + a^2 v \right), \end{eqnarray} where $\exp_\Sha$ denotes the shuffle-exponential \begin{eqnarray} \exp_\Sha\left(w\right) & = & \sum\limits_{n=0}^\infty \frac{1}{n!} w^{\Sha n} \end{eqnarray} and $\Sha$ denotes the shuffle product: \begin{eqnarray} u^{\Sha n} & = & \underbrace{\; u \; {\scriptstyle \Sha} \; u \; {\scriptstyle \Sha} \; ... \; {\scriptstyle \Sha} \; u \;}_{n} = n! \; \underbrace{\; u u ... u \;}_{n}, \nonumber \\ u {\scriptstyle \Sha} v & = & u v + v u. \end{eqnarray} In terms of Hopf algebra we obtain the shuffle algebra in the two letters $u$ and $v$. For the first few terms of $X(a)$ in an expansion in $a$ we have \begin{eqnarray} X(a) & = & 1 + a u + a^2 \left( u u + v \right) + a^3 \left( u u u + u v + v u \right) + ... \end{eqnarray} $X(a)$ is a group-like element in this Hopf algebra. Therefore, for the co-product we have \begin{eqnarray} \Delta X(a) & = & X(a) \otimes X(a). \end{eqnarray} Combinatorial Hopf algebras are currently the subject of studies \cite{Bergbauer:2005fb,Kreimer:2006gm,Foissy:2011,Krueger:2014poa}. Although we only discussed a simple example here, more complicated cases can be envisaged. The challenge is to map the iterated structure of a Feynman graph to the iterated structure of the functions to which this graph evaluates. Techniques such as Mellin-Barnes \cite{Bierenbaum:2003ud,Kreimer:2012nk,Panzer:2014kia}, linear reducibility \cite{Brown:2008}, and algorithms based on nested sums \cite{Moch:2001zr} may prove useful in this respect. Examples of recent research in the field of Dyson-Schwinger equations can be found in \cite{Broadhurst:2000dq,Kreimer:2006ua,vanBaalen:2008tc,vanBaalen:2009hu,Bellon:2013sya,Bellon:2014,Clavier:2014osa}.
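The shuffle-exponential expansion quoted above can be checked order by order; the short script below (our own illustration, using the standard recursive definition of the shuffle product) reproduces the coefficients of $X(a)$ up to $a^3$:

```python
from collections import Counter
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def shuffle(w1, w2):
    """Shuffle product of two words; returns a multiset of words."""
    if not w1:
        return Counter({w2: 1})
    if not w2:
        return Counter({w1: 1})
    out = Counter()
    for w, n in shuffle(w1[1:], w2).items():
        out[w1[0] + w] += n
    for w, n in shuffle(w1, w2[1:]).items():
        out[w2[0] + w] += n
    return out

def exp_shuffle(max_weight):
    """Coefficients of exp_sha(a*u + a^2*v), graded by the power of a
    (letter u has weight 1, letter v has weight 2)."""
    weight = lambda w: w.count("u") + 2*w.count("v")
    series = {"": Fraction(1)}
    power = {"": Fraction(1)}      # current shuffle power w^(sha n)
    fact = 1
    for n in range(1, max_weight + 1):
        fact *= n
        nxt = {}
        for w, cw in power.items():
            for letter in ("u", "v"):
                for s, mult in shuffle(w, letter).items():
                    if weight(s) <= max_weight:
                        nxt[s] = nxt.get(s, Fraction(0)) + cw*mult
        power = nxt
        for w, cw in power.items():
            series[w] = series.get(w, Fraction(0)) + cw/fact
    return series

X3 = exp_shuffle(3)
print({w: c for w, c in sorted(X3.items())})
```

Up to weight three every admissible word appears with coefficient $1$, reproducing $X(a) = 1 + a u + a^2(uu+v) + a^3(uuu+uv+vu) + \ldots$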
\section{Introduction} \label{intro} The C-metric is one of the earliest known exact solutions to Einstein gravity, and many of its features remain relevant for various reasons today. Its compact and elegant form appears almost oblivious to the passage of time, as new features and applications have been found with each passing decade. \par The C-metric was one of the building blocks used to construct the five-dimensional black ring \cite{Emparan:2001wk}, and to provide a description of localised braneworld black holes \cite{Emparan:1999wa,Emparan:1999fd}. In the context of the AdS/CFT correspondence, the C-metric with a negative cosmological constant was used to describe black funnels and droplets \cite{Hubeny:2009ru,Hubeny:2009kz}. Further analysis of its physical properties and causal structure continues to reveal interesting physics. (See, e.g., Refs.~\cite{Kinnersley:1970zw,Farhoosh:1981kc,Bonnor:1983,Dias:2002mi,Krtous:2003tc,Krtous:2005ej} and related references therein.) Not long ago, Hong and Teo \cite{Hong:2003gx} cast the C-metric in a convenient factorised form in which the solution is parametrised in terms of the roots of its structure functions. Recently, this idea was extended to C-metrics with non-zero cosmological constant in \cite{Chen:2015vma}. \par With the importance of the C-metric in Einstein gravity, it is natural to study analogous solutions in non-Einstein theories of gravity such as Weyl conformal gravity \cite{Weyl1,Weyl2,Bach1921}. Instead of the usual Einstein-Hilbert action, this formulation of gravity is based on local conformal invariance, where the action involves the square of the Weyl tensor. Varying this action results in fourth-order equations of motion for the metric functions, though in the vacuum case, solutions of Einstein gravity are also vacuum solutions of conformal gravity.
\par Among the most widely used solutions in conformal gravity is the spherically symmetric solution obtained by Mannheim and Kazanas (MK) \cite{Mannheim:1988dj}. This solution resembles the Schwarzschild-(A)dS solution, with an additional linear term in its lapse function. The Newtonian limit of these solutions was investigated in \cite{Mannheim:1992tr,Barabash:1999bj}. The charged generalisation of the MK metric was given by Riegert \cite{Riegert:1984zz} and also by Mannheim and Kazanas \cite{1991PhRvD..44..417M}, where the rotating generalisation was also given. Other types of solutions were obtained more recently, such as spacetimes with cylindrical symmetry \cite{Brihaye:2009xc,Verbin:2010tq,Said:2012pm}, the Kerr-NUT-(A)dS solution \cite{Liu:2012xn}, and topological black holes \cite{Klemm:1998kf,Cognola:2011nj,Lu:2012xu}. Several solutions have also been studied in theories beyond four-dimensional conformal gravity, such as six-dimensional conformal gravity \cite{Lu:2013hx}, and a gravitational action that includes both the Einstein and conformal gravity terms \cite{Lu:2012xu}. \par One of the most promising features of conformal gravity is that it provides a likely explanation of astrophysical phenomena not accounted for in Einsteinian gravity, such as the fitting of galactic rotation curves without the need of introducing dark matter \cite{Mannheim:2010ti,Deliduman:2015vnu}. Furthermore, the constraints on the parameters obtained from the fitting are also consistent with observations of planetary perihelion precession \cite{Sultana:2012qp}. Other experimental tests of gravity have also been investigated, such as gravitational time delay \cite{Edery:1997hu} and gravitational lensing \cite{Edery:2001at,Sultana:2010zz,Cattani:2013dla,Villanueva:2013gga}. \par The (neutral) C-metric in conformal gravity was studied in detail recently by Meng and Liu in \cite{Meng:2016gyt}.
In their paper, the C-metric solution also includes a conformally coupled scalar field. A somewhat similar metric was briefly considered earlier in \cite{Liu:2010sz,Lu:2012xu}, where the metric is in a form that is conformal to the Pleba\'{n}ski-Demia\'{n}ski metric. \par In this work, we attempt to derive a solution corresponding to a charged C-metric in conformal gravity and investigate its various properties, in a similar vein to what was previously done for the C-metric in Einstein gravity. In particular, we study the domain structure \cite{Chen:2015vma,Chen:2015zoa} of the solutions, which involves analysing the structure of the Lorentzian coordinate regions in a two-dimensional plot.\footnote{The term `\emph{domain structure}' should not be confused with the formalism of the same name in Ref.~\cite{Harmark:2009dh}.} We also aim to show that the conformal gravity C-metric reduces to the (charged) MK metric under an appropriate limit, similar to how the C-metric reduces to the Reissner-Nordstr\"{o}m solution in Einstein gravity \cite{Mann:1996gj,Hong:2003gx}. \par This paper is organised as follows. In Sec.~\ref{derivation} we present the derivation of the metric using a C-metric-type ansatz and solve the Bach-Maxwell equations describing conformal gravity coupled to an electromagnetic field. Subsequently, in Sec.~\ref{sym} we focus on a special choice of parameters that affords various symmetries and provides a convenient form in which one of the structure functions is factorised. In Sec.~\ref{domain} we study the domain structure of the metric and find its possible Lorentzian coordinate regions. Some physical properties of the spacetime are studied in Sec.~\ref{physical}, and various interesting limiting cases of the metric are considered in Sec.~\ref{limits}. This paper ends with some closing remarks in Sec.~\ref{conclusion}.
\section{Derivation of the metric} \label{derivation} Conformal Weyl gravity is described by the action\footnote{For the expression of the gravitational action we follow the notation of \cite{Riegert:1984zz}, with $(-+++)$ for a Lorentzian signature and a convenient normalisation of the coupling constant to the Maxwell field.} \begin{align} I&=\frac{1}{2\kappa}\int\mathrm{d}^4x\,\sqrt{-g}\brac{\mathcal{C}_{\mu\nu\rho\sigma}\mathcal{C}^{\mu\nu\rho\sigma}-\mathcal{F}^2}, \end{align} where $\mathcal{C}$ is the conformal Weyl tensor and $\mathcal{F}=\mathrm{d}\mathcal{A}$ is the Maxwell 2-form flux arising from a 1-form potential $\mathcal{A}$. Varying the action with respect to the metric $g$ and $\mathcal{A}$ gives the Bach-Maxwell equations \begin{align} W_{\mu\nu}\equiv\brac{2\nabla^\rho\nabla^\sigma+R^{\rho\sigma}}\mathcal{C}_{\mu\rho\sigma\nu}&=2\mathcal{F}_{\mu\lambda}{\mathcal{F}_\nu}^\lambda-\frac{1}{2}\mathcal{F}^2 g_{\mu\nu},\label{eom_Bach}\\ \nabla_\mu\mathcal{F}^{\mu\nu}&=0. \label{eom_Max} \end{align} \par We first solve Eq.~\Eqref{eom_Bach} in the vacuum case ($W_{\mu\nu}=0$), beginning with the ansatz \begin{align} \mathrm{d} s^2&=\frac{1}{(x-y)^2}\brac{Q(y)\mathrm{d} t^2-\frac{\mathrm{d} y^2}{Q(y)}+\frac{\mathrm{d} x^2}{P(x)}+P(x)\mathrm{d}\phi^2}, \end{align} where $P(x)$ and $Q(y)$ are functions of only $x$ and $y$, respectively. In the vacuum case, the linear combination $W_{xx}-W_{yy}=0$ leads to \begin{align} PP''''+QQ''''=0, \end{align} where primes denote derivatives with respect to their own arguments. This suggests a separation constant $K$ where $PP''''=K=-QQ''''$. Using this separation constant to eliminate the fourth derivatives in $W_{tt}$ and $W_{\phi\phi}$ leads to a single equation, \begin{align} 2Q'Q'''-Q''^2&=6K+2P'P'''-P''^2, \end{align} which may also be separated with another separation constant $4C$. 
Solving the resulting third-order ordinary differential equations gives third-order polynomials for $P$ and $Q$ with the requirement that $K=0$. The result is \begin{align} P(x)&=\frac{\brac{p_2^2+C}}{3p_1}x^3+p_2x^2+p_1x+p_0,\nonumber\\ Q(y)&=\frac{\brac{q_2^2+C}}{3q_1}y^3+q_2y^2+q_1y+q_0,\nonumber \end{align} where $p_0,\ldots, p_2$ and $q_0,\ldots,q_2$ are constant coefficients. \par To generalise this solution to include charges, we assume a Maxwell potential that takes the form $\mathcal{A}=ey\,\mathrm{d} t+gx\,\mathrm{d}\phi$, where $e$ and $g$ respectively denote the electric and magnetic charge parameter. Solving the equations of motion requires a slight modification of the $Q$ polynomial. The result is a nine-parameter metric \begin{align} \mathrm{d} s^2&=\frac{1}{(x-y)^2}\brac{Q(y)\mathrm{d} t^2-\frac{\mathrm{d} y^2}{Q(y)}+\frac{\mathrm{d} x^2}{P(x)}+P(x)\mathrm{d}\phi^2},\nonumber\\ P(x)&=\frac{\brac{p_2^2+C}}{3p_1}x^3+p_2x^2+p_1x+p_0,\nonumber\\ Q(y)&=\frac{\sbrac{q_2^2+C+3\brac{e^2+g^2}}}{3q_1}y^3+q_2y^2+q_1y+q_0, \end{align} which, together with the Maxwell potential $\mathcal{A}=ey\,\mathrm{d} t+gx\,\mathrm{d}\phi$, solves the Bach-Maxwell equations \Eqref{eom_Bach} and \Eqref{eom_Max}. \section{Additional symmetries} \label{sym} For certain special choices of $p_i$ and $q_i$, the metric will carry additional symmetries which allow further simplifications. For example, the solution considered in \cite{Meng:2016gyt} corresponds to the choice \begin{align} C&=\frac{q_2^2p_1-p_2^2q_1}{q_1-p_1},\nonumber\\ p_2&=\frac{1}{2} C_2,\quad q_2=-\frac{1}{2}\brac{C_1e_2+C_2},\nonumber\\ p_1&=C_3,\quad q_1=\frac{1}{2} C_1e_2^2+C_2e_2+C_3,\nonumber\\ p_0&=C_4,\quad q_0=-\brac{\frac{1}{6}C_1e_2^2+\frac{1}{2} C_2e_2^2+C_3e_2+C_4}. \end{align} This choice brings $P$ and $Q$ to a form in which the neutral solution is characterised by three parameters.
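As a consistency check of the separation argument above, a cubic $P$ with leading coefficient $(p_2^2+C)/(3p_1)$ satisfies $P''''=0$ (hence $K=0$) and $2P'P'''-P''^2=4C$ identically, and likewise for $Q$ in the uncharged case. A quick numerical sketch of this check (our own, with arbitrary test values):

```python
import numpy as np

C, p0, p1, p2 = 0.7, -0.3, 1.1, 0.4               # arbitrary test values
P = np.array([(p2**2 + C)/(3*p1), p2, p1, p0])    # cubic coefficients, highest first

d1, d2, d3 = np.polyder(P, 1), np.polyder(P, 2), np.polyder(P, 3)

# P'''' = 0 automatically for a cubic, consistent with K = 0;
# 2 P' P''' - P''^2 collapses to the constant 4C
expr = np.polysub(2*np.polymul(d1, d3), np.polymul(d2, d2))
print(expr)   # [0, 0, 4C] up to rounding
```

All $x$-dependent coefficients cancel, leaving only the separation constant $4C$.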
\par In this paper, we shall focus our attention on the following choice of parameters: \begin{align} p_1=q_1,\quad |p_2|=|q_2|. \label{choice} \end{align} Note that the second condition leads to two possible choices, $p_2=\pm q_2$. We can encode the two distinct choices with $\epsilon=\pm 1$, and upon renaming the other constants, the metric reduces to \begin{align} \mathrm{d} s^2&=\frac{1}{(x-y)^2}\brac{Q(y)\mathrm{d} t^2-\frac{\mathrm{d} y^2}{Q(y)}+\frac{\mathrm{d} x^2}{P(x)}+P(x)\mathrm{d}\phi^2},\nonumber\\ P(x)&=c_0+c_1x+c_2x^2+c_3x^3,\nonumber\\ Q(y)&=\alpha+c_0+c_1y+\epsilon c_2y^2+\brac{c_3-\frac{e^2+g^2}{c_1}}y^3.\label{conC} \end{align} In this form, Eq.~\Eqref{conC} has additional similarities to its counterpart in Einstein gravity, which we will explore further in the following sections. \par To organise our discussion below, we shall denote the case $\epsilon=1$ as Class I and $\epsilon=-1$ as Class II. One notable feature we see in \Eqref{conC} is that the charge term is cubic, not quartic as in Einstein-Maxwell theory. Therefore, the introduction of charges does not introduce an inner horizon to the spacetime. This is similar to the case of the charged MK solution, where the inner horizon is also absent. Furthermore, we note another departure from Einstein-Maxwell theory in the relation \begin{align} Q(\xi)-P(\xi)=\alpha+(\epsilon-1)c_2\xi^2-\frac{e^2+g^2}{c_1}\xi^3, \label{PQ_diff} \end{align} so that in general, the two structure functions are not identical up to a constant shift. \par It follows from Eq.~\Eqref{PQ_diff} that in the presence of charges and/or $\epsilon=-1$, the metric does not have the continuous coordinate-translation symmetries enjoyed by its Einstein-Maxwell counterpart. This constrains our ability to fix or eliminate the remaining parameters to cast the metric in a convenient form. \par Nevertheless, we can at least completely factorise one of the structure functions.
If we consider factorising $P$, the metric can be reparametrised by introducing \begin{align} c_0=-\mu abc,\quad c_1=\mu(ab+ac+bc),\quad c_2=-\mu(a+b+c),\quad c_3=\mu. \end{align} With this parametrisation, Eq.~\Eqref{conC} becomes \begin{align} \mathrm{d} s^2&=\frac{1}{(x-y)^2}\brac{Q(y)\mathrm{d} t^2-\frac{\mathrm{d} y^2}{Q(y)}+\frac{\mathrm{d} x^2}{P(x)}+P(x)\mathrm{d}\phi^2},\nonumber\\ P(x)&=\mu (x-a)(x-b)(x-c),\nonumber\\ Q(y)&=\brac{\mu -\frac{e^2+g^2}{\mu (ab+ac+bc)}}y^3-\epsilon \mu (a+b+c)y^2+\mu (ab+ac+bc)y\nonumber\\ &\quad-\mu abc+\alpha, \label{fconC2} \end{align} and the Maxwell potential remains unchanged, \begin{align} \mathcal{A}=ey\,\mathrm{d} t+gx\,\mathrm{d}\phi. \label{Max} \end{align} This metric \Eqref{fconC2} and potential \Eqref{Max} will be the form used throughout the rest of this paper. In this form, $P$ is assumed to have real roots. \par In this form, the solution is invariant under the following transformations: \begin{enumerate} \item Rescaling symmetry, \begin{align} x\rightarrow \lambda x,\quad y&\rightarrow\lambda y,\quad t\rightarrow\lambda t,\quad \phi\rightarrow\lambda\phi,\nonumber\\ \mu\rightarrow\frac{\mu}{\lambda^3},\quad a&\rightarrow\lambda a,\quad b\rightarrow\lambda b,\quad c\rightarrow\lambda c,\nonumber\\ e&\rightarrow\frac{e}{\lambda^2},\quad g\rightarrow\frac{g}{\lambda^2},\label{rescaling_sym} \end{align} for a non-zero, positive constant $\lambda$. \item Reflection symmetry, \begin{align} x&\rightarrow -x,\quad y\rightarrow -y,\quad t\rightarrow-t,\quad \phi\rightarrow-\phi,\nonumber\\ \mu&\rightarrow-\mu,\quad a\rightarrow-a,\quad b\rightarrow-b,\quad c\rightarrow-c. \end{align} \item Parameter symmetry, \begin{align} a\leftrightarrow b,\quad a\leftrightarrow c,\quad b\leftrightarrow c. 
\end{align} \item Coordinate symmetry, \begin{align} x\leftrightarrow y, \end{align} followed by double-Wick rotations on the pairs $(t,\phi)$ and $(e,g)$, \begin{align} t&\rightarrow\mathrm{i}\phi,\quad\phi\rightarrow\mathrm{i} t,\nonumber\\ e&\rightarrow \mathrm{i} g,\quad g\rightarrow \mathrm{i} e. \end{align} \end{enumerate} Clearly, allowing $\lambda<0$ in the rescaling symmetry \Eqref{rescaling_sym} is equivalent to a positive rescaling followed by a reflection. If we invoke the coordinate symmetry on Eq.~\Eqref{fconC2}, we arrive at a form where $Q$ is factorised: \begin{align} \mathrm{d} s^2&=\frac{1}{(x-y)^2}\brac{Q(y)\mathrm{d} t^2-\frac{\mathrm{d} y^2}{Q(y)}+\frac{\mathrm{d} x^2}{P(x)}+P(x)\mathrm{d}\phi^2},\nonumber\\ P(x)&=\brac{\mu +\frac{e^2+g^2}{\mu (ab+ac+bc)}}x^3-\epsilon \mu (a+b+c)x^2+\mu (ab+ac+bc)x\nonumber\\ &\quad-\mu abc-\alpha,\nonumber\\ Q(y)&=\mu (y-a)(y-b)(y-c). \label{fconC1} \end{align} Therefore we have two alternative forms, \Eqref{fconC2} and \Eqref{fconC1}, in which either $P$ or $Q$ is completely factorised. In both cases, the Maxwell potential is still given by Eq.~\Eqref{Max}. \par It should be noted that, in general, the two metrics \Eqref{fconC1} and \Eqref{fconC2} describe different spacetimes. Thus, the analysis of the parameter ranges and domain structure performed below for \Eqref{fconC2} does not automatically apply to the form \Eqref{fconC1}. A separate, but similar, analysis should be performed in order to determine the properties of the latter spacetime. \par The parameter symmetry can be used to fix the ordering of the roots as \begin{align} a\leq b\leq c. \label{abc_ordering} \end{align} We shall also use the reflection symmetry to fix \begin{align} \mu \geq 0. \label{mrange} \end{align} With the rescaling symmetry we can fix one of the roots to a particular value.
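The rescaling symmetry \Eqref{rescaling_sym} can be verified numerically: the structure functions obey $\tilde{P}(\lambda x)=P(x)$ and $\tilde{Q}(\lambda y)=Q(y)$, which together with $t\rightarrow\lambda t$ and $\phi\rightarrow\lambda\phi$ leaves the line element \Eqref{fconC2} unchanged. A short check with arbitrary test values (our own sketch):

```python
def Pf(x, m, a, b, c):
    return m*(x - a)*(x - b)*(x - c)

def Qf(y, m, a, b, c, alpha, e, g, eps=1):
    s1, s2, s3 = a + b + c, a*b + a*c + b*c, a*b*c
    return ((m - (e**2 + g**2)/(m*s2))*y**3
            - eps*m*s1*y**2 + m*s2*y - m*s3 + alpha)

# arbitrary test values and rescaling factor
m, a, b, c, alpha, e, g = 1.0, -1.0, -0.2, 0.8, 0.2, 0.3, 0.4
lam, x, y = 1.7, -0.5, 2.3

# parameters transformed as in the rescaling symmetry (alpha is untouched)
ms, a2, b2, c2 = m/lam**3, lam*a, lam*b, lam*c
e2, g2 = e/lam**2, g/lam**2

print(Pf(lam*x, ms, a2, b2, c2) - Pf(x, m, a, b, c))                              # ~ 0
print(Qf(lam*y, ms, a2, b2, c2, alpha, e2, g2) - Qf(y, m, a, b, c, alpha, e, g))  # ~ 0
```

Both differences vanish up to floating-point rounding, confirming the invariance.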
Throughout this paper we will find it convenient to set \begin{align} c=b+\frac{1}{\mu}. \label{csub} \end{align} Note that this choice is consistent with \Eqref{abc_ordering} and \Eqref{mrange}. \par We now have a solution specified by $(\mu,a,b,\alpha,e,g)$, which are four spacetime parameters plus two electromagnetic charges. Altogether, we treat Eq.~\Eqref{fconC2} as a six-parameter solution. \section{Coordinate ranges and domain structure} \label{domain} \subsection{Construction of domain structures in conformal gravity} Since our metric is described by four spacetime parameters plus two charges, it is not possible to characterise its solutions in a systematic manner using the methods of \cite{Chen:2015vma,Chen:2015zoa}, where the parameter space for the (A)dS C-metric is two-dimensional. Furthermore, the fact that the coefficients of $P$ and $Q$ are different leads to many different possible orderings of the roots of $P$ and $Q$.\footnote{This is in stark contrast to the Einstein gravity case where, since $P$ and $Q$ only differ by a constant shift, there are only two possible orderings of the roots. (See, for example, Fig.~1 of Ref.~\cite{Chen:2015zoa}.)} \par Nevertheless, we can still consider the possible existence of certain domains by seeking direct numerical examples. We shall briefly review and outline our procedure in this subsection and present the possible domains in Secs.~\ref{ClassI} and \ref{ClassII}. Our method to find the domain structure is as follows. \par The roots of $P$ are already defined in terms of $a$, $b$ and $c=b+1/\mu$, where we use the symmetries to set $a\leq b\leq c$. Let us denote the roots of $Q$ in increasing order as \begin{align} y_1\leq y_2\leq y_3. \end{align} Furthermore, since in \Eqref{fconC2} our electric and magnetic charges only appear in the combination $e^2+g^2$, it will be useful to express the charges as a single quantity \begin{align} q=\sqrt{e^2+g^2}, \end{align} where we will simply refer to $q$ as the total charge.
\par To determine the domain structure for a given set of parameters, one has to first establish the order of these six roots $\{a,b,c,y_1,y_2,y_3 \}$ relative to each other. Knowing the locations of the roots, we would then be able to determine the coordinate ranges where $Q(y)<0$ and $P(x)>0$, as required for the metric \Eqref{fconC2} to have a Lorentzian $(-+++)$ signature. Plotting these ranges on a two-dimensional plot then gives us the domain structure of the spacetime. \par To demonstrate using a concrete example, let us take $\epsilon=1$, $\mu=1$, $\alpha=0.2$, $q=0.5$, $a=-1$, $b=-0.2$ (so that $c=0.8$). With these parameters we can easily sketch the curves of $P$ and $Q$ on a common axis using, say, Maple or Mathematica. \begin{figure} \begin{center} \includegraphics[scale=0.7]{fig_PQsketch1.eps} \includegraphics[scale=0.8]{fig_PQsketch2.eps} \end{center} \caption{An example showing the construction of a domain structure for $\epsilon=1$, $\mu=1$, $\alpha=0.2$, $q=0.5$, $a=-1$, $b=-0.2$, and $c=0.8$. On the left is the sketch (not to scale) of the functions $P$ (solid) and $Q$ (dotted) showing the ordering of the roots. On the right is the two-dimensional plot where the horizontal (respectively vertical) direction represents the $x$- ($y$-) coordinate. The shaded regions represent the static Lorentzian regions of interest.} \label{fig_PQsketch} \end{figure} \par From the sketch in the left-hand plot of Fig.~\ref{fig_PQsketch}, we can read off the ordering of the roots as \begin{align} a<y_1<b<y_2<y_3<c. \end{align} As mentioned above, to have the correct Lorentzian $(-+++)$ signature, we require $P(x)>0$ and $Q(y)<0$. The former is satisfied for the ranges $a<x<b$ and $x>c$, while the latter is satisfied for the ranges $y<y_1$ and $y_2<y<y_3$. We then plot the coordinate ranges together on a two-dimensional diagram to find the ranges that satisfy all the required conditions simultaneously.
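The root ordering quoted above can also be confirmed numerically from the coefficients of $Q$ in eq.~\Eqref{fconC2}; the following short script (our own check) reproduces it for the example parameters:

```python
import numpy as np

eps, m, alpha, q = 1, 1.0, 0.2, 0.5
a, b = -1.0, -0.2
c = b + 1.0/m                      # gauge choice c = b + 1/mu

s1, s2, s3 = a + b + c, a*b + a*c + b*c, a*b*c
# coefficients of Q(y), highest power first
coeffs = [m - q**2/(m*s2), -eps*m*s1, m*s2, -m*s3 + alpha]
y1, y2, y3 = sorted(np.roots(coeffs).real)

print(round(y1, 3), round(y2, 3), round(y3, 3))
print(a < y1 < b < y2 < y3 < c)    # True: the ordering read off from the sketch
```

Since the leading coefficient of $Q$ is positive, $Q<0$ holds precisely for $y<y_1$ and $y_2<y<y_3$, matching the Lorentzian ranges quoted above.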
These are shown in the shaded regions in the right-hand plot of Fig.~\ref{fig_PQsketch}. The possible shapes of the shaded regions are what we refer to as the `domain structure'. \par These two-dimensional figures are plots where the horizontal direction represents the $x$-coordinate and the vertical direction represents the $y$-coordinate. The vertical lines represent the symmetry axes ($P=0$) and the horizontal lines represent the horizons ($Q=0$), while the diagonal line is the conformal infinity where $x=y$. The left and right sides of the plots represent $x\rightarrow\pm\infty$, while the upper and lower sides represent $y\rightarrow\pm\infty$. As we will show explicitly in Sec.~\ref{physical}, these limits generally contain curvature singularities. The shaded areas are the static regions of Lorentzian signature, where the darker shade represents areas of particular interest. We are mainly interested in static Lorentzian regions between $a<x<b$, where we will eventually extract the Mannheim-Kazanas spacetime in Sec.~\ref{MKlim} below. Furthermore, our darker-shaded static Lorentzian regions should not include the sides where $x,\,y\rightarrow\pm\infty$ which would correspond to having an observer seeing a naked curvature singularity. \par Indeed, an observer might pass through horizons to access non-static regions that possibly have curvature singularities. Nevertheless, we wish to view the spacetime from a perspective that is exterior to the black hole. This is partly motivated by physical reasons since, in the MK limit of the metric which will be performed below, the darker-shaded regions are the ones with the most observational significance (for instance, gravitational lensing and other observations mentioned in Sec.~\ref{intro}). \subsection{Class I: \texorpdfstring{$\epsilon=1$}{epsilon=1}} \label{ClassI} First we note that, for Class I the two structure functions are related by \begin{align} Q(\xi)-P(\xi)=\alpha-\frac{(e^2+g^2)\xi^3}{\mu(ab+ac+bc)}. 
\label{PQ_diff_classI} \end{align} In the uncharged case the structure functions differ by only a constant shift. We will show in Sec.~\ref{AdSlim} below that the uncharged Class I case is precisely the Einsteinian (A)dS C-metric and has been studied in detail in \cite{Emparan:1999wa,Emparan:1999fd,Hubeny:2009ru,Hubeny:2009kz,Chen:2015vma,Chen:2015zoa}. Therefore we consider cases of non-zero charge unique to conformal gravity. \par By checking various numerical values of the metric parameters, we obtain the possible domain structures shown in Fig.~\ref{fig_rangeI}. We find the same five possible shapes that were present in the (A)dS C-metric in Einstein gravity, namely the square box, `chipped' box (a box with a corner cut off by conformal infinity), vertical trapezium, triangle, and horizontal trapezium. \begin{figure} \begin{center} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeI-01.eps} \caption{$\alpha=0.3$, $q=0.1$.} \label{fig_rangeI-01} \end{subfigure} \hspace{0.2cm} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeI-02.eps} \caption{$\alpha=-0.2$, $q=0.1$.} \label{fig_rangeI-02} \end{subfigure} \\ \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeI-03.eps} \caption{$\alpha=-0.2$, $q=0.6$.} \label{fig_rangeI-03} \end{subfigure} \hspace{0.2cm} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeI-04.eps} \caption{$\alpha=0.5$, $q=0.8$.} \label{fig_rangeI-04} \end{subfigure} \\ \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeI-05.eps} \caption{$\alpha=-0.95$, $q=1$.} \label{fig_rangeI-05} \end{subfigure} \end{center} \caption{Possible Class I domain structures for $\mu=1$ and various values of $\alpha$ and $q$. For Figs.~\ref{fig_rangeI-01}-\ref{fig_rangeI-04} the roots are chosen to be $a=-1$, $b=-0.2$ and $c=1$, while for Fig.~\ref{fig_rangeI-05} the roots are $a=-1$ and $b=1$. 
The shaded regions correspond to static regions with Lorentzian signature $(-+++)$, one of which is the region of our primary interest that is shaded in dark gray. The diagonal line represents the conformal infinity $x=y$.} \label{fig_rangeI} \end{figure} \par Figure \ref{fig_rangeI-01} shows a square box which is analogous to the de Sitter C-metric considered in \cite{Chen:2015vma}. It corresponds to a Lorentzian region bounded by two symmetry axes $x=a$ and $x=b$, and two horizons $y=y_2$ and $y=y_3$. From the perspective of an observer in this square box, the horizon $y=y_3$ conceals the curvature singularity at $y\rightarrow\infty$. Therefore we shall interpret $y=y_3$ as the black hole horizon. This horizon has a finite area, extending from one symmetry axis at $x=a$ to the other at $x=b$. Loosely speaking, we may say that this black hole horizon has a spherical topology. The second horizon is located at $y=y_2$, which is also finite; it conceals the observer from the conformal infinity, and thus we shall refer to it as an acceleration, or cosmological, horizon. \par The `chipped box' and vertical trapezium in Figs.~\ref{fig_rangeI-02} and \ref{fig_rangeI-03} respectively show similar finite black-hole horizons of spherical topology. For the `chipped' box, the second horizon at $y=y_2$ intersects the diagonal line $x=y$. Therefore the acceleration/cosmological horizon extends all the way `to conformal infinity', and does not intersect the second symmetry axis. Such boxes in Einstein gravity were interpreted as the `fast' accelerating AdS C-metrics, where the acceleration parameter exceeds the AdS curvature parameter, i.e., $A>\frac{1}{\ell}$ \cite{Dias:2002mi,Chen:2015vma}. For the vertical trapeziums there is no second horizon in the Lorentzian region; this is the analogue of the `slow' acceleration case $A<\frac{1}{\ell}$ in Einstein gravity.
\par The triangle and horizontal trapezium of Figs.~\ref{fig_rangeI-04} and \ref{fig_rangeI-05} contain black hole horizons that extend to conformal infinity and intersect only one symmetry axis. Thus we conclude that the horizon is infinite in extent and has a domain structure similar to the deformed hyperbolic black holes in Einstein gravity \cite{Chen:2015zoa}. \subsection{Class II: \texorpdfstring{$\epsilon=-1$}{epsilon=-1}} \label{ClassII} Proceeding to Class II solutions, for $\epsilon=-1$ the structure functions are related by \begin{align} Q(\xi)-P(\xi)=\alpha+2\mu(a+b+c)\xi^2-\frac{(e^2+g^2)\xi^3}{\mu(ab+ac+bc)}. \label{PQ_diff_classII} \end{align} Thus we see that the situation in Class II is more complicated, as the difference between $P$ and $Q$ also contains a quadratic term. It follows that there are three possible intersection points between the two structure functions. In the uncharged case, the difference in \Eqref{PQ_diff_classII} is only quadratic, and only leads to two distinct intersection points when $\alpha$ is non-zero.
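Both relations \Eqref{PQ_diff_classI} and \Eqref{PQ_diff_classII} follow from expanding eq.~\Eqref{fconC2}, and can be confirmed by comparing polynomial coefficients (our own check, with arbitrary test values):

```python
import numpy as np

m, a, b, c, alpha, q2 = 1.3, -1.0, 0.2, 0.9, 0.4, 0.25   # arbitrary test values
s1, s2 = a + b + c, a*b + a*c + b*c

P = m*np.poly([a, b, c])                        # mu (x-a)(x-b)(x-c)
for eps in (1, -1):
    Q = np.array([m - q2/(m*s2), -eps*m*s1, m*s2, -m*a*b*c + alpha])
    # Q - P = alpha + (1-eps) mu (a+b+c) xi^2 - q^2 xi^3 / (mu (ab+ac+bc))
    expected = np.array([-q2/(m*s2), (1 - eps)*m*s1, 0.0, alpha])
    print(eps, np.allclose(Q - P, expected))    # True for both classes
```

For $\epsilon=1$ the quadratic term drops out, reproducing \Eqref{PQ_diff_classI}; for $\epsilon=-1$ it survives with coefficient $2\mu(a+b+c)$, reproducing \Eqref{PQ_diff_classII}.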
\begin{figure} \begin{center} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeII-01.eps} \caption{$\alpha=0.05$, $q=0$.} \label{fig_rangeII-01} \end{subfigure} \hspace{0.2cm} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeII-02.eps} \caption{$\alpha=-0.6$, $q=0$.} \label{fig_rangeII-02} \end{subfigure} \\ \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeII-03.eps} \caption{$\alpha=-1.5$, $q=0$.} \label{fig_rangeII-03} \end{subfigure} \hspace{0.2cm} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeII-04.eps} \caption{$\alpha=-0.7$, $q=0.5$.} \label{fig_rangeII-04} \end{subfigure} \\ \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.8]{fig_rangeII-05.eps} \caption{$\alpha=-3$, $q=0$.} \label{fig_rangeII-05} \end{subfigure} \end{center} \caption{Possible Class II domain structures for $\mu=1$ and various values of $\alpha$ and $q$. For Figs.~\ref{fig_rangeII-01}-\ref{fig_rangeII-04} the roots are chosen to be $a=-1$ and $b=0.2$, while for Fig.~\ref{fig_rangeII-05} the roots are $a=-1$ and $b=1$. The shaded regions correspond to static regions with Lorentzian signature $(-+++)$, one of which is the region of our primary interest that is shaded in dark gray. The diagonal line represents the conformal infinity $x=y$.} \label{fig_rangeII} \end{figure} \par Seeking out various numerical examples, we find the possible domain structures in Fig.~\ref{fig_rangeII}. We find the same possible domains as in Class I. Thus we conclude that in general, Class I and Class II are physically similar in terms of the horizon configurations and the symmetry axes. The distinction between Class I and II, as we will discuss in Sec.~\ref{limits}, lies in the uncharged case $e=g=0$. In the uncharged case Class I immediately reduces to the Einsteinian C-metric while Class II does not, except for a specific choice of parameters. 
\section{Physical properties} \label{physical} Our domains of interest lie between $a\leq x\leq b$, where the boundaries are the symmetry axes where $P=0$. For a given periodicity of the angular coordinate $\phi$, the conical deficit at these axes can be calculated as \begin{align} \delta_{i}=2\pi-\kappa_{\mathrm{E}i}\Delta\phi, \end{align} where $i=a,b$ and $\kappa_{\mathrm{E}}$ is the Euclidean surface gravity \cite{Chen:2010zu}, or, the ratio between the circumference and the radius of an infinitesimally small circle around the respective axes. For our metric \Eqref{fconC2}, they are given by \begin{align} \kappa_{\mathrm{E}a}&=\frac{1}{2}\left|P'(a) \right|=\frac{1}{2} \mu(b-a)(c-a), \label{kappa_Ea}\\ \kappa_{\mathrm{E}b}&=\frac{1}{2}\left|P'(b) \right|=\frac{1}{2} \mu(b-a)(c-b). \label{kappa_Eb} \end{align} We can remove one of the two conical singularities by appropriately fixing the periodicity $\Delta\phi$. The two possible choices are \begin{align} \Delta\phi&=\frac{2\pi}{\kappa_{\mathrm{E}a}}:\quad\delta_a=0,\quad\delta_b=2\pi\frac{b-a}{c-a}, \label{delta_a}\\ \Delta\phi&=\frac{2\pi}{\kappa_{\mathrm{E}b}}:\quad\delta_a=-2\pi\frac{b-a}{c-b},\quad\delta_b=0. \label{delta_b} \end{align} Therefore, we see that the first choice removes the conical singularity at $x=a$, leaving a conical \emph{excess} at $x=b$, ($\delta_b>0$) which we regard as a cosmic strut pushing against the black hole, while the second choice removes the singularity at $x=b$ but leaves a conical \emph{deficit} at $x=a$, ($\delta_a<0$) which is regarded as a cosmic string pulling the black hole. (See, e.g., \cite{Hong:2003gx,Griffiths:2006tk,Griffiths:2009dfa} and references therein.) In either case, we have the interpretation that the black hole is being accelerated along the $x=a$ axis. \par Next we consider the curvature invariants of the spacetime.
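The residual deficits in Eqs.~\Eqref{delta_a} and \Eqref{delta_b} are straightforward to verify from $\kappa_{\mathrm{E}a,b}=\frac{1}{2}|P'(a,b)|$; a short numerical sketch with illustrative values (our own check):

```python
import math

mu, a, b = 1.0, -1.0, -0.2
c = b + 1.0/mu            # the gauge choice c = b + 1/mu

# Euclidean surface gravities for P = mu (x-a)(x-b)(x-c), with a <= b <= c
kEa = 0.5*mu*(b - a)*(c - a)
kEb = 0.5*mu*(b - a)*(c - b)

# regularity at x = a leaves a residue at x = b, and vice versa
delta_b = 2*math.pi - kEb*(2*math.pi/kEa)
delta_a = 2*math.pi - kEa*(2*math.pi/kEb)

print(math.isclose(delta_b, 2*math.pi*(b - a)/(c - a)))   # True
print(math.isclose(delta_a, -2*math.pi*(b - a)/(c - b)))  # True
```

The signs of $\delta_b>0$ and $\delta_a<0$ follow directly from the root ordering $a\leq b\leq c$.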
The Kretschmann invariant is, for Class I, \begin{align} R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}&=24\alpha^2+12\mu ^2(x-y)^6+\frac{24y(e^2+g^2)}{\mu (ab+ac+bc)}\brac{\mu (x-y)^5-\alpha x(x+y)}\nonumber\\ &\quad+\frac{12y^2\brac{e^2+g^2}^2}{\mu ^2(ab+ac+bc)^2}\brac{3x^4-6x^3y+8x^2y^2-4xy^3+y^4}, \label{ClassIKret} \end{align} so in Class I, if any of $\mu$, $e$ or $g$ is non-zero, there are curvature singularities for $x,\,y\rightarrow\pm\infty$. Therefore in these cases, the outermost edges of Figs.~\ref{fig_rangeI} and \ref{fig_rangeII} represent a curvature singularity. \par As mentioned in Sec.~\ref{domain}, the uncharged case of Class I reduces to the (A)dS C-metric of Einstein gravity. We can also see this here: putting $e=g=0$ in Eq.~\Eqref{ClassIKret}, the curvature invariant simply becomes $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=24\alpha^2+12\mu ^2(x-y)^6$. Comparing this to the Kretschmann invariant of the (A)dS C-metric in Einstein gravity, we see that $\mu$ plays the role of the `mass' parameter, where its vanishing leaves us with an empty, constant-curvature spacetime.
\par The Kretschmann invariant for Class II is more complicated: \begin{align} R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}&=24\alpha^2+16\alpha \mu (a+b+c)(x^2+y^2+4xy)+\mu ^2\Bigl(12x^6-16x^5b+16x^4b^2\nonumber\\ &\quad -16ax^5+16a^2x^4-16x^5c+16x^4c^2+12y^6+180x^4y^2-240x^3y^3-72x^5y\nonumber\\ &\quad +16y^4a^2+16y^4b^2+16y^4c^2+16y^5a+16y^5b+16y^5c-72xy^5+180x^2y^4\nonumber\\ &\quad +128y^2cax^2+128y^2abx^2+128y^2cbx^2+64y^2x^2c^2+64y^2x^2b^2+64y^2a^2x^2\nonumber\\ &\quad+160y^3ax^2+32y^4ca+32y^4ab+32y^4cb-160x^3y^2a-160x^3y^2b\nonumber\\ &\quad-160x^3y^2c+80x^4yb+80x^4ya-80xy^4a-80xy^4b+80x^4yc-80xy^4c\nonumber\\ &\quad+160y^3cx^2+160y^3bx^2+32ax^4c+32ax^4b+32x^4bc\Bigr)\nonumber\\ &\quad-\frac{8y(e^2+g^2)}{\mu (ab+ac+bc)}\Bigl[3x\alpha (x+y)+\mu \Bigl(6x^4a+2ay^4-6yax^3+14y^2ax^2\nonumber\\ &\quad-4y^3ax+6x^4b+2y^4b-6ybx^3+14y^2bx^2-4y^3xb-15xy^4+15x^4y\nonumber\\ &\quad-6ycx^3-4xy^3c+3y^5+14y^2cx^2-3x^5+30y^3x^2+2y^4c\nonumber\\ &\quad-30y^2x^3+6x^4c\Bigr)\Bigr]\nonumber\\ &\quad+\frac{12y^2(e^4+g^4)}{\mu ^2(ab+ac+bc)^2}\brac{3x^4-6x^3y+8x^2y^2-4xy^3+y^4}. \label{ClassIIKret} \end{align} Nevertheless, we have the similar result that, in general, there exist curvature singularities at $x,\,y\rightarrow\pm\infty$. \section{Limiting cases} \label{limits} \subsection{Mannheim-Kazanas metric} \label{MKlim} We pointed out in Sec.~\ref{physical} that, for a spacetime with a domain structure bounded by two symmetry axes $x=a$ and $x=b$, one cannot simultaneously remove both conical singularities by fixing an appropriate periodicity of $\phi$. Upon removal of a conical singularity at one axis, the other has either a conical excess or deficit given in Eqs.~\Eqref{delta_a} or \Eqref{delta_b}. Nevertheless, we see from these two equations that in both cases, $\delta_a$ and $\delta_b$ can be rendered simultaneously zero if $a\rightarrow b$. \par However, this entails shrinking the coordinate range $a<x<b$ to zero unless we scale $x$ accordingly.
To ensure our coordinates are well defined in this limit, we introduce the transformation \begin{align} x=b-\frac{1}{2}\delta\brac{\cos\theta+1},\quad y=b+\frac{1}{r},\quad \phi=\frac{2\varphi}{\delta},\quad a=b-\delta. \end{align} Substituting this into the Class I ($\epsilon=1$) case of \Eqref{fconC2}, and taking the limit $\delta\rightarrow 0$, we obtain \begin{align} \mathrm{d} s^2&=-f(r)\mathrm{d} t^2+f(r)^{-1}\mathrm{d} r^2+r^2\brac{\mathrm{d}\theta^2+\sin^2\theta\,\mathrm{d}\varphi^2},\nonumber\\ f(r)&=w+\frac{u}{r}+vr-k r^2,\label{nonaccel_sph} \end{align} where $u$, $v$, $w$ and $k$ are given by \begin{align} u&=\frac{e^2+g^2-2\mu b-3\mu ^2b^2}{b(3\mu b+2)},\nonumber\\ v&=\frac{3b(e^2+g^2)}{2+3\mu b},\nonumber\\ w&=\frac{2+3\mu b+3(e^2+g^2)}{2+3\mu b},\nonumber\\ k&=\frac{2\alpha+3\alpha \mu b-b^2(e^2+g^2)}{2+3\mu b}. \label{nonaccelI} \end{align} The resulting Maxwell potential, up to an irrelevant constant term, is \begin{align} \mathcal{A}&=\frac{e}{r}\,\mathrm{d} t+ g\cos\theta\,\mathrm{d}\varphi. \label{nonaccel_Max} \end{align} We can easily check that the parameters defined in \Eqref{nonaccelI} satisfy \begin{align} w^2-1-3uv=3\brac{e^2+g^2}, \label{MK_constraint} \end{align} showing that this is the charged black hole in conformal gravity \cite{Riegert:1984zz,1991PhRvD..44..417M}, albeit with a different parametrisation. \par In the uncharged case, the reduction to the Schwarzschild-(A)dS solution can be seen by putting $e=g=0$ in \Eqref{nonaccelI}: the metric reduces to \begin{align} \mathrm{d} s^2&=-f(r)\mathrm{d} t^2+f(r)^{-1}\mathrm{d} r^2+r^2\brac{\mathrm{d}\theta^2+\sin^2\theta\,\mathrm{d}\varphi^2},\nonumber\\ f(r)&=1-\frac{\mu}{r}-\alpha r^2, \end{align} corresponding to the Schwarzschild-(A)dS solution with mass parameter $\mu=2m$ and curvature parameter $\alpha=-\frac{1}{\ell^2}=\frac{\Lambda}{3}$.
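The constraint \Eqref{MK_constraint} can be confirmed with a short symbolic computation directly from the parameters in \Eqref{nonaccelI}:

```python
import sympy as sp

b, mu, e, g = sp.symbols('b mu e g', positive=True)
Q = e**2 + g**2          # shorthand for e^2 + g^2

# Coefficients of f(r) = w + u/r + v*r - k*r^2 from the Class I limit
u = (Q - 2*mu*b - 3*mu**2*b**2) / (b*(3*mu*b + 2))
v = 3*b*Q / (2 + 3*mu*b)
w = (2 + 3*mu*b + 3*Q) / (2 + 3*mu*b)

# The Mannheim-Kazanas constraint w^2 - 1 - 3*u*v = 3*(e^2 + g^2)
assert sp.simplify(w**2 - 1 - 3*u*v - 3*Q) == 0
```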
\par If we apply this limiting procedure to Class II with $\epsilon=-1$, we obtain the same form as \Eqref{nonaccel_sph}, but with different coefficients: \begin{align} u&=\frac{e^2+g^2-2\mu b-3\mu ^2b^2}{b(2+3\mu b)},\nonumber\\ v&=\frac{b\sbrac{3(e^2+g^2)-8-36\mu ^2b^2-36\mu b}}{2+3\mu b},\nonumber\\ w&=\frac{3(e^2+g^2)-2-15\mu b-18\mu ^2b^2}{2+3\mu b},\nonumber\\ k&=\frac{18\mu ^2b^4+18\mu b^3+2\alpha+3\alpha \mu b+4b^2-b^2(e^2+g^2)}{2+3\mu b}, \label{nonaccelII} \end{align} where they also satisfy Eq.~\Eqref{MK_constraint}. This is again the charged Mannheim-Kazanas spacetime with yet another parametrisation. \par For the uncharged case, taking $e=g=0$ in Eq.~\Eqref{nonaccelII} and further identifying \begin{align} \mu=\beta\brac{2-3\beta\gamma},\quad b=-\frac{1}{6\beta},\quad\alpha=k-\frac{\gamma}{12\beta}, \end{align} we see that \Eqref{nonaccel_sph} reduces to \begin{align} \mathrm{d} s^2&=-f(r)\mathrm{d} t^2+f(r)^{-1}\mathrm{d} r^2+r^2\brac{\mathrm{d}\theta^2+\sin^2\theta\,\mathrm{d}\varphi^2},\nonumber\\ f(r)&=1-3\beta\gamma-\frac{\beta(2-3\beta\gamma)}{r}+\gamma r-k r^2, \end{align} which is precisely the well-known Mannheim-Kazanas vacuum solution \cite{Mannheim:1988dj}. \subsection{(A)dS C-metric} \label{AdSlim} As mentioned above, the main feature of the conformal gravity C-metric that distinguishes it from its Einsteinian counterpart can be traced to the fact that $Q(\xi)-P(\xi)$ is not equal to a constant. Nevertheless, by inspection of Eqs.~\Eqref{PQ_diff}, \Eqref{PQ_diff_classI} or \Eqref{PQ_diff_classII}, the difference can be made equal to the constant $\alpha$ by a suitable choice of parameters. \par To remove the cubic term from $Q(\xi)-P(\xi)$, we require $e=g=0$. Then the entire Class I metric with $\epsilon=1$ satisfies \begin{align} R_{\mu\nu}=3\alpha g_{\mu\nu}, \label{EinsteinGrav} \end{align} showing that it is a solution to Einstein's equation with cosmological constant $\Lambda=3\alpha$.
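As a cross-check on the uncharged limit, the identifications above can be verified symbolically. The sketch below writes the Class II mass parameter uniformly as $\mu$ and confirms that $f(r)=w+u/r+vr-kr^2$ assembles into the Mannheim-Kazanas form:

```python
import sympy as sp

beta, gamma, al, r = sp.symbols('beta gamma alpha r')
mu = beta*(2 - 3*beta*gamma)
b = -1/(6*beta)

# Uncharged (e = g = 0) Class II coefficients
u = (-2*mu*b - 3*mu**2*b**2) / (b*(2 + 3*mu*b))
v = b*(-8 - 36*mu**2*b**2 - 36*mu*b) / (2 + 3*mu*b)
w = (-2 - 15*mu*b - 18*mu**2*b**2) / (2 + 3*mu*b)
k = (18*mu**2*b**4 + 18*mu*b**3 + 2*al + 3*al*mu*b + 4*b**2) / (2 + 3*mu*b)

# f(r) reproduces the Mannheim-Kazanas metric function, with the
# MK quadratic coefficient given by alpha + gamma/(12*beta)
f = w + u/r + v*r - k*r**2
mk = (1 - 3*beta*gamma - beta*(2 - 3*beta*gamma)/r + gamma*r
      - (al + gamma/(12*beta))*r**2)
assert sp.simplify(f - mk) == 0
```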
\par For Class II, Eq.~\Eqref{PQ_diff_classII} tells us that $Q(\xi)-P(\xi)$ can be made constant by setting $\mu(a+b+c)=0$ in addition to $e=g=0$. Recalling \Eqref{csub}, the former condition is equivalent to \begin{align} a+2b+\frac{1}{\mu}=0. \end{align} \section{Conclusion}\label{conclusion} In this paper we have attempted to derive a charged C-metric-type solution in conformal gravity. Starting with an ansatz that resembles the C-metric in Einstein gravity, we obtained a nine-parameter solution to the Bach-Maxwell equations. By construction, two of these parameters are the electric and magnetic charges, though at this stage, we have no reason to conclude that all the remaining seven parameters carry physical significance, as some of them are possibly kinematical parameters. \par We have focused our attention on a six-parameter subset of the solution. The motivation for doing so is two-fold. First, this subset contains some additional symmetries, and it allows us to rewrite one of the structure functions in a convenient factorised form. Secondly, this choice is inspired by the analogy that the charged Einsteinian C-metric is a one-parameter generalisation of the Reissner-Nordstr\"{o}m solution, hence we expect a conformal gravity C-metric to also be a one-parameter generalisation of the charged Mannheim-Kazanas solution. Since only one of the structure functions is fully factorised, we obtain the possible domain structures of the C-metric by directly searching for numerical examples. Our charged C-metrics contain five possible domain shapes that are similar to those in the neutral (A)dS C-metric in Einstein gravity. \par In this paper we have mostly confined ourselves within one static Lorentzian region of interest. A further exploration of the metric can be done by extending across the horizons into different regions to study its global and causal structure.
Since this requires extension of the spacetime across its horizons, it is probably more convenient to use the form given in Eq.~\Eqref{fconC1} instead of \Eqref{fconC2}. Furthermore, we have only considered a specific choice of parameters as given in Eq.~\Eqref{choice}. It would be interesting to explore other parameter choices in further detail, for instance a choice that contains the solution described by \cite{Meng:2016gyt}. \par It would also be interesting to consider null and time-like geodesics for this spacetime. In the spherically symmetric case of the Mannheim-Kazanas metric, it was shown in \cite{Edery:1997hu} that conformal gravity affects time-like and null geodesics very differently from Einstein gravity. It would thus be worthwhile to study the corresponding behaviour for the C-metric. Furthermore, since solutions to the Bach-Maxwell equations are conformally invariant, it might be worth studying a metric with a gauge in which the overall conformal factor $(x-y)^{-2}$ is removed, for example, one of the gauges considered in \cite{Meng:2016gyt}. For metrics of this form the geodesic equations of time-like particles would possibly be separable, as it is this factor that originally prevented the separation of the time-like geodesic equations of the Einsteinian C-metric. \section*{Acknowledgement} The author would like to thank Qinghai Wang, whose collaboration on a different work inspired the present one. \bibliographystyle{mybib}
\section{Introduction} \subsection{Motivation}\label{Motivation} It is no exaggeration to say that type decomposition and Murray-von Neumann equivalence of projections are absolutely fundamental to the theory of von Neumann algebras. These concepts have been extended to other operators in a couple of different ways (see below), and this has certainly led to some interesting theory. However, only very specific aspects of the von Neumann algebra theory have been generalized to C*-algebras in this way. No doubt it was accepted that this is an unavoidable fact of life, that much of the von Neumann algebra theory simply cannot be applied in any general way to the much larger class of C*-algebras. In the present paper we show this to be false. By choosing generalizations appropriately, using annihilators rather than projections, a surprising amount of the basic von Neumann algebra theory does indeed extend fully to C*-algebras. Let us first go back to the beginning and consider a von Neumann algebra $A$. Here, type decomposition is obtained by utilizing the projections $\mathcal{P}(A)$, and crucial to this is their order structure, specifically the fact that $\mathcal{P}(A)$ is a complete orthomodular lattice. Also crucial is the fact that projections exist in abundance in an arbitrary von Neumann algebra. In C*-algebras, on the other hand, there may be no non-zero projections whatsoever, even when the algebra is simple. And even when they are plentiful, they may fail to form a lattice. Consequently, proving results for C*-algebras that generalize or are analogous to classical von Neumann algebra results, like those relating to type decomposition, involves finding a suitable replacement for projections on which an appropriate analog of Murray-von Neumann equivalence can be defined.
One example of this can be found in \cite{CuntzPedersen1979}, where projections in a C*-algebra $A$ are replaced with positive elements and $a,b\in A_+$ are said to be equivalent if there exists $(x_n)\subseteq A$ such that $a=\sum x_nx_n^*$ and $b=\sum x_n^*x_n$, where the sums are norm convergent. The close relationship between traces and Murray-von Neumann equivalence classes in von Neumann algebras generalizes to positive operators with this notion of equivalence, as demonstrated in \cite{CuntzPedersen1979}. An analogous classification and even a decomposition of an arbitrary C*-algebra into types I, II and III is also obtained in \cite{CuntzPedersen1979} Proposition 4.13. However, this decomposition is neither symmetric (the type III part can only be found in the quotient w.r.t. the type I part, not the other way around, and likewise the type II part is only obtained at the end as a quotient of quotients) nor consistent with the classical von Neumann algebra type classification (for example, $B(l^2)$ is a type I von Neumann algebra but not a type I C*-algebra\footnote{Here, and here only, we are using the terminology of \cite{CuntzPedersen1979}, where a C*-algebra is said to be of type I if all its representations are type I. Other standard terms for such C*-algebras are `GCR' and `postliminal'. Throughout the rest of this article we will use the term `postliminal'.} (see \cite{Pedersen1979} 6.1.2)). Furthermore, $A_+$ will not be a lattice unless $A$ is commutative and other natural order theoretic properties fail for $A_+$ in general, for example the sum of two finite elements of $A_+$ may be infinite (see \cite{CuntzPedersen1979} Corollary 7.10). Slight variants of the above can be obtained by changing the sums in the definition of equivalence, e.g. by specifying that they must be finite, or allowing them to represent supremums that might not necessarily converge in norm. 
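To make this notion concrete, here is a minimal single-term instance of the equivalence (our illustration, not drawn from \cite{CuntzPedersen1979}) in $A=M_2(\mathbb{C})$, where the two orthogonal rank-one diagonal projections are equivalent via the matrix unit $x=e_{12}$:

```python
import numpy as np

# Single-term Cuntz-Pedersen equivalence in M_2(C): with x the matrix
# unit e_{12}, the positive elements a = x x* and b = x* x are the two
# orthogonal rank-one diagonal projections, so a ~ b although a != b.
x = np.array([[0.0, 1.0],
              [0.0, 0.0]])
a = x @ x.conj().T     # = diag(1, 0)
b = x.conj().T @ x     # = diag(0, 1)

assert np.allclose(a, np.diag([1.0, 0.0]))
assert np.allclose(b, np.diag([0.0, 1.0]))
# Consistent with the trace correspondence: tr(x x*) = tr(x* x),
# so any trace agrees on equivalent elements.
assert np.isclose(np.trace(a), np.trace(b))
```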
Another quite different example is given in \cite{Cuntz1977}, where projections are replaced with arbitrary operators and $a,b\in A$ are said to be equivalent if $a\lessapprox b$ and $b\lessapprox a$, where $a\lessapprox b$ means $a=cbd$ for some $c,d\in A$. Again, this leads to a natural notion of a finite and factorial (simple) C*-algebra. The natural norm closed variant of this (i.e. $c_nbd_n\rightarrow a$, for some $(c_n),(d_n)\subseteq A$) has also received considerable attention and has a strong relation to the dimension functions on $A$ (see \cite{BlackadarHandelman1982}). Again, though, the order structures so obtained are generally less tractable, and there is a limit to how far the analogy to projections in von Neumann algebras can be pushed (there does not appear to be even a canonical decomposition into types in this case, for example). However, there is another quite different, but very natural, candidate to replace projections with in an arbitrary C*-algebra $A$, one that seems to have been largely overlooked. Namely, we can use the left (or right) annihilator ideals, i.e. those of the form $\{a\in A:\forall b\in B(ab=0)\}$ for some $B\subseteq A$. Equivalently, we can use the hereditary C*-subalgebras corresponding to these left ideals (see \cite{Effros1963} Theorem 2.4 or \cite{Pedersen1979} Theorem 1.5.2), which we refer to simply as \emph{annihilators}. Indeed, the map $p\mapsto pAp$ is an order isomorphism from projections (with their canonical order $p\leq q\Leftrightarrow pq=p$) to a subset of annihilators (ordered by inclusion). This map is even surjective whenever $A$ is a von Neumann algebra (see \autoref{annisep} (\ref{annisep1})) or, more generally, an AW*-algebra (see \cite{Berberian1972}), thus yielding a precise correspondence between projections and annihilators in this case.\footnote{This correspondence is well known, as is the fundamental importance of studying projections in AW*-algebras.
Despite this, however, there does not seem to be any indication, either in \cite{Berberian1972} or elsewhere in the operator algebra literature (except in \cite{Arzikulov2013}), that it was ever thought the annihilators themselves might be of interest in more general contexts.} Unlike projections, though, the annihilators still exist in abundance in an arbitrary C*-algebra, thanks to the continuous functional calculus (see the discussion preceding \autoref{xab}). And we can see from the outset that they have greater potential to fulfil the role of projections in a von Neumann algebra, as they also always form a complete lattice. Furthermore, there is a natural orthocomplementation on annihilators, something we do not have for arbitrary hereditary C*-subalgebras (the collection of all hereditary C*-subalgebras may not even be complemented). It turns out that this orthocomplementation is not always orthomodular (see \autoref{nonorthoxpl}), but we can prove a close approximation to orthomodularity (see \S\ref{SvsO}) which allows much of the von Neumann algebra theory to be generalized fully to arbitrary C*-algebras. In particular, we can obtain a type decomposition that is symmetric and completely consistent with the classical type decomposition of von Neumann algebras, together with a natural analog of Murray-von Neumann equivalence that is again completely consistent with the classical notion. The type decomposition itself can actually be obtained in a very general order theoretic context, and in the specific case of annihilators in a C*-algebra, the definitions in \eqref{pI}, \eqref{pII}, \eqref{pIII}, and \eqref{pIV} yield the following result. \begin{thm} For any C*-algebra $A$ we have orthogonal annihilator ideals $A_\mathrm{I}$, $A_\mathrm{II}$, $A_\mathrm{III}$ and $A_\mathrm{IV}$ such that \[A_\mathrm{I}\oplus A_\mathrm{II}\oplus A_\mathrm{III}\oplus A_\mathrm{IV}\] is an essential ideal in $A$.
\end{thm} When $A$ is a von Neumann algebra, $A_\mathrm{I}$, $A_\mathrm{II}$ and $A_\mathrm{III}$ are indeed the usual type I, II and III parts in the classical von Neumann algebra decomposition (see the comments after \eqref{pIV}). As every von Neumann algebra is an AW*-algebra, so that every annihilator ideal is of the form $pA$ for some central $p\in\mathcal{P}(A)$, a finite sum of annihilator ideals is again an annihilator ideal, coming from the sum of the corresponding projections. As the only essential annihilator ideal is the entire algebra itself, we have $A=A_\mathrm{I}\oplus A_\mathrm{II}\oplus A_\mathrm{III}$. As $\mathcal{P}(A)$ is orthomodular, the extra type IV part is $\{0\}$ here, although we do not know if the same is true for annihilators in C*-algebras. Indeed, this paper puts us in much the same position as von Neumann himself was in at the early stages of his investigation into von Neumann algebras (see \cite{vonNemann1930} and \cite{MurrayvonNemann1936}). Namely, we can decompose an arbitrary C*-algebra into various types but do not know if all these potential types are actually realized by some C*-algebra; most notably, we do not know if there are any type IV C*-algebras (just as von Neumann did not verify the existence of type III von Neumann algebras until later in \cite{vonNemann1940}). It should now be clear that the annihilators in a C*-algebra are of fundamental importance. The C*-algebra theory required for the initial investigation presented here is not especially extensive, and this paper should be accessible to anyone familiar with the material in the relevant parts of the first few chapters of \cite{Pedersen1979}. Given this, it is very surprising that more papers analyzing the annihilator structure of C*-algebras have not been written before and, in our opinion, such an analysis is well overdue. We know of only one such article, namely \cite{Arzikulov2013}, where a few ideas similar to those presented here are also discussed.
However, there are simple counterexamples to \cite{Arzikulov2013} Lemma 16.1 (see the discussion at the end of \S\ref{NCT}) which, unfortunately, is used repeatedly in \cite{Arzikulov2013} and thus puts the results there into question. The key point is that care must be taken to distinguish arbitrary open projections from those that are also topologically regular, which amounts to distinguishing arbitrary hereditary C*-subalgebras from annihilators. And there is no mention of an analog of Murray-von Neumann equivalence for annihilators in \cite{Arzikulov2013}, although analogs of Murray-von Neumann equivalence for arbitrary hereditary C*-subalgebras have been considered before (see \cite{OrtegaRordamThiel2012} and \cite{PeligradZsido2000}). The difference between annihilators and arbitrary hereditary C*-subalgebras may at first seem slight, but it turns out that it is the annihilators that are more amenable to attack by a fortuitous combination of non-commutative topology, order theory and algebra, as we proceed to demonstrate in this paper. It is really the order theory that is central here. Kaplansky initiated a program to isolate the algebraic structure of von Neumann algebras, and this is what allowed the von Neumann algebra theory to be generalized to AW*-algebras (and, more generally, Baer *-rings). All we are really doing is taking this a step further, isolating the order structure of projections in AW*-algebras in such a way that the theory can be generalized to annihilators in C*-algebras. Von Neumann himself isolated the order structure of projections in finite von Neumann algebras, those in which the projection lattice is modular, resulting in the elegant theory of continuous geometries (see \cite{vonNeumann1960}). Even when the projection lattice of a von Neumann algebra is not modular, it is still orthomodular, and this inspired the development of a large body of work on orthomodular lattices (see \cite{Kalmbach1983}).
Type decompositions have also been obtained for certain orthomodular lattices, namely the dimension lattices of \cite{Loomis1955}, and these have been successively generalized to various other contexts, like the espaliers in \cite{GoodearlWehrung2005} and the effect algebras in \cite{FoulisPulmannova2013}. However, these other contexts still assume something equivalent to orthomodularity in the ortholattice case, like unique orthogonal complements.\footnote{This is no longer true for the pre-effect algebras in \cite{ChajdaKuhr2012} and presumably type decomposition could also be done for the subclass of pre-effect algebras corresponding to separative complete ortholattices, and probably some more general subclass (centrality is discussed in \cite{ChajdaKuhr2012}, although they do not quite go as far as doing type decomposition). But we decided to stick to ortholattices rather than pre-effect algebras, as these are certainly sufficient for analyzing the annihilators in a C*-algebra, and probably also more familiar to operator algebraists.} We diverge from this previous work with the simple observation that separativity, a significantly weaker assumption than orthomodularity, is sufficient for much of the development of the theory. And this is most fortunate, for we can only verify that the annihilators in an arbitrary C*-algebra are separative (although in a strong sense quite close to orthomodularity \textendash\, see \autoref{epsep} and \autoref{nonorthoxpl}). Also, we work with what we call type relations, which are again more general than the dimension relations in dimension lattices (e.g. they need not satisfy finite (orthogonal) divisibility). Again, we see that these are sufficient for much of the theory to be developed, which, yet again, is fortunate because we can only verify these weaker properties for what we believe to be the natural equivalence relation on annihilators generalizing Murray-von Neumann equivalence.
We also make a number of other order theoretic observations and generalizations of our own that do not seem to have appeared elsewhere in the literature, even in the orthomodular case. Really, you could see this paper as bringing the theory of lattices and operator algebras back together after over half a century of divergence from von Neumann's seminal work in both fields, namely in continuous geometry and von Neumann algebras. \subsection{Outline} We start off in \S\ref{OrderTheory} by developing the theory of type decomposition in an abstract order theoretic context general enough to be applied later to annihilators in a C*-algebra. We take \cite{Kalmbach1983} as our primary reference, although we have to do things in greater generality as we are concerned with ortholattices that may not be orthomodular. In particular, we have to be careful to distinguish $[p]$ from $[p]_p$ (see the discussion following \autoref{[p]_q}). As such, this section should be of independent interest to lattice theorists, although many of the new results are relatively straightforward generalizations of known results for orthomodular lattices. As mentioned above, the key observation here is that separativity, rather than orthomodularity, is sufficient to prove \autoref{sepcomportho}. In \S\ref{AvsP}, we start by gathering together some relevant facts about projections and annihilators, and the relationship between them. In \S\ref{annsec} we set the stage for our investigation of the annihilators, defining them as the orthocompletion of a C*-algebra $A$ with respect to a certain preorthogonality relation. Next we introduce some standard non-commutative topological terminology in \S\ref{NCT} and show in \autoref{annproj} that the annihilators correspond precisely to the topologically regular open projections. Then we make some important observations about spectral projections and projections in general in \S\S\ref{specsec} and \ref{psec}. 
It is in \S\ref{SvsO} that we develop the theory needed to prove that the annihilators satisfy the all-important property known as separativity. In fact, \autoref{epsep} says that the annihilators satisfy a strong form of separativity which is very close to orthomodularity. On the way, we also strengthen a result from \cite{AkemannEilers2002}, showing that non-regular open dense projections must, in fact, be as non-regular as possible (see \autoref{0sep} and the discussion at the start of \S\ref{SvsO}). With separativity out of the way, we are free to apply the theory in \S\ref{OrderTheory} and investigate the interplay between the algebraic structure of a C*-algebra $A$ and the order structure of its annihilators $[A]^\perp$. In \S\ref{annideals}, we see that the central annihilators are precisely the annihilator ideals and that the annihilators always have the relative centre property. Next, in \S\ref{Equivalence}, we define and investigate what we believe to be the natural analog of the fundamental notion of Murray-von Neumann equivalence. We then move on to discuss the abelian annihilators in \S\ref{AA}, starting with \autoref{commuteBoolean} which says that $A$ is commutative precisely when $[A]^\perp$ is a Boolean algebra. We then extend the results of \S\ref{AA} to homogeneous annihilators in \S\ref{HomogeneousAnnihilators}, and show that the annihilator notion of homogeneity is closely related to the more classical representation theoretic notion. In \S\ref{MVF} we investigate C*-algebras of continuous functions from topological spaces $X$ to finite rank matrices $M_n$. More specifically, we show how to represent hereditary C*-subalgebras/open projections of such C*-algebras by lower semicontinuous projection valued functions on $X$. The moral of the story here is that the theory of annihilators turns out to be the theory of these projection functions modulo nowhere dense subsets of $X$.
Lastly, in \S\ref{Examples}, we give a number of examples illustrating the subtle distinction between various order theoretic and algebraic notions. \subsection{Acknowledgements} The author would like to thank Charles Akemann and Dave Penneys for many helpful comments on earlier versions of this paper, as well as Vladimir Pestov, for giving the author the opportunity to pursue the research that led to this paper. \section{Order Theory}\label{OrderTheory} \subsection{Basic Definitions}\label{BasicDefinitions} \begin{dfn}[Relation Terminology] A relation $R$ on a set $S$ is \begin{itemize} \item \emph{reflexive} if $sRs$, for all $s\in S$. \item \emph{transitive} if $sRt$ and $tRu\Rightarrow sRu$, for all $s,t,u\in S$. \item \emph{symmetric} if $sRt\Leftrightarrow tRs$, for all $s,t\in S$. \item \emph{antisymmetric} if $sRt$ and $tRs\Rightarrow s=t$, for all $s,t\in S$. \item \emph{annihilating} if $sRs\Rightarrow\forall t\in S(sRt)$, for all $s\in S$. \item a \emph{preorder} if $R$ is reflexive and transitive. \item a \emph{partial order} if $R$ is an antisymmetric preorder. \item an \emph{equivalence relation} if $R$ is a symmetric preorder. \item a \emph{preorthogonality relation} if $R$ is symmetric and annihilating. \item an \emph{orthogonality relation} if $R$ is a preorthogonality relation and, for $s,t\in S$, \begin{equation}\label{orthorel} \forall u\in S(sRu\Leftrightarrow tRu)\quad\Leftrightarrow\quad s=t. \end{equation} \end{itemize} \end{dfn} \begin{dfn}[Partial Order Terminology] Let $\mathbb{P}$ be a partial order. We call $\mathbb{P}$ a \emph{lattice} if every pair $p,q\in\mathbb{P}$ has a supremum (least upper bound) and infimum (greatest lower bound), denoted by $p\vee q$ and $p\wedge q$ respectively. A lattice $\mathbb{P}$ is \emph{complete} if every $S\subseteq\mathbb{P}$ has a supremum and infimum, denoted by $\bigvee S$ and $\bigwedge S$ respectively.
If $\mathbb{P}$ has a greatest element $\mathbf{1}$ and least element $\mathbf{0}$ then $p$ and $q$ are \emph{complementary} if $p\vee q=\mathbf{1}$ and $p\wedge q=\mathbf{0}$. If every element of $\mathbb{P}$ has a complement then $\mathbb{P}$ is \emph{complemented}. For a preorder $\mathbb{P}$, we write $p<q$ to mean $p\leq q$ but $q\nleq p$, and we call $S\subseteq\mathbb{P}$ \emph{order-dense} in $\mathbb{P}$ if \begin{equation}\label{densedef} \forall p\in\mathbb{P}(p>\mathbf{0}\Rightarrow \exists s\in S(\mathbf{0}<s\leq p)), \end{equation} and we call $S$ \emph{join-dense} in $\mathbb{P}$ if, for all $p\in\mathbb{P}$ (with $p>\mathbf{0}$), \begin{equation}\label{jd} p=\bigvee\{q\in S:q\leq p\}. \end{equation} \end{dfn} \begin{dfn}[Function Terminology] A function $f$ on $\mathbb{P}$ is \begin{itemize} \item \emph{involutive} if $f(f(p))=p$, for all $p\in\mathbb{P}$. \item a \emph{complementation} if $p$ and $f(p)$ are complementary, for all $p\in\mathbb{P}$. \item \emph{order preserving} if $p\leq q\Rightarrow f(p)\leq f(q)$, for all $p,q\in\mathbb{P}$. \item \emph{antitone} if $p\leq q\Rightarrow f(q)\leq f(p)$, for all $p,q\in\mathbb{P}$. \item an \emph{orthocomplementation} if $f$ is an antitone involutive complementation. \item an \emph{order isomorphism} if $f$ is 1-1, onto, and $f$ and $f^{-1}$ are order preserving. \item an \emph{orthoisomorphism} if, further, $f(p^\perp)=f(p)^\perp$, for all $p\in\mathbb{P}$. \end{itemize} A partial order $\mathbb{P}$ with a distinguished orthocomplementation is an \emph{orthoposet} and, if $\mathbb{P}$ is also a lattice, an \emph{ortholattice}. \end{dfn} \subsection{Orthocompletions}\label{TheCompletion} We will be interested in a particular case of the following situation (see \S\ref{annsec}). 
We are given a relation $\perp$ on a set $S$ and, for $T\subseteq S$, define \[T^\perp=\{s\in S:\forall t\in T(t\perp s)\}\qquad\textrm{and}\qquad[S]^\perp=\{T^\perp:T\subseteq S\}.\] Also, for future reference, we make the following definition. \begin{dfn} We call $T\subseteq S$ \emph{essential} (w.r.t. $\perp$) if $T^{\perp\perp}=S$. \end{dfn} For a collection of subsets $\mathcal{T}$ of $S$, we have $(\bigcup\mathcal{T})^\perp=\bigcap\{T^\perp:T\in\mathcal{T}\}$. In particular, this means infimums, w.r.t. the inclusion order, always exist in $[S]^\perp$ and are simply given by intersections. Furthermore, $S=\emptyset^\perp$ is the largest element of $[S]^\perp$, while $S^\perp=\{s\in S:\forall t\in S(t\perp s)\}$ is the smallest, and $T\mapsto T^\perp$ is \begin{enumerate}\label{perp} \item\label{perp1} antitone, \item\label{perp2} involutive on $[S]^\perp$, if $\perp$ is symmetric, and \item\label{perp3} an orthocomplementation on $[S]^\perp$, if $\perp$ is a preorthogonality relation. \end{enumerate} \eqref{perp1} is immediate, and for \eqref{perp2} note that symmetry implies $T\subseteq T^{\perp\perp}$, for all $T\subseteq S$, and thus, by \eqref{perp1}, $(T^{\perp\perp})^\perp\subseteq T^\perp\subseteq (T^\perp)^{\perp\perp}$, i.e. $T^\perp=T^{\perp\perp\perp}$. Lastly, if $\perp$ is also annihilating then $T\cap T^\perp=S^\perp$, for all $T\in[S]^\perp$, and hence $T^\perp\vee T=T^\perp\vee T^{\perp\perp}=S$ so $T$ and $T^\perp$ are complementary, i.e. $[S]^\perp$ is a complete ortholattice. We call $[S]^\perp$ the \emph{orthocompletion} of $S$ w.r.t. $\perp$. In fact, we have really just proved a slightly more general version of \cite{MacLaren1964} Lemma 2.1, and what we have denoted by $[S]^\perp$ is denoted in \cite{MacLaren1964} by $L(S)$, where it is called the completion of $S$. As shown in \cite{MacLaren1964} Theorem 2.4, it really is the canonical completion by cuts when $S$ itself is an orthoposet.
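These facts are easy to check computationally on a toy example. The sketch below (our own illustration, not a relation used elsewhere in this paper) takes $S=\{1,\dots,12\}$ with $s\perp t$ iff $\gcd(s,t)=1$; this is a preorthogonality relation, since coprimality is symmetric and $s\perp s$ forces $s=1$, which is orthogonal to everything:

```python
from itertools import combinations
from math import gcd

# Toy preorthogonality relation: s "perp" t iff gcd(s, t) = 1.
S = frozenset(range(1, 13))

def perp(T):
    """T^perp = {s in S : t perp s for all t in T}."""
    return frozenset(s for s in S if all(gcd(s, t) == 1 for t in T))

# Every T^perp is an intersection of singleton perps {t}^perp (with
# the empty intersection giving S), so closing the singleton perps
# under pairwise intersection yields the orthocompletion [S]^perp.
closure = {S} | {perp({t}) for t in S}
changed = True
while changed:
    changed = False
    for A, B in combinations(list(closure), 2):
        if A & B not in closure:
            closure.add(A & B)
            changed = True

for T in closure:
    assert perp(perp(T)) == T        # members of [S]^perp are perp-perp closed
    assert perp(T) in closure        # perp maps [S]^perp onto itself
    assert T & perp(T) == perp(S)    # T and T^perp meet in the least element
```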
We define the preorder $\dashv$ \emph{induced} by $\perp$ on $S$ by \[s\dashv t\quad\Leftrightarrow\quad\{t\}^\perp\subseteq\{s\}^\perp.\] Note that $\dashv$ is a partial order if and only if $\perp$ satisfies \eqref{orthorel} and that \begin{equation}\label{sperpperp} s\mapsto\{s\}^{\perp\perp} \end{equation} is an order preserving map from $S$ (ordered by $\dashv$) to $[S]^\perp$ (ordered by $\subseteq$). If $\perp$ is symmetric then ${}^\perp$ is involutive and hence we actually have \[s\dashv t\quad\Leftrightarrow\quad\{s\}^{\perp\perp}\subseteq\{t\}^{\perp\perp}.\] If $T\subseteq S$ is join-dense in $S$ (see \eqref{jd}) w.r.t. $\dashv$ then, by (a slight generalization of) \cite{MacLaren1964} Theorem 2.5, the map \begin{equation}\label{jdorthoiso} U\mapsto U\cap T\quad\textrm{ is an orthoisomorphism witnessing}\quad[S]^\perp\cong[T]^\perp. \end{equation} Going in the other direction, given an orthoposet $\mathbb{P}$, we can always define an orthogonality relation $\perp$ by \[p\perp q\quad\Leftrightarrow\quad p\leq q^\perp.\] If $\mathbb{P}=[S]^\perp$, where $\perp$ is a preorthogonality relation on $S$, then, for $T,U\in[S]^\perp$, the relation $\perp$ on $[S]^\perp$ defined in this way is related to the original relation $\perp$ on $S$ by \[T\perp U\quad\Leftrightarrow\quad\forall t\in T\forall u\in U(t\perp u).\] Furthermore, for $T\in[S]^\perp$, we can consider the restriction of $\perp$ to $T$ and then \[[T]^\perp=[T]_T,\] according to \autoref{[p]_q}. \subsection{Relative Complements} \begin{dfn}\label{[p]_q} Given an ortholattice $\mathbb{P}$ and $p,q,r\in\mathbb{P}$ we define $r^{\perp_p}=r^\perp\wedge p$ and \[[q]_p=\{r^{\perp_p}:r\leq p\textrm{ and }r^{\perp_p}\leq q\}.\] \end{dfn} In particular, $[p]_p=\{r^{\perp_p}:r\leq p\}$. We also drop the subscript when the subscript is $\mathbf{1}$, i.e.
\[[p]=[p]_\mathbf{1}=\{q:q\leq p\}\quad\textrm{and}\quad[q]_p=[p]_p\cap[q].\] It is important to note that $[p]$ may not be an ortholattice (see the Hasse diagram below), in contrast to $[p]_p$. \begin{prp}\label{[p]_p} If $\mathbb{P}$ is an ortholattice and $p\in\mathbb{P}$ then $[p]_p$ is an ortholattice. If $q\in[p]_p$ then $[q]_q\subseteq[p]_p$ while, for any $q\in[p]$, we have \begin{equation}\label{[p]_peq} q^{\perp_p\perp_p}=\bigwedge\{r\in[p]_p:q\leq r\}. \end{equation} \end{prp} \begin{proof} First note that for $q,r\leq p$ we have $(q^\perp\wedge p)\wedge(r^\perp\wedge p)=(q\vee r)^\perp\wedge p\in[p]_p$ so infimums exist and agree with those in $\mathbb{P}$. Next note that, when $q\leq p$, we have $q\leq(p\wedge q^\perp)^\perp\wedge p$ so $q^\perp\geq((p\wedge q^\perp)^\perp\wedge p)^\perp$ and \[p\wedge q^\perp\geq((p\wedge q^\perp)^\perp\wedge p)^\perp\wedge p=((p\wedge q^\perp)\vee p^\perp)\wedge p\geq p\wedge q^\perp,\] so $^{\perp_p}$ is involutive and, therefore, actually characterizes the elements of $[p]_p$, i.e. \[\qquad[p]_p=\{q=q^{\perp_p\perp_p}:q\in\mathbb{P}\}.\] And if $q\in[p]_p$ and $r\in[q]_q$ then $r^{\perp_q}\leq p$ and hence $r^{\perp_q\perp_p}\in[p]_p$. Therefore $r=r^{\perp_q\perp_q}=r^{\perp_q\perp_p}\wedge q\in[p]_p$, i.e. $[q]_q\subseteq [p]_p$. As ${}^\perp$ is order reversing, so is ${}^{\perp_p}$ so, in particular, supremums also exist. Also, if $r\in[p]_p$ and $q\in[r]$ then $r^{\perp_p}\leq q^{\perp_p}$ and hence $q^{\perp_p\perp_p}\leq r^{\perp_p\perp_p}=r$, thus verifying \eqref{[p]_peq}. Finally, for any $q\in[p]_p$, we have $q^{\perp_p}\wedge q\leq q^\perp\wedge q\leq\mathbf{0}$ and hence also $q^{\perp_p}\vee_p q=\mathbf{0}^{\perp_p}=p$. Thus $q^{\perp_p}$ is a complement of $q$ in $[p]_p$, i.e. $[p]_p$ is an ortholattice with orthocomplement function ${}^{\perp_p}$. \end{proof} On the other hand, $[p]$ is always a sublattice of $\mathbb{P}$, while $[p]_p$ may not be. 
Indeed, while infimums in $[p]_p$ agree with those in $\mathbb{P}$, the same can not be said for supremums. For example, in the ortholattice represented by the following Hasse diagram ($x\leq y$ in such a diagram if and only if $y$ appears above $x$ and joined to it by lines), which appears as \cite{Kalmbach1983} Figure 6.5, $[p]_p=\{\mathbf{0},a^\perp,c^\perp,p\}$ and hence $a^\perp\vee_p c^\perp=p$, even though $a^\perp\vee c^\perp=b$. Also, $[p]=\{\mathbf{0},a^\perp,c^\perp,p,b\}$, which does not possess any orthocomplement functions. \begin{figure}[h!] \caption{}\label{H1} \begin{center} \begin{tikzpicture} \node (max) at (0,3) {$\mathbf{1}$}; \node (p) at (0,2) {$p$}; \node (a) at (-2,1) {$a$}; \node (b) at (0,1) {$b$}; \node (c) at (2,1) {$c$}; \node (cp) at (-2,0) {$c^\perp$}; \node (bp) at (0,0) {$b^\perp$}; \node (ap) at (2,0) {$a^\perp$}; \node (pp) at (0,-1) {$p^\perp$}; \node (min) at (0,-2) {$\mathbf{0}$}; \draw (min) -- (cp) -- (a) -- (max) -- (c) -- (ap) -- (min) (min) -- (pp) -- (bp) -- (a) (bp) -- (c) (max) -- (p) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (cp) -- (b) -- (ap); \end{tikzpicture} \end{center} \end{figure} \subsection{Order Types} \begin{dfn}\label{orthomod} A preorder $\mathbb{P}$ is \emph{separative} if, for all $p,q\in\mathbb{P}$, \[p\nleq q\quad\Rightarrow\quad\exists r\in\mathbb{P}(\mathbf{0}<r\leq p\textrm{ and }r\wedge q=\mathbf{0}).\] We call an orthoposet $\mathbb{P}$ \emph{orthomodular} if, for all $p,q\in\mathbb{P}$, $p\perp q\Rightarrow p\vee q$ exists, and \begin{equation}\label{orthomodeq} q\leq p\quad\Rightarrow\quad p=q\vee(p\wedge q^\perp) \end{equation} A lattice $\mathbb{P}$ is \emph{modular} if, for $p,q,r\in\mathbb{P}$, \[q\leq p\quad\Rightarrow\quad p\wedge(q\vee r)=q\vee(p\wedge r).\] A lattice $\mathbb{P}$ is \emph{distributive} if, for all $p,q,r\in\mathbb{P}$, \begin{equation}\label{distributive} p\wedge(q\vee r)=(p\wedge q)\vee(p\wedge r)\qquad\textrm{and}\qquad p\vee(q\wedge r)=(p\vee 
q)\wedge(p\vee r). \end{equation} A \emph{Boolean algebra} is a distributive complemented lattice. \end{dfn} Every element of a Boolean algebra in fact has a \emph{unique} complement and the map taking each element to this unique complement is an orthocomplement function. In fact, an ortholattice is uniquely complemented if and only if it is a Boolean algebra, by \cite{Kalmbach1983} \S3 Proposition 7. For an ortholattice, we immediately have \[\textrm{distributivity}\quad\Rightarrow\quad\textrm{modularity}\quad\Rightarrow\quad\textrm{orthomodularity}\quad\Rightarrow\quad\textrm{separativity}.\] To see that the first two of these implications can not be reversed, it suffices to note that the subspaces of a Hilbert space $H$ are modular (more generally, submodules of a module are modular, hence the name) but not distributive if $\dim(H)>1$, while the \emph{closed} subspaces are orthomodular but not modular if $\dim(H)=\infty$, by \cite{Kalmbach1983} \S5 Proposition 5. This last fact is actually key to showing that the projections in an infinite AW*-algebra are not modular (see \cite{Kaplansky1955} Theorem on page 1). There are also finite ortholattices that illustrate these differences, for example the Chinese lantern MO2 represented by \cite{Kalmbach1983} Figure 2.1 11 is modular but not distributive, while the ortholattice in \cite{Kalmbach1983} Figure 3.2 is orthomodular but not modular. For an example of an ortholattice that is separative but not orthomodular, consider the orthodouble of the 8 element Boolean algebra given in \cite{Flachsmeyer1982} Figure 2b, as represented by the following Hasse diagram.
\begin{center} \begin{tikzpicture} \node (max) at (0,2) {$\mathbf{1}$}; \node (a) at (-6,1) {$a$}; \node (b) at (-4,1) {$b$}; \node (c) at (-2,1) {$c$}; \node (d) at (-6,0) {$d$}; \node (e) at (-4,0) {$e$}; \node (f) at (-2,0) {$f$}; \node (ap) at (6,0) {$a^\perp$}; \node (bp) at (4,0) {$b^\perp$}; \node (cp) at (2,0) {$c^\perp$}; \node (dp) at (6,1) {$d^\perp$}; \node (ep) at (4,1) {$e^\perp$}; \node (fp) at (2,1) {$f^\perp$}; \node (min) at (0,-1) {$\mathbf{0}$}; \draw (min) -- (d) -- (a) -- (max) -- (c) -- (f) -- (min) (min) -- (e) -- (a) (e) -- (c) (max) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (d) -- (b) -- (f); \draw (min) -- (cp) -- (fp) -- (max) -- (dp) -- (ap) -- (min) (min) -- (bp) -- (fp) (bp) -- (dp) (max) -- (ep); \draw[preaction={draw=white, -,line width=6pt}] (cp) -- (ep) -- (ap); \end{tikzpicture} \end{center} For an example of an ortholattice that is not even separative, just consider $O_6$ in \cite{Kalmbach1983} Figure 3.1. \subsection{Separativity} Separativity is fundamental to our later work because it is precisely what is required to turn order-density into join-density. \begin{prp}\label{jdod} A preorder $\mathbb{P}$ is separative if and only if, for all $S\subseteq\mathbb{P}$, \[S\textrm{ is join-dense}\quad\Leftrightarrow\quad S\textrm{ is order-dense}.\] \end{prp} \begin{proof} Join-density certainly implies order-density, while if $S$ is not join-dense then we can find $p,q\in\mathbb{P}$ with $p\nleq q$ even though $q\geq r$, for all $r\in[p]\cap S$. If $\mathbb{P}$ is separative then we can find $t\in\mathbb{P}$ with $\mathbf{0}<t\leq p$ and $t\wedge q=\mathbf{0}$, and hence there is no $s\leq t$ with $\mathbf{0}<s\in S$, i.e. $S$ is not order-dense. On the other hand, if $\mathbb{P}$ is not separative then we have $p,q\in\mathbb{P}$ with $p\nleq q$ even though there is no $r\leq p$ with $r>\mathbf{0}$ and $r\wedge q=\mathbf{0}$. 
Now consider \[S=[q]\cup\{s\in\mathbb{P}:s\nleq p\}.\] If $t\nleq p$ then $t\in S$, while if $\mathbf{0}<t\leq p$ then $t\wedge q\neq\mathbf{0}$, i.e. there exists $s>\mathbf{0}$ with $t\geq s\in[q]\subseteq S$. Thus $S$ is order-dense; however, $S\cap[p]\subseteq[q]$ which, as $p\nleq q$, means that $p\neq\bigvee(S\cap[p])$, so $S$ is not join-dense. \end{proof} In a lattice $\mathbb{P}$, another equivalent of separativity is obtained if we replace $p\nleq q$ in the definition of separativity with the apparently stronger condition $q<p$. For $p\nleq q$ implies $p\wedge q<p$ and hence we can find $\mathbf{0}<r\leq p$ with $\mathbf{0}=(q\wedge p)\wedge r=q\wedge(p\wedge r)=q\wedge r$. \subsection{Perspectivity}\label{persec} \begin{dfn} If $\mathbb{P}$ is an ortholattice, we say $p,q\in\mathbb{P}$ are \begin{enumerate} \item \emph{perspective} if $p$ and $q$ have a common complement. \item \emph{orthoperspective} if $p$ and $q$ have a common orthogonal complement. \item \emph{semiorthoperspective} if $p$ and $q^\perp$ are complementary. \end{enumerate} These relations will be denoted by $\sim_\mathrm{p}$, $\sim_\mathrm{op}$ and $\sim_\mathrm{sop}$ respectively. \end{dfn} The definition of perspectivity is perfectly valid in an arbitrary lattice with $\mathbf{1}$ and $\mathbf{0}$ and is fundamental to the theory of continuous geometries (see \cite{vonNeumann1960}). Note that $p$ and $q^\perp$ are complementary if and only if $p^\perp$ and $q$ are complementary, so semiorthoperspectivity is a symmetric relation. Semiorthoperspective $p$ and $q$ are sometimes said to be `in position $p'$', while if $p$ is also semiorthoperspective to $q^\perp$ then they are `in generic position' or `in position $p$' (see \cite{Berberian1972} \S13 Definition 2 and Definition 3). Perspectivity is weaker than semiorthoperspectivity, however their transitive closures are the same.
For if $p$ and $q$ have common complement $r$ then $p$ is semiorthoperspective to $r^\perp$ which is in turn semiorthoperspective to $q$. Orthoperspectivity on the other hand is much stronger, and is often just equality (see \autoref{orthoequiv}). Note that $p$ and $q$ are orthoperspective if and only if $(p\vee q)^\perp$ is complementary to both $p$ and $q$. \begin{prp}\label{orthoperpequiv} For a symmetric transitive relation $\sim$ on an ortholattice $\mathbb{P}$, the following are equivalent. \begin{enumerate} \item\label{orthoperpequiv1} $\sim$ is weaker than orthoperspectivity. \item\label{orthoperpequiv2} $q\leq p$ and $q^\perp\wedge p=\mathbf{0}\Rightarrow p\sim q$. \item\label{orthoperpequiv3} $q\sim q^{\perp_p\perp_p}$, for all $p\in\mathbb{P}$ and $q\in[p]$. \end{enumerate} \end{prp} \begin{proof}\ \begin{itemize} \item[\eqref{orthoperpequiv1}$\Rightarrow$\eqref{orthoperpequiv2}] $p\vee q=p$ so $p\vee(p\vee q)^\perp=\mathbf{1}$ and $q\vee(p\vee q)^\perp=(q^\perp\wedge p)^\perp=\mathbf{1}$ so $p$ and $q$ are orthoperspective and hence $p\sim q$. \item[\eqref{orthoperpequiv2}$\Rightarrow$\eqref{orthoperpequiv1}] If $p$ and $q$ are orthoperspective then $p\vee(p\vee q)^\perp=\mathbf{1}=q\vee(p\vee q)^\perp$, so $p^\perp\wedge(p\vee q)=\mathbf{0}=q^\perp\wedge(p\vee q)$ and \eqref{orthoperpequiv2} implies $p\sim p\vee q\sim q$ which, by transitivity, gives $p\sim q$. \item[\eqref{orthoperpequiv2}$\Rightarrow$\eqref{orthoperpequiv3}] $q\leq q^{\perp_p\perp_p}$ and $q^\perp\wedge q^{\perp_p\perp_p}=q^{\perp_p}\wedge q^{\perp_p\perp_p}=\mathbf{0}$ so $q\sim q^{\perp_p\perp_p}$. \item[\eqref{orthoperpequiv3}$\Rightarrow$\eqref{orthoperpequiv2}] $q^{\perp_p\perp_p}=(q^\perp\wedge p)^{\perp_p}=\mathbf{0}^{\perp_p}=p$ so $p\sim q$. \end{itemize} \end{proof} (Transitivity only appears in \eqref{orthoperpequiv2}$\Rightarrow$\eqref{orthoperpequiv1} so orthoperspectivity satisfies \eqref{orthoperpequiv2} and \eqref{orthoperpequiv3}.)
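To illustrate these relations in a standard example (not needed in what follows), consider the ortholattice of subspaces of $\mathbb{R}^2$ with the usual orthogonal complement. Any two lines $p$ and $q$ are perspective, since any line distinct from both is a common complement, and $p\sim_\mathrm{sop}q$ precisely when $p\neq q^\perp$. On the other hand, for distinct lines $(p\vee q)^\perp=\mathbf{0}$, which complements neither, so $p\sim_\mathrm{op}q$ only when $p=q$, as \autoref{orthoequiv} predicts, this ortholattice being orthomodular (indeed modular).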
\begin{prp}\label{pqposp'} If $\mathbb{P}$ is an ortholattice, $p,q\in\mathbb{P}$, $[p]_p=[p]$ and $[q]=[q]_q$ then \[p'=(p\wedge q^\perp)^{\perp_p}\quad\textrm{and}\quad q'=(q\wedge p^\perp)^{\perp_q}\quad\textrm{are semiorthoperspective}.\] \end{prp} \begin{proof} Note that $s=p'\wedge q'^\perp\leq p$ and hence $s\perp(q\wedge p^\perp)$. But then $s\perp(q\wedge p^\perp)\vee q'=(q\wedge p^\perp)\vee(q\wedge p^\perp)^{\perp_q}=q$, as $[q]=[q]_q$, and hence $s\leq p\wedge q^\perp$ which, as $s\leq(p\wedge q^\perp)^\perp$, means $s=\mathbf{0}$. Now $q'\wedge p'^\perp=\mathbf{0}$ follows by a symmetric argument. \end{proof} \begin{cor}\label{sopcor} If $\mathbb{P}$ is an ortholattice, $p,q\in\mathbb{P}$, $[p]_p=[p]$ and $[q]=[q]_q$ then \[\exists r\in\mathbb{P}(p\leq r\sim_\mathrm{sop}q)\quad\Rightarrow\quad\exists s\in\mathbb{P}(p\sim_\mathrm{sop}s\leq q).\] \end{cor} \begin{proof} With $p'$ and $q'$ as above we have $p'\sim_\mathrm{sop}q'$. But $p\wedge q^\perp\leq r\wedge q^\perp=\mathbf{0}$ so $p'=\mathbf{0}^{\perp_p}=p$. So, setting $s=q'$, we are done. \end{proof} \subsection{Finiteness} \begin{dfn} Given a symmetric relation $\sim$ on $\mathbb{P}$, we call $p\in\mathbb{P}$ \emph{$\sim$-finite} if \[p\sim q\leq p\quad\Rightarrow\quad p=q.\] If $\mathbb{P}$ is an ortholattice, we call $p\in\mathbb{P}$ \emph{$\sim$-orthofinite} if \[p\sim q\leq p\quad\Rightarrow\quad p\wedge q^\perp=\mathbf{0}.\] We call $\sim$ itself \emph{(ortho)finite} if all elements of $\mathbb{P}$ are $\sim$-(ortho)finite. \end{dfn} Note that if $\sim$ is transitive and weaker than orthoperspectivity then, when $p$ is not orthofinite, there exists $q\leq p$ with $p\sim q\sim q^{\perp_p\perp_p}<p$, as $q^{\perp_p}\neq\mathbf{0}$, i.e. \begin{eqnarray*} p\textrm{ is orthofinite} &\Leftrightarrow& \{p\}^\sim\cap[p]_p=\{p\}.\textrm{ Also, by definition,}\\ p\textrm{ is finite} &\Leftrightarrow& \{p\}^\sim\cap[p]=\{p\}. 
\end{eqnarray*} \begin{dfn} We say $\sim$ is \emph{finitely additive} if, whenever $p\perp q$, $r\perp s$, $p\sim r$ and $q\sim s$, we have $p\vee q\sim r\vee s$. \end{dfn} \begin{prp} A finitely additive reflexive relation $\sim$ on an ortholattice $\mathbb{P}$ is orthofinite if and only if $\mathbf{1}$ is (ortho)finite. \end{prp} \begin{proof} If $\sim$ is orthofinite then, in particular, $\mathbf{1}$ is (ortho)finite. If $\sim$ is not orthofinite, we have $p\sim q\leq p$ with $p\wedge q^\perp\neq\mathbf{0}$. As $\sim$ is finitely additive and reflexive, we have $\mathbf{1}=p\vee p^\perp\sim q\vee p^\perp$ even though $(q\vee p^\perp)^\perp=p\wedge q^\perp\neq\mathbf{0}$ so $q\vee p^\perp\neq\mathbf{1}$. \end{proof} \subsection{Orthomodularity} Orthomodularity has a number of important equivalents (for more see \cite{Kalmbach1983} \S3 Theorem 2). \begin{prp}\label{orthoequiv} For an ortholattice $\mathbb{P}$, the following are equivalent. \begin{enumerate} \item\label{orthoequiv1} $\mathbb{P}$ is orthomodular. \item\label{orthoequiv2} Orthogonal complements are unique. \item\label{orthoequiv3} $[p]_p=[p]$, for all $p\in\mathbb{P}$. \item\label{orthoequiv4} Orthoperspectivity is finite. \item\label{orthoequiv6} Orthoperspectivity is equality. \item\label{orthoequiv7} $\sim$-orthofiniteness and $\sim$-finiteness always coincide. \item\label{orthoequiv5} For all $p,q,r\in\mathbb{P}$, $q\leq p$ and $q\perp r\Rightarrow p\wedge(q\vee r)=q\vee(p\wedge r)$. \end{enumerate} \end{prp} \begin{proof}\ \begin{itemize} \item[\eqref{orthoequiv1}$\Rightarrow$\eqref{orthoequiv2}] If $q\leq p^\perp$ and $p\vee q=\mathbf{1}$ then $p^\perp\wedge q^\perp=\mathbf{1}^\perp=\mathbf{0}$ so orthomodularity gives $p^\perp=q\vee(q^\perp\wedge p^\perp)=q$, i.e. orthogonal complements are unique. \item[\eqref{orthoequiv2}$\Rightarrow$\eqref{orthoequiv1}] Given $q\leq p$ let $r=q\vee(p\wedge q^\perp)\leq p$.
Then $p^\perp\vee r=p^\perp\vee q\vee(p\wedge q^\perp)=p^\perp\vee q\vee(p^\perp\vee q)^\perp=\mathbf{1}$ so \eqref{orthoequiv2} gives $r=p$, showing that $\mathbb{P}$ is orthomodular. \item[\eqref{orthoequiv2}$\Rightarrow$\eqref{orthoequiv4}] If $q\leq p=p^{\perp\perp}$ and $\mathbf{0}=q^\perp\wedge p=(q\vee p^\perp)^\perp$ then \eqref{orthoequiv2} gives $q=p^{\perp\perp}=p$. \item[\eqref{orthoequiv4}$\Rightarrow$\eqref{orthoequiv2}] If $p\in\mathbb{P}$ has an orthogonal complement $q<p^\perp$ then $p^\perp\wedge q^\perp=\mathbf{0}$, so $q$ and $p^\perp$ are orthoperspective and \eqref{orthoequiv4} gives $q=p^\perp$, a contradiction. \item[\eqref{orthoequiv6}$\Rightarrow$\eqref{orthoequiv4}] Equality is finite. \item[\eqref{orthoequiv4}$\Rightarrow$\eqref{orthoequiv6}] If $p$ and $q$ are orthoperspective then so are $p$ and $p\vee q$ which, if orthoperspectivity is finite, means $p=p\vee q$. Likewise $q=p\vee q=p$. \item[\eqref{orthoequiv4}$\Rightarrow$\eqref{orthoequiv7}] $q\leq p$ and $q^\perp\wedge p=\mathbf{0}$ means $p$ and $q$ are orthoperspective which, from \eqref{orthoequiv4}, gives $p=q$. Thus $\sim$-orthofiniteness implies $\sim$-finiteness. \item[\eqref{orthoequiv7}$\Rightarrow$\eqref{orthoequiv4}] Orthoperspectivity is orthofinite. If it is not finite these notions differ. \item[\eqref{orthoequiv6}$\Rightarrow$\eqref{orthoequiv3}] $q$ and $q^{\perp_p\perp_p}$ are orthoperspective so \eqref{orthoequiv6} gives $q=q^{\perp_p\perp_p}\in[p]_p$ for $q\in[p]$. \item[\eqref{orthoequiv3}$\Rightarrow$\eqref{orthoequiv4}] If $q<p$ but $q^\perp\wedge p=\mathbf{0}$ then $q\notin[p]_p$. \item[\eqref{orthoequiv1}$\Rightarrow$\eqref{orthoequiv5}] See \cite{Kalmbach1983} \S3 Theorem 5. \item[\eqref{orthoequiv5}$\Rightarrow$\eqref{orthoequiv1}] Immediate by setting $r=q^\perp$. \end{itemize} \end{proof} So if $\mathbb{P}$ is not orthomodular then we have $[p]\neq[p]_p$, for some $p\in\mathbb{P}$, and there is no reason to think that properties $[p]$ inherits from $\mathbb{P}$, like separativity, are necessarily inherited by $[p]_p$.
However, if we happen to know that $[p]_p$ is order-dense in $[p]$ (which is true for the annihilators we will be interested in \textendash\, see the comment after \autoref{prp1}), then $[p]_p$ will indeed be separative if $\mathbb{P}$ is. \begin{prp} If $\mathbb{P}$ is an orthomodular lattice and $p,q\in\mathbb{P}$ then \[p'=(p\wedge q^\perp)^{\perp_p}\quad\textrm{and}\quad q'=(q\wedge p^\perp)^{\perp_q}\quad\textrm{are maximal semiorthoperspective}.\] \end{prp} \begin{proof} Semiorthoperspectivity is immediate from \autoref{pqposp'} and \autoref{orthoequiv}. For maximality, note that if $s>p'$ then $\mathbf{0}\neq s\wedge p'^\perp=s\wedge p\wedge q^\perp$, by orthomodularity. Thus $s\wedge q^\perp\neq\mathbf{0}$ so $s$ could not be semiorthoperspective to anything in $[q]$. \end{proof} \subsection{Modularity} As shown in \cite{Jacobson1985} Theorem 8.4, $\mathbb{P}$ is modular if and only if, for $p,q,r\in\mathbb{P}$, \begin{equation}\label{persp} p\vee r=q\vee r,\ p\wedge r=q\wedge r\textrm{ and }p\leq q\quad\Rightarrow\quad p=q. \end{equation}\label{perfin} In particular, this means perspectivity is a finite relation on $\mathbb{P}$. In fact, if $\mathbb{P}$ is an ortholattice then \begin{equation}\label{modperfin} \textrm{modularity}\qquad\Leftrightarrow\qquad\textrm{perspectivity is finite}. \end{equation} To see the converse, first note that if perspectivity is finite then so is orthoperspectivity and hence $\mathbb{P}$ is orthomodular, by \autoref{orthoequiv}. Now say $p,q\in\mathbb{P}$ satisfy the conditions on the left of \eqref{persp}, and set $p'=p\wedge(p\wedge r)^\perp$ and $p''=p'\vee(p'\vee r)^\perp$ and likewise for $q'$ and $q''$. Then we see that $p'\wedge r=\mathbf{0}=q'\wedge r$ and hence $p''\wedge r=\mathbf{0}=q''\wedge r$, as well as $p''\vee r=\mathbf{1}=q''\vee r$. So if perspectivity is finite then $p''=q''$.
As $\mathbb{P}$ is orthomodular, $p'\vee r=p\vee r=q\vee r=q'\vee r$, and so orthomodularity together with $p''=q''$ implies $p'=q'$ and hence orthomodularity together with $p\wedge r=q\wedge r$ finally yields $p=q$. \subsection{The Centre} \begin{dfn}\label{centredef} In an ortholattice $\mathbb{P}$, we say $s,t\in\mathbb{P}$ \emph{commute} if \eqref{distributive} holds whenever $p,q,r\in\{s,s^\perp,t,t^\perp\}$. We call $p\in\mathbb{P}$ \emph{central} if it commutes with all $q\in\mathbb{P}$. If $\mathbb{P}$ is complete and $p\in\mathbb{P}$, we define the \emph{central cover} $\mathrm{c}(p)$ of $p$ by \[\mathrm{c}(p)=\bigwedge\{q\geq p:q\textrm{ is central}\}.\] Given $T\subseteq\mathbb{P}$ we define \[\mathrm{c}T=\{\mathrm{c}(t):t\in T\}\] so $\mathrm{c}\mathbb{P}=\{p\in\mathbb{P}:p\textrm{ is central}\}$. We call $p,q\in\mathbb{P}$ \emph{very orthogonal} if $\mathrm{c}(p)\perp \mathrm{c}(q)$. \end{dfn} The only non-trivial instances of \eqref{distributive}, for $p,q,r\in\{s,s^\perp,t,t^\perp\}$, are of the form \begin{eqnarray} p\wedge q &=& p\wedge(p^\perp\vee q)\quad\textrm{and}\label{com1}\\ p &=& (p\wedge q)\vee(p\wedge q^\perp),\label{com2} \end{eqnarray} for $p\in\{s,s^\perp\}$ and $q\in\{t,t^\perp\}$, or $p\in\{t,t^\perp\}$ and $q\in\{s,s^\perp\}$. Thus, the order theoretic definition of commutativity agrees with the algebraic definition for projections in a C*-algebra, by \autoref{pq=qp}. In fact, for orthomodular lattices, each non-trivial instance of \eqref{distributive} is equivalent to every other and to \[\mathbf{1}=(s\wedge t)\vee(s^\perp\wedge t)\vee(s\wedge t^\perp)\vee(s^\perp\wedge t^\perp),\] by \cite{Kalmbach1983} \S3 Lemma 3 and Proposition 8. Even if $\mathbb{P}$ is not orthomodular, we still have the following important characterizations of centrality. \begin{dfn} Given partial orders $\mathbb{P}$ and $\mathbb{Q}$ we order $\mathbb{P}\times\mathbb{Q}$ coordinatewise, i.e. $(p,q)\leq(r,s)\Leftrightarrow p\leq r\textrm{ and }q\leq s$. 
If they are orthocomplemented, we make $\mathbb{P}\times\mathbb{Q}$ orthocomplemented by defining $(p,q)^\perp=(p^\perp,q^\perp)$. Given an ortholattice $\mathbb{P}$ and $p\in\mathbb{P}$, we say $\mathbb{P}$ is \emph{canonically isomorphic} to $[p]\times[p^\perp]$ to mean that $[p]=[p]_p$, $[p^\perp]=[p^\perp]_{p^\perp}$ and the (order preserving) maps $(q,r)\mapsto q\vee r$ and $q\mapsto(q\wedge p,q\wedge p^\perp)$, from $[p]\times[p^\perp]$ to $\mathbb{P}$ and vice versa, are inverse to each other. \end{dfn} \begin{thm}\label{centralequiv} If $\mathbb{P}$ is an ortholattice then the following are equivalent for $p\in\mathbb{P}$. \begin{enumerate} \item\label{centralequiv1} $q=(q\wedge p)\vee(q\wedge p^\perp)$, for all $q\in\mathbb{P}$. \item\label{centralequiv2} $p$ is central. \item\label{centralequiv3} $\mathbb{P}$ is canonically isomorphic to $[p]\times[p^\perp]$. \end{enumerate} \end{thm} \begin{proof}\ \begin{itemize} \item[\eqref{centralequiv1}$\Rightarrow$\eqref{centralequiv3}] See \cite{Kalmbach1983} \S3 Theorem 1 or \cite{MacLaren1964} Theorem 3.2. \item[\eqref{centralequiv3}$\Rightarrow$\eqref{centralequiv2}] Immediately verified by coordinatewise calculations. \item[\eqref{centralequiv2}$\Rightarrow$\eqref{centralequiv1}] Immediate from the definition of central. \end{itemize} \end{proof} Note that \eqref{centralequiv3} is important because it means we can now do calculations coordinatewise in $[p]\times[p^\perp]$ rather than $\mathbb{P}$. For example, say we had $p\in \mathrm{c}\mathbb{P}$ and $q\in\mathbb{P}$ and we want to show that \begin{equation}\label{c(pwedgeq)} \mathrm{c}(p\wedge q)=p\wedge\mathrm{c}(q). \end{equation} If equality fails here then we would have $r\in\mathrm{c}\mathbb{P}$ with $r\geq p\wedge q$ but $r\ngeq p\wedge\mathrm{c}(q)$. By replacing $r$ with $r\wedge p\wedge\mathrm{c}(q)$ if necessary we may assume that $r<p\wedge\mathrm{c}(q)$.
Then $r\vee(p^\perp\wedge\mathrm{c}(q))\geq(p\wedge q)\vee(p^\perp\wedge q)=q$ even though \[r\vee(p^\perp\wedge \mathrm{c}(q))<(p\wedge\mathrm{c}(q))\vee(p^\perp\wedge\mathrm{c}(q))=\mathrm{c}(q),\] where we know the first inequality is strict because the inequality in the first coordinates in $[p]\times[p^\perp]$ is strict. This contradicts the fact that $\mathrm{c}(q)$ is the central cover of $q$ and we are done. Still assuming $p\in\mathrm{c}\mathbb{P}$, we have $q\wedge p=(q\wedge p^\perp)^{\perp_q}\in[q]_q$. For any $r\in[q]_q$, we have $r\wedge(q\wedge p)=(r\wedge q)\wedge p=r\wedge p$ and \[r\wedge(q\wedge p)^{\perp_q}=r\wedge q\wedge(q\wedge p)^\perp=r\wedge(q\wedge p^\perp)=r\wedge p^\perp.\] So $r=(r\wedge p)\vee(r\wedge p^\perp)=(r\wedge(q\wedge p))\vee_q(r\wedge(q\wedge p)^{\perp_q})$ and $q\wedge p\in\mathrm{c}_q[q]_q$, i.e. \begin{equation}\label{c_q} \{q\wedge p:p\in\mathrm{c}\mathbb{P}\}\subseteq\mathrm{c}_q[q]_q. \end{equation} In general this inclusion can be strict (see \autoref{H1}, where $\mathrm{c}_b[b]_b=[b]=\{b,c^\perp,a^\perp,\mathbf{0}\}$ even though $\mathrm{c}\mathbb{P}=\{\mathbf{1},\mathbf{0}\}$), although not for the annihilators in a C*-algebra (see \autoref{c_B}). This inclusion is also an equality for central elements, i.e. given $p\in\mathrm{c}\mathbb{P}$, we have $[p]=[p]_p$ and $\mathrm{c}\mathbb{P}\cong\mathrm{c}[p]\times\mathrm{c}[p^\perp]$ so \begin{equation}\label{c([p])} \mathrm{c}_p[p]_p=\mathrm{c}[p]. \end{equation} Another consequence of \eqref{centralequiv3} is that, for any $p\in\mathrm{c}\mathbb{P}$, $p^\perp$ is the unique complement of $p$ so, in fact, $\mathrm{c}\mathbb{P}$ is a Boolean algebra, by \cite{Kalmbach1983} \S3 Proposition 7. For orthomodular $\mathbb{P}$, the converse also holds. \begin{prp}\label{cenuniqcomp} For orthomodular $\mathbb{P}$, $p\in\mathrm{c}\mathbb{P}$ if $p^\perp$ is its only complement.
\end{prp} \begin{proof} If $p$ is not central then $q>(q\wedge p)\vee(q\wedge p^\perp)$, for some $q\in\mathbb{P}$. Setting $q'=q\wedge(q\wedge p)^\perp$, we have $p\wedge q'=\mathbf{0}$ and, by orthomodularity, $q'>q\wedge p^\perp\geq q'\wedge p^\perp$. This implies $q''=q'\vee(p\vee q')^\perp\neq p^\perp$ and, again by orthomodularity, $p\wedge q''=p\wedge q'=\mathbf{0}$. Also $p\vee q''=p\vee q'\vee(p\vee q')^\perp=\mathbf{1}$, so $q''$ is a complement of $p$ different from $p^\perp$. \end{proof} For order theoretic type decompositions, what we actually need is an infinite version of \eqref{centralequiv3}, and the key extra condition required for this is separativity. \begin{thm}\label{sepcomportho} If $\mathbb{P}$ is a separative complete ortholattice, $q\in\mathbb{P}$ and $(p_\alpha)\subseteq\mathrm{c}\mathbb{P}$, \begin{enumerate} \item\label{sepcomportho1} $q\wedge\bigvee p_\alpha=\bigvee q\wedge p_\alpha$, and \item\label{sepcomportho2} $q=(q\wedge\bigvee p_\alpha)\vee(q\wedge(\bigvee p_\alpha)^\perp)$. \end{enumerate} \end{thm} \begin{proof} Both statements follow from the fact that \[S=[\bigwedge p^\perp_\alpha]\cup\bigcup[p_\alpha]\] is join-dense in $\mathbb{P}$. For this, it suffices to prove $S$ is order-dense, by \autoref{jdod}. Now note that, for any $q\in\mathbb{P}$, $q\wedge p_\alpha=\mathbf{0}$ implies $q\leq p^\perp_\alpha$, as $q=(q\wedge p_\alpha)\vee(q\wedge p^\perp_\alpha)$. So if this were true for all $\alpha$, we would have $q\leq\bigwedge p^\perp_\alpha$. In any case, if $q>\mathbf{0}$, we have $s\in S$ with $\mathbf{0}<s\leq q$. \end{proof} So, by \eqref{sepcomportho2} above and \autoref{centralequiv} \eqref{centralequiv1}, if $\mathbb{P}$ is a separative complete ortholattice, \begin{equation}\label{centresublattice} \mathrm{c}\mathbb{P}\textrm{ is a complete sublattice.} \end{equation} Another important consequence is the following.
\begin{cor}\label{ordertd} For orthogonal $(p_\alpha)\subseteq\mathrm{c}\mathbb{P}$ in a separative complete ortholattice $\mathbb{P}$, \begin{equation}\label{ordertdeq} [\bigvee p_\alpha]\cong\prod[p_\alpha]. \end{equation} \end{cor} \begin{proof} Define $f:[\bigvee_\alpha p_\alpha]\rightarrow\prod_\alpha [p_\alpha]$ and $g:\prod_\alpha[p_\alpha]\rightarrow[\bigvee_\alpha p_\alpha]$ by \[f(q)=\prod(p_\alpha\wedge q)\qquad\textrm{and}\qquad g((q_\alpha))=\bigvee q_\alpha.\] Take $(q_\alpha)\subseteq\mathbb{P}$ with $q_\alpha\leq p_\alpha$, for all $\alpha$. Given that $p_\alpha$ commutes with $q_\alpha$, we have $p_\alpha\wedge(q_\alpha\vee(\bigvee_{\beta\neq\alpha}q_\beta))\leq p_\alpha\wedge(q_\alpha\vee p_\alpha^\perp)=q_\alpha$, so $f\circ g$ is the identity map. But $g\circ f$ is also the identity map, by \autoref{sepcomportho} \eqref{sepcomportho1} and thus $g$ and $f$ are (canonical) isomorphisms inverse to each other. \end{proof} Given a collection of ortholattices $(\mathbb{P}_\alpha)$ and $(p_\alpha),(q_\alpha)\in\prod\mathbb{P}_\alpha$ with $(q_\alpha)\leq(p_\alpha)$, we have $(q_\alpha)^\perp\wedge(p_\alpha)=(q_\alpha^{\perp_\alpha})\wedge(p_\alpha)=(q_\alpha^{\perp_\alpha}\wedge p_\alpha)$, i.e. \[[(p_\alpha)]_{(p_\alpha)}=\prod[p_\alpha]_{p_\alpha}.\] This means that, by \autoref{ordertd}, given very orthogonal $(p_\alpha)\subseteq \mathbb{P}$ in a separative complete ortholattice $\mathbb{P}$, we also have \begin{equation}\label{bigveepalpha} [\bigvee p_\alpha]_{\bigvee p_\alpha}\cong\prod[p_\alpha]_{p_\alpha}. \end{equation} \subsection{Type Decomposition}\label{tdsec} \begin{dfn}\label{td} Given a complete ortholattice $\mathbb{P}$, we call $T\subseteq\mathbb{P}$ a \emph{type ideal} if, whenever we have pairwise very orthogonal $S\subseteq\mathbb{P}$, \begin{equation}\label{typedef} S\subseteq T\qquad\Leftrightarrow\qquad\bigvee S\in T. \end{equation} If \eqref{typedef} holds for arbitrary $S\subseteq T$ then $T$ is a \emph{complete ideal}. 
\end{dfn} These type ideals correspond to the type-determining subsets defined in \cite{FoulisPulmannova2010} \S4 (6) in the context of effect algebras. Note that if $T$ is a complete ideal then $T=[\bigvee T]$, i.e. complete ideals are precisely those subsets of $\mathbb{P}$ of the form $[p]$. And if $T$ is a type ideal in a complete Boolean algebra $\mathbb{P}$ then $T$ is actually a complete ideal. For then $\mathrm{c}\mathbb{P}=\mathbb{P}$ so if $p\leq q\in T$ then $q=p\vee(p^\perp\wedge q)$ and hence $p\in T$. While, given any $(t_\alpha)\subseteq T$, we can define (very) orthogonal $(s_\alpha)$ by $s_\alpha=t_\alpha\wedge(\bigvee_{\beta<\alpha}t_\beta)^\perp$ and transfinite induction gives $\bigvee t_\alpha=\bigvee s_\alpha\in T$. The most common examples of type ideals come from the type relations and type classes to be defined in the following sections. However, there is one important example we can give straight away, namely the \emph{equality type ideal} $\mathbb{P}_=$ of any ortholattice $\mathbb{P}$ defined by \begin{equation}\label{eqtypeideal} \mathbb{P}_==\{p\in\mathbb{P}:[p]=[p]_p\}. \end{equation} This is indeed a type ideal, by \eqref{ordertdeq} and \eqref{bigveepalpha}, and $\mathbb{P}=\mathbb{P}_=$ if and only if $\mathbb{P}$ is orthomodular, by \autoref{orthoequiv}. We call $T$ and $S$ \emph{complementary} when $(\bigvee T)\wedge(\bigvee S)=\mathbf{0}$ and $(\bigvee T)\vee(\bigvee S)=\mathbf{1}$. \begin{prp}\label{cideal} Given a separative complete ortholattice $\mathbb{P}$ and a type ideal $T\subseteq\mathbb{P}$, the following pairs are complementary complete ideals of $\mathrm{c}\mathbb{P}$.
\begin{eqnarray} T\cap\mathrm{c}\mathbb{P}\quad &\textrm{and}& \quad\{p\in\mathrm{c}\mathbb{P}:\mathrm{c}[p]\cap T=\{\mathbf{0}\}\}.\label{tdprp1}\\ \mathrm{c}T\quad &\textrm{and}& \quad\{p\in\mathrm{c}\mathbb{P}:[p]\cap T=\{\mathbf{0}\}\}.\label{tdprp2} \end{eqnarray} \end{prp} \begin{proof}\ \begin{itemize} \item[\eqref{tdprp1}] As $T$ is a type ideal, so is $T\cap\mathrm{c}\mathbb{P}$ which, as $\mathrm{c}\mathbb{P}$ is a complete Boolean algebra, means it is a complete ideal of $\mathrm{c}\mathbb{P}$. Letting $q=\bigvee(T\cap\mathrm{c}\mathbb{P})$, we immediately have \[\mathrm{c}[q^\perp]\subseteq\{p\in\mathrm{c}\mathbb{P}:\mathrm{c}[p]\cap T=\{\mathbf{0}\}\},\] while if $p\in\mathrm{c}\mathbb{P}$ satisfies $\mathrm{c}[p]\cap T=\{\mathbf{0}\}$ then, as $p\wedge q\in T\cap\mathrm{c}\mathbb{P}$ (because $p\wedge q\leq q$ and $q\in T\cap\mathrm{c}\mathbb{P}$), we must have $p\wedge q=\mathbf{0}$ and hence $p\in[q^\perp]$, i.e. the inclusion above is an equality. \item[\eqref{tdprp2}] By \eqref{c(pwedgeq)} and \eqref{centresublattice}, $\mathrm{c}(\bigvee p_\alpha)=\bigvee\mathrm{c}(p_\alpha)$ whenever $(p_\alpha)\subseteq\mathbb{P}$ so $\mathrm{c}T$ is a type ideal and thus, as above, a complete ideal of $\mathrm{c}\mathbb{P}$. The fact that its complementary complete ideal is $\{p\in\mathrm{c}\mathbb{P}:[p]\cap T=\{\mathbf{0}\}\}$ follows in a similar manner, with another application of \eqref{c(pwedgeq)}. \end{itemize} \end{proof} It immediately follows that any type ideal naturally gives rise to the following two type decompositions. 
\begin{cor}\label{tdcor} Given a type ideal $T$ in a separative complete ortholattice $\mathbb{P}$, there are unique $p_T,q_T\in\mathrm{c}\mathbb{P}$ such that \begin{eqnarray} p_T\in T\quad &\textrm{and}& \quad\mathrm{c}[p_T^\perp]\cap T=\{\mathbf{0}\}.\label{tdcor1}\\ q_T\in\mathrm{c}T\quad &\textrm{and}& \quad[q_T^\perp]\cap T=\{\mathbf{0}\}.\label{tdcor2} \end{eqnarray} \end{cor} It follows immediately from \autoref{td} that, if $\mathbb{P}$ is a complete ortholattice, $\kappa$ is some (finite or infinite) cardinal and $T\subseteq\mathbb{P}$ is a type ideal, then so is \begin{equation}\label{T^n} T^\kappa=\{\bigvee S:S\subseteq T\textrm{ and }|S|=\kappa\}. \end{equation} If $\mathbb{P}$ is also separative then $\mathrm{c}(\bigvee S)=\bigvee\mathrm{c}(S)$, for any $S\subseteq\mathbb{P}$, which means $\mathrm{c}T=\mathrm{c}T^\kappa$, as $\mathrm{c}T$ is a complete ideal by \autoref{cideal}, i.e. $q_T=q_{T^\kappa}$. It also means that $p_{T^\kappa}\leq q_T$, even though in this case we can have $p_T<p_{T^\kappa}$. \subsection{Homogeneity} We now show how order-density yields homogeneous decompositions. \begin{dfn}\label{homdef} Assume $\mathbb{P}$ is a complete ortholattice. We say $p\in\mathbb{P}$ is $T$-\emph{homogeneous} if there are orthogonal $S\subseteq T\cap[p]$ with $p=\bigvee S$ and $\mathrm{c}S=\{\mathrm{c}(p)\}$. If $|S|=\kappa$, we say that $p$ has \emph{order} $\kappa$ or that $p$ is \emph{$\kappa$-$T$-homogeneous}. We denote the $\kappa$-$T$-homogeneous elements by $T_\kappa$. We call $p\in\mathbb{P}$ $T$-\emph{subhomogeneous} if there are very orthogonal $T$-homogeneous $S$ with $p=\bigvee S$. If each $s\in S$ has order $<\kappa$ then we say that $p$ is \emph{$<\!\!\kappa$-$T$-subhomogeneous}. By $\kappa$-$T$-subhomogeneous we mean $<\!\!\kappa^+$-$T$-subhomogeneous.
We denote the $T$-subhomogeneous and $<\!\!\kappa$-$T$-subhomogeneous elements by $T_\mathrm{sub}$ and $T_{<\kappa}$ respectively. \end{dfn} If $\mathbb{P}$ is a separative complete ortholattice and $T$ is a type ideal in $\mathbb{P}$ then so are $T_\mathrm{sub}$, $T_{<\kappa}$ and $T_\kappa$, for each cardinal $\kappa$. Hence we get the type decompositions in \autoref{tdcor}, and the central $<\!\!\kappa$-$T$-subhomogeneous part is just the join of all the central $\lambda$-$T$-homogeneous parts, for $\lambda<\kappa$, i.e. \[p_{T_{<\kappa}}=\bigvee_{\lambda<\kappa}p_{T_\lambda}\qquad\textrm{and}\qquad p_{T_\mathrm{sub}}=\bigvee_\lambda p_{T_\lambda}.\] Also $T\subseteq T_\kappa\subseteq T_{<\kappa^+}\subseteq T^\kappa$ (see \eqref{T^n}), so $\mathrm{c}T\subseteq\mathrm{c}T_\kappa\subseteq\mathrm{c}T_{<\kappa^+}\subseteq\mathrm{c}T^\kappa=\mathrm{c}T$, for all $\kappa$, and $\mathrm{c}T_\mathrm{sub}=\mathrm{c}T$, so $p_{T_\mathrm{sub}}\leq q_{T_\mathrm{sub}}=q_T$. We aim to show that $p_{T_\mathrm{sub}}=q_T$ for suitable $T$. Also note that the order of a homogeneous element is not unique in general. For one thing, as long as $\mathbf{0}\in T$, $\mathbf{0}$ is $\kappa$-$T$-homogeneous, for all $\kappa$. However, this can also happen for non-zero elements, i.e. we can have $T_\lambda\cap T_\kappa\neq\{\mathbf{0}\}$ for $\lambda\neq\kappa$. \begin{thm}\label{homthm} If $T$ is an order-dense type ideal in a separative complete ortholattice $\mathbb{P}$ then $\mathbf{1}$ is $T$-subhomogeneous. \end{thm} \begin{proof} By \eqref{tdprp2}, we may recursively define $t_\alpha\in T_\alpha$ so that $\mathrm{c}(t_\alpha)=\bigvee\mathrm{c}T_\alpha$, where $T_\alpha$ is the type ideal given by $T_\alpha=T\cap[(\bigvee_{\beta<\alpha}t_\beta)^\perp]$. Let $s_\alpha=\mathrm{c}(t_\alpha)^\perp\wedge\bigwedge_{\beta<\alpha}\mathrm{c}(t_\beta)\in\mathrm{c}\mathbb{P}$, by \eqref{centresublattice}.
If $s=s_\alpha\wedge(\bigvee_{\beta<\alpha}t_\beta)^\perp\neq\mathbf{0}$ then, by the order density of $T$, we would have non-zero $t\in T\cap[s]\subseteq T_\alpha$. This $t$ would then be very orthogonal to $t_\alpha$ so $t\vee t_\alpha\in T_\alpha$ and $\mathrm{c}(t\vee t_\alpha)>\mathrm{c}(t_\alpha)$, contradicting the definition of $t_\alpha$. Thus $s_\alpha=s_\alpha\wedge\bigvee_{\beta<\alpha}t_\beta=\bigvee_{\beta<\alpha}(s_\alpha\wedge t_\beta)$. Also, for each $\beta<\alpha$, \[\mathrm{c}(s_\alpha\wedge t_\beta)=s_\alpha\wedge\mathrm{c}(t_\beta)=s_\alpha,\] so each $s_\alpha$ is homogeneous. As the $(t_\alpha)$ are orthogonal, they must be eventually $\mathbf{0}$. Thus $(\bigvee s_\alpha)^\perp=\bigwedge\mathrm{c}(t_\alpha)=\mathbf{0}$ so the $(s_\alpha)$ witness the subhomogeneity of $\mathbf{1}$. \end{proof} \begin{cor} If $T$ is an order-dense type ideal in a separative complete ortholattice $\mathbb{P}$ and $T_\lambda\cap T_\kappa\cap\mathrm{c}\mathbb{P}=\{\mathbf{0}\}$, whenever $\lambda\neq\kappa$, then there are unique orthogonal $(t_\kappa)\subseteq\mathrm{c}\mathbb{P}$ with $t_\kappa\in T_\kappa$, for all $\kappa$, and \[\mathbf{1}=\bigvee t_\kappa.\] \end{cor} \begin{proof} Existence follows immediately from \autoref{homthm} by joining all resulting homogeneous elements of the same order. Uniqueness now follows from \autoref{tdcor}, as $T_\lambda\cap T_\kappa\cap\mathrm{c}\mathbb{P}=\{\mathbf{0}\}$, for $\lambda\neq\kappa$, means that $t_\kappa=p_{T_\kappa}$, for all $\kappa$. \end{proof} \subsection{Type Relations} \begin{dfn}\label{tr} A relation $\sim$ on a complete ortholattice $\mathbb{P}$ is a \emph{type relation} if, whenever $(q_\alpha),(r_\alpha)\subseteq\mathbb{P}$ are such that $(q_\alpha\vee r_\alpha)$ are very orthogonal, \begin{equation}\label{treq} \forall\alpha(q_\alpha\sim r_\alpha)\qquad\Leftrightarrow\qquad\bigvee q_\alpha\sim\bigvee r_\alpha. 
\end{equation} We call $\sim$ \emph{proper} if $p\sim\mathbf{0}\Rightarrow p=\mathbf{0}$, for all $p\in\mathbb{P}$. \end{dfn} For any complete ortholattice $\mathbb{P}$, equality is a trivial example of a proper reflexive type relation, and it is of course the strongest reflexive relation on $\mathbb{P}$. While \begin{equation}\label{cpleqcq} \mathrm{c}(p)\leq\mathrm{c}(q) \end{equation} also defines a type relation, the weakest proper type relation. For if $\precsim$ is such a relation then $p\precsim q$ implies $\mathrm{c}(q)^\perp\wedge p\precsim\mathrm{c}(q)^\perp\wedge q=\mathbf{0}$ so $p\leq\mathrm{c}(q)$ and hence $\mathrm{c}(p)\leq\mathrm{c}(q)$. Thus \[\mathrm{c}(p)=\mathrm{c}(q)\] is the weakest symmetric proper type relation and this will coincide with equality if and only if $\mathbb{P}=\mathrm{c}\mathbb{P}$, i.e. if and only if $\mathbb{P}$ is a Boolean algebra, in which case they both represent the unique proper reflexive symmetric type relation on $\mathbb{P}$. Slightly more interesting examples are given by perspectivity, orthoperspectivity and semiorthoperspectivity, which are type relations by \autoref{ordertd}. Any type relation $\sim$ naturally defines other type relations $\precsim$ and $\precsim_\mathrm{rel}$ by \begin{eqnarray} p\precsim q &\Leftrightarrow& \exists r\in[q](p\sim r),\textrm{ and}\label{precsimdef}\\ p\precsim_\mathrm{rel} q &\Leftrightarrow& \exists r\in[q]_q(p\sim r).\label{precsimreldef} \end{eqnarray} In general $\precsim_\mathrm{rel}$ is stronger than $\precsim$, and they coincide when $\mathbb{P}$ is orthomodular, by \autoref{orthoequiv}. Even in the non-orthomodular case, if $\sim$ is weaker than orthoperspectivity (on $\mathbb{P}^\sim$ at least), then $p\sim r\leq q$ implies $r\sim r^{\perp_q\perp_q}\leq q$ and hence, if $\sim$ is also transitive, $\precsim_\mathrm{rel}$ and $\precsim$ again coincide. 
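As a concrete illustration (purely for orientation, and not needed in what follows): on the projection lattice $\mathbb{P}=\mathcal{P}(\mathcal{B}(H))$, Murray-von Neumann equivalence \[p\sim q\qquad\Leftrightarrow\qquad\exists v\in\mathcal{B}(H)(v^*v=p\textrm{ and }vv^*=q)\] is a proper symmetric type relation (trivially so, as $\mathcal{B}(H)$ is a factor and hence admits no non-trivial very orthogonal families). In this case \eqref{precsimdef} becomes the usual subequivalence ordering, $p\precsim q\Leftrightarrow\dim(pH)\leq\dim(qH)$, and the $\sim$-finite projections are precisely those of finite rank, as any infinite rank projection is equivalent to a proper subprojection of itself.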
\begin{prp}\label{trti} For any reflexive type relation $\sim$ on a separative complete ortholattice $\mathbb{P}$, the following subsets are type ideals. \begin{enumerate} \item\label{trtd1} The $\sim$-finite elements of $\mathbb{P}$. \item\label{trtd2} The $\sim$-orthofinite elements of $\mathbb{P}$. \end{enumerate} \end{prp} \begin{proof}\ \begin{itemize} \item[\eqref{trtd1}] Say we have very orthogonal $(p_\alpha)\subseteq\mathbb{P}$. If $\bigvee p_\alpha$ were not $\sim$-finite, we would have $\bigvee p_\alpha\sim\bigvee q_\alpha$ for some $(q_\alpha)$ with $q_\beta<p_\beta$, for some $\beta$, by \autoref{ordertd}. As $\sim$ is a type relation, we would have $p_\beta\sim q_\beta$, and hence $p_\beta$ would not be $\sim$-finite. On the other hand, if $p_\beta$ is not $\sim$-finite, for some $\beta$, then $p_\beta\sim q_\beta$ for some $q_\beta<p_\beta$ and then $\bigvee p_\alpha\sim\bigvee q_\alpha<\bigvee p_\alpha$, where $q_\alpha=p_\alpha$ for $\alpha\neq\beta$, and hence $\bigvee p_\alpha$ is not $\sim$-finite. \item[\eqref{trtd2}] Essentially the same proof as in \eqref{trtd1}. \end{itemize} \end{proof} \begin{thm}\label{permod} If $\sim$ is a finite symmetric transitive type relation on a complete ortholattice $\mathbb{P}$ weaker than perspectivity then $\mathbb{P}$ is modular and $\sim$ coincides with $\sim_\mathrm{p}$. \end{thm} \begin{proof} If $\mathbb{P}$ were not orthomodular, we would have $[p]\neq[p]_p$, for some $p\in\mathbb{P}$, and thus $q<q^{\perp_p\perp_p}$, for any $q\in[p]\setminus[p]_p$, even though $q^{\perp_p\perp_p}\sim q$, by \autoref{orthoperpequiv}, contradicting finiteness. Perspectivity must also be finite, being weaker than $\sim$, and this combined with orthomodularity means $\mathbb{P}$ is modular, by \eqref{perfin}. As $\mathbb{P}$ is complete, it is a continuous geometry, by \cite{Kaplansky1955}.
Following the proof of \cite{Kaplansky1951} Theorem 6.6(c), we note that \cite{vonNeumann1960} Part III Theorem 2.7 means that perspectivity satisfies generalized comparison, i.e. for any $q,r\in\mathbb{P}$, we have $p\in\mathrm{c}\mathbb{P}$ (the centre we have defined is the same as the centre defined at the start of \cite{vonNeumann1960} Part III Chapter I, either by \autoref{centralequiv} \eqref{centralequiv3} or by \cite{vonNeumann1960} Part I Theorem 5.4 and \autoref{cenuniqcomp}) with $p\wedge q$ perspective to some $s\leq p\wedge r$ and $p^\perp\wedge r$ perspective to some $t\leq p^\perp\wedge q$. So if $q\sim r$ then $s\sim p\wedge q\sim p\wedge r$ which, by finiteness, means $s=p\wedge r$ and, likewise, $t=p^\perp\wedge q$. So $p\wedge q$ and $p\wedge r$ are perspective, as are $p^\perp\wedge q$ and $p^\perp\wedge r$ so, by \autoref{centralequiv} \eqref{centralequiv3}, $q$ and $r$ are also perspective. \end{proof} \subsection{Type Classes} \begin{dfn}\label{tc} Let $\mathbf{C}$ be the class of complete lattices with a relation $\perp$. We call $\mathbf{T}\subseteq\mathbf{C}$ a \emph{type class} if it is closed under isomorphisms and, for $(\mathbb{P}_\alpha)\subseteq\mathbf{T}$, \begin{equation}\label{tcprod} (\mathbb{P}_\alpha)\subseteq\mathbf{T}\qquad\Leftrightarrow\qquad\prod\mathbb{P}_\alpha\in\mathbf{T} \end{equation} \end{dfn} \begin{prp}\label{tctd} If $\mathbf{T}$ is a type class and $\mathbb{P}$ is a separative complete ortholattice then the following subsets of $\mathbb{P}$ are type ideals. \begin{eqnarray}\label{tctdeq} \mathbb{P}_\mathbf{T} &=& \{p\in\mathbb{P}:[p]\in\mathbf{T}\}.\label{tctd1}\\ \mathbb{P}_{\mathbf{T}\mathrm{rel}} &=& \{p\in\mathbb{P}:[p]_p\in\mathbf{T}\}.\label{tctd2} \end{eqnarray} \end{prp} \begin{proof} $\mathbb{P}_\mathbf{T}$ and $\mathbb{P}_{\mathbf{T}\mathrm{rel}}$ are type ideals by \eqref{ordertdeq} and \eqref{bigveepalpha} respectively. 
\end{proof} Any subclass of $\mathbf{C}$ consisting of all those complete lattices satisfying some universally quantified sentence in the language $\{\mathbf{0},\leq,\perp,\wedge,\vee\}$ will be a type class, like the following important examples \textendash \begin{eqnarray} \mathbf{D} &=& \{\mathbb{P}\in\mathbf{C}:\mathbb{P}\textrm{ is distributive}\},\label{Ddef}\\ \mathbf{M} &=& \{\mathbb{P}\in\mathbf{C}:\mathbb{P}\textrm{ is modular}\},\textrm{ and}\label{Mdef}\\ \mathbf{O} &=& \{\mathbb{P}\in\mathbf{C}:\mathbb{P}\textrm{ is orthomodular}\}.\label{Odef} \end{eqnarray} (note that \autoref{orthoequiv}\eqref{orthoequiv5} characterizes orthomodularity just with an orthogonality relation $\perp$, rather than an orthocomplementation ${}^\perp$, which we may not have for $[p]$). We also have the type class $\mathbf{S}=\{\mathbb{P}\in\mathbf{C}:\mathbb{P}\textrm{ is separative}\}$, although this does not lead to any interesting type decompositions in a separative complete ortholattice $\mathbb{P}$ because then $\mathbb{P}=\mathbb{P}_\mathbf{S}$ and, even though we may have $\mathbb{P}\neq\mathbb{P}_{\mathbf{S}\mathrm{rel}}$ (see the comments after \autoref{orthoequiv}), we will still have $\mathrm{c}\mathbb{P}\subseteq\mathbb{P}_{\mathbf{S}\mathrm{rel}}$. Now, using the type ideals and type decompositions coming from several instances of \eqref{tctd1} and \eqref{tdcor2} respectively, we can define \begin{eqnarray} p_\mathrm{I} &=& q_{\mathbb{P}_\mathbf{D}},\label{pI}\\ p_\mathrm{II} &=& q^\perp_{\mathbb{P}_\mathbf{D}}\wedge q_{\mathbb{P}_\mathbf{M}},\label{pII}\\ p_\mathrm{III} &=& q^\perp_{\mathbb{P}_\mathbf{M}}\wedge q_{\mathbb{P}_\mathbf{O}},\textrm{ and}\label{pIII}\\ p_\mathrm{IV} &=& q^\perp_{\mathbb{P}_\mathbf{O}}.\label{pIV} \end{eqnarray} If $\mathbb{P}$ is the projection lattice $\mathcal{P}(A)$ of a von Neumann algebra $A$ then these projections do indeed correspond to those you get from the classical von Neumann algebra type decomposition.
Specifically, to see that $p_\mathrm{I}A$ corresponds to the type I part, see the comments after \autoref{pq=qp}, and to see that this even extends to annihilators in C*-algebras, see \autoref{commuteBoolean}. Also $p_\mathrm{II}A$ corresponds to the classical type II part because a von Neumann algebra is finite if and only if its projection lattice is modular (see \cite{Berberian1972} \S34 Proposition 1 and \cite{Kaplansky1955} Theorem on page 1). This fact can also be extended, at least partially, to annihilators in C*-algebras, by \autoref{permod} and \autoref{simtr}. And $\mathcal{P}(A)$ is necessarily orthomodular, which means $p_\mathrm{IV}=0$, so $p_\mathrm{III}A$ also corresponds to the type III part in this case. We do not know if the same is true for the annihilators in a C*-algebra, i.e. whether there exist any type IV C*-algebras at all. The annihilators in such a C*-algebra would be wildly different from any ortholattices seen before in operator algebras, as they would fail to be orthomodular in a very strong way. We can also use \autoref{tdcor} and \eqref{T^n} to define \begin{eqnarray*} p_\mathrm{I_n} &=& p_{\mathbb{P}_{\mathbf{D}n}},\textrm{ and}\\ p_\mathrm{II_1} &=& q^\perp_{\mathbb{P}_\mathbf{D}}\wedge p_{\mathbb{P}_\mathbf{M}}. \end{eqnarray*} If $\mathbb{P}$ is again the projection lattice $\mathcal{P}(A)$ of a von Neumann algebra $A$ then $p_\mathrm{I_n}A$ and $p_\mathrm{II_1}A$ are again the type $\mathrm{I_n}$ and $\mathrm{II_1}$ part respectively in the classical von Neumann type decomposition. In this case, a supremum of finite projections is again finite, which means that $p_\mathrm{I_n}\leq p_{\mathbb{P}_\mathbf{M}}$ and $\mathbb{P}^n_\mathbf{M}=\mathbb{P}_\mathbf{M}$. In an arbitrary separative complete ortholattice $\mathbb{P}$ we do not have any notion of Murray-von Neumann equivalence and so there is no reason to believe these facts remain true in general.
For annihilators in C*-algebras we will, however, define an analogous notion (see \S\ref{Equivalence}) which will enable us to prove the first of these facts (see \autoref{nsubcor}\eqref{nsubcor3}). We could have also used the relative type-ideals in \eqref{tctd2}, rather than those in \eqref{tctd1}, to define the above type decomposition, as given below. \begin{eqnarray} p_{\mathrm{I}\mathrm{rel}} &=& q_{\mathbb{P}_{\mathbf{D}\mathrm{rel}}},\label{pIp}\\ p_{\mathrm{II}\mathrm{rel}} &=& q^\perp_{\mathbb{P}_{\mathbf{D}\mathrm{rel}}}\wedge q_{\mathbb{P}_{\mathbf{M}\mathrm{rel}}},\label{pIIp}\\ p_{\mathrm{III}\mathrm{rel}} &=& q^\perp_{\mathbb{P}_{\mathbf{M}\mathrm{rel}}}\wedge q_{\mathbb{P}_{\mathbf{O}\mathrm{rel}}},\textrm{ and}\label{pIIIp}\\ p_{\mathrm{IV}\mathrm{rel}} &=& q^\perp_{\mathbb{P}_{\mathbf{O}\mathrm{rel}}}.\label{pIVp} \end{eqnarray} If $\mathbb{P}$ is the projection lattice in a von Neumann algebra then, as this is orthomodular, we have $[p]=[p]_p$ (see \autoref{orthoequiv} \eqref{orthoequiv3}) and this decomposition is exactly the same as the previous one. But for annihilators in C*-algebras, it may well be different, although we do at least know that $p_{\mathrm{I}\mathrm{rel}}=p_\mathrm{I}$ in this case, by \autoref{commuteBoolean}. Indeed, these relative versions using $[p]_p$ are really more natural for annihilators, as using $[p]$ instead might result in the rather awkward situation that an annihilator $B$ could be modular in the larger C*-algebra $A$ even though it is not modular in itself (i.e. $[B]$ could be a modular lattice even when $[B]_B$ is not), or vice versa.
We could, however, avoid this problem by using the even smaller type-ideal of $\mathbb{P}$ given by \[\{p\in\mathbb{P}:[p]_p=[p]\in\mathbf{M}\}\subseteq\mathbb{P}_{\mathbf{M}\mathrm{rel}}\cap\mathbb{P}_\mathbf{M}.\] In fact, this is just the intersection of $\mathbb{P}_\mathbf{M}$ (or $\mathbb{P}_{\mathbf{M}\mathrm{rel}}$) with the equality type-ideal $\mathbb{P}_=$ defined in \eqref{eqtypeideal}. Taking intersections of the various resulting type-ideals we get from these type-classes would yield more type-ideals leading to even finer type decompositions. However, we do not know whether all these potential types are actually realized by certain C*-algebras. \subsection{Boolean Elements} The following terminology comes from \cite{Chevalier1991}. \begin{dfn} An ortholattice $\mathbb{P}$ has the \emph{relative centre property} if, for all $p\in\mathbb{P}$, \[\mathrm{c}_p[p]_p=\{p\wedge q:q\in\mathrm{c}\mathbb{P}\}.\] \end{dfn} This is equivalent to saying $\mathrm{c}(q)\wedge p=\mathrm{c}_p(q)$ whenever $q\in[p]_p$. \begin{dfn}\label{gcdef} A relation $\precsim$ on a complete ortholattice $\mathbb{P}$ satisfies \emph{generalized comparison} if, for any $q,r\in\mathbb{P}$, there exists $p\in\mathrm{c}\mathbb{P}$ with \[p\wedge q\precsim p\wedge r\quad\textrm{and}\quad p^\perp\wedge r\precsim p^\perp\wedge q.\] \end{dfn} \begin{prp}\label{simgcprp} If $\sim$ is a symmetric type relation and $\precsim_\mathrm{rel}$ is defined as in \eqref{precsimreldef} then $\precsim_\mathrm{rel}$ satisfies generalized comparison if and only if, for all $q,r\in\mathbb{P}$, \begin{equation}\label{simgc} \exists u\in[q]_q\exists v\in[r]_r(u\sim v\textrm{ while }q\wedge u^\perp\textrm{ is very orthogonal to }r\wedge v^\perp).
\end{equation} \end{prp} \begin{proof} If $\precsim_\mathrm{rel}$ satisfies generalized comparison then, for any $q,r\in\mathbb{P}$ we have $p\in\mathrm{c}\mathbb{P}$, $s\in[p\wedge r]_r$, $t\in[p^\perp\wedge q]_q$ such that $p\wedge q\sim s$ and $p^\perp\wedge r\sim t$. If $\sim$ is a symmetric type relation then $u=(p\wedge q)\vee_qt=(p\wedge q)\vee t\sim s\vee(p^\perp\wedge r)=s\vee_r(p^\perp\wedge r)=v$ while $q\wedge u^\perp\leq p^\perp$ and $r\wedge v^\perp\leq p$. Conversely, given such a $u$ and $v$, setting $p=\mathrm{c}(r\wedge v^\perp)$ gives $p\wedge q\leq u^{\perp_q\perp_q}=u$ and hence \[p\wedge q=p\wedge u\sim p\wedge v\leq p\wedge r\] and, likewise, $p^\perp\wedge r=p^\perp\wedge v\sim p^\perp\wedge u\leq p^\perp\wedge q$. \end{proof} As noted after \autoref{tr}, $p\sim q\Leftrightarrow \mathrm{c}(p)=\mathrm{c}(q)$ defines a (proper reflexive symmetric) type relation on any complete ortholattice $\mathbb{P}$. To see that $\precsim_\mathrm{rel}$ then satisfies generalized comparison, simply take any $q,r\in\mathbb{P}$, set $u=q\wedge \mathrm{c}(r)$ and $v=r\wedge \mathrm{c}(q)$ and note that $\mathrm{c}(u)=\mathrm{c}(q)\wedge \mathrm{c}(r)=\mathrm{c}(v)$ while $q\wedge u^\perp\leq \mathrm{c}(r)^\perp$ and $r\wedge v^\perp\leq \mathrm{c}(q)^\perp$. The next result shows that, under suitable extra hypotheses, it is the only such relation on $\mathbb{P}_{\mathbf{D}\mathrm{rel}}$. In fact it shows that, if $\sim$ is a proper type relation on a complete Boolean algebra $\mathbb{P}$ then generalized comparison is equivalent to reflexivity, i.e. in this case generalized comparison just says $\sim$ is equality. \begin{thm}\label{Boolthm} If $\mathbb{P}$ is a complete ortholattice with the relative centre property, $\sim$ is a proper symmetric type relation and $\precsim_\mathrm{rel}$ satisfies generalized comparison then \begin{enumerate} \item\label{Boolthm1} For $p\in\mathbb{P}_{\mathbf{D}\mathrm{rel}}$ and $q\in\mathbb{P}$, $\mathrm{c}(p)\leq\mathrm{c}(q)\Leftrightarrow p\precsim_\mathrm{rel}q$.
\item\label{Boolthm2} For $p,q\in\mathbb{P}_{\mathbf{D}\mathrm{rel}}$, $\mathrm{c}(p)=\mathrm{c}(q)\Leftrightarrow p\sim q$. \item\label{Boolthm3} When $p,q\in\mathbb{P}_{\mathbf{D}\mathrm{rel}}$ and $p\sim q$, $\sim$ is an orthoisomorphism on $[p]_p\times[q]_q$. \end{enumerate} \end{thm} \begin{proof}\ \begin{enumerate} \item The $\Leftarrow$ part is immediate from the comments after \eqref{cpleqcq}. For the $\Rightarrow$ part, note that, by generalized comparison, we have $s\in[p]_p$ and $t\in[q]_q$ with $s\sim t$. As $\mathbb{P}$ has the relative centre property and every element of a Boolean algebra is central, we have $\mathrm{c}(p\wedge s^\perp)\wedge p=\mathrm{c}_p(p\wedge s^\perp)=p\wedge s^\perp$. As $\sim$ is a type relation, $\mathbf{0}=\mathrm{c}(p\wedge s^\perp)\wedge s\sim \mathrm{c}(p\wedge s^\perp)\wedge t$. But $\sim$ is also proper and hence $\mathrm{c}(p\wedge s^\perp)\wedge t=\mathbf{0}$ so $\mathrm{c}(p\wedge s^\perp)\wedge q=\mathbf{0}$ and $\mathrm{c}(p\wedge s^\perp)\wedge\mathrm{c}(q)=\mathbf{0}$. As $\mathrm{c}(p)\leq\mathrm{c}(q)$, we must have $p\wedge s^\perp=\mathbf{0}$ and therefore $p=s$, i.e. $p\sim t\leq q$. \item A symmetric argument yields $q=t$ too, i.e. $p\sim q$. \item Say we had $r,s\in[p]_p$ and $t\in[q]_q$ with $r\sim t$ and $s\sim t$. We would then have $\mathrm{c}(r)=\mathrm{c}(t)=\mathrm{c}(s)$ and hence $r=\mathrm{c}_p(r)=\mathrm{c}(r)\wedge p=\mathrm{c}(s)\wedge p=\mathrm{c}_p(s)=s$. Also, for any $s\in[p]_p$, $\mathrm{c}(\mathrm{c}(s)\wedge q)=\mathrm{c}(s)\wedge\mathrm{c}(q)=\mathrm{c}(s)\wedge\mathrm{c}(p)=\mathrm{c}(s)$ and hence $s\sim\mathrm{c}(s)\wedge q\in[q]_q$. Thus $\sim$ restricted to $[p]_p\times[q]_q$ is the function $s\mapsto\mathrm{c}(s)\wedge q$. It is clearly order preserving and a symmetric argument shows that the same is true of the inverse function $t\mapsto\mathrm{c}(t)\wedge p$. Finally note that, as $[p]_p$ and $[q]_q$ are Boolean algebras, any order isomorphism is actually an orthoisomorphism. 
\end{enumerate} \end{proof} \section{Annihilators and Projections}\label{AvsP} \subsection{Annihilators}\label{annsec} Throughout, $A$ denotes a C*-algebra with positive elements $A_+=\{aa^*:a\in A\}$, self-adjoint elements $A_\mathrm{sa}=\{a=a^*:a\in A\}$ (or $A_\mathrm{sa}=\{a-b:a,b\in A_+\}$), unit ball $A^1=\{a:||a||\leq1\}$ and projections $\mathcal{P}(A)=\{a\in A:a=a^*=a^2\}$, where $p\leq q$ means $p=pq$, for $p,q\in\mathcal{P}(A)$. Consider the following relations on $A$. \begin{eqnarray*} aLb &\Leftrightarrow& ab^*=0.\\ a\bot b &\Leftrightarrow& aLb\textrm{ and }aLb^*.\\ a\top b &\Leftrightarrow& a\bot b\textrm{ and }a^*\bot b. \end{eqnarray*} Taking the orthocompletion of $A$ w.r.t. any of these relations amounts to the same thing. To see why, first note that $\{a\}^\perp=\{a^*a\}^\top$ and $\{a\}^\top=\{a,a^*\}^\perp$ so \[[A]^\perp=[A]^\top\subseteq\{S\subseteq A:S=S^*\}\] and, for any $S\subseteq A$ with $S=S^*(=\{s^*:s\in S\})$, we have $S^\perp=S^\top$. As $\top$ is a preorthogonality relation, it follows from \S\ref{TheCompletion} that \[[A]^\perp\textrm{ is a complete ortholattice.}\] In fact, all we have used here is the fact that $A$ is a *-ring with proper involution. Also $L$ is a preorthogonality relation on $A$ and the map \[B\mapsto B\cap B^*\] is an orthoisomorphism from $[A]^L$ to $[A]^\perp$. Every element of $[A]^L$ is clearly a closed left ideal and so every element of $[A]^\perp$ is a hereditary C*-subalgebra of $A$ (see \cite{Pedersen1979} Theorem 1.5.2). As C*-algebras, rather than their left ideals, are our primary object of study, and the equivalence in \S\ref{Equivalence} is slightly easier to define with $\bot$ rather than $\top$, we shall focus on the relation $\bot$. The elements of $[A]^\perp$ will be called \emph{annihilators} of $A$. Another thing to note is that the above relations all agree on $A_\mathrm{sa}$. The fact that $\{a\}^\perp=\{a^*a\}^\perp$ shows that $A_+$ is join-dense in $A$, w.r.t.
the preorder induced by $\perp$, and thus \[[A]^\perp\cong[A_+]^\perp.\] Indeed, it will often be convenient in proofs to use the elements of $A_+$ rather than $A$. If $A$ has real rank zero, then every hereditary C*-subalgebra contains an approximate unit of projections, so every annihilator will be an annihilator of a subset of projections, i.e. $\mathcal{P}(A)$ will be join-dense in $A$ and \[[A]^\perp\cong[\mathcal{P}(A)]^\perp.\] We generalize this observation in \autoref{SP}. One other thing worth pointing out is that we have something extra on $[A]^\perp$ that we do not have for an arbitrary ortholattice. Specifically, we actually have a function $||\cdot\cdot||$ from $[A]^\perp\times[A]^\perp$ to $[0,1]$ which quantifies the degree of orthogonality of $B,C\in[A]^\perp$ given by \begin{equation}\label{||BC||} ||BC||=\sup_{b\in B^1_+,c\in C^1_+}||bc||. \end{equation} The important properties of this function are that, for $B,C,D\in[A]^\perp$, \begin{eqnarray*} ||BC|| &=& ||CB||,\\ B\neq\{0\} &\Leftrightarrow& ||BB||=1,\\ B\perp C &\Leftrightarrow& ||BC||=0,\textrm{ and}\\ C\subseteq D &\Rightarrow& ||BC||\leq||BD||. \end{eqnarray*} Indeed, these properties could be derived from the relevant properties of $||\cdot\cdot||$ on $A^1_+$ and the fact that $a\perp b\Leftrightarrow||ab||=0$, for $a,b\in A^1_+$ (and $[A]^\perp\cong[A^1_+]^\perp$). If, further, $||BC^\perp||$ satisfies the triangle inequality, i.e. for all $B,C,D\in[A]^\perp$, \[||BD^\perp||\leq||BC^\perp||+||CD^\perp||,\] then we naturally call $||\cdot\cdot||$ an \emph{orthonorm}. We do not know if \eqref{||BC||} always yields an orthonorm on $[A]^\perp$, although we show it does for certain C*-algebras in \autoref{nsubcor}\eqref{nsubcor2}.
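As a sanity check on \eqref{||BC||}, consider the commutative case $A=C_0(X)$, for a locally compact Hausdorff space $X$, where one can check that every annihilator has the form $C_0(U)=\{f\in A:f\equiv0\textrm{ outside }U\}$ for some (regular) open $U\subseteq X$. Given annihilators $B=C_0(U)$ and $C=C_0(V)$, bump functions taking the value $1$ at a common point of $U$ and $V$ show that \[||BC||=\begin{cases}1&\textrm{if }U\cap V\neq\emptyset,\\0&\textrm{if }U\cap V=\emptyset,\end{cases}\] so here $||\cdot\cdot||$ is $\{0,1\}$-valued, with $||BC||=0$ precisely when $B\perp C$, and the triangle inequality holds automatically. It is only in the non-commutative case that intermediate degrees of orthogonality can appear.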
We can even use $||\cdot\cdot||$ to define the \emph{orthospectrum} of $B,C\in[A]^\perp$ by \begin{equation*} \sigma(BC)=\overline{\{||BD||^2:D\in[C]\textrm{ and }((D\vee B^\perp)\wedge B)\perp((D^{\perp_B}\vee B^\perp)\wedge B)\}}, \end{equation*} except when $B=C=A$, in which case we define $\sigma(AA)=\{1\}$ (see \autoref{orthospecpq}). \subsection{Non-Commutative Topology}\label{NCT} From this point on, it will be convenient to assume that $A$ is concretely represented faithfully and non-degenerately on some Hilbert space $H$. This is, of course, valid, thanks to the standard GNS construction. One canonical choice would be the universal representation, which has the advantage that its projections distinguish all hereditary C*-subalgebras of $A$. The same is also true for the atomic representation, by \cite{Pedersen1979} Proposition 4.3.13 and Theorem 4.3.15. However, we are primarily concerned with annihilators, and the projections in any faithful representation of $A$ still distinguish the annihilators (see \autoref{annproj}). So, as long as we fix it throughout, any faithful non-degenerate representation will do. However, we will still use standard non-commutative topology terminology (see \cite{Akemann1969}, \cite{Akemann1970} and \cite{Pedersen1979} 3.11.10) which, traditionally, has only been used with reference to projections in $A^{**}$. This restriction is certainly convenient for many of the proofs, and necessary for some of the results, but many of the results themselves are valid also for arbitrary representations, thanks to the following property of $A^{**}$. \begin{thm}\label{A**} Any representation $\pi$ of $A$ has a normal extension to $A^{**}$. \end{thm} \begin{proof} See \cite{Pedersen1979} Theorem 3.7.7. \end{proof} \begin{dfn} We call $p\in\mathcal{P}(\mathcal{B}(H))$ \emph{open} if $p=\sup(p_\alpha)$, for some increasing net $(p_\alpha)\subseteq A^1_+$, and \emph{closed} if $p^\perp$ is open.
\end{dfn} Equivalently, a projection $p$ is closed if and only if $p=\inf(p_\alpha)$ for some decreasing net $(p_\alpha)\subseteq 1-A^1_+$. And the supremum/infimum of an increasing/decreasing net in $A$ coincides with the limit of that net in the strong (or weak) topology, so open and closed projections always lie in the double commutant $A''$ of $A$. We will denote the sets of open and closed projections by $\mathcal{P}(A'')^\circ$ and $\overline{\mathcal{P}(A'')}$ respectively. Also note that the identity operator $1$ is open, as we are dealing with a non-degenerate representation. If $B$ is a C*-subalgebra of $A$, then $\{b\in B_+:||b||<1\}$ is directed (see \cite{Pedersen1979} Theorem 1.4.2) and hence has a supremum $p_B=\sup(B^1_+)\in\mathcal{P}(A'')^\circ$. Conversely, if $p\in\mathcal{P}(A'')^\circ$, then $B=pAp\cap A$ is a hereditary C*-subalgebra of $A$ with $p=\sup(B^1_+)=p_B$. So a projection is open precisely when it is the supremum of $B^1_+$ for some hereditary C*-subalgebra $B$ of $A$.\footnote{Incidentally, the closed projections of $A$ can similarly be characterized as the infimums of norm filters (see \cite{Bice2011} Corollary 3.4), although we will not have further occasion to refer to these.} The open and closed projections can also be characterized as the limits of increasing and decreasing sequences respectively in $\widetilde{A}^1_+=(A+\mathbb{C}1)^1_+$ (or even $\widetilde{A}^1_\mathrm{sa}$), a surprisingly non-trivial fact when $A$ is not unital (and hence $A\neq\widetilde{A}$). Another important fact is that the collection of open projections $\mathcal{P}(A'')^\circ$ is norm closed (see \cite{Pedersen1979} Proposition 3.11.9). \begin{dfn} We define the \emph{interior} $p^\circ$ of $p\in\mathcal{P}(\mathcal{B}(H))$ by $p^\circ=\sup(pAp\cap A)^1_+$ and the \emph{closure} by $\overline{p}=p^{\perp\circ\perp}$. We call $p$ \emph{topologically regular}\footnote{As far as we know, such projections have not been considered or named before. 
They are the analog of regular open subsets of a topological space, although we are averse to simply calling them regular, as this is already a standard term meaning something different (see \cite{Akemann1969} Definition II.11 and the discussion at the start of \S\ref{SvsO}).} if $p=\overline{p}^\circ$ and denote the collection of all topologically regular open projections by $\overline{\mathcal{P}(A'')}^\circ$. \end{dfn} It follows that $p^\circ$ is the largest open projection satisfying $p^\circ\leq p$ and $\overline{p}$ is the smallest closed projection satisfying $p\leq\overline{p}$. The existence of such projections also follows from the fact that the supremum of a collection of open projections is open and the infimum of a collection of closed projections is closed (see \cite{Akemann1969} Proposition II.5 and combine it with \autoref{A**} to obtain the result for arbitrary representations). Also note that the interior of \emph{any} closed projection is in fact topologically regular. For if $p=q^\circ$, for some closed $q\in\mathcal{P}(\mathcal{B}(H))$, then $\overline{p}\leq q$ so \begin{equation}\label{intclosedtopreg} \overline{p}^\circ\leq q^\circ=p\leq\overline{p}^\circ. \end{equation} One more important thing to note is the difference between complements of annihilators and their corresponding projections. Specifically, given $B\in[A]^\perp$, we may have $p_{B^\perp}\neq p_B^\perp$, and one could view the complications of extending projection results to annihilators as all stemming from this fact. Indeed, $p_{B^\perp}$ is open while $p_B^\perp$ is closed, so they could not be equal unless they were clopen. This occurs when these projections, or their complements, lie in $A$ itself, i.e. if $B=pAp$ for some $p\in A$. In fact, if $A''=A^{**}$ (i.e. if we are dealing with the universal representation of $A$) and $1\in A$ then the clopen projections are precisely those in $A$ (see \cite{Akemann1969} Proposition II.18). 
However, $B^\perp=p_B^\perp Ap_B^\perp\cap A$ so, by definition, we do always have \begin{equation}\label{perpseq} p_{B^\perp}=p_B^{\perp\circ}. \end{equation} \begin{thm}\label{annproj} There are order isomorphisms between $[A]^\perp$ and $\overline{\mathcal{P}(A'')}^\circ$ given by \[B\mapsto p_B\qquad\textrm{and}\qquad p\mapsto pAp\cap A\] \end{thm} \begin{proof} For $B\in[A]^\perp$, $p_B=p_{B^{\perp\perp}}=p_{B^\perp}^{\perp\circ}$, by \eqref{perpseq}. But $p_{B^\perp}=\sup(B^\perp)^1_+$ is open so $p^\perp_{B^\perp}$ is closed and hence its interior, $p_B$, is a topologically regular projection. Also, if $p_B=p=p_C$, for $B,C\in[A]^\perp$, then $B^\perp=p^\perp Ap^\perp\cap A=C^\perp$ and hence $B=B^{\perp\perp}=C^{\perp\perp}=C$, so the first map is injective. For topologically regular $p$, \[(p^{\perp\circ}Ap^{\perp\circ}\cap A)^\perp=p^{\perp\circ\perp}Ap^{\perp\circ\perp}\cap A=p^{\perp\circ\perp\circ}Ap^{\perp\circ\perp\circ}\cap A=\overline{p}^\circ A\overline{p}^\circ\cap A=pAp\cap A,\] so $pAp\cap A$ is indeed an annihilator. Also $p_{pAp\cap A}=\sup(pAp\cap A)^1_+=p$, as $p$ is open. This, combined with the injectivity of the first map, shows that these maps are indeed inverse to each other which, as they are immediately seen to be order preserving, means they are also order isomorphisms. \end{proof} When making order theoretic statements about projections we must now always take care to note whether they are with respect to $\mathcal{P}(A'')$ or $\overline{\mathcal{P}(A'')}^\circ$, and we shall adopt the convention that, by default, it is the order structure of $\mathcal{P}(A'')$ being referred to unless otherwise specified, with subscripts for example. For while it follows from \autoref{annproj} that $\overline{\mathcal{P}(A'')}^\circ$ is a complete lattice, it is not a sublattice of $\mathcal{P}(A'')$. 
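To see this failure concretely, here is a purely illustrative commutative example, using the identification (spelled out at the end of this subsection) of projections in $A''$ with characteristic functions when $A=C[-1,1]$ acts by multiplication on $l^2([-1,1])$:

```latex
% Illustrative example: A = C[-1,1] acting by multiplication on l^2([-1,1]).
% Both of the following are topologically regular open projections:
\[
p=\chi_{[-1,0)}\qquad\textrm{and}\qquad q=\chi_{(0,1]}.
\]
% In P(A'') suprema are computed pointwise, so
\[
p\vee q=\chi_{[-1,0)\cup(0,1]},
\]
% which is open but not topologically regular, whereas
\[
p\vee_{\overline{\mathcal{P}(A'')}^\circ}q
=\overline{p\vee q}^{\,\circ}=\chi_{[-1,1]}=1.
\]
```

Note also that $pq=qp=0$ here.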
When $p,q\in\overline{\mathcal{P}(A'')}^\circ$ satisfy $pq=qp$, we do have $p\wedge q=p\wedge_{\overline{\mathcal{P}(A'')}^\circ}q=pq$, thanks to \cite{Akemann1969} Theorem II.7, so infimums do at least agree in this case, but even commutativity does not imply that supremums agree. Moreover, as mentioned above, the orthocomplement functions in the two structures are different. However, there are relations between some order theoretic concepts in $\mathcal{P}(A'')$ and the corresponding concepts in $\overline{\mathcal{P}(A'')}^\circ$. For example, the concept of centrality coincides in the two structures (see \autoref{centralannihilators} and the comments that follow) and the following result shows that commutativity is usually stronger in $\mathcal{P}(A'')$ than in $\overline{\mathcal{P}(A'')}^\circ$ (and it can be strictly stronger \textendash\, see the comments after \autoref{commuteclosure2}). \begin{prp}\label{commutativityimplication} If $\overline{\mathcal{P}(A'')}^\circ$ is orthomodular then, for $p,q\in\overline{\mathcal{P}(A'')}^\circ$, \[\exists r\in\{p,\overline{p},p^\perp,\overline{p}^\perp\}\exists s\in\{q,\overline{q},q^\perp,\overline{q}^\perp\}(rs=sr)\quad\Rightarrow\quad p\textrm{ and }q\textrm{ commute in }\overline{\mathcal{P}(A'')}^\circ.\] \end{prp} \begin{proof} For any projections $p$ and $q$ in a C*-algebra, $pq=qp\Leftrightarrow pq^\perp=q^\perp p$, and any $p$ and $q$ in an orthomodular lattice commute if and only if $p$ and $q^\perp$ commute. Thus, without loss of generality, we may assume that $pq=qp$ and hence $pq=p\wedge_{\overline{\mathcal{P}(A'')}^\circ}q$, by the comments above. As $\overline{\mathcal{P}(A'')}^\circ$ is orthomodular, we have $r\in\overline{\mathcal{P}(A'')}^\circ$ with $pqr=0$, $r\leq p$ and \[p=pq\vee_{\overline{\mathcal{P}(A'')}^\circ}r=\overline{pq\vee r}^\circ.\] But then $p^\perp qr=qp^\perp r=0$ and hence $qr=0$. 
So $r\leq p\wedge_{\overline{\mathcal{P}(A'')}^\circ}q^{\perp_{\overline{\mathcal{P}(A'')}^\circ}}$ and hence \[p=pq\vee_{\overline{\mathcal{P}(A'')}^\circ}r\leq(p\wedge_{\overline{\mathcal{P}(A'')}^\circ}q)\vee_{\overline{\mathcal{P}(A'')}^\circ}(p\wedge_{\overline{\mathcal{P}(A'')}^\circ}q^{\perp_{\overline{\mathcal{P}(A'')}^\circ}})\leq p.\] As $\overline{\mathcal{P}(A'')}^\circ$ is orthomodular, we are done, by the comments after \autoref{centredef}. \end{proof} In fact, the following result shows that orthomodularity itself is only an issue when $q<p$ and $p\overline{q}\neq\overline{q}p$ (which is indeed possible, by \autoref{commuteclosure1}). Combining this argument, the proof of \autoref{commutativityimplication} and \cite{Kalmbach1983} \S3 Lemma 3, we get that, even without orthomodularity, if $p,q\in\overline{\mathcal{P}(A'')}^\circ$ then \[\forall r,s\in\{p,\overline{p}^\perp,q,\overline{q}^\perp\}(rs=sr\textrm{ and }r\overline{rs}=\overline{rs}r)\quad\Rightarrow\quad p\textrm{ and }q\textrm{ commute in }\overline{\mathcal{P}(A'')}^\circ.\] \begin{prp} If $p,q\in\overline{\mathcal{P}(A'')}^\circ$, $q<p$ and $p\overline{q}=\overline{q}p$ then $p=q\vee_{\overline{\mathcal{P}(A'')}^\circ}\overline{q}^\perp p$. \end{prp} \begin{proof} Note $\overline{q\vee\overline{q}^\perp p}\geq\overline{q}\vee\overline{q}^\perp p\geq\overline{q}p\vee\overline{q}^\perp p=p$ so $p\leq\overline{q\vee\overline{q}^\perp p}^\circ=q\vee_{\overline{\mathcal{P}(A'')}^\circ}\overline{q}^\perp p$. \end{proof} We also note some elementary facts about ideals and their associated projections. \begin{prp}\label{centralideals} $p\in\mathcal{P}(A'')^\circ$ is central if and only if $pAp\cap A$ is an ideal. \end{prp} \begin{proof} If $pAp\cap A$ is an ideal in $A$ then its weak closure is an ideal in $A''$ containing $p$ as its unit. Thus, for any $a\in A$ we have $ap=pap=pa$ so $p\in A'$. 
On the other hand, if $p$ is central then, for any $a\in A$ and $b\in pAp\cap A$, we have $ab=apbp=pabp\in pAp\cap A$ and $ba=pbpa=pbap\in pAp\cap A$. \end{proof} \begin{prp}\label{centralprojections} If $p\in\mathcal{P}(A'')$ is central then so are both $\overline{p}$ and $p^\circ$. \end{prp} \begin{proof} If $a\in A$ and $ap=0$ then $abp=apb=0=bap$, for any $b\in A$, so $\{p\}^\perp$ is an ideal. Thus $p_{\{p\}^\perp}=p^{\perp\circ}$ is central, by \autoref{centralideals}. As $p$ is central if and only if $p^\perp$ is, we are done. \end{proof} In particular, if $I$ is an ideal in $A$ then $p_{I^\perp}=\overline{p}_I^\perp$ is central and hence $I^\perp$ is also an ideal (although this can also be proved by elementary algebraic means). Of course, in the commutative case, all the concepts in this subsection correspond to their usual topological counterparts. Specifically, if $A=C_0(X)$ for some locally compact topological space then the atomic representation identifies every element of $C_0(X)$ with the multiplication operator on $l^2(X)$ it defines. Then $A''$ is the set of all bounded multiplication operators on $l^2(X)$, which can naturally be identified with $l^\infty(X)$ in the same way. Under this identification, projections are just characteristic functions $\chi_S$ of subsets $S$ of $X$, where $\chi_S$ is open, closed or topologically regular if and only if $S$ is, in the topology of $X$ (and note that the characteristic function of an open (closed) set is lower (upper) semicontinuous, a fact which will be generalized in \S\ref{MVF}). In particular, any subset $S$ of $X$ that is open but not (topologically) regular corresponds to an open projection $\chi_S$ that is not topologically regular, which itself corresponds to a hereditary C*-subalgebra that is not an annihilator, contradicting \cite{Arzikulov2013} Lemma 16.1 (with $e=f=\chi_S$). For example, we could have $X=[-1,1]$ and $S=[-1,0)\cup(0,1]$.
Incidentally, the atomic representation will usually not be the same as the universal representation, even in the commutative case considered in the previous paragraph, illustrating that the universal representation may not always be the best to work with. Above, we could also consider the subrepresentation on $l^2(Y)$, for some $Y\subseteq X$, which will still be faithful as long as $Y$ is dense in $X$. This is sometimes nicer in some sense, for example if $A=C^b(\mathbb{N})\cong C(\beta\mathbb{N})$ where $\beta\mathbb{N}$ is the Stone-\v{C}ech compactification of $\mathbb{N}$, we can consider the faithful subrepresentation on $l^2(\mathbb{N})$, which has the advantage that all projections in $A''$ are clopen (as all subsets of $\mathbb{N}$ are). These facts provide some justification for our choice to work with arbitrary representations. \subsection{Spectral Projections}\label{specsec} Some quite useful open and closed projections are the spectral projections of self-adjoint operators corresponding to open and closed intervals of $\mathbb{R}$. First, we define continuous functions $f_{r,s}$ on $\mathbb{R}$, for $r<s$, like so \[f_{r,s}(t) = \begin{dcases} 0 & \text{for } t\in(-\infty,r)\\ \frac{t-r}{s-r} & \text{for }t\in[r,s]\\ 1 & \text{for } t\in(s,\infty). \end{dcases}\] Also, for future reference, define $f_\delta=f_{\delta/2,\delta}$, for $\delta>0$. Using the continuous functional calculus and the fact we can take infimums and supremums of monotone nets in $A''_+$, we further define, for any $a\in A_\mathrm{sa}$, \begin{eqnarray*} a_{[s,\infty)} &=& \inf_{r<s}f_{r,s}(a),\textrm{ and}\\ a_{(s,\infty)} &=& \sup_{r>s}f_{s,r}(a). \end{eqnarray*} We can similarly define spectral projections $a_S$ for any open or closed (even Borel) subset $S\subseteq\mathbb{R}$.
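To fix ideas, here is a purely illustrative finite-dimensional computation (with $A=M_3$, so that the functional calculus acts entrywise on a diagonal element):

```latex
% Illustrative example: a = diag(1/5, 3/5, 1) in M_3.
\[
f_{1/2,3/4}(a)=\operatorname{diag}(0,\tfrac{2}{5},1),\qquad
a_{[1/2,\infty)}=\inf_{r<1/2}f_{r,1/2}(a)=\operatorname{diag}(0,1,1),
\]
\[
a_{(3/5,\infty)}=\sup_{r>3/5}f_{3/5,r}(a)=\operatorname{diag}(0,0,1),
\]
% i.e. a_S is the orthogonal projection onto the span of those
% eigenvectors of a whose eigenvalues lie in S.
```

In general, of course, these spectral projections lie only in $A''$, not in $A$.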
As weak/strong limits of elements that commute with $a$, spectral projections also commute with $a$ so $a_{(s,\infty)}a=aa_{(s,\infty)}=a_{(s,\infty)}aa_{(s,\infty)}\in A''_\mathrm{sa}$, and likewise for $a_{[s,\infty)}$. We also have the following important operator inequalities. \[aa_{(-\infty,s]}\leq sa_{(-\infty,s]}\qquad\textrm{and}\qquad aa_{[s,\infty)}\geq sa_{[s,\infty)}.\] We also define $[a]=(aa^*)_{(0,\infty)}=$ the projection onto $\overline{\mathcal{R}(a)}$, the norm closure of the range of $a$. Note $[a^*]^\perp=(a^*a)_{(-\infty,0]}$ is the projection onto $\mathcal{N}(a)$, the kernel of $a$. In fact, these inequalities (almost) uniquely define the spectral projections. Specifically, for any $a\in\mathcal{B}(H)_+$, $a_{(-\infty,s]}$ and $a_{(s,\infty)}$ are the unique complementary orthogonal projections in $\mathcal{B}(H)$ such that $\langle av,v\rangle\leq s$, for all unit $v\in\mathcal{R}(a_{(-\infty,s]})$, and $\langle av,v\rangle>s$, for all unit $v\in\mathcal{R}(a_{(s,\infty)})$. Using this fact we obtain the following result, which will be useful later on. \begin{prp}\label{[aa*a]} For any $s,t$ with $0\leq s<t$ and $a\in\mathcal{B}(H)$, \[(aa^*)_{(s,t]}=[a(a^*a)_{(s,t]}].\] \end{prp} \begin{proof} First note that the map $p\mapsto[ap]$ preserves the orthogonality of spectral projections of $a^*a$. 
Specifically, if $S$ and $T$ are disjoint Borel subsets of $\mathbb{R}_+\setminus\{0\}$ then, as $[a^*a(a^*a)_S]\leq(a^*a)_S$ and $(a^*a)_S\perp(a^*a)_T$, we have \[[a(a^*a)_S]\perp[a(a^*a)_T].\] In particular, this holds for $S=(0,s]$ and $T=(s,\infty)$ and the result will follow if we can show that \[(aa^*)_{(0,s]}=[a(a^*a)_{(0,s]}]\quad\textrm{and}\quad (aa^*)_{(s,\infty)}=[a(a^*a)_{(s,\infty)}].\] Note that $(aa^*)_{\{0\}}=[a]^\perp$ so, by the comments above, we just need to show $\langle aa^*av,av\rangle\leq s\langle av,av\rangle$, for all $v\in\mathcal{R}((a^*a)_{(0,s]})$, and $\langle aa^*av,av\rangle>s\langle av,av\rangle$, for all non-zero $v\in\mathcal{R}((a^*a)_{(s,\infty)})$. But we immediately have $\langle a^*a(a^*a-s)v,v\rangle\leq0$, for all $v\in\mathcal{R}((a^*a)_{(0,s]})$, while also $\langle a^*a(a^*a-s)v,v\rangle>0$, for non-zero $v\in\mathcal{R}((a^*a)_{(s,\infty)})$, so we are done. \end{proof} \autoref{specann} below indicates why spectral projections are a convenient tool when dealing with annihilators. It also gives an idea of how plentiful they are. For example, assume $A$ is infinite dimensional so we have $a\in A_+$ with $\sigma(a)$ infinite. Then define a sequence $(f_n)$ of continuous functions from $\sigma(a)$ to $[0,1]$ with the sets $(f_n^{-1}(0,1])$ disjoint and non-empty. These give rise to infinitely many orthogonal annihilators $\{f_n(a)\}^{\perp\perp}$ in $A$. In fact, if we let $g_N=\sum_{n\in N}2^{-n}f_n$, for $N\subseteq\mathbb{N}$, then we get continuum many annihilators $B_N=\{g_N(a)\}^{\perp\perp}$ that, while no longer orthogonal, are still far apart in the sense that $||p_{B_N}-p_{B_M}||=1$, for all distinct $M,N\subseteq\mathbb{N}$. First, though, we prove the following simple, but useful, algebraic lemma.
\begin{lem}\label{xab} For any $x\in A$, $a,b\in A_+$ and $\alpha,\beta,\gamma>0$, \[xa^\alpha b^\beta a^\gamma=0\quad\Leftrightarrow\quad xa^\alpha b=0.\] \end{lem} \begin{proof} Note that $ycc^*=0\Leftrightarrow yc=0$, for all $y,c\in A$, as $ycc^*=0$ gives $0=ycc^*y^*=(yc)(yc)^*$ and hence $yc=0$. Applying this to $y=xa^\alpha b^\beta$ and $c=a^{\gamma/2}$ we see that $xa^\alpha b^\beta a^\gamma=0$ implies $xa^\alpha b^\beta a^{\gamma/2}=0$. We may continue to halve the last exponent in this way until it gets below $\alpha$, and then multiply on the right by a suitable exponent of $a$ to make it actually equal $\alpha$. Then applying the note again, this time with $y=x$ and $c=a^\alpha b^{\beta/2}$, we get $xa^\alpha b^{\beta/2}=0$. By reducing the last exponent again, and increasing it at the end if necessary, we finally get $xa^\alpha b=0$. The converse is similar. \end{proof} This lemma holds even if $A$ is just an arbitrary *-ring with proper involution, as long as you can also take positive square roots. Note that by iterating it we also get the same result for arbitrarily long sequences of (powers of) $a$'s and $b$'s. If $x=1$ then we actually get the slightly better result \[a^\alpha b^\beta a^\gamma=0\quad\Leftrightarrow\quad ab=0.\] However, for arbitrary $x$, we must keep the $\alpha$ exponent (e.g. if $A=M_2\cong\mathcal{B}(\mathbb{C}^2)$, $x$ is the projection onto $\mathbb{C}(2,-1)$, $a=\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$ and $b$ is the projection onto $\mathbb{C}(1,1)$ then $xa^\alpha b=0$ if and only if $\alpha=1$). \begin{prp}\label{prp1} If $a\in A^1_+$, $S\subseteq A_+$ and $as=s$, for all $s\in S$, then $ab=b$, for all $b\in S^{\perp\perp}$. \end{prp} \begin{proof} If we had $1\in A$ then, as $as=s\Leftrightarrow(1-a)s=0$, it would immediately follow that $1-a\in S^\perp$ and hence $(1-a)b=0$, for all $b\in S^{\perp\perp}$. Even if $1\notin A$, we still have $(1-a)b(1-a)\in S^\perp$, for any $b\in A$.
Thus, if $b\in S^{\perp\perp}_+$, then $(1-a)b(1-a)b=0$ and hence $b(1-a)=0=(1-a)b$, by \autoref{xab}, i.e. $ab=b$. As any C*-algebra is generated by its positive elements, this completes the proof. \end{proof} One immediate consequence of \autoref{prp1} is that, even if $[B]_B$ is strictly contained in $[B]$, it is still order-dense in $[B]$ (see \eqref{densedef}). In fact, for all $b\in B_+$, we can find $D\in[B]_B$ contained in $\overline{bAb}$, the hereditary C*-subalgebra generated by $b$. Just let $D=\{f_{2\delta}(b)\}^{\perp_B\perp_B}$, for $\delta<||b||$, and note that, as $f_\delta(b)f_{2\delta}(b)=f_{2\delta}(b)$, we have $d=f_\delta(b)d=f_\delta(b)df_\delta(b)\in\overline{bAb}$, for all $d\in D_+$. \begin{prp}\label{specann} If $a\in A^1_+$ and $f(a)\in A$, for some $f:[0,1]\rightarrow[0,1]$, then \[\{f(a)\}^{\perp\perp}\subseteq a_{\overline{f^{-1}(0,1]}}Aa_{\overline{f^{-1}(0,1]}}.\] \end{prp} \begin{proof} If $0\in\overline{f^{-1}(0,1]}$, take continuous $g$ on $[0,1]$ with $g^{-1}\{0\}=\overline{f^{-1}(0,1]}$, and hence $g(0)=0$ so $g(a)\in A$. Also, $g(a)\in\{f(a)\}^\perp$, so \[\{f(a)\}^{\perp\perp}\subseteq\{g(a)\}^\perp\subseteq g(a)_{\{0\}}Ag(a)_{\{0\}}=a_{\overline{f^{-1}(0,1]}}Aa_{\overline{f^{-1}(0,1]}}.\] On the other hand, if $0\notin\overline{f^{-1}(0,1]}$ then take continuous $g$ on $[0,1]$ with $g^{-1}\{1\}=\overline{f^{-1}(0,1]}$ and $g(0)=0$, so $g(a)\in A$ and $g(a)f(a)=f(a)$. By \autoref{prp1}, $g(a)c=c$, for all $c\in\{f(a)\}^{\perp\perp}$, and hence $\{f(a)\}^{\perp\perp}\subseteq g(a)_{\{1\}}Ag(a)_{\{1\}}=a_{\overline{f^{-1}(0,1]}}Aa_{\overline{f^{-1}(0,1]}}$. \end{proof} \subsection{Projection Properties}\label{psec} We now point out some basic properties of projections, useful in their own right, but also good to keep in mind as results that might admit generalization to the annihilators in some way.
Indeed, \S\ref{SvsO} is devoted to proving some of these generalizations. The projections in any C*-algebra are orthomodular (if $A$ is not unital then $q^\perp$ is not well-defined, but we can still interpret $p\wedge q^\perp$ as denoting the largest projection below $p$ that annihilates $q$, when such a projection exists). In fact, we have the following order theoretic characterizations of commutativity, of which \eqref{pq=qp1}$\Rightarrow$\eqref{pq=qp2} implies orthomodularity (we should mention that the projections in a C*-algebra do not always form a lattice, so \eqref{pq=qp3} and \eqref{pq=qp2} should be interpreted as saying the given supremums and infimums actually exist \emph{and} satisfy the given identity). \begin{prp}\label{pq=qp} For $p,q\in\mathcal{P}(A)$, the following are equivalent. \begin{eqnarray} pq &=& qp.\label{pq=qp1}\\ p\wedge q &=& p\wedge(p^\perp\vee q).\label{pq=qp3}\\ p &=& (p\wedge q)\vee(p\wedge q^\perp).\label{pq=qp2} \end{eqnarray} \end{prp} \begin{proof}\ \begin{itemize} \item[\eqref{pq=qp1}$\Rightarrow$\eqref{pq=qp2}] Note that $p\wedge q=pq$ when \eqref{pq=qp1} holds so $p=pq+pq^\perp=(p\wedge q)\vee(p\wedge q^\perp)$. \item[\eqref{pq=qp2}$\Rightarrow$\eqref{pq=qp1}] As $p\wedge q\leq q$, $p\wedge q$ commutes with $q$. Likewise, $p\wedge q^\perp$ commutes with $q^\perp$ and hence $q$ so, if \eqref{pq=qp2} holds, $p=(p\wedge q)+(p\wedge q^\perp)$ commutes with $q$ too. \item[\eqref{pq=qp1}$\Rightarrow$\eqref{pq=qp3}] By \eqref{pq=qp1}, $p\wedge(p^\perp\vee q)=p\wedge(p\wedge q^\perp)^\perp=p(pq^\perp)^\perp=p-pq^\perp=pq=p\wedge q$. \item[\eqref{pq=qp3}$\Rightarrow$\eqref{pq=qp2}] As $p\wedge q^\perp\leq p$, we have $p\wedge(p^\perp\vee q)=p\wedge(p\wedge q^\perp)^\perp=p-p\wedge q^\perp$ which, if \eqref{pq=qp3} holds, means $p=p\wedge q+p\wedge q^\perp=(p\wedge q)\vee(p\wedge q^\perp)$.
\end{itemize} \end{proof} So, by \autoref{pq=qp}, if $A$ is a commutative C*-algebra then every projection is central in $\mathcal{P}(A)$ (see \autoref{centredef}) and hence $\mathcal{P}(A)$ is a Boolean algebra (see the comments before \autoref{cenuniqcomp}). Conversely, if $A$ is a non-commutative C*-algebra that is generated by its projections then $pq\neq qp$ for some $p,q\in\mathcal{P}(A)$ and hence $\mathcal{P}(A)$ is not a Boolean algebra, again by \autoref{pq=qp}. In particular, if $A$ is a von Neumann (or AW*-) algebra and $p\in\mathcal{P}(A)$ then $[p](=[p]_p$, as $\mathcal{P}(A)$ is orthomodular) is a Boolean algebra precisely when $pAp$ is commutative, i.e. precisely when $p$ is an abelian projection. Thus if $p_\mathrm{I}$ is obtained as in \eqref{pI} (with $\mathbb{P}=\mathcal{P}(A)$) then $p_\mathrm{I}A$ is indeed the type I part of $A$, in the classical von Neumann algebra type decomposition. Roughly speaking, the following result says that, for a projection $p$, being far from $q^\perp$ (as in (\ref{pnearq1})) is equivalent to being close to a subprojection of $q$ (as in (\ref{pnearq2}) and (\ref{pnearq3})), where $\lambda$ here quantifies this distance. \begin{prp}\label{pnearq} For $p,q\in\mathcal{P}(A)$ and $\lambda\in[0,1]$, the following are equivalent. \begin{enumerate} \item\label{pnearq1} $||pq^\perp||^2\leq\lambda$. \item\label{pnearq2} $pqp\geq(1-\lambda)p$. \item\label{pnearq3} There exists $r\in\mathcal{P}(A)$ with $r\leq q$ and $||r-p||^2\leq\lambda$. \end{enumerate} \end{prp} \begin{proof}\ \begin{itemize} \item[(\ref{pnearq1})$\Leftrightarrow$(\ref{pnearq2})] Just note $||pq^\perp||^2\leq\lambda\quad\Leftrightarrow\quad pq^\perp p\leq\lambda p\quad\Leftrightarrow\quad(1-\lambda)p\leq pqp$. \item[(\ref{pnearq2})$\Rightarrow$(\ref{pnearq3})] If $\lambda=1$ let $r=q$. Otherwise $\inf(\sigma(qpq)\backslash\{0\})\geq1-\lambda>0$ so $r=[qp]\in A$ and $||r-p||^2\leq\lambda$.
\item[(\ref{pnearq2})$\Leftarrow$(\ref{pnearq3})] Given such an $r$, simply note that $||pq^\perp||^2\leq||pr^\perp||^2\leq||p-r||^2\leq\lambda$. \end{itemize} \end{proof} Similarly, the following says that a non-zero projection $p$ cannot be simultaneously far from both another projection and its complement. In fact, \eqref{Pythag} is still valid for $p\in A_+$ with $||p||=1$ (although $q$ still has to be a projection). \begin{prp} For all $p,q\in\mathcal{P}(A)$ with $p\neq0$, \begin{equation}\label{Pythag} ||pq||^2+||pq^\perp||^2\geq1. \end{equation} Moreover, $||pq||^2+||pq^\perp||^2=1$ if and only if $\sigma_{pAp}(pqp)$ is a singleton. \end{prp} \begin{proof} For \eqref{Pythag}, simply take $v\in\mathcal{R}(p)\backslash\{0\}$ and note that \[||v||^2=||qv||^2+||q^\perp v||^2\leq(||qp||^2+||q^\perp p||^2)||v||^2.\] If $||pq||^2+||pq^\perp||^2=1$ then, setting $\lambda=1-||pq^\perp||^2$, we have $||pq||^2=\lambda$ and hence $pqp\leq\lambda p\leq pqp$ (using \autoref{pnearq} (\ref{pnearq1})$\Rightarrow$(\ref{pnearq2}) for the last inequality), i.e. $pqp=\lambda p$ and hence $\sigma_{pAp}(pqp)=\{\lambda\}$. Conversely, if $\sigma_{pAp}(pqp)=\{\lambda\}$ for some $\lambda\in[0,1]$ then $pqp=\lambda p$ and $pq^\perp p=(1-\lambda)p$ so $||pq||^2+||pq^\perp||^2=\lambda+(1-\lambda)=1$. \end{proof} \begin{prp}\label{sigmapq} For $p,q\in\mathcal{P}(A)$, \[\sigma(pq^\perp)\setminus\{0,1\}=1-\sigma(pq)\setminus\{0,1\}=\sigma(p^\perp q)\setminus\{0,1\}.\] \end{prp} \begin{proof} From elementary spectral theory, we have $\sigma(pq)=\sigma(ppq)=\sigma(pqp)$ and $1-\sigma(pqp)\setminus\{0,1\}=\sigma(p-pqp)\setminus\{0,1\}=\sigma(pq^\perp p)\setminus\{0,1\}$. \end{proof} Note that $||pq^\perp||$ satisfies the triangle inequality, i.e. for $p,q,r\in\mathcal{P}(A)$, \[||pr^\perp||=||p(q+q^\perp)r^\perp||\leq||pq^\perp||+||qr^\perp||.\] Also $||pq^\perp||=0=||qp^\perp||\Leftrightarrow p=q$ so \[\max(||pq^\perp||,||qp^\perp||)\] defines a metric on $\mathcal{P}(A)$.
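For instance, here is a purely illustrative rank-one computation in $M_2$, with $p$ the projection onto $\mathbb{C}e_1$ and $q$ the projection onto $\mathbb{C}(\cos\theta,\sin\theta)$:

```latex
% Illustrative example in M_2: one dimensional projections at angle theta.
\[
p=\begin{bmatrix}1&0\\0&0\end{bmatrix},\qquad
q=\begin{bmatrix}\cos^2\theta&\cos\theta\sin\theta\\
\cos\theta\sin\theta&\sin^2\theta\end{bmatrix},
\]
% so that pqp = (cos^2 theta)p and
\[
||pq||=|\cos\theta|,\qquad
||pq^\perp||=||qp^\perp||=||p-q||=|\sin\theta|.
\]
```

Here $||pq||^2+||pq^\perp||^2=1$ and $\sigma_{pAp}(pqp)=\{\cos^2\theta\}$ is a singleton, realizing the equality case of \eqref{Pythag}, while $\max(||pq^\perp||,||qp^\perp||)=|\sin\theta|$ is precisely $||p-q||$.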
The next result shows that it in fact coincides with the usual metric on $\mathcal{P}(A)$, i.e. $||p-q||$. In particular, $\mathcal{P}(A)$ is complete in this metric. \begin{prp}\label{||p-q||} For $p,q\in\mathcal{P}(A)$, \[||p-q||=\max(||pq^\perp||,||p^\perp q||).\] In fact, $||p-q||<1\Rightarrow||pq^\perp||=||p-q||=||qp^\perp||$. \end{prp} \begin{proof} As $p-q=pq^\perp+pq-pq-p^\perp q=pq^\perp-p^\perp q$, and $pq^\perp(p^\perp q)^*=0=(pq^\perp)^*p^\perp q$, we have $||p-q||=\max(||pq^\perp||,||p^\perp q||)$. So if $||pq^\perp||,||p^\perp q||<1$ then, by \autoref{sigmapq}, $||pq^\perp||^2=\max(\sigma(pq^\perp))=\max(\sigma(p^\perp q))=||p^\perp q||^2$. \end{proof} The following result shows that the Sasaki projection (see \cite{Kalmbach1983} \S7 Lemma 13) defined by $q$ takes any $p$ to the range projection of $qp$. \begin{prp}\label{[qp]} For $p,q\in\mathcal{P}(\mathcal{B}(H))$, $[qp]=(p\vee q^\perp)\wedge q$. \end{prp} \begin{proof} As $\mathcal{P}(\mathcal{B}(H))$ is orthomodular, this is equivalent to $p^\perp\wedge q=[qp]^\perp\wedge q$. To see this note that, for any $v\in\mathcal{R}(p)$ and $w\in\mathcal{R}(q)$, \begin{equation}\label{vqw} \langle v,w\rangle=\langle v,qw\rangle=\langle qv,w\rangle. \end{equation} As $w\in\mathcal{R}(p^\perp\wedge q)$ if and only if the left expression is $0$, for all $v\in\mathcal{R}(p)$, and $w\in\mathcal{R}([qp]^\perp\wedge q)$ if and only if the right expression is $0$, for all $v\in\mathcal{R}(p)$, we are done. \end{proof} The previous results of this subsection, while rarely stated explicitly, are no doubt well known. The next result, however, might not be. It characterizes the spectrum of a product of projections purely in terms of the norm and the ortholattice structure of $\mathcal{P}(\mathcal{B}(H))$.
\begin{thm}\label{orthospecpq} For $p,q\in\mathcal{P}(\mathcal{B}(H))$, \begin{equation}\label{specpqeq} \sigma(pq)\cup\{0\}=\overline{\{||qr||^2:r\leq p\textrm{ and }((r\vee q^\perp)\wedge q)\perp((r^{\perp_p}\vee q^\perp)\wedge q)\}}. \end{equation} \end{thm} \begin{proof} If $\theta\in\sigma(pq)\setminus\{0\}$ then, for any $\lambda>\theta$, $||q(pqp)_{(0,\lambda]}||^2\in[\theta,\lambda]$, while \[((pqp)_{(0,\lambda]}\vee q^\perp)\wedge q=(qpq)_{(0,\lambda]}\qquad\textrm{and}\qquad ((pqp)_{(0,\lambda]}^{\perp_p}\vee q^\perp)\wedge q=(pqp)_{(\lambda,1]},\] by \autoref{[aa*a]} (with $a=qp$) and \autoref{[qp]} (noting that we have $(pqp)_{(0,\lambda]}^{\perp_p}=(pqp)_{(\lambda,1]}\vee(p\wedge q^\perp)$). Conversely, say $r\leq p$ and $(r\vee q^\perp)\wedge q=[qr]$ is orthogonal to $(r^{\perp_p}\vee q^\perp)\wedge q=[qr^{\perp_p}]$. Thus $[qr]\perp r^{\perp_p}$ (see \eqref{vqw}) so $[pqr]=([qr]^\perp\wedge p)^{\perp_p}\leq r$ and this means $(pqp)r=r(pqp)r=r(pqp)$. Thus $r$ commutes with every spectral projection of $pqp$. Thus $r\leq(pqp)_{(0,||qr||^2]}$, otherwise $s=r(pqp)_{(\lambda,1]}\neq0$, for some $\lambda>||qr||^2$, and $||q^\perp s||^2\leq||q^\perp(pqp)_{(\lambda,1]}||^2\leq1-\lambda$ so $sqs\geq\lambda s$, by \autoref{pnearq}, which gives $||qr||^2\geq||qs||^2=||sqs||\geq\lambda>||qr||^2$, a contradiction. On the other hand, if $\theta<||qr||$ then $r(pqp)_{(\theta,1]}\neq0$, otherwise we would have $r\leq(pqp)_{(0,\theta]}$ and hence $||qr||^2\leq||q(pqp)_{(0,\theta]}||^2\leq\theta<||qr||^2$, a contradiction. But then $r(pqp)_{(\theta,||qr||^2]}=r(pqp)_{(\theta,1]}\neq0$ so $(\theta,||qr||^2]\cap\sigma(pq)\neq\emptyset$, for all $\theta<||qr||^2$, i.e. $||qr||^2\in\sigma(pq)$. 
\end{proof} \subsection{Annihilator Separativity}\label{SvsO} The most fundamental difference between annihilators and projections is that, as shown in \autoref{nonorthoxpl}, \begin{center} \textbf{the annihilators in an arbitrary C*-algebra may not be orthomodular.} \end{center} Nonetheless, we can show that $[A]^\perp$ is always separative, for an arbitrary C*-algebra $A$, which will suffice to allow us to apply the theory in \S\ref{OrderTheory}. In fact, we will prove a strong version of separativity that is quite close to orthomodularity. To see what this strong version is, note that with annihilators we can naturally quantify the degree of separativity. Specifically, for $\epsilon\in[0,1]$, call $[A]^\perp$ \emph{$\epsilon$-separative} if, for all $B,C\in[A]^\perp$, \[B\subsetneqq C\Rightarrow\exists D\in[A]^\perp(\{0\}<D\subseteq C\textrm{ and }||BD||\leq\epsilon).\] One immediately sees that if $[A]^\perp$ is $\epsilon$-separative, for any $\epsilon<1$, then it is separative and, by \autoref{orthoequiv}, $0$-separativity is equivalent to orthomodularity. So \autoref{epsep} really is as close as we can get to orthomodularity without actually verifying it. We can also use annihilators to separate arbitrary C*-subalgebras $B,C\subseteq A$ with $B\subseteq C$. Specifically, define the \emph{separation} of $B$ from $C$ by \[\mathrm{sep}_C(B)=\inf_{D\in[C]\setminus\{\{0\}\}}||BD||.\] By the comments after \autoref{prp1}, we could replace $[C]$ with $[C]_C$ here without changing the value of $\mathrm{sep}_C(B)$, so we might as well get rid of $C$: just assume we have a C*-subalgebra $B$ of $A$ and write $\mathrm{sep}(B)$ for $\mathrm{sep}_A(B)$. Let us also assume $B=\overline{bAb}$, for $b\in B^1_+$, and define \[||x||_b=\sup_{n\geq0}||xb^{1/n}x^*||^{1/2}\qquad\textrm{and}\qquad\gamma(b)=\inf_{x\neq0}||x||_b/||x||,\] as in \cite{AkemannEilers2002}.
As $||xb^{1/n}x^*||^{1/2}=||b^{1/(2n)}x^*xb^{1/(2n)}||^{1/2}=||\sqrt{x^*x}b^{1/n}\sqrt{x^*x}||^{1/2}$, we actually have $\gamma(b)=\inf_{a\in A^1_+}||a||_b$. And for any $a\in A^1_+$, \[||a||_b=\sup_{n\geq0}||ab^{1/n}||\geq||\{f_{1-\delta,1}(a)\}^{\perp\perp}B||-\delta,\] and hence $\gamma(b)\geq\mathrm{sep}(B)$. On the other hand, for any $C\in[A]^\perp$ and $c\in C^1_+$, we certainly have $||c||_b\leq||CB||$ so $\gamma(b)\leq\mathrm{sep}(B)$, i.e. \[\gamma(b)=\mathrm{sep}(B).\] It is immediate that $\mathrm{sep}(B)=0$ whenever $B$ is not essential in $A$ or, equivalently, when $b$ is a zero-divisor. Whether it is still possible to have $\mathrm{sep}(B)<1$ when $B$ is essential in $A$ was an open problem (see \cite{PeligradZsido2000}), usually phrased as asking whether all open dense projections $p$ ($B$ being essential means that $p_B$ is dense, i.e. $\overline{p}_B=1$) are \emph{regular}\footnote{This concept was first introduced and investigated in \cite{Tomita1959} Chapter 2 \S2 under a somewhat different, but equivalent, definition.} in the sense that $||ap||=||a\overline{p}||(=||a||$ when $p$ is dense), for all $a\in A$. This was answered in the negative in \cite{AkemannEilers2002} Proposition 3.4 where it was shown that $\gamma(b)\leq4/5$ for a particular non-zero divisor $b$. We improve on this in \autoref{0sep}, showing that $\mathrm{sep}(B)$ is, in fact, always $1$ or $0$, i.e. open dense projections are always either regular or very non-regular. In order to prove these results, we first need the spectral projection inequalities contained in the lemmas below. Note that if $c\in\mathcal{B}(H)_+$ and $p\in\mathcal{P}(\mathcal{B}(H))$ then $c\leq p$ implies $p^\perp cp^\perp\leq0$ and hence $p^\perp c=0$, i.e. $c=pc$.
\begin{lem}\label{lem1} For $\epsilon,\lambda>0$, there exists $\delta>0$ such that, whenever $b,c\in\mathcal{B}(H)^1_+$, $c\leq q\in\mathcal{P}(\mathcal{B}(H))$ and $||bq||^2\leq\lambda+\delta$, we have \begin{equation}\label{lem1eq} ||c_{[0,1-\epsilon]}(cb^2c)_{[\lambda-\delta,1]}||\leq\epsilon. \end{equation} \end{lem} \begin{proof} If $\epsilon\geq1$ then \eqref{lem1eq} holds trivially, so assume $\epsilon<1$. For any $v\in H$, \begin{eqnarray} ||cv||^2 &=& ||cc_{[0,1-\epsilon]}v||^2+||cc_{(1-\epsilon,1]}v||^2\nonumber\\ &\leq& (1-\epsilon)^2||c_{[0,1-\epsilon]}v||^2+||c_{(1-\epsilon,1]}v||^2\nonumber\\ &\leq& ||v||^2-\epsilon(2-\epsilon)||c_{[0,1-\epsilon]}v||^2\nonumber\\ &\leq& ||v||^2-\epsilon||c_{[0,1-\epsilon]}v||^2\qquad\textrm{(as $\epsilon\leq1$)}.\nonumber \end{eqnarray} Now for $v\in\mathcal{R}((cb^2c)_{[\lambda-\delta,1]})$, \begin{eqnarray*} (\lambda-\delta)||v||^2 &\leq& \langle cb^2c v,v\rangle\\ &=& ||bcv||^2\\ &\leq& ||bq||^2||cv||^2\\ &\leq& (\lambda+\delta)(||v||^2-\epsilon||c_{[0,1-\epsilon]}v||^2),\textrm{ so}\\ (\lambda+\delta)\epsilon||c_{[0,1-\epsilon]}v||^2 &\leq& 2\delta||v||^2\textrm{ and}\\ ||c_{[0,1-\epsilon]}v||^2 &\leq& 2\delta||v||^2/(\lambda\epsilon), \end{eqnarray*} which immediately yields \eqref{lem1eq}, for $\delta\leq\lambda\epsilon^3/2$. 
\end{proof} \begin{cor}\label{cor1} For $\epsilon,\lambda>0$, there exists $\delta>0$ such that, whenever $b,c\in\mathcal{B}(H)^1_+$, $c\leq q\in\mathcal{P}(\mathcal{B}(H))$ and $||bq||^2\leq\lambda+\delta$, we have \[||(1-c)(cb^2c)_{[\lambda-\delta,1]}||\leq\epsilon.\] \end{cor} \begin{proof} Replacing $\epsilon$ with $\epsilon/\sqrt{2}$ in \autoref{lem1}, we obtain $\delta>0$ such that, for any $v\in\mathcal{R}((cb^2c)_{[\lambda-\delta,1]})$, \begin{eqnarray*} ||(1-c)v||^2 &=& ||(1-c)c_{[0,1-\epsilon/\sqrt{2})}v||^2+||(1-c)c_{[1-\epsilon/\sqrt{2},1]}v||^2\\ &\leq& ||c_{[0,1-\epsilon/\sqrt{2})}v||^2+\epsilon^2||c_{[1-\epsilon/\sqrt{2},1]}v||^2/2\\ &\leq& \epsilon^2||v||^2/2+\epsilon^2||v||^2/2. \end{eqnarray*} \end{proof} The following result generalizes \cite{Bice2009} Lemma 5.3. \begin{lem}\label{lem2} For $\epsilon,\lambda>0$, there exists $\delta>0$ such that, whenever $b,c\in\mathcal{B}(H)^1_+$, $c\leq q\in\mathcal{P}(\mathcal{B}(H))$ and $||bq||^2\leq\lambda+\delta$, we have \begin{equation}\label{lem2eq} ||b_{[0,\sqrt{\delta}]}(cb^2c)_{[\lambda-\delta,1]}||^2\leq1-\lambda+\epsilon. \end{equation} \end{lem} \begin{proof} Let $\delta>0$ be that obtained in \autoref{cor1} from replacing $\epsilon$ with $\epsilon/4$. 
If necessary, replace $\delta$ with a smaller non-zero number so that we also have \[(1-\lambda+\delta+\epsilon/2)/(1-\delta)\leq1-\lambda+\epsilon.\] Then, for all $v\in\mathcal{R}((cb^2c)_{[\lambda-\delta,1]})$, \begin{eqnarray*} (\lambda-\delta)||v||^2 &\leq& \langle cb^2cv,v\rangle\\ &=& \langle b^2cv,cv\rangle\\ &\leq& \langle b^2v,v\rangle+\epsilon||v||^2/2,\textrm{ by \autoref{cor1}},\\ &=& \langle b^2b_{[0,\sqrt{\delta}]}v,v\rangle+\langle b^2b_{(\sqrt{\delta},1]}v,v\rangle+\epsilon||v||^2/2\\ &\leq& \delta\langle b_{[0,\sqrt{\delta}]}v,v\rangle+\langle b_{(\sqrt{\delta},1]}v,v\rangle+\epsilon||v||^2/2\\ &=& \delta||b_{[0,\sqrt{\delta}]}v||^2+(||v||^2-||b_{[0,\sqrt{\delta}]}v||^2)+\epsilon||v||^2/2,\textrm{ so}\\ \quad(1-\delta)||b_{[0,\sqrt{\delta}]}v||^2 &\leq& (1-\lambda+\delta+\epsilon/2)||v||^2,\textrm{ and hence}\\ ||b_{[0,\sqrt{\delta}]}v||^2 &\leq& (1-\lambda+\epsilon)||v||^2. \end{eqnarray*} \end{proof} With \autoref{lem2}, we can already prove that $[A]^\perp$ is separative (see the proof of \autoref{epsep}, ignoring the last line). However, for $\epsilon$-separativity, for arbitrary $\epsilon>0$, we need a couple more results. \begin{lem}\label{lem3} For $\epsilon,\lambda>0$, there exists $\delta>0$ such that, whenever $b,c\in\mathcal{B}(H)^1_+$, $p,q\in\mathcal{P}(\mathcal{B}(H))$, $b\leq p$, $c\leq q$ and $||pq||^2\leq\lambda+\delta$, we have \begin{equation}\label{lem3eq} ||p(cb^2c)_{[\lambda-\delta,1]}||^2\leq\lambda+\epsilon. \end{equation} \end{lem} \begin{proof} Let $\delta>0$ be that obtained in \autoref{cor1} with $\epsilon$ replaced with $\epsilon/4$. If necessary, decrease $\delta$ so that $\delta\leq\epsilon/2$. Then, for $v\in\mathcal{R}((cb^2c)_{[\lambda-\delta,1]})$, \begin{eqnarray*} ||pv||^2 &=& \langle pv,pv\rangle\\ &\leq& \langle pcv,pcv\rangle+\epsilon||v||^2/2\\ &\leq& (||pc||^2+\epsilon/2)||v||^2\\ &\leq& (\lambda+\delta+\epsilon/2)||v||^2\\ &\leq& (\lambda+\epsilon)||v||^2.
\end{eqnarray*} \end{proof} \begin{thm}\label{septhm} If we have $\epsilon>0$ and C*-subalgebras $B$ and $C\neq\{0\}$ of $A$ satisfying $||BC||^2=\lambda<1$, then there exists $D\in[A]^\perp$ with \[||BD||\leq\epsilon\qquad\textrm{and}\qquad||CD||^2\geq1-\lambda-\epsilon.\] \end{thm} \begin{proof} Choose $\delta>0$ small enough that it satisfies \autoref{lem2} and \autoref{lem3} with $\epsilon$ replaced by some $\mu>0$, to be determined later. Take $c\in C^1_+$ and $b\in B^1_+$ with $||bc||^2>\lambda-\delta/2$. Let $c'=f_{\lambda-\delta,\lambda-\delta/2}(cb^2c)\in C$ and $a=(1-f_\delta(b))c'^2(1-f_\delta(b))$, so \begin{eqnarray} ||a|| &=& ||(1-f_\delta(b))c'||^2\nonumber\\ &\geq& ||b_{\{0\}}(cb^2c)_{[\lambda-\delta/2,1]}||^2\nonumber\\ &\geq& 1-||[b](cb^2c)_{[\lambda-\delta,1]}||^2,\textrm{ by \eqref{Pythag}}\nonumber\\ &\geq& 1-\lambda-\mu,\textrm{ by \eqref{lem3eq}.}\label{||s||} \end{eqnarray} In particular, $||a||>0$ as long as $\mu<1-\lambda$, and we may define $a'=||a||^{-1}a$. By \eqref{lem2eq}, we have \begin{equation}\label{anothereq} ||b_{[0,\sqrt{\delta}]}[c']||^2\leq||b_{[0,\sqrt{\delta}]}(cb^2c)_{[\lambda-\delta,1]}||^2\leq1-\lambda+\mu \end{equation} and so, by \autoref{pnearq}, \begin{equation}\label{BDeq} [c']b_{[\delta,1]}[c']\geq[c']b_{[\sqrt{\delta},1]}[c']\geq(\lambda-\mu)[c']. \end{equation} Now take $b'\in B^1_+$ with \begin{equation}\label{qdef} ||(1-b')f_{\delta}(b)||\leq\mu, \end{equation} and note that, as $||BC||^2=\lambda$, \begin{equation}\label{r^2} c'b'c'\leq c'[c'][b'][c']c'\leq\lambda c'^2. 
\end{equation} Putting all this together, we have \begin{eqnarray*} (1-\lambda-\mu)||b'a'b'|| &\leq& ||a||||b'a'b'||,\textrm{ by \eqref{||s||}}\\ &=& ||b'ab'||\\ &=& ||b'(1-f_\delta(b))c'^2(1-f_\delta(b))b'||\\ &\leq& ||(b'-b'f_\delta(b)b')c'||^2+2\mu,\textrm{ by \eqref{qdef}},\\ &=& ||c'(b'-b'f_\delta(b)b')^2c'||+2\mu\\ &\leq& ||c'(b'-b'f_\delta(b)b')c'||+2\mu\\ &\leq& ||c'(\lambda-b'b_{[\delta,1]}b')c'||+2\mu,\textrm{ by \eqref{r^2}},\\ &\leq& ||c'(\lambda-[c']b_{[\delta,1]}[c'])c'||+4\mu,\textrm{ again by \eqref{qdef}},\\ &\leq& ||c'(\lambda-\lambda+\mu)c'||+4\mu,\textrm{ by \eqref{BDeq} (and $||BC||^2=\lambda$)}\\ &\leq& 5\mu,\textrm{ and hence},\\ ||b'a'||^2 &\leq& ||b'\sqrt{a'}||^2\\ &\leq& 5\mu/(1-\lambda-\mu). \end{eqnarray*} Thus this inequality holds for all $b'$ in an approximate unit for $B$, and hence for all $b'\in B^1_+$. If we let $D=\{f_{1-2\mu,1}(a')\}^{\perp\perp}\in[A]^\perp$ then, for any $d\in D^1_+$, we have $||d-a'd||\leq2\mu$ and hence, for any $b'\in B^1_+$, \[||b'd||^2\leq||b'a'd||^2+4\mu\leq||b'a'||^2+4\mu\leq5\mu/(1-\lambda-\mu)+4\mu.\] Thus, so long as we choose $\mu>0$ at the start sufficiently small, this immediately gives $||BD||\leq\epsilon$. Also, $||[1-f_\delta(b)]c'||^2\leq||b_{[0,\sqrt{\delta}]}[c']||^2\leq1-\lambda+\mu$, by \eqref{anothereq}, so, as long as we chose $\mu$ to be at most half of the $\delta$ obtained in \autoref{lem2} (from the given $\epsilon$), we can also apply \eqref{lem2eq} with $c$ and $b$ replaced by $1-f_\delta(b)$ and $c'$ respectively to get \[||c'_{[0,\sqrt{\mu}]}a'_{[1-\mu,1]}||^2\leq||c'_{[0,\sqrt{2\mu}]}((1-f_\delta(b))c'^2(1-f_\delta(b)))_{[1-\lambda-2\mu,1]}||^2\leq\lambda+\epsilon,\] and hence $||CD||^2\geq||f_{\sqrt{\mu}}(c')f_{1-2\mu,1-\mu}(a')||^2\geq||c'_{[\sqrt{\mu},1]}a'_{[1-\mu,1]}||^2\geq1-\lambda-\epsilon$. \end{proof} Note that \autoref{septhm} is a modulo-$\epsilon$ generalization of \eqref{Pythag} to annihilators. 
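The spectral-projection bookkeeping used throughout this subsection can be sanity-checked numerically in finite dimensions. The following Python sketch (purely illustrative, with arbitrarily chosen matrices and threshold; it is not part of any proof) verifies two steps of the kind used in the proof of \autoref{cor1}: the Pythagorean splitting of $(1-c)v$ along the complementary spectral projections $c_{[0,t)}$ and $c_{[t,1]}$, and the bound $||(1-c)c_{[t,1]}v||\leq(1-t)||c_{[t,1]}v||$.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_projection(a, lo, hi):
    """Projection onto the spectral subspace of the Hermitian matrix a
    for eigenvalues in [lo, hi]."""
    w, v = np.linalg.eigh(a)
    cols = v[:, (w >= lo) & (w <= hi)]
    return cols @ cols.conj().T

n = 8
# a random positive contraction c (so 0 <= c <= 1)
x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
c = x @ x.conj().T
c /= np.linalg.norm(c, 2) * 1.1

t = 0.5
p_lo = spectral_projection(c, -np.inf, t - 1e-12)   # c_{[0,t)}
p_hi = spectral_projection(c, t, np.inf)            # c_{[t,1]}

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
one_minus_c = np.eye(n) - c

# ||(1-c)v||^2 = ||(1-c)c_{[0,t)}v||^2 + ||(1-c)c_{[t,1]}v||^2,
# since the spectral projections of c commute with 1-c
lhs = np.linalg.norm(one_minus_c @ v) ** 2
rhs = (np.linalg.norm(one_minus_c @ p_lo @ v) ** 2
       + np.linalg.norm(one_minus_c @ p_hi @ v) ** 2)
assert abs(lhs - rhs) < 1e-9

# on the spectral subspace of [t,1], the operator 1-c has norm at most 1-t
assert (np.linalg.norm(one_minus_c @ p_hi @ v)
        <= (1 - t) * np.linalg.norm(p_hi @ v) + 1e-9)
```

The choice of threshold $t$ and of the matrix $c$ is arbitrary; the identities hold for any positive contraction because its spectral projections commute with every polynomial in it.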
Essentially just rephrasing the first part also immediately yields the following. \begin{thm}\label{0sep} If $B$ is a C*-subalgebra of $A$ with $\mathrm{sep}(B)<1$ then $\mathrm{sep}(B)=0$. \end{thm} With a little more work, \autoref{septhm} also gives us $\epsilon$-separativity. \begin{thm}\label{epsep} $[A]^\perp$ is $\epsilon$-separative, for all $\epsilon>0$. \end{thm} \begin{proof} Take $B,C\in[A]^\perp$ with $B\subsetneqq C$, so we have $c\in C^1_+\setminus B$. This means we have $b\in B^\perp_+$ with $||b||=1$ and $bc\neq0$, and hence $b[c]\neq0$. Set $q=[c]$, $\lambda=||bq||^2$, take positive $\epsilon<\lambda$ and let $\delta>0$ be that obtained in \autoref{lem2}. Note that we may now assume that $||bc||^2>\lambda-\delta$ by replacing $c$ with $f_\mu(c)$ for sufficiently small $\mu$. Take $s,s'\in(\lambda-\delta,||bc||^2)$ with $s<s'$ and set $D=\{f_{s,s'}(cb^2c)\}^{\perp\perp}$. Then we see that $p_D\leq(cb^2c)_{[s,1]}\leq (cb^2c)_{[\lambda-\delta,1]}$ and $p_B\leq b_{\{0\}}\leq b_{[0,\sqrt{\delta}]}$ so, by \autoref{lem2}, \[||BD||^2\leq||b_{[0,\sqrt{\delta}]}(cb^2c)_{[\lambda-\delta,1]}||^2\leq1-\lambda+\epsilon<1.\] Now simply apply \autoref{septhm} to get another $D\in[A]^\perp$ with $||BD||\leq\epsilon$. \end{proof} \begin{cor}\label{SP} If $A$ has property (SP) then $[A]^\perp\cong[\mathcal{P}(A)]^\perp$. \end{cor} \begin{proof} Property (SP) means that every hereditary C*-subalgebra of $A$ contains a non-zero projection (see \cite{Blackadar1994} Definition 6.1.1) which, by the comments after \autoref{prp1}, is equivalent to saying every annihilator contains a non-zero projection. This, in turn, is equivalent to saying $\mathcal{P}(A)$ is order-dense in $A$ in the induced preorder. As $[A]^\perp$ and hence $A$ is separative, this is equivalent to saying $\mathcal{P}(A)$ is join-dense in $A$, by \autoref{jdod}. Now $[A]^\perp\cong[\mathcal{P}(A)]^\perp$, by \eqref{jdorthoiso}. 
\end{proof} Note that $[\mathcal{P}(A)]^\perp$ is just the canonical completion by cuts of the orthomodular partial order $\mathcal{P}(A)$. However, this does not necessarily mean that $[A]^\perp$ is orthomodular for $A$ with property (SP), as orthomodularity is not necessarily preserved by completions. In fact, an example is even given in \cite{Adams1969} of a modular lattice such that its completion is not orthomodular, and \autoref{nonorthoxpl}, with $X$ replaced by the Cantor space, even yields a real rank zero $A$ such that $[A]^\perp$ is not orthomodular. Replacing projections above with any essential ideal (for any C*-algebra now), we get the same result (and more can be said in this case \textendash\, see \autoref{simsub}). \begin{prp}\label{ess} If $B$ contains an essential ideal in $A$ then $[A]^\perp\cong[B]^\perp$. \end{prp} \begin{proof} Take non-zero $a\in A_+$ and let $I$ be an essential ideal of $A$ contained in $B$. Then there exists $b\in I_+$ with $0\neq ba\in I\subseteq B$ and hence $\{ba\}^{\perp\perp}\subseteq\{a\}^{\perp\perp}$, i.e. $ba$ is below $a$ in the induced preorder. As $a$ was arbitrary, $B$ is order-dense in $A$ in the induced preorder. The result now follows as in the proof of \autoref{SP}. \end{proof} So internally, essential ideals are the same as the C*-algebra itself, as far as the annihilators are concerned. The following shows that the same is true externally. \begin{prp}\label{essidann} If $C$ is an essential ideal in a hereditary C*-subalgebra $B$ of $A$ then $B^\perp=C^\perp$. \end{prp} \begin{proof} For $a\in C^\perp_+$, $b\in B_+$ and $c\in C_+$ we have $bc\in C$ and hence $babc=0$. As $c\in C_+$ was arbitrary, $bab\in C^\perp\cap B=\{0\}$ and hence $ab=0$. As $b\in B_+$ was arbitrary, $a\in B^\perp$, i.e. $C^\perp\subseteq B^\perp$, while the reverse inclusion is immediate. 
\end{proof} Note that if \autoref{essidann} were true for all hereditary C*-subalgebras of a particular C*-algebra $A$, not just ideals, then $[A]^\perp$ would have to be orthomodular. \subsection{Annihilator Ideals}\label{annideals} Now that we know the annihilators in an arbitrary C*-algebra are separative, we immediately have the type decompositions given in \S\ref{tdsec}. The only question that remains is whether these types can also be characterized algebraically in the same way as the projections appearing in the classical type decomposition of a von Neumann algebra. Our first task is to identify the central annihilators. \begin{thm}\label{centralannihilators} $B\in[A]^\perp$ is central if and only if $B$ is an ideal. \end{thm} \begin{proof} We first show that, whenever $B,D\in[A]^\perp$ and $B$ is an ideal, \[(D\cap B)^\perp\cap(D\cap B^\perp)^\perp\subseteq D^\perp.\] To see this, take any $e\notin D^\perp$, so there is $d\in D$ such that $ed\neq0$. But then there must also be $b\in B$ or $b\in B^\perp$ such that $edb\neq0$ and hence $edbd\neq0$. But then $dbd\in D\cap B$ and hence $e\notin (D\cap B)^\perp$, or $dbd\in D\cap B^\perp$ and hence $e\notin (D\cap B^\perp)^\perp$. So $e\notin(D\cap B)^\perp\cap(D\cap B^\perp)^\perp$ and the inclusion is proved. The reverse inclusion is immediate and thus $B\in\mathrm{c}[A]^\perp$, by \autoref{centralequiv}. Conversely, if $B$ is not an ideal then we have $a\in A^1_+$ and $b\in B^1_+$ with $ab\notin B$. As $ba^2b\in B$ we must have $ab^2a\notin B$ (because $B$ is hereditary so $x\in B\Leftrightarrow x^*x\in B$ and $xx^*\in B$ \textendash\, see \cite{Pedersen1979} Theorem 1.5.2). If $cc^*\leq\lambda ab^2a$ for some $\lambda\geq0$ then $c=abd$, for some $d\in\mathcal{B}(H)$, by \cite{Douglas1966} (alternatively, one could obtain a similar factorization in $A$ with \cite{Pedersen1979} Proposition 1.4.5). Then $bc=babd=0\Leftrightarrow c=abd=0$, by \autoref{xab}, and hence $c\notin B^\perp$ as long as $c\neq0$. 
So, if we set $C=\{f_\delta(ab^2a)\}^{\perp\perp}$ for $\delta>0$ sufficiently small that $f_\delta(ab^2a)\notin B$, then $C$ is not contained in $B$ and $cc^*\leq2||c||^2\delta^{-1}ab^2a$, for all $c\in C$, and hence $C\cap B^\perp=\{0\}$. So we have $(B\wedge C)\vee(B^\perp\wedge C)=B\cap C\subsetneqq C$ and hence $B$ is not central. \end{proof} In particular, $B$ is central (in $[A]^\perp$) if and only if $p_B$ is (in $\mathcal{P}(A'')$), by \autoref{centralideals}. This is perhaps a little surprising, considering that commutativity itself is not the same in $[A]^\perp$ and $\mathcal{P}(A'')$, as shown by the examples in \S\ref{Examples}. Another important thing to note is the difference between central covers in $[A]^\perp$ and the central covers defined in \cite{Pedersen1979} 2.6.2. There, the central cover $\mathrm{c}(a)$ of an element $a\in A''_\mathrm{sa}$ is defined to be the smallest $b\in(A'\cap A'')_\mathrm{sa}$ with $a\leq b$. If $p\in\mathcal{P}(A'')$, this means that $\mathrm{c}(p)=\mathrm{c}_{\mathcal{P}(A'')}(p)$, i.e. central covers in this algebraic sense are the same for projections as central covers in the order theoretic sense \emph{with respect to} $\mathcal{P}(A'')$ (not $\overline{\mathcal{P}(A'')}^\circ$). In general we have $\mathrm{c}(p_B)\leq p_{\mathrm{c}(B)}$, for $B\in[A]^\perp$, but this inequality can be strict. We will primarily be concerned with central covers in $[A]^\perp$ rather than $A''_\mathrm{sa}$, although we would be remiss not to point out the following connections. \begin{prp}\label{centralclosures} If $B$ is a C*-subalgebra of $A$ then $p_{\mathrm{c}(B)}=\overline{\mathrm{c}(p_B)}^\circ$. \end{prp} \begin{proof} As $\mathrm{c}(B)$ is an ideal, $p_{\mathrm{c}(B)}$ is a central projection and $B\subseteq\mathrm{c}(B)$ so $p_B\leq p_{\mathrm{c}(B)}$ and hence $\mathrm{c}(p_B)\leq p_{\mathrm{c}(B)}$ which, in turn, gives $\overline{\mathrm{c}(p_B)}^\circ\leq\overline{p}_{\mathrm{c}(B)}^\circ=p_{\mathrm{c}(B)}$. 
For the reverse inequality, first note that $\overline{\mathrm{c}(p_B)}^\circ$ is central, by \autoref{centralprojections}. So $\overline{\mathrm{c}(p_B)}^\circ A\overline{\mathrm{c}(p_B)}^\circ\cap A$ is an annihilator ideal, by \autoref{centralideals}, which certainly contains $B$ and hence also $\mathrm{c}(B)$. Thus $p_{\mathrm{c}(B)}\leq\overline{\mathrm{c}(p_B)}^\circ$. \end{proof} Even if $p_{\mathrm{c}(B)}\neq\mathrm{c}(p_B)$, the analogous notion of `very orthogonal' is equivalent. \begin{cor}\label{vo} For $B,C\in[A]^\perp$, the following are equivalent. \begin{enumerate} \item\label{vo1} $B$ and $C$ are very orthogonal. \item\label{vo2} $\mathrm{c}(p_B)\mathrm{c}(p_C)=0$ \item\label{vo3} $BAC=\{0\}$. \end{enumerate} \end{cor} \begin{proof}\ \begin{itemize} \item[(\ref{vo1})$\Rightarrow$(\ref{vo2})] As $\mathrm{c}(p_B)\leq p_{\mathrm{c}(B)}$, $||\mathrm{c}(p_B)\mathrm{c}(p_C)||\leq||p_{\mathrm{c}(B)}p_{\mathrm{c}(C)}||\leq||\mathrm{c}(B)\mathrm{c}(C)||=0$. \item[(\ref{vo2})$\Rightarrow$(\ref{vo1})] If $\mathrm{c}(p_B)\mathrm{c}(p_C)=0$ then $\mathrm{c}(p_B)\leq\mathrm{c}(p_C)^\perp$ so $p_B=p_B^\circ\leq\mathrm{c}(p_B)^\circ\leq \mathrm{c}(p_C)^{\perp\circ}$. By \autoref{centralprojections}, $\mathrm{c}(p_C)^{\perp\circ}$ is central. Also $p_C\leq\mathrm{c}(p_C)^\circ$, and this latter projection is central, again by \autoref{centralprojections}. Thus we must in fact have $\mathrm{c}(p_C)^\circ=\mathrm{c}(p_C)$, i.e. $\mathrm{c}(p_C)$ is open so $\mathrm{c}(p_C)^\perp$ is closed and $\mathrm{c}(p_C)^{\perp\circ}$ is topologically regular, by \eqref{intclosedtopreg}. So we have $p_{\mathrm{c}(B)}\leq\mathrm{c}(p_C)^{\perp\circ}$ and hence $p_{\mathrm{c}(B)}\mathrm{c}(p_C)=0$. Now $p_{\mathrm{c}(B)}p_{\mathrm{c}(C)}=0$ follows, by the same argument applied to $p_C$, and thus $\mathrm{c}(B)\cap\mathrm{c}(C)=\{0\}$. 
\item[(\ref{vo2})$\Rightarrow$(\ref{vo3})] For $a\in A$, $b\in B$ and $c\in C$, $bac=b\mathrm{c}(p_B)a\mathrm{c}(p_C)c=\mathrm{c}(p_B)\mathrm{c}(p_C)bac=0$. \item[(\ref{vo3})$\Rightarrow$(\ref{vo2})] If $BAC=\{0\}$ then if $I=ABA$ is the ideal generated by $B$ we have $IC=\{0\}$ and hence $\overline{I}C=\{0\}$, so $||\mathrm{c}(p_B)p_C||\leq||p_{\overline{I}}p_C||=0$. But $\mathrm{c}(p_B)p_C=0$ means $p_C\leq\mathrm{c}(p_B)^\perp$ and hence $\mathrm{c}(p_C)\leq\mathrm{c}(p_B)^\perp$ so $\mathrm{c}(p_B)\mathrm{c}(p_C)=0$. \end{itemize} \end{proof} The next result, applied to $B\in[A]^\perp$, shows $[A]^\perp$ has the relative centre property. \begin{cor}\label{c_B} If $B$ is a hereditary C*-subalgebra of $A$ then \[\mathrm{c}_B[B]^\perp=\{B\cap C:C\in\mathrm{c}[A]^\perp\}.\] \end{cor} \begin{proof} For one inclusion, take $C\in\mathrm{c}[A]^\perp$ and $b\in(B_+\cap C^\perp)^{\perp_B}$. Then, for any $c\in C^\perp_+$, we have $\sqrt{b}c\in B\cap C^\perp_+$ and hence $bc=0$, i.e. $b\in C^{\perp\perp}=C$. As $C$ is an ideal in $A$, $B\cap C$ is an ideal in $B$, and thus $B\cap C=(B\cap C^\perp)^{\perp_B}\in\mathrm{c}[B]^\perp$. For the reverse inclusion, say we have $C\in\mathrm{c}_B[B]^\perp$ which, by \autoref{vo} \eqref{vo3}, means that $CBC^{\perp_B}=\{0\}$. But, for any $c\in C_+$, $d\in C^{\perp_B}_+$ and $a\in A_+$, we have \[cad=\sqrt{c}(\sqrt{c}a\sqrt{d})\sqrt{d}\in CBC^{\perp_B}=\{0\}.\] Thus $C$ and $C^{\perp_B}$ are very orthogonal (in $[A]^\perp$) so $\mathrm{c}(C)\cap B=C$. \end{proof} We also have something of an algebraic substitute for \autoref{ordertd}. \begin{prp}\label{algebratd} Given hereditary C*-subalgebras $(B_\alpha)$ with $\mathrm{c}(p_{B_\alpha})\mathrm{c}(p_{B_\beta})=0$, whenever $\alpha\neq\beta$, the C*-subalgebra $\bigoplus B_\alpha$ they generate is also hereditary. \end{prp} \begin{proof} A C*-subalgebra $B$ is hereditary if and only if $bab\in B$ for all $a\in A$ and $b\in B$ (see \cite{Blackadar2006} II.5.3.9). 
But given $a\in A$ and $\sum b_\alpha\in\bigoplus B_\alpha$ we have \begin{eqnarray*} (\sum b_\alpha)a(\sum b_\alpha) &=& (\sum b_\alpha\mathrm{c}(p_{B_\alpha}))a(\sum\mathrm{c}(p_{B_\alpha})b_\alpha)\\ &=& \sum b_\alpha\mathrm{c}(p_{B_\alpha})a\mathrm{c}(p_{B_\alpha})b_\alpha,\quad\textrm{by centrality and orthogonality,}\\ &=& \sum b_\alpha ab_\alpha\in\bigoplus B_\alpha,\quad\textrm{by hereditarity of the }(B_\alpha). \end{eqnarray*} \end{proof} \subsection{Equivalence}\label{Equivalence} Here we study the basic properties of the following relation, which we believe to be the natural analog for annihilators in C*-algebras of Murray-von Neumann equivalence. \begin{dfn}\label{equidef} $B,C\in[A]^\perp$ are \emph{equivalent}, written $B\sim C$, if $\{a\}^{\perp\perp}=B$ and $\{a^*\}^{\perp\perp}=C$, for some $a\in A$. We write $B\precsim C$ if $B\sim D\subseteq C$, for some $D\in[A]^\perp$. \end{dfn} For $p,q\in\mathcal{P}(A)$, let us write $p\sim_\mathrm{MvN}q$ for the usual Murray-von Neumann equivalence notion, i.e. if there is a (partial isometry) $u\in A$ with $u^*u=p$ and $uu^*=q$. Likewise, $p\precsim_\mathrm{MvN}q$ means $p\sim_\mathrm{MvN}r\leq q$ for some $r\in\mathcal{P}(A)$. Also, we shall say $A$ has \emph{polar decomposition} if, for all $a\in A$, there is a partial isometry $u\in A$ with $a=u|a|$, where $|a|=\sqrt{a^*a}$. Von Neumann algebras certainly have polar decomposition, but so too do AW*-algebras, Rickart C*-algebras and all C*-algebra quotients of these (e.g. the Calkin algebra). The following result shows that, for any such C*-algebra, the relation $\sim$ defined above really is a completely consistent extension of Murray-von Neumann equivalence. 
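In finite dimensions this consistency can be made completely explicit: for a matrix $a$, the annihilators $\{a\}^{\perp\perp}$ and $\{a^*\}^{\perp\perp}$ are the hereditary subalgebras cut out by the right and left support projections of $a$, and the partial isometry in the polar decomposition $a=u|a|$ carries one to the other. The following Python sketch (an illustration only, with an arbitrarily chosen rank-deficient matrix) checks this numerically via the singular value decomposition.

```python
import numpy as np

rng = np.random.default_rng(1)

def support_projection(h, tol=1e-10):
    """Projection onto the range of the positive semidefinite matrix h."""
    w, v = np.linalg.eigh(h)
    cols = v[:, w > tol]
    return cols @ cols.conj().T

# an arbitrary 6x6 matrix of rank 3
a = (rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))) @ \
    (rng.standard_normal((3, 6)) + 1j * rng.standard_normal((3, 6)))

# polar decomposition a = u|a| via the SVD, discarding the zero singular values
U, s, Vh = np.linalg.svd(a)
r = int(np.sum(s > 1e-10))
u = U[:, :r] @ Vh[:r, :]                  # the partial isometry
absa = Vh.conj().T @ np.diag(s) @ Vh      # |a| = sqrt(a*a)
assert np.allclose(u @ absa, a)           # a = u|a|

p = support_projection(a.conj().T @ a)    # right support projection of a
q = support_projection(a @ a.conj().T)    # left support projection of a

# u*u = p and uu* = q, witnessing the Murray-von Neumann equivalence p ~ q
assert np.allclose(u.conj().T @ u, p)
assert np.allclose(u @ u.conj().T, q)
```

Here the hereditary subalgebras $pAp$ and $qAq$ play the roles of $\{a\}^{\perp\perp}$ and $\{a^*\}^{\perp\perp}$; the matrix itself is an arbitrary choice and any rank-deficient $a$ would do.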
\begin{prp} If $A$ has polar decomposition then, for $p,q\in\mathcal{P}(A)$, \[p\sim_\mathrm{MvN}q\quad\Leftrightarrow\quad pAp\sim qAq\qquad\qquad\textrm{and}\qquad\qquad p\precsim_\mathrm{MvN}q\quad\Leftrightarrow\quad pAp\precsim qAq.\] \end{prp} \begin{proof} It suffices to prove the first equivalence. If $p\sim_\mathrm{MvN}q$ then the partial isometry witnessing this will also witness $pAp\sim qAq$. Conversely, say we have $a\in A$ with $\{a\}^{\perp\perp}=pAp$ and $\{a^*\}^{\perp\perp}=qAq$ and take a partial isometry $u\in A$ with $a=u|a|$. Then we immediately see that the left annihilator of $u$ is contained in the left annihilator of $a$, i.e. $\{u^*\}^\perp\subseteq\{a^*\}^\perp$ and hence $qAq=\{a^*\}^{\perp\perp}\subseteq\{u^*\}^{\perp\perp}=uAu^*$, which gives $uu^*\geq q$. By replacing $u$ with $qu$ if necessary we may assume that $uu^*=q$ and hence $\{u^*\}^{\perp\perp}=\{a^*\}^{\perp\perp}$. We also have $|a|^2=a^*a=|a|u^*u|a|$ and hence $|a|(1-u^*u)|a|=0$, which means $(1-u^*u)|a|=0$, i.e. $|a|=u^*u|a|=u^*a=a^*u$. This immediately gives $\{a\}^{\perp\perp}\subseteq\{u\}^{\perp\perp}$, while if $0=ab$ then $0=\sqrt{aa^*}ab=a\sqrt{a^*a}b=a|a|b=aa^*ub$ which, as $\{a^*\}^\perp=\{u^*\}^\perp$, gives $0=u^*ub$, i.e. $b\in\{u\}^\perp$. As $b\in\{a\}^\perp$ was arbitrary, $\{a\}^\perp\subseteq\{u\}^\perp$ and hence $\{u\}^{\perp\perp}\subseteq\{a\}^{\perp\perp}$, i.e. $u^*Au=\{u\}^{\perp\perp}=\{a\}^{\perp\perp}=pAp$ so $u^*u=p$ too. \end{proof} \begin{lem}\label{simlem} If $a,b\in A$ and $\{b^*\}^{\perp\perp}\subseteq\{a\}^{\perp\perp}$ then $\{ab\}^{\perp\perp}=\{b\}^{\perp\perp}$. \end{lem} \begin{proof} If $c\in A_+$ and $abc=0$ then $abcb^*=0$ so \[bcb^*\in\{a\}^\perp\cap\{b^*\}^{\perp\perp}\subseteq\{a\}^\perp\cap\{a\}^{\perp\perp}=\{0\}.\] Thus $\{ab\}^\perp\subseteq\{b\}^\perp$, while $\{b\}^\perp\subseteq\{ab\}^\perp$ is immediate, so $\{ab\}^\perp=\{b\}^\perp$. 
\end{proof} \begin{cor}\label{simtran} $\sim$ and $\precsim$ are transitive relations. \end{cor} \begin{proof} If $a$ witnesses $B\precsim C$ and $b$ witnesses $C\precsim D$, then $ab$ witnesses $B\precsim D$, by \autoref{simlem}. If $\precsim$ was actually $\sim$ here then one more application of \autoref{simlem} shows that $ab$ witnesses $B\succsim D$ and hence, as $\precsim$ and $\succsim$ are witnessed by the same element $ab$ of $A$, $B\sim D$. \end{proof} \begin{dfn} $A$ is \emph{anniseparable} if $B\in[A]^\perp\Rightarrow B=S^\perp$ for countable $S\subseteq A$. We call $B\in[A]^\perp$ \emph{principal} if there exists $b\in B$ with $\{b\}^{\perp\perp}=B$. We say $A$ is \emph{orthoseparable} if every pairwise orthogonal subset of $A_+$ is countable. \end{dfn} We can see immediately that \[\textrm{anniseparability}\quad\Leftrightarrow\quad\textrm{every annihilator is principal}\quad\Leftrightarrow\quad\sim\textrm{ is reflexive.}\] For, given $(s_n)\subseteq A^1_+$ such that $(s_n)^{\perp\perp}=B$, we can simply let $b=\sum2^{-n}s_n$ to get $\{b\}^{\perp\perp}=B$. It then immediately follows from \autoref{simtran} that $\sim$ is an equivalence relation and $\precsim$ is a preorder. Likewise, if we have orthogonal $(B_n)\subseteq[A]^\perp$, orthogonal $(C_n)\subseteq[A]^\perp$ and $(a_n)\subseteq A$ witnesses their equivalence, i.e. $\{a_n\}^{\perp\perp}=B_n$ and $\{a^*_n\}^{\perp\perp}=C_n$, for all $n\in\mathbb{N}$, then $a=\sum2^{-n}a_n$ witnesses the equivalence of $B=\bigvee B_n$ and $C=\bigvee C_n$, i.e. $\sim$ is countably additive. This yields the second implication in \[\textrm{separability}\quad\Rightarrow\quad\textrm{orthoseparability}\quad\Rightarrow\quad\textrm{additivity}.\] Most C*-algebras one encounters are anniseparable. \begin{prp}\label{annisep} Each of the following conditions implies $A$ is anniseparable. \begin{enumerate} \item\label{annisep1} $A$ is an AW*-algebra (e.g. a von Neumann algebra). \item $A$ is (norm) separable. 
\item $A$ is orthoseparable and $[A]^\perp$ is orthomodular. \end{enumerate} \end{prp} \begin{proof}\ \begin{enumerate} \item An AW*-algebra is, by definition, a Baer *-ring (see \cite{Berberian1972} \S4 Definition 1 and 2, and also Proposition 1 to see how this is equivalent to other definitions sometimes given for AW*-algebras, like the one in \cite{Pedersen1979} 3.9.2), meaning that, for every $S\subseteq A$, $R(S)=pA$ for some $p\in\mathcal{P}(A)$, and hence $S^\perp=pAp=\{p\}^{\perp\perp}$. To see that von Neumann algebras are AW*-algebras, simply note that, for any $S\subseteq A$, $R(S)$ is a weakly closed right ideal and hence of the form $pA$, for some $p\in\mathcal{P}(A)$, by \cite{Pedersen1979} Proposition 2.5.4. \item For any $B\in[A]^\perp$, $B=S^{\perp\perp}$, where $S$ is any countable dense subset of $B$. \item For any $B\in[A]^\perp$, let $S$ be a maximal pairwise orthogonal subset of $B_+$. If we had $S^{\perp\perp}<B$ then, by orthomodularity, we would have $b\in B_+\cap S^\perp$, contradicting the maximality of $S$. \end{enumerate} \end{proof} \begin{prp}\label{posp'} If $B,C\in[A]^\perp$ are principal and semiorthoperspective, $B\sim C$. \end{prp} \begin{proof} Take $b,c\in A_+$ with $\{b\}^{\perp\perp}=B$ and $\{c\}^{\perp\perp}=C$. Given $d\in A_+$ with $bcd=0$ we have $cdc\in C\cap B^\perp=\{0\}$ and hence $cd=0$, i.e. $\{bc\}^{\perp}=\{c\}^\perp$ and hence $\{bc\}^{\perp\perp}=\{c\}^{\perp\perp}=C$. A symmetric argument gives $\{cb\}^{\perp\perp}=\{b\}^{\perp\perp}=B$ and hence $bc$ witnesses $B\sim C$. \end{proof} \begin{cor}\label{posp'cor} If $A$ is anniseparable, $\sim$ is weaker than perspectivity. \end{cor} \begin{cor}\label{simBsim} If $A$ is anniseparable and $B\in[A]^\perp$ then $\sim_B$ is $\sim$ on $[B]_B$. 
\end{cor} \begin{proof} If $\{b\}^{\perp\perp},\{b^*\}^{\perp\perp}\in[B]_B$, for some $b\in A$ (necessarily with $b\in B$), then $\{b\}^{\perp_B\perp_B}=\{b\}^{\perp\perp\perp_B\perp_B}=\{b\}^{\perp\perp}$ and, likewise, $\{b^*\}^{\perp_B\perp_B}=\{b^*\}^{\perp\perp}$. Thus $\sim$ restricted to $[B]_B$ is stronger than $\sim_B$. Conversely, if $A$ is anniseparable then, as $\sim$ is weaker than perspectivity, which is itself weaker than orthoperspectivity, we have $\{b\}^{\perp_B\perp_B}\sim\{b\}^{\perp\perp}\sim\{b^*\}^{\perp\perp}\sim\{b^*\}^{\perp_B\perp_B}$, for any $b\in B$. \end{proof} From \autoref{posp'cor}, we see that the only thing stopping $\sim$ from being a dimensional equivalence relation (for anniseparable $A$) according to \cite{Kalmbach1983} \S11 Definition 1, is the potential lack of finite (orthogonal) divisibility. We can, however, prove that $\sim$ satisfies a non-orthogonal version of divisibility. \begin{prp}\label{nonorthodiv} For any $a\in A$, the map $B\mapsto(Ba)^{\perp\perp}$ is order and supremum preserving on $[A]^\perp$. Also $(Ba)^{\perp\perp}\sim B$, for all principal $B\in[\{a^*\}^{\perp\perp}]$. \end{prp} \begin{proof} The given map is certainly order preserving, even for arbitrary subsets $B$ of $A$. If we have $\mathcal{B}\subseteq[A]^\perp$ then \[\bigvee_{B\in\mathcal{B}}(Ba)^{\perp\perp}=(\bigcap_{B\in\mathcal{B}}(Ba)^\perp)^\perp=((\bigcup\mathcal{B})a)^{\perp\perp}\subseteq((\bigvee\mathcal{B})a)^{\perp\perp}.\] To see that this last inclusion can be reversed, take $c\in((\bigcup\mathcal{B})a)^\perp_+$. This means $aca^*\in(\bigcup\mathcal{B})^\perp=(\bigvee\mathcal{B})^\perp$ and hence $c\in((\bigvee\mathcal{B})a)^\perp$, i.e. $((\bigcup\mathcal{B})a)^\perp\subseteq((\bigvee\mathcal{B})a)^\perp$ and hence $((\bigvee\mathcal{B})a)^{\perp\perp}\subseteq((\bigcup\mathcal{B})a)^{\perp\perp}$. 
Lastly, note that if $B=\{b\}^{\perp\perp}\subseteq\{a^*\}^{\perp\perp}$, for some $b\in B_+$, then $\{a^*b\}^{\perp\perp}=B$, by \autoref{simlem}, and \[c\in\{ba\}^\perp\quad\Leftrightarrow\quad aca^*\in\{b\}^\perp=B^\perp\quad\Leftrightarrow\quad c\in(Ba)^\perp,\] i.e. $\{ba\}^{\perp\perp}=(Ba)^{\perp\perp}$ and hence $ba$ witnesses $B\sim(Ba)^{\perp\perp}$. \end{proof} Note, however, that the map $B\mapsto(Ba)^{\perp\perp}$ may not preserve infimums, as the following example shows. Specifically, consider $A=\mathcal{B}(H)$, where $H$ is a separable infinite dimensional Hilbert space with basis $(e_n)$. Define $a\in\mathcal{B}(H)_+$ by $ae_n=\frac{1}{n^2}e_n$, so $\overline{\mathcal{R}(a)}=H$ and hence $\{a\}^{\perp\perp}=A$, and let $c$ be the projection onto $\mathbb{C}v$, where $v=\sum\frac{1}{n}e_n$. Then $b=c^\perp$ is orthogonal to $c$, and hence $B=bAb=\{b\}^{\perp\perp}$ is orthogonal to $\{c\}^{\perp\perp}=cAc=C$, even though we still have $\overline{\mathcal{R}(ab)}=H$ and hence $(Ba)^{\perp\perp}=\{ba\}^{\perp\perp}=\{a\}^{\perp\perp}=A$, while $(Ca)^{\perp\perp}\neq\{0\}$. \begin{prp}\label{simtr} If $A$ is anniseparable and orthoseparable, $\sim$ is a type relation. \end{prp} \begin{proof} Orthoseparability yields the $\Rightarrow$ part of \eqref{treq}. For the $\Leftarrow$ part, say we have $B,C\in[A]^\perp$, $B\sim C$ and $D\in c[A]^\perp$. By \autoref{nonorthodiv}, we have $E,F\in[C]$ with $B\cap D\sim E$, $B^\perp\cap D\sim F$ and $E\vee F=C$. As the closed ideal generated by any $a\in A$ is the same as that generated by $a^*$, we have $c(E)=c(B\cap D)\subseteq B$ and $c(F)=c(B^\perp\cap D)\subseteq B^\perp$. This means $E=B\cap C$ and $F=B^\perp\cap C$. \end{proof} In particular, we get the type-decompositions in \autoref{tdcor} coming from the type ideals of $\sim$-finite and $\sim$-orthofinite annihilators, by \autoref{trti}. 
Whether these agree with the type decompositions you get from the type ideals of modular or relatively modular annihilators (see \autoref{tctd} and \eqref{Mdef}), we do not know. However, it does follow from \autoref{permod} that if $\sim$ is finite on $[A]^\perp$ then $[A]^\perp$ is modular. So if $[A]^\perp_{\sim\perp\mathrm{Fin}}$ is the type ideal of $\sim$-orthofinite annihilators in $[A]^\perp$ then $[A]^\perp_{\sim\perp\mathrm{Fin}}\cap[A]^\perp_{\mathbf{O}\mathrm{rel}}$ (see \eqref{Odef}) is the type ideal consisting of those $B\in[A]^\perp$ for which $\sim$ is finite on $[B]_B$ and \[[A]^\perp_{\sim\perp\mathrm{Fin}}\cap[A]^\perp_{\mathbf{O}\mathrm{rel}}\quad\subseteq\quad[A]^\perp_{\mathbf{M}\mathrm{rel}}.\] \autoref{nonorthodiv} also yields a Cantor-Schroeder-Bernstein theorem. \begin{thm} If $A$ is anniseparable and $B,C\in[A]^\perp$ then \[B\precsim C\precsim B\quad\Rightarrow\quad B\sim C.\] \end{thm} \begin{proof} Take $b,c\in A$ with $\{b\}^{\perp\perp}=B$, $\{b^*\}^{\perp\perp}\subseteq C$, $\{c\}^{\perp\perp}=C$ and $\{c^*\}^{\perp\perp}\subseteq B$. We will apply Tarski's fixed point theorem, as in \cite{Berberian1972} \S1 Theorem 1. Specifically, note that the map on $[B]$ defined by \[D\mapsto((Db^*)^{\perp_C}c^*)^{\perp_B}\] is order-preserving so, as $[B]$ is a complete lattice, it has a fixed point $F$. By \autoref{nonorthodiv}, \begin{eqnarray*} F^{\perp_B}=((Fb^*)^{\perp_C}c^*)^{\perp_B\perp_B} &\sim& (Fb^*)^{\perp_C},\textrm{ and}\\ F &\sim& (Fb^*)^{\perp_C\perp_C},\textrm{ so}\\ B\sim F\vee F^{\perp_B} &\sim& (Fb^*)^{\perp_C\perp_C}\vee(Fb^*)^{\perp_C}\sim C. \end{eqnarray*} \end{proof} The next result shows $\precsim$ satisfies generalized comparison (see \autoref{gcdef}). \begin{thm}\label{BDCDthm} If $A$ is anniseparable and orthoseparable then, for all $B,C\in[A]^\perp$, there exists $D\in\mathrm{c}[A]^\perp$ with \begin{equation}\label{BDCD} B\cap D\precsim C\cap D\qquad\textrm{and}\qquad C\cap D^\perp\precsim B\cap D^\perp. 
\end{equation} \end{thm} \begin{proof} Let $(a_n)$ be a maximal subset of $A^1$ such that, for all distinct $m,n\in\mathbb{N}$, \[a_n^*a_n\in B,\quad a_na_n^*\in C,\quad\textrm{and}\quad a_ma_n^*=0=a_m^*a_n.\] Let $a=\sum_n2^{-n}a_n$, $E=\{a\}^{\perp_B\perp_B}\subseteq B$ and $F=\{a^*\}^{\perp_C\perp_C}\subseteq C$. By \autoref{orthoperpequiv} and \autoref{posp'cor}, $E\sim\{a\}^{\perp\perp}\sim\{a^*\}^{\perp\perp}\sim F$. By maximality, $(B\cap E^\perp)A(C\cap F^\perp)=\{0\}$ and \eqref{BDCD} now follows from \autoref{simgcprp} and \autoref{vo}. \end{proof} Next we show that $\sim$ is the same in any sufficiently large hereditary C*-subalgebra $B$ of $A$. By \autoref{ess}, this applies to any $B$ containing an essential ideal of $A$. \begin{prp}\label{simsub} If $B$ is an anniseparable hereditary C*-subalgebra of $A$ and $C\mapsto B\cap C$ is an injective map from $[A]^\perp$ to $[B]^\perp$ then, for $D,E\in[A]^\perp$, \[D\sim E\quad\Leftrightarrow\quad B\cap D\sim_B B\cap E.\] \end{prp} \begin{proof} For any $b\in B$, $B\cap\{b\}^{\perp\perp}$ is the smallest element of $\{B\cap C:C\in[A]^\perp\}$ containing $b^*b$. As $C\mapsto B\cap C$ maps $[A]^\perp$ to $[B]^\perp$, $[B]^\perp=\{B\cap C:C\in[A]^\perp\}$ so \begin{equation}\label{perpeq} \{b\}^{\perp_B\perp_B}=\{b\}^{\perp\perp}\cap B. \end{equation} So if $\{b\}^{\perp_B\perp_B}=B\cap D$ and $\{b^*\}^{\perp_B\perp_B}=B\cap E$ then $\{b\}^{\perp\perp}=D$ and $\{b^*\}^{\perp\perp}=E$, by the injectivity of $C\mapsto B\cap C$, i.e. $B\cap D\sim_B B\cap E$ implies $D\sim E$. On the other hand, as $B$ is anniseparable, we have $d\in B\cap D_+$ such that $B\cap D=\{d\}^{\perp_B\perp_B}=B\cap\{d\}^{\perp\perp}$ and hence $D=\{d\}^{\perp\perp}$, as $C\mapsto B\cap C$ is injective. Likewise, we have $e\in B\cap E_+$ such that $\{e\}^{\perp\perp}=E$. If $a\in A$ witnesses $D\sim E$ then so does $dae$, by \autoref{simlem}. 
But $B$ is a hereditary C*-subalgebra so $dae\in B$ and hence witnesses $B\cap D\sim_BB\cap E$, again by \eqref{perpeq}. \end{proof} In particular, under the hypothesis of \autoref{simsub}, $A$ is also anniseparable. However, the converse fails in general. For example, if $H$ is a non-separable Hilbert space then $\mathcal{K}(H)$ is not anniseparable, as $\mathcal{K}(H)$ is a non-principal annihilator in itself, even though $\mathcal{K}(H)$ is an essential ideal in the (necessarily anniseparable) von Neumann algebra $\mathcal{B}(H)$. In fact, if $B$ is an essential ideal in anniseparable $A$ then \[B\textrm{ is principal (in itself) }\quad\Leftrightarrow\quad B\textrm{ is anniseparable}.\] The $\Leftarrow$ part is immediate, and for the $\Rightarrow$ part simply note that if $\{b\}^{\perp_B\perp_B}=B$, for some $b\in B$, then $\{b\}^{\perp\perp}=A$ and, for any $c\in A$, we have $bc\in B$ and $\{bc\}^{\perp_B\perp_B}=\{c\}^{\perp\perp}\cap B$, by \eqref{perpeq}. \subsection{Abelian Annihilators}\label{AA} \begin{thm}\label{commuteBoolean} $B\in[A]^\perp$ is commutative if and only if $[B]$ is a Boolean algebra. \end{thm} \begin{proof} First note that $A$ is commutative if and only if every hereditary C*-subalgebra of $A$ is an ideal. For if $A$ is commutative then so is $A''$ and, in particular, $\mathcal{P}(A'')^\circ$, so all hereditary C*-subalgebras of $A$ are ideals. While if every hereditary C*-subalgebra of $A$ is an ideal then every open projection in $A''$ is central. As $\mathcal{P}(A'')^\circ$ contains all spectral projections of elements of $A_+$ corresponding to open subsets of $\mathbb{R}$, it follows that $\mathrm{span}(\mathcal{P}(A'')^\circ)$ is dense in $A$ and hence $A$ is commutative. So if $B$ is commutative then, by the previous paragraph and \autoref{centralannihilators}, $[B]_B=\mathrm{c}_B[B]_B$ is a Boolean algebra, so we only have to show that \begin{equation}\label{BooleanAnn} [B]_B=[B]. 
\end{equation} Given $C\in[B]$, take $b\in C^{\perp_B\perp_B}_+$. Then $babc=bacb=0$, for all $a\in C^\perp_+$ and $c\in C$, so $bab\in B\cap C^\perp=C^{\perp_B}$, but $bab\in C^{\perp_B\perp_B}$ too so $ba=0$ and hence $b\in C^{\perp\perp}=C$, i.e. $C^{\perp_B\perp_B}=C$ and hence $C\in[B]_B$. Conversely, if $B$ is not commutative then, by the first paragraph and the other direction of \autoref{centralannihilators}, $[B]_B$ is not a Boolean algebra. It may be that $[B]$ strictly contains $[B]_B$, but infimums still agree in both structures, as do supremums when one of the annihilators is $\{0\}$, so the counterexample to distributivity in the proof of \autoref{centralannihilators} still works in the possibly larger structure $[B]$. \end{proof} \begin{dfn}\label{abeliandef} If $B\in[A]^\perp$ is commutative it will be called \emph{abelian}. If $[A]^\perp$ contains no non-zero abelian annihilator ideals, $A$ will be called \emph{properly non-abelian}. If $\mathrm{c}(B)=A$ for some abelian $B\in[A]^\perp$, we call $A$ \emph{discrete}. If $A$ contains no non-zero abelian annihilators it will be called \emph{continuous}. \end{dfn} These definitions are consistent with classical von Neumann (or AW*-)algebra terminology (see \cite{Berberian1972} \S15 Definition 3). It follows from \autoref{commuteBoolean} (and \eqref{BooleanAnn}) that the collection of abelian annihilators coincides with the type ideal \[[A]^\perp_{\mathbf{D}}=[A]^\perp_{\mathbf{D}\mathrm{rel}}\] (see \eqref{tctd1}, \eqref{tctd2} and \eqref{Ddef} for this notation). It then follows from \autoref{BDCDthm} and \autoref{Boolthm} that when $B$ and $C$ are abelian annihilators in anniseparable orthoseparable $A$, \[B\sim C\quad\Leftrightarrow\quad\mathrm{c}(B)=\mathrm{c}(C).\] Also \eqref{tdcor1} gives us a decomposition of $A$ into central abelian and properly non-abelian parts, while \eqref{tdcor2} gives us a decomposition into central discrete and continuous parts. 
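\autoref{commuteBoolean} can be illustrated very concretely. Take $A$ to be the commutative algebra of diagonal matrices on a finite set of points: every annihilator is then determined by a support set, double annihilation is just support, and the annihilator lattice becomes the Boolean algebra of subsets, with join, meet and complement given by union, intersection and set complement. A small Python sketch of this toy case (the specific elements are arbitrary choices):

```python
import numpy as np

n = 6  # A = diagonal n x n matrices, a commutative C*-algebra of functions on n points

def supp(s):
    """Support set of a positive diagonal element, stored as a boolean mask."""
    return np.asarray(s) > 0

def ann(mask):
    """Annihilator of the elements supported on `mask`: everything vanishing there."""
    return ~mask

b = np.array([1., 2., 0., 0., 0., 0.])
c = np.array([0., 0., 3., 1., 0., 0.])
d = np.array([0., 1., 1., 0., 0., 5.])

# double annihilation just recovers the support
B, C, D = ann(ann(supp(b))), ann(ann(supp(c))), ann(ann(supp(d)))
assert (B == supp(b)).all()

# an element supported inside ann(supp(b)) really does annihilate b as a matrix
x = np.array([0., 0., 1., 0., 2., 0.])
assert np.allclose(np.diag(b) @ np.diag(x), 0)

# the lattice operations are Boolean: complementation and distributivity hold
assert not (B & ann(B)).any() and (B | ann(B)).all()
assert ((B & (C | D)) == ((B & C) | (B & D))).all()
```

This is only the finite, commutative extreme of the theorem, of course; the content of \autoref{commuteBoolean} is that Booleanness of $[B]$ characterizes commutativity in general.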
We could actually obtain these decompositions in a more algebraic, rather than order-theoretic, way, using \autoref{algebratd} and the following result. \begin{thm}\label{commutativesubalgebra} If $B$ is a commutative hereditary C*-subalgebra then so is $B^{\perp\perp}$. \end{thm} \begin{proof} We first claim that every element of $B$ commutes with every element of $B^{\perp\perp}$. If not, we would have $b\in B^1_+$ and $a\in B^{\perp\perp}_+$ such that $ab\neq ba$. Then, for some $\epsilon>0$, we must have $ab_{[\epsilon,1]}\neq b_{[\epsilon,1]}a$ and hence $b_{[\epsilon,1]}a(1-b_{[\epsilon,1]})=b_{[\epsilon,1]}ab_{[0,\epsilon)}\neq0$. Thus, for some $\delta<\epsilon$ sufficiently close to $\epsilon$, we must have $b_{[\epsilon,1]}ab_{[0,\delta]}\neq0$ and hence $f(b)ag(b)\neq0$ where $f=f_{(\epsilon+\delta)/2,\epsilon}$ and $g=1-f_{\delta,(\epsilon+\delta)/2}$. If we had $g(b)af(b)^2ag(b)\in B^\perp$ then, as $a\in B^{\perp\perp}$, $f(b)ag(b)a=0$ and hence $f(b)ag(b)=0$, a contradiction. Thus $f(b)ag(b)c\neq0$ for some $c\in B$. As $B$ is hereditary and both $f(b)$ and $c$ are in $B$, this means that $d=f(b)ag(b)c\in B$ and, likewise, $d^*\in B$. However, $dd^*\leq\lambda f(b)^2$ for some $\lambda>0$ while $d^*d\leq\lambda'g(b)^2$ (note that $b,c\in B$ so $c$ commutes with $b$ and hence with $g(b)\in B+\mathbb{C}1$) for some $\lambda'>0$. As $f(b)g(b)=0$, this means that $d$ and $d^*$ do not commute, contradicting the fact that $B$ is commutative. Now that the claim is proved, take any $e,f\in B^{\perp\perp}_+$. Given any $b\in B_+$, note that $b(ef-fe)=b^{1/4}eb^{1/2}fb^{1/4}-b^{1/4}fb^{1/2}eb^{1/4}=0$, as $b^{1/4}eb^{1/4},b^{1/4}fb^{1/4}\in B$. Thus $ef-fe\in B^\perp\cap B^{\perp\perp}=\{0\}$ and hence, as $e$ and $f$ were arbitrary, $B^{\perp\perp}$ is commutative. \end{proof} It now follows that the continuous C*-algebras are, in fact, already well-known. \begin{cor}\label{tII/III} $A$ is continuous if and only if it is antiliminal.
\end{cor} \begin{proof} A C*-algebra is antiliminal if and only if it contains no commutative hereditary C*-subalgebras (either by definition, as in \cite{Pedersen1979} 6.1.1, or by a theorem, as in \cite{Li1992} Propositions 13.3.11 and 13.3.12). So if $A$ is antiliminal then it certainly will not contain any abelian annihilators, while conversely if $A$ contains a commutative hereditary C*-subalgebra $B$ then $B^{\perp\perp}$ is an abelian annihilator, by \autoref{commutativesubalgebra}, and so $A$ cannot be continuous. \end{proof} And we now have our first simple application of decomposition. \begin{cor}\label{GCRtypeI} If $A$ is postliminal then it is discrete. \end{cor} \begin{proof} Every C*-subalgebra of a postliminal algebra is postliminal (see \cite{Pedersen1979} Proposition 6.2.9 or \cite{Li1992} Proposition 13.3.5) and, in particular, not antiliminal. Thus the continuous part of $A$ must be $\{0\}$ and $A$ must be discrete. \end{proof} It should be mentioned that there is already a well-known postliminal-antiliminal decomposition theorem, which says that every C*-algebra has a unique postliminal ideal $I$ such that $A/I$ is antiliminal (see \cite{Pedersen1979} Proposition 6.2.7 or \cite{Li1992} Proposition 13.3.6). The continuous/antiliminal part $A_\mathrm{NI}$ obtained from \eqref{tdcor2} (using the type ideal of abelian annihilators) is quite different, being a subalgebra rather than a quotient, and not just any subalgebra either, but an annihilator ideal. It does, however, naturally give rise to an antiliminal quotient. To see how, note that, as the relation $ab=0$, for $a,b\in A^1_+$, is liftable (see \cite{Loring1997} Proposition 10.1.10), whenever $B\in\mathrm{c}[A]^\perp$ and $\pi:A\rightarrow A/B^\perp$ is the canonical homomorphism, $\pi\upharpoonright B$ is injective and $\pi(B)$ is an essential ideal in $A/B^\perp$.
Thus $A/A_\mathrm{I}$ (where $A_\mathrm{I}$ denotes the discrete part of $A$) contains $\pi(A_\mathrm{NI})$ as an essential antiliminal ideal. If $A/A_\mathrm{I}$ contained a non-zero commutative hereditary C*-subalgebra $B$ then $B\cap\pi(A_\mathrm{NI})$ would be a non-zero commutative hereditary C*-subalgebra of $\pi(A_\mathrm{NI})$, a contradiction, so $A/A_\mathrm{I}$ must also be antiliminal. As noted in \S\ref{Motivation}, any infinite dimensional type I von Neumann factor will not be postliminal, so the converse of \autoref{GCRtypeI} fails in general. However, there is the possibility that they could be equivalent in the separable case. If this were true then it would show that the discrete-continuous annihilator decomposition really is stronger than the classical postliminal-antiliminal decomposition in the separable case, because you could obtain the latter as a corollary of the former by the comments in the previous paragraph. Here are some more consequences of \autoref{commutativesubalgebra} that will be important in the next subsection. \begin{cor}\label{comsim} If $B\in[A]^\perp_\mathbf{D}$, $C\in[A]^\perp$ and $B\sim C$ then $C\in[A]^\perp_\mathbf{D}$. \end{cor} \begin{proof} Assume $\{a\}^{\perp\perp}=B$ and $\{a^*\}^{\perp\perp}=C$. As $B$ is commutative, we have $ba^*ad=da^*ab$ and hence $aba^*ada^*=ada^*aba^*$, for all $b,d\in B$. As $B$ is hereditary, so is $\overline{aBa^*}$ (because $aBa^*AaBa^*\subseteq aBa^*$) and \[C=\{a^*\}^{\perp\perp}=\{aa^*aa^*\}^{\perp\perp}\subseteq\overline{aBa^*}^{\perp\perp}\subseteq\{a^*\}^{\perp\perp}=C\] so, by \autoref{commutativesubalgebra}, $C$ is also commutative. \end{proof} \begin{cor}\label{discomden} If $A$ is discrete, $[A]^\perp_\mathbf{D}$ is order-dense in $[A]^\perp$. \end{cor} \begin{proof} Take $B\in[A]^\perp_\mathbf{D}$ with $\mathrm{c}(B)=A$. 
For any other non-zero $C\in[A]^\perp$, we have $\mathrm{c}(C)\wedge\mathrm{c}(B)=\mathrm{c}(C)\neq0$ and hence $bac\neq0$, for some $b\in B$, $a\in A$ and $c\in C$, by \autoref{vo}. So $C'\sim B'\subseteq B$, where $B'=\{cab\}^{\perp\perp}$ and $C'=\{bac\}^{\perp\perp}$, which, as $B$ and hence $B'$ is commutative, means $C'$ is also commutative, by \autoref{comsim}. \end{proof} Before leaving the topic of abelian annihilators, let us point out that, while projection lattices are a complete isomorphism invariant for commutative AW*-algebras (by Stone's representation theorem), the same is not true for annihilator lattices of C*-algebras. Indeed, if $X$ is any topological space with a countable basis of regular open subsets then \[[C^b(X)]^\perp\cong[\mathbb{B}_\infty]^\perp\times\{0,1\}^n,\] where $\mathbb{B}_\infty$ denotes the free Boolean algebra with infinitely many generators and $n$ is the number of isolated points of $X$ (see \cite{Birkhoff1967} Ch IX \S4 Theorem 3). So $[C([0,1])]^\perp$ is orthoisomorphic to $[C([0,1]\times[0,1])]^\perp$, for example, even though $C([0,1])$ and $C([0,1]\times[0,1])$ are not isomorphic C*-algebras (as their spectrums $[0,1]$ and $[0,1]\times[0,1]$ are not homeomorphic, having dimension $1$ and $2$ respectively). \subsection{Homogeneous Annihilators}\label{HomogeneousAnnihilators} We have two competing notions of homogeneity. One is classical, where $A$ is said to be \emph{$n$-homogeneous ($n$-subhomogeneous)}, for some $n\in\mathbb{N}$, if $\dim(H_\pi)=n$ ($\dim(H_\pi)\leq n$), for all $\pi\in\widehat{A}(=$ the set of all irreducible representations of $A)$. The other is the natural one coming from the abelian annihilators, that is the notion of $A$ being $n$-$[A]^\perp_\mathbf{D}$-(sub)homogeneous, according to \autoref{homdef}. We show in this subsection that these concepts are closely related. \begin{prp}\label{[A]hom} If $A$ is $n$-$[A]^\perp$-homogeneous then $A$ is not $(n-1)$-subhomogeneous.
\end{prp} \begin{proof} Take orthogonal $B_1,\ldots,B_n\in[A]^\perp$ with $\mathrm{c}(B_k)=A$, for all $k$. This means that the ideal generated by each $B_k$ is essential in $A$. So, taking any $b_1\in B_1\setminus\{0\}$, we can find $a_1\in A$ and $b_2\in B_2$ with $b_1a_1b_2\neq0$. Continuing in this manner, we obtain $a_1,\ldots,a_{n-1}\in A$ and $b_1,\ldots,b_n$ with $b_k\in B_k$, for all $k$, and $c=b_1a_1b_2a_2\ldots a_{n-1}b_n\neq0$. Let $\pi$ be an irreducible representation of $A$ on a Hilbert space $H$ with $\pi(c)\neq0$. Then $\pi(b_k)\neq0$, for all $k$, which, as the $B_1,\ldots,B_n$ are orthogonal, means that $\dim(H)\geq n$. \end{proof} \begin{dfn} A map $F$ on $A_\mathrm{sa}$, for each C*-algebra $A$, is \emph{functorial} if \[\pi\circ F=F\circ\pi\] whenever $\pi:A\rightarrow B$ is a (possibly non-unital) C*-algebra homomorphism. \end{dfn} \begin{lem} If we have rank one $p_1,\ldots,p_n\in\mathcal{P}(M_n)$ with $p_1\vee\ldots\vee p_n=1$ then there are functorial maps $F=F_{p_1,\ldots,p_n}$ and $G=G_{p_1,\ldots,p_n}$ with $G(p_1,\ldots,p_n)=p_1$ such that, whenever we have rank one $q_1,\ldots,q_n\in\mathcal{P}(M_n)$ and $G(q_1,\ldots,q_n)\neq0$, \[q_1\vee\ldots\vee q_n=1=F(q_1,\ldots,q_n).\] \end{lem} \begin{proof} For projections $p$ and $q$ on a Hilbert space, $\sigma(pq)=\sigma(p^\perp q^\perp)=\sigma(p^\perp q^\perp p^\perp)$.
So, if $0<\delta<1-\sup(\sigma(pq)\setminus\{1\})$, \[f_\delta(1-p^\perp q^\perp p^\perp)=1-(p^\perp q^\perp p^\perp)_{\{1\}}=(p^\perp\wedge q^\perp)^\perp=p\vee q.\] So we get the functorial maps we want by letting $F_{p_1}$ and $G_{p_1}$ be the identity and recursively defining \begin{eqnarray*} F_{p_1,\ldots,p_n}(q_1,\ldots,q_n) &=& f_{\delta/2}(q)\quad\textrm{and}\\ G_{p_1,\ldots,p_n}(q_1,\ldots,q_n) &=& f_\delta(q)G_{p_1,\ldots,p_{n-1}}(q_1,\ldots,q_{n-1}),\quad\textrm{where}\\ q &=& 1-(1-q_n)(1-F_{p_1,\ldots,p_{n-1}}(q_1,\ldots,q_{n-1}))(1-q_n)\quad\textrm{and}\\ \delta &=& 1-||(p_1\vee\ldots\vee p_{n-1})p_n||^2. \end{eqnarray*} \end{proof} \begin{thm}\label{[A]homthm} If $A=B_1\vee\ldots\vee B_n$, for $B_1,\ldots,B_n\in[A]^\perp_{\mathbf{D}}$, then $A$ is $n$-subhomogeneous. \end{thm} \begin{proof} Assume that we have $\pi\in\widehat{A}$ with $\dim(H_\pi)>n$. For all $k\leq n$, $B_k$ is a commutative hereditary C*-subalgebra of $A$ and hence, for all $\pi\in\widehat{A}$, there is a projection $r_{k\pi}$ of rank at most one with $\pi[B_k]=\mathbb{C}r_{k\pi}$. Let \[m=\max\{\mathrm{rank}(\bigvee_{k\leq n}r_{k\pi}):\pi\in\widehat{A}\textrm{ and }\dim(H_\pi)>n\}\] and pick $\pi\in\widehat{A}$ such that $\dim(H_\pi)>n$ and $m=\mathrm{rank}(\bigvee_{k\leq n}r_{k\pi})$. We can then also pick $b_1,\ldots,b_m\in\bigcup_{k\leq n}B_{k+}$ such that $\pi(b_k)$ is a (necessarily rank one) projection, for each $k\leq m$, and $\mathrm{rank}(\bigvee_{k\leq m}\pi(b_k))=m$. Take any $\delta\in(0,1)$ and set \begin{eqnarray*} b_F &=& F_{\pi(b_1),\ldots,\pi(b_m)}(f_{\delta/2}(b_1),\ldots,f_{\delta/2}(b_m))\quad \textrm{and}\\ b_G &=& G_{\pi(b_1),\ldots,\pi(b_m)}(f_{\delta/2}(b_1),\ldots,f_{\delta/2}(b_m)). \end{eqnarray*} By irreducibility, we have $a\in A_+$ with at least $n+1$ points in $\sigma(\pi(a))$. Take continuous functions $g_1,\ldots,g_{n+1}$ on $\mathbb{R}$ with disjoint supports that each have non-empty intersection with $\sigma(\pi(a))$.
Now define \[c=(1-b_F)a_0b_Ga_1g_1(a)\ldots a_{n+1}g_{n+1}(a)a_{n+2}f_\delta(b_1)\ldots a_{n+m+1}f_\delta(b_m),\] where $a_0,\ldots,a_{n+m+1}\in A$ are chosen so that $\pi(c)\neq0$ (which is possible by the irreducibility of $\pi$ and the fact $\mathrm{rank}(\pi(b_F))<\dim(H_\pi)$). Now say we have another $\pi'\in\widehat{A}$ such that $\pi'(c)\neq0$. Then $g_k(\pi'(a))\neq0$, for all $k\leq n+1$, and hence \[\dim(H_{\pi'})\geq|\sigma(\pi'(a))|>n.\] Likewise, for all $k\leq m$, $f_\delta(\pi'(b_k))\neq0$ and hence $f_{\delta/2}(\pi'(b_k))$ is a rank one projection. As we also have $\pi'(b_G)\neq0$, this means that $\pi'(b_F)=\bigvee_{k\leq m}f_{\delta/2}(\pi'(b_k))$ and $\mathrm{rank}(\pi'(b_F))=m$ which, by our choice of $m$, means $\pi'(b_F)=\bigvee_{k\leq n}r_{k\pi'}$. Thus $0=\pi'(b(1-b_F))=\pi'(bc)$, for all $b\in\bigcup_{k\leq n}B_k$, and this also holds of course if $\pi'(c)=0$. As $\pi'$ was arbitrary (and the atomic representation is faithful), it follows that $bc=0$, for all $b\in\bigcup_{k\leq n}B_k$, i.e. $cc^*\in(B_1\vee\ldots\vee B_n)^\perp=A^\perp=\{0\}$, contradicting $\pi(c)\neq0$. \end{proof} \begin{thm}\label{homchar} $A$ is $n$-$[A]^\perp_{\mathbf{D}}$-homogeneous if and only if $A$ contains an $n$-homogeneous essential ideal. \end{thm} \begin{proof} Say $A$ contains an $n$-homogeneous essential ideal $B$. Then $B$ is liminal and hence discrete so $B=\bigvee\mathrm{c}_B[B]^\perp_\mathbf{D}$. As $[B]^\perp_\mathbf{D}$ is order-dense in $[B]^\perp$, by \autoref{discomden}, we have $B=\bigvee C_\lambda$ for $(C_\lambda)\subseteq\mathrm{c}_B[B]^\perp$ with each $C_\lambda$ being $\lambda$-$[B]^\perp_\mathbf{D}$-homogeneous. If we had $C_\lambda\neq\{0\}$, for some $\lambda\neq n$, then we would have $\pi\in\widehat{C}_\lambda$ with $\dim(H_\pi)\neq n$, by either \autoref{[A]hom} or \autoref{[A]homthm}. As $C_\lambda$ is an ideal, this $\pi$ extends to an irreducible representation of $B$, contradicting $n$-homogeneity. 
Thus $B$ is $n$-$[B]^\perp_\mathbf{D}$-homogeneous and hence $A$ is $n$-$[A]^\perp_\mathbf{D}$-homogeneous, by \autoref{ess}. Conversely, assume $A=B_1\vee\ldots\vee B_n$, for orthogonal $B_1,\ldots,B_n\in[A]^\perp_\mathbf{D}$ with $\mathrm{c}(B_k)=A$, for all $k$. This means that the closed ideal $C_k$ generated by $B_k$ is essential in $A$, for all $k$. As the intersection of a pair of essential ideals $E$ and $F$ is again essential (because then, for any $a\in A\setminus\{0\}$, we have $e\in E$ with $ae\neq0$, and then $f\in F$ with $aef\neq0$, where $ef\in E\cap F$), we see that $C=\bigcap C_k$ is an essential ideal in $A$. If $\pi\in\widehat{C}$ then $B_k\not\subseteq\ker(\pi)$, for each $k$, as otherwise we would have $C\subseteq\ker(\pi)$ and $\pi$ would be the zero representation. Thus $\dim(H_\pi)\geq n$. But also $C=B_1\vee_C\ldots\vee_CB_n$ and hence $\dim(H_\pi)\leq n$, by \autoref{[A]homthm}. \end{proof} \begin{cor}\label{subequiv} The following are equivalent. \begin{enumerate} \item\label{subequiv1} $A\in[A]^{\perp n}_{\mathbf{D}}$. \item\label{subequiv2} $A\in[A]^\perp_{\mathbf{D}<n+1}$. \item\label{subequiv3} $A$ is $n$-subhomogeneous. \item\label{subequiv4} There are orthogonal $k$-homogeneous ideals $B_k$ with $B_1\oplus\ldots\oplus B_n$ essential. \end{enumerate} \end{cor} \begin{proof}\ \begin{itemize} \item[\eqref{subequiv1}$\Rightarrow$\eqref{subequiv3}] See \autoref{[A]homthm}. \item[\eqref{subequiv3}$\Rightarrow$\eqref{subequiv2}] If $A$ is $n$-subhomogeneous then it is postliminal and hence discrete, by \autoref{GCRtypeI}. So if $A\notin[A]^\perp_{\mathbf{D}<n+1}$ then there exists non-zero $B\in[A]^\perp_{\mathbf{D}(n+1)}$. By \autoref{[A]hom}, $B$ has representations $\pi$ with $\dim(H_\pi)>n$. By \cite{Pedersen1979} Proposition 4.1.8, the same is true of $A$ and so $A$ is not $n$-subhomogeneous, a contradiction. \item[\eqref{subequiv2}$\Rightarrow$\eqref{subequiv1}] Immediate. \item[\eqref{subequiv2}$\Rightarrow$\eqref{subequiv4}] Immediate from the `only if' part of \autoref{homchar}.
\item[\eqref{subequiv4}$\Rightarrow$\eqref{subequiv2}] If \eqref{subequiv4} holds then, from the `if' part of \autoref{homchar}, we see that each $B_k^{\perp\perp}$ is $k$-$[B_k^{\perp\perp}]^\perp_{\mathbf{D}}$-homogeneous and $A=B_1^{\perp\perp}\vee\ldots\vee B_n^{\perp\perp}$. \end{itemize} \end{proof} We should mention that a more classical proof of \eqref{subequiv3}$\Leftrightarrow$\eqref{subequiv4} could be obtained by utilizing the Jacobson topology on $\widecheck{A}(=\{\ker(\pi):\pi\in\widehat{A}\}=$ the primitive ideal space \textendash\, see \cite{Pedersen1979} \S 4.1), specifically by using \cite{Pedersen1979} Theorem 4.4.6 and Proposition 4.4.10. Another thing $\widecheck{A}$ can be used for is establishing a similar result about $<\!\!\aleph_0$-subhomogeneity. Specifically, we call $A$ \emph{$<\!\!\aleph_0$-subhomogeneous} if $\dim(H_\pi)<\infty$ for all $\pi\in\widehat{A}$. This notion turns out to be different from $<\!\!\aleph_0$-$[A]^\perp_\mathbf{D}$-subhomogeneity (see \autoref{aleph0sub}), but we can at least show it is stronger. \begin{prp} If $A$ is $<\!\!\aleph_0$-subhomogeneous then $A\in[A]^\perp_{\mathbf{D}<\aleph_0}$. \end{prp} \begin{proof} As $A$ is $<\!\!\aleph_0$-subhomogeneous, it is liminal and hence discrete. Thus if $A\notin[A]^\perp_{\mathbf{D}<\aleph_0}$ we would have non-zero $B\in[A]^\perp_{\mathbf{D}\aleph_0}$, meaning there are orthogonal $(B_n)\subseteq[A]^\perp_\mathbf{D}$ with $\mathrm{c}(B_n)=\mathrm{c}(B)$, for all $n$. This means that the subsets of $\widecheck{B}$ defined by $O_n=\{I\in\widecheck{B}:B_n\not\subseteq I\}$ are open dense in the Jacobson topology. Thus $O=\bigcap O_n$ is non-empty, as $\widecheck{B}$ is a Baire space (see \cite{Pedersen1979} Theorem 4.3.5), so $B$, and hence $A$, has an irreducible representation $\pi$ which does not vanish on $B_n$, for any $n$. Thus $\dim(H_\pi)=\infty$, contradicting the $<\!\!\aleph_0$-subhomogeneity of $A$.
\end{proof} \begin{thm}\label{nsubess} If $A$ is $<\!\!\aleph_0$-$[A]^\perp_\mathbf{D}$-subhomogeneous then every essential hereditary C*-subalgebra $B$ of $A$ contains an essential ideal. \end{thm} \begin{proof} First assume that $A$ is $n$-($[A]^\perp_\mathbf{D}$-)subhomogeneous, for some $n\in\mathbb{N}$. Let $\widehat{A}_B=\{\pi\in\widehat{A}:\dim(\pi[B]H_\pi)<\dim(H_\pi)\}$ and \[C=\bigcap_{\pi\in\widehat{A}_B}\ker(\pi).\] This is immediately seen to be an ideal, and $C\subseteq B$ because the atomic representation is faithful on open projections, by \cite{Pedersen1979} Proposition 4.3.13 and Theorem 4.3.15. All we need to verify is that $C^\perp=\{0\}$. Assume to the contrary that we have non-zero $c\in C^\perp_+$. Then $c\notin C$ so we have $\pi\in\widehat{A}_B$ with $\pi(c)\neq0$ and \[\dim(\pi[B]H_\pi)=m=\max\{\dim(\pi'[B]H_{\pi'}):\pi'\in\widehat{A}_B\textrm{ and }\pi'(c)\neq0\}.\] As $B$ is hereditary, so is $\pi[B]$ and we can take $b\in B_+$ with $|\sigma(\pi(b))\setminus\{0\}|=m$. Let $g_1,\ldots,g_m$ be continuous functions on $\mathbb{R}$ with disjoint supports contained in $[\delta,\infty)$, for some $\delta>0$, that each have non-empty intersection with $\sigma(\pi(b))$. Now define \[a=(1-f_\delta(b))a_0g_1(b)a_1\ldots g_m(b)a_mc\in C^\perp,\] where $a_0,\ldots,a_m\in A$ are chosen so that $\pi(a)\neq0$. Take any $b'\in B$. As $a\in C^\perp$ we have $b'a\in C^\perp$ so, if $b'a\neq0$, there exists $\pi'\in\widehat{A}_B$ with $\pi'(b'a)\neq0$. But then $\pi'(a)\neq0$ so $\pi'(c)\neq0$ and $g_k(\pi'(b))\neq0$, for all $k\leq m$. This means $\sigma(\pi'(b))\setminus\{0\}\subseteq[\delta,\infty)$ and, by the definition of $m$, $\pi'(b'(1-f_\delta(b)))=0$, so $\pi'(b'a)=0$, a contradiction. Thus $b'a=0$ which, as $b'$ was arbitrary, means $aa^*\in B^\perp=\{0\}$, contradicting $\pi(a)\neq0$.
For the general case, take an increasing sequence of (annihilator) ideals $(I_n)$ in $A$ such that $\bigcup I_n$ is essential in $A$ and $I_n$ is $n$-subhomogeneous, for all $n$. As $B$ is essential in $A$, and $I_n$ is an ideal in $A$, $B\cap I_n$ is essential in $I_n$ (if $a\in I_{n+}$ then we have $b\in B_+$ with $ab\neq0\neq abab$ and $bab\in B\cap I_{n+}$), and hence contains an essential ideal $C_n$ in $I_n$. But then $\bigcup C_n$ is an essential ideal in $\bigcup I_n$ which, as $\bigcup I_n$ is itself an essential ideal in $A$, means $\bigcup C_n$ is an essential ideal in $A$. \end{proof} \begin{cor}\label{subhom=} If $B\in[A]^\perp$ is $<\!\!\aleph_0$-$[B]^\perp_\mathbf{D}$-subhomogeneous then $[B]=[B]_B$ so \[[A]^\perp_{\mathbf{D}<\aleph_0}\subseteq[A]^\perp_=.\] \end{cor} \begin{proof} Take $C\in[B]$. As $C$ is essential in $C^{\perp_B\perp_B}$, we have an essential ideal $D$ in $C^{\perp_B\perp_B}$ with $D\subseteq C$, by \autoref{nsubess}. Then $C^{\perp_B\perp_B}=C^{\perp_B\perp_B\perp\perp}=D^{\perp\perp}=C$, by \autoref{essidann}. \end{proof} \begin{cor} If $B$ is a hereditary C*-subalgebra of $A$ and $B\in[B]^\perp_{\mathbf{D}n}$ then \[B^{\perp\perp}\in[A]^\perp_{\mathbf{D}n}.\] \end{cor} \begin{proof} Say $B_1,\ldots,B_n\in[B]^\perp_\mathbf{D}$ witness $B\in[B]^\perp_{\mathbf{D}n}$. Then $B_1^{\perp\perp},\ldots,B_n^{\perp\perp}\in[A]^\perp_\mathbf{D}$, by \autoref{commutativesubalgebra}, and $C=B_1^{\perp\perp}\vee\ldots\vee B_n^{\perp\perp}\in[A]^\perp_{\mathbf{D}n}$, by \autoref{c_B}. We do not immediately know that $B\subseteq C$, as supremums in $B$ and $A$ can differ, but $C$ will contain the hereditary C*-subalgebra generated by $B_1,\ldots,B_n$. As this hereditary C*-subalgebra is essential in $B$, it contains an essential ideal $D$ in $B$, by \autoref{nsubess}. By \autoref{essidann}, we have $B^{\perp\perp}=D^{\perp\perp}\subseteq C\subseteq B^{\perp\perp}$. 
\end{proof} \begin{thm}\label{Dnsim} If $B,C\in[A]^\perp$ are perspective and $B\in[A]^\perp_{\mathbf{D}n}$ then $C\in[A]^\perp_{\mathbf{D}n}$. \end{thm} \begin{proof} It suffices to prove the result for semiorthoperspective $B$ and $C$ (see \S\ref{persec}). By \autoref{discomden}, $[A]^\perp_\mathbf{D}$ is order-dense in $[\bigvee \mathrm{c}[A]^\perp_\mathbf{D}]\supseteq[\mathrm{c}(B)]=[\mathrm{c}(C)]$. By \autoref{commutativesubalgebra}, $[C]^\perp_\mathbf{D}$ is order-dense in $C$ so by \autoref{homthm}, $C$ is $[C]^\perp_\mathbf{D}$-subhomogeneous. If $C$ is not $n$-$[C]^\perp_\mathbf{D}$-homogeneous then one of the non-zero homogeneous parts $D$ has order $<n$ or $>n$. In the first case, $D=C\cap\mathrm{c}(D)$ and $B\cap\mathrm{c}(D)$ are semiorthoperspective. In the second, we have $E\in[D]$ that is $(n+1)$-$[C]^\perp_\mathbf{D}$-homogeneous. Then $[E]_E=[E]$ and $[B]=[B]_B$, by \autoref{subhom=}, and hence $E$ is semiorthoperspective to some $F\in[B]$, by \autoref{sopcor}. By cutting down further by a central element, we may assume that $F$ is $m$-homogeneous for some $m\leq n$. So in either case, after some renaming, we have $m$-$[A]^\perp_\mathbf{D}$-homogeneous $B$ semiorthoperspective to $n$-$[A]^\perp_\mathbf{D}$-homogeneous $C$ with $m<n$. We show that this leads to a contradiction. Take orthogonal $B_1,\ldots,B_m\in[A]^\perp_\mathbf{D}$ with $\mathrm{c}(B_k)=\mathrm{c}(B)$, for all $k\leq m$. By \autoref{vo}, we have $B_1'\subseteq B_1$ and $B_2'\subseteq B_2$ such that $B_1'\sim B_2'$. We also have non-zero $B_2''\subseteq B_2'$ and $B_3''\subseteq B_3$ with $B_2''\sim B_3''$. We then also have $B_1''\subseteq B_1'$ with $B_1''\sim B_2''$, by \autoref{simtran}. Continuing in this way, and again renaming, we get non-zero orthogonal $B_1\sim\ldots\sim B_m\sim C_1\sim\ldots\sim C_n$ in $[A]^\perp_\mathbf{D}$ such that $B\cap D=B_1\vee\ldots\vee B_m$ and $C\cap D=C_1\vee\ldots\vee C_n$, where $D=\mathrm{c}(B_1)=\ldots=\mathrm{c}(C_n)$. 
As $D$ is central, replacing $B$ and $C$ with $B\cap D$ and $C\cap D$ we still see that $B$ and $C$ are semiorthoperspective. As they are also principal, we have $B\sim C$, by \autoref{posp'}. This means that $C=D_1\vee\ldots\vee D_m$, with $B_k\sim D_k$, for all $k\leq m$. By \autoref{comsim}, each $D_k$ will be abelian and hence $C$ will be $m$-subhomogeneous, by \autoref{[A]homthm}, contradicting \autoref{[A]hom}. \end{proof} \begin{cor}\label{nsubcor} Assume $A$ is $<\!\!\aleph_0$-$[A]^\perp_\mathbf{D}$-subhomogeneous. \begin{enumerate} \item\label{nsubcor1} Every open dense projection in $A''$ is regular. \item\label{nsubcor2} The function $||BC^\perp||$, for $B,C\in[A]^\perp$, satisfies the triangle inequality. \item\label{nsubcor3} $[A]^\perp$ is modular, and hence a continuous geometry. \end{enumerate} \end{cor} \begin{proof}\ \begin{enumerate} \item Every dense $p\in\mathcal{P}^\circ(A'')$ dominates some dense $q\in\mathcal{P}^\circ(A''\cap A')$, by \autoref{nsubess}. As central projections are necessarily regular (see \cite{Effros1963} \S6), $p$ must also be regular. \item Take $B,C,D\in[A]^\perp$ and dense $q\in\mathcal{P}^\circ(A''\cap A')$ with $q\leq p_C+p_{C^\perp}$. For every $b\in B^1_+$ and $d\in D^{\perp1}_+$, we have \[||bd||=||bdq||=||bqd||\leq||b(p_C+p_{C^\perp})d||\leq||bp_{C^\perp}||+||p_Cd||,\] and hence $||BD^\perp||\leq||BC^\perp||+||CD^\perp||$. \item We prove that perspectivity is finite (see \eqref{modperfin}). If not, we would have perspective $B,C\in[A]^\perp$ with $C<B$. By cutting down by a central element, we may assume that $C$ is $n$-$[A]^\perp_\mathbf{D}$-homogeneous, for some $n\in\mathbb{N}$. By \autoref{subhom=}, $C\in[B]=[B]_B$ so $C<B$ means we have non-zero $b\in B_+\cap C^\perp$. As $\mathrm{c}(B)=\mathrm{c}(C)$, by perspectivity, the proof of \autoref{[A]hom} (starting with $b$ instead of $b_1$) shows that $B$ is not $n$-subhomogeneous.
However, $B$ is $n$-$[A]^\perp_\mathbf{D}$-homogeneous, by \autoref{Dnsim}, and hence $n$-subhomogeneous, by \autoref{[A]homthm}, a contradiction. \end{enumerate} \end{proof} So, by \autoref{nsubcor}\eqref{nsubcor3} and \cite{Kaplansky1955}, the annihilators $[A]^\perp$ in a $<\!\!\aleph_0$-$[A]^\perp_\mathbf{D}$-subhomogeneous C*-algebra $A$ form a continuous geometry. As far as we know, these kinds of continuous geometries have not been investigated before. \subsection{Matrix Valued Functions}\label{MVF} \begin{dfn}\label{lscts} Assume $X$ and $Y$ are topological spaces. We denote the set of continuous functions from $X$ to $Y$ by $C(X,Y)$. If $A$ is a C*-algebra, we call $f:X\rightarrow A$ \emph{bounded} if $\sup_{x\in X}||f(x)||<\infty$. The C*-algebras (with pointwise operations) of bounded and bounded continuous functions from $X$ to $A$ will be denoted by $B(X,A)$ and $C^b(X,A)$ respectively. We call $p:X\rightarrow\mathcal{P}(A)$ \emph{lower semicontinuous} at $x\in X$ if \[\forall\epsilon>0\exists\textrm{ a neighbourhood $Y$ of $x$ s.t. }\forall y\in Y(||p(y)^\perp p(x)||<\epsilon).\] We say that $p$ is lower semicontinuous if it is lower semicontinuous at every $x\in X$ and denote the set of all lower semicontinuous $p$ by $C^\circ(X,\mathcal{P}(A))$. We call $p$ \emph{upper semicontinuous} if $p^\perp$ (i.e. the function defined by $p^\perp(x)=p(x)^\perp$, for all $x\in X$) is lower semicontinuous and denote the set of upper semicontinuous $p$ by $\overline{C}(X,\mathcal{P}(A))$.
\end{dfn} By \autoref{||p-q||}, $p:X\rightarrow\mathcal{P}(A)$ will be continuous at $x$ if and only if it is simultaneously upper and lower semicontinuous at $x$ and hence \[C(X,\mathcal{P}(A))=\overline{C}(X,\mathcal{P}(A))\cap C^\circ(X,\mathcal{P}(A)).\] Also note that if $d:\mathcal{P}(A)\rightarrow\mathbb{R}$ is continuous and monotone in the sense that $p\leq q\Rightarrow d(p)\leq d(q)$ then, by \autoref{pnearq} \eqref{pnearq1}$\Rightarrow$\eqref{pnearq3}, $d\circ p$ will be lower semicontinuous in the usual sense (i.e. $(d\circ p)^{-1}(r,\infty)$ is open, for all $r\in\mathbb{R}$) if $p$ is lower semicontinuous according to \autoref{lscts}. In particular, $d$ could be the restriction to $\mathcal{P}(A)$ of a dimension function on $A$. For $f:X\rightarrow A$, $[f]$ denotes the map $x\mapsto[f(x)]=(f(x)f(x)^*)_{(0,\infty)}$ to $\mathcal{P}(A'')$. \begin{prp}\label{finspec} If $f\in C(X,A_+)$, $x\in X$ and $f(x)$ has finite spectrum then $[f]$ is lower semicontinuous at $x$. \end{prp} \begin{proof} Say $\delta>0$, $a,b\in A$, $||a-b||\leq\delta$ and $\lambda\in\sigma(b)$. For unit $v\in\mathcal{R}(b_{\{\lambda\}})$ we have \[\lambda||a_{\{0\}}v||=||a_{\{0\}}(a-\lambda)v||\leq||(a-\lambda)v||\leq||(b-\lambda)v||+\delta=\delta,\] and hence $||a_{\{0\}}b_{\{\lambda\}}||\leq\delta/\lambda$. As $f(x)_{(0,\infty)}$ is a finite sum of spectral projections, this calculation shows that, for any $\epsilon>0$, by choosing a sufficiently small neighbourhood $Y$ of $x$, we can ensure that $||f(y)_{\{0\}}f(x)_{(0,\infty)}||\leq\epsilon$, for all $y\in Y$. \end{proof} If $f(x)$ has infinite spectrum then $[f]$ can indeed fail to be lower semicontinuous at $x$. For example, let $X=\mathbb{N}_\infty$ (the one-point compactification $\mathbb{N}\cup\{\infty\}$ of $\mathbb{N}$) and $A=\mathcal{K}(H)$, where $H$ is an infinite dimensional separable Hilbert space with orthonormal basis $(e_n)$. Let $p_n$ denote the projection onto $\mathbb{C}e_n$, for each $n\in\mathbb{N}$.
Now consider $f\in C(X,A_+)$ defined by $f(n)=\sum_{k=1}^n2^{-k}p_{2k}+\sum_{k=n+1}^\infty2^{-k}p_{2k+1}$ and $f(\infty)=\sum_{k=1}^\infty2^{-k}p_{2k}$. Note that $[f](n)=\sum_{k=1}^np_{2k}+\sum_{k=n+1}^\infty p_{2k+1}$ and $[f](\infty)=\sum_{k=1}^\infty p_{2k}$ so $||[f](n)^\perp[f](\infty)||=1$, for all $n\in\mathbb{N}$, and hence $[f]$ is not lower semicontinuous at $\infty$. \begin{lem}\label{[pq]} The map $(p,q)\mapsto[pq]$ is lower semicontinuous on \[\{(p,q)\in\mathcal{P}(A)\times\mathcal{P}(A):\inf(\sigma(pq)\setminus\{0\})>0\}.\] \end{lem} \begin{proof} If $\inf(\sigma(pq)\setminus\{0\})>0$ then $r=(qpq)_{(0,1]}\in A$, $||p^\perp r||<1$ and $[pq]=[pr]\in A$. By \cite{Bice2012} Lemma 2.3, for any $\epsilon>0$ we can choose $\delta>0$ so that $||p-p'||,||r-r'||\leq\delta$ implies $||[p'r']-[pr]||\leq\epsilon$. But then $||p-p'||,||q-q'||\leq\delta$ implies $r'=(q'rq')_{(0,1]}\in A$ and $||r-r'||<\delta$ (see \cite{Bice2012} (2.3)) and hence \[||[p'q']^\perp[pq]||\leq||[p'r']^\perp[pr]||\leq||[p'r']-[pr]||\leq\epsilon.\] \end{proof} \begin{lem}\label{pveeq} The map $(p,q)\mapsto p\vee q$ is lower semicontinuous on \[\{(p,q)\in\mathcal{P}(A)\times\mathcal{P}(A):\sup(\sigma(pq)\setminus\{1\})<1\}.\] \end{lem} \begin{proof} If $\sup(\sigma(pq)\setminus\{1\})<1$ then $r=(qpq)_{[0,1)}\in A$, $||pr||<1$ and $p\vee q=p\vee r\in A$. By \cite{Bice2012} Lemma 2.4, for any $\epsilon>0$ we can choose $\delta>0$ so that $||p-p'||,||r-r'||\leq\delta$ implies $||p'\vee r'-p\vee r||\leq\epsilon$. 
But then $||p-p'||,||q-q'||\leq\delta$ implies $r'=(q'rq')_{(0,1]}\in A$ and $||r-r'||<\delta$ (see \cite{Bice2012} (2.3)) and hence \[||(p'\vee q')^\perp(p\vee q)||\leq||(p'\vee r')^\perp(p\vee r)||\leq||p'\vee r'-p\vee r||\leq\epsilon.\] \end{proof} If $p,q\in C^\circ(X,\mathcal{P}(M_n))$ then $\sup(\sigma(p(x)q(x))\setminus\{1\})<1$, for all $x\in X$, so it follows from \autoref{pveeq} and \autoref{pnearq} \eqref{pnearq1}$\Rightarrow$\eqref{pnearq3} that $p\vee q\in C^\circ(X,\mathcal{P}(M_n))$ (where $(p\vee q)(x)=p(x)\vee q(x)$, for all $x\in X$). Also, if $(p_n)\subseteq C^\circ(X,\mathcal{P}(M_n))$ is increasing (i.e. for any $x\in X$, $(p_n(x))$ is (non-strictly) increasing in $\mathcal{P}(M_n)$) then, for any $x\in X$, it follows that $\bigvee_n p_n(x)=p_m(x)$, for some $m\in\mathbb{N}$, which, as $p_m\leq\bigvee_np_n$ and $p_m$ is lower semicontinuous, means that $\bigvee_np_n$ is lower semicontinuous at $x$. As $x\in X$ was arbitrary, $\bigvee p_n\in C^\circ(X,\mathcal{P}(M_n))$. This applies equally well to transfinite sequences and thus, combining these facts, we see that $C^\circ(X,\mathcal{P}(M_n))$ is closed under taking arbitrary suprema, i.e. \begin{equation}\label{bigveeP} \mathcal{P}\subseteq C^\circ(X,\mathcal{P}(M_n))\quad\Rightarrow\quad\bigvee\mathcal{P}\in C^\circ(X,\mathcal{P}(M_n)). \end{equation} Given a Hilbert space $H$, we represent the C*-algebra of bounded functions $B(X,\mathcal{B}(H))$ from $X$ to $\mathcal{B}(H)$ on $\bigoplus_{x\in X}H$ by defining $f(v_x)_{x\in X}=(f(x)v_x)_{x\in X}$, for any $f\in B(X,\mathcal{B}(H))$ and $(v_x)_{x\in X}\in\bigoplus_{x\in X}H$. Thus, our assumption that $A$ is represented concretely and faithfully on some Hilbert space $H$ means we can identify $C^b(X,A)''$ with a subset of $B(X,A'')$ and then use non-commutative topological concepts on $B(X,\mathcal{P}(\mathcal{B}(H)))$ with reference to $C^b(X,A)$.
In fact, if $X$ is a completely regular Hausdorff topological space then it is not difficult to see that $C^b(X,A)''$ will be identified in this way with the entirety of $B(X,A'')$, i.e. \[C^b(X,A)''=B(X,A'').\] \begin{dfn} For any function $f$ between topological spaces we denote the set of points on which $f$ is continuous by $C_f$. \end{dfn} \begin{prp}\label{pcircp} If $X$ is a completely regular topological space and $p:X\rightarrow\mathcal{P}(A)$ then $p^\circ(x)=p(x)=\overline{p}(x)$, for all $x\in C_p^\circ$. \end{prp} \begin{proof} For any $x\in C_p^\circ$ we have a function $g:X\rightarrow[0,1]$ with $g(x)=1$ and $g[X\setminus C_p^\circ]=\{0\}$. Then $gp$ (the map $x\mapsto g(x)p(x)$) is continuous on $X$, $gp\leq p$ and $g(x)p(x)=p(x)$. As $p^\circ=\sup(pC^b(X,A)p\cap C^b(X,A))^1_+$, the first statement is proved. The second follows from this, $C_p=C_{p^\perp}$ and $\overline{p}=p^{\perp\circ\perp}$. \end{proof} Representing $M_n$ canonically on $\mathbb{C}^n$, we have $M_n''=M_n$, and thus we can identify $C^b(X,M_n)''$ with a subset of $B(X,M_n)$. The following result says that, under this identification, the open and closed projections in $C^b(X,M_n)''$ are precisely the lower and upper semicontinous projection functions respectively. \begin{prp}\label{normregopenclosed} If $X$ is a normal regular topological space then \[C^\circ(X,\mathcal{P}(M_n))=\mathcal{P}(C^b(X,M_n)'')^\circ\qquad\textrm{and}\qquad\overline{C}(X,\mathcal{P}(M_n))=\overline{\mathcal{P}(C^b(X,M_n)'')}\] \end{prp} \begin{proof} First note that $[f]\in C^\circ(X,\mathcal{P}(M_n))$ whenever $f\in C(X,M_n)$, by \autoref{finspec}. Thus, for any hereditary C*-subalgebra $B$ of $C^b(X,M_n)$, we have $\sup B^1_+=\bigvee\{[f]:f\in B^1_+\}\in C^\circ(X,\mathcal{P}(M_n))$, by \eqref{bigveeP}, which means that $\mathcal{P}^\circ(C^b(X,M_n)'')\subseteq C^\circ(X,\mathcal{P}(M_n))$. 
Given $p\in C^\circ(X,\mathcal{P}(M_n))$ and $x\in X$, let $Y$ be a closed neighbourhood of $x$ such that $||p(y)^\perp p(x)||<1$, for all $y\in Y$. For each $m\in\mathbb{N}$, the set \[Y_m=\{y\in Y:\dim(p(y))\leq\dim(p(x))+m\}\] is closed. The function defined by $f_0(y)=[p(y)p(x)]$ is continuous on $Y_0$ and thus has a continuous extension $g_0:X\rightarrow M_n^1$, by the Tietze extension theorem. The function defined by $f_1(y)=p(y)g_0(y)^*g_0(y)p(y)$ is continuous on $Y_1$ and, continuing to apply the Tietze extension theorem in this way, we eventually obtain $f=f_{n-\dim(p(x))}\in C(Y,M_{n+})$ with $f(y)\leq p(y)$, for all $y\in Y$. As $X$ is completely regular, we have a function $h:X\rightarrow[0,1]$ with $h(x)=1$ and $h[X\setminus Y]=\{0\}$. The function $f_x$ defined by $f_x(y)=h(y)f(y)$, for all $y\in Y$, and $f_x(y)=0$, for $y\in X\setminus Y$, is continuous on all of $X$. We thus have $p=\sup_{x\in X}f_x\in\mathcal{P}^\circ(C^b(X,M_n)'')$. \end{proof} We suspect that normality here cannot be replaced with complete regularity. To see this, all you would need is a continuous function $f:Y\rightarrow[0,1]$, where $Y$ is a closed subset of a completely regular space $X$ such that $Y$ is not locally compact and $f$ has no extension to any $Z\subseteq\overline{Y}^{\beta X}$ properly containing $Y$ ($\beta X$ here denotes the Stone-\v{C}ech compactification of $X$ \textendash\, see \cite{GillmanJerison1960}). Identifying $[0,1]$ with a subspace of $\mathcal{P}(M_2)$ and extending $f$ to $X$ by defining $f(x)$ to be the identity of $M_2$, for $x\notin Y$, we see that $f\in C^\circ(X,\mathcal{P}(M_2))$. If $f\in\mathcal{P}^\circ(C^b(X,M_2)'')$ then we would have a net $(f_n)\subseteq C^b(X,M^1_{2+})$ with $f=\sup f_n$ and hence $f^\beta=\sup f^\beta_n$ (where $f^\beta_n$ denotes the unique continuous extension of $f_n$ to $\beta X$) would be a lower semicontinuous extension of $f$ to $\beta X$. Then $f$ would have to be $0$ on $\overline{Y}^{\beta X}\setminus Y$.
But $Y$ is not locally compact so $\overline{Y}^{\beta X}\setminus Y$ is not closed and hence we would have $f(y)=0$ for some $y\in Y$, a contradiction. \begin{thm}\label{Cp} For every $p\in C^\circ(X,\mathcal{P}(M_n))$, $C_p$ is open dense. \end{thm} \begin{proof} For finite rank projections $q$ and $r$, $\dim(q)=\dim(r)$ implies $||q-r||=||qr^\perp||=||rq^\perp||$ and hence $C_p=C_{\dim\circ p}$. As noted after \autoref{lscts}, $\dim\circ p$ is lower semicontinuous and, as $\{0,\ldots,n\}$ is discrete, $\dim\circ p$ will be constant, and hence continuous, on some open neighbourhood about any $x\in C_{\dim\circ p}$. Now note that $\dim(p(x))<n$, for all $x\in X\setminus C_p$, by discontinuity. If $X\setminus C_p$ contained an open set $O$, then we would have $\dim(p(x))<n-1$, for all $x\in O$, again by discontinuity. But then $\dim(p(x))<n-2$, for all $x\in O$, etc. and hence we eventually get $\dim(p(x))=0$, for all $x\in O$, which shows that $\dim\circ p$ is continuous on $O$, a contradiction, i.e. $C_p$ is open dense. \end{proof} \begin{dfn} Given functions $f,g$ on a topological space $X$, we define \begin{eqnarray*} f=_\mathrm{od}g &\Leftrightarrow& \{x\in X:f(x)=g(x)\}^\circ\textrm{ is dense, and}\\ f=_\mathrm{d}g &\Leftrightarrow& \{x\in X:f(x)=g(x)\}\textrm{ is dense}. \end{eqnarray*} \end{dfn} Note that $=_\mathrm{od}$ is a transitive relation, while $=_\mathrm{d}$ may not be. However, they coincide for the functions we are considering. \begin{prp}\label{DpDq} If $X$ is a completely regular topological space, the following are equivalent, for $p,q\in C^\circ(X,\mathcal{P}(M_n))$. \begin{enumerate} \item\label{DpDq1} $\overline{p}=\overline{q}$. \item\label{DpDq2} $C_p\cap C_q\subseteq\{x\in X:p(x)=q(x)\}$. \item\label{DpDq3} $p=_\mathrm{od}q$. \item\label{DpDq4} $p=_\mathrm{d}q$. \end{enumerate} If these conditions hold and $p\leq q$ then $C_p\subseteq C_q$, so $C_p\subseteq\{x\in X:p(x)=q(x)\}$. 
\end{prp} \begin{proof}\ \begin{itemize} \item[\eqref{DpDq1}$\Rightarrow$\eqref{DpDq2}] Assume $p(x)\nleq q(x)$ for some $x\in C_p\cap C_q$. As $X$ is completely regular, we have a continuous function $f:X\rightarrow[0,1]$ such that $f(x)=1$ and $f[X\setminus C_q]=\{0\}$. Letting $g(y)=f(y)q(x)^\perp$, for all $y\in X$, we see that $g$ is continuous and $p(x)g(x)\neq0$, so $g$ witnesses the fact $q^{\perp\circ}\nleq p^{\perp\circ}$ and hence $\overline{p}=p^{\perp\circ\perp}\neq q^{\perp\circ\perp}=\overline{q}$. \item[\eqref{DpDq2}$\Rightarrow$\eqref{DpDq3}] Immediate from \autoref{Cp}. \item[\eqref{DpDq3}$\Rightarrow$\eqref{DpDq4}] Immediate from the definitions. \item[\eqref{DpDq4}$\Rightarrow$\eqref{DpDq1}] If $f\in p^\perp C(X,M_n)p^\perp\cap C^b(X,M_n)^1_+$ then $pf$ is $0$ everywhere on $X$. If $p=_\mathrm{d}q$ then $qf$ is $0$ on a dense subset of $X$ which, as $||qf||$ is immediately seen to be lower semicontinuous, means $qf$ is $0$ on the entirety of $X$, i.e. $f\in q^\perp C(X,M_n)q^\perp\cap C^b(X,M_n)^1_+$. Thus $p^{\perp\circ}=q^{\perp\circ}$ and hence we have $\overline{p}=p^{\perp\circ\perp}=q^{\perp\circ\perp}=\overline{q}$. \end{itemize} For the last statement, assume that $x\in C_p\setminus C_q$, so there is an open neighbourhood $Y$ of $x$ such that $\dim(p(y))=\dim(p(x))$, for all $y\in Y$. As $x\notin C_q$, $x$ must be a limit point of the open set $Y\cap\{y\in X:\dim(q(y))>\dim(q(x))\}$ so, in particular, this set is not empty. As $p(x)\leq q(x)$, it is contained in $\{y\in X:p(y)\neq q(y)\}$, showing that this set has non-empty interior and $p\neq_\mathrm{d}q$. \end{proof} Say $X$ is a completely regular topological space and we have a $=_\mathrm{od}$ equivalence class $E\subseteq C^\circ(X,\mathcal{P}(M_n))$. For any $p\in E$, we have $\overline{p}^\circ\in E$, by \autoref{pcircp} and \autoref{Cp}. And, for any other $q\in E$, we have $\overline{p}=\overline{q}$ and hence $\overline{p}^\circ=\overline{q}^\circ$, by \autoref{DpDq}, i.e. 
$E$ contains precisely one topologically open projection. This will in fact be the maximum of $E\cap\mathcal{P}(C^b(X,M_n)'')^\circ$ and hence, if $X$ is normal, the maximum of $E$, by \autoref{normregopenclosed}. \begin{prp}\label{commute=od} If $X$ is completely regular and $p,q\in\overline{\mathcal{P}(C^b(X,M_n)'')}^\circ$, \[p\textrm{ and }q\textrm{ commute in }\overline{\mathcal{P}(C^b(X,M_n)'')}^\circ\quad\Leftrightarrow\quad pq=_\mathrm{od}qp.\] \end{prp} \begin{proof} For any $p,q\in\overline{\mathcal{P}(C^b(X,M_n)'')}^\circ$, we have $p,q\in C^\circ(X,\mathcal{P}(M_n))$ by the first part of the proof of \autoref{normregopenclosed}. We then have $p\vee q\in C^\circ(X,\mathcal{P}(M_n))$, by \eqref{bigveeP}, and hence $\overline{p\vee q}^\circ=_\mathrm{od}p\vee q$, by \autoref{pcircp} and \autoref{Cp}. Likewise, we have $p^{\perp\circ}=_\mathrm{od}p$ and hence $p\wedge_{\overline{\mathcal{P}(C^b(X,M_n)'')}^\circ}q=(p^{\perp\circ}\vee q^{\perp\circ})^{\perp\circ}=_\mathrm{od}p\wedge q$. Thus \begin{equation}\label{commute=odeq} \overline{(p^{\perp\circ}\vee q^{\perp\circ})^{\perp\circ}\vee(p^{\perp\circ}\vee q)^{\perp\circ}}^\circ=_\mathrm{od}(p\wedge q)\vee(p\wedge q^\perp). \end{equation} If $p$ and $q$ commute in $\overline{\mathcal{P}(C^b(X,M_n)'')}^\circ$ then \eqref{commute=odeq} becomes $p=_\mathrm{od}(p\wedge q)\vee(p\wedge q^\perp)$ which is equivalent to $pq=_\mathrm{od}qp$. If $pq=_\mathrm{od}qp$ then $p=_\mathrm{od}\overline{(p^{\perp\circ}\vee q^{\perp\circ})^{\perp\circ}\vee(p^{\perp\circ}\vee q)^{\perp\circ}}^\circ$ which, as both sides here are topologically regular, means $=_\mathrm{od}$ is actually $=$ and hence $p$ and $q$ commute in $\overline{\mathcal{P}(C^b(X,M_n)'')}^\circ$. \end{proof} As $[A]^\perp$ and $\overline{\mathcal{P}(A'')}^\circ$ are isomorphic (see \autoref{annproj}), the relation $\sim$ we have on $[A]^\perp$ can be seen as a relation on $\overline{\mathcal{P}(A'')}^\circ$, which we will also denote by $\sim$, i.e. 
\[p\sim q\quad\Leftrightarrow\quad\exists a\in A(p=\overline{[a]}^\circ\textrm{ and }q=\overline{[a^*]}^\circ).\] \begin{thm}\label{BCpara} If $X$ is a hereditarily paracompact Hausdorff topological space then, for $p,q\in\overline{\mathcal{P}(C^b(X,M_n)'')}^\circ$, \[p\sim q\qquad\Leftrightarrow\qquad\dim\circ p=_\mathrm{od}\dim\circ q\] \end{thm} \begin{proof} If $a\in A$ witnesses $p\sim q$ then (even if $X$ is just completely regular), by \autoref{finspec}, \autoref{pcircp} and \autoref{Cp}, \[\dim\circ p=\dim\circ\overline{[a]}^\circ=_\mathrm{od}\dim\circ[a]=\dim\circ[a^*]=_\mathrm{od}\dim\circ\overline{[a^*]}^\circ=\dim\circ q.\] On the other hand, if $\dim\circ p=_\mathrm{od}\dim\circ q$ then, for all $x\in C_p\cap C_q$, $\dim(p(x))=\dim(q(x))$. Let $u$ be a partial isometry with $u^*u=p(x)$ and $uu^*=q(x)$. Choose an open neighbourhood $Y_x$ of $x$ with $Y_x\subseteq C_p\cap C_q$ and $||p(x)-p(y)||,||q(x)-q(y)||<1$, for all $y\in Y_x$. This means $p(x)p(y)$ has a polar decomposition for all $y\in Y_x$ and, moreover, the partial isometry appearing in this decomposition can be chosen continuously on $Y_x$. Specifically, let $v(y)=p(x)p(y)\sqrt{p(y)p(x)p(y)}^{-1}$, where $^{-1}$ here denotes the quasi-inverse, so $v(y)$ is a partial isometry with $v(y)^*v(y)=p(y)$ and $v(y)v(y)^*=p(x)$, and $v$ is continuous on $Y_x$ (alternatively, this can be derived from \cite{Blackadar2006} II.3.3.4). Likewise, define a continuous function $w$ on $Y_x$ so that $w(y)^*w(y)=q(x)$ and $w(y)w(y)^*=q(y)$, for all $y\in Y_x$. Take a locally finite refinement $(Z_\alpha)$ of $(Y_x)_{x\in C_p\cap C_q}$. By replacing each $Z_\alpha$ with $Z_\alpha\setminus\overline{\bigcup_{\beta<\alpha}Z_\beta}$ (and then throwing out the resulting empty sets), we obtain a locally finite collection of disjoint open sets with $\overline{\bigcup Z_\alpha}=X$.
For each $\alpha$ pick $x$ with $Z_\alpha\subseteq Y_x$, choose a function $g_\alpha:X\rightarrow[0,1]$ with $g_\alpha(x)=1$ and $g_\alpha[X\setminus Z_\alpha]=\{0\}$, and set $f(z)=g_\alpha(z)w(z)uv(z)$, for all $z\in Z_\alpha$. Defining $f(z)=0$ for all $z\in X\setminus\bigcup Z_\alpha$ we see that the local finiteness of $(Z_\alpha)$ implies $f$ is continuous on all of $X$, i.e. $f\in C^b(X,M_n)$. Furthermore, $[f]=_\mathrm{od}q$ and $[f^*]=_\mathrm{od}p$ and so we must have $\overline{[f]}^\circ=q$ and $\overline{[f^*]}^\circ=p$. \end{proof} We now show that \autoref{BCpara} can be used to prove a number of important facts about C*-algebras of the form $C^b(X,M_n)$. \begin{cor}\label{matfunfin} If $A$ is isomorphic to a C*-algebra of the form $C^b(X,M_n)$, for any hereditarily paracompact Hausdorff topological space $X$, then \begin{enumerate} \item\label{matfunfin1} $A$ is anniseparable, \item\label{matfunfin2} $\sim$ is finite on $[A]^\perp$, \item\label{matfunfin3} $[A]^\perp$ is modular, and \item\label{matfunfin4} $\sim$ coincides with perspectivity. \end{enumerate} \end{cor} \begin{proof} \eqref{matfunfin1} follows from \autoref{BCpara} and the fact $=_\mathrm{od}$ is reflexive. For \eqref{matfunfin2}, simply note that, for $p,q\in\overline{\mathcal{P}(C^b(X,M_n)'')}^\circ$ with $p\leq q$, $\dim\circ p=_\mathrm{od}\dim\circ q$ implies $p=_\mathrm{od}q$ and hence $p=q$, by \autoref{DpDq}. Now \eqref{matfunfin3} and \eqref{matfunfin4} follow from \autoref{permod}. \end{proof} Lastly, let us point out that any C*-algebra of the form $C^b(X,M_n)$ will actually be isomorphic to the C*-algebra $C(\beta X,M_n)$, where $\beta X$ is the Stone-\v{C}ech compactification of $X$. So we could have restricted ourselves to compact $X$ without restricting the class of C*-algebras under consideration. However, as mentioned in \S\ref{NCT}, the Stone-\v{C}ech compactification can often be significantly harder to work with than $X$ itself, which is why we chose not to do this (e.g.
it is not clear to us that \autoref{BCpara} would apply to $\beta X$ if it applied to $X$, i.e. we do not know if the Stone-\v{C}ech compactification of a hereditarily paracompact space is again hereditarily paracompact). In fact, if $X$ is compact Hausdorff then the representation of $C(X,M_n)$ coming from the canonical representation of $M_n$ on $\mathbb{C}^n$ is just the atomic representation of $C(X,M_n)$. As every such space is normal and regular, \autoref{normregopenclosed} shows that lower semicontinuous projection functions correspond precisely to the open projections which, in turn, correspond precisely to the hereditary C*-subalgebras of $C(X,M_n)$, by \cite{Pedersen1979} Proposition 3.11.9, Proposition 4.3.13 and Theorem 4.3.15. Analogous theorems for $C_0(X,M_n)$, where $X$ is locally compact, can also be proved in much the same way, or can be derived from the corresponding theorems for $C(X_\infty,M_n)$ (where $X_\infty$ is the one point compactification of $X$), using \autoref{ess} and the fact that $C_0(X,M_n)$ is an essential ideal in $C(X_\infty,M_n)$. There is also presumably some room for the results of this subsection to be generalized to non-trivial Hilbert and C*-bundles (see \cite{RaeburnWilliams1998}). Indeed, we could have derived \autoref{matfunfin}\eqref{matfunfin3} (which, combined with \eqref{matfunfin4}, also gives \eqref{matfunfin2}) without reference to the topological space $X$, simply by using \autoref{nsubcor}\eqref{nsubcor3} and the fact $C^b(X,M_n)\cong C(\beta X,M_n)$ is $n$-homogeneous. At any rate, C*-algebras of the form $C^b(X,M_n)$ already allow us to create a number of instructive elementary examples, as we now show.
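As a quick numerical illustration of the lower semicontinuity used throughout this subsection (our own toy sketch, not from the text): for the continuous function $f(x)=\mathrm{diag}(x,1)$ on $[-1,1]$, the rank of the range projection $[f](x)$ drops from $2$ to $1$ at $x=0$, consistent with $\dim([f](x))\leq\liminf_{y\to x}\dim([f](y))$; an upward jump at a single point would instead violate lower semicontinuity.

```python
import numpy as np

def range_rank(x, tol=1e-12):
    # rank of the range projection [f](x) for the continuous f(x) = diag(x, 1)
    return int(np.linalg.matrix_rank(np.diag([x, 1.0]), tol=tol))

xs = [k / 100 for k in range(-100, 101)]   # grid containing x = 0 exactly
ranks = {x: range_rank(x) for x in xs}

assert ranks[0.0] == 1                               # dim [f](0) = 1 ...
assert all(r == 2 for x, r in ranks.items() if x != 0)  # ... and 2 elsewhere
# the rank only ever jumps *down* in the limit: lower semicontinuity
```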
\subsection{Examples}\label{Examples} For use in the following examples, define $P_\theta\in\mathcal{P}(M_2)$ by \[P_\theta=\begin{bmatrix}\sin\theta \\ \cos\theta\end{bmatrix}\begin{bmatrix}\sin\theta & \cos\theta\end{bmatrix}=\begin{bmatrix} \sin^2\theta & \sin\theta\cos\theta \\ \sin\theta\cos\theta & \cos^2\theta \end{bmatrix}.\] For $p,q\in\overline{\mathcal{P}(A'')}^\circ$, we have \[\overline{p}\leq q\quad\Rightarrow\quad p\leq q\quad\Leftrightarrow\quad p\leq\overline{q}\quad\Leftrightarrow\quad\overline{p}\leq\overline{q},\] and the first implication can not be reversed in general, even when $A$ is commutative. Slightly more worthy of note is the fact that $p\leq q$ does not even imply that $\overline{p}$ and $q$ commute. \begin{xpl}[$p,q\in\overline{\mathcal{P}(A'')}^\circ$ with $p<q$ but $\overline{p}q\neq q\overline{p}$]\label{commuteclosure1}\hfill\\ Let $A=C([0,1],M_2)$ and define \[p(x) = \begin{cases} 0 & \text{for } x\in[0,\frac{1}{2}] \\ P_0 & \text{for } x\in(\frac{1}{2},1]\end{cases}\qquad\textrm{and}\qquad q(x) = \begin{cases} P_{\pi/4} & \text{for } x\in[0,\frac{1}{2}]\\ 1 & \text{for } x\in(\frac{1}{2},1]\end{cases}.\] We immediately have $p<q$ and \[\overline{p}(x) = \begin{cases} 0 & \text{for } x\in[0,\frac{1}{2})\\ P_0 & \text{for } x\in[\frac{1}{2},1]\end{cases},\] so $\overline{p}q(\frac{1}{2})=P_0P_{\pi/4}\neq P_{\pi/4}P_0=q\overline{p}(\frac{1}{2})$. 
\end{xpl} \begin{xpl}[$p,q\in\overline{\mathcal{P}(A'')}^\circ$ with $p\overline{q}=\overline{q}p$, $\overline{p}q=q\overline{p}$ and $\overline{p}\,\overline{q}=\overline{q}\,\overline{p}$ but $pq\neq qp$]\label{commuteclosure2}\hfill\\ Let $A=C([0,1],M_2)$ and define \[p(x) = \begin{cases} 1 & \text{for } x\in[0,\frac{1}{2}) \\ P_0 & \text{for } x\in[\frac{1}{2},1]\end{cases}\qquad\textrm{and}\qquad q(x) = \begin{cases} P_{\pi/4} & \text{for } x\in[0,\frac{1}{2}]\\ 1 & \text{for } x\in(\frac{1}{2},1]\end{cases}.\] We immediately have $pq(\frac{1}{2})=P_0P_{\pi/4}\neq P_{\pi/4}P_0=qp(\frac{1}{2})$ and \[\overline{q}(x) = \begin{cases} P_{\pi/4} & \text{for } x\in[0,\frac{1}{2})\\ 1 & \text{for } x\in[\frac{1}{2},1]\end{cases},\] so $p\overline{q}=\overline{q}p$. \end{xpl} By concatenating the projection functions (and their orthocomplements) in \autoref{commuteclosure1} and \autoref{commuteclosure2} (with the functions taking the constant value $1$ on open intervals in between) it is easy to see that one can obtain $p,q\in\overline{\mathcal{P}(C([0,1],M_2)'')}^\circ$ having any combination of truth values for the statements \begin{equation}\label{pq=qps} pq=qp,\quad p\overline{q}=\overline{q}p,\quad\overline{p}q=q\overline{p}\quad\textrm{and}\quad\overline{p}\,\overline{q}=\overline{q}\,\overline{p}, \end{equation} while still having $pq=_\mathrm{od}qp$, i.e. while still having $p$ and $q$ commute in $\overline{\mathcal{P}(C([0,1],M_2)'')}^\circ$ (although, if even one of the statements in \eqref{pq=qps} is true, then this is automatic from \autoref{commutativityimplication} and the fact $\overline{\mathcal{P}(C([0,1],M_2)'')}^\circ$ is orthomodular, by \autoref{matfunfin} \eqref{matfunfin3}). 
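The non-commutativity driving both examples above is elementary matrix arithmetic; here is a quick numerical check (our own, using the $P_\theta$ defined at the start of this subsection) that each $P_\theta$ is a rank-one projection, that $P_0$ and $P_{\pi/4}$ do not commute, and that $\|P_0P_{\pi/4}\|=1/\sqrt{2}$:

```python
import numpy as np

def P(theta):
    # P_theta = projection onto span{(sin t, cos t)}, as in the text
    v = np.array([np.sin(theta), np.cos(theta)])
    return np.outer(v, v)

P0, P4 = P(0.0), P(np.pi / 4)

# each P_theta is a self-adjoint idempotent
assert np.allclose(P0 @ P0, P0) and np.allclose(P0, P0.T)
assert np.allclose(P4 @ P4, P4) and np.allclose(P4, P4.T)

# P_0 P_{pi/4} != P_{pi/4} P_0, as used in the two examples above
assert not np.allclose(P0 @ P4, P4 @ P0)

# the operator norm of the product is cos(pi/4) = 1/sqrt(2)
prod_norm = np.linalg.norm(P0 @ P4, 2)
assert np.isclose(prod_norm, 1 / np.sqrt(2))
```

More generally $\|P_\theta P_\phi\|=|\cos(\theta-\phi)|$, which is the source of the value $1/\sqrt{2}$ appearing again in a later example.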
While on the topic of commutativity, we mention the following natural question posed by Akemann \textendash\, if $p<q$ are topologically regular open projections in $A''$, for some separable C*-algebra $A$, can we always find commuting $a,b\in A$ that have range projections $p$ and $q$ respectively? The answer is no in general, even when $p$ and $q$ satisfy all the various combinations of commutativity in \eqref{pq=qps}, as the following example shows. \begin{xpl}[$p,q\in\overline{\mathcal{P}(A'')}^\circ$, $p<q$ but $ab\neq ba$ when $[a{]}=p$ and $[b{]}=q$]\hfill\\ Let $A=C([0,1],M_2)$ and define \[p(x) = \begin{cases} P_0 & \text{for } x\in[0,\frac{1}{2})\\ 0 & \text{for }x=\frac{1}{2} \\ P_{\pi/4} & \text{for } x\in(\frac{1}{2},1]\end{cases}\qquad\textrm{and}\qquad q(x) = \begin{cases} P_0 & \text{for } x\in[0,\frac{1}{2}]\\ 1 & \text{for } x\in(\frac{1}{2},1]\end{cases}.\] We immediately see that $p,q\in\overline{\mathcal{P}(A'')}^\circ$, $p<q$, and all the equations in \eqref{pq=qps} hold. If we have $a\in A$ with $[a]=p$ then, for $x\neq\frac{1}{2}$, $a(x)=\lambda(x)p(x)$, for some non-zero $\lambda(x)\in\mathbb{C}$. Thus if we have $b\in A$ with $ab=ba$ then, for $x\neq\frac{1}{2}$, $b(x)\in\mathbb{C}p(x)+\mathbb{C}p(x)^\perp$. But $(\mathbb{C}P_0+\mathbb{C}P_0^\perp)\cap(\mathbb{C}P_{\pi/4}+\mathbb{C}P_{\pi/4}^\perp)=\mathbb{C}1$ and hence $b(\frac{1}{2})=\lambda1$, for some $\lambda\in\mathbb{C}$. In particular, $[b(\frac{1}{2})]\neq P_0$ and hence $[b]\neq q$. \end{xpl} A non-regular projection in a liminal C*-algebra is given in \cite{Akemann1970} Example I.2. The following example shows that non-regular projections even exist in homogeneous C*-algebras and, moreover, that they can still be topologically regular and even equivalent to a regular projection. 
In fact, the $p$ and $q$ given below are even Murray-von Neumann equivalent in $A^{**}$, and the partial isometry witnessing this will even yield an isomorphism between $pA^{**}p\cap A$ and $qA^{**}q\cap A$, i.e.\ $p$ and $q$ will even be Peligrad-Zsid\'{o} equivalent (see \cite{PeligradZsido2000}). \begin{xpl}[$p,q\in\overline{\mathcal{P}(A'')}^\circ$ with $\dim\circ p=\dim\circ q$, $p$ non-regular and $q$ regular] Let $A=C([0,1],M_2)$ and let $p(x)=P_{(\pi/4)\sin(1/x)}$, for all $x\in(0,1]$, and set $p(0)=0$. We immediately see that $p\in\overline{\mathcal{P}(A'')}^\circ$ and $\overline{p}$ is identical to $p$ except at $0$, where we have $\overline{p}(0)=1$. If $r(x)=P_{\pi/2}$, for all $x\in[0,1]$, then $r\in\mathcal{P}(A)$ and $||\overline{p}r||=||r(0)||=1>1/\sqrt{2}=||pr||$, so $p$ is not regular. Let $q$ be a continuous function from $(0,1]$ to rank $1$ projections in $M_2$ such that $\{q(1/n):n\in\mathbb{N}\setminus\{0\}\}$ is dense in the collection of rank $1$ projections in $M_2$. Extending $q$ to a lower semicontinuous function on $[0,1]$ by defining $q(0)=0$, we immediately see that $q\in\overline{\mathcal{P}(A'')}^\circ$, while we also have $||a(0)||\leq\sup_{x\in(0,1]}||a(x)q(x)||\leq||aq||$ and hence $||a\overline{q}||=||aq||$, for any $a\in A$, i.e. $q$ is regular. \end{xpl} On the other hand, it is easy to find examples of open projections that are regular but not topologically regular. Indeed, if $A$ is commutative then every projection in $A''$ is central and hence regular (as noted before \cite{Effros1963} Theorem 6.1) and so any non-regular open subset of the spectrum of $A$ will represent such a projection. So, apart from the name, there does not appear to be any strong connection between regular and topologically regular projections. \begin{xpl}[$\overline{\mathcal{P}(A'')}^\circ$ is not (operator norm) closed]\hfill\\ Let $A=C([0,1],M_2)$.
For each $n\in\mathbb{N}$, define $p_n\in\overline{\mathcal{P}(A'')}^\circ$ by $p_n(x)=P_{(1/n)\sin(1/x)}$, for all $x\in(0,1]$, and set $p_n(0)=0$. Then $p_n$ converges to $p_\infty$, where $p_\infty(x)=P_0$, for all $x\in(0,1]$, and again $p_\infty(0)=0$. This $p_\infty$ is open (as it should be, because the set of open projections is always norm closed in $A''$, by \cite{Pedersen1979} Proposition 3.11.9) but not topologically regular, as $\overline{p}_\infty(0)=P_0$ and $\overline{p}_\infty^\circ=\overline{p}_\infty$. \end{xpl} Note the above sequence also converges in the orthometric $\max(||pq^{\perp\circ}||,||qp^{\perp\circ}||)$ coming from the orthonorm (see \autoref{nsubcor}\eqref{nsubcor2}), but to a topologically regular open projection this time, namely $\overline{p}_\infty^\circ=\overline{p}_\infty$. Thus the orthometric would appear to yield the more natural topology on $\overline{\mathcal{P}(A'')}^\circ$, at least in this case. However, $\overline{\mathcal{P}(A'')}^\circ$ may not be complete even with respect to the orthometric. \begin{xpl}[$\overline{\mathcal{P}(A'')}^\circ$ is not complete in the orthometric]\hfill\\ Let $A=C([0,1],M_2)$. Let $g_1(x)=\sin(1/x)$, for all $x\in(0,1]$, and recursively define $g_{n+1}(x)=g_n(2x)$, for $x\in(0,\frac{1}{2}]$, and $g_{n+1}(x)=g_n(2x-1)$, for $x\in(\frac{1}{2},1]$. For each $n$, let $p_n(x)=0$, when $x=m2^{-n}$ for some $m\in\mathbb{N}$ with $m\leq2^n$, and $p_n(x)=P_{\sum_{k=1}^n4^{-k}g_k(x)}$, for all other $x\in[0,1]$. Now $(p_n)\subseteq\overline{\mathcal{P}(A'')}^\circ$ and $(p_n)$ is Cauchy in the orthometric but, as dyadic rationals are dense in $[0,1]$, $(p_n)$ has no limit in $\overline{\mathcal{P}(A'')}^\circ$. \end{xpl} If we want to work with a complete metric space, we can of course just take the completion with respect to the orthometric (as long as $||\cdot\cdot||$ is an orthonorm on $[A]^\perp$).
The orthonorm then extends continuously to this completion and we can then naturally extend the ordering too by defining \[B\leq C\quad\Leftrightarrow\quad||BC^\perp||=0.\] However, we do not know whether the order properties of the orthometric completion are generally better or worse than those of $[A]^\perp$ itself. The next example is important because it shows that all the work we did in \S\ref{OrderTheory}, extending results about orthomodular lattices to separative ortholattices and carefully distinguishing $[p]$ and $[p]_p$, was in fact necessary for the theory to apply to arbitrary C*-algebras. \begin{xpl}[$[A{]}^\perp$ is not orthomodular]\hfill\label{nonorthoxpl}\\ Let $A=C([0,1],\mathcal{K}(H))$, where $H$ is a separable infinite dimensional Hilbert space with basis $(e_n)$, and let $(s_n)$ enumerate a dense subset of $[0,1]$. Define $U_n:H\rightarrow\mathbb{C}^2$ by $U_nv=(\langle v,e_{2n}\rangle,\langle v,e_{2n+1}\rangle)$ and set $p_n(s_n)=0$ and, for $x\neq s_n$, \[p_n(x)=U_n^*P_{1/(x-s_n)}U_n.\] Let $p=\sum p_n\in\overline{\mathcal{P}(A'')}^\circ$. Also let $v=\sum(1/n)e_n\in H$, let $Q\in\mathcal{P}(\mathcal{K}(H))$ be the projection onto $\mathbb{C}v$ and define $q\in\mathcal{P}(A)$ by $q(x)=Q$, for all $x\in[0,1]$. Note that $p\vee q>p$. We first claim that $p\vee q\in\overline{\mathcal{P}(A'')}^\circ$. To see this, note that $\overline{p}_n(x)=p_n(x)$, for $x\neq s_n$, and $\overline{p}_n(s_n)=U_n^*U_n$. Also $\overline{p}=\sum\overline{p}_n$ and $\overline{p\vee q}=\overline{p}\vee q$. Now take $m\in\mathbb{N}$ and let $(x_n)\subseteq[0,1]$ be such that $x_n\rightarrow s_m$ and $P_{1/(x_n-s_m)}=P_0$, for all $n$. Then $\overline{p\vee q}(x_n)\rightarrow p(s_m)\vee Q\vee U_m^*P_0U_m$ in the weak (and strong) operator topology, while if $a\in A_+$ then $a(x_n)\rightarrow a(s_m)$ in norm. Thus if $a\leq\overline{p\vee q}$ then $a(s_m)\leq p(s_m)\vee Q\vee U_m^*P_0U_m$.
But we could have also chosen $(x_n)$ such that $P_{1/(x_n-s_m)}=P_{\pi/2}$, for all $n$, and this would show that $a(s_m)\leq p(s_m)\vee Q\vee U_m^*P_{\pi/2}U_m$. But \[(p(s_m)\vee Q\vee U_m^*P_0U_m)\wedge(p(s_m)\vee Q\vee U_m^*P_{\pi/2}U_m)=p(s_m)\vee Q\] which, as $m$ was arbitrary, shows that $\overline{p\vee q}^\circ=p\vee q$. We next claim that $(p\vee q)\wedge\overline{p}^\perp=\{0\}$, which will verify the non-orthomodularity of $[A]^\perp$. If not, we would have non-zero $r\in A_+$ with $r\leq p\vee q$ and $r\perp p$. For some $n$, we would then have $r(s_n)\neq0$ and hence $r(s_n)=\lambda R$, where $R=p(s_n)\vee Q-p(s_n)$ and $\lambda\neq0$. But $RU_n^*P_0U_n\neq0$ and hence $r(x)p_n(x)\neq0$ for some $x$ sufficiently close to $s_n$. This contradicts the fact that $rp=0$. \end{xpl} \begin{xpl}[$A\in[A{]}^\perp_{\mathbf{D}<\aleph_0}$ that is not $<\!\!\aleph_0$-subhomogeneous]\label{aleph0sub}\hfill\\ Let $(e_n)$ be a basis for a separable Hilbert space $H$. For each $n\in\mathbb{N}$, identify $M_n$ in the canonical way with $P_n\mathcal{B}(H)P_n$, where $P_n$ is the projection onto $\mathrm{span}\{e_1,\ldots,e_n\}$. Let $A$ be the C*-subalgebra of $\prod_nM_n$ of sequences $(a_n)$ converging to some $a_\infty\in\mathcal{B}(H)$. Then each canonical copy of $M_n$ in $A$ will be an $n$-homogeneous annihilator ideal (the annihilator of the kernel of the $n^\mathrm{th}$ coordinate representation in fact) and $(\bigvee_nM_n)^\perp=\{0\}$ so $A\in[A]^\perp_{\mathbf{D}<\aleph_0}$. But the representation $\pi$ that takes each $(a_n)$ to its limit $a_\infty$ will map $A$ onto $\mathcal{K}(H)$. Thus $\pi$ is irreducible which, as $H$ is infinite dimensional, means $A$ is not $<\!\!\aleph_0$-subhomogeneous. \end{xpl} We have focused on examples of C*-algebras of continuous functions, as these are the most tractable and already provide a good intuitive basis for working with annihilators. 
But there are undoubtedly many secrets to be gleaned from looking at the annihilator structure of all the various C*-algebras under investigation in current research. We make the first tentative step in this direction with the following simple example. \begin{prp}\label{CAR} The CAR algebra $M_{2^\infty}$ is (purely) infinite. \end{prp} \begin{proof} For each $n\in\mathbb{N}$, let $(e_{s,t})_{s,t\in\{0,1\}^n}$ be the canonical matrix units of the canonical unital copy of $M_{2^n}$ in $M_{2^\infty}$. So, for each $m,n\in\mathbb{N}$ and $s,t\in\{0,1\}^n$, we have $e_{s,t}=\sum_{r\in\{0,1\}^m}e_{sr,tr}$, where $sr$ is the $m+n$ length sequence obtained from simply concatenating $s$ and $r$ (and likewise for $tr$). We also have $e_{q,r}e_{s,t}=0$ whenever $r$ and $s$ differ on their common domain. For each $n\in\mathbb{N}$, let $\delta_n$ be the sequence of $n$ $0$s followed by a $1$, and recursively define $\sigma_n\in\{0,1\}^{n+1}$ so that $\sigma_j$ and $\sigma_k$ differ on their common domain, for distinct $j$ and $k$, and such that every finite sequence of $0$s and $1$s has some restriction or extension in $(\sigma_n)$. Setting $a=\sum_{n=1}^\infty2^{-n}e_{\delta_n,\sigma_n}$, we have \[aa^*=\sum_{n=1}^\infty2^{-2n}e_{\delta_n,\delta_n}\qquad\textrm{while}\qquad a^*a=\sum_{n=1}^\infty2^{-2n}e_{\sigma_n,\sigma_n}.\] Thus, for any $n\in\mathbb{N}$, we can find $m\in\mathbb{N}$ and $(t_r)_{r\in\{0,1\}^n}\subseteq\{0,1\}^m$ so that $e_n=\sum_{r\in\{0,1\}^n}e_{rt_r,rt_r}\leq 2^{2(m+n)}a^*a$ and hence $e_n\in\{a\}^{\perp\perp}$ with $||e_n||=1$. Also, for any $n\in\mathbb{N}$ and $b=\sum_{r,s\in\{0,1\}^n}\lambda_{r,s}e_{r,s}\in M_{2^n}\subseteq M_{2^\infty}$, we have $||e_nbe_n||=||\sum_{r,s\in\{0,1\}^n}\lambda_{r,s}e_{rt_r,st_s}||=||b||$. As $\bigcup_nM_{2^n}$ is dense in $M_{2^\infty}$ this means that $\sup_{e\in\{a\}^{\perp\perp},||e||=1}||be||=||b||$ for all $b\in M_{2^\infty}$ so $\{a\}^\perp=\{a\}^{\perp\perp\perp}=\{0\}$, i.e. $\{a\}^{\perp\perp}=M_{2^\infty}$. 
On the other hand, we immediately see that $e_{1,1}\in\{a^*\}^\perp$ and so $\{a^*\}^{\perp\perp}\neq M_{2^\infty}$, so $a$ witnesses the equivalence of $M_{2^\infty}$ with a proper subannihilator. To see that any annihilator $B$ is equivalent to a proper subannihilator, just take any projection $p\in B$ and note that it must be unitarily equivalent to some projection in $M_{2^n}$, for some $n\in\mathbb{N}$, which itself will have a subprojection unitarily equivalent to a matrix unit $e\in M_{2^n}$. But then $\{e\}^{\perp\perp}$ is isomorphic to $M_{2^\infty}$ and so the argument of the previous paragraph applies. \end{proof} This example might be a little surprising, given that the CAR algebra, and all other UHF algebras for that matter, are usually thought of as being very finite C*-algebras, primarily due to the fact they possess a (unique bounded) faithful trace $\tau$ which, furthermore, always gives rise to the unique hyperfinite $\mathrm{II}_1$ factor as $\pi_\tau[A]''$ (where $\pi_\tau$ is the representation coming from the GNS construction applied with $\tau$). This might even lead some operator algebraists to dismiss the annihilators as clearly giving the `wrong' notion of finiteness. Alternatively, you might blame the relation $\sim$ rather than the annihilators themselves, and if it turned out that the inclusion ordering on the annihilators in the CAR algebra is modular, despite being infinite w.r.t. $\sim$, then it might indeed be better to focus on perspectivity rather than the $\sim$ relation, at least for some C*-algebras. However, we would rather interpret \autoref{CAR} as merely showing that the close relationship between traces and projections in von Neumann algebras does not extend to annihilators in C*-algebras. If the tracial structure is what you are interested in then you are better off looking at the positive elements of $A$ under the equivalence notion given in \cite{CuntzPedersen1979}, as mentioned in \S\ref{Motivation}.
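The matrix-unit identities used in the proof of \autoref{CAR} are easy to check numerically; here is a small sketch (our own indexing of $\{0,1\}^n$ by binary strings, with the canonical unital embedding $M_{2^n}\hookrightarrow M_{2^{n+m}}$ given by $e_{s,t}\mapsto e_{s,t}\otimes 1$):

```python
import numpy as np

def unit(s, t):
    # matrix unit e_{s,t} in M_{2^n}, indexed by binary strings s, t of length n
    n = len(s)
    E = np.zeros((2 ** n, 2 ** n))
    E[int(s, 2), int(t, 2)] = 1.0
    return E

# e_{s,t} = sum_r e_{sr,tr}: the unital embedding M_{2^n} -> M_{2^{n+m}}
s, t, m = "01", "10", 2
lhs = np.kron(unit(s, t), np.eye(2 ** m))   # image of e_{s,t} in M_{2^{n+m}}
rhs = sum(unit(s + format(r, "02b"), t + format(r, "02b")) for r in range(2 ** m))
assert np.allclose(lhs, rhs)

# e_{q,r} e_{s,t} = 0 when r and s differ (here, on their common domain)
assert np.allclose(unit("00", "01") @ unit("10", "11"), 0)
```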
In fact, traces give rise to dimension functions on $A$ but these are quite different from the natural dimension functions you would consider on $[A]^\perp$, even in the commutative case. To see this, note that dimension functions on $C(X)$ correspond to measures $\mu$ on $X$, while annihilators correspond to regular open subsets $O$ of $X$. Thus you would naturally think $D(O)=\mu(O)$ defines a dimension function on these regular open sets, but this is not the case. Consider, for example, $X=[-1,1]$ and $\mu=\delta_0$, the point probability measure at $0$. Then $\mu([-1,0))=0=\mu((0,1])$ even though $\mu([-1,0)\vee(0,1])=\mu([-1,1])=1$, so this function on regular open sets does not even satisfy the basic dimension function axiom \begin{equation}\label{dimax} D(N\vee O)+D(N\wedge O)=D(N)+D(O). \end{equation} Even using the Lebesgue measure for $\mu$ instead would not help, as there are Cantor-like nowhere dense subsets of $[-1,1]$ with non-zero Lebesgue measure, and their complements can be expressed as the union of two disjoint regular open sets. Going up a dimension, there even exists in $[-1,1]\times[-1,1]$ a pair of connected disjoint regular open sets whose complement is an Osgood curve (see \cite{Osgood1903}), i.e. a Jordan curve of non-zero Lebesgue measure. \autoref{CAR} also illustrates the fact that taking direct limits, a common construction with C*-algebras, is likely to produce a C*-algebra that is infinite. This raises the question of whether there might exist other general constructions that can still produce C*-algebras with the properties we want, but which remain finite. Indeed, do there exist any separable finite type II C*-algebras at all? Surely there must, and it is probably just a matter of examining enough examples through the lens of annihilators until one is found.
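The failure of \eqref{dimax} for $\mu=\delta_0$ above can be checked mechanically; here is a minimal sketch (our own encoding of the relevant regular open subsets of $[-1,1]$ as finite unions of intervals, with the half-open ends at $\pm1$ simplified to open ones, which does not affect $\delta_0$):

```python
def delta0(intervals):
    # D(O) = delta_0(O): 1 iff the point 0 lies in the open set O
    return 1 if any(a < 0 < b for a, b in intervals) else 0

N = [(-1, 0)]         # the regular open set [-1, 0), encoded as (-1, 0)
O = [(0, 1)]          # the regular open set (0, 1], encoded as (0, 1)
N_join_O = [(-1, 1)]  # N v O = int(cl(N u O)) is all of [-1, 1]
N_meet_O = []         # N ^ O is empty

lhs = delta0(N_join_O) + delta0(N_meet_O)  # D(N v O) + D(N ^ O)
rhs = delta0(N) + delta0(O)                # D(N) + D(O)
assert lhs == 1 and rhs == 0               # the dimension axiom fails: 1 != 0
```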
There is, of course, the possibility that some new technique might be needed to create one, or that somehow separability precludes their existence (for non-separable examples, we can just take any type $\mathrm{II}_1$ von Neumann algebra), which would be quite intriguing in either case.
\section{From operational risk to cyber risk}\label{sec:ORcyber} \vspace{-2ex} For a very long time, IT risks were classified as operational risk. Many databases recording operational failure events also contain cyber incidents, which have been modelled using traditional operational research methods. Recently, as cyber failures increasingly originate from malicious attacks and may exhibit strong systemic effects, cyber risk has become a class of risks in its own right, requiring specific approaches to study it. Besides the usual IT literature on cyber security and defense, the scientific literature on cyber risk concerns mainly two fields: Management and Operational Research (OR), for general models and the study of properties, and Actuarial Research, for pricing models for insurance. These two paths clearly appear in the state-of-the-art review presented below.\\[1ex] Extreme Value Theory (EVT) methods have already proven quite useful for operational risks (among recent papers, see e.g. \cite{DAS2021,Embrechts2018}) in the industrial and financial sectors, loss severity being even larger in manufacturing than in finance. Relying on EVT becomes even more essential when studying cyber risk because it has moved, over the years, from a possible threat to an important emerging risk, which generally goes with a high probability of extreme events (due to immature management). This also calls for the development or improvement of dynamic EVT approaches, which may be facilitated by unsupervised EVT methods, where the threshold above which observations are considered extremes is detected automatically. This is what we propose here, extending a recent method to skewed non-negative data, such as those explored in this study. Our approach makes the empirical estimation of the whole heavy-tailed distribution more straightforward and easier to use for non-specialists of EVT.
It may become a handy tool in many applied research fields dealing with heavy-tailed data, in particular operational research. \vspace{-3ex} \paragraph*{Facing an emerging risk: the compromise between cyber security and cyber resilience -} ~\\ Cyber threats and crimes have increased exponentially in recent decades, due to the rapid diffusion of new and evolving Information and Communication Technologies such as Social Media, cloud computing, big data, Internet of Things (IoT) and smart cities (see e.g. \citep{Pasculli2020}). There are innumerable examples of cyber crimes having a huge impact in terms of human and financial costs; for instance, one of the latest attacks, attributed to the cyber crime group 'Darkside', caused in early May 2021 the shutdown of the {\it Colonial Pipeline} network and a shortage of oil supply on the North-East Coast of the USA. The frequency of cyber attacks has increased even further since the beginning of the Coronavirus pandemic; we have seen a simultaneous surge in the use of the Internet and in cyber attacks targeted against individuals, hospitals, and small businesses (see e.g.~\citep{NCSC2020}). Even though all actors in society are becoming more and more aware of the growing importance of cyber risk, we are still far from having reached the same level of understanding and assessment for this specific risk as we have for financial risk or natural disasters.\\[1ex] From a societal point of view, we clearly need to develop ways of becoming more resilient, as we increasingly depend on well-functioning IT systems. Managing this risk does not only mean minimizing and preventing cyber attacks, but also, if an attack is successful, ensuring that its consequences are not too severe for organizations or individuals; in other words, making society more resilient to them. While the term `cyber security' is as old as computers themselves, the term `cyber resilience' has emerged recently and is gaining currency.
Cyber security is focused on security alone, the term security referring to defense, protection, precaution. But organizations need a broader strategy that includes their ability to survive an attack, to recover with as little harm as possible, to continue to operate when experiencing a cyber attack, and finally to insure some unavoidable risks. This is what cyber resilience refers to. As already pointed out in the review by \cite{Aven2016} and in \cite{Aven2019}, integrating resilience principles and methods may contribute to the development of modern risk management. The future of this resilience will be multidimensional, combining prevention and protection measures such as users' education, security protocols, redundancies in IT systems, clear managerial attention for implementing an adequate strategy, and insurance coverage to ensure the survival and functioning of the system.\\[1ex] In order to fight cyber criminality in France, the Central Criminal Intelligence Service (SCRC\footnote{Service Central de Renseignement Criminel}) of the GN Judicial Pole (PJGN\footnote{Pôle Judiciaire de la Gendarmerie Nationale}) has, since the end of the 90's, developed various strategies, among which is the setting up of a Cybercrime Fighting Center named C3N\footnote{Centre de lutte Contre les Criminalit\'es Num\'eriques}. SCRC, which consists of the C3N and of the Intelligence Division (DR\footnote{Division du Renseignement}), aims at improving prevention and protecting individuals and companies from cyber crimes, mainly small and medium businesses, which have less capacity to invest in cyber security\footnote{We describe here the GN organizational structure during the period covered by our dataset. The C3N has recently moved to the newly created COMCyberGEND (Commandement de la gendarmerie dans le cyberespace)}.
In 2014, the GN started centrally collecting the complaints related to cyber attacks filed by individuals or companies from rural and peri-urban areas in all metropolitan and overseas territories (covering 95\% of the national territory and 55\% of the French population). One task of C3N is to collect data and exploit criminal information with the DR, relying on the analysis of the thousands of complaints that are received at the GN and registered by the SCRC.\\[1ex] From a management point of view, cyber security has to be weighed up against building resilience to IT attacks or failures. It is clear that nowadays companies need to give access to their services through the Internet. But systems connected to the Internet cannot be 100\% safe. Hence, there will always be a certain amount of system failures due to cyber crimes. The question is where to allocate resources: to security and/or to resilience? How much should be invested in security, through firewalls, multiple identifications, security officers, etc., versus in building resilience, e.g. by prevention measures, by using multiple systems with backups, and/or by covering the remaining risk through an insurance policy whenever it concerns material damages (or any quantifiable damage)? The answer depends on the type of risk faced by a company or organization.\\[1ex] This raises the question of the insurer's point of view; the insurer needs to go further in her understanding of cyber risk. Moreover, cyber attacks largely affect intangibles, e.g. through data theft or reputation damages, making losses difficult to quantify and predict. Due to this lack of knowledge and their fear of a strong systemic component, insurance companies generally offer inadequate cyber risk coverage to their customers, whether they are individuals, businesses or organizations; see~\citep{Advisen2018}~or~\citep{SwissRE2017} for discussions on these issues. The insurance market for cyber risk is in its infancy, although it is growing at a fast pace.
The total gross premium was estimated to reach 8 billion US\$ in 2020, a very small proportion of the total gross insurance premium of 4,000 billion US\$. However, it has grown threefold since 2014; see \citep{MarshMicrosoft2019}. Insurance companies do provide cyber covers, but these are limited in their coverage and reimbursements. This is the major complaint from customers, as stated in~\citep{Accenture2019}: {\it "There is a lack of capacity in the market and a willingness of insurance companies to take over this risk"}. This is probably why management still relies much more on security spending, expected to reach 124 billion US\$, compared to insurance premiums of 8 billion US\$. Until recently, there were also many "silent" covers, {\it i.e.} exposures to loss or liability from a cyber-triggered event in other lines of business (Directors and Officers, Property Damages, Business Interruption); see~\citep{CRO2016}. However, insurance companies are rapidly removing the silent covers from the contracts, leaving insured entities uncovered against cyber risks if they do not buy specific cyber policies.\\[1ex] From these various perspectives, it becomes clear that the analysis of cyber risk is a condition for building a society resilient to cyber crime. The first step towards modelling is to collect data that accurately reflect this risk. Nevertheless, as it is an emerging risk, it is difficult to find relevant datasets that describe well the various types of attacks on the systems and measure their quantitative impact. While most academic studies use publicly available data-breach datasets, \cite{Eling2019} had the nice idea of extracting 1,579 cyber risk incidents from an operational risk database, to consider a larger range of cyber risks. In our case, we had the opportunity to get access to a large, unique and exceptionally rich database - the database of cyber complaints filed at the GN, covering many categories of cyber risk.
Besides its sheer size (more than 200,000 data records), this database contains not only quantitative information on possible damages, but also qualitative information, such as a text description of the complaints and additional information on the victims and actors.\\[1ex] Having access to a new database, from a different source, is of much interest. Indeed, it gives the opportunity to compare results obtained from various sources, to look for some dependence between data coming from different databases, etc. Hence, we carry out a first analysis of the GN database, to shed light on some aspects of cyber risk, in particular on the probability of extreme events and their frequency. For that, we introduce an algorithm based on a stochastic hybrid model, which detects automatically the threshold above which observations are considered extremes, facilitating the estimation of the heaviness of the tail distribution. This method has been tested at different stages on various types of data (in engineering, finance, insurance) and could become part of the stochastic modelling techniques used in operations research (a software package is under construction, with two cases, one being the version developed in this study). Our objective is to transform cyber threats and their uncertainties into a measurable risk. Quantitative assessment of risks is the basis for designing insurance covers hedging the worst consequences of cyber crimes. Moreover, knowing the probability of occurrence of cyber attacks and their severity distribution allows management to find the right balance between investing in cyber security and/or in insurance protection. It is one of the many steps towards making society more resilient.\\[1ex] In Section~\ref{sec:questions}, we review the state-of-the-art in cyber risk modelling and introduce the research questions we aim to answer with this study.
We discuss the main issues related to data exploration and present basic statistics on the GN database in Section~\ref{sec:data}. In view of modelling heavy-tailed non-negative asymmetric data, a self-calibrating algorithm based on a general parametric model and on nonlinear optimization techniques is developed in Section~\ref{sec:model}. Confidence intervals for the estimated model parameters are introduced in this algorithm, revisiting the Jackknife technique because of its high execution speed. Application of our method to the GN cyber database follows in Section~\ref{sec:appliCyber}, turning to modelling damage severity (Section~\ref{ss:appliSeverity}), then to both severity and frequency of extreme damages (Section~\ref{ss:Poisson}). Consequences for risk management are discussed in Section~\ref{ss:RM}, while a possible classification of cyber attacks via their distribution heaviness is investigated in Section~\ref{ss:classes}. We conclude the study in Section~\ref{sec:concl}, discussing management and research perspectives. In Appendix~\ref{App-test2and3components}, a series of experiments based on simulated data is conducted to challenge our method and demonstrate its benefits. Appendix~\ref{App-estim-comparison-EVTmethods} completes the study by comparing the tail-threshold and tail index estimations obtained with our algorithm and with various EVT methods included in the {\it tea R-package}. \section{State-of-the-art in cyber modelling and our research questions} \label{sec:questions} \vspace{-2ex} \paragraph*{State-of-the-art -} Modelling of cyber risk is an important issue, as it helps understand the underlying (dynamical) structure of this phenomenon through its realization (collected data). Moreover, it allows generalizing beyond the data, to obtain relevant perspectives on future behaviors, in view of drawing possible scenarios. It is also a fundamental step for convincing insurance companies to take over this risk.
As long as cyber risk is not widely understood, fears dominate the market and reluctance is the rule. The fast pace of information technologies makes this type of risk difficult to analyze, a challenge already taken up by researchers from different fields, such as actuaries, data scientists, economists (in particular from game theory), IT system experts, probabilists and statisticians (see e.g. \cite{Dacorogna2022}). Various toy/theoretical models (due to a lack of data) and models capturing some features of cyber risk have been suggested. The field of research in cyber risk is very active, even though still in a fledgling state, leaving big holes in our understanding of it. Let us briefly review some of the directions of research. We could refer to many papers; we do not give an exhaustive list, but examples of recent ones in which one can also find complementary bibliographies. \citep{Agrafiotis2018} provide a taxonomy of cyber harms and a study of their possible consequences, while \citep{Cohen2019} suggest their own taxonomy (with a few overlaps) and definition for the ever elusive cyber risk. Based on this and a database compiled by AON that contains 30,000 cases, they statistically describe financial cyber losses and suggest that the risk is very similar to operational risk. Efforts in the direction of a taxonomy can also be found in \citep{CRO2016} from the point of view of the insurance industry. Unfortunately, those various taxonomies are not fully compatible. Among the researchers working on the statistical modelling of empirical cyber risk data, the group around Eling pioneered this path (see~\citep{Eling2016}). In one of their latest publications on the subject, \citep{Eling2019} apply a dynamic version of the standard peak-over-threshold (POT) method in EVT, designed by \citep{Chavez2016} for operational risks, including covariates to analyze cyber losses compiled in an operational risk database.
Contrary to~\citep{Cohen2019}, they conclude that cyber risks differ from other operational risk categories. This debate illustrates the fact that data on the subject are sparse, which makes it difficult to come up with a consistent picture; it may differ from one dataset to another. That is why it is important to analyse as many datasets as possible, as soon as they become available. Modelling goes in various directions: \citep{Baldwin2017} present a simple model to look at contagion of cyber attacks, while~\citep{Boehme2018} propose a review of cyber risk analysis from various disciplines and identify ways to improve cyber risk modelling. \citep{Fahrenwaldt2018} model, with an interacting Markov chain, the diffusion of cyber viruses or worms in a structured data network, while \citep{Farkas2021} apply Generalized Pareto regression trees to analyze cyber claims, identifying criteria for claim classification and evaluation on the same database as in~\citep{Eling2019}. \citep{Peng2018,Xu2017} argue that cyber risk should be tackled in a multivariate framework where the various risk factors are dependent on each other via copulas; otherwise, the risk would be underestimated. Nevertheless, they neglect the fact that treating the problem using EVT has already been shown to estimate the extreme risks well. Another way of tackling the question of cyber risk is to look at the amount of money that should be invested in cyber security versus buying insurance protection. This is what~\citep{Marotta2017,Wang2019} explore in their papers, while \citep{Nagurney2017} deal with the optimal investment in cyber security under budget constraints. Using a supply chain game theory network model, they study the vulnerability of the network to additional retailers or budget constraints. Unfortunately, none of these models is backed by strong empirical evidence, due to the lack of data from various sources.
Along the same line of research,~\citep{Paul2021} propose a two-stage stochastic programming model to help decide on the optimal resource allocation strategies by governments and firms. They conclude that {\it "it is beneficial to spend more on intelligence given its increasing returns to the reduction of social costs related to cybersecurity"}. Intelligence means not only detection effectiveness, as defined by the authors, but also a better understanding of the quantitative impact of the risk, hence identifying the likely attackers' targets. Modelling games between attackers and providers in interdependent cyber-physical systems (CPS) is the approach chosen by \citep{He2020} to analyze the survival probability of CPS and Nash equilibrium strategies. On the actuarial side, \citep{Romanosky2019} do a qualitative analysis of cyber insurance policies by examining those filed with US state insurance commissioners and find surprising variations {\it "in the sophistication (or lack thereof) of the equations and metrics used to price premiums"}. This is another testimony to the lack of maturity of this market.~\citep{Carfora2019} propose an actuarial approach to compute insurance premiums based on the publicly available dataset of the Privacy Rights Clearing House. \citep{Zeller2020} present a review of the actuarial literature on the topic and apply an approach based on marked point processes for modelling cyber risk in view of determining the insurance premium. They identify and propose models for key co-variables required to model the frequency and severity of cyber claims. Finally, let us refer to \citep{Bouveret2018}, who proposes to use a frequency-severity model for the computation of the Value-at-Risk of the cyber risk of financial institutions. This brief review of the various ways the insurance market and the academic literature tackle the problem is far from being exhaustive, if exhaustiveness is ever possible.
It shows, first of all, the lack of data; second, the wide spectrum of approaches; and third, conclusions that are not always congruent. More investigation, in particular empirical work on diverse data sources, must take place to better understand this complex problem.\\[-7ex] \paragraph*{Our research questions -} Our research program is one step in this direction and is structured around four questions; this study answers the first two, while setting up a methodology and giving hints for further investigating the last two. The first question is whether the data collected by SCRC add some value for studying cyber risk. Of course, the very nature of these data, quite different from the sources studied so far, partakes in the originality of the study, as it can only help shed another light on the cyber risk landscape. It may also reveal general characteristics of this type of risk, if they are found in other databases too. It implies, as for any other source, checking first for the reliability and relevance of the data. To do so, we explore the database and develop a statistical study on it. We conclude that, indeed, it is a unique source of information, confirming as well as complementing the picture given by the databases studied so far. This is the object of Section~\ref{sec:data}. Given that systemic risk appears in the literature as a crucial feature of cyber risk, it is natural to question its presence in our data. Hence our second query: do we observe and detect effects indicating systemic risk? This is an important issue, of high interest to the insurance market. That is why we explore the existence of heavy-tailed distributions for variables characterizing the cyber risk. We do this on the damages reported in the complaints, with the aim of determining the threshold (denoted here `tail-threshold') above which observations are considered extremes. Standard graphical methods of EVT could be used, but they are not always very robust.
Here, we adapt to asymmetric non-negative heavy-tailed data a model introduced in Debbabi {\it et al.} (see \citep{Debbabi2014}, \citep{Debbabi2017}) and develop the associated iterative algorithm, which identifies the tail-threshold automatically, is flexible, and is easy to use, whatever the size of the dataset. This unsupervised method has the advantage of fitting the whole empirical distribution, whatever its heterogeneity, with a specific treatment for the tail, where observations are scarce. The presence of very heavy tails is clearly assessed for the amounts at stake in the complaints. However, a high probability of extreme events is not enough to qualify the risk as systemic, but it constitutes one piece of the puzzle. Our focus on extremes is additionally motivated by the building of cyber resilience. We believe that cyber security may tackle the main attacks, whose probabilities fall in the main body of the distribution, while the tail of the distribution may concern the attacks it cannot handle or detect. Extreme attacks will be best fought through cyber resilience in two ways: on the operational side, by increasing the redundancy of the system; on the financial side, by protecting the finances of the company through good insurance covers. In order for insurers to propose good coverage to firms, underwriters will perform due diligence on the firms' management of cyber security. It is in the dialogue between insurance underwriters and risk managers that the company will be able to identify, on the operational side, where resilience through redundancies of the IT system should be put in place in order to save on insurance premiums. At the management level, this tradeoff between investments in security and insurance covers has to be found. A good modelling of this risk will definitely help find the optimal tradeoff, as well as determine the fair insurance premium.
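To make the idea of unsupervised tail-threshold detection concrete, here is a minimal sketch. It is not the hybrid-model algorithm of Debbabi {\it et al.}: it simply scans candidate thresholds and keeps the one where a Generalized Pareto fit to the exceedances is closest to the data in Kolmogorov-Smirnov distance, and the synthetic sample below (lognormal body plus Pareto tail) is an illustrative assumption, not the GN data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic skewed, heavy-tailed "damage" sample (stand-in for real data):
# a lognormal body mixed with a Pareto tail.
body = rng.lognormal(mean=5.0, sigma=1.0, size=9000)
tail = (rng.pareto(a=1.5, size=1000) + 1.0) * 2000.0
damages = np.concatenate([body, tail])

def select_tail_threshold(x, quantiles=np.arange(0.90, 0.995, 0.005)):
    """Pick the threshold whose GPD fit to the exceedances has the
    smallest Kolmogorov-Smirnov distance (a simple stand-in for the
    paper's self-calibrating hybrid-model algorithm)."""
    best = None
    for q in quantiles:
        u = np.quantile(x, q)
        exc = x[x > u] - u
        if len(exc) < 50:          # need enough points for a stable fit
            continue
        c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
        ks = stats.kstest(exc, 'genpareto', args=(c, 0.0, scale)).statistic
        if best is None or ks < best[0]:
            best = (ks, u, c, scale, len(exc))
    return best

ks, u, shape, scale, n_exc = select_tail_threshold(damages)
print(f"threshold u = {u:.0f}, GPD shape xi = {shape:.2f}, "
      f"{n_exc} exceedances (KS = {ks:.3f})")
```

On real damage data, the selected threshold and the fitted GPD shape parameter (positive for heavy tails) would play the roles of the `tail-threshold' and tail index discussed above.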
Moreover, by having a good model for the tail of the distribution, it becomes possible to estimate the capital needed for covering this risk, which is another piece of the puzzle in estimating the cost of bearing this risk. The third question concerns the classification of cyber attacks. Our goal is to build an alternative classification of the complaints based on statistical regularities, in order to reduce the number of classes, whether compared with those proposed by the Ministry of Justice or those of the SCRC. It is too early to give an answer to this question, but hints of a varying tail index give some hope in this direction (see the last paragraph in Subsection~\ref{ss:classes}). Here again, we can improve cyber resilience by early detection and better characterization of the type of cyber attacks. Then, the security forces can focus their investigations and fight cyber attackers more efficiently. Finally, another aim we target is the dynamic modelling of this risk in a multivariate setting, since the size and content of the GN database offer the possibility of such an investigation. Our algorithm should facilitate the processing of the extremes in this extended context. A standard argument in the literature is that cyber risk is characterized by its fast-changing environment. That is another study in itself, for which extra cleansing and treatment of the database are needed, in particular of the complaint descriptions. As we manually double-checked the data on extremes, we concentrate, in this paper, on their time evolution, taking into account the frequency of the cyber attacks with extreme damages. This is tackled in Subsection~\ref{ss:Poisson}. It appears that the frequency of large damages in complaints does not vary significantly with time, at least over the observed period.
Nevertheless, once we regain access to the database, with the increasing number of cyber attacks during the Covid-19 pandemic, we will repeat the frequency study to check the current result. \vspace{-2ex} \section{Data Exploration} \label{sec:data} \vspace{-2ex} A scientific approach to the study of any phenomenon must rely on data, on the one hand to inspire the modelling, on the other hand to check the correctness of the model. We want to understand how our data have been generated. This may look trivial, but is, in fact, very important, as it is the foundation of the study. For cyber risk, this is much more difficult, as the risk is changing rapidly and the amount of representative data is scarce. Here, we have a unique opportunity to study data collected widely and over a few years, covering most of the French territory. A first phase of data cleansing is needed. In our case, the amount of data is too large to check its reliability manually. An automatic text recognition algorithm has been developed by C3N to ensure the congruence of the various fields. However, improvements need to be made to this fledgling procedure. Therefore, we decided to also check manually the data presenting the largest damages, as we are particularly interested in modelling them. Note that the data presented here have also been briefly described in a book chapter (see \citep{Ventre2020}) to illustrate the scientific approach when working on data. Nevertheless, since the aim here is quite different, we choose to revisit the descriptive statistics of the dataset under study and extend them, so that the paper is self-contained and more comprehensive. Moreover, we have in the meantime double-checked the information given in the dataset and performed more detailed descriptive statistics to reach a broader understanding, before going deeper into the analysis and modelling. At the GN, the process of entering the data into the system was explained to us.
In every GN office in France, the officer (not a specialist in cyber security) receiving the complaint is in charge of writing a short report and filling in the various fields of the database. From a geographical point of view, the data collection is spread over the entire country. However, the big cities (e.g. Paris, Lyon, Marseille) are not covered in this database, as they are under the responsibility of the {\it Police Nationale}. The industry, however, is usually located in the peri-urban areas surrounding big cities. Nevertheless, given that cyberspace and geographical space are more or less independent, most cyber crimes appear to be distributed demographically rather than geographically (except maybe in the case of those of 'proximity', such as cyber-harassment). This suggests that the GN and the police, each having more or less half of the population living in their areas of competence, should both see cyber criminality in comparable proportions. The officer's first concern is to help the people who are filing complaints and try to find the culprit, rather than filling in a database. This may explain the existence of many errors, due either to wrong transcriptions or to typing errors. For this reason, we decided to review the extreme amounts manually. We compared, one by one in the database, the written description of the complaint to the other data fields (list given below) to assess consistency. We looked at the 1,100 items containing the largest amounts in the declared 'damages' field. They lie above the 98.2\% quantile of the studied sample. We detected various problems in the dataset. Among them, the most frequent (90\% of the few errors found on these extremes) was a mistake in reporting the cents amount, where the dot marking the cents was missing (e.g. instead of 500.00~\euro{}, a much larger amount of 50,000~\euro{} was reported).
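The manual-review step described above is straightforward to set up programmatically. The sketch below, with a synthetic lognormal sample standing in for the real 'damages' field (an assumption, not the GN data), simply flags every record above the 98.2\% empirical quantile for one-by-one inspection:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic declared-damage amounts in euros (illustrative stand-in).
amounts = rng.lognormal(mean=6.0, sigma=1.8, size=60000)

# Flag everything above the 98.2% quantile for one-by-one manual review,
# mirroring the ~1,100 largest complaints checked by hand in the study.
q = np.quantile(amounts, 0.982)
to_review = np.sort(amounts[amounts >= q])[::-1]  # largest first

print(f"review threshold: {q:,.0f} EUR -> {len(to_review)} records flagged")
```

With 60,000 records, roughly 1.8\% of them, i.e. about 1,080 records, end up in the review list, comparable in order of magnitude to the 1,100 items inspected manually in the study.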
In the rest of the section, we present results based on both filtering procedures: the automatic one developed by C3N and our own manual work. \vspace{-2ex} \subsection{The SCRC database} \label{ss:database} \vspace{-2ex} The SCRC database starts in 2014, but is more reliable from July 2015 onwards. Thus, we consider in this paper the period from 07-2015 to 04-2019, which includes 208,037 records. The data to which we have access at SCRC correspond to structured data and are presented in the {\it JavaScript Object Notation} (JSON) format. Each complaint has first been registered at SCRC according to three main fields: the cyber crime description, its victim, and its author. Then, it has been filtered and exported into a database presenting the following fields:\\[-6ex] \begin{description} \item [1- report\_date:] Reporting date of the cyber crime complaint \\[-5ex] \item [2- damages:] Amount of the damage in Euro (\EUR)\\[-5ex] \item [3- victims.dob:] Date of birth of the victim \\[-5ex] \item [4- victims.sex:] Gender of the victim\\[-5ex] \item [5- category:] Category of the crime (SCRC classification)\\[-5ex] \item [6- natinf:] Nature/type of crime (Ministry of Justice classification)\\[-5ex] \end{description} Note that the field 'category' corresponds to the classification of cyber crimes by SCRC into 10 groups subdivided into 52 subgroups (themselves subdivided into subsubgroups), while the field 'natinf', also referring to the type of crime (and represented by a code; see \cite{natinf2014}), is defined by the `{\it p\^ole d'\'evaluation des politiques p\'enales de la direction des affaires criminelles et des gr\^aces du Minist\`ere de la Justice'}\footnote{cf. \url{http://www.justice.gouv.fr/include_htm/pub/rap_cybercriminalite_annexes.pdf}} and includes 475 classes. In Table \ref{tab:dataPrej}, we present the composition of the cyber crime sample communicated by SCRC. About 70\% of the complaints do not mention any declared amount, or declare a null amount.
This class includes physical (e.g. child pornography) or moral (e.g. hate crimes, cyber-harassment) harm. The other 30\% of complaints correspond to material damage recorded by the GN units on the declarations of victims of property crimes. We introduce here a new class defined by a threshold of $500$~\EUR, corresponding to the amount above which the judicial system generally authorizes prosecution. \vspace{-2ex} \begin{table}[H] \centering \caption{\label{tab:dataPrej}\sf\small Breakdown of the 208,037 records in terms of the damage amount. \vspace{0.7ex}} {\small \begin{tabular}{crr}\hline &&\\[-1.5ex] Amount $x$ (\EUR) & Sample Size $n$ & Percentage \\ && (w.r.t. the total size)\\ &&\\[-1.5ex] \hline \hline &&\\[-1.5ex] Not Declared (ND) or $x=0$ & 147,052 & 70.69\% \\ $0 < x < 500$ & 29,074 & 13.97\%\\ $x \ge 500$ & 31,911 & 15.34\% \\[-2ex] &&\\ \hline \end{tabular} } \vspace{-2ex} \end{table} It should be noted that, besides the description of the cyber attack, no variable has yet been introduced in the GN database to discriminate between individuals and companies. We looked at this manually for the extreme damages above 40,000~\EUR, but, of course, it needs to be done automatically for every complaint; adding a dedicated field in the data entry form has been recommended to the GN. {\bf $\triangleright$ Gender of the victims of cyber crimes -} Looking at the type of population reporting a cyber attack to the GN (Table~\ref{tab:sexe}), we observe that 11.65\% of the records do not mention the gender (ND for "not defined"), leaving 183,801 data points instead of 208,037. Gender is not a discriminating parameter when considering the number of complaints, as can be seen in Table~\ref{tab:sexe}. Nevertheless, we could investigate whether the type of complaint differs by gender; this will be considered in a further study. \begin{table}[H] \centering \caption{\label{tab:sexe}\sf\small Gender classification of the sample with non-negative amounts.
\vspace{0.7ex}} \begin{tabular}{ccc} \hline &&\\[-1.5ex] Gender & Data number & Percentage \\ &&\\[-1.5ex] \hline \hline &&\\[-1.5ex] F & 91,599 & 44.03\% \\ M & 92,202 & 44.32\% \\ ND & 24,236 & 11.65\% \\[-2ex] &&\\ \hline \end{tabular} \end{table} \vspace{-2ex} The gender proportion is slightly obscured by the fact that it is not possible to distinguish between private complaints (made by individuals for attacks on their private systems) and complaints originating from companies. The gender proportion would be better defined if this distinction could be made. Nevertheless, it could be retrieved, most of the time, through the complaint description. Given the large amount of data, automatic text recognition is being developed to find out this (and other) information. Individuals and companies need to register their complaints at the National Police/Gendarmerie if they want to be covered by their insurance. We suspect that the majority of complaints comes from individuals. Nevertheless, for the extreme observations, we could manually check the two following properties: (i) Statistically, the amounts would be roughly the same for individuals and companies; (ii) the vast majority of complaints originates from individuals even for the very large amounts. {\bf $\triangleright$ Age of the victims of cyber crimes -} Let us turn to the age of the victims. We obtain the box-plot given in Figure~\ref{fig:boxplot-age}. \begin{figure}[h] \centering \includegraphics[scale=0.6]{Boxplot_ages.pdf} \parbox{282pt}{\caption{ \label{fig:boxplot-age} \sf\small Box Plot of the ages of the victims ($y$-axis) for every month of the period ($x$-axis) from July 2015 to April 2019.}} \end{figure} We observe that the median remains more or less constant with time, with a value around $42.5$ years, which is also close to the average age ($43.4$ years) of the sample, and to the median age of the French population (around $40.5$).
The interquartile interval also remains more or less stable (interquartile interval [29.6; 54.4] years, on average). The lower and upper limits of this interval indicate a slight negative asymmetry. \begin{figure}[h] \centering \begin{minipage}{.33\textwidth} \centering \includegraphics[scale=0.33]{Age_comparison_2016.pdf} \end{minipage}% \begin{minipage}{.33\textwidth} \centering \includegraphics[scale=0.33]{Age_comparison_2017.pdf} \end{minipage}% \begin{minipage}{.33\textwidth} \centering \includegraphics[scale=0.33]{Age_comparison_2018.pdf} \end{minipage}% \caption{\label{fig:age-pyr}\sf\small Comparison of the age of the victims to the ages pyramid of the French population for the years 2016, 2017, and 2018. The $x$-axis represents the age and the $y$-axis the percentage of victims w.r.t. the French population. Here, the label "Total" means the comparison with the sum of males and females.} \end{figure} We study the distribution of the age of the victims for the three calendar years 2016, 2017, and 2018 (for which we have complete data that have been verified by SCRC), and compare it with the age pyramid of the French population, taking also the gender into account. For all three years, independently of the gender, the most represented victims fall into two classes: young people around 20 years old and adults around 45 years old. The same criticism about the non-discrimination between individual complaints and company complaints could be made here as for the gender analysis. We are mixing the two sorts of origin. It is a limitation due to the fact that we do not currently have access to this information. Nevertheless, the small positive amounts are completely dominated by individual complaints and constitute the vast majority of the data. The proportion is accentuated to more than 95\% when considering the data with no declared amount (representing more than 70\% of the full dataset), which we also used in this age/gender analysis.
This explains why we can afford comparison with the French population. As illustrated in Figure \ref{fig:age-pyr}, the proportion of cyber crimes victims is, besides a small minimum around 30 years, of the same order between 20 and 45 years for both genders, then falls down rapidly below and above those ages. The proportion in Figure \ref{fig:age-pyr} is related to the pyramid of age as given by \cite{INSEE2018}. Note that the value 100 on the $x$-axis collects all ages from 100 years old on. We looked at the cases where the ages of the victims were equal or above 100 years old. Those are generally cases of identity theft where the grand-child, or a relative, accesses bank accounts using the login and password of the owner. There is no case of ransomware or direct involvement of the elderly. We observe in Figure~\ref{fig:age-pyr} that 0.10\% to 0.14\% of the French population aged from 20 to 60 years old filed complaints as victims of cyber crimes in 2017 (0.06\% to 0.11\% in 2016, and 0.09\% to 0.13\% in 2018). This already represents a large number of victims w.r.t. the 66,883,761 French inhabitants\footnote{\url{https://www.insee.fr/en/statistiques/2382601?sommaire=2382613}} (66,774,482 in 2016), given the fact that GN covers only 55\% of this population. All the more so as this number needs to be multiplied by a factor, according to the iceberg effect, estimated with various methods as roughly 250 on cyber complaints related to ransomwares in \cite{Dregoir2017}, using the GN database, but also external data (Google trends for the Locky virus, which is of ransomware type)\footnote{For the study of general delinquency reportability rates, even if not so explicit for cyber aspects, see also \url{https://www.interieur.gouv.fr/Interstats/L-enquete-Cadre-de-vie-et-securite-CVS/Rapport-d-enquete-Cadre-de-vie-et-securite-2019}}.
Indeed, in cybercrime, a pronounced iceberg effect exists due to the absence of a complaint or, in the most serious cases, to the very absence of detection of the problem by the victims. Consequently, for security forces, the filing of complaints is only the visible part of a criminal phenomenon and does not grant access to the ground truth. To better understand the meaning of those percentages, one might compare them to the percentage of the French population victims of other (non-cyber) types of attacks. {\bf $\triangleright$ Cyber crimes by type -} Now we turn to the type of cyber crimes and provide in Table~\ref{tab:X-natinf} the first 10 classes of the full sample of size 208,037, by decreasing order of class size. From the description registered at GN, it is not so easy to distinguish to which type a cyber crime belongs, therefore how to classify it within a GN category and a Natinf one, especially given the large number of those categories. We already know that, for insurance purpose, the granularity must be much coarser: One future goal is to find a scientific way to regroup GN categories. One approach could be through the heaviness of the tail distribution, as discussed further when modelling the damages severity. \begin{table}[h] \centering \parbox{400pt}{\caption{\label{tab:X-natinf}{\sf\small Damages classified by type: the 10 classes the most represented among the full sample, identified by natinf code.
It represents 78.1\% of the full sample of size 208,037.\vspace{0.7ex}}} \footnotesize \begin{tabular}{crlrr} \hline &&&&\\[-1.5ex] Class & Natinf code & Type & Complaints Number & Percentage \\ &&&&\\[-1.5ex] \hline\hline &&&&\\[-1.5ex] {\bf 1} & {\bf 7875} & {\bf Fraud} & {\bf 123,536} & {\bf 59.38\%} \\ {\bf 2} & {\bf 28139} & {\bf Identity theft} & {\bf 9,697} & {\bf 4.66\%}\\ {\bf 3} & {\bf 58} & {\bf Breach of trust} & {\bf 7,256} & {\bf 3.49\%}\\ 4 & 372 & Defamation & 4,888 & 2.35\%\\ {\bf 5} & {\bf 1619} & {\bf Violation to SADP\textsuperscript{a}} & {\bf 4,495} & {\bf 2.16\%}\\ {\bf 6} & {\bf 7203} & {\bf Blackmail} & {\bf 3,295} & {\bf 1.58\%}\\ {\bf 7} & {\bf 7151} & {\bf Theft} & {\bf 2,891} & {\bf 1.39\%}\\ 8 & 10765 & Invasion of privacy & 2,399 & 1.15\%\\ 9 & 7173 & Threat to individuals & 2,088 & 1.00\% \\ 10 & 376 & Public abuse & 1,997 & 1.00\%\\[-2ex] &&&&\\ \hline \end{tabular} \\ \textsuperscript{a}SADP: System of Automated Data Processing (STAD in French)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~} \end{table} \vspace{-2ex} When looking at the sample for which damage amounts above 500~\EUR are provided, we obtain the classification given in Table \ref{tab:X-natinf-500}. Note that the classes common to Tables~\ref{tab:X-natinf} \& \ref{tab:X-natinf-500} are indicated in bold. We observe that 'Fraud' is the most represented type of damage in both tables, with a proportion above 87\% of the 31,911 considered data in Table~\ref{tab:X-natinf-500}, and above 59\% (for the 208,037) in Table~\ref{tab:X-natinf}. This first (by size) class is far from the second class (Identity theft in Table~\ref{tab:X-natinf} and Breach of Trust in Table~\ref{tab:X-natinf-500}, respectively), whose size is less than 5\%. Note also the second gap in size between 'Breach of trust' and the other classes for damages above 500~\EUR, going from 4.7\% to less than 1\%, whereas the percentage is regularly decreasing in Table~\ref{tab:X-natinf}.
\begin{table}[H] \centering \parbox{430pt}{\caption{\label{tab:X-natinf-500}\sf\small Damages above 500~\EUR classified by type: the 10 classes the most represented among the 31,911 data identified by natinf code. It represents 96.6\% of the sample of size 31,911. \vspace{0.7ex}}} {\footnotesize \begin{tabular}{crlrr} \hline &&&&\\[-1.5ex] Class & Natinf code & Type & Complaints Number & Percentage \\ &&&&\\[-1.5ex] \hline\hline &&&&\\[-1.5ex] {\bf 1} & {\bf 7875} & {\bf Fraud} & {\bf 27,914} & {\bf 87.5\%}\\ {\bf 2} & {\bf 58} & {\bf Breach of trust} & {\bf 1,497} & {\bf 4.7\%}\\ {\bf 3} & {\bf 7151} & {\bf Theft} & {\bf 257} & {\bf 0.8\%}\\ {\bf 4} & {\bf 28139} & {\bf Identity theft} & {\bf 234} & {\bf 0.7\%}\\ 5 & 26012 & Fraud per legal entity & 214 & 0.7\%\\ {\bf 6} & {\bf 1619} & {\bf Violation to SADP} & {\bf 181} & {\bf 0.6\%}\\ 7 & 94 & Harmful substances in educat. instit. & 153 & 0.5\%\\ {\bf 8} & {\bf 7203} & {\bf Blackmail} & {\bf 152} & {\bf 0.5\%}\\ 9 & 560 & Use of falsified or forged cheque & 134 & 0.4\%\\ 10 & 7881 & Violation to SADP at the prejudice & 94 & 0.3\%\\ && of a vulnerable person && \\[-2ex] &&&&\\ \hline \end{tabular} } \end{table} \vspace{-2ex} \subsection{Frequency} \label{ss:freq} \vspace{-2ex} Let us proceed to the inter-temporal analysis. Before doing it, we recall that the time stamp in our database is the reporting date, not the date of occurrence of the attack. This fact can of course bias the analysis. We are able to detect trends in the occurrences but not outbursts or seasonality of attacks. \begin{figure}[h] \centering \includegraphics[scale=0.6]{monthly_freq_damages.pdf} \vspace{-2ex} \parbox{346pt}{\caption{\label{fig:frm2} \small\sf Monthly frequency of the $N$ complaints. The $x$-axis represents the 46 successive months over the entire period. The left $y$-axis gives the monthly frequency of complaints, while the right one gives the normalized number of complaints per month w.r.t.
the monthly average on the full sample.}} \end{figure} In Figure~\ref{fig:frm2}, we present the variations of the monthly frequency of complaints on the sample from July 2015 to April 2019; see the blue (dark) curve. The mean of this quantity is slightly increasing over time, from slightly above 4,000 (4,194.5) for the two first years, to around 5,000 (5,032.8) for the last period; see the (green) horizontal lines in Figure~\ref{fig:frm2}. The orange (light) curve represents the variation of the monthly number of complaints against the average over the full sample. In the same figure, the values of the right vertical axis are normalized and defined as: \begin{equation}\label{eq:moyNormal} \frac{m_i - \overline{M}_T}{\overline{M}_T} \quad \text{with}\quad \overline{M}_T= \frac1{K_T}\sum_{j=1}^{K_T} m_j ,\quad i=1,\cdots,K_T, \end{equation} where $K_T=46$ is the total number of months in the sample, and $m_i$ is the monthly frequency for the $i$-th month with $i \in [ \, 1,K_T \, ] \,$. We see that the two curves (orange-light and blue-dark) are quite similar, as expected from the way they are computed. However, the scale displayed on the right is different and varies from negative (-0.45) to positive (0.35) values. It helps distinguish two periods: 2015--2016, where the deviations are negative, and 2017--2018, where they are positive. Moreover, for the year 2017, a positive trend is clearly observable. In Figure~\ref{fig:fra}, we draw the evolution from 2016 to 2019 of the annual moving average of the monthly frequency of complaints, using a one-year window rolled monthly. It means that for each month from June 2016 (as the yearly averaging starts in July 2015) to April 2019, we use the data of the past year up to the considered month, to compute the average number of complaints.
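As an illustrative sketch (not part of the original processing chain, and with made-up monthly counts in place of the GN data), both the normalized deviations of Equation~\eqref{eq:moyNormal} and the one-year moving average just described take only a few lines:

```python
import numpy as np

# Synthetic monthly complaint counts over K_T = 46 months (illustrative values only).
rng = np.random.default_rng(0)
m = rng.poisson(lam=4500, size=46).astype(float)

# Normalized deviations (m_i - M_bar) / M_bar, as in Equation (eq:moyNormal).
M_bar = m.mean()
normalized = (m - M_bar) / M_bar

# Annual moving average with a k_T = 12-month window rolled monthly.
k = 12
moving_avg = np.convolve(m, np.ones(k) / k, mode="valid")  # length K_T - k_T + 1 = 35

print(normalized[:3], moving_avg[:3])
```

By construction, the normalized deviations average to zero over the whole period, and the first moving-average point is the mean of the first twelve months (July 2015 to June 2016 in the paper's sample).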
Using the same notations as in Equation~\eqref{eq:moyNormal}, we compute the annual moving average during the whole period of $K_T=46$ months, as $\displaystyle \overline{M}_{T,i}= \frac1{k_T}\sum_{j=i}^{i+k_T-1} m_j$, $\forall\, 1\le i \le K_T-k_T+1$, with $k_T=12$ months and $\overline{M}_{T,1}$ corresponding to the average on the month of June 2016. \begin{figure}[h] \centering \includegraphics[scale=0.6]{moving_average-freq.pdf} \parbox{300pt}{\caption{\label{fig:fra}\sf\small Annual moving average of the monthly frequency of complaints. The $y$-axis presents the average number of complaints, while the $x$-axis presents the dates of the moving average.}} \label{fig:Moy_nbr_ann} \end{figure} We observe a strong increase in the number of complaints from 2017 on, going from 3,500 to 5,500 complaints, with a positive (linear) trend. In 2018, there is a slight decrease down to the level 4,700, then it seems to go up again in 2019. Those observations confirm those made on Figure~\ref{fig:frm2}. \vspace{-2ex} \subsection{Severity} \label{ss:sev} \vspace{-2ex} In this section, we proceed to the statistical study of the data for which damages amounts are given as positive, {\it i.e.} which correspond to material damage recorded by the GN on the declarations of victims of property crimes. In Table~\ref{tab:statDesc}, we present the main descriptive statistics on this sample (of size 60,985; see Table~\ref{tab:dataPrej}), namely: mean, median, standard deviation, dispersion index (DI) ({\it i.e.} ratio between the standard deviation and the mean: $DI=\sigma/\mu$), skewness and kurtosis (all quantities being, of course, empirical). Note that those results are to be taken with caution, as the database for small or ND amounts still needs a lot of care and corrections. Once the corrections are done, a specific study for the data with fields ND and $x=0$ (and, eventually, $0<x<500$) will be the object of another investigation.
We pay a particular attention to the 31,911 amounts above $500$~\EUR ($x\ge 500$) for two reasons: The first is that $500$~\EUR corresponds to the amount above which prosecution can be opened; the second is pragmatic, the database for small or ND amounts still needing some care, as already explained. We also give the main descriptive statistics for the two samples of positive amounts and of amounts above $500$~\EUR. \begin{table}[h] \centering \caption{\label{tab:statDesc} \sf Descriptive statistics for positive damages amounts. \vspace{0.7ex}} \small \begin{tabular}{lccccccc} \hline &&&&&&&\\[-1.5ex] & Max & Mean & Median & Standard deviation & DI & Skewness & Kurtosis\\ &&&&&&&\\[-1.5ex] \hline\hline &&&&&&&\\ amounts $>0$ & 8,069,984 & 3,476.67 & 522.21 & 44,879.06 & 12.91 & 124.50 & 19,931.67 \\ (sample size: 60,985) &&&&&&&\\ &&&&&&&\\[-1.5ex] \hline &&&&&&&\\ amounts $\ge 500$ \EUR & 8,069,984 & 6,460.11 & 1,500.00 & 61,891.58 & 9.58 & 90.58 & 10,512.55 \\ (sample size: 31,911) &&&&&&&\\ &&&&&&&\\ \hline \end{tabular} \end{table} \vspace{-2ex} Besides the mean and median, which, of course, have different values between the two samples, the other descriptive statistics share the same characteristics. We observe a strong skewness, whatever the sample considered, indicated by a very high value, but also by a strong difference between mean and median. For each sample, the variance exhibits a very high empirical value, suggesting that the true variance may be infinite. This is corroborated by the high values of DI and kurtosis. If the existence of the 2nd moment is questionable, so is, a fortiori, that of the higher moments; hence the very large value of the kurtosis. It already suggests the existence of heavy tails for the damages severity. To conclude this section, we present the distribution of the positive amounts $\ge 500$\EUR for each month of the entire period, providing a box-plot for each month; see Figure~\ref{fig:boxplot-montant}.
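The descriptive statistics of Table~\ref{tab:statDesc}, in particular the dispersion index $DI=\sigma/\mu$, are straightforward to reproduce; a minimal sketch on simulated heavy-tailed amounts (lognormal toy data, not the GN sample):

```python
import numpy as np
from scipy import stats

# Simulated skewed, heavy-tailed "damage amounts" (lognormal toy data, illustrative only).
rng = np.random.default_rng(1)
x = rng.lognormal(mean=7.0, sigma=2.0, size=100_000)

mean, median, sd = x.mean(), np.median(x), x.std(ddof=1)
di = sd / mean                  # dispersion index DI = sigma / mu
skew = stats.skew(x)            # empirical skewness
kurt = stats.kurtosis(x)        # empirical (excess) kurtosis

# A heavy right tail shows up as mean far above median, DI above 1, large skewness/kurtosis.
print(f"mean={mean:.1f} median={median:.1f} DI={di:.2f} skew={skew:.1f} kurt={kurt:.1f}")
```

The same qualitative pattern as in Table~\ref{tab:statDesc} (mean several times the median, very large DI, skewness and kurtosis) appears on any strongly right-skewed sample.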
\begin{figure}[h] \centering \includegraphics[scale=0.6]{Monthly_boxplot_above500.pdf} \vspace{-2ex} \parbox{360pt}{\caption{ \label{fig:boxplot-montant} \sf\small Box Plot of the monthly amounts $x\ge 500$ of the damages from July 2015 to April 2019. The $x$-axis corresponds to the 46 months. The $y$-axis is a logarithmic scale so that all values, extreme or not, can be seen on the plot. }} \end{figure} We observe that the median is more or less constant for each month (with an average value of 1,493.6\EUR), as well as the interquartile interval $[Q_1,Q_3]$, where $Q_1, Q_3$ denote the 1st and 3rd quartile, respectively (on average, $Q_1$ is 840.1\EUR and $Q_3$, 3,198.4\EUR). The position of the limits $Q_1-1.5\times (Q_3-Q_1)$ ($-$2,697\EUR; not visible here as we consider values larger than 500\EUR) and $Q_3+1.5\times (Q_3-Q_1)$ (4,378\EUR), respectively, indicates a marked asymmetry. The mass of values beyond $Q_3+1.5\times (Q_3-Q_1)$ confirms the observation of a heavy tail distribution for this variable. This temporal analysis validates the static approach chosen for our modelling, developed below. \vspace{-2ex} \section{Probabilistic modelling of heavy-tailed data} \label{sec:model} \vspace{-2ex} As observed in the previous section, we are clearly confronted with heavy-tailed and asymmetric (due to the strong skewness) data. This characteristic is common to many fields (for instance on health data), among which OR, where heavy-tailed data are often to be taken care of (see e.g. \cite{DAS2021}). The presence of extreme risks induces specific risk management procedures and a need for capital. Thus, it is essential to be able to quantify accurately the probability of occurrence of extremes for designing the appropriate hedging strategy.
In this context, building on the main ideas underlying the method developed in \cite{Debbabi2017}, we adapt its general hybrid model and derive a new algorithm allowing for a relevant fit of any heavy-tailed asymmetric non-negative data, thanks to the automatic detection of the tail-threshold. This new version completes the overall method, providing now two versions of the algorithm, one for symmetric data, the other here for asymmetric ones. \vspace{-2ex} \subsection{ Dealing with heavy-tailed data thanks to EVT} \label{ss:EVT} \vspace{-2ex} In view of developing our model, we briefly recall some results from univariate EVT, namely the main asymptotic theorems for extremes, as well as the founding ideas of the approach developed in \cite{Debbabi2014,Debbabi2017}. This will be useful when considering the extreme loss severity associated with the registered complaints in Section~\ref{sec:appliCyber}. See also \citep{Ventre2020} and \cite{Kratz2019} for a recent overview of some standard (supervised) and new (unsupervised) methods in univariate EVT\footnote{Note that the wording used to describe well-known EVT notions in these references and in this paragraph (or, even, in any EVT (text)book), may be similar.}. For more details, we refer the reader to standard books on the EVT literature, e.g. \cite{Leadbetter2011} (1st ed. 1983), \cite{Resnick2008} (1st ed. 1987), \cite{Embrechts2011} (1st ed. 1997), \cite{Reiss2007} (1st ed. 1997), \cite{beirlant2004}, \cite{deHaan2006}, \cite{Resnick2007}. While the (Generalized) Central Limit theorem provides a limiting distribution for the distribution bulk, thus describing the mean behavior of the phenomenon studied through an observed (iid) sample from this (unknown) distribution, the Extreme Value theorems consider the extreme behavior of such a phenomenon.
There are two EVT pillar theorems. The first (named the three-types theorem, unified into a single three-parameter family) proves that the renormalized maximum of a sample is asymptotically distributed as a Generalized Extreme Value (GEV) distribution, defined by \begin{equation} G_{\mu,\beta,\xi}\left(x\right)= \exp\left[ -\left(1+\xi \,\frac{x-\mu}{\beta}\right)^{-1/\xi}\right],~\text{for}~x~\text{such that}~~ 1+\xi~\frac{x-\mu}{\beta}>0~, \end{equation} with location parameter $\mu$, scale parameter $\beta$, and tail index $\xi$, this latter parameter being our focus in EVT, as it determines the nature of the tail distribution. To extract more information from the tail of the distribution for a better estimation of $\xi$, we turn to threshold methods and the second EVT pillar theorem, the Pickands-Balkema-de Haan theorem, which proves that, for a sufficiently high threshold $u$, a very good approximation to the excess distribution function ({\it i.e.} the distribution of the exceedances above $u$) is the Generalized Pareto distribution (GPD) $G_{\xi,\beta(u)}$ defined by \begin{equation}\label{eq:defGPD} G_{\xi,\beta(u)}(y)=\left\{ \begin{array}{ll} 1-\left(1+\xi\frac{y}{\beta(u)}\right)^{-1/\xi} &\mathrm{if}~ \xi \neq 0 \\ 1-\exp\left(-\frac{y}{\beta(u)}\right) &\mathrm{if}~ \xi = 0 \end{array}\right. \end{equation} where $y\geq 0$ if $\xi\geq 0$, and $\displaystyle 0\leq y \leq -\frac{\beta(u)}{\xi}$ if $\xi<0$. The distribution tail can be of three types, according to the sign of $\xi$, with a heavy (or fat) tail if $\xi>0$ (Fr\'echet domain of attraction), a light tail if $\xi=0$ (Gumbel domain of attraction), and a finite upper endpoint if $\xi<0$ (Weibull domain of attraction). The main problem when applying the Pickands-Balkema-de Haan theorem comes down to the identification of the threshold above which observations are considered as extremes, so that they can be fitted with a GPD.
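To illustrate the Pickands-Balkema-de Haan theorem in practice, one can fit a GPD to the exceedances above a high threshold with {\tt scipy}; in this sketch on simulated data the threshold is fixed, ad hoc, at the empirical 95\%-quantile, which is precisely the delicate choice just mentioned:

```python
import numpy as np
from scipy.stats import genpareto

# Simulated heavy-tailed data with true tail index xi = 0.6 (Frechet domain).
rng = np.random.default_rng(2)
x = genpareto.rvs(c=0.6, scale=1.0, size=100_000, random_state=rng)

# Threshold u fixed, ad hoc, at the empirical 95%-quantile.
u = np.quantile(x, 0.95)
excesses = x[x > u] - u

# Fit a GPD to the excesses, with location fixed at 0 as in the theorem.
xi_hat, _, beta_hat = genpareto.fit(excesses, floc=0)
print(f"threshold u = {u:.2f}, xi_hat = {xi_hat:.2f}, beta_hat = {beta_hat:.2f}")
```

With roughly 5,000 exceedances, the maximum-likelihood estimate of $\xi$ lands close to the true value 0.6; the sensitivity of the result to the arbitrary choice of $u$ is what motivates the automatic threshold detection of the following subsections.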
Various methods have been developed for this purpose and for estimating the tail index, among which supervised methods (as the standard ones of the EVT literature) and unsupervised ones (see e.g. references in \cite{Debbabi2017} and \cite{Tencaliec2020}). A final reminder, useful for our analysis, concerns the relation between the value of the tail index and the existence of moments of the GEV and GPD distributions. The $\displaystyle k^{\text{th}}$ moment exists if $\displaystyle \xi < 1/k$ or, equivalently, if the so-called shape parameter $\alpha=1/\xi$ satisfies $\alpha > k$. It means that the smaller the shape parameter (or, equivalently, the larger the tail index), the heavier the tail. For instance, if $1< \alpha \le 2$, the second moment of the distribution does not exist (i.e. infinite variance) but the first moment (the expectation) is finite. In insurance companies, the severity of risk is often classified according to the range of the shape parameter $\alpha$. For instance, pandemics and natural catastrophes like windstorms or floods have $1.2< \alpha <2$, while earthquakes have fatter tails with $0.9<\alpha\le 1.1$ (if $\alpha\le 1$, the reinsurer will only give limited covers in order to force the loss distribution to have a finite expectation, hence an $\alpha$ back above 1); financial risks exhibit $2 < \alpha \le 4$. This is why, in the following, we may prefer discussing the range of $\alpha$ rather than that of $\xi$. \vspace{-2ex} \subsection{Towards an unsupervised modelling method} \vspace{-2ex} Let us describe succinctly the main idea of the unsupervised method by Debbabi et al. (see \cite{Debbabi2014}, \cite{Debbabi2017}, and, for an overview, \cite{Kratz2019}, \cite{Ventre2020}). It has been developed for fitting multi-component data that exhibit heavy tails, determining in an automatic way the threshold $u_2$ above which the GPD fits the tail distribution, as well as the tail index $\xi$.
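For comparison with the unsupervised approach just outlined, a classical supervised estimator of the tail index is the Hill estimator, which requires choosing by hand the number $k$ of upper order statistics; a minimal sketch on simulated Pareto data with shape $\alpha=2$, i.e. $\xi=0.5$ (finite mean, infinite variance):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index xi = 1/alpha from the k largest observations."""
    xs = np.sort(x)[::-1]                      # order statistics, descending
    return float(np.mean(np.log(xs[:k]) - np.log(xs[k])))

# Standard Pareto sample, shape alpha = 2 (tail index xi = 0.5), by inverse transform.
rng = np.random.default_rng(3)
x = rng.uniform(size=100_000) ** (-1.0 / 2.0)

xi_hat = hill_estimator(x, k=1_000)
alpha_hat = 1.0 / xi_hat                       # shape parameter alpha = 1/xi
print(f"xi_hat = {xi_hat:.3f}, alpha_hat = {alpha_hat:.2f}")
```

The estimate depends on the hand-picked $k$ (too small: high variance; too large: bias from the distribution body), which is exactly the kind of supervision the self-calibrating method avoids.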
Note that to be able to determine 'automatically' the threshold $u_2$, we need all the information contained in the data, contrary to standard EVT approaches, where only the information contained in the tail of the distribution is used, explaining in this latter case why defining $u_2$ is less straightforward. Using all information also means providing a model for the whole data at once, which may be seen as an advantage. Unlike existing statistical methods for density parameters estimation, such as maximum (log-)likelihood or the method of moments, to name a few, this iterative algorithm is built on the solving of a set of non-linear least squares problems by the Levenberg-Marquardt (L-M) technique \citep{Levenberg1944,Marquardt1963}, which combines Gauss–Newton and gradient descent methods to reach the desired minimum. Our method has been developed in successive stages since 2014, testing it in terms of goodness-of-fit on simulated data, but also in many applications that help improve it and extend it, as done here on cyber data, to better capture the data complexity. It is based on an algorithm calibrating on data a general hybrid model composed of two main components, built on asymptotic theorems, splitting mean and extreme behaviors to use for each behavior a general limiting parametric distribution. One of the key ideas to improve an earlier version of the method has been to introduce a bridge between the two main components, chosen as an exponential distribution, to have a continuous hybrid distribution and to allow for a better determination of the extreme behavior (described by a GPD). Indeed, we could always try to link directly the main and extreme distributions (without a bridge), but then, the GPD might have to go towards intermediate behavior (intermediate order statistics) rather than describing the largest order statistics.
With the bridge's introduction, we do not face this issue, providing at the same time (thanks to the algorithm) a way to determine automatically the threshold above which the observations are considered as extremes and are fitted with a GPD (via the Pickands-Balkema-de Haan theorem). This general model can then be calibrated on any type of heavy-tailed non-negative data (rather than finding a specific model). The algorithmic method comes in two versions, depending on the nature of the data: symmetric versus asymmetric (skewed). While the first version has been built for symmetric data, approximating the mean behavior with a Gaussian distribution (using the CLT) (see \cite{Debbabi2017}), the second version is developed here for asymmetric non-negative data, replacing the Gaussian behavior with a lognormal one, as it is well known that the CLT suffers from a slow speed of convergence for skewed data, as could be observed on the cyber dataset of the GN; see Figure~\ref{fig:cdf-positiveDamages}. We observe on the blue curve (G-E-GPD model) of this figure that the slow convergence has a negative impact on the whole fit. \vspace{-2ex} \subsection{An algorithm for asymmetric data} \vspace{-2ex} If the main idea of the approach still holds when changing the Gaussian component into a lognormal one, we need of course to define the new model and relations between parameters, and adapt the self-calibrating algorithm accordingly.
The model we consider here, denoted by LN-E-GPD (Lognormal-Exponential-Generalized Pareto Distribution), is characterized by its probability density function (pdf) $h$ expressed as: \begin{equation}\label{eq:LNEGPD} h(x;\boldsymbol{\theta})= \gamma_1\, f(x;\mu,\sigma)\,\mathbf{1}_{(x\leq u_1)} + \gamma_2 \, e(x;\lambda)\, \mathbf{1}_{(u_1\leq x \leq u_2)} + \gamma_3\, g(x-u_2;\xi,\beta) \,\mathbf{1}_{( x \geq u_2)}, \end{equation} where $f(\cdot;\mu,\sigma)$ denotes the lognormal (LN) pdf with parameters $\mu\in\mathbb{R}$ and $\sigma >0$ (mean and standard deviation of the underlying normal distribution), defined for all $x>0$ by $\displaystyle f(x;\mu,\sigma)=\frac{1}{x\sigma\sqrt{2\pi}} e^{-\frac{(\log x-\mu)^2}{2\sigma^2}}$, $e$: the exponential pdf with intensity $\lambda$ (defined on $\mathbb{R}^+$ by $\displaystyle e(x;\lambda)=\lambda\, e^{-\lambda\, x}$), $g$: the pdf of the GPD defined in \eqref{eq:defGPD} with tail index $\xi>0$ (heavy-tail condition) and scale parameter $\beta>0$, while $\gamma_i$, $i\in\{1,2,3\}$, are the non-negative weights (normalized so that $h$ is a pdf), which satisfy $\gamma_1+\gamma_2+\gamma_3 \ge 1$, and $u_1$ and $u_2$ are the two junction points between the components, with $u_1\le u_2$. Let us define the relations between the parameters of the model, using the heavy-tailed framework and the $C^1$ assumption on the pdf that imposes smooth transitions from one component to another. For heavy-tailed data, we have $\xi>0$ and the asymptotic (for high threshold) relation $\beta=\xi \, u_2$, which we are going to use in the algorithm. The $C^1$ assumption is translated by $\gamma_1\, f(u_1;\mu,\sigma)=\gamma_2 \, e(u_1;\lambda)$, $\gamma_2 \, e(u_2;\lambda)=\gamma_3\, g(0;\xi,\beta)$, and similar equalities when considering the derivatives of $f,e$ and $g$, respectively.
Therefore, after some computation, we obtain: \begin{equation}\label{eq:relations-Param} \left\{ \begin{array}{ll} \beta=\xi \, u_2; \quad \lambda=\frac{1+\xi}{\beta}; & \gamma_2=\Big[ \xi \, e^{-\lambda\,u_2} + \left(1+\lambda\,\displaystyle \frac{ F(u_1;\mu,\sigma)}{f(u_1;\mu,\sigma)} \right) e^{-\lambda\,u_1}\Big]^{-1}; \\ \lambda \sigma^2 u_1-\log u_1=\sigma^2-\mu; &\quad \gamma_1=\gamma_2 \,\frac{e(u_1;\lambda)}{f(u_1;\mu,\sigma)}; \quad \gamma_3=\beta\,\gamma_2 \, e(u_2;\lambda). \end{array} \right. \end{equation} Those relations, where $F(\cdot;\mu,\sigma)$ denotes the LN cumulative distribution function, help reduce the size of the vector of parameters to be estimated from 10 to 4, namely $[\mu,\sigma,u_2,\xi]$. Then, we run the iterative algorithm for the LN-E-GPD model to estimate those 4 parameters (the other 6 being deduced via \eqref{eq:relations-Param}). The model parameters are estimated via an iterative algorithm adapted from that developed for the G-E-GPD model, described in detail in Debbabi et al., whose convergence has been studied analytically and numerically. We recall here the main principle (see the pseudo-code in \cite{Kratz2019}, Section 2.4.1): We initialize the parameters of the distribution body and the threshold $u_2$ to estimate in a first iteration the tail index $\xi$. Then, we fix the latter at this first value and estimate the other parameters. This back-and-forth process (between body and tail) is iterated by minimizing together, with the L-M technique, two distances between empirical and model distributions, one for the whole distribution and the other for the tail, until convergence. If the main principle holds for the new model, it is also worth noticing that a critical point, which highlights the difference between modeling the bulk data by a Gaussian or a lognormal distribution, lies in the choice of initial parameters leading the algorithm to convergence.
Indeed, we recall that for G-E-GPD, the Gaussian mean corresponds to the distribution mode, which gives a nice strategy to initialize the Gaussian parameters. Unfortunately, it is no longer the case for the lognormal distribution, for which we have tested several techniques to obtain an initialization that holds up. Moreover, this algorithm provides an additional flexibility compared to a two-component model: If an observed phenomenon under study were well explained by two components only, then the two thresholds $u_1$ (junction point between the body and the exponential bridge) and $u_2$ (junction point between the exponential bridge and the GPD) would collapse into one during the calibration. This has been shown for the G-E-GPD model in \cite{Debbabi2017}, providing the earlier G-GPD model introduced in \cite{Debbabi2014}. Here, we also show the same property for the LN-E-GPD model, conducting a series of experiments based on Monte Carlo simulations (see Appendix~\ref{App-test2and3components}), leading to a LN-GPD model (with non uniform weights for each component), which is a generalized version of the Czeledin distribution (a LN-Pareto distribution with specific weights) introduced in \cite{Knecht2003} and used in the cyber case by \cite{Eling2019}. It demonstrates the outperformance of the three-component model\footnote{Note that this general method and its two algorithmic versions will be part of a statistical software package. Meantime, the R code is available upon request.}. Hence, the two-component model should not be a purposely chosen one, but come as a specific subcase of a general model that has been calibrated, moreover in an automatic way, without resorting to costly computational techniques or to standard EVT techniques, recognized as oversensitive to the threshold above which observations are considered as extremes. This points out the strength of our approach, which lies in its generality, simplicity, and self-calibrating property.
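As a numerical sanity check of the relations \eqref{eq:relations-Param}, one can start from the four free parameters $[\mu,\sigma,u_2,\xi]$, deduce the remaining ones, and verify that the resulting three-component density has total mass one; a sketch with hypothetical parameter values (chosen for illustration, not calibrated on the GN data):

```python
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import brentq

# Hypothetical free parameters [mu, sigma, u2, xi] (illustrative values, not calibrated).
mu, sigma, u2, xi = 6.0, 1.5, 5000.0, 0.7

beta = xi * u2                  # beta = xi * u2 (heavy-tail relation)
lam = (1.0 + xi) / beta         # lambda = (1 + xi) / beta

# C^1 junction condition at u1: lambda*sigma^2*u1 - log(u1) = sigma^2 - mu.
c1 = lambda u: lam * sigma**2 * u - np.log(u) - (sigma**2 - mu)
u1 = brentq(c1, 1000.0, u2)     # the equation can have two roots; we take the larger one here

f = lambda x: lognorm.pdf(x, s=sigma, scale=np.exp(mu))   # lognormal pdf
F = lambda x: lognorm.cdf(x, s=sigma, scale=np.exp(mu))   # lognormal cdf
e = lambda x: lam * np.exp(-lam * x)                      # exponential pdf

gamma2 = 1.0 / (xi * np.exp(-lam * u2)
                + (1.0 + lam * F(u1) / f(u1)) * np.exp(-lam * u1))
gamma1 = gamma2 * e(u1) / f(u1)
gamma3 = beta * gamma2 * e(u2)

# Total mass: lognormal part up to u1, exponential bridge on [u1, u2], full GPD tail.
mass = gamma1 * F(u1) + gamma2 * (np.exp(-lam * u1) - np.exp(-lam * u2)) + gamma3
print(f"u1 = {u1:.1f}, total mass = {mass:.12f}, weight sum = {gamma1 + gamma2 + gamma3:.3f}")
```

The total mass equals one exactly (up to floating-point error), which confirms that the normalization of $h$ is encoded in the expression of $\gamma_2$; one also recovers $\gamma_1+\gamma_2+\gamma_3\ge 1$, as stated below Equation~\eqref{eq:LNEGPD}.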
\vspace{-2ex} \subsection{Assessing the parameters estimation via a re-sampling technique} \label{ss:jackknife} \vspace{-2ex} Another important input in the extension of this method is the construction of confidence intervals for the estimated parameters via a re-sampling technique, as well as the introduction of a better visualizing tool for the tail fit (see Section~\ref{ss:appliSeverity} and the right plot of Figure~\ref{fig:cdf-positiveDamages}); both elements have been introduced in the software package under construction. To provide confidence intervals for the estimation of the model parameters, we revisit the Jackknife method (see~\cite{Kunsch1989}), which measures the variability of the estimation across sub-samples. This is one of the earliest re-sampling techniques, and it is better suited to a large number of observations than the standard bootstrap (which we also ran to check that we would obtain the same results; it was the case, but it took a few days of computation, confirming the advantage of the Jackknife in this context). Our main focus is on the tail of the distribution, since it is more difficult to estimate than features of the body such as the mean and variance. Thus, we consider the three parameters of the GPD component, namely the tail index $\xi$, the scale parameter $\beta$ and the exceedance threshold $u_2$. To define a numerical confidence range, we randomly build $m=10$ subsamples and run the calibration algorithm of the hybrid model on each one. Each subsample is constructed in the following way: we omit randomly selected data points that amount to $10$\% of the original dataset of size $n=60,985$, making sure that each observation is omitted in only one of the 10 subsamples, while being used in the 9 others.
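As an illustration, the subsample construction just described, together with the Jackknife aggregation detailed next, can be sketched as follows (a minimal Python sketch with generic helper names of our own, not the R implementation mentioned above):

```python
import numpy as np

def jackknife_subsamples(n, m=10, seed=0):
    """Split {0, ..., n-1} into m disjoint folds; subsample i keeps all
    indices except fold i, so each observation is omitted in exactly one
    of the m subsamples and used in the other m-1 (delete-d scheme)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), m)
    return [np.setdiff1d(np.arange(n), fold) for fold in folds]

def jackknife_sd(estimates):
    """Estimated standard deviation of the full-sample estimator from the
    m subsample estimates: sqrt((1 - 1/m) * sum((e_i - mean(e))^2))."""
    e = np.asarray(estimates, dtype=float)
    m = len(e)
    return float(np.sqrt((1.0 - 1.0 / m) * np.sum((e - e.mean()) ** 2)))

# toy usage: 100 observations, 10 subsamples of 90 indices each
subs = jackknife_subsamples(100, m=10)
```

Each calibration is then run on the rows indexed by one element of `subs`, and the resulting parameter estimates are fed to `jackknife_sd`.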
The estimation results obtained with the Jackknife method are based on the average of the 10 estimates obtained on each subsample. Taking the example of the tail index, its estimator (through this method) denoted by $\bar{\xi}^J$ is defined by $\displaystyle \bar{\xi}^J = \frac{1}{m}\sum_{i=1}^m \hat{\xi}_i^J $, where $\hat{\xi}_i^J$ is the estimator of $\xi$ obtained on the $i$-th subsample and $m$ is the number of subsamples (here, $m=10$). Similarly, we can compute the standard deviation of these Jackknife parameter estimates (see~\cite{Yang1986}) to obtain an estimation of the standard deviation of the estimator over the whole sample of size $n$. Returning to the example of the parameter $\xi$, the estimated standard deviation of the estimator $\hat{\xi}$ (over the whole sample) is defined by \begin{equation} \widehat{\sigma(\hat\xi)} = \sqrt{(1-1/m)\sum_{i=1}^m (\hat{\xi}_i^J- \bar{\xi}^J)^2 }. \end{equation} As $\widehat{\sigma(\hat\xi)}/ \sigma(\hat\xi)\, \rightarrow 1$, we define the 95\% variability $a_{95\%}^J=\Phi^{-1}(0.975)\, \widehat{\sigma(\hat\xi)}$ (assuming asymptotic normality) and the confidence range displayed in Table~\ref{tab:Jack} is expressed as \begin{equation} \hat{\xi} - a_{95\%}^J ~~\le~~\xi ~~\le~~ \hat{\xi} + a_{95\%}^J. \end{equation} A similar procedure can be applied to all parameters. This last step completes the algorithm, which provides parameters calibrated on a general model, with their CI, in a fast and reliable way. \vspace{-2ex} \section{Application to Cyber Data} \label{sec:appliCyber} \vspace{-2ex} We start, in Section~\ref{ss:appliSeverity}, by considering the data field `damages', which provides the severity of the cyber attack. This focus on the financial consequences of cyber attacks, i.e. the amounts, and not on their causes, is due to three reasons: First, the financial risk must be well understood to provide good insurance covers.
Second, as the amounts have stable statistical properties over time (see Figure~\ref{fig:boxplot-montant}), a static univariate distribution will give a good picture of the severity variable. Third, the amounts constitute a large enough set of observations so that the extremes can be well modelled. For this latter reason, we also mix all types of cyber attacks, rather than restricting ourselves to a specific one. The investigation of the causes will be the object of future studies. Recall also that we were able to double-check manually the information given on the 'amounts' variable in the GN database for the tail (for the amounts above 40,000\EUR, corresponding to the quantile of order 98.2\%). This is why our study concerns the modelling of the extreme amounts, including the frequency of the occurrence of extreme damages (above a high threshold); see Section~\ref{ss:Poisson}. We will investigate further the multivariate modelling, once the information given on all variables has been carefully checked. Once a model for the damages has been calibrated, we can use it for risk management purposes, as for instance evaluating how much capital is required to cover cyber risk. An illustration is given in Section~\ref{ss:RM}. We are also interested in the tail modelling of the various types of attacks listed in the GN database. The idea is to check whether the tail index could be used as a discriminating criterion between various forms of cyber complaints. A first exploratory attempt is developed in Section~\ref{ss:classes}, considering the three most frequent types. The first class is preponderant compared to the other two. Nevertheless, the robustness of the parameters estimation we observed with our method makes it possible to apply the model to those latter classes of small size (yet still larger than most samples considered so far in the cyber literature).
\vspace{-2ex} \subsection{Application to the damage severity} \label{ss:appliSeverity} \vspace{-2ex} Based on the empirical results obtained for the damage severity in Table~\ref{tab:statDesc}, with characteristics specific to heavy-tailed phenomena, we naturally look for a model able to capture such a feature and consider our flexible hybrid model, defined in \eqref{eq:LNEGPD} and \eqref{eq:relations-Param}, which we calibrate on the positive damages using the iterative algorithm discussed in Section~\ref{sec:model}. The obtained estimates are given in Table~\ref{tab:param-PositiveDamages}, where we observe that the LN-E-GPD model reduces to two components, as the two thresholds on either side of the exponential bridge collapse to $u_1=u_2$. This is interesting, as such a two-component model has already been suggested for cyber data (see e.g. \cite{Zeller2020}). The threshold $u_2$, automatically evaluated via the hybrid model, corresponds to a quantile of order $96.6$\%. The GPD fitted above $u_2$ exhibits a shape parameter $\alpha=1/\xi=1.23$, indicating a heavy tail with a finite first moment but no finite variance. \begin{table}[htbp] \centering \caption{\label{tab:param-PositiveDamages} \sf Evaluated parameters of the hybrid LN-E-GPD model for positive damages. \vspace{0.7ex}} {\small \begin{tabular}{lcccccccccc} \hline Model & $\mu$ & $\sigma$ & $\gamma_1$ & $u_1$ & $\lambda$ & $\gamma_2$ & $\xi$ & $u_2$ & $\beta$ & $\gamma_3$ \\ \hline \hline &&&&&&&&&&\\ LN-E-GPD & 6.27 & 1.54 & 99.4\% & 9,999.34 & 0.0002 & 17.5\% & 0.81 & 9,999.34 & 8,087.11 & 3.4\% \\ &&&&&&&& q(96.6\%) &&\\ \hline \end{tabular} } \end{table} Fitting the LN-E-GPD model on the damage severity data, we obtain the output of the algorithm given in Figure~\ref{fig:cdf-positiveDamages} and Table~\ref{tab:fit-PositiveDamages}.
We also exhibit the fit of the G-E-GPD model, replacing the Lognormal component with a Gaussian (G) one, to illustrate its inadequacy in accounting for the asymmetry of the data, due to the slow convergence in the CLT. As expected, this impacts the whole fit, including that of the tail, as can be observed, even if this impact is mitigated by the bridge component. In Figure~\ref{fig:cdf-positiveDamages}, we provide two types of graphs, one (left) giving the empirical cdf and the two fitted distributions (with a logarithmic scale on the $x$-axis), the other (right) displaying the corresponding survival distributions ($1-F$) on a double logarithmic scale (i.e. for the $x$- and $y$-axis). \vspace{-2ex} \begin{figure}[h] {\resizebox*{8cm}{!}{\includegraphics[width=7cm,height=7cm]{cdf-damages-2020oct19.pdf}}} \hfill {\resizebox*{8cm}{!}{\includegraphics[width=7cm,height=7cm]{survival-cdf-damages-2020oct22.pdf}}} \parbox{500pt}{\caption{\label{fig:cdf-positiveDamages} \sf\small Cdf (left plot, with a log scale for the $x$-axis) and survival cdf (right plot, with a log scale for both the $x$ and $y$ axes) of the positive damages. The empirical cdf is represented in black, the LN-E-GPD in light (red) and the G-E-GPD in dark (blue). The dashed vertical lines (with the same color code) correspond to the thresholds between the components of the hybrid model considered, while the continuous vertical (green) line points out the 500\euro -threshold.}} \end{figure} The interest of taking a double logarithmic scale is that, in such a representation, the GPD survival function becomes a decreasing linear function, whose slope is the negative of the shape parameter, i.e. $-\alpha$.
Indeed, considering the survival GPD for $x \,(\ge u_2)$, namely $1-G_{\xi,\beta}(x)= \left(1+\frac{\xi}{\beta}\, x\right)^{-1/\xi}$, and taking its logarithm gives $$ \log\left(1-G_{\xi,\beta}(x)\right)\,=\, -\frac1{\xi} \log\left(1+\frac{\xi}{\beta}\,x\right) \underset{x\,\text{large enough}}{\sim} -\frac1{\xi}\log\left(\frac{\xi}{\beta}\right)-\frac1{\xi}\log x \,=:\, y(\log(x)). $$ The function $y(\cdot)$ is then a linear function of $\log x$, with slope $-1/\xi= -\alpha$ and intercept $-\frac1{\xi}\log\left(\frac{\xi}{\beta}\right)$. This facilitates the comparison between the empirical and fitted tail distributions, as any mismatch appears clearly. It is an interesting alternative to QQ-plots. Looking at Figure~\ref{fig:cdf-positiveDamages}, we observe that the LN-E-GPD model fits the empirical data better than the G-E-GPD one, as expected. This is confirmed by the errors reported in Table~\ref{tab:fit-PositiveDamages}, where the total error of the latter is almost twice that of the LN-E-GPD for both the root mean square error (RMSE) and the mean absolute error (MAE) (1.51\% versus 0.80\% for the RMSE, and 1.33\% versus 0.66\% for the MAE). \begin{table}[htbp] \centering \parbox{360pt}{\caption{\label{tab:fit-PositiveDamages} \sf\small Measuring the goodness of fit for the 2 considered hybrid models on the positive damages. The total and tail errors are computed using respectively the root mean squared error (RMSE) and the mean absolute error (MAE).
\vspace{0.7ex}}}\\ \begin{tabular}{lccccc} \hline &&&&&\\[-1.5ex] Model & \multicolumn{2}{c}{Total error in \%} & \multicolumn{2}{c}{Tail error in \%} & BIC criterion \\ & RMSE & MAE & RMSE & MAE & \\[1ex] \hline\hline &&&&&\\ LN-E-GPD & 0.80 & 0.66 & 0.94 & 0.79 & -255,927\\ &&&&&\\ \hline &&&&&\\ G-E-GPD & 1.51 & 1.33 & 1.60 & 1.56 & -222,225\\[2ex] \hline \end{tabular} \end{table} Focusing on the distribution tail, the right plot of~Figure~\ref{fig:cdf-positiveDamages} clearly depicts a better fit for the LN model than for the Gaussian model. We see that the Gaussian model overestimates the heaviness of the tail, while the LN model fits well the linear representation of the empirical tail (lower right part). The superiority of the LN hybrid model over the Gaussian one is also confirmed by the Bayesian Information Criterion (BIC), which is 15\% lower for the first model than for the second one. The shape parameter of the tail distribution ($1/\xi$), estimated as 1.24, indicates a rather heavy tail, as already commented. \begin{table}[h] \centering \caption{\label{tab:Jack} \sf\small Variability of the GPD parameters estimation using the Jackknife method. \vspace{0.7ex}} \begin{tabular}{lccc} \hline &&&\\[-1.5ex] & $\alpha$ & $\beta$ & $u_2$ \\[0.5ex] \hline\hline &&&\\[-0.5ex] Estimation & 1.236 & 8,087 & 9,999\\ 95\% Confidence Range (CR) & [1.213 ; 1.260] & [7,929 ; 8,245] & [9,980 ; 10,018] \\[1.5ex] \hline \end{tabular} \end{table} We observe in Table~\ref{tab:Jack} that the results of the fit are robust across the subsamples. The choice of $u_2$ is very stable (0.2\% variation), while the scale parameter $\beta$ varies within $\pm 2$\% and the shape parameter $\alpha$ within $\pm 1.8$\%.
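The linearity of the GPD survival function on a double logarithmic scale, used above as a visual diagnostic, can also be checked numerically: the least-squares slope of the log-survival against $\log x$ approaches $-1/\xi=-\alpha$. A Python sketch, with illustrative parameter values close to the calibrated ones (not the paper's code):

```python
import numpy as np

# GPD survival in the parameterization used here: 1 - G(x) = (1 + xi*x/beta)^(-1/xi)
def gpd_survival(x, xi, beta):
    return (1.0 + xi * x / beta) ** (-1.0 / xi)

xi, beta = 0.81, 8087.0          # illustrative values close to the fit
x = np.logspace(7, 10, 200)      # x large enough for the asymptote to apply
# least-squares line of log-survival against log(x): slope ~ -1/xi = -alpha
slope, intercept = np.polyfit(np.log(x), np.log(gpd_survival(x, xi, beta)), 1)
```

With these values the fitted slope is close to $-1/0.81\approx -1.23$, matching the linear lower-right part of the empirical tail in the survival plot.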
Further, to stress the stability of the result and to be closer to a realistic frame (positive values below 20\euro\, do not really make sense in this context), we also ran the iterative algorithm to fit the LN-E-GPD model on a second sample, obtained by removing the 583 damages below 20\euro\, (from the sample of size 60,985). Note that those removed data correspond, most probably, to data badly reported in the database, due to a wrongly placed decimal point. Once corrected, they would be above 20 (implying that the second sample cannot be considered as a censored sample). Given the method, such a sample should not change the tail of the damages distribution. Indeed, we found an estimate of 1.26 for the shape parameter (compared with 1.24 when considering the positive damages), hence well within the uncertainty range. To compare the evaluations of the tail heaviness, we also introduce the classical Hill estimator (see \cite{Hill1975}) for the tail index $\xi$, defined as \begin{equation} \hat\xi=H_{k,n} =\frac 1k \sum_{i=0}^{k-1} \log \left(\frac{X_{n-i,n}}{X_{n-k,n}}\right) \end{equation} where $\displaystyle X_{n,n}=\max_{1\le i\le n} X_i \ge X_{n-1,n}\ge\cdots \ge X_{n-k+1,n}\ge X_{n-k,n}$ are the largest order statistics of the heavy-tailed observations (here, the damage severity) $(X_1,\cdots,X_n)$, with $k$ such that $X_{n-k,n}=u_2$. The main problem faced when using this type of tail index estimator is to evaluate $u_2$, {\it i.e.} to select the number $k$ of largest order statistics; here, we use the threshold $u_2$ determined automatically by our algorithm, instead of going through the standard EVT graphical (supervised) methods.
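The Hill estimator above is straightforward to implement; a Python sketch, with a sanity check on simulated Pareto data (an illustration of ours, not run on the GN database), could read:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator H_{k,n} of the tail index xi, based on the k largest
    order statistics: (1/k) * sum_{i=0}^{k-1} log(X_{n-i,n} / X_{n-k,n})."""
    xs = np.sort(np.asarray(x, dtype=float))   # ascending order statistics
    n = len(xs)
    # xs[n-k:] are the k largest values; xs[n-k-1] is X_{n-k,n}
    return float(np.mean(np.log(xs[n - k:] / xs[n - k - 1])))

# sanity check on simulated data with a true tail index of xi = 0.8:
rng = np.random.default_rng(1)
sample = rng.pareto(1 / 0.8, size=200_000) + 1.0   # strict Pareto, alpha = 1/0.8
xi_hat = hill_estimator(sample, k=5_000)
```

In practice, `k` is the number of observations above the threshold $u_2$ selected by the calibration algorithm.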
The Hill estimator is weakly consistent for heavy-tailed data and satisfies, under some second order property, $\displaystyle\sqrt{k}\, \left( H_{k,n} - \xi\right)\ \underset{n\to\infty}{\stackrel{d}{\longrightarrow}} N\left(0,\, \xi^{2}\right)$, from which we build an asymptotic confidence interval (CI) for $\hat\xi= H_{k,n}$. Considering $u_2$ as the 96.6\% quantile (see Table~\ref{tab:param-PositiveDamages}), we obtain the Hill estimate $\widehat H_{k,n} =0.962$, with 95\% asymptotic confidence interval $[0.55; 1.37]$. Note that the estimate obtained via the algorithmic method (0.81) lies within this confidence interval. \vspace{-2ex} \subsection{A Poisson-GPD model for the Severity and Frequency of Extreme Damages} \label{ss:Poisson} \vspace{-2ex} In the previous subsection, we focused on the modelling of the damage severity, with a particular interest in the extreme damages. Now, we aim at fully modelling those extremes, taking into account not only their magnitude but also their frequency. To do so, if the extreme observations constitute a stationary time series, we can introduce a Poisson-GPD model combining a one-dimensional Poisson process with parameter $\lambda(>0)$, for modelling the frequency at which exceedances over the threshold $u$ occur, with a GPD for representing their magnitude (see \cite{Smith2003} for details). The distribution of this model is expressed as: \begin{equation}\label{eq:G-Pmodel} H_u(x; \xi,\beta,\lambda):=\exp\left\{-\lambda\left( 1+\xi \,\frac{x-u}{\beta} \right)_+^{-1/\xi}\right\},\quad x>u.
\end{equation} If there is some non-stationarity in the data, as, for instance, a change over time in the frequency of exceedances, or an increase of the severity of damages due to inflation, then time variability should be introduced in the scale parameter of the GPD, say $\beta(t)$, and in the Poisson intensity parameter, say $\lambda(t)$.\\[1ex] Turning to our dataset, let us look at the frequency of extremes exceeding the threshold $u_2$ evaluated in the previous subsection (see Table~\ref{tab:param-PositiveDamages}). We consider the frequency with various time horizons, from 1 to 4 months. Whatever the chosen horizon, we do not observe any clear trend, as illustrated in Figure~\ref{fig:freq-extremes} for the quarterly frequency. In this figure, we present the quarterly frequency (left plot) and the quarterly percentage (right plot) of exceedances. The percentage allows us to differentiate a trend in the frequency of damages from a trend in the exceedances. On the plots, there is no obvious trend in either case (frequency and percentage). \begin{figure}[h] \centering {\resizebox*{7cm}{!}{\includegraphics{Quarterly_Number_of_Exceedances.pdf}}} \hspace{12pt} {\resizebox*{7cm}{!}{\includegraphics{Quarterly_Percentage_of_Exceedances.pdf}}} \parbox{450pt}{\caption{\label{fig:freq-extremes} \sf\small Quarterly frequency of damages (left $y$-axis) with magnitude larger than $u_2$ and the corresponding quarterly percentage of extreme damages (right $y$-axis). The $x$-axis presents the various quarters.}} \end{figure} To assess this statement, we perform stationarity tests in R, namely the Augmented Dickey-Fuller and the Phillips-Perron unit root tests, on the time series of exceedances and on the monthly frequency of exceedances. For the exceedances time series, both tests strongly reject their null hypothesis, from which we conclude stationarity.
For the frequency time series, although we considered a monthly frequency to have more observations than for the quarterly one, the number of observations (46) is still too small to obtain a statistically significant and conclusive result. We display the obtained results in Table~\ref{tab:stationarity}. \begin{table}[htbp] \centering \parbox{330pt}{\caption{ \label{tab:stationarity}\sf\small Results of various stationarity tests on the exceedances dataset. \vspace{0.7ex}}} \small \begin{tabular}{lrcrl} \hline &&&&\\[-1.5ex] \multicolumn{1}{c}{\textbf{Stationarity Test}} & \multicolumn{1}{c}{\textbf{Test Value}} & \textbf{Lag} & \multicolumn{1}{c}{\textbf{p-value}} & \multicolumn{1}{l}{\textbf{Interpretation}} \\[0.5ex] \hline \hline &&&&\\[-1.5ex] \textit{For Exceedances } {\footnotesize (2,994 observations)}&&&&\\[0.2ex] Augmented Dickey-Fuller & -10.42 & 14 & < 0.01 & non-stationarity strongly rejected\\ Phillips-Perron & -1,290.2 & 9 & < 0.01 & non-stationarity strongly rejected\\ &&&&\\[-1.6ex] \multicolumn{2}{l}{\textit{For Monthly Exceedances Frequency} {\footnotesize(46 observations)}} &&&\\[0.2ex] Augmented Dickey-Fuller & -2.58 & 3 & 0.34 & non-stationarity not rejected \\ Phillips-Perron & -22.52 & 3 & 0.02 & non-stationarity rejected \\ \hline \end{tabular}% \end{table}% Therefore, due to this stationarity, we consider the Poisson-GPD model with distribution \eqref{eq:G-Pmodel} and calibrate it on the exceedances above the threshold $u_2$. As time unit for the Poisson model, given that our dataset covers 45 months (ignoring the last month of our dataset, namely April 2019, to keep full quarters), we choose the quarterly frequency of exceedances (above $u_2$), because it is the minimum interval size giving enough values. In the considered sample, there are 2,994 exceedance observations. Then, we estimate the parameters $(\lambda, \xi, \beta)$ of the model on those observations, using the maximum likelihood method (run in R).
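Since the Poisson and GPD parts of the likelihood factorize, the maximum-likelihood fit run in R can be sketched as follows (an illustrative Python negative log-likelihood on simulated data, with helper names of our own; for simplicity we use the analytic MLE of $\lambda$, the average count, rather than a numerical optimizer):

```python
import numpy as np

def pgpd_negloglik(lam, xi, beta, counts, excesses):
    """Negative log-likelihood of the Poisson-GPD model: exceedance counts
    per period are Poisson(lam); excesses over the threshold are GPD(xi, beta).
    Constant log(n!) terms of the Poisson part are dropped."""
    if lam <= 0 or beta <= 0 or np.any(1 + xi * excesses / beta <= 0):
        return np.inf
    ll_pois = np.sum(counts * np.log(lam)) - len(counts) * lam
    # GPD log-density: -log(beta) - (1/xi + 1) * log(1 + xi*y/beta)
    ll_gpd = (-len(excesses) * np.log(beta)
              - (1 / xi + 1) * np.sum(np.log1p(xi * excesses / beta)))
    return -(ll_pois + ll_gpd)

# toy data: 15 quarters, true parameters lam=200, xi=0.8, beta=8000
rng = np.random.default_rng(2)
counts = rng.poisson(200, size=15)
u = rng.uniform(size=counts.sum())
excesses = 8000 / 0.8 * (u ** (-0.8) - 1)   # inverse cdf of GPD(0.8, 8000)
lam_hat = counts.mean()                      # analytic MLE of the Poisson rate
```

Minimizing `pgpd_negloglik` over $(\lambda,\xi,\beta)$ with any standard optimizer reproduces the joint fit.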
The estimates are presented in Table~\ref{tab:PGP-MLEparam}; we observe that the estimate of the exceedance rate $\lambda$ is relatively close to the average quarterly frequency, namely 2,994/15=199.6 exceedances per quarter (45 months corresponding to 15 quarters), which lies in the 95\% confidence interval of $\lambda$. We also notice that the estimate of the tail index $\xi$ is higher than the one ($0.81$) computed in the previous section (see Table~\ref{tab:param-PositiveDamages}) and that the latter lies outside of the Jackknife confidence range. \begin{table}[b] \begin{center} \parbox{450pt}{\caption{\label{tab:PGP-MLEparam}\sf\small Estimation of the parameters of the Poisson-GPD model for the $N=2,994$ exceedances above the threshold $u_2= 9,999.34$. The confidence range (CR) for $\xi$ and $\beta$ is obtained via the Jackknife method.\vspace{0.7ex}}} \begin{tabular}{cccc} \hline &&&\\[-1.5ex] Parameter & Exceedance rate & Scale parameter & Tail index \\ & $\lambda$ & $\beta$ & $\xi$ \\ &&&\\[-1.5ex] \hline \hline &&&\\[-1.5ex] ML estimates & 187.07 & 8,087.63 & 0.983\\ CR 95\% & [171.84 ; 227.36] & [8,087.53 ; 8,087.73] & [0.930 ; 1.036]\\ &&&\\[-1.5ex] Hill estimate & & & 0.962 \\ CI 95\% & & & [0.55;1.37] \\ \hline \end{tabular} \end{center} \end{table} Let us plot in Figure~\ref{fig:surv-extremes}, using a double log-scale, the survival GPD of the Poisson-GPD model with parameters estimated via the ML method (given in Table~\ref{tab:PGP-MLEparam}) (dark/blue line), and, for comparison, the survival GPD of the LN-E-GPD model calibrated via our algorithm (light/red dashed line), as well as the one calibrated when using the Hill estimate for the tail index (light/yellow dotted line). 
\begin{figure}[h] \center \includegraphics[width=8cm,height=8cm]{Survival-cdf-Exc_Quarterly_u_9999.pdf} \parbox{430pt}{\caption{\label{fig:surv-extremes} \small \sf Survival GPD for the extreme damages with parameters estimated from different methods: MLE for the Poisson-GPD model (dark/blue line), algorithmic method for the LN-E-GPD model (light/red dashed line), Hill estimator for the tail index in the LN-E-GPD model (light/yellow dotted line). In black, the empirical survival cdf. Double log scale representation.}} \end{figure} We clearly observe that the LN-E-GPD model calibrated with our algorithm provides, among the three models, the best overall fit for the tail of the distribution, while in the two other cases, the fit is better at the start of the tail, but then overestimates the fatness of the tail. Our method is especially designed to emphasize the tail observations, while the MLE method has to compromise between $\lambda$ for the Poisson part and the GPD parameters, putting the same weight on all the points. This is another, {\em a posteriori}, justification of the use of the algorithmic method. Nevertheless, the Poisson-GPD model gives a more complete view of the extremes. Now, if we use as Poisson-GPD parameters, on one hand, the GPD parameters evaluated by the algorithmic method, and, on the other hand, the Poisson intensity parameter $\lambda$ estimated by the empirical average quarterly frequency, the likelihood decreases by only $0.1\%$ relative to the maximum likelihood computed on the initial Poisson-GPD model, confirming that the two calibrations are very close. \vspace{-2ex} \subsection{Some consequences for risk management} \label{ss:RM} \vspace{-2ex} Recall our main focus on the tail of the distribution, as we want to characterize cyber risk by how large the probability of occurrence of extremes is. This information is particularly relevant for (re)insurance, to know how much capital is required to cover such risk. This is assessed with risk measures.
Using the regulatory risk measures and our model, we evaluate the standalone capital ({\it i.e.} without considering diversification benefits of the company risk portfolio), then compare our results to those obtained by standard EVT methods such as Hill's. The role of risk models in risk management practices is to help quantify both the liabilities of a (re)insurance company, through the mathematical expectation (important for the computation of the risk premium), and the capital, through risk measures. Our model takes into account both concerns, as it is built to provide a good modelling of both the bulk and the tail of the distribution. We estimated the mean and third quartile with our calibrated model; it reproduces well the empirical values, with an error of 0.2\% for the mean and 0.4\% for the third quartile. Moreover, the knowledge of the tail also helps better understand descriptive features of the underlying distribution of the data. On a finite sample, any statistical quantity that we estimate is finite; it does not mean that theoretical moments, including the expectation, exist. So, the first message to draw is that, most probably, the expectation of cyber risk exists, since $\alpha$ is significantly larger than 1, as observed in Table~\ref{tab:Jack} (see Section~\ref{ss:EVT} for the relation between moments and tail fatness). Recall that the finiteness of the expectation is a necessary condition for the risk to be insurable. Hence cyber risk, as explored in this database, satisfies this condition. Now, let us study the capital requirement for insurance solvency and the risk capital required from banks. We do it using the two regulatory risk measures, value-at-risk (VaR) and expected shortfall (ES), and compute VaR(99.5\%), according to Solvency II, and ES(97.5\%), for Basel 4.
Since we do not know the company portfolio, we evaluate the standalone capital (for cyber risk), which already gives an indication about the statistical nature of the considered risk. Recall the advantage of using a model (rather than empirical values): We can compute probabilities beyond the data present in the database. It also allows estimating risk measures (e.g. ES) in a sharper way, via an analytical formula. The risk measures being evaluated from the tail distribution, we consider the GPD estimated with our approach (hybrid model calibrated via our algorithm), and the 12 EVT methods based on the Hill estimator for the tail index proposed in the {\it tea} R-package, which also contains the references to the methods we quote. We refer to Table~\ref{tab:tea-results} of the Appendix, where the results obtained via these methods for the tail-threshold $u_2$ and the inverse of the tail index $\alpha=1/\xi$ are reported, then commented. Recall (see e.g.~\cite{McNeil2016}) that for $G\sim$ GPD$(\xi,\sigma(u_2))$ (where $0<\xi<1$), we have, for $p\ge G(u_2)$, with $\beta\sim \xi\,u_2$ for a high threshold $u_2$ (so that $p\to 1$), \begin{equation}\label{eq:VaR-GPD} VaR(p)= u_2 - \frac{\beta}{\xi}\left[1-\left( \frac{1-p}{1-G(u_2)}\right)^{-\xi} \right] \underset{p\to 1}{\sim} u_2 \left(\frac{ 1-p}{1-G(u_2)}\right)^{-\xi} \end{equation} and \begin{equation}\label{eq:ES-GPD} ES(p)=\frac{VaR(p)}{1-\xi}+ \frac{\beta - \xi\,u_2}{1-\xi} \underset{p\to 1}{\sim} \frac{VaR(p)}{1-\xi}. \end{equation} We use those relations to estimate VaR and ES from the calibrated GPD (with each method), replacing the parameters by their estimates and $G(u_2)$ by $G_n(u_2)= 1 - N_{u_2}/n$, where $G_n$ denotes the empirical counterpart of $G$, with $n$ the sample size and $N_{u_2}$ the number of observations above $u_2$. We denote those estimates by $\widehat{VaR}(p)$ and $\widehat{ES}(p)$, respectively.
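These relations translate directly into code. A hedged Python sketch (helper names are ours; $\zeta$ stands for the empirical exceedance probability $N_{u_2}/n$ substituted for $1-G(u_2)$, as in the text):

```python
# VaR(p) and ES(p) implied by a GPD fitted above the threshold u2,
# following the exact (non-asymptotic) formulas of the text.
def var_gpd(p, xi, beta, u2, zeta):
    """zeta = N_u2 / n, the empirical probability of exceeding u2."""
    return u2 - (beta / xi) * (1.0 - ((1.0 - p) / zeta) ** (-xi))

def es_gpd(p, xi, beta, u2, zeta):
    """Valid for 0 < xi < 1 (finite first moment)."""
    return var_gpd(p, xi, beta, u2, zeta) / (1.0 - xi) + (beta - xi * u2) / (1.0 - xi)
```

For instance, with the illustrative values $\xi=0.5$, $\beta=1$, $u_2=1$ and $\zeta=0.1$, one gets $VaR(99\%)=1+2(\sqrt{10}-1)\approx 5.32$ and $ES(99\%)\approx 11.65$.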
To estimate $ES(p)$ directly from the data (without using the calibrated GPD), we proceed as in \cite{Kratz2018} (in the context of backtesting ES), simply averaging $k$ quantiles starting from $VaR(p)$: \begin{equation}\label{eq:ES-numeric} \widetilde{ES}_{n,k}(p):= \frac1k \sum_{i=1}^{k} VaR(p_i),\quad \text{with}\quad p_i = p + \frac{i-1}{k}(1-p), \quad i=1,\ldots,k, \; k \in \mathbb{N}. \end{equation} In our case, we take a large $k$, as we are interested in a high precision of the numerical estimate, and choose $k=20,000$. Considering our algorithmic approach, we evaluate $VaR(99.5\%)$ and $ES(p)$ with $p=97.5\%$ and $99.77\%$, respectively, using \eqref{eq:VaR-GPD} for VaR, and two possible estimates for ES, namely \eqref{eq:ES-GPD} and \eqref{eq:ES-numeric} (averaging the VaR estimates from \eqref{eq:VaR-GPD}). The latter is chosen to avoid resorting a second time to the asymptotic relation in \eqref{eq:ES-GPD} between ES and VaR. We compare those estimates with the empirical values obtained directly from the data (using \eqref{eq:ES-numeric} for ES with empirical quantiles). Then we select, on one hand, the two EVT methods (among the 12 of the {\it tea} R-package) that look the most stable across samples with a threshold $u_2$ that remains below $q(99.90\%)$ (see Table~\ref{tab:tea-results} of the Appendix), namely the Reiss \& Thomas (2007) approach, which is very stable, and the Hall (1990) one; on the other hand, two other (less stable) EVT methods exhibiting a threshold $u_2$ closest to ours, namely AMSE, performed on our whole dataset (of size $n=60,985$), and Danielsson et al. (2001), performed on the 5,026 largest observations of the dataset (the only case for this method where $u_2$ is below $q(97.5\%)$ and yet high enough; we cannot apply the Pickands-Balkema-de Haan theorem for a threshold close to the 3rd quartile). The selection of the Reiss \& Thomas (2007) and Hall (1990) methods, because of their stability, avoids making any arbitrary choice.
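The numerical estimate \eqref{eq:ES-numeric} amounts to averaging a grid of VaR levels. A Python sketch, checked against the exact ES of a strict Pareto distribution (an assumption made only for this illustration, where the ES/VaR relation is exact):

```python
import numpy as np

def es_numeric(p, k, var_fun):
    """ES(p) approximated by the average of k VaR values at levels
    p_i = p + (i-1)/k * (1-p), for i = 1, ..., k."""
    levels = p + (np.arange(k) / k) * (1.0 - p)
    return float(np.mean(var_fun(levels)))

# check on a strict Pareto with tail index xi = 0.5, where
# VaR(s) = (1-s)^(-xi) and exactly ES(p) = VaR(p) / (1 - xi)
xi, p = 0.5, 0.975
es_hat = es_numeric(p, 20_000, lambda s: (1.0 - s) ** (-xi))
es_exact = (1.0 - p) ** (-xi) / (1.0 - xi)
```

In the empirical version, `var_fun` is replaced by the empirical quantile function of the sample.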
Nevertheless, the problem is that their threshold $u_2$ (corresponding to $q(99.6\%)$ and $q(99.77\%)$, respectively) is generally larger than ours and than VaR(99.5\%), meaning that comparisons of the regulatory quantities of interest obtained with the various methods become less direct. So, we will also compute $ES(99.77\%)$ to make the comparison straightforward. The two additional EVT methods, AMSE and Danielsson et al. (2001), provide the thresholds $u_2=q(97.47\%)$ and $u_2=q(97.4\%)$, respectively, which are closer to our threshold and allow for comparison between the various evaluations of $VaR(99.5\%)$, $ES(97.5\%)$, and, of course, also $ES(99.77\%)$. In addition, we add a specific case of the Hall method, where the threshold $u_2$ drops from about 160,000 to 80,000, so that we can compute all quantities of interest. When evaluating the risk measures with the respective parameter estimates, we express them as a factor of the empirical mean, to ease the comparison with other risks. Then, we compute the relative variation $\Delta$ between the empirical quantity (obtained directly on the data) and the quantity evaluated via the estimated GPD (with the different approaches, respectively). We report the results of this analysis in Table~\ref{tab:risk-measures}. \begin{table}[H] \centering \parbox{500pt}{\caption{\label{tab:risk-measures}\sf\small Estimates of $VaR(99.5\%)$ (via \eqref{eq:VaR-GPD}) and $ES(p)$ (via \eqref{eq:ES-GPD} and \eqref{eq:ES-numeric}, respectively) for $p=97.5\%$ and $99.77\%$, expressed as a multiplying factor of the mean (whose value is 3,476~\EUR) for various models. Comparison with the empirical values $\widetilde{VaR}(99.5\%)$ and $\widetilde{ES}(p)$ by computing the relative variation $\Delta$ in $\%$.
\vspace{0.7ex}}} \small \hspace*{-20pt} \begin{tabular}{|l|c|c|c|cc|c|cc|} \hline &&&&&&&&\\[-1.5ex] & $\widehat{VaR}$ & $\Delta$ & $p=97.5\%$ & \multicolumn{2}{c|}{$\Delta$ (in \%)} & $p=99.77\%$ & \multicolumn{2}{c|}{ $\Delta$ (in \%)} \\ & (99.5\%) & (in \%) & $\widehat{ES}(p)$ \, $\widetilde{ES}(p)$ & & & $\widehat{ES}(p)$ \, $\widetilde{ES}(p)$ & & \\ &&&&&&&&\\[-1.5ex] \hline \hline &&&&&&&&\\[-1.5ex] Empirical & $\widetilde{VaR}$= 25 & & 23 & & & 114 & & \\ \hline \footnotesize{Our model ($\alpha=1.24$)} & 13 & -46.5 & 19 $\quad$ 17 & -17.1 & -28.2 & 132 $\quad$ 104 & 15.9 & -8.7 \\ \footnotesize{AMSE ($\alpha=1.17$)} & 24 & -3.7 & 43 $\quad$ 33 & 85.1 & 43.6 & 331 $\quad$ 227 & 190.6 & 99.1 \\ \footnotesize{Danielsson-al.(01) ($\alpha=1.15$)} & 24 & -2.9 & 47 $\quad$ 35 & 101.7 & 49.6 & 373 $\quad$ 242 & 227.1 & 112.1 \\ \footnotesize{Hall (1990)($u_2\!\!=\!\!q(99.45\%)$;$\alpha\!=\!1.37$)} & 25 & -2 & 28 $\quad$ 26 & 21.4 & 14.4 & 159 $\quad$ 142 & 39.9 & 24.5 \\ \footnotesize{Hall (1990) ($\alpha=1.61$)} & $-$ & $-$ & $-$ & $-$ & $-$ & 119 $\quad$ 114 & 4.2 & -0.3 \\ \footnotesize{Reiss \&Thomas(07) ($\alpha=1.47$)} & $-$ & $-$ & $-$ & $-$ & $-$ & 130 $\quad$ 121 & 14.2 & 5.9 \\ \hline \end{tabular} \end{table}% The results presented in this table illustrate the difficulty of modelling our data, due to its high noise content. This is well illustrated by the survival plot in Figure~\ref{fig:cdf-positiveDamages}. Moreover, an important difficulty with the extremes reported in our database is the strong bias towards round numbers in the filed complaints. This is particularly sensitive for extremes, as we might have an accumulation of large values at one round number, followed by one complaint with a precise number, introducing artificial discontinuities. Models will result from a compromise between various properties of the data (very extreme values, moderately extreme values, ...).
For instance, it seems that our model reflects well the tail of the distribution whatever the chosen zone, but not pointwise (via VaR). Concerning the VaR(99.5\%), we observe that our method provides a poor estimate, while the two other EVT methods (AMSE and Danielsson et al.) and the specific case of Hall's method give accurate estimates (a slight underestimation). The other two stable methods do not allow for the computation of the VaR at this threshold. However, taking only one point (as we do here for the $99.5$\% quantile) does not properly reflect the tail of the distribution. To understand this phenomenon further, we computed several quantiles and observed that the estimates fluctuate considerably around the empirical values, with good estimations (for instance, for VaR(90\%), with an empirical value of 4,300 \EUR against 4,155 estimated with our model; relative error of -3.4\%) as well as bad (under and over) ones (as in this example at 99.5\%). This becomes more obvious when examining the right plot in Figure~\ref{fig:cdf-positiveDamages}, where we see that the empirical values are underestimated for low quantiles and become well fitted for quantiles above 99.9\% ($1e-03$ on the graph). This is why we turn to ES, which gives a much better picture of the tail, as is well recognized nowadays. This has long been debated in regulation; Basel 4 moved from VaR to ES for market risk. It might be even more needed for cyber risk, as we can observe here. Evaluating ES in the two described ways, we observe that our tail modelling reflects the data better, whatever $p$, while the AMSE and Danielsson et al. methods grossly over-evaluate ES and give different results according to the way it is estimated. For $p=99.77\%$, we can evaluate ES with the methods by Hall and Reiss \& Thomas, with which we also obtain good results, especially for the Hall estimate. This very good fit by Hall's method might be explained by the fact that the level 99.77\% is that of his estimated threshold.
For this latter method, when considering the specific case where $u_2=q(99.45\%)$, while the quantile at $u_2$ is perfectly fitted, ES at 97.5\% gives results similar to those of our method, but loses accuracy as $p$ increases. Finally, looking at the two ways ES is estimated based on the GPD model, we observe, as often in practice (see e.g. the discussion in \cite{McNeil2016}), that the numerical estimation based on averaging quantiles generally provides a smaller $\Delta$ (taking the signs into account) than the asymptotic relation \eqref{eq:ES-GPD} between VaR and ES. Clearly, more research is needed to produce credible values for solvency risk measures like VaR, while ES is better estimated, as it concerns values beyond 97.5\%, a region of the tail that is better captured by our model. We also see that the factors for the empirical risk measures are quite high (25 times the mean for the VaR(99.5\%) and 61 times the mean for the ES(97.5\%)), which is a sign that we are confronted here with very volatile risks; even the EVT models are not able to catch this for the VaR. In natural catastrophes like windstorms or floods, the factor for VaR(99.5\%) is usually around 20 times the mean. For earthquakes, values around 30 times the mean are found. Therefore, when underwriting cyber risks, approaches implemented for natural catastrophes could be borrowed, such as developing IT systems to control the accumulation of exposures and to set limits on them. This will help diversify the risks, which is key to successfully underwriting extreme risks. This is also why we want to refine our understanding of cyber risk by differentiating it by type of attack, as presented in the next subsection.
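The two ways of evaluating ES mentioned here can be illustrated on a pure GPD tail: the closed-form VaR--ES relation versus a numerical average of quantiles over $(p, 1)$. A minimal sketch follows, where the scale \texttt{beta} is again a hypothetical placeholder (only the threshold, its quantile level, and the tail index are taken from the text):

```python
import numpy as np

# Two routes to ES(p) on a GPD tail: the closed-form VaR-ES relation versus a
# numerical average of quantiles over (p, 1).  beta is a HYPOTHETICAL scale;
# u and xi are the threshold and tail index quoted in the text.

def var_gpd(p, u, xi, beta, zeta_u):
    return u + (beta / xi) * (((1.0 - p) / zeta_u) ** (-xi) - 1.0)

def es_closed_form(p, u, xi, beta, zeta_u):
    return (var_gpd(p, u, xi, beta, zeta_u) + beta - xi * u) / (1.0 - xi)

def es_quantile_average(p, u, xi, beta, zeta_u, n=200_000):
    # ES(p) = (1 - p)^{-1} * integral_p^1 VaR(q) dq, midpoint rule.
    # For xi close to 1 the integrand is nearly singular at q -> 1, so the
    # midpoint rule converges slowly and systematically undershoots.
    q = p + (1.0 - p) * (np.arange(n) + 0.5) / n
    return var_gpd(q, u, xi, beta, zeta_u).mean()

u, xi, zeta_u, beta = 9_999.0, 0.8088, 1.0 - 0.966, 15_000.0
p = 0.975
es_cf = es_closed_form(p, u, xi, beta, zeta_u)
es_qa = es_quantile_average(p, u, xi, beta, zeta_u)
print("closed form :", es_cf)
print("quantile avg:", es_qa)
```

With such a heavy tail, the quantile average comes out below the closed form, which is in line with the sign pattern of the two ES columns in Table~\ref{tab:risk-measures}.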
\vspace{-2ex} \subsection{Comparing the types of cyber attacks via their tail index} \label{ss:classes} \vspace{-2ex} To conclude this section on modelling, we apply our method to three samples: the full sample, the sample related to fraud only (representing 87.3\% of the data), and the breach-of-trust one (representing only 4.9\% of the data). The idea here is to look at the possibility of finding significant differences in the statistics of the various types of attack. This is made possible because the Jackknife results show a relative robustness of the parameter estimation resulting from the application of the algorithmic method. Our assumption is that the tail index could be a discriminant between various forms of cyber complaints. Given the small number of qualified damages, this can only be a first attempt to see if this assumption can gain ground in our data. We provide, in Table~\ref{tab:tail-index}, only the tail index and associated shape parameter ({\it i.e.} the inverse of the tail index), as well as the threshold $u_2$ above which the extremes are modelled with a GPD. \begin{table}[h] \begin{center} \parbox{430pt}{\caption{\label{tab:tail-index}\sf\small Estimation of the tail index $\xi$ and shape parameter $1/\xi$, as well as of the tail-threshold (also expressed as a quantile) above which the GPD is fitted. Three samples are considered: the full one on the period July 2015-April 2019, a second one restricted to fraud-related damages (87.3\% of the full sample), and the last one restricted to breach-of-trust-related damages (4.9\% of the full sample). The confidence ranges are computed using the Jackknife method.
\vspace{0.7ex}}} \begin{tabular}{lccc} \hline &&&\\[-1.5ex] LN-E-GPD & Full sample & Fraud sample & Breach of Trust sample \\ &&&\\[-1.5ex] \hline \hline &&&\\[-1.5ex] Number of observations& 60,985 & 53,260 & 3,004\\ Tail index & 0.8088 & 0.8114 & 0.852 \\ Shape parameter & 1.24 & 1.23 & 1.17\\ {\small with 95\% confidence range} & [1.21 ; 1.26] & [1.21 ; 1.26] & [1.09 ; 1.27] \\[0.5ex] \hline &&&\\[-1.5ex] Threshold (quantile) & 9,999 (96.6\%) & 8,999 (96.3\%)& 14,999 (97.3\%)\\ 95\% confidence range & [9,980 ; 10,018] & [8,826 ; 9,172] & [13,481 ; 16,517]\\[0.5ex] \hline \end{tabular} \end{center} \end{table} \vspace{-2ex} The results are in line with our expectations. Fraud, representing 87.3\% of the data, gives a shape parameter close to that of the full sample. The interesting result is for breach of trust, where the shape parameter is about 6\% smaller than for the full sample. The Jackknife confidence ranges are in line with the fact that the numerical stability depends heavily on the number of observations. The confidence range is much wider for "breach of trust" than for "fraud", but still of a reasonable size. Nevertheless, the narrow confidence range for "fraud" points to a useful discriminating method whenever the sample sizes are comparable: in such a case, if there is enough difference between tail indices, the algorithm will detect it. Given the small size of the data sample, it is not possible here to come up with a definitive conclusion, but it suggests a possible way of exploring further with more data (e.g. when we have access to the complaints registered since April 2019) and a better characterization of the type of complaints in the database. Finally, the shape parameters confirm the common intuition that cyber risk is susceptible to systemic risk; indeed, the tails we observe here are extremely heavy.
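The Jackknife confidence ranges reported above can be illustrated with a simple stand-in: since the LN-E-GPD fitting algorithm itself is not reproduced here, the sketch below applies a delete-one Jackknife to the classical Hill estimator of the tail index on simulated Pareto data (the sample, sample size, and number of order statistics are all illustrative, not the paper's):

```python
import numpy as np

# Stand-in illustration of a delete-one Jackknife confidence range for a
# tail-index estimate.  The paper's LN-E-GPD algorithm is not reproduced here;
# instead we jackknife the classical Hill estimator on SIMULATED Pareto data.

rng = np.random.default_rng(0)

def hill_tail_index(x, k):
    """Hill estimate of the tail index from the k largest order statistics."""
    xs = np.sort(x)
    return float(np.mean(np.log(xs[-k:] / xs[-k - 1])))

xi_true, n, k = 0.81, 2_000, 200
x = rng.uniform(size=n) ** (-xi_true)   # Pareto sample with tail index 0.81

theta_hat = hill_tail_index(x, k)

# Delete-one Jackknife: recompute the estimate on each leave-one-out sample.
loo = np.array([hill_tail_index(np.delete(x, i), k) for i in range(n)])
jack_se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
ci = (theta_hat - 1.96 * jack_se, theta_hat + 1.96 * jack_se)
print(f"xi_hat = {theta_hat:.3f}, 95% Jackknife range = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The same delete-one scheme, applied to the full algorithmic fit, yields the ranges of Table~\ref{tab:tail-index}; as noted above, the width of the range shrinks as the sample size grows.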
The shape parameters are close to those of earthquake or flood risks in insurance; both risks are characterized by a wide spread of the damages. \vspace{-2ex} \section{Management and research perspectives} \label{sec:concl} \vspace{-2ex} We first want to emphasize the importance of understanding the data and making sure they are representative of the phenomenon under study. We spent a fair amount of time with our colleagues of the SCRC to understand, but also to review manually, the cleansing of their database (done automatically at C3N). We did it in general, through a preliminary statistical exploration of the database, and specifically, complaint by complaint, for the 1,100 largest declared damages. This gives us confidence that we have here a very important source of information for studying cyber crimes and cyber risk in general, all the more so since it is a large database including various types of variables. Cyber attacks are a massive phenomenon, especially when considering the iceberg effect. They reach every place in the country, given the delocalised nature of the Internet. Second, we observe that the GN dataset, quite different from those studied so far in the literature, shows similar perspectives in terms of very heavy-tailed distributions. Indeed, the results we obtain for the tail of the damage severity distribution confirm the presence of extremes, which is, quoting \cite{Tang2019}, {\it a signature of common shocks or systemic risk}. Systemic risk originates from the combination of the existence of extremes, the interconnection between the various systems, and the weight of this set of systems in the general economy. Undoubtedly, these three properties are characteristic of cyber risk, as already discussed at the beginning of the paper. To reveal the presence of extremes, we adapted a recent algorithm for fitting heavy-tailed distributions to the case of positive asymmetric data.
This tool is very important, as it allows for an automatic fit of both the main and extreme behaviors of the empirical distribution. From EVT, we know that the extreme behavior follows a GPD. So, we compared our results with those obtained with other standard EVT methods; the shape parameter is in the same range across methods. Here, we introduce an additional way to judge the quality of the estimated model, by computing the standalone capital requirements with standard risk measures. We observe that our model evaluates ES(97.5\%) well, with the closest value to the empirical estimate among the tested methods. We would like to point out the benefit of using this algorithm, not only for the study we carried out, but also for our next investigation on this dataset, when considering a multivariate setting and a dynamic view. Indeed, this method detects by itself the threshold above which observations are considered as extremes, without resorting to heavy computations; this solves a practical issue encountered with standard methods of EVT that require a separate treatment for the tail, or with other dynamic EVT methods resorting to an arbitrarily high threshold. It should then lighten the procedure when introducing covariates and make it more accurate. The OR field, where the probability of extremes matters, would benefit from a method that seamlessly integrates the presence of extremes into the modelling of the whole distribution, as proposed in this study, as well as from a straightforward way to assess its confidence range through the Jackknife method. Moreover, to take into account not only the magnitude of the largest damages but also their frequency, we introduced a Poisson-GPD model. We studied the frequency of the extremes to see how it would evolve with time. Contrary to what is often assumed for cyber risk in general, we did not find a strong dynamic component for the extremes.
Namely, on the given period of observed data, it does not seem that, from a statistical point of view, the nature of the risk is changing. Nevertheless, this will be investigated again when we have access to data recorded over a longer time period. We will use the same approach as described in Section~\ref{ss:Poisson}, introducing time variability in the parameters of the Poisson-GPD model, if needed. Those statistical results, obtained for material damages, represent a solid basis for helping build resilience. They will now be interpreted by criminal intelligence analysts (from the Intelligence Division of the GN) in order to establish hypotheses in terms of explanation and anticipation by the SCRC. On the insurance side, the results obtained on the tail of the distribution confirm that cyber risk as a whole is insurable, and help evaluate how costly it can be to cover such a risk. The existence of a finite expectation for the loss is crucial for computing the insurance technical premium, as it is its main component. Its second component is the risk-loading, which is related to the capital allocated to the risk. Once the probability distribution of the risk is known, the capital can be estimated in relation to the risk measure used for computing the solvency capital requirements. That is why it is crucial for the insurability of a risk to have a good knowledge of its entire probability distribution, which is provided by our model. The heaviness of the (right) tail of the distribution of the damages has been estimated with a shape parameter of $1.24\ \pm \ 0.025$. As this parameter is significantly larger than 1, it indicates a finite expectation, which is a necessary (but not sufficient) condition for the insurability of cyber risk. Nevertheless, a shape parameter below 2 ({\it i.e.} infinite variance) classifies cyber as a very high risk, in the same range as natural catastrophes.
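The insurability remark can be made concrete: for a Pareto-type tail with shape parameter $\alpha = 1.24$, truncated moments show that the mean converges while the second moment diverges as the truncation level grows. A small self-contained check (a unit-scale Pareto density $\alpha x^{-\alpha-1}$ on $[1,\infty)$ is used purely for illustration):

```python
# Truncated moments of a unit-scale Pareto tail with shape alpha = 1.24
# (density alpha * x^(-alpha-1) on [1, infinity) -- purely illustrative):
# the truncated mean converges (alpha > 1) while the truncated second
# moment diverges (alpha < 2) as the truncation level T grows.

alpha = 1.24
Ts = (1e2, 1e4, 1e6)
m1 = [alpha / (alpha - 1) * (1 - T ** (1 - alpha)) for T in Ts]   # E[X 1_{X<T}]
m2 = [alpha / (2 - alpha) * (T ** (2 - alpha) - 1) for T in Ts]   # E[X^2 1_{X<T}]

for T, a, b in zip(Ts, m1, m2):
    print(f"T = {T:8.0e}   E[X 1(X<T)] = {a:7.3f}   E[X^2 1(X<T)] = {b:12.1f}")
```

The first column stabilizes near $\alpha/(\alpha-1) \approx 5.17$ (in units of the threshold), while the second keeps growing without bound: exactly the finite-mean, infinite-variance situation described above.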
But there are important differences between the two risks: the main characteristic of cyber risk is that cyber attacks are performed directly by humans, contrary to natural catastrophes. Also, the geographical location is crucial for the latter, while of much less importance for cyber attacks. For cyber risk, the self-hygiene of the IT system plays the most important role in terms of vulnerability. Other factors that should be studied for making the system more resilient are attackers' motivations, the possible targets in the system (databases, reputation, financials), and the security protocols of users. Finally, this study, performed on a novel and exhaustive database, establishes and measures the potential high intensity of cyber risk, crucial information for the various actors involved in helping society become more resilient, including the GN itself, insurance companies, strategic management, and policy makers. It also opens various interesting avenues of investigation. One of them is the automatic classification of the types of cyber crimes according to their tail index, as we started to tackle in this paper. Comparison with existing classifications, such as those of the GN or the Ministry of Justice, will be made and discussed with the SCRC and other experts. Given the rich, multi-field GN database, we will also turn our attention to the modelling of cyber risk in a multivariate context, as already mentioned. This is ongoing work. A further step will be to adapt our models and methods to fully account for systemic risk, building on studies on this topic developed in the aftermath of the 2008/2009 financial crisis. Collaboration with experts from various disciplines will remain essential for taking into account the multiple factors at play, for interpreting the results of the suggested models, and for developing adequate resilience management strategies.
Indeed, cyber security and resilience are major challenges in our modern economies, and are top priorities on the agenda of governments, security forces, and the management of companies and organizations. Therefore, all these efforts are necessary steps towards building an agreement on how to assess and manage this risk, both quantitatively and qualitatively. \vspace{-2ex} \section*{Acknowledgement(s)} \vspace{-2ex} The PJGN database we used for this study has been entrusted to us by the Gendarmerie under a confidentiality agreement. Use and interpretation are the strict responsibility of the authors. As required by the Gendarmerie Nationale, any communication on this study should mention that the source is from "Gendarmerie Nationale – PJGN – treated by ESSEC-CREAR". The authors are grateful to G\'en\'eral Daoust and Colonel Piat for having made this collaboration with the CREAR and its associated or invited members possible. Our warm thanks to Lieutenant Colonel J\'er\^ome Barlatier, Head of the Intelligence Division of the SCRC, for having made this collaboration effective, through the partnership between CREAR and SCRC-PJGN, and to him and Commandant Edouard Klein (C3N) for hosting and helping us with the database. The CREAR also acknowledges with gratitude the financial support of Labex MME-DII (ANR-11-LBX-0023-01) for two research visits of Nehla Debbabi. \bibliographystyle{apacite}
\section{Introduction} Let $X$ be a (real or complex) Banach space and let $x(t) \in X$ describe the state of a physical system at time $t \geq 0$. With $a(t) = \ddot{x}(t)$ denoting the acceleration of the system, Newton's second law of motion states that \begin{equation} F(t) = Ma(t) \text{ for } t \geq 0, \label{EQUATION_NEWTON_SECOND_LAW} \end{equation} where $M \colon D(M) \subset X \to X$ is a linear, continuously invertible, accretive operator representing the ``mass'' of the system. When displaced from its equilibrium situated in the origin, the system is affected by a restoring force $F(t)$. In classical mechanics, this force is postulated to be proportional to the instantaneous displacement, i.e., \begin{equation} F(t) = Kx(t) \text{ for } t \geq 0 \label{EQUATION_MATERIAL_LAW} \end{equation} for some closed, linear operator $K \colon D(K) \subset X \to X$. When $M^{-1} K$ is a bounded linear operator, plugging Equation (\ref{EQUATION_MATERIAL_LAW}) into (\ref{EQUATION_NEWTON_SECOND_LAW}), we arrive at the classical harmonic oscillator model \begin{equation} \ddot{x}(t) = M^{-1} K x(t) \text{ for } t \geq 0. \label{EQUATION_HARMONIC_OSCILLATOR} \end{equation} Assuming now that the restoring force is proportional to the displacement of the system at some past time $t - \tau$, Equation (\ref{EQUATION_MATERIAL_LAW}) is replaced with the relation \begin{equation} F(t) = K x(t - \tau) \text{ for } t \geq 0, \label{EQUATION_MATERIAL_LAW_WITH_DELAY} \end{equation} where $\tau > 0$ is a time delay. Plugging Equation (\ref{EQUATION_MATERIAL_LAW_WITH_DELAY}) into (\ref{EQUATION_NEWTON_SECOND_LAW}) then leads to the linear harmonic oscillator equation with pure delay written as \begin{equation} \ddot{x}(t) = M^{-1} K x(t - \tau) \text{ for } t \geq 0. \label{EQUATION_DELAY_OSCILLATOR} \end{equation} Problems similar to Equation (\ref{EQUATION_DELAY_OSCILLATOR}) also arise when modeling systems with distributed parameters such as general wave phenomena (cf.
\cite{KhuPoAzi2013}). Equations similar to (\ref{EQUATION_DELAY_OSCILLATOR}) are often referred to as delay or retarded differential equations. After being transformed into a first-order in time system on a Banach space $X$, a general equation with constant delay can be written as \begin{equation} \dot{u}(t) = H(t, u(t), u_{t}) \text{ for } t > 0, \quad u(0) = u^{0}, \quad u_{0} = \varphi. \label{EQUATION_DELAY_DIFFERENTIAL_EQUATION_GENERAL} \end{equation} Here, $\tau > 0$ is a fixed delay parameter, $u_{t} := u(t + \cdot) \in L^{1}(-\tau, 0; X)$, $t \geq 0$, denotes the history variable, $H$ is an $X$-valued operator defined on a subset of $[0, \infty) \times X \times L^{1}(-\tau, 0; X)$ and $u^{0} \in X$, $\varphi \in L^{1}(-\tau, 0; X)$ are appropriate initial data. Equations of type (\ref{EQUATION_DELAY_DIFFERENTIAL_EQUATION_GENERAL}) have been intensively studied in the literature. We refer the reader to the monographs by Els'gol'ts \& Norkin \cite{ElNo1973} and Hale \& Lunel \cite{HaLu1993} for a detailed treatment of Equations (\ref{EQUATION_DELAY_DIFFERENTIAL_EQUATION_GENERAL}) in finite-dimensional spaces $X$. In contrast to this, results on Equation (\ref{EQUATION_DELAY_DIFFERENTIAL_EQUATION_GENERAL}) in infinite-dimensional spaces $X$ are less numerous. A good overview can be found in the monograph of B\'{a}tkai \& Piazzera \cite{BaPia2005}. Khusainov et al. considered in \cite{KhuAgDa1999} Equation (\ref{EQUATION_DELAY_DIFFERENTIAL_EQUATION_GENERAL}) in $\mathbb{R}^{n}$ with \begin{align*} H(t, u(t), u_{t}) &= A_{1} u(t) + A_{2} u(t - \tau) \\ &+ \big(u^{T}(t) \otimes b_{1}\big) u(t) + \big(u^{T}(t) \otimes b_{2}\big) u(t - \tau) + \big(u^{T}(t - \tau) \otimes b_{3}\big) u(t - \tau) \end{align*} for symmetric matrices $A_{1}, A_{2} \in \mathbb{R}^{n \times n}$ and column vectors $b_{1}, b_{2}, b_{3} \in \mathbb{R}^{n}$ and proposed a rational Lyapunov function to study the asymptotic stability of solutions to this system.
In their work \cite{KhuAgKosKoj2000}, Khusainov, Agarwal et al. studied a modal, or spectrum, control problem for a linear delay equation on $\mathbb{R}^{n}$ reading as \begin{equation} \dot{x}(t) = A x(t) + b u(t) \text{ for } t > 0 \end{equation} with a feedback control $u(t) = \sum\limits_{j = 0}^{m} c_{j}^{T} x(t - j \tau)$ for some delay time $\tau > 0$ and parameter vectors $c_{j} \in \mathbb{R}^{n}$. For canonical systems, they developed a method to compute the unknown parameters such that the closed-loop system possesses the spectrum prescribed beforehand. Under appropriate ``concordance'' conditions, they were able to carry over their considerations to a rather broad class of non-canonical systems. In the infinite-dimensional situation, a rather general particular case of (\ref{EQUATION_DELAY_DIFFERENTIAL_EQUATION_GENERAL}) with $H(t, v, \psi) = A v + F(\psi)$ where $A$ generates a $C_{0}$-semigroup $(S(t))_{t \geq 0}$ on $X$ and $F$ is a nonlinear operator on $L^{2}(-\tau, 0; X)$ was studied by Travis \& Webb in their work \cite{TraWe1976}. Under appropriate assumptions on $F$, they proved the integral equation corresponding to the weak formulation of the delay equation given by \begin{equation} u(t) = S(t) \varphi(0) + \int_{0}^{t} S(t - s) F(u_{s}) \mathrm{d}s \text{ for } t > 0 \notag \end{equation} to possess a unique solution in $H^{1}_{\mathrm{loc}}(0, \infty; X)$. Di Blasio et al. addressed in \cite{DiBlKuSi1983} a similar problem \begin{equation} \dot{u}(t) = \big(A + B\big) u(t) + L_{1} u(t - r) + L_{2} u_{t} \text{ for } t > 0, \quad u(0) = u^{0}, \quad u_{0} = \varphi \label{EQUATION_DELAY_DI_BLASIO} \end{equation} where $A$ generates a holomorphic $C_{0}$-semigroup on a Hilbert space $H$, $B$ is a perturbation of $A$ and $L_{1}, L_{2}$ are appropriate linear operators.
If $u^{0}$ and $\varphi$ possess a certain regularity, they proved the existence of a unique strong solution in $H^{1}_{\mathrm{loc}}(0, \infty; X) \cap L^{2}_{\mathrm{loc}}\big(0, \infty; D(A)\big)$ by analyzing the $C_{0}$-semigroup inducing the semiflow $t \mapsto (u(t), u_{t})$. These results were elaborated on by Di Blasio et al. in \cite{DiBlKuSi1984}, leading to a generalization for the case of weighted and interpolation spaces and including a description of the associated infinitesimal generator. Finally, the general $L^{p}$-case for $p \in (0, \infty)$ was investigated by Di Blasio in \cite{DiBl2003}. Recently, in their work \cite{KhuPoRa2013}, Khusainov et al. proposed an explicit $L^{2}$-solution theory for a non-homogeneous initial-boundary value problem for an isotropic heat equation with constant delay \begin{equation} \begin{split} u_{t}(t, x) &= \partial_{i} \big(a_{ij}(x) \partial_{j} u(t, x)\big) + b_{i}(x) \partial_{i} u(t, x) + c(x) u(t, x) \\ &+ \partial_{i} \big(\tilde{a}_{ij}(x) \partial_{j} u(t - \tau, x)\big) + \tilde{b}_{i}(x) \partial_{i} u(t - \tau, x) + \tilde{c}(x) u(t - \tau, x) \\ &+ f(t, x) \text{ for } (t, x) \in (0, \infty) \times \Omega, \\ u(t, x) &= \gamma(t, x) \text{ for } (t, x) \in (0, \infty) \times \partial \Omega, \\ u(0, x) &= u^{0}(x) \text{ for } x \in \Omega, \\ u(t, x) &= \varphi(t, x) \text{ for } (t, x) \in (-\tau, 0) \times \Omega \end{split} \notag \end{equation} where $\Omega \subset \mathbb{R}^{d}$ is a regular bounded domain and the coefficient functions are appropriate. Conditions assuring exponential stability were also given. Over the past decade, hyperbolic partial differential equations with delay have attracted a considerable amount of attention, too.
In \cite{NiPi2006}, Nicaise \& Pignotti studied a homogeneous isotropic wave equation with an internal feedback with and without delay reading as \begin{equation} \begin{split} \partial_{tt} u(t, x) - \triangle u(t, x) + a_{0} \partial_{t} u(t, x) + a \partial_{t} u(t - \tau, x) &= 0 \text{ for } (t, x) \in (0, \infty) \times \Omega, \\ u(t, x) &= 0 \text{ for } (t, x) \in (0, \infty) \times \Gamma_{0}, \\ \frac{\partial u}{\partial \nu}(t, x) &= 0 \text{ for } (t, x) \in (0, \infty) \times \Gamma_{1} \end{split} \notag \end{equation} under usual initial conditions, where $\Gamma_{0}, \Gamma_{1} \subset \partial \Omega$ are relatively open in $\partial \Omega$ with $\bar{\Gamma}_{0} \cap \bar{\Gamma}_{1} = \emptyset$ and $\nu$ denotes the outer unit normal vector of a smooth bounded domain $\Omega \subset \mathbb{R}^{d}$. They showed the problem to possess a unique global classical solution and proved the latter to be exponentially stable if $a_{0} > a > 0$, or unstable otherwise. These results have been carried over by Nicaise \& Pignotti \cite{NiPi2008} and Nicaise et al. \cite{NiPiVa2011} to the case of time-varying internally distributed or boundary delays. In \cite{KhuPoAzi2013}, Khusainov et al. considered a non-homogeneous initial-boundary value problem for a one-dimensional wave equation with constant coefficients and a single constant delay \begin{equation} \begin{split} \partial_{tt} u(t, x) &= a^{2} \partial_{xx} u(t - \tau, x) + b \partial_{x} u(t - \tau, x) + c u(t - \tau, x) \\ &+ f(t, x) \text{ for } (t, x) \in (0, T) \times (0, l), \\ u(t, x) &= \gamma(t, x) \text{ for } (t, x) \in (0, T) \times \{0, l\}, \\ u(0, x) &= u^{0}(x) \text{ for } x \in (0, l), \\ u(t, x) &= \varphi(t, x) \text{ for } t \in (-\tau, 0), x \in (0, l). \end{split} \notag \end{equation} Under appropriate regularity and compatibility assumptions, they proved the problem to possess a unique $C^{2}$-solution for any finite $T > 0$.
Their proof was based on extrapolation methods for $C_{0}$-semigroups and an explicit solution representation formula. Recently, Khusainov \& Pokojovy presented in \cite{KhuPo2014} a Hilbert-space treatment of the initial-boundary value problem for the equations of thermoelasticity with pure delay \begin{equation} \begin{split} \partial_{tt} u(x, t) - a \partial_{xx} u(x, t - \tau) + b \partial_{x} \theta(x, t - \tau) &= f(x, t) \text{ for } x \in \Omega, t > 0, \\ \partial_{t} \theta(x, t) - c \partial_{xx} \theta(x, t - \tau) + d \partial_{tx} u(x, t - \tau) &= g(x, t) \text{ for } x \in \Omega, t > 0, \\ u(0, t) = u(l, t) = 0, \quad \partial_{x} \theta(0, t) = \partial_{x} \theta(l, t) &= 0 \text{ for } t > 0, \\ \phantom{\partial_{t}} u(x, 0) = u^{0}(x), \quad \phantom{\partial_{t}} u(x, t) &= u^{0}(x, t) \text{ for } x \in \Omega, t \in (-\tau, 0), \\ \partial_{t} u(x, 0) = u^{1}(x), \quad \partial_{t} u(x, t) &= u^{1}(x, t) \text{ for } x \in \Omega, t \in (-\tau, 0), \\ \phantom{\partial_{t}} \theta(x, 0) = \theta^{0}(x), \quad \phantom{\partial_{t}} \theta(x, t) &= \theta^{0}(x, t) \text{ for } x \in \Omega, t \in (-\tau, 0). \end{split} \notag \end{equation} Their proof exploited extrapolation techniques for strongly continuous semigroups and an explicit solution representation formula. In the present paper, we give a Banach space solution theory for Equation (\ref{EQUATION_DELAY_OSCILLATOR}) subject to appropriate initial conditions. Our approach is solely based on the step method and does not incorporate any semigroup techniques. In contrast to earlier works by Khusainov et al. \cite{KhuDiRuLu2008, KhuIvaKo2006, KhuPoAzi2013}, we only require the invertibility and not the positivity of $M^{-1} K$ in Equation (\ref{EQUATION_DELAY_OSCILLATOR}). In Section \ref{SECTION_CLASSICAL_HARMONIC_OSCILLATOR}, we briefly outline some seminal results on second-order abstract Cauchy problems. 
In our main Section \ref{SECTION_CLASSICAL_HARMONIC_OSCILLATOR_WITH_PURE_DELAY}, we prove the existence and uniqueness of solutions to the Cauchy problem for the delay equation (\ref{EQUATION_DELAY_OSCILLATOR}) as well as their continuous dependence on the data. Next, we give an explicit solution representation formula in a closed form based on the delayed exponential function introduced by Khusainov \& Shuklin in \cite{KhuShu2005}. Finally, we prove the solution of the delay equation to converge to the solution of the original second order abstract differential equation as the delay parameter $\tau$ goes to zero. \section{Classical harmonic oscillator} \label{SECTION_CLASSICAL_HARMONIC_OSCILLATOR} For the sake of completeness, we briefly discuss the initial value problem for the harmonic oscillator, being a second order in time abstract differential equation \begin{equation} \ddot{x}(t) - \Omega^{2} x(t) = f(t) \text{ for } t \geq 0 \label{EQUATION_HARMONIC_OSCILLATOR_GENERAL} \end{equation} subject to the initial conditions \begin{equation} x(0) = x_{0} \in D(\Omega), \quad \dot{x}(0) = x_{1} \in X. \label{EQUATION_HARMONIC_OSCILLATOR_GENERAL_IC} \end{equation} Here, we assume the linear operator $\Omega \colon D(\Omega) \subset X \to X$ to be continuously invertible and to generate a $C_{0}$-group $(e^{t\Omega})_{t \in \mathbb{R}} \subset L(X)$ on a (real or complex) Banach space $X$ with $L(X)$ denoting the space of bounded, linear operators on $X$ equipped with the norm $\|A\|_{L(X)} := \sup\big\{\|Ax\|_{X} \;|\; x \in X, \|x\|_{X} \leq 1\big\}$. A more rigorous treatment of this problem can be found in \cite[Section 3.14]{ArBaHieNeu2001}. The general solution to the homogeneous equation is known to read as \begin{equation} x_{h}(t) = e^{\Omega t} c_{1} + e^{-\Omega t} c_{2} \text{ for } t \geq 0 \notag \end{equation} with some $c_{1}, c_{2} \in D(\Omega)$.
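In a finite-dimensional instance ($X = \mathbb{R}^{2}$, $\Omega$ an invertible matrix chosen ad hoc for illustration), this solution form can be checked numerically; the following minimal sketch hand-rolls the matrix exponential via a truncated Taylor series to stay self-contained:

```python
import numpy as np

# Finite-dimensional check of the solution form x(t) = e^{Omega t} c1 + e^{-Omega t} c2
# for x'' = Omega^2 x, with X = R^2 and an invertible matrix Omega chosen ad hoc.

def expm(A, terms=40):
    """exp(A) by truncated Taylor series -- adequate for the small matrices used here."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

Omega = np.array([[0.0, 1.0], [-2.0, 0.3]])     # det = 2, hence invertible
c1, c2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def x(t):
    return expm(Omega * t) @ c1 + expm(-Omega * t) @ c2

t, h = 0.7, 1e-5
x_dd = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2   # central second difference
residual = np.linalg.norm(x_dd - Omega @ Omega @ x(t))
print("residual |x''(t) - Omega^2 x(t)| =", residual)
```

The residual is at the level of the finite-difference discretization error, confirming that both exponential branches solve the second-order equation.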
The vectors $c_{1}, c_{2}$ can be computed using the initial conditions from Equation (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL_IC}), leading to a system of linear operator equations \begin{equation} c_{1} + c_{2} = x_{0}, \quad \Omega c_{1} - \Omega c_{2} = x_{1}. \notag \end{equation} The latter is uniquely solved by \begin{equation} c_{1} = \tfrac{1}{2} \Omega^{-1} (\Omega x_{0} + x_{1}), \quad c_{2} = \tfrac{1}{2} \Omega^{-1} (\Omega x_{0} - x_{1}). \notag \end{equation} Thus, the unique solution of the homogeneous equation with the initial conditions (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL_IC}) is given by \begin{equation} x_{h}(t) = \tfrac{1}{2} \Omega^{-1} e^{\Omega t} (\Omega x_{0} + x_{1}) + \tfrac{1}{2} \Omega^{-1} e^{-\Omega t} (\Omega x_{0} - x_{1}) \text{ for } t \geq 0 \end{equation} or, equivalently, \begin{equation} x_{h}(t) = \tfrac{1}{2} (e^{\Omega t} + e^{-\Omega t}) x_{0} + \tfrac{1}{2} \Omega^{-1} (e^{\Omega t} - e^{-\Omega t}) x_{1} \text{ for } t \geq 0. \label{EQUATION_HARMONIC_OSCILLATOR_GENERAL_SOLUTION_HOMOGENEOUS_EQUATION} \end{equation} A particular solution to the non-homogeneous equation with zero initial conditions will be determined in the Cauchy form \begin{equation} x_{p}(t) = \int_{0}^{t} K(t, s) f(s) \mathrm{d}s \text{ for } t \geq 0. \label{EQUATION_HARMONIC_OSCILLATOR_GENERAL_SOLUTION_NONHOMOGENEOUS_EQUATION_ANSATZ} \end{equation} We refer the reader to \cite[Chapter 1]{ArBaHieNeu2001} for the definition of Bochner integrals for $X$-valued functions. In Equation (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL_SOLUTION_NONHOMOGENEOUS_EQUATION_ANSATZ}), the function $K \in C^{0}([0, \infty) \times [0, \infty), L(X))$ is the Cauchy kernel, i.e., for any fixed $s \geq 0$, the function $K(\cdot, s)$ is the solution of the homogeneous problem satisfying the initial conditions \begin{equation} K(t, s)\big|_{t = s} = 0_{L(X)}, \quad \partial_{t} K(t, s)\big|_{t = s} = \mathrm{id}_{X}.
\notag \end{equation} Using the ansatz \begin{equation} K(t, s) = e^{\Omega t} c_{1}(s) + e^{-\Omega t} c_{2}(s) \text{ for } t, s \geq 0 \notag \end{equation} for some $c_{1}, c_{2} \in C^{1}([0, \infty), L(X))$ and taking into account the initial conditions, we arrive at \begin{equation} K(t, s)\big|_{t = s} = e^{\Omega s} c_{1}(s) + e^{-\Omega s} c_{2}(s) = 0_{L(X)}, \quad \partial_{t} K(t, s)\big|_{t = s} = \Omega e^{\Omega s} c_{1}(s) - \Omega e^{-\Omega s} c_{2}(s) = \mathrm{id}_{X}. \notag \end{equation} Solving this system with the generalized Cramer's rule, we obtain for $s \geq 0$ \begin{equation} \begin{split} c_{1}(s) &= \left(\det{}_{L(X)}\begin{pmatrix} e^{\Omega s} & e^{-\Omega s} \\ \Omega e^{\Omega s} & -\Omega e^{-\Omega s} \end{pmatrix}\right)^{-1} \det{}_{L(X)}\begin{pmatrix} 0_{L(X)} & e^{-\Omega s} \\ \mathrm{id}_{X} & -\Omega e^{-\Omega s} \end{pmatrix} = \tfrac{1}{2} \Omega^{-1} e^{-\Omega s}, \\ c_{2}(s) &= \left(\det{}_{L(X)}\begin{pmatrix} e^{\Omega s} & e^{-\Omega s} \\ \Omega e^{\Omega s} & -\Omega e^{-\Omega s} \end{pmatrix}\right)^{-1} \det{}_{L(X)}\begin{pmatrix} e^{\Omega s} & 0_{L(X)} \\ \Omega e^{\Omega s} & \mathrm{id}_{X} \end{pmatrix} = -\tfrac{1}{2} \Omega^{-1} e^{\Omega s}. \end{split} \notag \end{equation} Thus, the Cauchy kernel is given by \begin{equation} K(t, s) = \tfrac{1}{2} \Omega^{-1} (e^{\Omega(t - s)} - e^{-\Omega(t - s)}) \text{ for } t, s \geq 0, \notag \end{equation} whereas the particular solution satisfying zero initial conditions reads as \begin{equation} x_{p}(t) = \frac{1}{2} \Omega^{-1} \int_{0}^{t} (e^{\Omega (t - s)} - e^{-\Omega (t - s)}) f(s) \mathrm{d}s \text{ for } t \geq 0.
\notag \end{equation} Hence, for $x_{0} \in D(\Omega)$, $x_{1} \in X$ and $f \in L^{1}_{\mathrm{loc}}(0, \infty; X)$, the unique mild solution $x \in W^{1, 1}_{\mathrm{loc}}(0, \infty; X)$ to the Cauchy problem (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL})--(\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL_IC}) can be written as \begin{equation} \begin{split} x(t) &= \tfrac{1}{2} (e^{\Omega t} + e^{-\Omega t}) x_{0} + \tfrac{1}{2} \Omega^{-1} (e^{\Omega t} - e^{-\Omega t}) x_{1} \\ &+ \tfrac{1}{2} \Omega^{-1} \int_{0}^{t} (e^{\Omega (t - s)} - e^{-\Omega (t - s)}) f(s) \mathrm{d}s \text{ for } t \geq 0. \end{split} \label{EQUATION_HARMONIC_OSCILLATOR_GENERAL_EXPLICIT_SOLUTION} \end{equation} If the data additionally satisfy $x_{0} \in D(\Omega^{2})$, $x_{1} \in D(\Omega)$ and $f \in W^{1, 1}_{\mathrm{loc}}(0, \infty; X) \cup C^{0}\big([0, \infty), D(\Omega^{2})\big)$, then the mild solution $x$ given in Equation (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL_EXPLICIT_SOLUTION}) is a classical solution satisfying $x \in C^{2}\big([0, \infty), X\big) \cap C^{1}\big([0, \infty), D(\Omega)\big) \cap C^{0}\big([0, \infty), D(\Omega^{2})\big)$. \section{The linear oscillator with pure delay} \label{SECTION_CLASSICAL_HARMONIC_OSCILLATOR_WITH_PURE_DELAY} In this section, we consider a Cauchy problem for the linear oscillator with a single pure delay \begin{equation} \ddot{x}(t) - \Omega^{2} x(t - 2\tau) = f(t) \text{ for } t \geq 0 \label{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL} \end{equation} subject to the initial condition \begin{equation} x(t) = \varphi(t) \text{ for } t \in [-2\tau, 0]. \label{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC} \end{equation} Here, $X$ is a Banach space, $\Omega \in L(X)$ is a bounded, linear operator and $\varphi \in C^{1}\big([-2\tau, 0], X\big)$, $f \in L^{1}_{\mathrm{loc}}(0, \infty; X)$ are given functions. In contrast to Section \ref{SECTION_CLASSICAL_HARMONIC_OSCILLATOR}, the boundedness of $\Omega$ is indispensable here. 
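In the scalar case $X = \mathbb{R}$, $\Omega = \omega \in \mathbb{R}$, the step-by-step character of Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) can be illustrated numerically: since the delayed argument $t - 2\tau$ always lies in the already computed part of the trajectory, $\ddot{x}(t)$ is explicitly known at every time step and an explicit Euler scheme advances the solution. The following sketch is only an illustration of this mechanism; the helper name, the discretization and the sample parameters are ours and are not part of the analysis below.

```python
import numpy as np

def solve_delay_oscillator(omega, tau, phi, dphi0, f, T, h=1e-4):
    """Explicit Euler sketch for  x''(t) = omega**2 * x(t - 2*tau) + f(t)
    with history x = phi on [-2*tau, 0] and x'(0) = dphi0.  The delayed
    value x(t - 2*tau) is always available from the stored trajectory."""
    n_hist = int(round(2 * tau / h))
    h = 2 * tau / n_hist                      # align the step size with the delay
    t_hist = np.linspace(-2 * tau, 0.0, n_hist + 1)
    x = list(phi(t_hist))                     # trajectory values on the grid
    v = dphi0                                 # current velocity x'(t)
    n_steps = int(round(T / h))
    for k in range(n_steps):
        x_delayed = x[len(x) - 1 - n_hist]    # x(t - 2*tau), already computed
        a = omega**2 * x_delayed + f(k * h)   # x''(t) is explicit
        x.append(x[-1] + h * v)               # x(t + h)
        v += h * a                            # x'(t + h)
    return np.array(x), v
```

For the constant history $\varphi \equiv 1$ with $\dot{\varphi}(0) = 0$, $f \equiv 0$, $\omega = 1$ and $\tau = 1/4$, the scheme reproduces $x(t) = 1 + \omega^{2} t^{2}/2$ on $[0, 2\tau]$ up to the discretization error.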
Indeed, Dreher et al. proved in \cite{DreQuiRa2009} that Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) are ill-posed even if $X$ is a Hilbert space and $\Omega$ possesses a sequence of eigenvalues $(\lambda_{n})_{n \in \mathbb{N}} \subset \mathbb{R}$ with $\lambda_{n} \to \infty$ or $\lambda_{n} \to -\infty$ as $n \to \infty$. The necessity for $\Omega$ being bounded has also been pointed out by Rodrigues et al. in \cite{RoWu2007} when treating a linear heat equation with pure delay. \begin{definition} A function $x \in C^{1}\big([-2\tau, \infty), X\big) \cap C^{2}\big([-2\tau, 0], X\big) \cap C^{2}\big([0, \infty), X\big)$ satisfying Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) pointwise is called a classical solution to the Cauchy problem (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}). \end{definition} A mild formulation of (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) is given by \begin{align} \dot{x}(t) &= \dot{x}(0) + \Omega^{2} \int_{0}^{t} x(s - 2\tau) \mathrm{d}s + \int_{0}^{t} f(s) \mathrm{d}s \text{ for } t \geq 0, \label{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_MILD_FORMULATION} \\ x(t) &= \varphi(t) \text{ for } t \in [-2\tau, 0]. \label{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_MILD_FORMULATION_IC} \end{align} \begin{definition} A function $x \in C^{1}\big([-2\tau, \infty), X\big)$ satisfying Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_MILD_FORMULATION})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_MILD_FORMULATION_IC}) is called a mild solution to the Cauchy problem (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}). 
\end{definition} By virtue of the fundamental theorem of calculus, any mild solution $x$ to (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) with $x \in C^{1}\big([-2\tau, \infty), X\big) \cap C^{2}\big([-2\tau, 0], X\big) \cap C^{2}\big([0, \infty), X\big)$ is also a classical solution. Obviously, for the problem (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) to possess a classical solution, one necessarily requires $\varphi \in C^{2}\big([-2\tau, 0], X\big)$. In the following subsection, we want to study the existence and uniqueness of mild and classical solutions to the Cauchy problem (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) as well as their continuous dependence on the data. \subsection{Existence and uniqueness} Rather than using the semigroup approach (cf. \cite[Chapter 2]{HaLu1993}), we use the more straightforward step method here, reducing (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_MILD_FORMULATION})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_MILD_FORMULATION_IC}) to a difference equation on the functional vector space $\hat{C}^{1}_{2\tau}(\mathbb{N}_{0}, X)$ defined as follows. \begin{definition} Let $X$ be a Banach space, $\tau > 0$ and $s \in \mathbb{N}_{0}$. 
We introduce the metric vector space \begin{equation} \begin{split} \hat{C}^{s}_{\tau}(\mathbb{N}_{0}, X) &:= l^{\infty}_{\mathrm{loc}}\big(\mathbb{N}_{0}, C^{s}([-\tau, 0], X)\big) \\ &:= \Big\{x = (x_{n})_{n \in \mathbb{N}_{0}} \,\big|\, x_{n} \in C^{s}\big([-\tau, 0], X\big) \text{ for } n \in \mathbb{N}_{0}, \\ &\phantom{:= \Big\{x = (x_{n})_{n \in \mathbb{N}_{0}} \,\big|\,} \frac{\mathrm{d}^{j}}{\mathrm{d}t^{j}} x_{n}(-\tau) = \frac{\mathrm{d}^{j}}{\mathrm{d}t^{j}} x_{n-1}(0) \text{ for } j = 0, \dots, s - 1, n \in \mathbb{N}\Big\} \end{split} \notag \end{equation} equipped with the distance function \begin{equation} d_{\hat{C}_{\tau}^{s}(\mathbb{N}_{0}, X)}(x, y) := \sum_{n \in \mathbb{N}} 2^{-n} \frac{\max\limits_{k = 0, \dots, n} \|x_{k} - y_{k}\|_{C^{s}([-\tau, 0], X)}}{1 + \max\limits_{k = 0, \dots, n} \|x_{k} - y_{k}\|_{C^{s}([-\tau, 0], X)}} \text{ for } x, y \in \hat{C}_{\tau}^{s}(\mathbb{N}_{0}, X). \notag \end{equation} \end{definition} Obviously, $\hat{C}^{s}_{\tau}(\mathbb{N}_{0}, X)$ is a complete metric space which is isometrically isomorphic to the metric space $C^{s}_{\tau}\big([-\tau, \infty), X\big) := C^{s}\big([-\tau, \infty), X\big)$ equipped with the distance \begin{equation} d_{C^{s}_{\tau}([-\tau, \infty), X)}(x, y) := \sum_{n \in \mathbb{N}} 2^{-n} \frac{\|x - y\|_{C^{s}([-\tau, \tau n], X)}} {1 + \|x - y\|_{C^{s}([-\tau, \tau n], X)}} \text{ for } x, y \in C^{s}\big([-\tau, \infty), X\big). \notag \end{equation} For any $x \colon [-\tau, \infty) \to X$, we define for $n \in \mathbb{N}_{0}$ the $n$-th segment of $x$ by means of \begin{equation} x_{n}(s) := x(n\tau + s) \text{ for } s \in [-\tau, 0]. 
\notag \end{equation} By induction, $x$ is a mild solution of (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) if and only if $(x_{n})_{n \in \mathbb{N}_{0}} \in \hat{C}^{1}_{2\tau}(\mathbb{N}_{0}, X)$ solves \begin{equation} \begin{split} \dot{x}_{n}(s) &= \dot{x}_{n-1}(0) + \Omega^{2} x_{n-1}(s) + \int_{2(n-1)\tau}^{2(n-1)\tau + s} f(\sigma) \mathrm{d}\sigma \text{ for } s \in [-2\tau, 0], n \in \mathbb{N}, \\ x_{0}(s) &= \varphi(s) \text{ for } s \in [-2\tau, 0]. \end{split} \label{EQUATION_DIFFERENCE_EQUATION_FOR_DOT_X_N} \end{equation} \begin{theorem} \label{THEOREM_MILD_SOLUTION_DISCRETE} Equation (\ref{EQUATION_DIFFERENCE_EQUATION_FOR_DOT_X_N}) has a unique solution $(x_{n})_{n \in \mathbb{N}_{0}} \in \hat{C}^{1}_{2\tau}(\mathbb{N}_{0}, X)$. Moreover, $x$ continuously depends on the data in the sense of the estimate \begin{equation} \|x_{n}\|_{C^{1}([-2\tau, 0], X)} \leq \kappa^{n} \Big(\|\varphi\|_{C^{1}([-2\tau, 0], X)} + \|f\|_{L^{1}(0, 2\tau n; X)}\Big) \text{ for any } n \in \mathbb{N} \notag \end{equation} with $\kappa := 1 + (1 + 2\tau) \big(1 + \|\Omega\|_{L(X)}^{2}\big)$. \end{theorem} \begin{proof} By virtue of the fundamental theorem of calculus, Equation (\ref{EQUATION_DIFFERENCE_EQUATION_FOR_DOT_X_N}) is satisfied if and only if \begin{align} x_{n}(s) &= x_{n-1}(0) + (s + 2\tau) \dot{x}_{n-1}(0) + \Omega^{2} \int_{-2\tau}^{s} x_{n-1}(\sigma) \mathrm{d}\sigma \\ &+ \int_{-2\tau}^{s} \int_{2(n-1)\tau}^{2(n-1)\tau + \sigma} f(\xi) \mathrm{d}\xi \mathrm{d}\sigma \text{ for } s \in [-2\tau, 0], n \in \mathbb{N}, \label{EQUATION_DIFFERENCE_EQUATION_1} \\ x_{n}(-2\tau) &= x_{n-1}(0), \quad \dot{x}_{n}(-2\tau) = \dot{x}_{n-1}(0) \text{ for } n \in \mathbb{N}, \label{EQUATION_DIFFERENCE_EQUATION_2} \\ x_{0}(s) &= \varphi(s) \text{ for } s \in [-2\tau, 0]. 
\label{EQUATION_DIFFERENCE_EQUATION_3} \end{align} By induction, we can easily show that for any $n \in \mathbb{N}$ there exists a unique local solution $(x_{0}, x_{1}, \dots, x_{n}) \in \Big(C^{1}\big([-2\tau, 0], X\big)\Big)^{n + 1}$ to (\ref{EQUATION_DIFFERENCE_EQUATION_1})--(\ref{EQUATION_DIFFERENCE_EQUATION_3}) up to the index $n$. Here, we used the Sobolev embedding theorem stating \begin{equation} W^{1, 1}(0, T; X) \hookrightarrow C^{0}\big([0, T], X\big) \text{ for any } T > 0. \notag \end{equation} Further, we can estimate \begin{equation} \begin{split} \|x_{n}\|_{C^{0}([-2\tau, 0], X)} &\leq \Big(1 + 2\tau \big(1 + \|\Omega\|_{L(X)}^{2}\big)\Big) \|x_{n-1}\|_{C^{1}([-2\tau, 0], X)} \\ &+ 2 \tau \|f\|_{L^{1}(2(n-1)\tau, 2n\tau; X)}. \end{split} \label{EQUATION_ESIMATE_FOR_X_N} \end{equation} Similarly, Equation (\ref{EQUATION_DIFFERENCE_EQUATION_FOR_DOT_X_N}) yields \begin{equation} \|\dot{x}_{n}\|_{C^{0}([-2\tau, 0], X)} \leq \big(1 + \|\Omega\|_{L(X)}^{2}\big) \|x_{n-1}\|_{C^{1}([-2\tau, 0], X)} + \|f\|_{L^{1}(2(n-1)\tau, 2n\tau; X)}. \label{EQUATION_ESIMATE_FOR_DOT_X_N} \end{equation} Equations (\ref{EQUATION_ESIMATE_FOR_X_N}) and (\ref{EQUATION_ESIMATE_FOR_DOT_X_N}) imply together \begin{align*} \|x_{n}\|_{C^{1}([-2\tau, 0], X)} &\leq \kappa \big(\|x_{n-1}\|_{C^{1}([-2\tau, 0], X)} + \|f\|_{L^{1}(2(n-1)\tau, 2n\tau; X)}\big). \end{align*} By induction, we then get for any $n \in \mathbb{N}$ \begin{equation} \|x_{n}\|_{C^{1}([-2\tau, 0], X)} \leq \kappa^{n} \big(\|\varphi\|_{C^{1}([-2\tau, 0], X)} + \|f\|_{L^{1}(0, 2\tau n; X)}\big) \notag \end{equation} which finishes the proof. \end{proof} Letting $x(t) := x_{k+1}(t - 2 (k + 1) \tau)$ for $t \geq 0$ and $k := \lfloor \tfrac{t}{2\tau}\rfloor \in \mathbb{N}_{0}$, we obtain the unique mild solution $x$ of Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}). 
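In the scalar case $X = \mathbb{R}$, $\Omega = \omega$ and for $f \equiv 0$, the step structure behind the proof can even be carried out exactly: on each segment, $\ddot{x}_{n}(s) = \omega^{2} x_{n-1}(s)$ holds for $s \in [-2\tau, 0]$, so two quadratures, with integration constants enforcing the $C^{1}$-matching at $s = -2\tau$, map a polynomial segment to a polynomial segment. The following minimal sketch (the helper name and the parameter values are ours) mirrors this double integration:

```python
import numpy as np
from numpy.polynomial import Polynomial

def segments(omega, tau, n_seg, phi=Polynomial([1.0])):
    """Exact method of steps for  x''(t) = omega**2 * x(t - 2*tau),  f = 0.
    Segment n is represented in the local variable s in [-2*tau, 0] and is
    obtained from segment n-1 by integrating x_n''(s) = omega**2 * x_{n-1}(s)
    twice; the constants enforce C^1 matching at the left endpoint."""
    segs = [phi]                                  # segment 0: the history phi
    for _ in range(n_seg):
        prev = segs[-1]
        accel = omega**2 * prev                   # x_n''(s)
        vel = accel.integ(lbnd=-2 * tau) + prev.deriv()(0.0)  # x_n'(s)
        pos = vel.integ(lbnd=-2 * tau) + prev(0.0)            # x_n(s)
        segs.append(pos)
    return segs
```

Starting from the history $\varphi \equiv 1$ with $\omega = 1$ and $\tau = 1/4$, the first segment is $1 + \omega^{2}(s + 2\tau)^{2}/2$, and each segment joins its predecessor in a $C^{1}$ fashion.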
\begin{corollary} \label{COROLLARY_MILD_SOLUTION} Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) possess a unique mild solution $x$ satisfying, for any $T := 2n\tau$, $n \in \mathbb{N}$, \begin{equation} \|x\|_{C^{1}([-2\tau, T], X)} \leq \kappa^{n} \Big(\|\varphi\|_{C^{1}([-2\tau, 0], X)} + \|f\|_{L^{1}(0, T; X)}\Big) \notag \end{equation} with $\kappa := 1 + (1 + 2\tau) \big(1 + \|\Omega\|_{L(X)}^{2}\big)$. \end{corollary} \begin{theorem} Under the additional conditions $\varphi \in C^{2}\big([-2\tau, 0], X\big)$ and $f \in C^{0}\big([0, \infty), X\big)$, the unique mild solution given in Corollary \ref{COROLLARY_MILD_SOLUTION} is a classical solution. \end{theorem} \begin{proof} Differentiating Equation (\ref{EQUATION_DIFFERENCE_EQUATION_FOR_DOT_X_N}) with respect to $t$, using the assumptions and the fact that $x \in C^{1}\big([-2\tau, \infty), X\big)$, we deduce that $x|_{[-2\tau, 0]} \equiv \varphi \in C^{2}\big([-2\tau, 0], X\big)$ and \begin{equation} \ddot{x} = \Omega^{2} x(\cdot - 2\tau) + f \in C^{0}\big([0, \infty), X\big). \notag \end{equation} Hence, $x \in C^{1}\big([-2\tau, \infty), X\big) \cap C^{2}\big([-2\tau, 0], X\big) \cap C^{2}\big([0, \infty), X\big)$ and is thus a classical solution of Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}). \end{proof} \subsection{Explicit representation of solutions} \label{SECTION_HARMONIC_OSCILLATOR_WITH_DELAY_SOLUTION_REPRESENTATION} Following Khusainov \& Shuklin \cite{KhuShu2005} and Khusainov et al. 
\cite{KhuPoRa2013}, we define for $t \in \mathbb{R}$ the operator-valued delayed exponential function \begin{equation} \exp_{\tau}(t; \Omega) := \left\{ \begin{array}{cc} 0_{L(X)}, & -\infty < t < -\tau, \\ \mathrm{id}_{X}, & -\tau \leq t < 0, \\ \mathrm{id}_{X} + \Omega \frac{t}{1!}, & 0 \leq t < \tau, \\ \mathrm{id}_{X} + \Omega \frac{t}{1!} + \Omega^{2} \frac{(t - \tau)^{2}}{2!}, & \tau \leq t < 2\tau, \\ \dots & \dots \\ \mathrm{id}_{X} + \Omega \frac{t}{1!} + \Omega^{2} \frac{(t - \tau)^{2}}{2!} + \dots + \Omega^{k} \frac{\left(t - (k - 1) \tau\right)^{k}}{k!}, & (k - 1) \tau \leq t < k\tau, \\ \dots & \dots. \end{array} \right. \label{EQUATION_DEFINITION_OF_DELAYED_EXPONENTIAL} \end{equation} Throughout this Section, we additionally assume that $\Omega \colon X \to X$ is an isomorphism from the Banach space $X$ onto itself. \begin{theorem} The delayed exponential function $\exp_{\tau}(\cdot; \Omega)$ lies in $C^{0}\big([-\tau, \infty), X\big) \cap C^{1}\big([0, \infty), X\big) \cap C^{2}\big([\tau, \infty), X\big)$ and solves the Cauchy problem \begin{align} \ddot{x}(t) - \Omega^{2} x(t - 2\tau) &= 0_{X} \text{ for } t \geq \tau, \label{EQUATION_DDE_FOR_DEXP_1} \\ x(t) &= \varphi(t) \text{ for } t \in [-\tau, \tau] \label{EQUATION_DDE_FOR_DEXP_2} \end{align} where \begin{equation} \varphi(t) = \left\{ \begin{array}{cc} \mathrm{id}_{X}, & -\tau \leq t < 0, \\ \mathrm{id}_{X} + \Omega t, & 0 \leq t \leq \tau. \end{array} \right. \notag \end{equation} \end{theorem} \begin{proof} To prove the smoothness of $x$, we first note that $x$ is an operator-valued polynomial and thus analytic on each of the intervals $[(k - 1) \tau, k \tau]$ for $k \in \mathbb{Z}$. By the definition of $\exp_{\tau}(\cdot; \Omega)$, we further find \begin{equation} \frac{\mathrm{d}^{j}}{\mathrm{d} t^{j}} x(k\tau - 0) = \frac{\mathrm{d}^{j}}{\mathrm{d} t^{j}} x(k\tau + 0) \text{ for } j = 0, \dots, k, \quad k \in \mathbb{N}_{0}. 
\notag \end{equation} Hence, $x \in C^{0}\big([-\tau, \infty), X\big) \cap C^{1}\big([0, \infty), X\big) \cap C^{2}\big([\tau, \infty), X\big)$. For $k \in \mathbb{N}$, $k \geq 2$, we have \begin{equation} x(t) = \mathrm{id}_{X} + \Omega \frac{t}{1!} + \Omega^{2} \frac{(t - \tau)^{2}}{2!} + \Omega^{3} \frac{(t - 2\tau)^{3}}{3!} + \Omega^{4} \frac{(t - 3\tau)^{4}}{4!} + \dots + \Omega^{k} \frac{\left(t - (k - 1) \tau\right)^{k}}{k!} \text{ for } (k - 1) \tau \leq t < k \tau. \notag \end{equation} For $t \geq \tau$, differentiation yields \begin{align*} \dot{x}(t) &= \Omega + \Omega^{2} \frac{t - \tau}{1!} + \Omega^{3} \frac{(t - 2\tau)^{2}}{2!} + \Omega^{4} \frac{(t - 3\tau)^{3}}{3!} + \dots + \Omega^{k} \frac{\left(t - (k - 1) \tau\right)^{k-1}}{(k - 1)!} \\ % &= \Omega \Big( \mathrm{id}_{X} + \Omega \frac{t - \tau}{1!} + \Omega^{2} \frac{(t - 2\tau)^{2}}{2!} + \Omega^{3} \frac{(t - 3\tau)^{3}}{3!} + \dots + \Omega^{k-1} \frac{\left(t - (k - 1) \tau\right)^{k-1}}{(k - 1)!}\Big) \\ &= \Omega \exp_{\tau}(t - \tau; \Omega) = \Omega x(t - \tau) \notag \end{align*} and therefore \begin{align*} \ddot{x}(t) &= \Omega^{2} + \Omega^{3} \frac{t - 2\tau}{1!} + \Omega^{4} \frac{(t - 3\tau)^{2}}{2!} + \dots + \Omega^{k} \frac{\left(t - (k - 1) \tau\right)^{k-2}}{(k - 2)!} \\ % &= \Omega^{2} \Big( \mathrm{id}_{X} + \Omega \frac{t - 2\tau}{1!} + \Omega^{2} \frac{(t - 3\tau)^{2}}{2!} + \dots + \Omega^{k-2} \frac{\left(t - (k - 1) \tau\right)^{k-2}}{(k - 2)!}\Big) \\ &= \Omega^{2} \exp_{\tau}(t - 2\tau; \Omega) = \Omega^{2} x(t - 2\tau). \end{align*} Hence, $x$ satisfies Equation (\ref{EQUATION_DDE_FOR_DEXP_1}). Finally, by the definition of $\exp_{\tau}(\cdot; \Omega)$, $x$ satisfies Equation (\ref{EQUATION_DDE_FOR_DEXP_2}), too. 
\end{proof} \begin{corollary} The delayed exponential function $\exp_{\tau}(\cdot; -\Omega)$ lies in $C^{0}\big([-\tau, \infty), X\big) \cap C^{1}\big([0, \infty), X\big) \cap C^{2}\big([\tau, \infty), X\big)$ and solves the Cauchy problem (\ref{EQUATION_DDE_FOR_DEXP_1})--(\ref{EQUATION_DDE_FOR_DEXP_2}) with the initial data \begin{equation} \varphi(t) = \left\{ \begin{array}{cc} \mathrm{id}_{X}, & -\tau \leq t < 0, \\ \mathrm{id}_{X} - \Omega t, & 0 \leq t \leq \tau. \end{array} \right. \notag \end{equation} \end{corollary} We define the functions \begin{equation} \begin{split} x^{1}_{\tau}(t; \Omega) &:= \frac{1}{2} \big(\exp_{\tau}(t; \Omega) + \exp_{\tau}(t; -\Omega)\big) \text{ for } t \geq -\tau, \\ x^{2}_{\tau}(t; \Omega) &:= \frac{1}{2} \Omega^{-1} \big(\exp_{\tau}(t; \Omega) - \exp_{\tau}(t; -\Omega)\big) \text{ for } t \geq -\tau. \end{split} \label{EQUATION_FUNCTIONS_X_1_AND_X_2} \end{equation} From Equation (\ref{EQUATION_DEFINITION_OF_DELAYED_EXPONENTIAL}), we explicitly obtain \begin{equation} x^{1}_{\tau}(t; \Omega) = \left\{ \begin{array}{cc} \mathrm{id}_{X}, & -\tau \leq t < \tau, \\ \mathrm{id}_{X} + \Omega^{2} \frac{(t - \tau)^{2}}{2!}, & \tau \leq t < 3\tau, \\ \mathrm{id}_{X} + \Omega^{2} \frac{(t - \tau)^{2}}{2!} + \Omega^{4} \frac{(t - 3\tau)^{4}}{4!}, & 3\tau \leq t < 5\tau, \\ \dots & \dots \\ \mathrm{id}_{X} + \Omega^{2} \frac{(t - \tau)^{2}}{2!} + \dots + \Omega^{2k} \frac{(t - (2k - 1) \tau)^{2k}}{(2k)!}, & (2k - 1) \tau \leq t < (2k + 1) \tau \end{array}\right. 
\notag \end{equation} and \begin{equation} x^{2}_{\tau}(t; \Omega) = \left\{ \begin{array}{cc} 0_{L(X)}, & -\tau \leq t < 0, \\ \mathrm{id}_{X} \frac{t}{1!}, & 0 \leq t < 2\tau, \\ \mathrm{id}_{X} \frac{t}{1!} + \Omega^{2} \frac{(t - 2\tau)^{3}}{3!}, & 2\tau \leq t < 4\tau, \\ \mathrm{id}_{X} \frac{t}{1!} + \Omega^{2} \frac{(t - 2\tau)^{3}}{3!} + \Omega^{4} \frac{(t - 4\tau)^{5}}{5!}, & 4\tau \leq t < 6\tau, \\ \dots & \dots \\ \mathrm{id}_{X} \frac{t}{1!} + \Omega^{2} \frac{(t - 2\tau)^{3}}{3!} + \dots + \Omega^{2k} \frac{(t - (2k) \tau)^{2k+1}}{(2k + 1)!}, & 2k \tau \leq t < 2(k + 1) \tau. \end{array}\right. \notag \end{equation} Obviously, $x^{1}_{\tau}$ and $x^{2}_{\tau}$ are even functions with respect to $\Omega$. Figure \ref{FIGURE_FUNCTIONS_X1_AND_X2} displays the functions $x^{1}_{\tau}(\cdot; \Omega)$ and $x^{2}_{\tau}(\cdot; \Omega)$ for various values of $\tau$ and $\Omega$. \begin{figure}[h!] \centering \includegraphics[scale = 0.4]{fig01.eps} \includegraphics[scale = 0.4]{fig02.eps} \caption{Functions $x_{\tau}^{1}(\cdot; \Omega)$ and $x_{\tau}^{2}(\cdot; \Omega)$. \label{FIGURE_FUNCTIONS_X1_AND_X2}} \end{figure} \begin{theorem} The functions $x^{1}_{\tau}(\cdot; \Omega)$ and $x^{2}_{\tau}(\cdot; \Omega)$ lie in $C^{0}\big([-\tau, \infty), X\big) \cap C^{2}\big([-\tau, 0], X\big) \cap C^{1}\big([0, \infty), X\big) \cap C^{2}\big([\tau, \infty), X\big)$. Further, $x^{1}_{\tau}(\cdot; \Omega)$ and $x^{2}_{\tau}(\cdot; \Omega)$ are solutions to the Cauchy problem (\ref{EQUATION_DDE_FOR_DEXP_1})--(\ref{EQUATION_DDE_FOR_DEXP_2}) with the initial data $\varphi(t) = \mathrm{id}_{X}$ for $-\tau \leq t \leq \tau$ and $\varphi(t) = 0_{L(X)}$ for $-\tau \leq t < 0$, $\varphi(t) = \mathrm{id}_{X} t$ for $0 \leq t \leq \tau$, respectively. 
\end{theorem} First, assuming $f \equiv 0_{X}$, Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) reduce to \begin{align} \ddot{x}(t) - \Omega^{2} x(t - 2\tau) &= 0 \text{ for } t \geq 0 \label{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO}, \\ x(t) &= \varphi(t) \text{ for } t \in [-2\tau, 0]. \label{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO_IC} \end{align} \begin{theorem} \label{THEOREM_SOLUTION_RIGHT_HAND_SIDE_ZERO} Let $\varphi \in C^{2}\big([-2\tau, 0], X\big)$. Then the unique classical solution $x$ to the Cauchy problem (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO_IC}) is given by \begin{equation} x(t) = x^{1}_{\tau}(t + \tau; \Omega) \varphi(-2\tau) + x^{2}_{\tau}(t + 2\tau; \Omega) \dot{\varphi}(-2\tau) + \int_{-2\tau}^{0} x^{2}_{\tau}(t - s; \Omega) \ddot{\varphi}(s) \mathrm{d}s. \notag \end{equation} \end{theorem} \begin{proof} To solve Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO_IC}), we use the ansatz \begin{equation} x(t) = x^{1}_{\tau}(t + \tau; \Omega) c_{1} + x^{2}_{\tau}(t + 2\tau; \Omega) c_{2} + \int_{-2\tau}^{0} x^{2}_{\tau}(t - s; \Omega) \ddot{c}(s) \mathrm{d}s \label{EQUATION_ANSATZ_X_RIGHT_HAND_SIDE_ZERO} \end{equation} for some $c_{1}, c_{2} \in X$ and $c \in C^{2}\big([-2\tau, 0], X\big)$. 
Plugging the ansatz from Equation (\ref{EQUATION_ANSATZ_X_RIGHT_HAND_SIDE_ZERO}) into Equation (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO}), we obtain for $t \geq 0$ \begin{align*} \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} &\Big(x^{1}_{\tau}(t + \tau; \Omega) c_{1} + x^{2}_{\tau}(t + 2\tau; \Omega) c_{2} + \int_{-2\tau}^{0} x^{2}_{\tau}(t - s; \Omega) \ddot{c}(s) \mathrm{d}s\Big) \\ -&\Omega^{2} \Big(x^{1}_{\tau}((t + \tau) - 2\tau; \Omega) c_{1} + x^{2}_{\tau}((t + 2\tau) - 2\tau; \Omega) c_{2} \\ &+ \int_{-2\tau}^{0} x^{2}_{\tau}((t - 2\tau) - s; \Omega) \ddot{c}(s) \mathrm{d}s\Big) = 0_{X} \end{align*} or, equivalently, \begin{align*} \Big(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} &x^{1}_{\tau}(t + \tau; \Omega) - \Omega^{2} x^{1}_{\tau}((t + \tau) - 2\tau; \Omega)\Big) c_{1} \\ &+ \Big(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} x^{2}_{\tau}(t + 2\tau; \Omega) - \Omega^{2} x^{2}_{\tau}((t + 2\tau) - 2\tau; \Omega)\Big) c_{2} \\ &+ \int_{-2\tau}^{0} \Big(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} x^{2}_{\tau}(t - s; \Omega) - \Omega^{2} x^{2}_{\tau}((t - 2\tau) - s; \Omega)\Big) \ddot{c}(s) \mathrm{d}s \equiv 0_{X}. \notag \end{align*} Since $x^{1}_{\tau}(\cdot; \Omega)$ and $x^{2}_{\tau}(\cdot; \Omega)$ solve the homogeneous equation, all three coefficients at $c_{1}$, $c_{2}$ and $\ddot{c}$ vanish implying that the function $x$ in Equation (\ref{EQUATION_ANSATZ_X_RIGHT_HAND_SIDE_ZERO}) is a solution of Equation (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO}). Now, we show that, selecting $c_{1} := \varphi(-2\tau)$, $c_{2} := \dot{\varphi}(-2\tau)$ and $c := \varphi$, the function $x$ in Equation (\ref{EQUATION_ANSATZ_X_RIGHT_HAND_SIDE_ZERO}) satisfies the initial condition (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_RIGHT_HAND_SIDE_ZERO_IC}). 
Letting for $t \in [-2\tau, 0]$ \begin{equation} \big[I \varphi\big](t) := \int_{-2\tau}^{0} x^{2}_{\tau}(t - s; \Omega) \ddot{\varphi}(s) \mathrm{d}s \notag \end{equation} and performing a change of variables $\sigma := t - s$, we find \begin{equation} \big[I \varphi\big](t) = -\int_{t + 2\tau}^{t} x^{2}_{\tau}(\sigma; \Omega) \ddot{\varphi}(t - \sigma) \mathrm{d}\sigma = \int_{t}^{t + 2\tau} x^{2}_{\tau}(\sigma; \Omega) \ddot{\varphi}(t - \sigma) \mathrm{d}\sigma. \notag \end{equation} Since $x^{2}_{\tau}(\cdot; \Omega)$ vanishes on $[-\tau, 0)$ and can continuously be extended by $0_{L(X)}$ onto $(-\infty, -\tau]$, we get \begin{equation} \big[I \varphi\big](t) = \int_{0}^{t + 2\tau} x^{2}_{\tau}(\sigma; \Omega) \ddot{\varphi}(t - \sigma) \mathrm{d}\sigma. \notag \end{equation} Integrating by parts, we further get \begin{equation} \begin{split} \big[I \varphi\big](t) &= \int_{0}^{t + 2\tau} x^{2}_{\tau}(\sigma; \Omega) \ddot{\varphi}(t - \sigma) \mathrm{d}\sigma \\ &= -x^{2}_{\tau}(\sigma; \Omega) \dot{\varphi}(t - \sigma)\big|_{\sigma = 0}^{\sigma = t + 2\tau} + \int_{0}^{t + 2\tau} \dot{x}^{2}_{\tau}(\sigma; \Omega) \dot{\varphi}(t - \sigma) \mathrm{d}\sigma. \end{split} \notag \end{equation} Now, taking into account \begin{equation} x^{2}_{\tau}(t; \Omega) = t \, \mathrm{id}_{X} \text{ for } 0 \leq t \leq 2\tau, \label{EQUATION_DEFINITION_OF_X_2_BETWEEN_0_AND_2TAU} \end{equation} we obtain \begin{equation} \big[I \varphi\big](t) = -x^{2}_{\tau}(t + 2\tau; \Omega) \dot{\varphi}(-2\tau) + \int_{0}^{t + 2\tau} \dot{x}^{2}_{\tau}(\sigma; \Omega) \dot{\varphi}(t - \sigma) \mathrm{d}\sigma. \notag \end{equation} Again, using Equation (\ref{EQUATION_DEFINITION_OF_X_2_BETWEEN_0_AND_2TAU}), we compute \begin{equation} \big[I \varphi](t) = -(t + 2\tau) \dot{\varphi}(-2\tau) - \varphi(t - \sigma)\big|_{\sigma = 0}^{\sigma = t + 2\tau} = -x^{2}_{\tau}(t + 2\tau; \Omega) \dot{\varphi}(-2\tau) - \varphi(-2\tau) + \varphi(t). 
\notag \end{equation} Hence, for $t \in [-2\tau, 0]$, we have \begin{equation} x(t) = x^{1}_{\tau}(t + \tau; \Omega) \varphi(-2\tau) + x^{2}_{\tau}(t + 2\tau; \Omega) \dot{\varphi}(-2\tau) + \int_{-2\tau}^{0} x^{2}_{\tau}(t - s; \Omega) \ddot{\varphi}(s) \mathrm{d}s = \varphi(t) \notag \end{equation} as claimed. \end{proof} Next, we consider Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) for the trivial initial data, i.e., \begin{align} \ddot{x}(t) - \Omega^{2} x(t - 2\tau) &= f(t) \text{ for } t \geq 0 \label{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_INITIAL_DATA_ZERO}, \\ x(t) &= 0 \text{ for } t \in [-2\tau, 0]. \label{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_INITIAL_DATA_ZERO_IC} \end{align} \begin{theorem} \label{THEOREM_SOLUTION_INITIAL_DATA_ZERO} Let $f \in C^{0}\big([0, \infty), X\big)$. The unique classical solution $x$ to the Cauchy problem (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_INITIAL_DATA_ZERO})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_INITIAL_DATA_ZERO_IC}) is given by \begin{equation} x(t) = \int_{0}^{t} x^{2}_{\tau}(t - s; \Omega) f(s) \mathrm{d}s. \notag \end{equation} \end{theorem} \begin{proof} To find an explicit solution representation, we use the ansatz \begin{equation} x(t) = \int_{0}^{t} x^{2}_{\tau}(t - s; \Omega) c(s) \mathrm{d}s \text{ for } t \geq 0 \notag \end{equation} for some function $c \in C^{0}\big([0, \infty), X\big)$. Differentiating this expression with respect to $t$ and exploiting the initial conditions for $x^{2}_{\tau}(\cdot; \Omega)$, we get \begin{align*} \dot{x}(t) &= \int_{0}^{t} \dot{x}^{2}_{\tau}(t - s; \Omega) c(s) \mathrm{d}s + x^{2}_{\tau}(t - s; \Omega) c(s)\big|_{s = t} = \int_{0}^{t} \dot{x}^{2}_{\tau}(t - s; \Omega) c(s) \mathrm{d}s + x^{2}_{\tau}(0; \Omega) c(t) \\ &= \int_{0}^{t} \dot{x}^{2}_{\tau}(t - s; \Omega) c(s) \mathrm{d}s. 
\notag \end{align*} Differentiating again, we find \begin{align*} \ddot{x}(t) &= \int_{0}^{t} \ddot{x}^{2}_{\tau}(t - s; \Omega) c(s) \mathrm{d}s + \dot{x}^{2}_{\tau}(t - s; \Omega) c(s)\big|_{s = t} \\ &= \int_{0}^{t} \ddot{x}^{2}_{\tau}(t - s; \Omega) c(s) \mathrm{d}s + \dot{x}^{2}_{\tau}(0+; \Omega) c(t) \\ &= \int_{0}^{t} \ddot{x}^{2}_{\tau}(t - s; \Omega) c(s) \mathrm{d}s + c(t). \end{align*} Plugging this into Equation (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_INITIAL_DATA_ZERO}) and recalling that $x^{2}_{\tau}(\cdot; \Omega)$ is a solution of the homogeneous equation, we get \begin{equation} c(t) + \int_{0}^{t} \big(\ddot{x}^{2}_{\tau}(t - s; \Omega) - \Omega^{2} x^{2}_{\tau}(t - 2\tau - s; \Omega)\big) c(s) \mathrm{d}s = f(t) \notag \end{equation} and therefore $c \equiv f$. \end{proof} As a consequence of Theorems \ref{THEOREM_SOLUTION_RIGHT_HAND_SIDE_ZERO} and \ref{THEOREM_SOLUTION_INITIAL_DATA_ZERO} and the linearity of Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}), we obtain: \begin{theorem} \label{THEOREM_REPRESENTATION_OF_CLASSICAL_SOLUTIONS} Let $\varphi \in C^{2}\big([-2\tau, 0], X\big)$ and $f \in C^{0}\big([0, \infty), X\big)$. The unique classical solution to Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) is given by \begin{equation} \begin{split} x(t) &= x^{1}_{\tau}(t + \tau; \Omega) \varphi(-2\tau) + x^{2}_{\tau}(t + 2\tau; \Omega) \dot{\varphi}(-2\tau) + \int_{-2\tau}^{0} x^{2}_{\tau}(t - s; \Omega) \ddot{\varphi}(s) \mathrm{d}s \\ &+ \left\{ \begin{array}{cl} 0, & t \in [-2\tau, 0), \\ \int_{0}^{t} x^{2}_{\tau}(t - s; \Omega) f(s) \mathrm{d}s, & t \geq 0 \end{array}\right. \end{split} \notag \end{equation} for $t \in [-2\tau, \infty)$. 
\end{theorem} Finally, we get: \begin{theorem} Let $\varphi \in C^{1}\big([-2\tau, 0], X\big)$ and $f \in L^{1}_{\mathrm{loc}}(0, \infty; X)$. The unique mild solution to Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) is given by \begin{equation} \begin{split} x(t) &= x^{1}_{\tau}(t + \tau; \Omega) \varphi(-2\tau) + x^{2}_{\tau}(t; \Omega) \dot{\varphi}(0) + \int_{-2\tau}^{0} \dot{x}^{2}_{\tau}(t - s; \Omega) \dot{\varphi}(s) \mathrm{d}s \\ &+ \left\{ \begin{array}{cl} 0, & t \in [-2\tau, 0), \\ \int_{0}^{t} x^{2}_{\tau}(t - s; \Omega) f(s) \mathrm{d}s, & t \geq 0 \end{array}\right. \end{split} \notag \end{equation} for $t \in [-2\tau, \infty)$. \end{theorem} \begin{proof} Approximating $\varphi$ in $C^{1}\big([-2\tau, 0], X\big)$ with $(\varphi_{n})_{n \in \mathbb{N}} \subset C^{2}\big([-2\tau, 0], X\big)$ and $f$ in $L^{1}_{\mathrm{loc}}(0, \infty; X)$ with $(f_{n})_{n \in \mathbb{N}} \subset C^{0}\big([0, \infty), X\big)$, applying Theorem \ref{THEOREM_REPRESENTATION_OF_CLASSICAL_SOLUTIONS} to solve the Cauchy problem (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) for the right-hand side $f_{n}$ and the initial data $\varphi_{n}$, performing a partial integration for the integral involving $\ddot{\varphi}_{n}$ and passing to the limit as $n \to \infty$, the claim follows. \end{proof} \subsection{Asymptotic behavior as $\tau \to 0$} Again, we assume $X$ to be a Banach space and prove the following generalization of \cite[Lemma 4]{KhuPo2014}. \begin{lemma} \label{LEMMA_DELAYED_EXPONENTIAL_ASYMPTOTICS} Let $\Omega \in L(X)$, $T > 0$, $\tau_{0} > 0$ and let \begin{equation} \alpha := 1 + \|\Omega\|_{L(X)} \exp\big(\tau_{0} \|\Omega\|_{L(X)}\big). 
\notag \end{equation} Then for any $\tau \in (0, \tau_{0}]$, \begin{equation} \|\exp_{\tau}(t - \tau; \Omega) - \exp(\Omega t)\|_{L(X)} \leq \tau \exp(\alpha T \|\Omega\|_{L(X)}) \text{ for } t \in [0, T]. \notag \end{equation} \end{lemma} \begin{proof} Let $\tau \in (0, \tau_{0}]$. For $t \in [0, \tau]$, the claim easily follows from the mean value theorem for Bochner integration. Next, we use mathematical induction to show that, for any $k \in \mathbb{N}$, \begin{equation} \|\exp_{\tau}(t - \tau; \Omega) - \exp(t \Omega)\|_{L(X)} \leq \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) \text{ for } t \in ((k - 1) \tau, k \tau]. \notag \end{equation} Indeed, assuming that the claim is true for some $k \in \mathbb{N}$, we use the fundamental theorem of calculus and find for $t \in (k \tau, (k + 1) \tau]$ \begin{align*} \|&\exp_{\tau}(t - \tau; \Omega) - \exp(t \Omega)\|_{L(X)} \\ % &\leq \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) + \int_{k\tau}^{(k + 1) \tau} \Big\|\frac{\mathrm{d}}{\mathrm{d}s} \exp_{\tau}(s - \tau; \Omega) - \frac{\mathrm{d}}{\mathrm{d}s} \exp(s \Omega)\Big\|_{L(X)} \mathrm{d}s \\ % &\leq \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) + \|\Omega\|_{L(X)} \int_{k\tau}^{(k + 1) \tau} \|\exp_{\tau}(s - 2 \tau; \Omega) - \exp(s \Omega)\|_{L(X)} \mathrm{d}s \\ % &\leq \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) + \|\Omega\|_{L(X)} \int_{k\tau}^{(k + 1) \tau} \big\|\exp_{\tau}(s - 2 \tau; \Omega) - \exp\big((s - \tau) \Omega\big)\big\|_{L(X)} \mathrm{d}s \\ &+ \|\Omega\|_{L(X)} \int_{k\tau}^{(k + 1) \tau} \big\|\exp(s\Omega) - \exp\big((s - \tau) \Omega\big)\big\|_{L(X)} \mathrm{d}s \displaybreak \\ % &\leq \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) + \|\Omega\|_{L(X)} \int_{(k - 1) \tau}^{k \tau} \big\|\exp_{\tau}(s - \tau; \Omega) - \exp(s \Omega)\big\|_{L(X)} \mathrm{d}s \\ &+ \|\Omega\|_{L(X)} \int_{k\tau}^{(k + 1) \tau} \int_{s - \tau}^{s} \Big\|\frac{\mathrm{d}}{\mathrm{d}\sigma} \exp(\sigma
\Omega)\Big\|_{L(X)} \mathrm{d}\sigma \mathrm{d}s \\ % &\leq \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) + \tau^{2} \|\Omega\|_{L(X)} \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) \\ &+ \tau^{2} \|\Omega\|_{L(X)}^{2} \exp\big((k + 1) \tau \|\Omega\|_{L(X)}\big) \\ % &\leq \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) \Big(1 + \tau \|\Omega\|_{L(X)} + \tau \|\Omega\|_{L(X)}^{2} \exp\big(\tau \|\Omega\|_{L(X)}\big)\Big) \\ % &= \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) \bigg(1 + \tau \|\Omega\|_{L(X)} \Big(1 + \|\Omega\|_{L(X)} \exp\big(\tau \|\Omega\|_{L(X)}\big)\Big)\bigg) \\ % &\leq \tau \exp\big(\alpha k \tau \|\Omega\|_{L(X)}\big) \exp\big(\alpha \tau \|\Omega\|_{L(X)}\big) \leq \tau \exp\big(\alpha (k + 1) \tau \|\Omega\|_{L(X)}\big) \end{align*} since $\alpha \geq 1$. The claim follows by induction. \end{proof} \begin{corollary} \label{COROLLARY_DELAYED_EXPONENTIAL_ASYMPTOTICS} Let the assumptions of Lemma \ref{LEMMA_DELAYED_EXPONENTIAL_ASYMPTOTICS} be satisfied and let $\gamma \geq 0$. Then \begin{equation} \big\|\exp_{\tau}(t + \gamma; \Omega) - e^{\Omega t}\big\|_{L(X)} \leq (\gamma + \tau) \big(1 + \|\Omega\|_{L(X)}\big) \exp\big(\alpha (T + \gamma + \tau) \|\Omega\|_{L(X)}\big) \text{ for } t \in [0, T]. \notag \end{equation} \end{corollary} \begin{proof} Lemma \ref{LEMMA_DELAYED_EXPONENTIAL_ASYMPTOTICS} and the mean value theorem for Bochner integration yield \begin{align*} \big\|\exp_{\tau}(t + &\gamma; \Omega) - e^{\Omega t}\big\|_{L(X)} \\ &\leq \big\|\exp_{\tau}(t + \gamma; \Omega) - e^{\Omega (t + \gamma + \tau)}\big\|_{L(X)} + \big\|e^{\Omega (t + \gamma + \tau)} - e^{\Omega t}\big\|_{L(X)} \\ &\leq \tau \exp\big(\alpha (T + \gamma + \tau) \|\Omega\|_{L(X)}\big) + (\gamma + \tau) \|\Omega\|_{L(X)} \exp\big((T + \gamma + \tau) \|\Omega\|_{L(X)}\big) \\ &\leq (\gamma + \tau) \big(1 + \|\Omega\|_{L(X)}\big) \exp\big(\alpha (T + \gamma + \tau) \|\Omega\|_{L(X)}\big) \end{align*} as we claimed. 
\end{proof}
Let $T > 0$, $\tau_{0} > 0$, $x_{0}, x_{1} \in X$ and $f \in L^{1}_{\mathrm{loc}}(0, \infty; X)$ be fixed and let $\bar{x} \in C^{1}\big([0, \infty), X\big)$ denote the unique mild solution to the Cauchy problem (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL})--(\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL_IC}) from Section \ref{SECTION_CLASSICAL_HARMONIC_OSCILLATOR}.
\begin{theorem}
\label{THEOREM_DELAYED_EQUATION_ASYMPTOTICS}
For any $\tau \in (0, \tau_{0})$, let $x(\cdot; \tau)$ denote the unique mild solution of (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_GENERAL_IC}) for the initial data $\varphi(\cdot; \tau) \in C^{1}\big([-2\tau, 0], X\big)$. Then we have
\begin{equation}
\begin{split}
\|x(\cdot; \tau) - \bar{x}\|_{C^{0}([0, T], X)} &\leq 3 \beta \Big( \|\varphi(-2\tau; \tau) - x_{0}\|_{X} + \|\dot{\varphi}(0; \tau) - x_{1}\|_{X}\Big) \\
&+ 3 \beta \tau \Big(\|\varphi(\cdot; \tau)\|_{C^{1}([-2\tau, 0], X)} + \|f\|_{L^{1}(0, T; X)}\Big)
\end{split} \notag
\end{equation}
with $\beta(T) := 2 \big(1 + \|\Omega\|_{L(X)}\big) \big(1 + \|\Omega^{-1}\|_{L(X)}\big) \exp\big(\alpha (T + 2\tau) \|\Omega\|_{L(X)}\big)$.
\end{theorem}
\begin{proof}
Using the explicit representations of $\bar{x}$ and $x(\cdot; \tau)$ from Sections \ref{SECTION_CLASSICAL_HARMONIC_OSCILLATOR} and \ref{SECTION_HARMONIC_OSCILLATOR_WITH_DELAY_SOLUTION_REPRESENTATION}, respectively, we can estimate
\begin{equation}
\|x(t; \tau) - \bar{x}(t)\|_{X} \leq I_{0, 1}(t) + I_{0, 2}(t) + I_{0, 3}(t) \text{ for } t \in [0, T] \notag
\end{equation}
with
\begin{align*}
I_{0, 1}(t) &:= \big\|x_{\tau}^{1}(t + \tau; \Omega) \varphi(-2\tau; \tau) - \tfrac{1}{2} (e^{\Omega t} + e^{-\Omega t}) x_{0}\big\|_{X} \\
&+ \big\|x_{\tau}^{2}(t + 2\tau; \Omega) \dot{\varphi}(0; \tau) - \tfrac{1}{2} \Omega^{-1}(e^{\Omega t} - e^{-\Omega t}) x_{1}\big\|_{X}, \\
%
I_{0, 2}(t) &:= \int_{0}^{t} \big\|x^{2}_{\tau}(t - s; \Omega) - \tfrac{1}{2} \Omega^{-1} (e^{\Omega(t - s)} - e^{-\Omega (t -s)})\big\|_{L(X)} \|f(s)\|_{X} \mathrm{d}s, \\
%
I_{0, 3}(t) &:= \int_{-2\tau}^{0} \|x_{\tau}^{2}( t - s - \tau; \Omega)\|_{L(X)} \|\dot{\varphi}(s; \tau)\|_{X} \mathrm{d}s.
\end{align*}
Corollary \ref{COROLLARY_DELAYED_EXPONENTIAL_ASYMPTOTICS} yields
\begin{align*}
\big\|x_{\tau}^{1}(t + \tau; \Omega) - \tfrac{1}{2} (e^{\Omega t} + e^{-\Omega t})\big\|_{L(X)} &\leq \beta \tau, \\
\big\|x_{\tau}^{2}(t + \tau; \Omega) - \tfrac{1}{2} \Omega^{-1} (e^{\Omega t} - e^{-\Omega t})\big\|_{L(X)} &\leq \beta \tau
\end{align*}
and, therefore,
\begin{align*}
I_{0, 1}(t) &\leq \beta \tau \big(\|\varphi(-2\tau; \tau)\|_{X} + \|\dot{\varphi}(0; \tau)\|_{X}\big) + \beta\big(\|\varphi(-2\tau; \tau) - x_{0}\|_{X} + \|\dot{\varphi}(0; \tau) - x_{1}\|_{X}\big) \\
&\leq \beta \tau \|\varphi\|_{C^{1}([-2\tau, 0], X)} + \beta \big(\|\varphi(-2\tau; \tau) - x_{0}\|_{X} + \|\dot{\varphi}(0; \tau) - x_{1}\|_{X}\big).
\end{align*}
Similarly,
\begin{equation}
I_{0, 2}(t) \leq 2 \beta \tau \|f\|_{L^{1}(0, T; X)} \text{ and } I_{0, 3}(t) \leq 2 \beta \tau \|\varphi\|_{C^{1}([-2\tau, 0], X)}. \notag
\end{equation}
Hence, the claim follows.
\end{proof}
\begin{corollary}
Under the conditions of Theorem \ref{THEOREM_DELAYED_EQUATION_ASYMPTOTICS}, we additionally have
\begin{equation}
\begin{split}
\|x(\cdot; \tau) - &\bar{x}\|_{C^{1}([0, T], X)} \leq 3 (1 + \beta(T))(1 + \delta(T)) (1 + T) \Big( \|\varphi(-2\tau; \tau) - x_{0}\|_{X} \\
+ &\|\dot{\varphi}(0; \tau) - x_{1}\|_{X} + \tau\big(\|\varphi(\cdot; \tau)\|_{C^{1}([-2\tau, 0], X)} + \|f\|_{L^{1}(0, T; X)} + \|x_{0}\|_{X} + \|x_{1}\|_{X}\big)\Big)
\end{split} \notag
\end{equation}
with $\delta(T) := \|\Omega\|_{L(X)}^{2} \big(2 + \|\Omega^{-1}\|_{L(X)} + \|\Omega^{-1}\|_{L(X)} T\big) e^{\|\Omega\|_{L(X)} T}$.
\end{corollary}
\begin{proof}
Integrating Equation (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL}) and using Equation (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL_IC}) as well as exploiting Equations (\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_MILD_FORMULATION})--(\ref{EQUATION_LINEAR_OSCILLATOR_WITH_DELAY_MILD_FORMULATION_IC}) yields
\begin{align*}
\|\dot{x}(t; \tau) - \dot{\bar{x}}(t)\|_{X} &\leq \|\dot{\varphi}(0; \tau) - x_{1}\|_{X} + \int_{0}^{t} \|\Omega^{2} x(s - 2\tau; \tau) - \Omega^{2} \bar{x}(s)\|_{X} \mathrm{d}s \\
%
&\leq I_{1, 1}(t) + I_{1, 2}(t) + I_{1, 3}(t) \text{ for } t \in [0, T]
\end{align*}
with
\begin{align*}
I_{1, 1}(t) &:= \|\dot{\varphi}(0; \tau) - x_{1}\|_{X}, \quad I_{1, 2}(t) := \|\Omega\|_{L(X)}^{2} \int_{-2\tau}^{0} \|\varphi(s; \tau) - \bar{x}(s + 2\tau)\|_{X} \mathrm{d}s, \\
%
I_{1, 3}(t) &:= \|\Omega\|_{L(X)}^{2} \int_{2\tau}^{t} \|x(s - 2\tau; \tau) - \bar{x}(s)\|_{X} \mathrm{d}s.
\end{align*}
Taking into account Equation (\ref{EQUATION_HARMONIC_OSCILLATOR_GENERAL_EXPLICIT_SOLUTION}), we can estimate
\begin{equation}
\|\bar{x}\|_{C^{0}([0, 2\tau], X)} \leq \big(\|x_{0}\|_{X} + \|\Omega^{-1}\|_{L(X)} \|x_{1}\|_{X}\big) e^{\|\Omega\|_{L(X)} T} + \|\Omega^{-1}\|_{L(X)} T e^{\|\Omega\|_{L(X)} T} \|f\|_{L^{1}(0, T; X)}.
\notag \end{equation}
Hence,
\begin{align*}
I_{1, 2}(t) \leq \delta \tau \Big(\|\varphi\|_{C^{0}([-2\tau, 0], X)} + \|x_{0}\|_{X} + \|x_{1}\|_{X}\Big).
\end{align*}
Applying Theorem \ref{THEOREM_DELAYED_EQUATION_ASYMPTOTICS}, we further get
\begin{align*}
I_{1, 3}(t) \leq 3 \|\Omega\|_{L(X)}^{2} T \beta \Big(&\|\varphi(-2\tau; \tau) - x_{0}\|_{X} + \|\dot{\varphi}(0; \tau) - x_{1}\|_{X} \\
+ &\tau \big(\|\varphi(\cdot; \tau)\|_{C^{1}([-2\tau, 0], X)} + \|f\|_{L^{1}(0, T; X)}\big)\Big).
\end{align*}
Combining these inequalities and using Theorem \ref{THEOREM_DELAYED_EQUATION_ASYMPTOTICS} again, we deduce the asserted estimate.
\end{proof}
\section*{Acknowledgment}
This work has been funded by a research grant from the Young Scholar Fund supported by the Deutsche Forschungsgemeinschaft (ZUK 52/2) at the University of Konstanz, Konstanz, Germany.
\addcontentsline{toc}{chapter}{References}
\section{Introduction}
The conventional picture of the atom is simple: a central nucleus clouded by orbiting electrons. Arranging the atoms in various ways and at specific distances from each other leads to the ``structure'' of molecules, solids, and liquids: the tetrahedral structure of methane; the rock-salt structure of salts; and so on. In this conventional picture of the atom it is implicitly assumed that the atomic nuclei are point-like classical particles, whereas the only quantum objects are the orbiting electrons. This is an approximation. In reality the atomic nuclei are not point-like classical particles, but are themselves quantum objects and, like the electrons, are described by wave functions. For most elements the nuclear wave function is sufficiently localised that the nuclei effectively behave as classical particles, which is why the conventional picture of atoms serves us so well. Quantum mechanics tells us, however, that the approximation of atomic nuclei as point-like classical particles gets progressively worse as the temperature drops or as the elements get lighter. For the lightest element, hydrogen, nuclear quantum effects (NQEs) can be significant even at room temperature, with the nucleus of the H atom, a proton, being delocalised in space and able to tunnel through classically forbidden potential energy barriers. If hydrogen were some obscure element trailing off the end of the periodic table perhaps one would not care much about NQEs, but for the very reason that hydrogen is the most ``quantum'' of the elements, i.e., its lightness, it is also the most abundant: A colossal 90\% of the atoms in the universe are hydrogen; it is present in all organic compounds, the largest class of which are literally called hydrocarbons; it is essential for life by, for example, being the marriage partner of O in H$_2$O.
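One way to quantify how ``quantum'' a nucleus is, is its thermal de Broglie wavelength, $\lambda = h/\sqrt{2\pi m k_{B} T}$: when $\lambda$ becomes comparable to interatomic distances, the point-like classical picture breaks down. The following minimal numerical sketch (the choice of elements and temperatures is ours, purely for illustration) makes the point:

```python
import math

H_PLANCK = 6.62607015e-34   # Planck constant, J s
K_B = 1.380649e-23          # Boltzmann constant, J/K
AMU = 1.66053907e-27        # atomic mass unit, kg

def thermal_de_broglie(mass_amu, temperature):
    """Thermal de Broglie wavelength h / sqrt(2 pi m k_B T), in metres."""
    m = mass_amu * AMU
    return H_PLANCK / math.sqrt(2.0 * math.pi * m * K_B * temperature)

for name, mass in [("H", 1.008), ("D", 2.014), ("C", 12.011), ("O", 15.999)]:
    lam = thermal_de_broglie(mass, 300.0) * 1e10  # convert to Angstrom
    print(f"{name:>2s} at 300 K: {lam:.2f} Angstrom")
print(f" H at  50 K: {thermal_de_broglie(1.008, 50.0) * 1e10:.2f} Angstrom")
```

For H at room temperature $\lambda \approx 1$~\AA, on the order of a typical bond length, while for C and O it is several times smaller; cooling to 50 K more than doubles it, in line with the mass and temperature trends discussed above.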
Hydrogen is also of paramount importance to the global economy, with practically all industrial catalytic processes having hydrogen implicated either as a reactant, product, or intermediate, and with the making and breaking of H-H, C-H, O-H, and N-H bonds at surfaces the bread and butter of heterogeneous catalysis. Real world manifestations of the importance of NQEs are plentiful. For example, the heat capacity of ``classical'' water would be $\sim$40\% \cite{Vega_2010} larger without NQEs; thus in a classical world tea drinkers would have to wait much longer for their water to boil. Similarly many biological reactions (notably in enzyme catalysis) rely on the tunnelling of protons \cite{SHS_bio_tunel_rev,bio_rev}. In addition, in the pharmaceutical industry, novel classes of deuterated drugs, which are expected to have greater biochemical potency, are under development.

Various experimental approaches are available for understanding NQEs at a fundamental level. Experiments involving isotopic substitution, in which hydrogen is replaced by deuterium (or, less often, tritium), are a powerful and widely used approach. With this strategy one can then explore how e.g. the structure and properties of materials change with isotope substitution \cite{doi:10.1021/cr60292a004,Ubbelohde}. Experiments have shown that deuteration can sometimes lead to dramatic changes in physical properties, e.g. a $>$100 K change in the ferroelectric to paraelectric transition temperature in H-bonded ferroelectrics \cite{PhysRevLett.81.5924,doi:10.1063/1.4862740}. Kinetic measurements in which the rate change upon isotopic substitution is examined (so-called kinetic isotope effects) are also widely employed in chemistry and biology \cite{SHS_bio_tunel_rev,bio_rev,doi:10.1021/cr100436k}. In terms of more direct experimental probes, deep inelastic neutron scattering (DINS) has emerged as an approach that can directly measure the momentum distribution of protons \cite{DINS_1}.
It can provide insights into the local environment of H atoms, which can help elucidate NQEs in hydrogen bond (HB) networks, such as in structures of crystalline and amorphous ice \cite{DINS_1,Romanelli_2013,DINS_2,DINS_3}. On surfaces, the invention of Scanning Tunnelling Microscopy (STM) enabled atomic resolution imaging and atom manipulation \cite{STM_1986}, and STM has also been used to probe NQEs. Notably, measurements of H and D diffusion on various metal surfaces have been performed and provide strong evidence of H tunnelling at cryogenic temperatures \cite{lauhon_direct_2000,PhysRevB.81.045402,Sykes_Quantum_2012,Davidson_2014_2}. In addition, isotope-dependent switching of HBs in adsorbed water clusters has been visualised on Cu and NaCl \cite{PhysRevLett.100.166101,tetramer_2015}. Fast diffusion of H atoms on surfaces beyond the time-resolution of STM can also be measured indirectly with helium spin echo (HeSE) experiments, a robust and sophisticated scattering technique \cite{Jardine_HeSE_2009,jardine_determination_2010}.

From the theoretical perspective a variety of schemes can be employed to treat NQEs \cite{LSCIVR,MCTDH,SHS_bio_tunel_rev,bio_rev,Marx-Parr_1994}. A particularly elegant approach, and the one we focus on in this review, is Feynman's path-integral representation of quantum mechanics \cite{Feynman}. Detailed accounts of the path integral representation, its implementation into computer codes, and applications in chemistry, physics and materials science can be found elsewhere \cite{Tuckerman_book,Tuckerman-Marx_1996,doi:10.1063/1.441588,Berne_1982,PhysRevB.30.2555,Marx-Parr_1994,Cao_Voth_1994,PhysRevLett.105.110602,ipi,PI_Ramirez,PILE,Mass_3,habershon_ring-polymer_2013,markland_nuclear_2018}. Thus we do not go into the details of the theory here except to note that the path-integral framework formulates quantum mechanics as a summation over paths rather than through the wavefunction view of Schr\"odinger.
In doing so it provides a classical analogy for quantum mechanics, which is widely applicable for sampling quantum ensembles and approximating quantum dynamics. In the 1980s, pioneering path-integral simulations of materials were performed with either path-integral molecular dynamics (PIMD) \cite{doi:10.1063/1.446740,Gillan_H_qtst} or path-integral Monte Carlo (PIMC) \cite{doi:10.1063/1.441588,Berne_1982,PhysRevB.30.2555,PhysRevB.31.4234,PhysRevLett.58.1648}. In these early studies the interatomic interactions were described with empirical potentials (otherwise known as forcefields). However, with the emergence of density functional theory (DFT), \textit{ab initio} based path integral approaches became possible \cite{Marx-Parr_1994,Tuckerman-Marx_1996}. Early DFT-based PIMD studies appeared in the 1990s with exciting applications to e.g. water and ice \cite{Marx1998,Tuckerman817,Parrinello_2003}. Over the following 20 years or so, numerous algorithmic and computational advances served to make the path integral methodology robust and computationally tractable. This has included important work on quantum dynamics within the path-integral framework, with e.g. centroid molecular dynamics (CMD) \cite{CMD,voth_rigorous_1989,MARX1999166}, ring-polymer molecular dynamics (RPMD) \cite{craig_refined_2005,habershon_ring-polymer_2013,trpmd}, and the combination of path integral methods with electronic structure methods beyond DFT \cite{PI_QMC_1,doi:10.1063/1.4941091,Tachikawa_2013,C4CP05192K}. Developments have also been made in combining the path integral framework with other quantum theories, for example instanton theory \cite{miller_semiclassical_1975,richardson_ring-polymer_2009,Jonsson_2009,Kastner_2014,Inst_persp}.
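The classical ring-polymer isomorphism that underpins PIMD and PIMC can be illustrated with a deliberately minimal example: Metropolis sampling of the primitive ring-polymer distribution for a one-dimensional harmonic oscillator, compared against the exact quantum $\langle x^{2}\rangle$. This is a toy sketch in reduced units ($\hbar = m = \omega = 1$), not a stand-in for the production codes cited above; the bead number, temperature, and move sizes are illustrative choices.

```python
import math
import random

random.seed(7)

# Reduced units: hbar = m = omega = 1.
BETA = 5.0              # inverse temperature (fairly quantum regime)
P = 32                  # number of ring-polymer beads
K_SPRING = P / BETA**2  # inter-bead spring constant m*P/(beta*hbar)^2

def delta_ring_energy(x, j, xnew):
    """Change in the ring-polymer 'potential' when bead j moves to xnew."""
    left, right = x[(j - 1) % P], x[(j + 1) % P]
    d_spring = 0.5 * K_SPRING * (
        (xnew - left) ** 2 + (xnew - right) ** 2
        - (x[j] - left) ** 2 - (x[j] - right) ** 2
    )
    d_pot = 0.5 * (xnew**2 - x[j] ** 2) / P  # V(x)/P with V(x) = x^2/2
    return d_spring + d_pot

x = [0.0] * P
samples = []
n_sweeps, n_burn = 30000, 3000
for sweep in range(n_sweeps):
    for j in range(P):  # single-bead Metropolis moves
        xnew = x[j] + random.uniform(-0.8, 0.8)
        dU = delta_ring_energy(x, j, xnew)
        if dU <= 0.0 or random.random() < math.exp(-BETA * dU):
            x[j] = xnew
    if sweep >= n_burn:
        samples.append(sum(xi * xi for xi in x) / P)

x2_pimc = sum(samples) / len(samples)
x2_exact = 0.5 / math.tanh(0.5 * BETA)  # (hbar/2 m omega) coth(beta hbar omega/2)
x2_classical = 1.0 / BETA               # equipartition limit
print(f"PIMC <x^2> = {x2_pimc:.3f}, exact quantum = {x2_exact:.3f}, "
      f"classical = {x2_classical:.3f}")
```

Even this crude sampler reproduces the quantum result ($\langle x^{2}\rangle \approx 0.51$) rather than the classical one ($0.2$), which is precisely the zero-point broadening that the path-integral machinery captures.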
Indeed, over the years increasingly complex systems have been explored \cite{Klein2003,XZLi_2011,tuckerman_nature_2002,doi:10.1021/jp810590c,Wei_rev_2016,doi:10.1021/acs.jpclett.7b00979,PhysRevLett.120.225901,rossi_nuclear_2016,markland_nuclear_2018} and in some respects it is now a ``golden era'' for investigating NQEs with path integral methods.

In this brief review, we highlight some key recent findings on the role and importance of NQEs in H containing systems. Although work from other groups is discussed, this review focuses heavily on work carried out by the authors. As such it is not intended as a comprehensive overview of the field; for such reviews the interested reader is referred to refs. \cite{Marx_Tuckerman_2010,Wei_rev_2016,sremarks,markland_nuclear_2018}. Three prominent topics where NQEs manifest in interesting ways under experimental or real world conditions are covered: the phase diagram of hydrogen; the adsorption and diffusion of hydrogen on surfaces and 2D materials; and the structure and stability of H-bonded systems.

\section{Quantum nature of condensed phase hydrogen}
Hydrogen, when condensed under megabar pressures, exhibits extremely complex solid and liquid phase transitions, accompanied by intriguing physics such as metallisation, superconductivity, and superfluidity. Exploring the phase diagram of hydrogen, therefore, is one of the major (and most heavily debated) topics in condensed matter physics \cite{dias_observation_2017, silvera_response_2017, liu_comment_2017, goncharov_comment_2017}. Fig. \ref{figure_phase-diagram} illustrates schematically that condensed phase hydrogen can broadly be classified into four regimes, entailing molecular solid(s), molecular liquid(s), atomic solid(s), and atomic liquid(s). The boundaries between the various condensed phases depend sensitively on the relative free energies of the different phases, on which NQEs such as zero point energy (ZPE) and quantum delocalisation may have a significant impact.
For example, it has been estimated that the difference in the ZPE between different phases can be as large as $\sim$10 meV per atom \cite{pickard_structure_2007,drummond_quantum_2015}. This is enough to alter the relative thermodynamic stability of competing structures and can significantly shift the pressure-temperature phase boundaries by hundreds of kelvin or tens of gigapascal \cite{pickard_structure_2007,drummond_quantum_2015}. \begin{figure}[!ht] \includegraphics[width=10cm]{Hydrogen-phase-diagram.eps} \centering \caption{Schematic cartoon illustration of four broad types of phase for condensed phase hydrogen, namely a molecular solid, a molecular liquid, an atomic solid and an atomic liquid. The molecular solid is composed of $\text{H}_2$ molecules on crystal lattice sites, and depending on the solid phase the molecules are either orientationally ordered (e.g. phase II and III) or disordered (e.g. phase I). Upon heating, the crystalline molecular solid melts into a molecular liquid. The $\text{H}_2$ molecules in both the molecular solid and the liquid have been predicted to dissociate when compressed, and could transform into either an atomic solid or an atomic liquid. } \label{figure_phase-diagram} \end{figure} In experiments in this area NQEs are often indirectly probed by examining H/D isotope effects with e.g. vibrational spectroscopies in diamond anvil cells. H/D isotopic effects have been discussed in the four crystal structures of the molecular solid observed so far, namely phases I, II, III and IV. These studies have shown that the phase I/II boundary has a strong isotope dependence, whereas the phase II/III boundary is almost identical for H and D \cite{silvera_new_1981, lorenzana_evidence_1989, lorenzana_orientational_1990, cui_megabar_1994, mazin_quantum_1997, goncharov_invariant_1995, goncharov_raman_1996,PhysRevLett.119.065301}. 
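The $\sim$10 meV per atom ZPE differences mentioned above are easily put on a temperature scale; the following back-of-the-envelope check uses nothing but the value of the Boltzmann constant:

```python
K_B_MEV = 8.617333262e-2  # Boltzmann constant, meV/K

delta_zpe = 10.0  # ZPE difference between competing phases, meV per atom (from the text)
print(f"{delta_zpe:.0f} meV per atom ~ k_B * {delta_zpe / K_B_MEV:.0f} K")
```

A free energy difference of this size rivals $k_{B}T$ only above roughly 100 K, which makes phase boundary shifts of order hundreds of kelvin entirely plausible.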
In phase I, the $\text{H}_2$ molecules rotate freely, following the quantum rotational partition function \cite{mcmahon_properties_2012}, whereas in phase II the hydrogen molecules are preferentially aligned on their crystalline lattice sites \cite{goncharenko_neutron_2005}. With the discovery of phase IV of solid hydrogen, isotope effects have now been explored up to the 300 GPa regime \cite{eremets_conductive_2011,howie_mixed_2012,howie_proton_2012, zha_synchrotron_2012, loubeyre_hydrogen_2013, eremets_infrared_2013,zha_high-pressure_2013, zha_raman_2014, howie_raman_2015, dalladay-simpson_evidence_2016}. Phase IV consists of alternating orientationally disordered molecular layers and atomic layers on a honeycomb lattice. Hence NQEs are also expected to have an impact on the $\text{H}_2$ molecules in phase IV \cite{ackland_bearing_2015}. Room temperature proton tunnelling has also been suggested in phase IV to explain measured Raman data \cite{howie_proton_2012}.

Overall, the studies mentioned above indicate that in certain regimes NQEs can be important. However, a comprehensive picture of NQEs in condensed phase hydrogen is yet to be established. In what follows we review certain aspects of condensed phase hydrogen with a focus on understanding the role of NQEs on: (i) the solid phase boundaries in the molecular solid regime; and (ii) the melting of the atomic solid. NQEs have also been revealed in other regimes of the hydrogen phase diagram \cite{doi:10.1063/1.1893956, morales_nuclear_2013, kang_revealing_2013, kang_nuclear_2014}. Comprehensive reviews and previous studies of the pressure-temperature phase diagram of hydrogen can be found elsewhere, and the interested reader is referred to e.g. refs.
\onlinecite{mao_ultrahigh-pressure_1994, mcmahon_properties_2012, silvera_new_1981, lorenzana_evidence_1989, lorenzana_orientational_1990, cui_megabar_1994, mazin_quantum_1997, goncharov_invariant_1995, goncharov_raman_1996,PhysRevLett.119.065301, eremets_conductive_2011,howie_mixed_2012,howie_proton_2012, zha_synchrotron_2012, loubeyre_hydrogen_2013, eremets_infrared_2013,zha_high-pressure_2013, zha_raman_2014, howie_raman_2015, dalladay-simpson_evidence_2016}.

\subsection{The quantum nature of solid molecular hydrogen}
State of the art \textit{ab initio} path integral molecular dynamics simulations have helped to reveal the role of NQEs in dense hydrogen over the last two decades. (For studies with complementary approaches see e.g. ref. \cite{drummond_quantum_2015}.) Notably, Biermann \textit{et al}. simulated solid hydrogen at 50 K with \textit{ab initio} path integral molecular dynamics. They found that the lattice structure of hydrogen above a pressure of 350 GPa was very diffuse, due to quantum fluctuations of the hydrogen atoms away from their equilibrium lattice positions \cite{biermann_proton_1998,biermann_quantum_1998}. This unexpected fluxionality suggests a structure qualitatively different from the classical picture. Later Kitamura \textit{et al}. reported different rotational order due to NQEs for some of the solid phases (specifically phases I, II and III) \cite{kitamura_quantum_2000}. They found that in phase I the hydrogen molecules rotate easily on their hcp lattice sites and that the orientational order of the molecules in the crystal is smeared out. For phase II the simulations suggested that the hydrogen molecules had a preferred orientational order with $\text{Cmc}2_1$ symmetry. For phase III the rotational order was also suppressed but the averaged positions of the hydrogen molecules suggest phase III has Cmca symmetry. Later Li \textit{et al}.
carried out a comprehensive study using \textit{ab initio} PIMD simulations on the same three low temperature solid hydrogen phases. In these studies van der Waals dispersion forces were also taken into account through the application of a van der Waals inclusive DFT functional \cite{PhysRevB.83.195131,doi:10.1063/1.4754130}. This study showed that the $\text{H}_2$ molecules in phase II, which are orientationally ordered when the nuclei are treated classically, lose their orientational order when NQEs are accounted for, whereas for $\text{D}_2$ the orientational order was maintained (Fig. \ref{figure_XZLi-JPCM-Fig2}). This study also attributed the large isotope effect in the I-II phase transition to a less corrugated potential energy landscape, in which quantum fluctuations play a more significant role. Overall, these simulations significantly improved the agreement between experiment and theory on the phase boundaries of solid hydrogen. They also highlighted the need to account for both van der Waals interactions and NQEs simultaneously when treating the low temperature region of the hydrogen phase diagram. We note that at a similar time Geneste \textit{et al}. also studied the role of NQEs in solid hydrogen, reaching similar conclusions to Li \textit{et al}.

\begin{figure}[!ht]
\includegraphics[width=8cm]{figure2.eps}
\centering
\caption{ NQEs have a significant impact on the phase transition between molecular phases I and II of hydrogen. Trajectories of structures obtained from simulations with classical and quantum nuclei at 80 GPa starting from the phase II (P21/c-24) structure. Yellow balls show the representative configurations of the centroids throughout the course of the simulation. The red rods show the static (geometry-optimized) structure. A conventional hexagonal cell containing 144 atoms was used. Panels (a), (c), (e), and (g) show the z-x plane and panels (b), (d), (f), and (h) show the x-y plane of the hcp lattice.
The four simulations are: (1) MD with classical nuclei at 50 K (panels (a) and (b)), (2) PIMD for deuterium at 50 K (panels (c) and (d)), (3) PIMD for hydrogen at 50 K (panels (e) and (f)), and (4) PIMD for deuterium at 150 K (panels (g) and (h)). In the MD simulation, the anisotropic inter-molecular interaction outweighs the thermal and quantum nuclear fluctuations. Therefore, the molecular rotation is highly restricted. The thermal plus quantum nuclear fluctuations outweigh the anisotropic inter-molecular interactions in the PIMD simulations of hydrogen at 50 K and deuterium at 150 K. Reprinted with permission from ref \cite{li_classical_2013}. Copyright 2013 IOP Publishing. } \label{figure_XZLi-JPCM-Fig2} \end{figure} \subsection{The role of NQEs on the melting of hydrogen} The melting of hydrogen is another aspect of interest, intimately connected with hydrogen superconductivity and superfluidity at high pressures and low temperatures \cite{babaev_superconductor_2004}. It has been suggested that NQEs affect melting, most likely lowering the melting temperature because of quantum fluctuations \cite{bonev_quantum_2004}. It is also expected that such an effect would be stronger for atomic phases of hydrogen than for molecular phases because of the heavier mass and higher melting temperatures of the $\text{H}_2$ solids \cite{bonev_quantum_2004, deemyad_melting_2008}. Since the atomic phases only exist at pressures on the limits of what can be reached experimentally, computer simulation can play a crucial role. With this in mind Chen \textit{et al}. studied the melting of atomic hydrogen in the 500 GPa to 1.2 TPa regime with DFT-based PIMD simulations \cite{chen_quantum_2013}. Interestingly it was found that NQEs significantly reduced the melting temperature from about 300 K to less than 200 K at 500 GPa (Fig. \ref{figure_Chen-Ncomm-Fig2}). At higher pressures the effect is even more pronounced and at 900 GPa and above, the melting temperature drops below 50 K. 
This suggests that in this pressure regime a low temperature quantum metallic liquid state of hydrogen is possible. Later Geng \textit{et al}. also studied the role of NQEs on the melting of high pressure hydrogen, and predicted somewhat smaller NQEs, which lower the melting temperature by 50 to 100 K \cite{geng_lattice_2015, geng_predicted_2016}. Although not yet confirmed in experiments or substantiated with higher level electronic structure theories, these studies reveal that NQEs can have profound consequences on the melting and physics of hydrogen at high pressures.

\begin{figure}[!ht]
\includegraphics[width=8cm]{Chen-Ncomm-Fig2-inset.eps}
\centering
\caption{ NQEs significantly lower the melting temperature of high pressure atomic hydrogen. MD (red) and PIMD (black) label the results considering the H atoms as classical or quantum particles, respectively. Triangles show the upper and lower limits of the melting temperature estimates, with the melting temperature taken as the average (dashed lines). The lowest temperature simulations were performed at 50 K. Therefore, only the upper bounds from PIMD simulations are shown with dashed triangles at 900 GPa and 1200 GPa. Reprinted with permission from ref \cite{chen_quantum_2013}. Copyright 2013 Nature Publishing Group, under the creative commons license 3.0. }
\label{figure_Chen-Ncomm-Fig2}
\end{figure}

Overall, at lower pressure, the phase diagram of condensed hydrogen is relatively well resolved experimentally. Further experimental studies into the vibrations and rotations of hydrogen molecules could shed light on the explicit quantum nature of the nuclei. However, upon increasing pressure, the phase behaviour of hydrogen is under debate, and recent experimental progress suggests that metallic hydrogen at low temperatures is not far away, if not already detected \cite{dias_observation_2017, silvera_response_2017, liu_comment_2017, goncharov_comment_2017}.
The question of NQEs in the metallic transition of hydrogen at low temperatures also remains open. Entering the TPa regime of the phase diagram at low temperatures is a great challenge experimentally, but it is important to get there since many interesting properties are predicted in this regime. At high temperatures evidence of non-negligible NQEs has been obtained, and it is also of interest to further investigate the impact of NQEs on phase transition phenomena near the critical point \cite{li_supercritical_2015}. Theoretically, disagreements between different studies have arisen because of e.g. the choice of exchange-correlation functional in density functional theory; improving the understanding of NQEs with higher-level electronic structure methods, such as quantum Monte Carlo, is therefore desirable.

\section{Hydrogen at surfaces}
The adsorption of hydrogen on, and diffusion of hydrogen across, surfaces is of central importance to disciplines such as surface science, astrophysics and astrochemistry, and heterogeneous catalysis. NQEs can in principle be important to these processes, particularly at the low temperatures of astrochemistry \cite{Interstellar_2013,Interstellar_2}. Here we discuss four different systems we have worked on, with each one illustrating a different aspect of NQEs at surfaces.

\subsection*{A. Adsorption of atomic hydrogen on graphene}
Hydrogen adsorption on sp$^2$-bonded carbon materials is relevant to hydrogen storage, graphene based electronic and spintronic devices, and $\text{H}_{2}$ formation in the interstellar medium \cite{doi:10.1021/la051659r,PhysRevLett.103.016806,Elias610,doi:10.1021/ja804409f,Hgra_PRL,Interstellar_2013,Kastner_2010}. Upon H atom adsorption on sp$^2$-bonded carbon materials, the carbon atom to which the hydrogen bonds transforms to sp$^3$ hybridisation. As a result there is expected to be an energy barrier for the adsorption process.
The nature and height of this energy barrier has been extensively studied, using graphene, graphite, and polycyclic aromatic hydrocarbons as model systems \cite{JELOAICA1999157,doi:10.1063/1.1463399,EPJB_Hadgra,Hornekaer1943,Hgra_PRL,Davidson_2014_1,doi:10.1063/1.4931117,doi:10.1021/acs.jpca.5b12761}. Experimentally, the barrier for H atom adsorption on graphite was placed within a broad range of 25 to 250 meV by means of adsorption and abstraction experiments using H atoms with varying kinetic energies \cite{doi:10.1063/1.3518981}. Early theoretical studies estimated the barrier to be $\sim$0.2 eV \cite{JELOAICA1999157,doi:10.1063/1.1463399,EPJB_Hadgra}; a barrier of this height would mean that, e.g., at the low temperatures of the interstellar medium H atoms would have insufficient thermal energy to adsorb on sp$^2$ carbon substrates. This, in turn, has implications for the mechanism of H$_2$ formation in the interstellar medium. However, the earlier computational studies neglected NQEs and also neglected vdW dispersion forces; both of these effects are likely to influence the chemisorption process.
% Especially, quantum tunnelling have been known to be crucial to the formation of an extraordinary rich variety of molecules in the universe \cite{sims_low-temperature_2013,Interstellar_2013,doi:10.1146/annurev.pc.46.100195.000545,Kastner_2010}.
In a recent study \cite{Davidson_2014_1}, some of us investigated in detail the role of both vdW interactions and NQEs in the H adsorption process on graphene at cryogenic temperatures to understand this process under interstellar conditions. The key findings are summarised in Fig.~\ref{H-ads}. To cut a long series of simulations short, the main conclusions were: (i) vdW interactions make the chemisorption barrier slightly smaller and narrower, reducing it from 200 meV to 175 meV with the specific vdW-inclusive DFT treatment employed; and (ii) thermal effects, but mostly NQEs, reduce the barrier further to $ca.$ 100 meV at 50 K.
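To see what such barrier reductions mean kinetically, one can compare simple Boltzmann barrier-crossing factors at 50 K for the barrier heights quoted above; this is a rough Arrhenius-style sketch that ignores prefactors (and, of course, the tunnelling dynamics that actually produce the quantum free energy barrier):

```python
import math

K_B_MEV = 8.617333262e-2  # Boltzmann constant, meV/K

def boltzmann_factor(barrier_mev, temperature):
    """Classical Arrhenius-style barrier-crossing factor exp(-E_b / k_B T)."""
    return math.exp(-barrier_mev / (K_B_MEV * temperature))

T = 50.0  # K
for label, barrier in [("PBE potential energy barrier", 200.0),
                       ("PBE-D3 potential energy barrier", 175.0),
                       ("quantum free energy barrier", 100.0)]:
    print(f"{label:31s} {barrier:5.0f} meV -> exp(-E/kT) = "
          f"{boltzmann_factor(barrier, T):.1e}")

# Effective enhancement of the quantum (100 meV) over the classical D3 (175 meV) barrier:
enhancement = boltzmann_factor(100.0, T) / boltzmann_factor(175.0, T)
print(f"rate enhancement at 50 K: ~{enhancement:.0e}")
```

Lowering the barrier from 175 to 100 meV boosts the crossing probability at 50 K by roughly seven orders of magnitude, which is why the quantum reduction matters so much for low temperature chemisorption.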
The quantum free energy barrier was calculated with a constrained-centroid PIMD approach \cite{Gillan_H_qtst} and the reduction in the barrier compared to the classical one was attributed to tunnelling of the H atom near the transition state. This can be seen in Fig. \ref{H-ads}b by the delocalisation of the H atom in the vicinity of the transition state. Overall, this study shows that both vdW interactions and NQEs work together in a cooperative manner to dramatically reduce the barrier to the formation of a covalent (C-H) bond; an effect that is likely to apply broadly to many chemical processes. For this specific system it means that low temperature H atom chemisorption on sp$^2$ bonded materials is likely to be much more facile than previously anticipated, implying that chemisorbed H could be much more prevalent under interstellar medium conditions. \begin{figure}[!ht] \includegraphics[width=10cm]{H_ad_gra.eps} \caption{\label{H-ads} NQEs on low temperature H atom adsorption on graphene. (a) Energy profiles calculated with different approaches for the adsorption of a single H atom on graphene. The highest barrier (blue) is the PBE potential energy barrier to chemisorption. Upon including vdW dispersion forces with PBE-D3 (green) the potential energy barrier gets lower and narrower. The PBE-D3 free energy barrier (pink) computed using \textit{ab initio} MD at 50 K does not differ significantly from the underlying potential energy barrier. However, accounting for NQEs with \textit{ab initio} PIMD (black) significantly lowers the free energy barrier. The H atom height above the surface is measured from the surface plane of the graphene sheet prior to chemisorption. (b) Radius of gyration for the path-integral ring-polymer as a function of the H atom distance from the surface. This is decomposed into lateral (x,y) and normal (z) components relative to the surface plane. 
Snapshots are also shown from calculations with the ring-polymer constrained close to the physisorption well at 3.0 \AA, at the transition state at 2.1 \AA, and unconstrained in the chemisorbed state at 1.5 \AA~ above the graphene sheet. The snapshots are an aggregation of bead positions for several hundred molecular dynamics steps. The ring polymer is significantly more delocalised in the transition state region (2.1 \AA), a signature of quantum mechanical tunnelling through the barrier. Reprinted with permission from ref. \cite{Davidson_2014_1}. Copyright 2014 American Chemical Society. } \end{figure} \subsection*{B. Dissociative adsorption of H$_2$ at metal surfaces} Dissociative adsorption of molecular hydrogen plays an important role in a wide variety of chemical and physical processes, such as heterogeneous catalysis, energy storage, and sensing \cite{B718842K,lopez_when_2004,teschner_roles_2008,SAKINTUNA20071121}. In particular, H$_2$ at metal surfaces is central to many processes in heterogeneous catalysis and has been widely studied on well-defined atomically flat metal surfaces for the last 30-40 years. Recently, highly dilute metal alloys (so-called ``single atom alloys'') have emerged as a promising class of materials with unique catalytic functionality \cite{PhysRevLett.103.246102,doi:10.1021/acs.jpclett.8b01888,doi:10.1021/acscatal.8b00881,doi:10.1038/nchem.2915}. In particular, by doping a reactive transition metal (e.g. Pd, Pt, Ni) into a relatively unreactive (Cu, Ag, Au) host, they offer great potential for highly selective hydrogenation reactions. With this in mind, a detailed experimental and computational study of H$_2$ dissociation on a single atom alloy surface of Pd/Cu was recently performed \cite{Davidson_2014_2}. Interestingly, it was found experimentally that, as the temperature was lowered to cryogenic temperatures, the rate of H$_2$ dissociative adsorption \textit{increased}.
Complementary measurements of D$_2$ dissociative adsorption showed more conventional behaviour, with the rate of adsorption decreasing as the temperature was lowered. The application of DFT and path integral based techniques proved to be very helpful in understanding this system; the key results are shown in Fig.~\ref{H2_ad_metal}, from which it can be seen that at the classical level the barrier to H$_2$ dissociative adsorption is fairly large (0.4 eV). This is too large to enable facile H$_2$ adsorption at low temperatures. However, when NQEs are taken into consideration, the effective quantum free energy barrier for H$_2$ dissociation drops significantly as the temperature is lowered below 250 K. Although the D$_2$ barrier also drops with temperature, the effect kicks in at a much lower temperature ($ca.$ 150 K). Analysis again reveals that the origin of the barrier reduction is quantum mechanical tunnelling (Fig.~\ref{H2_ad_metal}c). Interestingly, the calculations also suggest that tunnelling enables a different dissociation mechanism at very low temperatures (below 80 K for H$_2$, 50 K for D$_2$), where incident molecules at the Pd site can undergo barrierless dissociation, avoiding the physisorbed state in which they would otherwise be trapped. Note that at the higher temperatures of many existing catalytic processes we do not expect NQEs to play a significant role. However, examining these effects in other systems may uncover temperature regimes at room temperature and below where quantum effects can be harnessed, yielding better control of bond-breaking processes at surfaces and uncovering useful chemical properties such as selective bond activation or isotope separation. \begin{figure}[!ht] \includegraphics[width=12cm]{H2_ad_metal.eps} \caption{\label{H2_ad_metal} Quantum tunnelling facilitates H$_2$ dissociative adsorption on metal surfaces at low temperatures.
(a) Potential energy surface for H$_2$ dissociation from the physisorbed state on the Pd/Cu(111) substrate. The total energy of the clean surface and the H$_2$ in the gas phase is used as the energy zero. Insets are top and side views of the initial physisorbed state (H$_2$), the transition state (TS), and the final state (2H). White, pink, and cyan spheres indicate H, Cu, and Pd atoms, respectively. (b) Temperature dependence of the effective quantum energy barrier (relative to the gas phase) obtained from harmonic quantum transition state theory calculations that take into account tunnelling through the chemisorption barrier and zero-point energies. (c) Top and side view snapshots taken from an 85 K \textit{ab initio} path integral molecular dynamics simulation of a single H$_2$ at the classical saddle point for dissociative chemisorption on the Pd/Cu(111) surface. The large spread of the beads along the dissociation reaction coordinate indicates that at this temperature the H$_2$ molecule can tunnel through the dissociation barrier. Reprinted with permission from ref \cite{Davidson_2014_2}. Copyright 2014 American Chemical Society. } \end{figure} \subsection*{C. Diffusion of atomic hydrogen} Once H atoms are adsorbed onto a surface, NQEs can also be important to how they diffuse across it. The role of NQEs in H atom diffusion has been extensively studied on a wide variety of surfaces \cite{Interstellar_2013,martinazzo_hydrogen_2013,DURR201361,Hgra_PRL,JPCC_Hdiff_gra,PhysRevB.79.115429,doi:10.1063/1.5029329,Skulason20121400}. Here, we focus on work that we have recently been involved in to understand H atom diffusion across atomically-flat metal surfaces. Such systems have been widely studied thanks to the development of surface sensitive experimental techniques (see e.g. \cite{lin_diffusion_1991,lauhon_direct_2000,jardine_determination_2010,Ru_2013}). 
Interestingly, experimental measurements of diffusion rates yield qualitatively different temperature dependencies upon moving from one metal surface to the next. Relatively straightforward behaviour is seen on e.g. Pt(111), where, according to helium spin echo measurements \cite{jardine_determination_2010}, the rate drops as the temperature is lowered. On Ru(0001), a gradual transition from Arrhenius behaviour to a temperature-independent (i.e. quantum) regime has been reported \cite{Ru_2013}. However, on Ni(100) \cite{lin_diffusion_1991} and Cu(100) \cite{lauhon_direct_2000}, diffusion rates suddenly become T-independent below a certain temperature, indicating a sharp classical to quantum transition. Theoretical studies of H diffusion on metal surfaces have been useful in helping to understand this behaviour for certain specific experiments \cite{lin_diffusion_1991,lauhon_direct_2000,doi:10.1063/1.479392,ZHANG2011689}. For example, the sharp transition on Ni(100) and Cu(100) was attributed to the particular shape of the diffusion barrier \cite{mattsson_h_1993,mattsson_isotope_1997,sundell_quantum_2004,sundell_hydrogen_2005,Sundell_2004_2,Skulason20121400}. In a recent study, we set about rationalising this behaviour in general and performed a systematic DFT-based instanton study of H atom diffusion across a range of metal surfaces \cite{Wei_HD_2017}. The key finding was that H atom diffusion on metal surfaces could be categorised into systems possessing either conventional parabolic-shaped barriers (``parabolic-tops'') or systems possessing unusually broad barriers resembling a top hat (``broad-tops'') (Fig.~\ref{barrier}). The unusually broad barriers are found on certain surfaces (see also refs.~\cite{Skulason20121400,doi:10.1063/1.5029329}), partly because of the very large mismatch in size between the small H atom and the relatively large surface atoms of the lattice. Such barriers have also been seen for magnetic transitions \cite{C6FD00136J}.
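The qualitative difference between the two barrier classes can be made concrete with a toy one-dimensional model. The sketch below (in reduced units, with illustrative parameters not taken from ref.~\cite{Wei_HD_2017}) compares the Boltzmann-weighted transmission, i.e. the thermal rate integrand, for a parabolic-top barrier (exact inverted-parabola transmission) and a broad-top barrier modelled crudely as a rectangular barrier; all function names and numbers are our own illustrative choices:

```python
import numpy as np

HBAR = 1.0  # reduced units throughout; all parameters are illustrative

def p_parabolic(E, V0, omega_b):
    """Exact transmission through an inverted-parabola ('parabolic-top') barrier."""
    return 1.0 / (1.0 + np.exp(2.0 * np.pi * (V0 - E) / (HBAR * omega_b)))

def p_broad(E, V0, L, m=1.0):
    """Transmission through a rectangular ('broad-top') barrier of width L."""
    E = np.asarray(E, dtype=float)
    kappa = np.sqrt(2.0 * m * np.clip(V0 - E, 1e-12, None)) / HBAR
    with np.errstate(over="ignore"):
        denom = 1.0 + V0**2 * np.sinh(kappa * L) ** 2 / (4.0 * E * (V0 - E) + 1e-12)
    return np.where(E < V0, 1.0 / denom, 1.0)

def rate_integrand(E, transmission, kT):
    """Boltzmann-weighted transmission; integrating over E gives the rate k(T)."""
    return transmission(E) * np.exp(-E / kT)

# For the parabolic top, the integrand peak slides gradually below the barrier
# top as T drops (classical hopping -> shallow tunnelling -> deep tunnelling).
# For the broad top, the peak stays pinned near the barrier top until a sharp
# crossover, below which deep tunnelling at low E takes over.
E = np.linspace(1e-3, 2.0, 2000)
for kT in (0.30, 0.12, 0.08, 0.04):
    Ip = rate_integrand(E, lambda e: p_parabolic(e, 1.0, 0.5), kT)
    Ib = rate_integrand(E, lambda e: p_broad(e, 1.0, 6.0), kT)
    print(f"kT={kT}: peak(parabolic)={E[np.argmax(Ip)]:.2f}, "
          f"peak(broad)={E[np.argmax(Ib)]:.2f}")
```

The peak positions printed here play the role of the peaks in Fig.~\ref{diffu_cmp}a,b; near the crossover temperature of the broad barrier, the integrand carries distinct hopping ($E\approx V_0$) and deep-tunnelling ($E\ll V_0$) contributions.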
\begin{figure}[!ht] \centering \includegraphics[width=8cm]{barrier_m.eps} \caption{ a) Potential energy barriers for H atom diffusion on metal surfaces can vary considerably from a conventional parabolic shape to an unconventional broad-topped shape. A variety of H diffusion barriers is shown, obtained from nudged elastic band calculations using DFT, for several transition metal surfaces. The filled symbols show data for the conventional barriers that are parabolic near the top, and the open symbols are the data points for broad-top barriers. In (b)-(d) the structures of the various metal surfaces considered are shown: b) Top view of the (100) surface; c) Top view of the (110) surface; d) Top view of the (111) surface. Green arrows show the diffusion paths. Reprinted with permission from ref \cite{Wei_HD_2017}. Copyright 2017 American Physical Society. } \label{barrier} \end{figure} Comparing the thermal rate integrand (integration of which gives the rate at a given temperature) of the two types of barriers (Fig.~\ref{diffu_cmp}), one finds a qualitative difference. With the conventional parabolic-top barriers, hydrogen diffusion evolves gradually from classical hopping to shallow tunnelling to deep tunnelling as the temperature decreases, and noticeable quantum effects persist at moderate temperatures. In contrast, with broad-top barriers quantum effects become important only at the lowest temperatures and the classical to quantum transition is sharp, at which point classical hopping and deep tunnelling both occur (Fig.~\ref{diffu_cmp}b). The unusual behaviour revealed by the simulations has a number of far-reaching implications \cite{Wei_HD_2017}, including a new way of defining the classical to quantum crossover temperature (T$_\text{W}$) and a prediction of the sudden emergence of strong isotope effects around T$_\text{W}$ \cite{Wei_HD_2017}.
These insights are likely to help and guide the interpretation of existing and future experiments on H diffusion on metals, and on quantum tunnelling in chemical systems in general. \begin{figure}[!ht] \includegraphics[width=9cm]{FIG4.eps} \caption{\label{diffu_cmp} The shape of the potential energy barrier strongly influences the transition from classical hopping to quantum tunnelling for H diffusion on metals. In a) and b), the thermal rate integrand is plotted against the incident energy $E$ for examples of the conventional parabolic-top barrier (Ni(111)) and the broad-top barrier (Pd(110)), respectively. c) Illustration of the tunnelling behaviour that the different peak positions of the rate integrand represent, using tunnelling paths described by the Feynman path integral. Adapted with permission from ref \cite{Wei_HD_2017}. Copyright 2017 American Physical Society. } \end{figure} \subsection*{D. Proton penetration of 2D materials} Another topic recently brought into focus is the direct transfer of H atoms or protons through 2D materials such as graphene and h-BN. It was generally assumed that pristine graphene and h-BN were impermeable to H atoms and protons. Indeed, DFT calculations have shown that the barrier for a chemisorbed H or proton to penetrate a pristine graphene sheet is 3.5 eV or more \cite{C3CP52318G,doi:10.1021/acs.jpclett.6b01507,C6CP08923B}. However, recent experiments provide strong evidence that protons can, in fact, penetrate pristine layers of graphene and h-BN \cite{hu_proton_2014}. Based on temperature-dependent proton conductivity measurements, the barriers for the proton penetration process were estimated to be only 0.8 and 0.3 eV for single-layer graphene and h-BN, respectively. In a subsequent study focusing on the role of isotope effects, the penetration rate was reduced by an order of magnitude when the protons (H$^+$) were replaced by deuterons (D$^+$), and the barrier was estimated to increase by about 60 meV \cite{Lozada-Hidalgo68}.
Partly because of these fascinating measurements, considerable theoretical effort has gone into understanding the microscopic details of how protons penetrate graphene and h-BN and the role of NQEs in the process \cite{doi:10.1021/acs.jpclett.6b01507,doi:10.1021/acs.jpcc.7b08152,Tka_tunneling_2016,doi:10.1021/acs.jpclett.7b02820}. In work that we were involved in, we used DFT-based PIMD to examine the proton penetration process \cite{doi:10.1021/acs.jpclett.7b02820}. With the specific DFT functional and model system employed, we found that the classical $\text{H}^+$ penetration barrier was approximately 3.5 eV, in line with earlier studies. When NQEs were accounted for, the penetration barrier for $\text{H}^+$ on graphene was reduced by 0.46 eV ($12\%$) at 300 K, and for $\text{D}^+$ the reduction was smaller. The reduction in the free-energy barrier is due to ZPE effects and quantum tunnelling near the transition state (as shown in the insets of Fig.~\ref{proton_pene}, the beads are more delocalised near the TS compared with the initial state). Although the role of NQEs is far from negligible, NQEs alone do not account for the enormous discrepancy between the computed and experimental proton penetration barriers. Instead, we suggested in ref. \cite{doi:10.1021/acs.jpclett.7b02820} that hydrogenation of the graphene (and h-BN) plays a critical role. In particular, hydrogenation can destabilise the low-lying chemisorbed initial state and slightly expand the hexagonal lattice through which the proton penetrates. \begin{figure}[!ht] \includegraphics[width=12cm]{proton_pene.eps} \caption{\label{proton_pene} Quantum contributions and isotope effects for proton penetration of pristine graphene. (a) Free-energy profiles at 300 K obtained with \textit{ab initio} constrained MD and PIMD simulations for proton transfer across a graphene sheet in the presence of water molecules. PIMD simulation snapshots for the initial state and transition state are also shown.
Blue (red and pink) balls represent the beads of protons (O and H atoms) for one snapshot in a PIMD simulation. The centroids of the C atoms are shown as brown balls. (b) Zoom-in view of the free energy profiles shown in (a) close to the transition state, for both proton and deuteron penetration. Reprinted with permission from ref \cite{doi:10.1021/acs.jpclett.7b02820}. Copyright 2017 American Chemical Society. } \end{figure} \section{The role of NQEs on hydrogen bonds} Hydrogen is, of course, also present in hydrogen-bonded systems, and such systems are essential to life, underpinning the binding in the condensed phases of water, the structure of many biomolecules, and the cohesion of molecular crystals. In fact, several early studies on NQEs with PIMD were on H-bonded systems such as water and ice. See refs. \cite{Marx_waterrev_2006,doi:10.1021/jp810590c,Marx_Tuckerman_2010,Wei_rev_2016} for reviews of this work. Here, we go beyond aqueous systems towards biomolecules, organic crystals, and HBs at interfaces. We focus on two specific aspects: (A) quantum delocalisation in HBs; and (B) how NQEs impact upon the strength of HBs. \begin{figure}[!ht] \includegraphics[width=8cm]{FIG5.eps} \caption{\label{Qdelocal} NQEs can strongly influence H-bonded systems, as shown here for two distinct types of system. (a) Illustration of a HB and the order parameter $\delta$ for describing the H position in a HB. (b) Snapshots for typical spatial configurations of an overlayer of water and hydroxyl molecules on Pt (left), Ru (middle), and Ni (right) obtained from PIMD. (c) Free energy profiles ($\Delta F$) for H atom transfer from water to hydroxyl along the intermolecular axes for the systems shown in (b) on Pt (left), Ru (middle), and Ni (right) at 160 K, obtained from MD and PIMD. Reprinted with permission from ref \cite{water_metal_NQEs}. Copyright 2010 American Physical Society.
(d) Crystal structure of squaric acid, illustrating the difference between antiferroelectric (AFE, left) and paraelectric (PE, right) ordering. (e) Probability distributions of the $\delta$ parameter for H$_2$SQ (left) and D$_2$SQ (right) from PIMD, and (inset in right) from MD. In the MD simulations the nuclei are treated as classical particles; in the PIMD simulations they are quantum. All simulations have been performed at the DFT level of theory. Adapted with permission from ref \cite{doi:10.1063/1.4862740}. Copyright 2014 AIP Publishing. } \end{figure} \subsection*{A. Quantum delocalisation in hydrogen bonds} As noted in the introduction, the quantum nature of hydrogen means that it does not behave like a point-like particle. A key question has been to what extent hydrogens are delocalised in space when incorporated in HBs and how this varies from system to system. In a traditional HB the hydrogen is covalently bonded to one electronegative element and H-bonded to a second, such as the OH-O bond that holds two water molecules together. A HB such as this is asymmetric and conveniently characterised with an order parameter $\delta=|\textbf{R}_1|-|\textbf{R}_2|$ as illustrated in Fig. \ref{Qdelocal}a. Applying pressure to the HB, as done in high pressure studies of e.g. ice, can leave the proton shared symmetrically between the two oxygens it is bonded to \cite{Marx1998,doi:10.1063/1.4818875,doi:10.1063/1.465467,doi:10.1063/1.1677221,Goncharov218}. For ice the intermolecular separation was controlled by pressure. However, water on metals represents another class of systems where variable intermolecular separations can be found. In these systems the intermolecular separation between the adsorbed molecules is dictated to a large extent by the spacing between the substrate lattice sites \cite{Morgenstern_2007_1,Carrasco_2012_1,Salmeron_2015,doi:10.1021/acs.chemrev.6b00045}. One particular class of adsorption structures, comprised of a mixture of water and hydroxyl, forms across many metal surfaces.
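As an aside, the order parameter $\delta$ defined above is straightforward to evaluate from simulation snapshots. A minimal sketch (the function name and the O--H$\cdots$O geometry below are illustrative, not data from the cited studies):

```python
import numpy as np

def hb_delta(donor, h, acceptor):
    """Proton-transfer coordinate delta = |R1| - |R2| for a D-H...A hydrogen bond.

    R1 is the covalent donor-H vector and R2 the H...acceptor vector, so
    delta < 0 places the proton on the donor side and delta ~ 0 corresponds
    to a symmetrically shared proton.
    """
    donor, h, acceptor = (np.asarray(x, dtype=float) for x in (donor, h, acceptor))
    return np.linalg.norm(h - donor) - np.linalg.norm(acceptor - h)

# Illustrative O-H...O geometries (angstroms): a typical asymmetric HB ...
print(hb_delta([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.8, 0.0, 0.0]))  # ~ -0.8
# ... and a symmetrically shared proton
print(hb_delta([0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [2.8, 0.0, 0.0]))  # ~ 0.0
```

In a PIMD simulation one would accumulate $\delta$ over beads and snapshots to build probability distributions like those in Fig.~\ref{Qdelocal}e.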
Upon moving from one surface to the next, the key difference in the adsorption structure is that the intermolecular separation changes and is tuned by the lattice constant of the underlying substrate \cite{doi:10.1021/ja003576x,PhysRevB.69.113404}. Some years ago, we performed a series of DFT-based PIMD simulations on such systems and identified surprisingly pronounced NQEs \cite{water_metal_NQEs}. Specifically, we found that metal substrates, by templating the water overlayer, could shorten the corresponding HBs (in analogy to ice under high pressure). Depending on the substrate and the intermolecular separations it imposes, the traditional distinction between covalent bonds and HBs is partially or almost entirely lost (Fig. \ref{Qdelocal}b). Such a picture only emerges with a quantum treatment of the system; treating the nuclei classically preserves the traditional picture because of large classical free energy barriers for proton hopping (Fig. \ref{Qdelocal}c). Soon after the theoretical prediction of strong quantum delocalisation in such systems, direct experimental evidence was obtained from STM for a closely related system \cite{PhysRevB.81.045402}. However, more experiments are needed on a broader range of systems to test the theoretical predictions of ref. \cite{water_metal_NQEs}. Indeed, it seems to us that interfacial water is an excellent model system for probing the delicate connection between HB length and quantum delocalisation, and studying systems such as these is likely to be a fruitful area of future research. Quantum delocalisation is also key to order-disorder transitions in hydrogen-bonded crystals. This includes high pressure phases of ice \cite{Marx1998,HERRERO2015125,doi:10.1063/1.4818875} and certain organic crystals where the order-disorder transition is connected with an (anti-)ferroelectric to paraelectric transition. One system of particular interest that we have looked at recently is squaric acid \cite{doi:10.1063/1.4862740}.
In particular, we focused on understanding the antiferroelectric to paraelectric transition and its isotope dependence (Fig.~\ref{Qdelocal}d). Interestingly, in agreement with experiment, we were able to show that quantum delocalisation and concerted tunnelling resulted in a $ca.$ 200 K difference in the order-disorder transition temperature between the hydrogenated and deuterated crystals (Fig.~\ref{Qdelocal}e). Specifically, as shown in Fig.~\ref{Qdelocal}e, the HBs become disordered (paraelectric) in H$_2$SQ at 200 K and above, while in D$_2$SQ the HBs are ordered up to 400 K. NQEs also explain the superconducting transition in a recently discovered high temperature superconductor, solid H$_3$S. Calculations accounting for NQEs correctly predict the phase transition from an asymmetric to a symmetric proton phase, reproducing experimental H/D isotope effects in the superconducting transition temperature \cite{errea_quantum_2016}. \subsection*{B. NQEs on the structure and strength of hydrogen bonds} Regardless of the position of hydrogen within a HB, NQEs can influence the structure of H-bonded systems. This geometric effect is sometimes known as the Ubbelohde effect \cite{Ubbelohde}, where replacing H with D can change the heavy atom separation in a HB, and in H-bonded molecular crystals consequently alter their lattice constants. Model studies of the Ubbelohde effect have been performed \cite{McKenzie2012,McKenzie_2014}, and a few years ago some of us carried out an \textit{ab initio} PIMD study of NQEs on a range of H-bonded clusters and crystals \cite{XZLi_2011}. The key conclusion of our work is that NQEs make strong HBs shorter and weak HBs longer. This observation was explained by generalising earlier observations of Manolopoulos and co-workers \cite{Habershon_2009,C1CP21520E,Markland_2012} of an effect that is now known as the competing quantum effects (CQEs) picture.
The CQEs picture has been very successful in understanding and reproducing experimental H/D isotope fractionation ratios between liquid water and its vapour \cite{Markland_2012,Ceriotti_2013_1,LWang_2014}, and has been reviewed in detail recently \cite{Wei_rev_2016}. In short, it arises out of a balance of effects: quantum nuclear motion along the HB direction that strengthens the bond, versus quantum fluctuations out of the plane of the HB that weaken it. Thanks to this competition, for certain systems (such as liquid water), simulations with classical nuclei can give comparable results to those obtained with quantum nuclei \cite{Wei_rev_2016,doi:10.1063/1.4907554,rossi_nuclear_2016,doi:10.1021/acs.jpclett.7b00979}. In fact, the balance of the CQEs in liquid water is so delicate that even the choice of the electronic structure approach (or force field) can lead to qualitatively opposite results on which of the two CQEs dominates \cite{Parrinello_2003,Morrone_2008,water_ordf}. Temperature can also change the balance between the competing effects \cite{Markland_2012,Wei_BP_BFE}. The existence of CQEs has been supported by several experimental investigations \cite{Romanelli_2013,DINS_3}, specifically deep inelastic neutron scattering (DINS) experiments on water \cite{Romanelli_2013,DINS_3,Cheng_2016} and STM experiments for water on NaCl \cite{Guo321}. Nonetheless, most previous work has focused on geometrical properties, while direct information on how and to what extent NQEs influence the strengths of HBs has been lacking. Recent developments allow one to estimate this quantitatively in computer simulations for materials and molecular systems of moderate size, through a combination of PIMD and thermodynamic integration using mass as the order parameter \cite{Mass_2,Wei_BP_BFE,Rossi_2015}.
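To illustrate the mass-based thermodynamic integration route, note that since the nuclear mass enters only through the kinetic term, $\partial F/\partial m = -\langle T_{\rm kin}\rangle/m$, so the H$\to$D free energy change follows from integrating the quantum kinetic energy (in PIMD, obtained from a centroid-virial estimator) over the mass. The sketch below uses a single harmonic mode as a stand-in for the PIMD estimator, so the result can be checked against the analytic free energy; all names and parameters are our own illustrative choices, in reduced units:

```python
import numpy as np

HBAR = KB = 1.0  # reduced units; parameters below are illustrative

def t_kin_harmonic(m, k_spring, T):
    """Quantum kinetic energy of a 1D harmonic mode.

    In a real calculation this would be the PIMD centroid-virial kinetic
    energy estimator; here the analytic harmonic result is a stand-in.
    """
    w = np.sqrt(k_spring / m)
    return 0.25 * HBAR * w / np.tanh(0.5 * HBAR * w / (KB * T))

def f_harmonic(m, k_spring, T):
    """Analytic quantum free energy of the same mode, used for validation."""
    w = np.sqrt(k_spring / m)
    return 0.5 * HBAR * w + KB * T * np.log(1.0 - np.exp(-HBAR * w / (KB * T)))

def delta_f_mass_ti(m1, m2, k_spring, T, n=2001):
    """Thermodynamic integration of dF/dm = -<T_kin>/m from mass m1 to m2."""
    m = np.linspace(m1, m2, n)
    g = -t_kin_harmonic(m, k_spring, T) / m
    dm = (m2 - m1) / (n - 1)
    return dm * (g.sum() - 0.5 * (g[0] + g[-1]))  # trapezoidal rule

# "H -> D": doubling the mass of one mode at low temperature
ti = delta_f_mass_ti(1.0, 2.0, k_spring=4.0, T=0.1)
exact = f_harmonic(2.0, 4.0, 0.1) - f_harmonic(1.0, 4.0, 0.1)
print(ti, exact)  # both are approximately -0.2929 (dominated by the ZPE change)
```

In an actual application, $\langle T_{\rm kin}\rangle$ at each intermediate mass comes from a separate PIMD run, and the difference between the bound and unbound (or H and D) integrals gives the NQE contribution to the binding free energy.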
Note that there are different ways to combine thermodynamic integration with PIMD \cite{Morales_Singer,DaanFrenkel_book,doi:10.1063/1.2966006,doi:10.1002/jcc.21070,scaled_coordinate}, with early developments dating back to 1991 \cite{Morales_Singer}. Fig.~\ref{BFE} summarises recent quantitative determinations of the impact of NQEs on HB strength for several systems important to everyday life, namely DNA base pairs, a peptide, and paracetamol. It can be seen from Fig.~\ref{BFE} that the absolute influence on the stability of the various materials is small, at 10 meV or less per HB. However, NQEs can act to either strengthen or weaken the HBs and can exhibit interesting temperature dependencies. Taking the DNA base pairs as an example, these were examined at room temperature and at a cryogenic temperature (100 K) \cite{Wei_BP_BFE}. It was found that NQEs stabilise the base pairs at room temperature, while, counterintuitively, the influence of NQEs was smaller at cryogenic temperatures than it was at room temperature. This was rationalised, as with other systems, in terms of a competition of NQEs between low-frequency and high-frequency vibrational modes. Upon forming a HB, certain high-frequency (covalent bond stretching) modes are softened, reducing the quantum kinetic energy and hence stabilising the system. This stabilisation, however, is offset by the quantum kinetic energy gained when low-frequency modes are hardened or created upon forming the HB. Moving on from DNA to proteins, for stacked polyglutamine (polyQ) strands, a peptide often found in amyloid aggregates, NQEs serve to provide an additional degree of stabilisation \cite{Rossi_2015}. Another important field where quantitative estimates of NQEs are highly desirable is in the assessment of the stability ranking of different polymorphs of molecular crystals, where a small free energy change of 10 meV can make a difference.
A recent \textit{tour de force} study addressing this found that for paracetamol, $\sim$ 20 \% of the free energy difference between its form I and form II comes from NQEs \cite{Rossi_2016_2}. This work also suggests that estimates of NQEs on the binding free energy within the harmonic approximation can be reasonable for molecular crystals. The evaluation of quantum free energy contributions to H-bonded systems is enjoying rapid development, with interesting findings in key systems. Extending these findings, one may ask whether there are more efficient ways or simpler models for estimating the importance of such effects. Recently, we have taken a step in this direction with the presentation of a simple model, based on the CQEs picture, which predicts the temperature dependence of NQEs on the binding strength of a broad range of H-bonded complexes \cite{Wei_BP_BFE}. \begin{figure}[!ht] \includegraphics[width=11cm]{FIG6.eps} \caption{\label{BFE} NQEs can either strengthen or weaken HBs, as illustrated here for the binding free energy change of several important H-bonded organic systems: Watson-Crick base pairs \cite{Wei_BP_BFE}, polyglutamine (polyQ) strands \cite{Rossi_2015}, and paracetamol crystals \cite{Rossi_2016_2}, calculated from \textit{ab initio} PIMD simulations. Structural images are adapted with permission from refs \cite{Wei_BP_BFE,Rossi_2015,Rossi_2016_2}. Copyright: 2016 American Chemical Society; 2015 American Chemical Society; 2016 American Physical Society, respectively. } \end{figure} \section{Outlook and future challenges} To sum up, we have discussed examples where NQEs are important and interesting. From solid hydrogen to hydrogen on surfaces to H-bonded systems, NQEs can have a qualitative and quantitative impact on the physicochemical properties of materials. This is especially true at cryogenic temperatures. However, we have also shown examples where such effects can be important at room temperature.
This review has not covered all types of systems where NQEs are important, but rather gives a flavour of the systems that have piqued our interest over the last few years, with a focus on hydrogen-containing systems. As noted in the introduction, it is an exciting and thriving time for the field due to the development of relatively fast, efficient, and accurate simulation approaches for treating NQEs as well as the development of complementary experimental techniques. However, there are, of course, many outstanding challenges for simulation techniques in this area. Some of these have been discussed elsewhere \cite{Wei_rev_2016,markland_nuclear_2018,Inst_persp,habershon_ring-polymer_2013}, but the issues that we feel are of particular importance are: (i) \textit{Accuracy of the underlying electronic structure theory} -- The results discussed in this brief review have come mainly from DFT-based approaches. DFT is the most widely used electronic structure method for the simulation of materials. However, it has a number of well-documented shortcomings, many of which are directly relevant to the types of systems discussed here. For example, HB strengths, physisorption on surfaces, and covalent bond breaking processes are all sensitive to the choice of the exchange-correlation functional used (see e.g. \cite{Gillan_DFT_water,doi:10.1063/1.4704546,doi:10.1063/1.4869598,PhysRevB.94.115144,doi:10.1063/1.4754130}). Given that the emergence of NQEs can often depend on a subtle balance of competing effects, qualitatively different behaviour can be observed with different exchange-correlation functionals \cite{XZLi_2011,water_ordf,chen_room-temperature_2014}. Thus, one always has to take care that the exchange-correlation functional used is suitable for the problem at hand.
More broadly, combining path integral methods with higher-level methods such as quantum Monte Carlo and quantum chemistry approaches is clearly an exciting avenue for future research \cite{RevModPhys.73.33,AlaviFCIQMC,mcmahon_properties_2012}. Indeed, given the recent progress with quantum Monte Carlo approaches in terms of accuracy, efficiency, and the calculation of forces, QMC-based PIMD simulations look like a promising way of treating particularly challenging systems \cite{PI_QMC_1,doi:10.1063/1.4917171,Zen201715434,PhysRevB.93.241118}. Methods for dealing with non-adiabatic effects are another area where excellent progress is being made on topics such as understanding NQEs in electron transfer \cite{RICHARDSON2017124,doi:10.1063/1.4932362,doi:10.1021/acs.jpclett.7b01249,doi:10.1063/1.453440,doi:10.1063/1.4863919,doi:10.1002/cphc.201200941}. Other non-adiabatic effects, such as electron-phonon coupling, are important for some problems (e.g. dissociative adsorption, dynamics of adsorbates on surfaces) \cite{doi:10.1080/23746149.2017.1381574,PhysRevLett.68.3444,sundell_quantum_2004,doi:10.1063/1.469915,PhysRevLett.116.217601,WANG200266}, but it remains to be fully understood to what extent they impact upon quantum H atom diffusion and the other topics covered in the review. Indeed, coupling friction and quantum nuclear approaches is an interesting area for future research. (ii) \textit{Connection with experiment} -- Much of the work discussed in this review was motivated by or aimed at explaining interesting experimental results. And, indeed, recent years have seen huge advances in experimental capabilities, from the emergence of deep inelastic neutron scattering, to scanning tunnelling microscopy, to diamond anvil measurements at ever higher pressures.
Given the emergence of such techniques, there is an ever greater need for more efficient and more robust techniques to compute experimental observables measured in scattering and spectroscopic studies, including time-dependent properties \cite{PhysRevLett.105.110602,trpmd,PIGLET,Romanelli_2013,doi:10.1063/1.3675838,doi:10.1021/acs.jpcb.8b03896}. Excellent work in this area is ongoing and more is welcomed. (iii) \textit{System complexity} -- When connecting with experiment, it is challenging to make the simulation model realistic enough that it faithfully mimics the complexity present in the real experimental system. This is especially true when treating defects, for which large simulation cells are required, or liquids, where large cells and sufficient sampling of phase space are required. Although enormous strides have been made in reducing the computational overhead of incorporating NQEs (see review \cite{markland_nuclear_2018}), most studies are still limited by the availability of computational resources. Developing higher-accuracy interatomic potentials with e.g. machine learning approaches \cite{PhysRevLett.104.136403,doi:10.1002/qua.24890} and efficiently combining enhanced sampling techniques with path integral methods are two promising avenues for future study \cite{Cheng_2016,Laude_2018}. With our interest in graphene, as discussed above, we have recently developed a machine learning potential for graphene \cite{PhysRevB.97.054303} that accurately reproduces DFT for a broad range of properties and are currently using this potential to understand the role of NQEs on the vibrational properties of graphene. Addressing these and other computational challenges will provide interesting work for the future and consequently will help to deepen understanding of the role of NQEs in materials yet further.
Although we believe that great strides have been made in the field in the last decade or so, there is still considerably more to learn about the role of NQEs in the chemistry and physics of hydrogen-rich systems. \begin{acknowledgements} The authors thank Ms X.Y. Jiang for helpful suggestions on the review. J.C. and A.M. are grateful to the Alexander von Humboldt Foundation for a postdoctoral research fellowship and a Bessel Research Award, respectively. A.M. is also supported by the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement number 616121 (HeteroIce project). Y.X.F. and X.Z.L. are supported by the National Basic Research Programs of China under Grant No. 2016YFA0300900, the National Science Foundation of China under Grant Nos. 11774003, 11604092, and 11634001, and the high-performance computing platform of Peking University. The collaboration between UCL and Peking University was supported in part by UCL's Global Engagement Fund. \end{acknowledgements}
\section{Introduction} The subject of this talk has become of considerable current interest, as evidenced by the other presentations at this meeting \cite{wagner,mueller,lipatov} on both experimental and theoretical developments. The main thrust of this one is to argue that, despite a growing tendency for experimentalists and theorists to say otherwise, the presence of a diffractive contribution which at large $Q^2$ and small $x$ ``scales", i.e. comprises a finite fraction (of order 5 to 10 percent) of all events, does not imply that the mechanism which creates this class of events is pointlike. Furthermore, a long-standing mechanism \cite{bj:align} for creating the observed nondiffractive final-state properties (the so-called ``aligned-jet" mechanism) appears quite sufficient to account for the bulk of the experimental evidence on diffractive final states. It is especially appropriate to discuss this at this meeting, the 50th anniversary celebration of ITEP, because for me its origins go back to what I learned in my visits here in the early 1970s. Foremost was the very early observation by Ioffe \cite{ioffe} that, at small $x$ and at large $Q^2$, large longitudinal distances $z$ are important in the spacetime structure of the forward virtual-photon-proton Compton amplitude, the absorptive part of which determines the deep-inelastic structure functions. In the target-proton rest frame the estimate is roughly \begin{equation} z = \frac{1}{M_p~x}\approx \frac{2\nu}{Q^2}\ . \end{equation} Later, Gribov \cite{gribov} used this observation to argue that generalized vector-meson dominance could provide an estimate of the behavior of the structure function in the limit of very small $x$ and very large target (e.g. a nucleus). He used a most straightforward line of reasoning, which, however, led to a paradox, indeed a disaster, because the resultant structure function did not even approximately scale--it was too big at large $Q^2$ by a full power of $Q^2$. 
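Ioffe's estimate, Eq. (1), is easy to evaluate numerically. The following sketch is purely illustrative (it is not part of the original talk, and the sample $x$ values are assumptions chosen to span fixed-target to HERA kinematics); it converts $z = 1/(M_p x)$ to fermis:

```python
# Illustrative evaluation of Ioffe's coherence length, Eq. (1):
#   z = 1/(M_p * x) in natural units, converted to fermis via hbar*c.
# The sample x values below are assumptions, not data from the talk.

HBAR_C = 0.1973   # GeV * fm
M_P = 0.938       # proton mass in GeV

def ioffe_length_fm(x):
    """Longitudinal coherence length z = 1/(M_p x), in fermis."""
    return HBAR_C / (M_P * x)

for x in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"x = {x:.0e}:  z ~ {ioffe_length_fm(x):8.1f} fm")
```

For $x\sim 10^{-3}$ this gives roughly $200$ fm, consistent with the ``hundreds of fermis" quoted below for Gribov's fixed-target picture.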
It is this paradox and its resolution which is the centerpiece of the discussion to follow. \section{Basic physics of $F_2(x,Q^2)$ at small $x$} The familiar parton picture of deep-inelastic scattering is easy to apply at small $x$, especially in a typical HERA laboratory frame of reference. A right-moving electron ``sees" a left-moving wee parton in the extreme-relativistic left-moving proton, and Coulomb-scatters from it with momentum transfer $Q$. The lego-plot picture of the final-state particles is sketched in Fig. \ref{fig1}a; one sees the electron and the struck-quark jet each with transverse momentum $Q$. This Coulomb-scattering picture is accurate in the frame of reference where $\eta = 0$ is chosen to be the rapidity halfway between these two features. Also of interest is the (approximate) location of the initial-state quark before it was struck (the so-called ``hole" fragmentation region \cite{bj:holes}). It is a distance of order $\ell n\, Q$ to the left of the quark jet, because $\ell n\, \theta/2$ has changed by approximately that amount because of the Coulomb scattering. It is a distance $\ell n\, 1/x$ from the leading-proton fragments. It is also convenient, especially theoretically, to view the same process in a collinear virtual-photon proton reference frame (cf. Fig. \ref{fig1}b). In such a frame there are generically no large-$p_T$ jets, at least at the level of the naive, old-fashioned parton model. With QCD, there will be extra gluon initial-state and final-state radiation. Most of this will look like minijet production in collinear reference frames, but occasionally there will be extra genuine gluon jets, especially in the phase-space region between the hole and the leading quark fragments. Note that now the amount of phase-space to the right of the ``hole" region is of order $\ell n\, Q^2$; the extra $\ell n\, Q$ amount of phase space is in the quark jet in HERA reference frames. 
The total phase space is evidently $\ell n\, Q^2 + \ell n\, 1/x = \ell n\, W^2$, as it should be. \vspace{.5cm} \begin{figure}[htbp] \begin{center} \leavevmode \epsfxsize=5in \epsfbox{8117A01.eps} \end{center} \caption[*]{Lego-plot of final-state hadrons in small-$x$ deep inelastic scattering: \hfill\break (a) HERA laboratory frame, and (b) collinear $\gamma^* - p$ reference frames.} \label{fig1} \end{figure} We are now ready to introduce Gribov's paradox. He viewed the same process in the laboratory frame of the nucleon, but considered for simplicity replacement of the nucleon by a large, heavy nucleus of radius $R$. The picture is that first there is the virtual dissociation of the virtual photon into a hadron system upstream of the target hadron. For HERA conditions and Ioffe's estimate of longitudinal distances, this is a distance of hundreds of fermis in this fixed-target reference frame. This virtual dissociation process is followed by just the geometrical absorption of the virtual-hadron system on the nucleus. Gribov used old-fashioned (or in modern terms light-cone) perturbation theory to make the estimate, which gives a simple and utterly transparent result: \begin{equation} \sigma_T = (1 - Z_3) \, \pi R^2 \ . \end{equation} Note that the estimate is for $\sigma_T$, not $F_2$, and that $(1 - Z_3)$, with $Z_3$ the charge renormalization of the photon, is just the probability that the photon is a hadron rather than a bare photon: \begin{equation} 1 - Z_3 = \frac{\alpha}{3\pi} \int \frac{ds~s~R(s)}{(Q^2+s)^2}\approx \frac{\alpha}{3\pi}~\bar{R}~\ell n\, \frac{1}{x}\ , \end{equation} where $R(s)$ is the sum of squared charges of partons, as used in describing the $e^+$-$e^-$ annihilation cross section. So up to logarithmic factors, the result is that $\sigma_T$, not $F_2$, is independent of $Q^2$. Since \begin{equation} F_2 = \frac{Q^2~\sigma_T}{4\pi^2~\alpha}\ , \end{equation} this means the aforementioned scaling violation by an extra power of $Q^2$. 
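The logarithmic behavior on the right-hand side of Eq. (3) can be checked symbolically. The sympy sketch below is an illustrative verification under two stated assumptions (not spelled out in the talk): $R(s)$ is replaced by a constant $\bar{R}$ pulled out of the integral, and the integral is cut off at an upper scale $s_{max}$, with $s_{max}\sim Q^2/x$ then reproducing the $\ell n\, 1/x$ of Eq. (3).

```python
# Check of the logarithmic estimate in Eq. (3): with R(s) ~ const pulled out,
# the remaining integral int_0^{s_max} s ds/(Q^2+s)^2 grows like ln(s_max/Q^2),
# which gives ln(1/x) for a cutoff s_max ~ Q^2/x.
import sympy as sp

s, Q2, smax = sp.symbols("s Q2 smax", positive=True)
integrand = s / (Q2 + s) ** 2

# Verify sympy's antiderivative by differentiating it back.
assert sp.simplify(sp.diff(sp.integrate(integrand, s), s) - integrand) == 0

# The cutoff integral differs from log(smax/Q2) only by an O(1) constant.
F = sp.integrate(integrand, (s, 0, smax))
print(sp.limit(F - sp.log(smax / Q2), smax, sp.oo))  # -> -1
```

Up to that $O(1)$ constant, this is exactly the $\ell n\, 1/x$ growth quoted in Eq. (3).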
Gribov's structure function is much too big (at least at present energies)!! There are (at least) two ways out of the paradox. One way is, in modern jargon, ``color transparency". Typically the virtual photon dissociates into a bare $q- \bar{q}$ system which on arrival at the nucleus is a small color dipole of spatial extent $Q^{-1}$. It can only interact perturbatively with the target via single gluon exchange. And since the cross section goes as the square of the dipole moment, one gets $\sigma_T$ proportional to $Q^2$, as is needed. Note however that the final state morphology is different from what has been given for the naive parton model; it contains two leading jets (in the virtual photon direction) and a recoil-parton jet in the proton direction, all typically with a $p_T$ scale $Q$ (this in the collinear $\gamma^*$-proton frame; cf. Fig. \ref{fig2}). Also the A-dependence for this mechanism is generically $A^1$. The second mechanism is associated with more infrequent configurations, where the $q$ and $\bar{q}$ created by the virtual photon do not have large $p_T$, but are aligned along the virtual-photon beam direction. This clearly leads to one particle (call it the quark $q$) carrying almost all the momentum and the other (the $\bar{Q}$) carrying much less. When the kinematics is worked out, one finds that the typical momentum carried by the ``slow" $\bar{Q}$ is of order $x^{-1}$ GeV. But do note that this is still hundreds of GeV for HERA conditions, in Gribov's fixed-target reference frame. 
\vspace{.5cm} \begin{figure}[htbp] \begin{center} \leavevmode \epsfxsize=5in \epsfbox{8117A02.eps} \end{center} \caption[*]{Lego-plot of final-state hadrons for the ``color-transparency", ``Bethe-Heitler", or ``noncollinear photon-gluon fusion" mechanism at small $x$.} \label{fig2} \end{figure} There is enough time, according to Ioffe's basic estimate, for this ``slow" $\bar{Q}$ to evolve nonperturbatively \cite{bj:springer}, and in particular it will be found at large transverse distances from the ``fast" quark, of order the hadronic size scale. (For example there is enough time and space for a nonperturbative color flux-tube to grow between $q$ and $\bar{Q}$.) So on arrival at the target hadron the hadronic progeny of the virtual photon look something like a B-meson, the fast pointlike $q$ analogous to the pointlike $b$-quark, and the slow structured $\bar{Q}$ looking something like the light constituent antiquark orbiting the $b$. It follows that when this configuration evolves, it should be absorbed geometrically on the target nucleus or nucleon, as assumed by Gribov. But the probability, per incident virtual photon, that this configuration actually occurs is easily worked out to be $({\rm constant})/Q^2$, so the scaling of the structure function $F_2$ is recovered. Also, the final-state structure in collinear reference frames contains no jets, so the parton-model final state structure is recovered. The $\bar{Q}$ final-state fragmentation products are in fact located in the ``hole" fragmentation region already described. An additional expectation is that the A-dependence at small $x$ is $A^{2/3}$, even at large $Q^2$. \section{Phenomenology} I am not conversant with all the details of the phenomenology. However to the best of my knowledge the main features of the data support the ``aligned jet" mechanism for the bulk of the events which build $F_2$ at small $x$. 
In particular, \begin{enumerate} \item The A-dependence of $F_2$ at small $x$ and large $Q^2$ is roughly $A^{2/3}$; shadowing is definitely seen and scales \cite{e665}. \item Leading dijets (in the virtual photon direction) are seen rarely, if at all, when the data is viewed in a collinear $\gamma^*$-proton reference frame. The final-state is ``soft" most of the time \cite{h1}. \item However, in the region of sharp rise of $F_2$, I am not sure that these final-state properties persist as strongly. Certainly the A-dependence is untested there. \end{enumerate} \noindent But in any case, it would appear that the most natural hypothesis is that the data on $F_2$ is dominated by the aligned-jet mechanism at small $x$. \section{Diffraction} With this lengthy prologue, we are now ready to consider the rapidity-gap events seen at HERA. But with these preliminaries the interpretation is very simple and direct. In particular, whenever there is a process where strong absorption occurs (transmission probability small compared to unity), there must be (elastic) shadow scattering. In this case it is the ``slow" $\bar{Q}$ which is the structured, hadronic object which gets absorbed. So we can expect it to be elastically scattered from the proton (or nucleus) as well. What will the final state look like? The $\bar{Q}$ does not emerge unscathed, but will physically separate from the ``fast" quark $q$. So there will be hadronization associated with this color separation, just as in $e^+ - e^-$ annihilation. In the lego plot of the final state, this means that a population of hadrons will be found between the fragmentation region of the quark $q$ and the ``hole" fragmentation region characteristic of the rapidity of the $\bar{Q}$ before-and after-the elastic scattering; the mass of this hadron system is typically of order $Q^2$. Hadrons will {\it not} be found, however, in the rapidity region between the target proton (or nucleus) and the $\bar{Q}$. 
Actually the distribution in the diffracted mass $m$ can be inferred from the Gribov estimate, Eq. 3, because the momentum change of the $\bar{Q}$ due to the elastic scattering is typically so small that the mass of the $q$-$\bar{Q}$ system is not significantly modified. The Gribov distribution associated with $(1 - Z_3)$ is \begin{equation} \frac{dN}{dm^2} = \frac{m^2}{(Q^2~+~m^2)^2}\ . \end{equation} However this should be multiplied by the alignment probability ${\rm (constant)}/m^2$. Experimentalists prefer to use, instead of the diffracted mass, the scaled quantity beta: \begin{equation} \beta = \frac{Q^2}{m^2~+~Q^2} \ . \end{equation} Therefore \begin{equation} \frac{dN}{d\beta} \sim ({\rm const}) \ . \end{equation} A constant beta distribution, as estimated here, is in rough agreement with the data \cite{hi:beta,zeus:beta}, especially given the semiquantitative nature of these arguments. However there does appear to be an excess at small beta (large diffracted mass), which requires an extension of this mechanism such as inelastic diffraction of the constituent quark. The other dependence of relevance is the $W^2$ dependence of the ratio of the diffractive component to the total. It should be (at fixed $Q^2$) the same as the $s$-dependence of $\sigma_{el}/\sigma_{tot}$ for hadron-hadron interactions. Donnachie and Landshoff \cite{dl} successfully fit the total and elastic cross section data with a Pomeron Regge pole, namely a pure power-law dependence of $\sigma_{tot}$. The behavior is $s^{0.08}$. This should also be the case (up to a logarithm associated with the shrinkage of the elastic peak) for $\sigma_{el}/\sigma_{tot}$. The $W^2$ dependence of $F_2$ in this picture (at fixed $Q^2$) should also be $(W^2)^{0.08}$; this number seems on the low side \cite{hi:beta,zeus:beta} but there is still controversy and uncertainty on what the fixed-$Q^2$ exponent really is. 
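The step from Eq. (5) to the flat beta distribution of Eq. (7) is a one-line change of variables; the sympy sketch below (an illustrative check, not part of the original talk) combines Eq. (5) with the alignment probability $\propto 1/m^2$ and substitutes $m^2 = Q^2(1-\beta)/\beta$:

```python
# Verify Eq. (7): dN/dm^2 = [m^2/(Q^2+m^2)^2] * (1/m^2) = 1/(Q^2+m^2)^2
# becomes beta-independent after the change of variables beta = Q^2/(m^2+Q^2).
import sympy as sp

Q2, beta = sp.symbols("Q2 beta", positive=True)

m2 = Q2 * (1 - beta) / beta      # inverting Eq. (6): beta = Q^2/(m^2 + Q^2)
dN_dm2 = 1 / (Q2 + m2) ** 2      # Eq. (5) times the 1/m^2 alignment factor
jac = -sp.diff(m2, beta)         # |dm^2/dbeta|; m^2 decreases with beta

dN_dbeta = sp.simplify(dN_dm2 * jac)
print(dN_dbeta)                  # -> 1/Q2, independent of beta
assert sp.diff(dN_dbeta, beta) == 0
```

The density comes out as $1/Q^2$, i.e. constant in $\beta$, as claimed in Eq. (7).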
In any case, the fraction of rapidity-gap events should not be a strong function of either $Q^2$ or $W^2$. Finally, the absolute magnitude of the ratio of gap/no-gap events, predicted to be $\sigma_{el}(\bar{Q}-p)/\sigma_{tot}(\bar{Q}-p)$, is reasonable \cite{h1,ahmed}---between 5\%\ and 10\%. Omitted in this line of argument, but certainly possible to include, are the contributions of diffraction-dissociation of $\bar{Q}$ and/or target proton/nucleus. Although a year ago \cite{aaaa} I essentially assumed (in the language of this talk) that the $\bar{Q}$ diffraction dissociation might be dominant, this year it seems more unreasonable---especially in the light of the data that appeared in the intervening time. It does seem that excitations of constituent quarks are not seen in spectroscopy, and that may be reflected in the HERA diffractive data. The fraction of rapidity-gap data for which the proton dissociates should be more or less characteristic of the ratio of single dissociation to elastic scattering (25\%\ or so) seen in $p-\bar p$ collisions. This should be soon checked at HERA. It certainly is possible to sharpen this line of thinking and make more crisp predictions \cite{fs}. And the recent ideas of Buchmuller and Hebecker \cite{bh} bear much similarity to this picture. I apologize for not having done more myself. But the bottom line of this line of thinking, worth emphasizing here again, is that the important mechanism for the small-$x$ final-state structure is not to be found within perturbative QCD. It is not short-distance, weak-coupling dynamics that counts, but large-distance, strong-coupling, strong-absorption dynamics that is at the heart of the matter. There need be nothing more pointlike about the mechanism producing the diffractive final states than the mechanism responsible for elastic proton-proton scattering. \section{What about (BFKL) Hard Diffraction?} The first mechanism for the small-$x$ dynamics which was discussed in Section 2, i.e. 
``color-transparency" or ``QCD Bethe-Heitler" (or noncollinear photon-gluon fusion), must at some level also be present. For the reasons already cited, I suspect it is at no more than the 10\%--20\%\ level. But that is only a guess. The best way to isolate it experimentally is via the 3-jet final state morphology exhibited in Fig. \ref{fig2}. This is the seed kernel for building at high energies and fixed $Q^2$ the BFKL $W^2$ dependence \cite{bfkl} via production of extra gluons into the phase space, gluons typically also carrying $p_T$ of order $\sqrt{Q^2}$. The $W^2$ dependence to be expected is much stronger, of order $(W^2)^{0.4}$. Open questions regarding the relevance of this mechanism for HERA include \begin{enumerate} \item whether the normalization of the lowest-order kernel is large enough, \item how much room there is in the available HERA phase space for building up the power-law behavior, and \item whether the scheme is consistent: there exist criticisms regarding ``diffusion into the infrared", as well as claims \cite{cd,vdel,boa,bj} that more careful attention must be paid to energy-conservation constraints within the multi-Regge kinematics. \end{enumerate} \vspace{.5cm} \begin{figure}[htbp] \begin{center} \leavevmode \epsfxsize=5in \epsfbox{8117A03.eps} \end{center} \caption[*]{A log-log sketch of $F_2$ vs $Q^2$ at various fixed values of $W^2$.} \label{fig3} \end{figure} \noindent An observed trend toward the BFKL $W^2$ dependence would clearly be of fundamental importance, implying a new class of nonperturbative, absorptive effects going far beyond those we used in the previous section to interpret existing data. I here make only the most modest suggestion, regarding how to plot the data. I am a firm believer in the importance of searching for the optimum way of presenting data, the way which most directly highlights what is important. My suggestion in this case is to plot log $F_2$ versus log $Q^2$ at fixed $W^2$. 
A sketch of what I mean is shown in Fig. \ref{fig3}. $W^2$ is chosen rather than $x^{-1}$ because there is no longer scaling in the small-$x$ region, and because the nonscaling depends on gluon emission, which probably is more dependent on the amount of available phase space than anything else. (This is certainly the case for BFKL). The logarithmic scales allow a clear view of how the photoproduction limit is approached, and above that limit by some not-so-well-defined factor is the Gribov bound, unmodified by any damping due to color-transparency or aligned-jet configurations. Existing data for not large $W^2$ show a curve of log $F_2$ vs log $Q^2$ which is concave down. If any part of the curve becomes at high $W^2$ concave up, this would be to me a signal for ``BFKL behavior", because it is the only way the Gribov bound can be reached. The important regime for HERA is, as is well-known, moderate $Q^2$ (0.5 GeV$^2$--15 GeV$^2$) at the highest $W^2$ attainable. My own favorite guess \cite{bj:vietri} on how things will turn out is that the curves will remain concave down, but that the $W^2$ dependence of the maximum of $F_2(Q^2, W^2)$ for given $W^2$ will behave more or less like BFKL. \section{Acknowledgment} It has been some time since I last visited ITEP, and as always it has been a most pleasant and stimulating experience. I thank Boris Ioffe and his colleagues for their warm hospitality. \newpage
\section*{Introduction}\label {0} In this article, we continue the work of \cite{AZ,AF} on the classification of finite-dimensional complex pointed Hopf algebras $H$ with $G=G(H)$ non-abelian. We follow the Lifting Method -- see \cite{AS-cambr} for a general reference; in particular, we focus on the problem of determining when the dimension of the Nichols algebra associated with conjugacy classes of $G$ is infinite. The paper is organized as follows. In Section \ref{conventions}, we review some general facts on Nichols algebras corresponding to finite groups. We discuss the notion of \emph{absolutely real} element of a finite group in subsection \ref{absreal}. We then provide generalizations of \cite[Lemma 1.3]{AZ}, a basic tool in \cite{AZ,AF}, see Lemmata \ref{lemagen2} and \ref{lemagen1}. Section 2 is devoted to pointed Hopf algebras with coradical $\mathbb C \an$. We prove that any finite-dimensional complex pointed Hopf algebra $H$ with $G(H)\simeq \aco$ is isomorphic to the group algebra of $\aco$; see Theorem \ref{mainteor}. This is the first finite non-abelian group $G$ such that all pointed Hopf algebras $H$ with $G(H)=G$ are known. We also prove that $\dim \toba({\mathcal O}_{\pi},\rho)=\infty$, for any $\pi$ in $\an$ of odd order, except for $\pi=(1 \, 2 \, 3)$ or $\pi=(1 \, 3 \, 2)$ in $\ac$ -- see Theorem \ref{lemaan}. This last case is particularly interesting. It corresponds to a ``tetrahedron" rack with constant cocycle $\omega \in {\mathbb G}_3-1$. The technique in the present paper does not provide information on the corresponding Nichols algebra. We also give partial results on pointed Hopf algebras with groups $\ac$ and $\A_6$, and on Nichols algebras $\toba({\mathcal O}_{\pi},\rho)$, with $|\pi|$ even. In Section \ref{nichols-dn}, we apply the technique to conjugacy classes in dihedral groups. 
It turns out that it is possible to decide when the associated Nichols algebra is finite-dimensional in all cases except for $M({\mathcal O}_x, \sgn)$ (if $n$ is odd), or $M({\mathcal O}_{x}, \sgn \otimes \sgn)$ or $M({\mathcal O}_{x}, \sgn\otimes \varepsilon)$ or $M({\mathcal O}_{xy}, \sgn \otimes \sgn)$ or $M({\mathcal O}_{xy}, \sgn\otimes \varepsilon)$ (if $n$ is even). See below for undefined notations. We finally observe in Section \ref{nichols-other} that there is no finite-dimensional Hopf algebra with coradical isomorphic to the Hopf algebra $(\mathbb C \mathbb A_5)^J$ discovered in \cite{Ni}, except for $(\mathbb C \mathbb A_5)^J$ itself. \section{Generalities}\label{conventions} For $s\in G$ we denote by $G^s$ the centralizer of $s$ in $G$. If $H$ is a subgroup of $G$ and $s\in H$ we will denote ${\mathcal O}_{s}^{H}$ the conjugacy class of $s$ in $H$. Sometimes we will write in rack notations $x\triangleright y=xyx^{-1}$, $x$, $y\in G$. Also, if $(V,c)$ is a braided vector space, that is $c\in GL(V\otimes V)$ is a solution of the braid equation, then $\toba(V)$ denotes its Nichols algebra. We denote by ${\mathbb G}_n$ the group of $n$-th roots of 1 in $\mathbb C$. \subsection{Preliminaries} Let $G$ be a finite group, ${\mathcal O}$ a conjugacy class of $G$, $s\in {\mathcal O}$ fixed, $\rho$ an irreducible representation of $G^s$, $M({\mathcal O}, \rho)$ the corresponding irreducible Yetter-Drinfeld module. Let $t_1 = s$, \dots, $t_{M}$ be a numeration of ${\mathcal O}$ and let $g_i\in G$ such that $g_i \triangleright s = t_i$ for all $1\le i \le M$. Then $M({\mathcal O}, \rho) = \oplus_{1\le i \le M}g_i\otimes V$. Let $g_iv := g_i\otimes v \in M({\mathcal O},\rho)$, $1\le i \le M$, $v\in V$. If $v\in V$ and $1\le i \le M$, then the coaction and the action of $g\in G$ are given by $$ \delta(g_iv) = t_i\otimes g_iv, \qquad g\cdot (g_iv) = g_j(\gamma\cdot v), $$ where $gg_i = g_j\gamma$, for some $1\le j \le M$ and $\gamma\in G^s$. 
The Yetter-Drinfeld module $M({\mathcal O}, \rho)$ is a braided vector space with braiding given by \begin{equation} \label{yd-braiding} c(g_iv\otimes g_jw) = t_i\cdot(g_jw)\otimes g_iv = g_h(\gamma\cdot v)\otimes g_iv\end{equation} for any $1\le i,j\le M$, $v,w\in V$, where $t_ig_j = g_h\gamma$ for unique $h$, $1\le h \le M$ and $\gamma \in G^s$. Since $s\in Z(G^s)$, the Schur Lemma says that \begin{equation}\label{schur} s \text{ acts by a scalar $q_{ss}$ on } V. \end{equation} Let $G$ be a finite non-abelian group. Let ${\mathcal O}$ be a conjugacy class of $G$ and let $\rho$ be an irreducible representation of the centralizer $G^s$ of a fixed $s\in {\mathcal O}$. Let $M({\mathcal O}, \rho)$ be the irreducible Yetter-Drinfeld module corresponding to $({\mathcal O}, \rho)$ and let $\toba({\mathcal O}, \rho)$ be its Nichols algebra. As explained in \cite{AZ, AF, G1}, we look for a braided subspace $U$ of $M({\mathcal O}, \rho)$ of diagonal type such that the dimension of the Nichols algebra $\toba(U)$ is infinite. This implies that the dimension of $\toba({\mathcal O}, \rho)$ is infinite too. \begin{lema}\label{trivialbraiding} If $W$ is a subspace of $V$ such that $c(W\otimes W) = W\otimes W$ and $\dim \toba(W) =\infty$, then $\dim \toba(V) =\infty$.\qed \end{lema} \bigbreak Recall that a braided vector space $(V,c)$ is of \emph{diagonal type} if there exists a basis $v_1, \dots, v_{\theta}$ of $V$ and non-zero scalars $q_{ij}$, $1\le i,j\le \theta$, such that $c(v_i\otimes v_j) = q_{ij} v_j\otimes v_i$, for all $1\le i,j\le \theta$. A braided vector space $(V,c)$ is of \emph{Cartan type} if it is of diagonal type and there exists $a_{ij} \in {\mathbb Z}$, $-|q_{ii}| < a_{ij} \leq 0$ such that $q_{ij}q_{ji} = q_{ii}^{a_{ij}}$ for all $1\le i\neq j\le \theta$; by $|q_{ii}|$ we mean $\infty$ if $q_{ii}$ is not a root of 1, otherwise it means the order of $q_{ii}$ in the multiplicative group of the units in $\mathbb C$. Set $a_{ii}=2$ for all $1\le i\le \theta$. 
Then $(a_{ij})_{1\le i,j\le \theta}$ is a generalized Cartan matrix. \begin{theorem}\label{cartantype} (\cite[Th. 4]{H1}, see also \cite[Th. 1.1]{AS1}). Let $(V,c)$ be a braided vector space of Cartan type. Then $\dim \toba(V) < \infty$ if and only if the Cartan matrix is of finite type. \qed \end{theorem} We say that $s\in G$ is \emph{real} if it is conjugate to $s^{-1}$; if $s$ is real, then the conjugacy class of $s$ is also said to be \emph{real}. We say that $G$ is \emph{real} if any $s\in G$ is real. The next application of Theorem \ref{cartantype} was given in \cite{AZ}. Let $G$ be a finite group, $s\in G$, ${\mathcal O}$ the conjugacy class of $s$, $\rho: G^s \to GL(V)$ irreducible; $q_{ss}\in \mathbb C^{\times}$ was defined in \eqref{schur}. \begin{lema}\label{odd} Assume that $s$ is real. If $\dim\toba({\mathcal O}, \rho)< \infty$ then $q_{ss} = -1$ and $s$ has even order.\qed \end{lema} If $s^{-1}\neq s$, this is \cite[Lemma 2.2]{AZ}; if $s^2 = \id$ then $q_{ss} = \pm 1$ but $q_{ss} = 1$ is excluded by Lemma \ref{trivialbraiding}. The class of real groups includes finite Coxeter groups. Indeed, all the characters of a finite Coxeter group are real valued, see subsection \ref{absreal} below, and \cite{BG} for $H_4$. Therefore, we have: \begin{theorem}\label{nichols-weyl-impar} Let $G$ be a finite Coxeter group. If $s \in G$ has odd order, then $\dim\toba({\mathcal O}_s, \rho) = \infty$, for any $\rho \in \widehat{G^{s}}$. \qed \end{theorem} \subsection{Absolutely real groups}\label{absreal} Let $G$ be a finite group. We say that $s \in G$ is \emph{absolutely real} if there exists an \emph{involution} $\sigma$ in $G$ such that $\sigma s \sigma=s^{-1}$. If this happens, any element in the conjugacy class of $s$ is absolutely real and we will say that the conjugacy class of $s$ is \emph{absolutely real}. We say that $G$ is absolutely real if any $s\in G$ is so. The finite Coxeter groups are absolutely real. 
Indeed, \begin{itemize} \item[(i)] the dihedral groups are absolutely real, by straightforward computations. \item[(ii)] the Weyl groups of semisimple finite dimensional Lie algebras are absolutely real, by \cite[Th. C (iii), p. 45]{C}. \item[(iii)] $H_3$ is absolutely real, by Proposition \ref{a5-h4-absreal} below. \item[(iv)] $H_4$ is absolutely real, we have checked it using GAP3, \cite{Sch97}. \end{itemize} \begin{obs}\label{sorites-invreal} Let $G$, $H$ be finite groups. We note: \begin{itemize} \item $(s,t) \in G\times H$ is absolutely real iff both $s\in G$ and $t\in H$ are absolutely real. \item $G\times H$ is absolutely real iff both $G$ and $H$ are absolutely real. \item Assume $H$ abelian. Then $H$ is absolutely real iff $H$ has exponent 2, i. e. $H\simeq {\mathbb Z}_2^n$ for some integer $n$. \item If $G$ is absolutely real and $H$ is abelian of exponent 2 then $G\times H$ is absolutely real. \end{itemize} \end{obs} \bigbreak We first discuss when an element of $\an$ is absolutely real. Assume that $\trasp\in\sn$ is of type $(1^{m_1}, 2^{m_2}, \dots, n^{m_n})$. Then $\trasp\in\an$ iff $\displaystyle\sum_{j \, \, \text{even}} m_j$ is even. \begin{lema}\label{an-invreal} (a). If $m_1 \geq 2$, then $\pi$ is absolutely real in $\an$. \noindent (b). If $\displaystyle\sum_{h\in {\mathbb N}} (m_{4h} + m_{4h +3}) $ is even then $\pi$ is absolutely real in $\an$. 
\end{lema} \begin{proof} Let $\tau_j := (1\,2\, \dots \, j)$ for some $j$ and take $$g_j = \begin{cases} (1 \,\,\, j-1) (2\,\,\, j-2) \cdots (k-1\,\,\, k+1), &\text{ if } j = 2k \text{ is even}, \\ (1 \,\,\, j-1) (2 \,\,\, j-2) \cdots (k \,\,\, k+1), &\text{ if } j = 2k + 1 \text{ is odd.} \end{cases} $$ It is easy to see that $g_j \tau_j g_j = \tau_j^{-1}$, $g_j^2= \id$ and $$ \sgn(g_j) = \begin{cases} (-1)^{k-1}, &\text{ if } j = 2k \text{ is even}, \\ (-1)^{k}, &\text{ if } j = 2k + 1 \text{ is odd.} \end{cases} $$ To prove (b), observe that there exists an involution $\sigma\in \sn$ such that $\sigma \pi\sigma = \pi^{-1}$, which is a product of ``translations" of the $g_j$'s. Since the sign of $\sigma$ is $(-1)^{\sum_{h\in {\mathbb N}} (m_{4h} + m_{4h +3})}$, $\sigma \in \an$ iff $\sum_{h\in {\mathbb N}} (m_{4h} + m_{4h +3})$ is even; (b) follows. We prove (a). By assumption there are at least two points fixed by $\pi$, say $n-1$, $n$. By the preceding there exists an involution $\sigma\in \mathbb S_{n-2}$ such that $\sigma \pi\sigma = \pi^{-1}$. If $\sigma\in \mathbb A_{n-2} \subset \an$ we are done, otherwise take $\widetilde \sigma = \sigma (n-1\, n)\in \an$; $\widetilde \sigma$ is an involution and $\widetilde \sigma \pi\widetilde \sigma = \pi^{-1}$. \end{proof} \begin{prop}\label{a5-h4-absreal} The groups $\aco$ and $H_3$ are absolutely real. \end{prop} \begin{proof} The type of $\pi\in \aco$ is either $(1^5)$, $(1^2,3^1)$, $(1,2^2)$ or $(5^1)$; in the first two cases $\pi$ is absolutely real by Lemma \ref{an-invreal} part (a), in the last two by part (b). Since $H_3\simeq \aco \times {\mathbb Z}_2$ (see \cite[Section 2.13]{Hu}), then the Coxeter group is absolutely real by Remark \ref{sorites-invreal}. \end{proof} \subsection{Generalizations of Lemma \ref{odd}} The next two Lemmata are variations of \cite[Lemma 2.2]{AZ}. A result in the same spirit appears in \cite{FGV}. 
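As a computational aside (not in the original paper), the involutions $g_j$ constructed in the proof of Lemma \ref{an-invreal} can be checked directly. The Python sketch below uses sympy's permutation machinery, with points $0,\dots,j-1$ in place of $1,\dots,j$, and verifies $g_j \tau_j g_j = \tau_j^{-1}$, $g_j^2 = \id$ and the sign formula for small $j$:

```python
# Computational check of the involutions g_j in the proof of Lemma an-invreal
# (0-indexed: tau_j is the cycle (0 1 ... j-1), g_j reverses {0,...,j-2}).
from sympy.combinatorics import Permutation

def tau(j):
    return Permutation([list(range(j))])       # the j-cycle (1 2 ... j)

def g(j):
    pairs = [[i, j - 2 - i] for i in range((j - 1) // 2)]
    return Permutation(pairs, size=j)          # product of the transpositions

for j in range(2, 10):
    t, gj = tau(j), g(j)
    k = j // 2
    expected_sign = (-1) ** (k - 1) if j % 2 == 0 else (-1) ** k
    assert gj * t * gj == t ** -1              # g_j tau_j g_j = tau_j^{-1}
    assert (gj ** 2).is_Identity               # g_j is an involution
    assert gj.signature() == expected_sign     # sign formula from the proof
print("Lemma an-invreal involutions verified for j = 2, ..., 9")
```

Here $g_j$ is encoded as the order-reversing involution on $\{0,\dots,j-2\}$, which is exactly the product of transpositions written in the proof.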
We deal with elements $s$ having a power in ${\mathcal O}$, the conjugacy class of $s$. Clearly, if $s^j=\sigma s \sigma^{-1}$ is in ${\mathcal O}$, then $s^{j^l}=\sigma^l s \sigma^{-l}$ is in ${\mathcal O}$, for every $l$. So, $s^{j^{|\sigma|}}=s$; this implies that $|s|$ divides $j^{|\sigma|}-1$. Hence \begin{align}\label{ecN} N \quad \text{divides} \quad j^{|\sigma|}-1, \end{align} with $N:=|q_{ss}|$, recall \eqref{schur}. \begin{lema}\label{lemagen2} Let $G$ be a finite group, $s\in G$, ${\mathcal O}$ the conjugacy class of $s$ and $\rho \in \widehat{G^s}$. Assume that there exists an integer $j$ such that $s$, $s^j$ and $s^{j^2}$ are distinct elements and $s^j$ is in ${\mathcal O}$. If $\dim\toba({\mathcal O},\rho)<\infty$, then $s$ has even order and $q_{ss}=-1$. \end{lema} \begin{proof} We assume that $\dim\toba({\mathcal O},\rho)<\infty$, thus $N>1$. It is easy to see that \begin{align}\label{ecrel} \sigma^{-h} s^{j^l} \sigma^h=(\sigma^{-h} s \sigma^h)^{j^l}=(s^{j^{|\sigma|-h}})^{j^l}=s^{j^{|\sigma|-h+l}}, \end{align} for every $l$, $h$. We will call $t_l:=s^{j^l}$, $g_l:=\sigma^{l}$, $l=0$, $1$, $2$; so $t_l=g_l s g_l^{-1}$, for $l=0$, $1$, $2$. The other relations between $t_l$'s and $g_h$'s are obtained from \eqref{ecrel}. For $v \in V-0$ and $l=1$ or $2$, we define $W_l:=\mathbb C-\text{span of } \{g_0v,g_lv \}$. Hence, $W_l$ is a braided vector subspace of $M({\mathcal O},\rho)$ of Cartan type with $$\mathcal Q_l =\begin{pmatrix} q_{ss} & q_{ss}^{j^{|\sigma|-l}} \\ q_{ss}^{j^l} & q_{ss} \end{pmatrix},\qquad \mathcal A_l =\begin{pmatrix} 2 & a_{12}(l) \\ a_{21}(l) & 2 \end{pmatrix}, $$ where $a_{12}(l)=a_{21}(l)\equiv j^{|\sigma|-l}+j^l \mod(N)$. Since $\dim\toba({\mathcal O},\rho)<\infty$, we have that $a_{12}(l)=a_{21}(l)= 0$ or $-1$. We consider now two cases.\\ (i) Let us suppose that $a_{12}(1)=a_{21}(1)= 0$. This implies that $j^{|\sigma|-1}+j \equiv 0 \mod(N)$. Since $N$ divides $j^{|\sigma|}-1$, we have that $N$ divides $j^2+1$. 
We consider now two possibilities. \begin{itemize} \item Assume that $a_{12}(2)=a_{21}(2)= 0$. Then $j^{|\sigma|-2}+j^2 \equiv 0 \mod(N)$. Since $N$ divides $j^{|\sigma|}-1$, we have that $N$ divides $j^4+1$. So, $-1 \equiv 1 \mod (N)$; hence the result follows. \item Assume that $a_{12}(2)=a_{21}(2)= -1$. Then $j^{|\sigma|-2}+j^2 \equiv -1 \mod(N)$. We can see that $N$ divides $j^4+j^2+1$. This implies that $N$ divides $1$, a contradiction. \end{itemize} (ii) Let us suppose that $a_{12}(1)=a_{21}(1)= -1$. This implies that $j^{|\sigma|-1}+j \equiv -1 \mod(N)$. Since $N$ divides $j^{|\sigma|}-1$, we have that $N$ divides $j^2+j+1$. We consider now two possibilities. \begin{itemize} \item Assume that $a_{12}(2)=a_{21}(2)= 0$. Then $j^{|\sigma|-2}+j^2 \equiv 0 \mod(N)$. So, $N$ divides $j^4+1$. It is easy to see that $N$ divides $j^2$. Since $j$ and $|s|$ are relatively prime, $N$ must be $1$, a contradiction. \item Assume that $a_{12}(2)=a_{21}(2)= -1$. This means that the subspace $\widetilde{W}:=\mathbb C-\text{span of } \{g_0v,g_1v,g_2v \}$ of $M({\mathcal O},\rho)$ is of Cartan type with $$\mathcal Q =\begin{pmatrix} q_{ss} & q_{ss}^{j^{|\sigma|-1}} & q_{ss}^{j^{|\sigma|-2}}\\ q_{ss}^j & q_{ss} & q_{ss}^{j^{|\sigma|-1}}\\ q_{ss}^{j^2} & q_{ss}^j & q_{ss} \end{pmatrix}, \qquad \mathcal A =\begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 &-1 \\ -1 & -1 &2 \end{pmatrix}. $$ By Theorem \ref{cartantype}, we have that $\dim\toba({\mathcal O},\rho)=\infty$, a contradiction. \end{itemize} This concludes the proof. \end{proof} \begin{lema}\label{lemagen1} Let $G$ be a finite group, $s\in G$, ${\mathcal O}$ the conjugacy class of $s$ and $\rho=(\rho,V) \in \widehat{G^s}$ such that $\dim\toba({\mathcal O},\rho)<\infty$. Assume that there exists an integer $j$ such that $s^j\neq s$ and $s^j$ is in ${\mathcal O}$. \begin{itemize} \item[(a)] If $\deg \rho >1$, then $s$ has even order and $q_{ss}=-1$. 
\item[(b)] If $\deg \rho =1$, then either $s$ has even order and $q_{ss}=-1$, or $q_{ss} \in {\mathbb G}_3-1$. \end{itemize} \end{lema} \begin{proof} We proceed as in the proof of Lemma \ref{lemagen2}, with the same notation. If $s^{j^2}\neq s$, then the result follows by Lemma \ref{lemagen2}. Assume that $s^{j^2}= s$. This implies that $|s|$ divides $j^2-1$, so $N$ divides $j^2-1$. (a) Let $v_1$ and $v_2$ be linearly independent vectors in $V$ and let $W$ be the $\mathbb C$-span of $\{g_0 v_1, \, g_0 v_2, \, g_1 v_1, \, g_1 v_2 \}$, with $g_0:=\id$ and $g_1:=\sigma$. Thus $W$ is a braided vector subspace of $M({\mathcal O},\rho)$ of Cartan type with $$\mathcal Q =\begin{pmatrix} q_{ss} & q_{ss} & q_{ss}^{j^{|\sigma|-1}} & q_{ss}^{j^{|\sigma|-1}}\\ q_{ss} & q_{ss} & q_{ss}^{j^{|\sigma|-1}} & q_{ss}^{j^{|\sigma|-1}}\\ q_{ss}^j & q_{ss}^j & q_{ss} & q_{ss}\\ q_{ss}^j & q_{ss}^j & q_{ss} & q_{ss} \end{pmatrix}, \qquad \mathcal A =\begin{pmatrix} 2 & a_{12} & a_{13} & a_{14} \\ a_{21} & 2 & a_{23} & a_{24} \\ a_{31} & a_{32} & 2 & a_{34} \\ a_{41} & a_{42} & a_{43} & 2 \end{pmatrix}, $$ where $a_{ij}=a_{ji}$, $i\neq j$, $a_{12}\equiv 2 \equiv a_{34} \mod (N)$, $a_{13}=a_{14}=a_{23}=a_{24}$ and $$ a_{13} \equiv j^{|\sigma|-1}+j \mod(N). $$ If $a_{12}=0$ or $a_{34}=0$, then $N$ divides $2$ and the result follows. Besides, if $a_{13}=0$, then $j^{|\sigma|-1}+j\equiv 0 \mod(N)$; this implies that $N$ divides $j^2+1$, so $N$ divides 2 and the result follows. On the other hand, if $a_{ij}=-1$, for all $i\neq j$, we have that the matrix $\mathcal A$ is not of finite type; hence $\dim \toba ({\mathcal O}, \rho)=\infty$, from Theorem \ref{cartantype}. This contradicts the hypothesis. Therefore, (a) is proved. (b) For $v \in V-0$, let $W$ be the $\mathbb C$-span of $\{g_0v,g_1v \}$, with $g_0:=\id$ and $g_1:=\sigma$.
Hence, $W$ is a braided vector subspace of $M({\mathcal O},\rho)$ of Cartan type with $$\mathcal Q =\begin{pmatrix} q_{ss} & q_{ss}^{j^{|\sigma|-1}} \\ q_{ss}^{j} & q_{ss} \end{pmatrix}, \quad \mathcal A =\begin{pmatrix} 2 & a_{12} \\ a_{21} & 2 \end{pmatrix}, $$ where $a_{12}\equiv j^{|\sigma|-1}+j \mod(N)$. Since $\dim\toba({\mathcal O},\rho)<\infty$, we have that $a_{12}= 0$ or $-1$. We consider now two possibilities.\\ (i) Assume that $a_{12}= 0$. This implies that $j^{|\sigma|-1}+j \equiv 0 \mod(N)$. Since $N$ divides $j^{|\sigma|}-1$, we have that $N$ divides $j^2+1$. Thus, $N$ divides $2$; hence $N=2$, and the result follows.\\ (ii) Assume that $a_{12}= -1$. This implies that $j^{|\sigma|-1}+j \equiv -1 \mod(N)$. Since $N$ divides $j^{|\sigma|}-1$, we have that $N$ divides $j^2+j+1$. Hence, $N$ divides $j+2$. If $p$ is a prime divisor of $N$, then $p$ divides $j-1$ or $j+1$, because $N$ divides $j^2-1$. If $p$ divides $j+1$, then $p$ divides $1$, a contradiction. Hence no prime divisor of $N$ divides $j+1$, so $N$ is relatively prime to $j+1$; since $N$ divides $j^2-1=(j-1)(j+1)$, it follows that $N$ divides $j-1$. Therefore, $N$ divides $(j+2)-(j-1)=3$, i.e., $N=3$ and the result follows. \end{proof} \section{On Nichols algebras over $\mathbb A_n$} \label{nichols-an} We recall that we denote by ${\mathcal O}$ or ${\mathcal O}_{\pi}$ the conjugacy class of an element $\pi$ in $\an$, and by $\rho \in \widehat{\an^{\pi}}$ a representative of an isomorphism class of irreducible representations of $\an^{\pi}$. We want to determine the pairs $({\mathcal O},\rho)$ for which $\dim \toba({\mathcal O},\rho)=\infty$, following the strategy given in \cite{AZ,AF}; see also \cite{G1}. The following is a helpful criterion to decide when a conjugacy class of an even permutation $\pi$ in $\sn$ splits in $\mathbb A_n$. \begin{prop}\label{split}\cite[Proposition 12.17]{JL} Let $\pi\in \an$, with $n>1$. \begin{itemize} \item[(1)] If $\pi$ commutes with some odd permutation in $\sn$, then ${\mathcal O}_{\pi}^{\an}={\mathcal O}_{\pi}^{\sn}$ and $[\sn^{\pi}:\an^{\pi}]=2$.
\item[(2)] If $\pi$ does not commute with any odd permutation in $\sn$, then ${\mathcal O}_{\pi}^{\sn}$ splits into two conjugacy classes in $\mathbb A_n$ of equal size, with representatives $\pi$ and $(1 \, 2)\pi(1 \, 2)$, and $\sn^{\pi}=\an^{\pi}$.\qed \end{itemize} \end{prop} \begin{rmk}\label{obser} (i) Notice that if $\pi$ satisfies (1) of Proposition \ref{split}, then $\pi$ is real. The converse is not true; e.g., consider $\tau_5=(1 \, 2\, 3\, 4\, 5)$ in $\aco$. (ii) One can see that if $\pi$ in $\an$ is of type $(1^{m_1},2^{m_2},\dots,n^{m_n})$, then $\pi$ satisfies (2) of Proposition \ref{split} if and only if $m_1=0$ or $1$, $m_{2h}=0$ and $m_{2h+1}\leq 1$, for all $h\geq 1$. Thus, if $\pi \in \an$ has even order, then $\pi$ is real. \end{rmk} We state the main Theorem of the section. \begin{theorem}\label{lemaan} Let $\pi\in \an$ and $\rho \in \widehat{\an^{\pi}}$. Assume that $\pi$ is neither $(1 \, 2 \, 3)$ nor $(1 \, 3 \, 2)$ in $\ac$. If $\dim\toba({\mathcal O}_{\pi},\rho) < \infty$, then $\pi$ has even order and $q_{\pi \pi}=-1$. \end{theorem} \begin{proof} If $|\pi|$ is even, the result follows by Lemma \ref{odd} and Remark \ref{obser} (ii). Let us suppose that $|\pi|\geq 5$ is odd. If $\pi^{-1}$ is in ${\mathcal O}_{\pi}$, then the result follows by Lemma \ref{odd}. Assume that $\pi^{-1}\not \in {\mathcal O}_{\pi}$. We consider two cases. (i) If $\pi^2 \in {\mathcal O}_{\pi}$, then $\pi^4$ is in ${\mathcal O}_{\pi}$, and $\pi^4\neq \pi^2$ because $|\pi| \geq 5$. Hence, the result follows from Lemma \ref{lemagen2}. (ii) Assume that $\pi^2\not \in{\mathcal O}_{\pi}$. We know that there exist $\sigma$ and $\sigma'$ in $\sn$, necessarily odd permutations, such that $\pi^{-1}=\sigma \pi \sigma^{-1}$ and $\pi^{2}=\sigma' \pi \sigma'^{-1}$. Then $\sigma''=\sigma\sigma' \in \an$ and $\pi^{-2}=\sigma'' \pi \sigma''^{-1}$; so $\pi^{-2}$ is in ${\mathcal O}_{\pi}$.
This implies that $\pi^4$ is in ${\mathcal O}_{\pi}$, and $\pi^4\neq \pi^{-2}$ because $5 \leq |\pi|$ is odd. Now, the result follows from Lemma \ref{lemagen2}. Finally, let us suppose that $|\pi|=3$, with type $(1^a,3^b)$. If $a\geq 2$ or $b\geq 2$, then $\pi$ is real, by Lemma \ref{an-invreal} (a) and Remark \ref{obser}, respectively. Hence, the result follows by Lemma \ref{odd}. This concludes the proof. \end{proof} \subsection{Case $\at$} Obviously, $\at \simeq {\mathbb Z}_3$; thus $\at$ is not real. This case was considered in \cite[Theorem 1.3]{AS1}. \subsection{Case $\ac$} It is straightforward to check that $\ac$ is not real, since $(1 \, 2 \, 3)$ is not real in $\ac$. Let $\pi$ in $\ac$; then the type of $\pi$ may be $(1^4)$, $(2^2)$ or $(1,3)$. If the type of $\pi$ is $(1^4)$, then $\dim \toba({\mathcal O}_{\pi}, \rho)=\infty$, for any $\rho$ in $\widehat{\ac}$, by Lemma \ref{trivialbraiding}. If the type of $\pi$ is $(1,3)$, then $\pi$ is not real; moreover we have $${\mathcal O}_{(1 \, \, 2 \,\, 3)}=\{(1 \, \, 2 \,\, 3),(1 \, \, 3 \, \, 4), (1 \, \, 4 \,\, 2),(2 \, \, 4 \,\, 3)\},$$ $${\mathcal O}_{(1 \, \, 3 \,\, 2)}=\{(1 \, \, 3 \,\, 2),(1 \, \, 2 \,\, 4), (1 \, \, 4 \,\, 3),(2 \, \, 3 \,\, 4)\},$$ and $\ac^{\pi}=\langle \pi \rangle\simeq {\mathbb Z}_3$. If $\rho \in \widehat{\ac^{\pi}}$ is trivial, then $\dim \toba({\mathcal O}_{\pi}, \rho)=\infty$; otherwise it is not known. The following result is a variation of \cite[Theorem 2.7]{AZ}. \begin{prop}\label{a4} Let $\pi$ in $\ac$ of type $(2^2)$. Then $\dim \toba({\mathcal O}_{\pi}, \rho)=\infty$, for every $\rho$ in $\widehat{\ac^{\pi}}$. \end{prop} \begin{proof} We can assume that $\pi=(1 \, 2)(3 \, 4)$. If we call $t_1:=\pi$, $t_2:=(1 \, 3)(2 \, 4)$ and $t_3:=(1 \, 4) (2 \, 3)$, then ${\mathcal O}_{\pi}= \{ t_1 , t_2 , t_3 \}$ and $\ac^{\pi}=\langle t_1 \rangle \times \langle t_2 \rangle \simeq {\mathbb Z}_2 \times {\mathbb Z}_2$. 
If $g_1=\id$, $g_2=(1 \, 3 \, 2)$ and $g_3=(1 \, 2 \, 3)$, then $t_j=g_j \pi g_j^{-1}$, $j=1,2,3$, and \begin{align*} t_1g_2&=g_2t_3, \quad &t_2g_1&=g_1t_2, \quad &t_3g_1&=g_1t_3,\\ t_1g_3&=g_3t_2, \quad &t_2g_3&=g_3t_3, \quad &t_3g_2&=g_2t_2. \end{align*} Let $\rho$ in $\widehat{\ac^{\pi}}$ and $M({\mathcal O}_{\pi},\rho):=g_1v \oplus g_2v \oplus g_3v$, where $\langle v \rangle$ is the vector space affording $\rho$. Thus $M({\mathcal O}_{\pi},\rho)$ is a braided vector space with braiding given by -- see \eqref{yd-braiding}-- $c(g_j v \otimes g_j v)= g_j t_1 \cdot v \otimes g_j v$ and $c(g_j v \otimes g_1 v)= g_1 t_j \cdot v \otimes g_j v$, $j=1,2,3$ and \begin{align*} c(g_1 v \otimes g_2v)&=g_2 t_3 \cdot v \otimes g_1v, \qquad &c(g_1v \otimes g_3v)&=g_3 t_2 \cdot v \otimes g_1v, \\ c(g_2 v \otimes g_3v)&=g_3 t_3 \cdot v \otimes g_2v, \qquad &c(g_3v \otimes g_2v)&=g_2 t_2 \cdot v \otimes g_3v. \end{align*} Clearly, $\dim\toba({\mathcal O}_{\pi},\varepsilon \otimes \varepsilon)=\dim\toba({\mathcal O}_{\pi},\varepsilon \otimes \sgn)=\infty$, by Lemma \ref{trivialbraiding}. If we consider $\rho=\sgn \otimes \varepsilon$ (resp. $\sgn \otimes \sgn$), then $M({\mathcal O}_{\pi},\rho)$ is of Cartan type with matrix of coefficients $(q_{ij})_{ij}$ given by $$\mathcal Q =\begin{pmatrix} -1 & -1 & 1\\ 1 & -1 & -1\\ -1 & 1 & -1 \end{pmatrix},\qquad (\text{ resp. } \mathcal Q =\begin{pmatrix} -1 & 1 & -1\\ -1 & -1 & 1\\ 1 & -1 & -1 \end{pmatrix}). $$ In both cases the Cartan matrix is $\mathcal A =\begin{pmatrix} 2 & -1 & -1\\ -1 & 2 & -1\\ -1 & -1 & 2 \end{pmatrix}$. Therefore, $\dim\toba({\mathcal O}_{\pi}, \rho)=\infty$, by Theorem \ref{cartantype}. \end{proof} \medbreak \subsection{Case $\aco$} Here is the key step in the consideration of this case. \begin{lema}\label{a5} Let $\pi \in \aco$. Then $\dim \toba({\mathcal O}_{\pi}, \rho)=\infty$, for every $\rho$ in $\widehat{\aco^{\pi}}$. \end{lema} \begin{proof} Let $\pi \in \aco$. 
If the type of $\pi$ is either $(1^5)$, $(1^2,3)$ or $(5)$, we have that $\dim \toba({\mathcal O}_{\pi}, \rho)=\infty$, by Lemma \ref{odd} and Proposition \ref{a5-h4-absreal}. Let us assume that the type of $\pi$ is $(2^2)$. For $j=1,2,3$, let $t_j$ and $g_j$ be as in the proof of Proposition \ref{a4}. By Proposition \ref{split} and straightforward computations, we have that ${\mathcal O}_{\pi}^{\aco}={\mathcal O}_{\pi}^{\mathbb S_5}$ and $\aco^{\pi}=\langle t_1 \rangle \times \langle t_2 \rangle \simeq {\mathbb Z}_2 \times {\mathbb Z}_2$. Notice that $t_j \in {\mathcal O}_{\pi}^{\aco}$, $j=1,2,3$. Let $\rho\in \widehat{\aco^{\pi}}$ and $W:=g_1v \oplus g_2v \oplus g_3v$, where $\langle v \rangle$ is the vector space affording $\rho$; then $W$ is a braided vector subspace of $M({\mathcal O}_{\pi}, \rho)$. Therefore, $\dim \toba({\mathcal O}_{\pi}, \rho)=\infty$, by the same argument as in the proof of Proposition \ref{a4}. \end{proof} As an immediate consequence of Lemma \ref{a5} we have the following result. \begin{theorem}\label{mainteor} Any finite-dimensional complex pointed Hopf algebra $H$ with $G(H)\simeq \aco$ is necessarily isomorphic to the group algebra of $\aco$. \end{theorem} \begin{proof} Let $H$ be a complex pointed Hopf algebra with $G(H)\simeq \aco$. Let $M \in {}_{\mathbb C \aco}^{\mathbb C \aco}\mathcal{YD}$ be the infinitesimal braiding of $H$ -see \cite{AS-cambr}. Assume that $H\neq \mathbb C \aco$; thus $M\neq 0$. Let $N\subset M$ be an irreducible submodule. Then $\dim\toba(N)=\infty$, by Lemma \ref{a5}. Hence, $\dim\toba(M)=\infty$ and $\dim H=\infty$. \end{proof} \subsection{Case $\A_6$} Let $\pi$ be in $\A_6$. If the type of $\pi$ is $(1^6)$, $(1^2,2^2)$, $(1^3,3)$, $(3^2)$ or $(1,5)$, then $\pi$ is absolutely real by Lemma \ref{an-invreal}, and if the type of $\pi$ is $(2,4)$, then $\pi$ is real because it has even order -- see Remark \ref{obser} (ii). Hence, $\A_6$ is a real group. We summarize our results in the following statement. 
\begin{theorem} Let $M({\mathcal O},\rho)$ be an irreducible Yetter-Drinfeld module over $\mathbb C \mathbb A_6$, corresponding to a pair $({\mathcal O},\rho)$. If $\dim\toba({\mathcal O},\rho)<\infty$, then ${\mathcal O}={\mathcal O}_{\pi}$, with $\pi=(1 \, 2)(3 \, 4 \, 5 \, 6)$, and $\rho=\sgn \in \widehat{{\mathbb Z}_4}$. \end{theorem} \begin{obs} In this Theorem we do not claim that the condition is sufficient. \end{obs} \begin{proof} Let $\pi$ be in $\A_6$. If the type of $\pi$ is \begin{itemize} \item $(1^6)$, then $\dim \toba({\mathcal O}_{\pi}, \rho)=\infty$, for any $\rho$ in $\widehat{\A_{6}^{\pi}}$, by Lemma \ref{trivialbraiding}. \item $(1^3,3)$, $(3^2)$ or $(1,5)$, then $\dim \toba({\mathcal O}_{\pi}, \rho)=\infty$, for any $\rho$ in $\widehat{\A_{6}^{\pi}}$, by Lemma \ref{odd}. \end{itemize} Let us suppose that the type of $\pi$ is $(1^2,2^2)$; we can assume that $\pi=(1 \, 2)(3\, 4)$. It is easy to check that $$\A_6^{\pi}=\langle a:=(3 \, 4)(5 \, 6), \, b:=(1 \, 3 \, 2 \, 4)(5 \, 6)\rangle \simeq \mathbb D_4.$$ Notice that $\pi=b^2$. It is known that $\widehat{\mathbb D_4}=\{\rho_1, \, \rho_2, \, \rho_3, \, \rho_4, \, \rho_5 \}$, where $\rho_j$, $j=1$, $2$, $3$ and $4$, are the following characters \begin{align*} \rho_1(a)&= 1, \quad &\rho_2(a)&= -1, \quad &\rho_3(a)&= 1, \quad &\rho_4(a)&= -1,\\ \rho_1(b)&= 1, \quad &\rho_2(b)&= 1, \quad &\rho_3(b)&= -1, \quad &\rho_4(b)&= -1, \end{align*} and $\rho_5$ is the $2$-dimensional representation given by \begin{align*} \rho_5(a)=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \rho_5(b)=\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}. \end{align*} It is clear that $\rho_j(\pi)=1$, $j=1$, $2$, $3$ and $4$. Then $\dim\toba({\mathcal O}_{\pi},\rho)=\infty$, by Lemma \ref{trivialbraiding}. Let us consider now that $\rho=\rho_5$. We define $t_1:=(1 \, 2)(3 \, 4)$, $t_2:=(1 \, 3)(2 \, 4)$, $t_3:=(1 \, 4)(2 \, 3)$, $g_1:=\id$, $g_2:=(1 \, 3 \, 2)$ and $g_3:=(1 \, 2 \, 3)$. 
It is clear that \begin{align*} \rho(t_1)=\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \rho(t_2)=\begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}=-\rho(t_3). \end{align*} If $v_1:=\begin{pmatrix} i \\ 1\end{pmatrix}$ we have that $\rho(t_1)(v_1)=-v_1$ and $\rho(t_2)(v_1)=v_1=-\rho(t_3)(v_1)$. We define $W:=\mathbb C-\text{span of } \{g_1v_1,g_2v_1,g_3v_1 \}$. Then $W$ is a braided vector subspace of $M({\mathcal O}_{\pi},\rho)$ of Cartan type with \begin{align*} \mathcal Q=\begin{pmatrix} -1 & -1 & 1 \\ 1 & -1 & -1 \\ -1 & 1 & -1 \end{pmatrix}, \qquad \mathcal A=\begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}. \end{align*} Since $\mathcal A$ is not of finite type we have that $\dim\toba({\mathcal O}_{\pi},\rho)=\infty$, by Theorem \ref{cartantype}. Finally, let us assume that the type of $\pi$ is $(2,4)$. Then ${\mathcal O}_{\pi}$ has $90$ elements and $\A_{6}^{\pi}=\langle \pi \rangle \simeq {\mathbb Z}_4$. We call $\widehat{{\mathbb Z}_4}=\{\chi_0,\chi_1,\chi_2,\chi_3 \}$, where $\chi_l(\pi)=i^l$, $l=0$, $1$, $2$, $3$. It is clear that if $\rho=\chi_l$, with $l=0$, $1$ or $3$, then $\rho(\pi)\neq -1$. This implies that $\dim\toba({\mathcal O}_{\pi},\rho)=\infty$, by Lemma \ref{odd}. \end{proof} \begin{obs} We can see that every maximal abelian subrack of ${\mathcal O}_{(12)(3456)}$ has two elements. Hence, $M({\mathcal O}_{(12)(3456)},\rho)$ is a negative braided space in the sense of \cite{AF}. \end{obs} \medbreak \subsection{Case $\am$, $m\geq 7$} Let $\pi \in \A_m$, with $|\pi|$ even. We now investigate the Nichols algebras associated with $\pi$ by reduction to the analogous study for the orbit of $\pi$ in $\sm$, \cite{AF}. By Remark \ref{obser} (ii), ${\mathcal O}_{\pi}^{\am}={\mathcal O}_{\pi}^{\sm}$ and $[\sm^{\pi}:\am^{\pi}]=2$. So, we can determine the irreducible representations of $\am^{\pi}$ from those of $\sm^{\pi}$.
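The class equality and the index-two drop of centralizers underlying this reduction can be illustrated on the smallest relevant instance, the type-$(2,4)$ class from the previous subsection. The following SymPy sketch (an illustrative check, written 0-indexed) confirms that for $\pi=(1\,2)(3\,4\,5\,6)$ the $\mathbb S_6$- and $\mathbb A_6$-classes coincide, both of size $90$, while the centralizer order drops from $8$ to $4$.

```python
from sympy.combinatorics import Permutation
from sympy.combinatorics.named_groups import AlternatingGroup, SymmetricGroup

# pi = (1 2)(3 4 5 6), written 0-indexed; type (2,4), of even order 4
pi = Permutation([[0, 1], [2, 3, 4, 5]])
S6, A6 = SymmetricGroup(6), AlternatingGroup(6)

cS, cA = S6.centralizer(pi), A6.centralizer(pi)
assert cS.order() == 8 and cA.order() == 4        # index [S6^pi : A6^pi] = 2
# orbit-stabilizer: both conjugacy classes have 720/8 = 360/4 = 90 elements
assert S6.order() // cS.order() == A6.order() // cA.order() == 90
assert cA.contains(pi) and pi.order() == 4        # A6^pi = <pi>, cyclic of order 4
print("class sizes and centralizers check out")
```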
We know that if the type of $\pi$ is $(1^{b_1},2^{b_2},\dots,m^{b_m})$, then $\sm^{\pi}= T_1 \cdots T_m$, with $T_i \simeq {\mathbb Z}_i^{b_i}\rtimes \mathbb S_{b_i}$, $1\leq i\leq m$. \medbreak {\bf{Some generalities and notation.}} Let $G$ be a finite group, $H$ a subgroup of $G$ of index two, and $\eta$ a representation of $G$. It is easy to see that $$ \eta '(g):=\begin{cases}\eta(g), &\text{if $g \in H$},\\ -\eta(g), &\text{if $g \in G \setminus H$}, \end{cases} $$ defines a new representation of $G$. Notice that $\operatorname{Res}_{H}^{G}\eta=\operatorname{Res}_{H}^{G}\eta '$. On the other hand, any representation $\rho$ of $H$ defines a representation $\overline{\rho}$ of $H$, called the \emph{conjugate representation of $\rho$}, given by $\overline{\rho}(h):=\rho(ghg^{-1})$, for every $h \in H$, where $g$ is an arbitrary fixed element in $G\setminus H$. Since $g$ is unique up to multiplication by an element of $H$, the conjugate representation is unique up to isomorphism. Let $s\in H$ be such that ${\mathcal O}_{s}^{H}={\mathcal O}_{s}^{G}$; thus $[G^s:H^s]=2$. Let $\eta$ be in $\widehat{G^s}$. Then we have two cases: \begin{itemize} \item[(i)] $\eta \not\simeq \eta '$. If $\rho:=\operatorname{Res}_{H^s}^{G^s} \eta$, then $\rho \in \widehat{H^s}$, $\rho\simeq \overline{\rho}$ and $\operatorname{Ind}_{H^s}^{G^s} \rho \simeq \eta \oplus \eta'$. \item[(ii)] $\eta \simeq \eta '$. We have that $\operatorname{Res}_{H^s}^{G^s} \eta \simeq \rho \oplus \overline{\rho}$ and $\operatorname{Ind}_{H^s}^{G^s} \rho \simeq \eta \simeq \operatorname{Ind}_{H^s}^{G^s} \overline{\rho}$. \end{itemize} Moreover, if $\rho$ is an irreducible representation of $H^s$, then $\rho$ is a restriction of some $\eta \in \widehat{G^s}$ or is a direct summand of $\operatorname{Res}_{H^s}^{G^s} \eta$ as in (ii); see \cite[Ch. 5]{FH}.
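This index-two dichotomy can be seen concretely in the smallest example $G=\mathbb S_3$, $H=\mathbb A_3$: the two one-dimensional representations of $\mathbb S_3$ restrict irreducibly (case (i)), while the two-dimensional one restricts as $\rho\oplus\overline{\rho}$ (case (ii)). A minimal numerical sketch of case (ii) follows; the matrices are our choice of the standard two-dimensional representation and are not taken from the text.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # primitive cube root of unity
# the 2-dimensional irreducible representation eta of S3:
# R = eta(3-cycle); S = eta(transposition), an element of G \ H
R = np.diag([w, w.conjugate()])
S = np.array([[0.0, 1.0], [1.0, 0.0]])

# relations of S3: R^3 = S^2 = id and S R S = R^{-1}
assert np.allclose(np.linalg.matrix_power(R, 3), np.eye(2))
assert np.allclose(S @ S, np.eye(2))
assert np.allclose(S @ R @ S, np.linalg.inv(R))

# restricted to A3 = <3-cycle>, eta is diagonal: it splits into the two
# conjugate characters chi (eigenvalue w) and chi-bar (eigenvalue w-bar);
# their sum is the trace w + w-bar = -1
assert np.allclose(np.trace(R), w + w.conjugate())

# conjugation by the odd element S swaps the two characters, i.e. it
# exchanges the diagonal entries of R
assert np.allclose(S @ R @ np.linalg.inv(S), R.conjugate())
print("S3/A3 dichotomy illustrated")
```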
\begin{obs} If $\eta \in \widehat{G^s}$ and $\rho:=\operatorname{Res}_{H^s}^{G^s} \eta$, it is easy to check that \begin{align} M({\mathcal O}^{G}_{s},\eta)&\simeq M({\mathcal O}^{H}_{s},\rho), & &\text{for the case (i)},\\ M({\mathcal O}^{G}_{s},\eta)&\simeq M({\mathcal O}^{H}_{s},\rho) \oplus M({\mathcal O}^{H}_{s},\overline{\rho}), & & \text{for the case (ii)}, \end{align} as braided vector spaces. \end{obs} We apply these observations to the case $G=\sm$ and $H=\am$. We use some notations given in \cite[Section II.D]{AF}. \begin{lema} Assume that the type of $\pi$ is $((2r)^n)$, with $r\geq 1$ and $n$ even. Let $\rho$ in $\widehat{\A_{m}^{\pi}}$, with $m=2rn$. \begin{itemize} \item[(a)] If $q_{\pi \pi}\neq -1$, then $\dim\toba({\mathcal O}_{\pi}, \rho) = \infty$. \item[(b)] If $\rho \simeq \overline{\rho}$ and $q_{\pi \pi}= -1$, then \begin{itemize} \item[(I)] if $r=1$, then $\dim \toba({\mathcal O}, \rho)= \infty$. \item[(II)] Assume that $r > 1$. If $\deg \rho>1$, then $\dim \toba({\mathcal O}, \rho)= \infty$. Assume that $\deg \rho=1$. If $\rho=\chi_{r,\dots,r} \otimes \mu$, with $r$ even or odd, or if $\rho=\chi_{c,\dots,c} \otimes \mu$, with $r$ even and $c=\frac{r}{2}$ or $\frac{3r}{2}$, where $\mu=\varepsilon$ or $\sgn$, then the braiding is negative; otherwise, $\dim \toba({\mathcal O}, \rho)= \infty$. \end{itemize} \end{itemize} \end{lema} \begin{proof} (a) follows by Remark \ref{obser} (ii) and Lemma \ref{odd}. (b). Since \emph{$\rho \simeq \overline{\rho}$}, $\rho=\operatorname{Res}^{\sm^{\pi}}_{\am^{\pi}}(\eta)$, with $\eta\in \widehat{\sm^{\pi}}$, $\eta \not \simeq \eta '$ and $\operatorname{Ind}^{\sm^{\pi}}_{\am^{\pi}}\rho \simeq \eta \oplus \eta '$. Notice that $\eta(\pi)=-\Id$ because $\rho(\pi)=-\Id$. Now, as the racks are the same, i.e. ${\mathcal O}_{\pi}^{\am}={\mathcal O}_{\pi}^{\sm}$, we can apply \cite[Theorem 1]{AF}. \end{proof} \begin{obs} Keep the notation of the Lemma. 
If $\rho$ is not isomorphic to its conjugate representation $\overline{\rho}$, then there exists $\eta \in \widehat{\sm^{\pi}}$ such that $\operatorname{Res}^{\sm^{\pi}}_{\am^{\pi}}(\eta)=\rho \oplus \overline{\rho}$ and $\operatorname{Ind}^{\sm^{\pi}}_{\am^{\pi}}\rho \simeq \eta \simeq \eta ' \simeq \operatorname{Ind}^{\sm^{\pi}}_{\am^{\pi}}\overline{\rho}$. Clearly, $\eta(\pi)$ and $\overline{\rho}(\pi)$ act by scalar $-1$, and we have that $ M({\mathcal O}^{\sm}_{\pi},\eta)\simeq M({\mathcal O}^{\am}_{\pi},\rho) \oplus M({\mathcal O}^{\am}_{\pi},\overline{\rho})$ as braided vector spaces. We do not get new information with the techniques available today. \end{obs} \bigbreak \section{On Nichols algebras over $\dn$}\label{nichols-dn} We fix the notation: the dihedral group $\dn$ of order $2n$ is generated by $x$ and $y$ with defining relations $x^2 = e = y^n$ and $xyx = y^{-1}$. Let $\omega$ be a primitive $n$-th root of 1 and let $\chi$ be the character of $\langle y \rangle$, $\chi(y)=\omega$. If $s \in \dn$ then we denote the conjugacy class by ${\mathcal O}_s^n$ or simply ${\mathcal O}_s$. \begin{theorem}\label{dn} Let $M({\mathcal O}, \rho)$ be the irreducible Yetter-Drinfeld module over $\mathbb C\dn$ corresponding to a pair $({\mathcal O}, \rho)$. Assume that its Nichols algebra $\toba({\mathcal O}, \rho)$ is finite-dimensional. \begin{enumerate} \item[(a)] If $n$ is odd, then $({\mathcal O}, \rho) = ({\mathcal O}_{x}, \sgn)$, where $\sgn \in \widehat{\dn^x}$, $\dn^x=\langle x \rangle \simeq \mathbb Z_2$. \item[(b)] If $n =2m$ is even, then $({\mathcal O}, \rho)$ is one of the following: \begin{itemize} \item[(i)] $({\mathcal O}_{y^{m}}, \rho)$ where $\rho\in \widehat{\dn}$ satisfies $\rho(y^{m}) = -1$. \item[(ii)] $({\mathcal O}_{y^{h}}, \chi^j)$ where $1\leq h\leq n-1$, $h\neq m$ and $\omega^{hj} = -1$. 
\item[(iii)] $({\mathcal O}_{x}, \sgn \otimes \sgn)$ or $({\mathcal O}_{x}, \sgn\otimes \varepsilon)$, where $\sgn \otimes \sgn$, $\sgn\otimes \varepsilon \in \widehat{\dn^x}$, $\dn^x=\langle x \rangle \oplus \langle y^m \rangle \simeq \mathbb Z_2 \times \mathbb Z_2$. \item[(iv)] $({\mathcal O}_{xy}, \sgn \otimes \sgn)$ or $({\mathcal O}_{xy}, \sgn\otimes \varepsilon)$, where $\sgn \otimes \sgn$, $\sgn\otimes \varepsilon \in \widehat{\dn^{xy}}$, $\dn^{xy}=\langle xy \rangle \oplus \langle y^m \rangle \simeq \mathbb Z_2 \times \mathbb Z_2$. \end{itemize} \end{enumerate} In the cases (i) and (ii) the dimension is finite. In the cases (iii) and (iv), the braiding is negative in the sense of \cite{AF}. \end{theorem} \begin{obs}\label{obs:isoracks} There are isomorphisms of braided vector spaces \begin{align*}M({\mathcal O}_{x}, \sgn \otimes \sgn) &\simeq M({\mathcal O}_{xy}, \sgn \otimes \sgn), \\ M({\mathcal O}_{x}, \sgn\otimes \varepsilon) &\simeq M({\mathcal O}_{xy}, \sgn \otimes \varepsilon). \end{align*}\end{obs} \begin{obs}\label{cociente} Assume for simplicity that $n$ is odd and that $n= de$, where $d$, $e$ are integers $\ge 2$. Then the (indecomposable) rack ${\mathcal O}^n_x$ is a disjoint union of $e$ racks isomorphic to ${\mathcal O}^d_x$; in other words, ${\mathcal O}^n_x$ is an extension of ${\mathcal O}^e_x$ by ${\mathcal O}^d_x$ (and vice versa), see \cite[Section 2]{AG1}. Thus, there is an epimorphism of braided vector spaces $M({\mathcal O}^n_{x}, \sgn) \to M({\mathcal O}^e_{x}, \sgn)$, as well as an inclusion $M({\mathcal O}^d_{x}, \sgn) \to M({\mathcal O}^n_{x}, \sgn)$. The techniques available today do not allow one to compute the Nichols algebra $\toba({\mathcal O}^n_{x}, \sgn)$ from the knowledge of the Nichols algebra $\toba({\mathcal O}^e_{x}, \sgn)$. \end{obs} \begin{obs} In Theorem \ref{dn} we do \emph{not} claim that the conditions are sufficient. See Tables \ref{tabladnimpar}, \ref{tabladnpar}.
For instance, it is known that $\dim \toba({\mathcal O}^n_{x}, \sgn) < \infty$ when $n = 3$ -- see \cite{ms}; for other odd $n$, this is open. \end{obs} \begin{table}[t] \begin{center} \begin{tabular}{|p{5cm}|p{2cm}|p{1.5cm}|p{1.7cm}|} \hline {\bf Orbit} & {\bf Isotropy \newline group} & {\bf Rep.} & $\dim \toba(V)$ \\ \hline $e$ & $\dn$ & any & $\infty$ \\ \hline ${\mathcal O}_{y^{h}}= \{y^{\pm h}\}$, $h\neq 0$, \newline $ \mid {\mathcal O}_{y^{h}} \mid =2$ & ${\mathbb Z}_n \simeq \langle y \rangle$ & any &$\infty$ \\ \hline ${\mathcal O}_{x}= \{xy^{h}:0\leq h\leq n-1\}$, \newline $ \mid {\mathcal O}_{x} \mid =n$ & ${\mathbb Z}_2 \simeq \langle x \rangle$ & $\varepsilon$ &$\infty$ \\ \cline{3-4} & & $\sgn$ & negative\newline braiding \\ \hline \end{tabular} \end{center} \caption{Nichols algebras of irreducible Yetter-Drinfeld modules over $\dn$, $n$ odd.}\label{tabladnimpar} \end{table} Let us now proceed with the proof of Theorem \ref{dn}. \begin{proof} If $s=\id$, then $q_{ss}=1$ and $\dim \toba({\mathcal O},\rho)=\infty$, from Lemma \ref{trivialbraiding}. We consider now two cases. \emph{CASE 1:} $n$ odd. (I) If $s=y^h$, with $1\leq h \leq n-1$, it is easy to see that ${\mathcal O}_{y^h}=\{ y^h,y^{-h} \}$ and $\dn^{y^h}=\langle y \rangle \simeq {\mathbb Z}_n$. Then $\widehat{{\mathbb Z}_n}=\{ \chi_l \}_{l=1}^n$, where $\chi_l(y)=\omega^l$, with $\omega=\exp(\frac{i 2\pi}{n}) \in {\mathbb G}_{n}$ a primitive $n$-th root of $1$. Let us consider $M({\mathcal O}_{y^h},\chi_l)$; it is a braided vector space of diagonal type. If $q_{ss}\neq -1$, then $\dim \toba({\mathcal O}_{y^h},\chi_l)=\infty$, from Lemma \ref{odd}. Assume $q_{ss}=-1$; so we have $ -1=\chi_l(s)=\chi_l(y^h)=\omega^{lh}. $ This is a contradiction because $n$ is odd. (II) If $s=x$, then ${\mathcal O}_{x}=\{ x,xy,\dots,xy^{n-1}\}$ and $\dn^{x}=\langle x \rangle \simeq {\mathbb Z}_2$. Clearly, $\dim \toba({\mathcal O}_{x},\varepsilon)=\infty$.
On the other hand, $M({\mathcal O}_{x},\sgn)$ is a negative braided vector space, since every abelian subrack of ${\mathcal O}_{x}$ has one element; indeed $xy^{j}xy^{k}=xy^{k}xy^{j}$, $0\leq j,k \leq n-1$, if and only if $j=k$. Therefore, the part (a) of the Theorem is proved. \begin{table}[t] \begin{center} \begin{tabular}{|p{5.4cm}|p{2.1cm}|p{2cm}|p{1.7cm}|} \hline {\bf Orbit} & {\bf Isotropy \newline group} & {\bf Rep.} & $\dim \toba(V)$ \\ \hline $e$ & $\dn$ & any & $\infty$ \\ \hline ${\mathcal O}_{y^{m}}= \{y^{m}\}$, $ \mid {\mathcal O}_{y^{m}} \mid = 1$ & $\dn$ & \vspace{1pt}$(V,\rho)\in \widehat{\dn}$, $\rho(y^{m}) = 1$ & $\infty$ \\ \cline{3-4} & & \vspace{0pt} $(V,\rho)\in \widehat{\dn}$, $\rho(y^{m}) = -1$ & \vspace{0pt} $2^{\dim V}$ \\ \hline ${\mathcal O}_{y^{h}}= \{y^{\pm h}\}$, $h\neq 0, m$, \newline $ \mid {\mathcal O}_{y^{h}} \mid =2$ & ${\mathbb Z}_n \simeq \langle y \rangle$ & $\chi^j$, \newline $\omega^{hj} = -1$ & 4 \\ \cline{3-4} & & $\chi^j$, \newline$\omega^{hj} \neq -1$ & $\infty$ \\ \hline ${\mathcal O}_{x}= \{xy^{2h}:0\leq h\leq m-1\}$ \newline $ \mid {\mathcal O}_{x} \mid = m$ & ${\mathbb Z}_2 \times {\mathbb Z}_2 \simeq \newline \langle x\rangle \oplus \langle y^m \rangle$ & $\varepsilon \otimes \varepsilon$, \newline $\varepsilon \otimes \sgn$ &$\infty$ \\ \cline{3-4} & & $\sgn \otimes \sgn$, \newline $\sgn \otimes \varepsilon$ & negative\newline braiding \\ \hline ${\mathcal O}_{xy}= \{xy^{2h +1}:0\leq h\leq m-1\}$ \newline $ \mid {\mathcal O}_{xy} \mid = m$ & ${\mathbb Z}_2 \times {\mathbb Z}_2 \simeq\newline \langle xy \rangle \oplus \langle y^m \rangle$ & $\varepsilon \otimes \varepsilon$, \newline $\varepsilon \otimes \sgn$ &$\infty$ \\ \cline{3-4} & & $\sgn \otimes \sgn$, \newline $\sgn \otimes \varepsilon$ & negative\newline braiding \\ \hline \end{tabular} \end{center} \caption{Nichols algebras of irreducible Yetter-Drinfeld modules over $\dn$, $n=2m$ even.}\label{tabladnpar} \end{table} \medbreak \emph{CASE 2:} $n$ even.
Let us say $n=2m$. (I) If $s=y^m$, then ${\mathcal O}_{y^m}=\{y^m\}$ and $\dn^{y^m}=\dn$. Clearly, $\dim \toba({\mathcal O}_{y^m},\rho)=\infty$, for every $\rho \in \widehat{\dn}$ with $\rho(s)=\Id$. On the other hand, if $(\rho,V) \in \widehat{\dn}$ is such that $\rho(s)=-\Id$, then it is straightforward to prove that $\toba({\mathcal O}_{y^m}, \rho)=\bigwedge(V)$, the exterior algebra of $V$; hence $\dim \toba({\mathcal O}_{y^m},\rho)=2^{\dim V}$. (II) If $s=y^h$, $h \neq 0,m$; then ${\mathcal O}_{y^h}=\{ y^h ,y^{-h}\}$ and $\dn^{y^h}=\langle y \rangle \simeq {\mathbb Z}_n$. From Lemma \ref{odd}, it is clear that $\dim \toba({\mathcal O}_{y^h},\chi_l)=\infty$, for every $l$ such that $\chi_l(y^h)\neq -1$, i.e. $\omega^{hl}\neq -1$. On the other hand, it is easy to see that $\toba({\mathcal O}_{y^h},\chi_l)=\bigwedge(M({\mathcal O}_{y^h},\chi_l))$, hence $\dim \toba({\mathcal O}_{y^h},\chi_l)=4$, for every $\chi_l$ with $\chi_l(y^h)=-1$. (III) If $s=x$, then ${\mathcal O}_{x}=\{ x y^{2h} \, : \, 0\leq h \leq m-1 \}$ and $\dn^{x}=\langle x \rangle \oplus \langle y^m \rangle \simeq {\mathbb Z}_2 \times {\mathbb Z}_2$. From Lemma \ref{trivialbraiding}, $\dim \toba({\mathcal O}_{x},\varepsilon \otimes \varepsilon )=\dim \toba({\mathcal O}_{x},\varepsilon \otimes \sgn)=\infty$. For the cases $\rho= \sgn \otimes \varepsilon$ or $\sgn \otimes \sgn$, we note the following fact. \begin{itemize} \item[(i)] If $m$ is odd and $0 \leq j,k \leq m-1$, we have that $$ x y^{2j} x y^{2k} = xy^{2k} xy^{2j} \quad \text{ if and only if } \quad j=k.$$ \item[(ii)] If $m$ is even and $0 \leq j \leq k \leq m-1$, we have that $$ xy^{2j} xy^{2k} = xy^{2k} xy^{2j} \quad \text{ if and only if } \quad \text{$k=j$ or $j+\frac{m}{2}$}.$$ \end{itemize} The cases (i) and (ii) say that every maximal abelian subrack of ${\mathcal O}_{x}$ has one and two elements, respectively. Hence, in both cases the braiding is negative. 
Indeed, the result is obvious for the case (i), while in the case (ii) we have that if $t_j:= xy^{2j}= xy^j \, x \, (x y^j)^{-1}$ and $t_k:=xy^{2k}=xy^k \, x \, (xy^k)^{-1}$ commute in ${\mathcal O}_{x}$, then $q_{jj}=-1=q_{kk}$ and $q_{jk}q_{kj}=1$; thus the braiding is negative. (IV) If $s=xy$, then ${\mathcal O}_{xy}=\{ xy^{2h+1} \, : \, 0\leq h \leq m-1 \}$ and $\dn^{xy}=\langle xy \rangle \oplus \langle y^m \rangle \simeq {\mathbb Z}_2 \times {\mathbb Z}_2$. The result follows as in (III) using the isomorphism $\dn \to \dn$, $x\mapsto xy$, $y\mapsto y$. \end{proof} \section{On Nichols algebras over semisimple Hopf algebras}\label{nichols-other} Let $A$ be a Hopf algebra. Let $J\in A\otimes A$ be a twist and let $A^J$ be the corresponding twisted Hopf algebra. If $A$ is a Hopf subalgebra of a Hopf algebra $H$, then $J$ is a twist for $H$ and $A^J$ is a Hopf subalgebra of $H^J$. Now, if $A$ is semisimple, then this induces a bijection \begin{multline}\label{isom-twist} \{\text{isoclasses of Hopf algebras with coradical $\simeq A$}\} \\\overset{\thicksim}{\longrightarrow} \{\text{isoclasses of Hopf algebras with coradical $\simeq A^J$}\}, \end{multline} that preserves standard invariants like dimension, Gelfand-Kirillov dimension, etc. Let now $A = \mathbb C \mathbb A_5$ and let $J\in A\otimes A$ be the non-trivial twist defined in \cite{Ni}. By \eqref{isom-twist} and Theorem \ref{mainteor}, we immediately conclude the following. \begin{theorem}\label{twist-a-cinco} Let $H$ be a finite-dimensional Hopf algebra with coradical isomorphic to $(\mathbb C \mathbb A_5)^J$. Then $H\simeq (\mathbb C \mathbb A_5)^J$. \qed \end{theorem} Again, this is the first classification result we are aware of for finite-dimensional Hopf algebras with coradical isomorphic to a fixed non-trivial semisimple Hopf algebra.
Recently, two semisimple Hopf algebras $B\simeq (\mathbb C\,\mathbb D_3 \times \mathbb D_3)^{J'}$ and $C\simeq (\mathbb C\,\mathbb D_3 \times \mathbb D_5)^{J''}$ were discovered in \cite{GN}. Both $B$ and $C$ are simple, that is, they have no non-trivial normal Hopf subalgebras. Since there are finite-dimensional non-semisimple pointed Hopf algebras with group either $\mathbb D_3$ or $\mathbb D_5$, there are finite-dimensional non-semisimple Hopf algebras with coradical isomorphic either to $B$ or to $C$. \subsection*{Acknowledgement} We are grateful to Professor John Stembridge for information on Coxeter groups, in particular reference \cite{BG}. We thank Mat\'\i as Gra\~na, Sebasti\'an Freyre and Leandro Vendram\'\i n for interesting discussions.
\section{Introduction} Given a hypergraph $\mathcal{H}$, let $v(\mathcal{H})$ denote the number of vertices of $\mathcal{H}$ and $e(\mathcal{H})$ denote the number of hyperedges. We denote the sets of vertices and hyperedges of $\mathcal{H}$ by $V(\mathcal{H})$ and $E(\mathcal{H})$, respectively. We say that a hypergraph is $r$-uniform if every hyperedge has size $r$. By $K_t^{(r)}$ we denote the $t$-vertex $r$-uniform clique (if $r=2$ we omit the superscript). The set of the first $n$ integers is denoted by $[n]$, and for a set $S$, we denote by $\binom{S}{r}$ the set of $r$-element subsets of $S$. Furthermore we denote the power set of a set $S$ by $2^S$. For sets $A$ and $B$ we denote their disjoint union by $A\sqcup B$. Ramsey theory is among the oldest and most intensely investigated topics in combinatorics. It began with the seminal result of Ramsey from 1930. \begin{theorem}[Ramsey~\cite{Ramsey}] Let $r,t$ and $k$ be positive integers. Then there exists an integer $N$ such that any coloring of the $N$-vertex $r$-uniform complete hypergraph with $k$ colors contains a monochromatic copy of the $t$-vertex $r$-uniform complete hypergraph. \end{theorem} Estimating the smallest value of such an integer $N$ (the so-called Ramsey number) is a notoriously difficult problem and only weak bounds are known. Given the difficulty of this problem, many people began investigating variations of this problem where graphs other than the complete graphs are considered. An example of an early result in this direction due to Chv\'atal~\cite{TreeComplete} asserts that the Ramsey number of a $t$-clique versus any $m$-vertex tree is precisely $N=1 + (m-1)(t-1)$. That is, any red-blue coloring of the complete graph $K_N$ yields a red $K_t$ or a blue copy of a given $m$-vertex tree. We now give the definition of the Ramsey number for general collections of hypergraphs. 
\begin{definition} Let $\mathcal{H}_1,\mathcal{H}_2,\dots,\mathcal{H}_k$ be nonempty collections of $r$-uniform hypergraphs. The Ramsey number $R^r_k(\mathcal{H}_1,\mathcal{H}_2,\dots,\mathcal{H}_k)$ is defined to be the minimum integer $N$ such that if the hyperedges of the complete $r$-uniform $N$-vertex hypergraph are colored with $k$ colors, then for some $1\le i \le k$, there is a monochromatic copy of a member of $\mathcal{H}_i$. If $k$ is clear by context, then we omit $k$ in this notation. If some of the collections $\mathcal{H}_i$ consist of a single hypergraph $\mathcal{G}$, then we write $\mathcal{G}$ in place of $\mathcal{H}_i = \{ \mathcal{G}\}$. \end{definition} Ramsey problems for a variety of hypergraphs and classes of hypergraphs have been considered (for a recent survey of such problems see~\cite{ramseysurvey}). In this article, we will primarily be concerned with families of hypergraphs defined in a natural way from a given graph $G$ (or hypergraph $\mathcal{H}$). In the case when $G$ is a path or a cycle, Berge~\cite{Berge} introduced a very general class of hypergraphs defined in terms of $G$. In particular if $G=P_t$, the path with $t$ edges, then a Berge-$P_t$ is any hypergraph with $t$ hyperedges $e_1,e_2,\dots,e_t$ containing vertices $v_1,v_2,\dots,v_{t+1}$ such that $v_i,v_{i+1} \in e_i$ for all $1 \le i \le t$ (a Berge-cycle is defined analogously). The Ramsey problem for Berge-paths and cycles has received much attention. Of particular interest is a result of Gy\'{a}rf\'{a}s and S\'{a}rk\"{o}zy~\cite{Gyafas_bergecycle} showing that the 3-color Ramsey number of a 3-uniform Berge-cycle of length $n$ is asymptotic to $\frac{5n}{4}$ (the 2-color case was settled exactly in~\cite{2colorcycles}). The general definition of a Berge-$G$ for an arbitrary graph $G$ was introduced by Gerbner and Palmer in~\cite{gerbner2017extremal}. 
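To illustrate the definition of a Berge-path above, here is a minimal $3$-uniform example (the vertex labels are ours and purely illustrative): the two hyperedges
\begin{displaymath}
e_1=\{v_1,v_2,a\}, \qquad e_2=\{v_2,v_3,b\}
\end{displaymath}
form a Berge-$P_2$, since $v_1,v_2 \in e_1$ and $v_2,v_3 \in e_2$; the extra vertices $a$ and $b$ play no role in the underlying path $v_1v_2v_3$.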
Since its introduction, the Tur\'an problem for Berge-$G$-free hypergraphs has been investigated heavily (see, for example~\cite{AnsteeBerge},~\cite{Tait}~and~\cite{thres}). Complete graphs were considered by several authors in~\cite{gyori},~\cite{iran},~\cite{newgyarfas},~\cite{generallemmas} and~\cite{gerbner}. However, the analogous Ramsey problem has not yet been investigated beyond the special cases of paths and cycles. We now recall the definition of the set of Berge-copies of a graph $G$. In fact, we give a more general definition in which, rather than starting with a graph $G$, we may start with any uniform hypergraph. \begin{definition} Let $\mathcal{H}=(V,\mathcal{E})$ be a $k$-vertex $s$-uniform hypergraph. Then given an integer $r \ge s$, $B\mathcal{H}$ (the set of Berge-copies of $\mathcal{H}$) is defined to be the set of $r$-uniform hypergraphs $\mathcal{H}'=(W,\mathcal{F})$ such that there exist $U \subseteq W$ and bijections $\phi:V\to U$, $\psi:\mathcal{E} \to \mathcal{F}$ such that for all $e=\{u_1,u_2,\dots,u_s\}\in \mathcal{E}$, we have $\{\phi(u_1),\phi(u_2),\dots,\phi(u_s)\} \subseteq \psi(e)$. In this case, we call $U$ the \textit{core} of $\mathcal{H}'$. \end{definition} \begin{remark} For simplicity, we will often (when it cannot lead to confusion) say that a hypergraph is a $B\mathcal{H}$ to mean it is an element of $B\mathcal{H}$. For example, we may, in a colored hypergraph, say that a certain hypergraph is a red $BK_t$, meaning that it is an element of the set $BK_t$ which is red. Similar terminology will be used with respect to the other structures which we define later. \end{remark} One of the main topics of the present paper is determining the Ramsey number of the set of Berge-copies of a hypergraph (mainly in the graph case). We show that the $2$-color Ramsey number of $BK_t$ versus $BK_s$ is linear.
In particular, we prove the following theorem: \begin{theorem}\label{berge-Ramsey} \begin{displaymath} R^3(BK_s, BK_t) = \begin{cases} t +s -1 & \textrm{if $s=t=2$, $s=t=3$ or $\{s,t\}= \{2,3\}$ or $\{s,t\}= \{2,4\}$} ,\\ t+s-2 & \textrm{if $s = 2$, $t\geq 5$, or $s=3$, $t\geq 4$ or $s=t=4$}, \\ t+s-3 & \textrm{if $s \geq 4$ and $t\geq 5$.} \end{cases} \end{displaymath} \end{theorem} For higher uniformity, we will show the following theorem. \begin{theorem}\label{berge:4-uniform} \begin{displaymath} R^4(BK_t, BK_t) = \begin{cases} t +2 & \textrm{if $2\leq t \leq 5$},\\ t+1 & \textrm{Otherwise}. \end{cases} \end{displaymath} \end{theorem} Moreover, for general uniformity $k$ we prove \begin{theorem}\label{berge:5-uniform} For $k\geq 5$ and $t \ge t_0(k)$ (for $k=5$, $t_0 =23$ suffices), \begin{displaymath} R^k(BK_t, BK_t) = t. \end{displaymath} \end{theorem} \begin{remark} We remark that a similar direction (but with mostly non-overlapping results) has been pursued by two other groups independently~\cite{axenovich,othergroup}. In particular,~\cite{axenovich} is primarily concerned with non-uniform hypergraphs whereas we focus solely on the uniform case. \end{remark} In addition to Berge-hypergraphs, we consider a variety of related structures. First, we discuss a more restrictive class of hypergraphs defined from a given hypergraph $\mathcal{H}$. \begin{definition} Let $\mathcal{H} = (V,\mathcal{E})$ be a $k$-vertex $s$-uniform hypergraph and let $S \subset V$. The trace of $\mathcal{H}$ on $S$, denoted $\Tr(\mathcal{H},S)$, is the hypergraph with vertex set $S$ and hyperedge set $\{h \cap S:h\in \mathcal{E}\}$. Then, given $r \ge s$, $T \mathcal{H}$ is defined to be the set of $r$-uniform hypergraphs $\{\mathcal{H}':\Tr(\mathcal{H}',V(\mathcal{H})) = \mathcal{H}\}$. For each such element $\mathcal{H}' \in T\mathcal{H}$, we refer to $V(\mathcal{H})$ as the core of $\mathcal{H}'$. 
\end{definition} This notion originates from the idea of shattering sets and the Sauer-Shelah lemma~\cite{SS1,SS2,SS3}. This lemma provides an upper bound on the size of an $n$-vertex (non-uniform) hypergraph $\mathcal{H}$ with $\Tr(\mathcal{H},S) \neq 2^S$ for all $k$-vertex sets $S$. Frankl and Pach~\cite{FranklPach} investigated the same problem with the restriction that the hypergraph is $r$-uniform. In the case when $\mathcal{H}$ is a (graph) cycle, $T\mathcal{H}$ was studied under the name weak $\beta$-cycle~\cite{weakbeta}. In the case of complete graphs, bounds were obtained by Mubayi and Zhao in~\cite{MubayiZhao}. For a survey on extremal problems for traces see~\cite{tracesurvey}. We now turn our attention to an even more restrictive notion called the expansion of a hypergraph. \begin{definition} Let $\mathcal{H}=(V,\mathcal{E})$ be an $s$-uniform hypergraph. The $r$-expansion $H \mathcal{H}$, for $r\ge s$, is defined to be the $r$-uniform hypergraph formed by adding $r-s$ distinct new vertices to every hyperedge in $\mathcal{H}$. Precisely, for each hyperedge $e \in \mathcal{E}$, let $U_e = \{u_{e,1},u_{e,2},\dots,u_{e,r-s}\}$, and define $H \mathcal{H} = (V \cup (\cup_{e \in \mathcal{E}} U_e),\mathcal{F})$ where $\mathcal{F} = \{e \cup U_e:e\in \mathcal{E}\}$. We call $V$ the core of $H\mathcal{H}$ and $V(H\mathcal{H}) \setminus V$ the set of expansion vertices. \end{definition} If $\mathcal{H}$ is a cycle, we recover the well-known notion of a linear cycle. Ramsey and Tur\'an problems for linear cycles have been investigated intensely (see, for example~\cite{lincyc}). The Tur\'an problem when $\mathcal{H}$ is a complete graph was investigated in~\cite{mubayi} and~\cite{Pik}. See~\cite{expansions} for a detailed survey of Tur\'an problems on expansions. In this article, we investigate the $2$-color Ramsey number of the 3-expansion of the complete graph $K_t$. By definition, the $3$-expansion of $K_t$ has $\binom{t}{2}+t$ vertices.
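As a small sanity check of this count (the vertex labels are ours): for $t=3$, the $3$-expansion $HK_3$ has core $\{a,b,c\}$ and hyperedges
\begin{displaymath}
\{a,b,u_{1}\}, \qquad \{b,c,u_{2}\}, \qquad \{a,c,u_{3}\},
\end{displaymath}
six vertices in total, in agreement with $\binom{3}{2}+3=6$; this hypergraph is exactly the linear triangle.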
Thus $R^3(HK_{t}, HK_{t}) \geq \binom{t}{2}+t$. We prove the following theorem, which yields a cubic upper bound on $R^3(HK_{t}, HK_{s})$. \begin{theorem}\label{th:expansion-upper} For $t, s\geq 2$, we have \begin{displaymath} R^3(HK_{t}, HK_{s}) \leq 2st(s+t). \end{displaymath} \end{theorem} \begin{remark} Suppose $t \ge s$; as a lower bound we can take an all-blue clique on $t + \binom{t}{2} -1$ vertices. However, there is still a gap between the quadratic lower bound and the cubic upper bound. \end{remark} In \cite{conlon2015hedgehogs}, Conlon, Fox and R{\"o}dl proved the same bound in the diagonal case. They showed that $R^3(HK_{t}, HK_{t}) \leq 4t^3$. This bound was later improved by Fox and Li in~\cite{fox2019ramsey}, where it was shown that $R^3(HK_{t}, HK_{t}) = O(t^2 \ln t)$. Next we consider another way a hypergraph can be defined from an arbitrary hypergraph, called the suspension~\cite{suspension} (or, earlier, the enlargement~\cite{enlargement}). Conlon, Fox and Sudakov considered the Ramsey numbers of the 3-suspension of a graph versus a 3-uniform clique in a short section of \cite{conlon2010hypergraph}. \begin{definition} Let $\mathcal{H}=(V,\mathcal{E})$ be an $s$-uniform hypergraph. The $r$-suspension $S\mathcal{H}$, for $r \ge s$, is defined to be the hypergraph formed by adding a single fixed set of $r-s$ distinct new vertices to every edge in $\mathcal{H}$. Precisely, let $U = \{u_1,u_2,\dots,u_{r-s}\}$, and define $S \mathcal{H} = (V \cup U, \mathcal{F})$ where $\mathcal{F} = \{e \cup U:e \in \mathcal{E}\}$. We call $V$ the core of $S\mathcal{H}$ and $U$ the set of suspension vertices. \end{definition} For suspensions of hypergraphs, we are only able to obtain Ramsey-type bounds using standard Ramsey number techniques. In particular, we show the following.
\begin{theorem}\label{th:suspension} For $r\geq 3$, we have $$(1+o(1))\frac{\sqrt{2}}{e} t \sqrt{2}^t < R^r(SK_t, SK_t) \leq R^2(K_t, K_t)+ (r-2).$$ \end{theorem} Finally, we discuss a class of hypergraphs defined from a graph which is larger than the class of Berge-copies. \begin{definition} The $2$-shadow of a hypergraph $\mathcal{H} = (V,\mathcal{E})$, denoted $\partial_2(\mathcal{H})$, is the graph $G = (V,E)$ where $E = \{\{x,y\}:\{x,y\}\subseteq e\in \mathcal{E}\}$. Given a graph $G = (V,E)$, define $\partial G$ to be the set of hypergraphs $\{\mathcal{H}: E(G) \subseteq E(\partial_2(\mathcal{H}))\}$. \end{definition} In~\cite{mubayi}, Mubayi determined the Tur\'an number of $\partial K_t$ in all uniformities. In this paper, we prove the following. \begin{theorem}\label{thm:2-shadow} We have \begin{enumerate}[label=\rm{(\arabic*)}] \item $R^3(\partial K_2, \partial K_2) = 3$. \item $R^3(\partial K_2, \partial K_s) = s$ for $s\geq 3$. \item $R^3(\partial K_t, \partial K_s) = t+s-3$ for $s,t\geq 3$. \item $R^r(\partial K_t, \partial K_s) = \max\{s,t\}$ for $r \ge 4$ and $s,t \ge r$. \end{enumerate} \end{theorem} \begin{remark} Observe that for any graph $G$, we have $\{HG,SG\} \subset TG \subset BG \subset \partial G$. \end{remark} \noindent\textbf{Organization.} The paper is organized as follows: In Section~\ref{sc:Berge}, we give the proofs of Theorems~\ref{berge-Ramsey},~\ref{berge:4-uniform} and \ref{berge:5-uniform}. In Section~\ref{2shadow}, we give the proof of Theorem~\ref{thm:2-shadow}. In Section~\ref{sc:trace}, we show some results on the Ramsey number of trace-cliques. In Section~\ref{sc:exp-susp}, we give the proofs of Theorems~\ref{th:expansion-upper} and~\ref{th:suspension}. \vspace{0.5cm} \section{Ramsey number of Berge-hypergraphs}\label{sc:Berge} To avoid tedious case analysis, some of the small cases are verified by computer. The code is available at \url{https://github.com/wzy3210/berge_Ramsey}.
We list below the results verified by the computer. \begin{proposition}\label{prop:comp} We have \begin{enumerate}[label=\rm{(\arabic*)}] \item $R^3(BK_3, BK_4)= 5$. \item $R^3(BK_4, BK_5) = 6$. \item $R^4(BK_t,BK_t) \leq t+2$ for $2\leq t\leq 5$. \item $R^4(BK_6,BK_6) \leq 7$. \end{enumerate} \end{proposition} \subsection{Proof of Theorem~\ref{berge-Ramsey}} Recall that the number $R^3(BK_s, BK_t)$ is the smallest number $N$ such that any $2$-edge-colored complete $3$-uniform hypergraph (with colors blue and red) on $n\geq N$ vertices contains either a blue Berge-$K_s$ or a red Berge-$K_t$. In this subsection, we will show that \begin{displaymath} R^3(BK_s, BK_t) = \begin{cases} t +s -1 & \textrm{if $s=t=2$, $s=t=3$ or $\{s,t\}= \{2,3\}$ or $\{s,t\}= \{2,4\}$} ,\\ t+s-2 & \textrm{if $s = 2$, $t\geq 5$, or $s=3$, $t\geq 4$ or $s=t=4$}, \\ t+s-3 & \textrm{if $s \geq 4$ and $t\geq 5$.} \end{cases} \end{displaymath} Let us first deal with the cases when one of $s$ or $t$ is small. In particular, we prove these cases in the following proposition. \begin{proposition}\label{prop:small-case} We have \begin{enumerate}[label=\rm{(\arabic*)}] \item\label{sm:1} $R^3(BK_2,BK_2) = 3$. \item\label{sm:2} $R^3(BK_2,BK_3) = 4$. \item\label{sm:3} $R^3(BK_3,BK_3) = 5$. \item\label{sm:2-4} $R^3(BK_2,BK_4) = 5$. \item\label{sm:4} $R^3(BK_4,BK_4) = 6$. \item\label{sm:5} $R^3(BK_2,BK_t) = t$ when $t \geq 5$. \item\label{sm:6} $R^3(BK_3,BK_t) = t+1$ when $t\geq 4$. \end{enumerate} \end{proposition} \begin{proof} \ref{sm:1} is trivial since any $2$-edge-colored complete $3$-uniform hypergraph has at least $3$ vertices and any hyperedge is a monochromatic $BK_2$. For~\ref{sm:2}, $R^3(BK_2, BK_3) > 3$ since the complete $K_3^{(3)}$ consists of a single hyperedge, and coloring it red yields neither a blue $BK_2$ nor a red $BK_3$. For the upper bound, suppose we have an edge-colored $K^{(3)}_4$. If it has a blue edge, we get a blue $BK_2$. Otherwise all of the $4$ edges are red, in which case we have a red $BK_3$. Similar reasoning gives~\ref{sm:2-4} and~\ref{sm:5}.
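For~\ref{sm:5}, the step implicit in the phrase ``similar reasoning'' is that an all-red $K^{(3)}_t$ contains a red $BK_t$; the following Hall-type count, which we sketch here for completeness, justifies it. In the bipartite incidence graph $I$ between the pairs in $\binom{[t]}{2}$ and the triples in $\binom{[t]}{3}$, every pair lies in $t-2$ triples and every triple contains $3$ pairs, so for any set $S$ of pairs
\begin{displaymath}
\abs{N_I(S)} \geq \frac{(t-2)\abs{S}}{3} \geq \abs{S} \qquad \text{for } t\geq 5,
\end{displaymath}
and Hall's theorem yields an injective assignment of the $\binom{t}{2}$ pairs to red hyperedges containing them, that is, a red $BK_t$ with core $[t]$. For $t=4$ this count fails, and indeed $\binom{4}{3}=4<6=\binom{4}{2}$, consistent with the extra vertex needed in~\ref{sm:2-4}.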
For~\ref{sm:3}, $R^3(BK_3,BK_3) > 4$ since an edge-colored $K^{(3)}_4$ with two red and two blue edges does not have a monochromatic $BK_3$. Similar reasoning gives the lower bound of~\ref{sm:4}. The upper bounds of~\ref{sm:3} and~\ref{sm:4} follow from Lemma~\ref{berge:induction}. For~\ref{sm:6}, we first show that $R^3(BK_3, BK_t) > t$. Let $\mathcal{H}$ be an edge-colored $K^{(3)}_t$ with two special vertices $v_1, v_2$ such that any hyperedge containing both $v_1, v_2$ is blue and all other hyperedges are colored red. Observe that neither a blue Berge clique on at least three core vertices nor a red Berge clique can contain both $v_1$ and $v_2$. Therefore, there is no blue $BK_3$ or red $BK_t$ in $\mathcal{H}$. For the upper bound, it is checked by computer that $R^3(BK_3, BK_4)= 5$ and the bound $R^3(BK_3, BK_t) \leq t+1$ ($t\geq 5$) follows from Lemma~\ref{berge:induction}, which will be proven later. \end{proof} Next we show the lower bound in the following proposition. \begin{proposition}\label{berge:lower} Suppose $s,t\geq 3$. We then have \begin{displaymath} R^3(BK_t, BK_s) \geq t+s-3. \end{displaymath} \end{proposition} \begin{proof} We will construct a $2$-edge-colored complete $3$-uniform hypergraph $\mathcal{H}$ on $t+s-4$ vertices without a blue $BK_t$ or a red $BK_s$. Let $V(\mathcal{H}) = A\sqcup B$ where $\abs{A} = t-2$ and $\abs{B} = s-2$. For all $a,a' \in A$, $b\in B$, color the hyperedge $\{a,a',b\}$ blue. For all $a \in A$, $b,b'\in B$, color the hyperedge $\{a,b,b'\}$ red. Moreover, color all triples in $A$ blue and all triples in $B$ red. Observe that any blue Berge clique contains at most one vertex from $B$ and any red Berge clique contains at most one vertex from $A$. It follows that $\mathcal{H}$ does not contain a blue $BK_t$ or a red $BK_s$. Hence $R^3(BK_t, BK_s) \geq t+s-3$. \end{proof} Before we present the proof of Theorem~\ref{berge-Ramsey}, we will prove the following lemma. \begin{lemma}\label{berge:induction} \label{berge} Suppose $t,s \geq 3$.
Then \begin{displaymath} R^3(BK_t,BK_{s}) \leq \max\{R^3(BK_{t-1},BK_{s}),R^3(BK_{t},BK_{s-1})\}+1. \end{displaymath} \end{lemma} \begin{proof} Without loss of generality, assume $t \geq s$. Let $\mathcal{H}$ be a 2-edge-colored complete 3-uniform hypergraph with vertex set $V$ of size at least $N:= \max\{R^3(BK_{t-1},BK_{s}),R^3(BK_t,BK_{s-1})\}+1$. We want to show that $\mathcal{H}$ contains either a blue $BK_t$ or a red $BK_s$ as a sub-hypergraph. Fix $v\in V$ and let $\mathcal{H}'$ be the hypergraph induced by $V':=V\backslash\{v\}$. Since $\abs{V'} \geq R^3(BK_{t-1},BK_{s})$, it follows by definition that $\mathcal{H}'$ contains a blue $BK_{t-1}$ or a red $BK_{s}$. If there is a red $BK_s$ we are done. Otherwise suppose we have a blue $BK_{t-1}$, with the vertex set $Y$ as its core. Now let us consider $G$, the blue trace of $v$ in $\mathcal{H}$, i.e., $G$ is the graph with vertex set $V'$ in which $\{x,y\}$ is an edge if and only if the hyperedge $\{x,y,v\}$ in $\mathcal{H}$ is colored blue. \begin{claim}\label{cl:large-red-degree} Either we can extend $Y$ using $v$ to obtain a blue $BK_t$ or there exists a vertex $u\in Y$ with $d_G(u) \leq 1$. Moreover if $d_G(u) = 1$ and $\{u,w\}$ is the only edge containing $u$, then $d_G(w) < N-2.$ \end{claim} \begin{proof} Consider the incidence graph of $G$, i.e., the bipartite graph $I$ with parts $Y$ and $E(G)$ such that for every $u\in Y$, $e\in E(G)$, $u$ is incident to $e$ if and only if $u\in e$. Observe that $Y$ is the core of a blue $BK_{t-1}$ with none of its hyperedges containing $v$. Therefore, by our definition of $G$ (the blue trace of $v$ in $\mathcal{H}$), if there is a matching of $Y$ in $I$, then we can obtain a blue $BK_t$ with $Y\cup \{v\}$ as its core. Now assume $I$ does not contain a matching of $Y$. We first claim that there exists a vertex $u\in Y$ with $d_G(u) \leq 1$. Note that the degree of each $e \in E(G)$ is at most $2$.
Thus, if $d_I(u) \geq 2$ for all $u\in Y$, then for every $S\subseteq Y$ we have $2\abs{N_I(S)} \geq e_I(S,N_I(S)) \geq 2\abs{S}$, so $|N_I(S)| \geq |S|$, which gives us a matching of $Y$ by Hall's condition. Thus, by contradiction, we have a vertex in $Y$ of degree at most $1$ in $G$. Suppose now $d_G(u) = 1$ for some $u$ in $Y$ and $e = \{u,w\}$ is the unique edge containing $u$. We claim that $d_G(w) < N-2$. Suppose not, i.e., $d_G(w) \geq N-2$. This implies that $\{v,w,z\}$ is a blue edge for every $z \in V(\mathcal{H})\backslash \{v,w\}$. Moreover, by our lower bound in Proposition~\ref{prop:small-case} (when $s,t$ are small) and Proposition~\ref{berge:lower}, there exists another vertex $y \in V'\backslash Y$. It follows that we can extend $Y$ into the core of a blue $BK_t$ with the following embedding: for each $z \in Y\backslash\{w\}$, embed $\{v,z\}$ into the hyperedge $\{v,z,w\}$. Then embed $\{v,w\}$ into $\{v,w,y\}$. Thus if we do not have a blue $BK_t$ with $Y\cup \{v\}$ as its core, then we have $d_G(w) < N-2$. \end{proof} This claim says that either there exists $u \in Y$ such that $\{v,u,x\}$ is red for every $x \in V'\backslash\{u\}$, or there exists $u,w\in V'$ such that $\{v,u,x\}$ is red for every $x\not= w$ and there exists $w_x$ such that $\{v,w,w_x\}$ is red. Note that the second case covers the first case by taking $w_x = u$. So it suffices to assume the second case. Now since $N-1 \geq R^3(BK_{t},BK_{s-1})$, it follows that $\mathcal{H}'$ either contains a blue $BK_{t}$ or a red $BK_{s-1}$. We are done in the former case. Otherwise, suppose that $\mathcal{H}'$ contains a red $BK_{s-1}$. We will show that we can extend this $BK_{s-1}$ by adding the vertex $v$ into its core. Let $X$ be the core of the Berge-$K_{s-1}$. Now for every $x\in X$ with $x\notin \{u,w\}$, we know that the edge $\{v,u,x \}$ is colored red. Hence we can embed $\{v,x\}$ into the red hyperedge $\{v,u,x\}$. It follows that we have an embedding of the edges from $v$ to all but at most two vertices of $X$, namely $u,w$.
In the case that $w \in X$, we can embed $\{v,w\}$ into the hyperedge $\{v,w,w_x\}$, which is red. Now if $u\notin X$, we are done. Otherwise, assume $u \in X$. Note that by the lower bounds in Proposition~\ref{prop:small-case} (when $s,t$ are small) and Proposition~\ref{berge:lower}, $|V'|=N-1\geq \max\{R^3(BK_{t-1}, BK_{s}), R^3(BK_t, BK_{s-1})\} \geq s+1.$ Hence it follows that there exists another vertex $y \in V(\mathcal{H}')\backslash (X\cup\{w\})$. Note that by our choice of $u$, $\{v,u,y\}$ is red. Thus we can embed $\{v,u\}$ into $\{v,u,y\}$. The above embedding extends $X$ into the core of a red $BK_s$ and we are done. \end{proof} \begin{lemma}\label{lm:4t-base-case} $R^3(BK_4,BK_t) = t+1$ for $t \geq 5$. \end{lemma} \begin{proof} We will proceed by induction on $t$. The base case that $R^3(BK_4, BK_5) = 6$ is verified by computer. Suppose now that Lemma~\ref{lm:4t-base-case} is true for all $5\leq t' < t$. Let $\mathcal{H}$ be a $2$-edge-colored complete 3-uniform hypergraph on $t+1$ vertices. Note that by Proposition~\ref{prop:small-case}, we have $R^3(BK_3, BK_{t}) = t+1$. Hence $\mathcal{H}$ either contains a blue $BK_3$ or a red $BK_{t}$. If the latter happens, we are done. So suppose $\mathcal{H}$ contains a blue $BK_3$, with the vertex set $Y$ as its core. Note that $t+1 \geq 7$ and a Berge-triangle contains at most $6$ vertices. Hence there exists a vertex $v$ that is not used by any hyperedge in the blue $BK_3$. Similar to Lemma \ref{berge:induction}, let $G$ be the blue trace of $v$ in $\mathcal{H}$. Again by Claim \ref{cl:large-red-degree}, either we can extend $Y$ using $v$ to obtain a blue $BK_4$ or there exists a vertex $u \in Y$ with $d_G(u)\leq 1$. Moreover, if $d_G(u)=1$ and $\{u,w\}$ is the only edge containing $u$, then $d_G(w) < t-1$. In the former case, we are done.
Otherwise, without loss of generality, assume that there exists a $u \in Y$ and $w \in V(\mathcal{H})\backslash\{v,u\}$ such that $\{v,u,x\}$ is red for every $x\neq w$ and there exists some vertex $w_x$ such that $\{v,w,w_x\}$ is red. By induction, $\mathcal{H}[V(\mathcal{H})\backslash\{v\}]$ contains either a blue $BK_4$ or a red $BK_{t-1}$. In the former case, we are done. In the latter case, we can extend the red $BK_{t-1}$ to a red $BK_{t}$ in the same way as in Lemma~\ref{berge:induction}. \end{proof} Now this result together with Lemma~\ref{berge} allows us to show the following proposition. \begin{proposition}\label{prop:berge-general} $R^3(BK_t,BK_s) \leq t + s -3,$ for $t,s \geq 4$ and $\max\{s,t\}\geq 5$. \end{proposition} \begin{proof} We already know this is true if one of $t$ or $s$ is 4, and so for $t,s \geq 5$ the result follows by induction on $t+s,$ using Lemma~\ref{berge}. \end{proof} \noindent Theorem~\ref{berge-Ramsey} follows from Propositions~\ref{prop:small-case},~\ref{berge:lower}~and~\ref{prop:berge-general}. \subsection{Proof of Theorem~\ref{berge:4-uniform}} In this section, for ease of reference, sometimes we use the notation $h \to e$ to denote that the hyperedge $h \in E(\mathcal{H})$ is mapped to the vertex pair $e \in E(G)$ when constructing the embedding of $E(G)$ in $E(\mathcal{H})$. Let us first deal with Theorem~\ref{berge:4-uniform} for small values of $t$. \begin{proposition}\label{prop:4-uniform-small} For $2\leq t\leq 5$, $R^4(BK_t,BK_t)= t+2$. \end{proposition} \begin{proof} For the lower bound, we use the fact that if $R^4(BK_t,BK_t) = n$, then $\binom{n}{4} \geq 2\binom{t}{2}-1$; otherwise some color class would have at most $\binom{t}{2}-1$ hyperedges, too few for a monochromatic $BK_t$. For $2\leq t\leq 5$, this shows that $R^4(BK_t,BK_t) \geq t+2$; for instance, for $t=5$ we would need $\binom{n}{4} \geq 19$ while $\binom{6}{4}=15$, so $n\geq 7=t+2$. The upper bound that $R^4(BK_t,BK_t) \leq t+2$ for $2\leq t\leq 5$ is verified by computer. \end{proof} Now we want to show that $R^4(BK_t, BK_t) = t+1$ for all $t \geq 6$. Again we start with the lower bound by showing the following proposition.
\begin{proposition}\label{4-uniform-lower} $R^4(BK_t, BK_t) \geq t+1$ for all $t\geq 6$. \end{proposition} \begin{proof} We want to construct a $2$-edge-coloring of a complete $4$-uniform hypergraph on $t$ vertices without a monochromatic $BK_t$. Let $\mathcal{H}$ be a $K^{(4)}_{t}$ with two special vertices $v_1, v_2$. Any hyperedge containing both $v_1, v_2$ is colored blue. All other hyperedges are colored red. We claim that there is no monochromatic $BK_t$ in $\mathcal{H}$. Indeed, there is no red $BK_t$ since only one of $v_1, v_2$ can be in any red $BK_t$. For blue $BK_t$, note that by our coloring there are only $\binom{t-2}{2}$ blue edges, which are fewer than the $\binom{t}{2}$ edges needed for $BK_t$. \end{proof} Now let us move on to the upper bound. \begin{lemma}\label{4-uniform-upper} For $t\geq 6$, we have that $$R^4(BK_t, BK_t) \leq t+1.$$ \end{lemma} \begin{proof} We prove the lemma by inducting on $t$. The base case that $R^4(BK_6,BK_6) \leq 7$ is verified by computer. Now assume that $t \geq 7$ and the lemma is true for all $t'<t$. Let $\mathcal{H}$ be a $2$-edge-colored complete $4$-uniform hypergraph on a vertex set $V$ of size $t+1$. For ease of reference, given a set of vertices $S$, let $d_b(S)$ and $d_r(S)$ denote the number of blue and red hyperedges containing $S$ as subset, respectively. \begin{claim}\label{cl:almost-blue} Suppose $\mathcal{H}$ does not contain a monochromatic $BK_{t}$. Let $v$ be a fixed vertex in $\mathcal{H}$. If there is a monochromatic $BK_{t-1}$ (without loss of generality, assume it is blue) without using any hyperedge containing $v$, then there exists another vertex $u$ such that $d_b(\{v,u\})\leq 2$, i.e., all hyperedges containing both $v,u$ are red except for at most two. \end{claim} \begin{proof} Let $\mathcal{H}_b$ be the blue Berge-$K_{t-1}$ hypergraph not using any hyperedge containing $v$. Let $\{u_1, u_2, \ldots u_{t-1}\}$ be the core of $\mathcal{H}_b$. 
Construct a bipartite graph $G$ with parts $A = \{u_1, \ldots, u_{t-1}\}$ and $B =\binom{V\setminus\{v\}}{3}$. For $u_i\in A$, $S \in B$, $u_i$ is adjacent to $S$ in $G$ if and only if $u_i \in S$ and $\{v\} \cup S$ is a blue edge in $\mathcal{H}$. Note that for every $S \in B$, $d_G(S) \leq 3$. Therefore, if $d_G(u_i) \geq 3$ for every $u_i \in A$, then there exists a matching of $A$ in $G$ by Hall's theorem, which implies that we can extend $\mathcal{H}_b$ to a blue $BK_{t}$ by adding $v$ into the core of $\mathcal{H}_b$. This contradicts our assumption that $\mathcal{H}$ does not have a monochromatic $BK_{t}$, and the proof of Claim~\ref{cl:almost-blue} is complete. \end{proof} Now for every $v \in V$, there exists a monochromatic $BK_{t-1}$ in $\mathcal{H}[V\backslash\{v\}]$ by induction. Hence by Claim~\ref{cl:almost-blue}, for every vertex $v$, there exists another $u$ in $V$, such that $d_c(\{v,u\})\geq \binom{t-1}{2}-2$ for some $c \in$ \{blue,red\}. We then call the pair $\{v,u\}$ a \textit{$c$ couple} where $c \in$ \{blue,red\}. Moreover, call $\{a,b\}$ a `bad pair' of $\{v,u\}$ if the hyperedge $\{a,b,v,u\}$ is not in color $c$. By Claim~\ref{cl:almost-blue}, every vertex is contained in a couple. It follows that we have at least $(t+1)/2 \geq 4$ couples, so at least two of them are of the same color. Without loss of generality, let $\{v_1,u_1\}$ and $\{v_2,u_2\}$ be two red couples. Our goal is to obtain a red embedding of a $BK_t$ using mostly edges containing $\{v_1,u_1\}$ and $\{v_2,u_2\}$. We assume that $\{v_1,u_1\} \cap \{v_2,u_2\} = \emptyset$ and remark that the other case is similar and simpler. Let $\{a_1,b_1\}, \{a_2,b_2\}$ be the two possible bad pairs of $\{v_1,u_1\}$. Let $\{c_1,d_1\}$, $\{c_2,d_2\}$ be two possible bad pairs of $\{v_2,u_2\}$. If $\{v_1, u_1\}$ has exactly two bad pairs, we can assume that for at least one of them (without loss of generality the pair $\{a_2, b_2\}$) there is a red edge $h$ containing it.
Otherwise $\{a_1,b_1\}$ and $\{a_2,b_2\}$ are blue couples with no bad pairs and it is easy to find a blue $BK_t$ by only using the blue edges containing $\{a_1,b_1\}$ and $\{a_2,b_2\}$. If $\{v_1, u_1\}$ has exactly one bad pair, let $\{a_1,b_1\}$ be that pair and pick $\{a_2, b_2\}$ arbitrarily. Note that $\{a_2, b_2\}$ is contained in some red edge $h$. If $\{v_1,u_1\}$ has no bad pair, then pick $\{a_1,b_1\}$ and $\{a_2,b_2\}$ arbitrarily. Moreover, we assume that $\{v_1,u_1,v_2,u_2\}$ is a red edge and observe that otherwise constructing the embedding is easier. Suppose $\{a_1,b_1\}$ and $\{a_2,b_2\}$ have a common vertex $u$. If $u \notin \{v_2, u_2\}$, relabel $a_1, b_1$ such that $a_1 = u$, and if $u \in \{v_2, u_2\}$ relabel $u_2, v_2,a_1, b_1$ such that $b_1 = u_2 = u$. Otherwise just relabel $a_1, b_1$ such that $a_1 \not \in \{v_2,u_2\}$. Let $x_1,x_2,\dots,x_{t-4}$ be an enumeration of $V':= V \setminus \{v_1,v_2,u_1,u_2,a_1\}$. If $b_1 \not\in \{v_2,u_2\}$, assume $x_1 = b_1$. Otherwise assume without loss of generality that $b_1=u_2$. We are going to construct the embedding in three phases: \begin{description} \item \textit{Phase 1:} Embed all vertex pairs in $V'$. Consider the following embedding: For $i,j\in\{1,\dots,t-4\}$, embed $\{x_i,x_j\}$ in $\{u_1,v_1,x_i,x_j\}$ if $i+j$ is odd and otherwise in $\{u_2,v_2,x_i,x_j\}$. We have a red $BK_{t-4}$ except possibly for at most three missing edges. Without loss of generality, let $\{x_{i_1},x_{j_1}\}$, $\{x_{i_2},x_{j_2}\}$, $\{x_{i_3},x_{j_3}\}$ be the three possible bad pairs where $i_1+j_1$ is odd and both $i_2+j_2$ and $i_3+j_3$ are even. If $\{x_{i_1}, x_{j_1}\}$ is indeed a bad pair of $\{v_1, u_1\}$, then it follows that $\{x_{i_1}, x_{j_1}\} = \{a_2, b_2\}$. Then we can embed $\{x_{i_2},x_{j_2}\}$ in $\{v_1, u_1, x_{i_2},x_{j_2}\}$, embed $\{x_{i_3},x_{j_3}\}$ in $\{v_1, u_1, x_{i_3},x_{j_3}\}$ and embed $\{x_{i_1}, x_{j_1}\}$ in $h$.
Otherwise, $\{x_{i_1}, x_{j_1}\}$ does not exist and the above embedding still works except when one of $\{x_{i_2},x_{j_2}\},\{x_{i_3},x_{j_3}\}$ is the pair $\{a_2, b_2\}$. We can then use $h$ to embed $\{a_2, b_2\}$. \item \textit{Phase 2:} Embed all edges from $\{v_1, u_1, v_2, u_2\}$ to vertices in $V'$. Consider the following embedding: \begin{align*} \{v_1,u_1,a_1, x_i\} &\to \{x_i,u_1\} \textrm{ for $i\neq 1$}.\\ \{v_1,u_1,v_2, x_i\} &\to \{x_i,v_1\} \textrm{ for $i\neq 1$}. \\ \{v_2,u_2,a_1, x_i\} &\to \{x_i,u_2\}.\\ \{v_1,v_2,u_2, x_i\} &\to \{x_i,v_2\}. \end{align*} Note that $x_1$ can only be contained in one bad pair, since otherwise we would have picked $x_1$ to be $a_1$. Hence among the three edges $\{v_1,u_1,x_1,v_2\}$, $\{v_1,u_1,x_1,u_2\}$, $\{v_1,u_1,a_1,x_1\}$, at least two of them are red. Embed $\{x_1, v_1\}$, $\{x_1, u_1\}$ into those two red edges. If all three are red, do not use $\{v_1,u_1,u_2,x_1\}$ in this part of the embedding. Now let us analyze the potential bad cases. There are at most $3$ of these edges in Phase 2 that are not red. If $\{u_1,v_1,a_1,x_i\}$, $i\not = 1$, is blue, then use the edge $\{v_1,u_1,u_2,x_i\}$ to embed $\{u_1,x_i\}$. If $\{v_1,u_1,v_2,x_i\}$, $i\not=1$, is blue, then use the edge $\{v_1,u_1,u_2,x_i\}$ to embed $\{v_1,x_i\}$. If there are two different indices $i,j$ such that $h_1 \in \{\{v_2,u_2,a_1,x_i\}, \{v_1,v_2,u_2,x_i\}\}$ and $h_2 \in \{\{v_2,u_2,a_1,x_j\},\{v_1,v_2,u_2,x_j\}\}$ are blue, then replace $h_1$ with $\{u_1, v_2, u_2,x_i\}$ and replace $h_2$ with $\{u_1, v_2, u_2, x_j\}$. The same embedding works if there is only one bad pair of $\{v_2, u_2\}$ in this phase. If for some $i$ both edges $\{v_1,v_2,u_2,x_i\}, \{v_2,u_2,a_1,x_i\}$ are blue, then it follows that the edge $\{v_2,u_2,x_i,y\}$ is red for every vertex $y$, with $y\notin \{v_1,a_1,v_2,u_2,x_i\}$. Consider the set of edges $E_i = \{\{v_2,u_2,x_i,y\}: y\notin \{v_1,v_2,u_2,a_1,x_i\} \}$. Note that $|E_i| = t-4$.
In Phase $1$, at most $\ceil{(t-6)/2}$ edges in $E_i$ are used, except when $t$ is even and $i$ is odd, in which case $\floor{(t-6)/2}$ edges in $E_i$ are used. If $t$ is even and $i$ is odd, we have at least $t-4-\floor{(t-6)/2} \geq 3$ edges in $E_i$ still available. In other cases, we have at least $t-4-\ceil{(t-6)/2} \geq 2$ edges in $E_i$ still available. Either there exist two edges in $E_i$ that can be used to embed $\{v_2,x_i\}$ and $\{u_2,x_i\}$, or in Phase $1$ there exists some $j$ such that $\{v_1,u_1,x_i,x_j\}$ is blue and $\{v_2, u_2, x_i,x_j\}$ is used to embed $\{x_i, x_j\}$. In this case, there exists some $k \in \{1,\dots,t-4\}\backslash \{i\}$ such that $i+k$ is even and $\{v_1, u_1,x_i, x_k\}$ is red. Embed $\{x_i, x_k\}$ into $\{v_1, u_1, x_i, x_k\}$. It follows that we again have two available red edges containing $x_i, v_2, u_2$ to embed $\{v_2,x_i\}$ and $\{u_2,x_i\}$. \item \textit{Phase 3:} Embed the edges in $\displaystyle\binom{\{u_1,v_1,u_2,v_2\}}{2}$. If the edge $\{u_1,v_1,v_2,a_1\}$ is red, then use it to embed $\{v_1,v_2\}$. Otherwise we know that $\{v_2,a_1\}$ and $\{u_2,a_1\}$ are the two bad pairs of $\{v_1,u_1\}$. It follows that the edge $\{v_1,u_1,u_2,x_1\}$ is still available and the edge $\{v_1,u_1,v_2,x_1\}$ was used to embed $x_1$ with one of $v_1$ or $u_1$ (without loss of generality, assume $v_1$). In this case, embed $\{v_1,x_1\}$ in $\{v_1,u_1,u_2,x_1\}$ instead and use the edge $\{v_1,u_1,v_2,x_1\}$ to embed $\{v_1,v_2\}$. Now we will embed $\{v_1,u_2\}$ and $\{u_1,u_2\}$. Let $E_{u_2} = \{\{v_1,u_1,u_2,y\}: y \notin \{v_1,u_1, v_2, u_2\}\}$. Note that $|E_{u_2}| = t-3$ and at most $2$ edges in $E_{u_2}$ are blue. Hence at least $(t-3)-2\geq 2$ of the edges in $E_{u_2}$ are red. For each red edge in $E_{u_2}$, if it was used, it was because there exists some bad pair of $\{v_1,u_1\}$ which did not use $u_2$. That in turn implies that there are still at least $2$ edges in $E_{u_2}$ that are red and available.
Hence we can embed $\{v_1,u_2\}$ and $\{u_1,u_2\}$ into these two edges. Similarly we can find an edge of the form $\{v_2,u_1,u_2,y\}$ to embed $\{u_1,v_2\}$. Finally, by counting the edges used, it is easy to check that there are still red edges of the form $\{v_1,u_1,x,y\}$ and $\{v_2,u_2,x,y\}$ available to embed both $\{v_1, u_1\}$ and $\{v_2,u_2\}$, since each pair is in at least $\binom{t-1}{2}-2$ red edges. \qedhere \end{description} \end{proof} In the case of cliques of different sizes we have the following bounds, which follow trivially from Theorem~\ref{berge:4-uniform}. \begin{proposition} Suppose $t \ge s \geq 2$ and $t \ge 6$, then \begin{displaymath} t \le R^4(BK_t,BK_s) \le t+1. \end{displaymath} \end{proposition} \begin{proof} The construction is trivial: take the complete $4$-uniform hypergraph on $t-1$ vertices with every edge colored red. The upper bound follows since $s \le t$ implies $R^4(BK_t,BK_s) \le R^4(BK_t,BK_t)$. \end{proof} For $s=t-1$ we obtain the same bound as in the case $s=t$. \begin{proposition} $R^4(BK_t,BK_{t-1}) = t+1$ for $t\geq 6$. \end{proposition} \begin{proof} The same construction works as in the $R^4(BK_t,BK_{t})$ case, and the upper bound follows from $R^4(BK_t,BK_{t-1})\le R^4(BK_t,BK_{t})$. \end{proof} \begin{theorem} Assume $2 \le s \le t-2$ and $t\geq 34$, then $R^4(BK_t,BK_{s}) = t.$ \end{theorem} \begin{proof} In a red-blue coloring of a hypergraph $\mathcal{H}$, given a pair of vertices $\{v,u\}$, we define its blue degree to be $d_B(\{v,u\}) = \abs{\{h\in E(\mathcal{H}):\{v,u\}\subseteq h \mbox{ and $h$ is blue}\}}$. The red degree $d_R(\{v,u\})$ is defined analogously. Let \begin{displaymath} \delta_B^2 = \min_{\{v,u\}\in \binom{V(\mathcal{H})}{2}}d_B(\{v,u\}), \end{displaymath} and define $\delta_R^2$ similarly. Call $\{v,u\}$ a $c$ couple, $c\in \{\textrm{blue},\textrm{red}\}$, if all but at most 5 of the hyperedges $\{v,u,x,y\}$ are colored $c$. Also call a pair $\{x,y\}$ a bad pair of the $c$ couple $\{v,u\}$ if the hyperedge $\{v,u,x,y\}$ is not colored $c$.
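As a concrete instance of these definitions: in the coloring considered here there are $t$ vertices, so every pair $\{v,u\}$ lies in exactly $\binom{t-2}{2}$ hyperedges, and hence a blue couple satisfies
\begin{displaymath}
d_B(\{v,u\}) \ge \binom{t-2}{2} - 5 \qquad\textrm{and}\qquad d_R(\{v,u\}) \le 5,
\end{displaymath}
and symmetrically for red couples.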
Note that if $\delta_B^2=0$ then we can find a pair $\{v,u\}$ such that $\{v,u,x,y\}$ is red for all $x,y$, and therefore there is a red $BK_{t-2}$. So we can assume $\delta_B^2 \geq 1$. \begin{claim} \label{cl3} Suppose there are two blue couples. Then either we can find a blue $BK_t$ or we can find two red couples such that each has at most 4 bad pairs.\end{claim} \begin{proof} Assume we have two disjoint blue couples $\{u_1,v_1\}$ and $\{u_2,v_2\}$ (the case where these pairs are not disjoint is similar and simpler), and enumerate the other $t-4$ vertices as $x_1,x_2,\dots,x_{t-4}.$ Now let us do a preliminary embedding: for $i,j\in [t-4]$, use $\{u_1,v_1,x_i,x_j\}$ to embed $\{x_i,x_j\}$ when $i+j$ is odd and $\{u_2,v_2,x_i,x_j\}$ otherwise. If $i+j$ is odd and in this part of the embedding we used a red edge $\{u_1,v_1,x_i,x_j\}$ to embed $\{x_i,x_j\}$, but the edge $\{u_2,v_2,x_i,x_j\}$ is blue, then use the edge $\{u_2,v_2,x_i,x_j\}$ instead. If $i+j$ is even and in this part of the embedding we used a red edge $\{u_2,v_2,x_i,x_j\}$ to embed $\{x_i,x_j\}$, but the edge $\{u_1,v_1,x_i,x_j\}$ is blue, then use the edge $\{u_1,v_1,x_i,x_j\}$ instead. Let us call such a change to the embedding a swap. If the edges $\{u_1,v_1,x_i,x_j\}$ and $\{u_2,v_2,x_i,x_j\}$ are both red or both blue, then we do not change anything. Note that at this point we have embedded a $BK_{t-4}$ such that every edge is blue except at most five edges, the only possible exceptions being the pairs which are simultaneously bad pairs of both $\{u_1,v_1\}$ and $\{u_2,v_2\}$. Let $e_1,e_2,\dots,e_k$ be these common bad pairs, $k \leq 5$. We begin with a simple observation which we will use again later. \begin{observation} \label{finalobs} If $k \le 1$, we can complete the embedding in such a way that each pair is contained in at least 1 blue edge.
\end{observation} If $k \geq 2$ and all but at most one $e_i$ is in at least 5 blue edges, then we can greedily embed the edges, starting from the one that is in fewer than 5 blue edges, since each is in at least one unused blue edge. So we can either find two of the $e_i$ which are in at most 4 blue edges, and the claim is proven, or we complete the embedding of a blue $BK_{t-4}$, and if that is the case we will see that we can complete this embedding to a blue $BK_t$. Since for any fixed $i$ there are at most $\lceil \frac{t-4}{2} \rceil$ indices $j$ such that $i+j$ is odd, and also $x_i$ can be in at most 10 bad pairs of $\{u_1,v_1\}$ or $\{u_2,v_2\}$, it follows that for every $i\in [t-4]$ there are at least $t-5 - \lceil \frac{t-4}{2} \rceil - 10 \geq 4$ values of $j \in [t-4]$ not used in the previous steps of the embedding such that the edge $\{u_1,v_1,x_i,x_j\}$ is blue. Then again by Hall's Theorem in the incidence graph with parts $X=\{\{x_i,v_2\}:i\in [t-4]\} \cup \{\{x_i,u_2\}:i \in [t-4]\}$ and $Y$ the set of blue edges in $\{\{x_i,x_j,u_2,v_2\}: 1\leq i < j \leq t-4\}$, we can find an embedding of the edges $\{x_i,v_2\}$ and $\{x_i,u_2\}$ for $i \in [t-4]$, and similarly we can find an embedding of the edges $\{x_i,v_1\}$ and $\{x_i,u_1\}$ for $i \in [t-4]$. We have not yet used the hyperedges of the form $\{v_1,u_1,v_2,y\}$; there are at least $t-8 \geq 26$ of these which are blue, and we can use them to embed $\{v_1,u_1\}, \{v_1,v_2\}$ and $\{u_1,v_2\}$. Similarly we can embed $\{v_2,u_2\},\{v_1,u_2\}$ and $\{u_1,u_2\}$. Therefore either we can complete the embedding or we find two pairs $e_1,e_2$ which are red couples with at most 4 bad pairs. This completes the proof of Claim~\ref{cl3}.
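As a quick check of the degree count used above, at the threshold $t=34$ we get
\begin{displaymath}
t-5-\left\lceil \frac{t-4}{2} \right\rceil-10 = 34-5-15-10 = 4,
\end{displaymath}
so the bound holds with equality there; the left-hand side is nondecreasing in $t$, so it holds for all $t \ge 34$.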
\end{proof} \begin{claim}\label{cl4} Suppose there are two red couples such that at least one has at most 4 bad pairs. Then either we can find a red $BK_{t-2}$ or we can find two blue couples such that each has at most 1 bad pair.\end{claim} \begin{proof} Again we will assume the red couples are disjoint. Let $\{u_1,v_1\}$ and $\{u_2,v_2\}$ be couples such that $\{u_1,v_1\}$ has at most 4 bad pairs, and let $\{a_1,b_1\},\{a_2,b_2\},\{a_3,b_3\},\{a_4,b_4\}$ be the bad pairs of $\{u_1,v_1\}$, arranged in increasing order of red degree. Now let $x_1,x_2,\dots,x_{t-6}$ be an enumeration of the set $V'= V\backslash \{v_1,v_2,u_1,u_2,a_1,a_2\}$. Let us consider the following embedding, which is similar to the one used in the previous claim: For $i,j\in [t-6]$ use $\{u_1,v_1,x_i,x_j\}$ to embed $\{x_i,x_j\}$ when $i+j$ is odd and $\{u_2,v_2,x_i,x_j\}$ otherwise. Similarly as in Claim~\ref{cl3}, if we encounter a bad pair of one couple but not the other, then we can change the embedding to use more red edges, and at the end we have an embedding of a $BK_{t-6}$ with almost every edge red, the only possible exceptions being the common bad pairs of $\{u_1,v_1\}$ and $\{u_2,v_2\}$ in $V'$. Hence there are at most two of these here, namely $\{a_3,b_3\}$ and $\{a_4,b_4\}$. If the red degree of these pairs is at least 2, then we can greedily embed these two pairs into red edges containing them to complete a red clique on $V'$. Otherwise one of these, and by the ordering also $\{a_1,b_1\}$ and $\{a_2,b_2\}$, will be in at most 1 red edge. Similarly as in the proof of Claim~\ref{cl3}, we use Hall's theorem to embed $\{x_i,v_2\}$, $\{x_i,u_2\}$, $\{x_i,v_1\}$ and $\{x_i,u_1\}$ for $i \in [t-6]$ (here the number $t-5 - \lceil \frac{t-4}{2} \rceil - 10$ is replaced by $t-7 - \lceil \frac{t-6}{2} \rceil - 8$, which is at least 5).
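The modified count can again be checked at the threshold $t=34$:
\begin{displaymath}
t-7-\left\lceil \frac{t-6}{2} \right\rceil-8 = 34-7-14-8 = 5,
\end{displaymath}
and this quantity is nondecreasing in $t$, so the application of Hall's theorem goes through for all $t\ge 34$.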
Since $\{v_1,u_1,v_2,y\}$ is red for at least $t-7\geq 27$ choices of $y$, and these hyperedges have not been used yet, it follows that we have enough hyperedges to embed $\{v_1,u_1\},\{v_1,v_2\}$ and $\{u_1,v_2\}$, and similarly we can embed $\{v_2,u_2\},\{v_1,u_2\}$ and $\{u_1,u_2\}$. \end{proof} Note that if there is at most one blue couple, say $\{v,u\}$, we may put $V' = V\backslash\{u\}$ and for every pair $x,y \in V'$ the red degree of $\{x,y\}$ is at least 6. Then by Hall's Theorem, we can find a red $BK_{t-1}$. So we can assume there are at least two blue couples. Thus, by Claim~\ref{cl3} either we find a blue $BK_{t}$ or we have two red couples such that at least one has at most 4 bad pairs, which are the conditions of Claim~\ref{cl4}. From here we either find a red $BK_{t-2}$ or satisfy conditions stronger than those of Claim~\ref{cl3}. In this case, there is at most one shared bad pair and so we would be able to find a blue $BK_{t}$ by Observation~\ref{finalobs}. \end{proof} \begin{remark} Instead of using Hall's Theorem in the second part of the embedding in the previous claims, if we use a more complicated case analysis the constraint $t \ge 34$ can be relaxed somewhat, but we elected not to in order to make the proof easier to follow. \end{remark} \subsection{Proof of Theorem \ref{berge:5-uniform}} In this short section, we will show that $R^k(BK_t,BK_t) = t$ when $t$ is sufficiently large. \begin{claim}\label{cl:min-red-deg} If for all $v,u \in V$, there are at least $\binom{k}{2}$ distinct red hyperedges containing both $v$ and $u$, then $\mathcal{H}$ contains a red $BK_t$. \end{claim} \begin{proof} Consider the bipartite graph $G$ with vertex set $V(G) = A \sqcup B$, where $A = \binom{V(\mathcal{H})}{2}$ and $B$ is the set of all hyperedges of $\mathcal{H}$. For $a \in A$, $h \in B$, $a$ is adjacent to $h$ in $G$ if and only if $a \subset h$ and $h$ is colored red in $\mathcal{H}$. Note that for every $h\in B$, $d_G(h) \leq \binom{k}{2}$.
Hence, if for all $\{v,u\} \in A$ we have $d_G(\{v,u\}) \geq \binom{k}{2}$, then for every $S \subseteq A$, counting edges of $G$ gives $\abs{N_G(S)}\binom{k}{2} \geq e_G(S,N_G(S)) \geq \abs{S}\binom{k}{2}$, so by Hall's theorem we have a matching of $A$ in $G$, which implies the existence of a red $BK_t$ in $\mathcal{H}$. \end{proof} \begin{claim}\label{cl:k-uniform} If $\binom{t-4}{k-4} \geq 2 \binom{k}{2} -1$, then $R^k(BK_t,BK_t) \leq t$. \end{claim} \begin{proof} If the condition in Claim \ref{cl:min-red-deg} does not hold, then there exist two vertices $v, u \in V(\mathcal{H})$ such that all but at most $\binom{k}{2}-1$ hyperedges containing both $v$ and $u$ are blue. We claim that there exists a copy of a blue $BK_t$ in $\mathcal{H}$ using only blue hyperedges containing both $v$ and $u$. Consider again the bipartite graph $G$ with vertex set $V(G)= A \sqcup B$, where $A = \binom{V(\mathcal{H})}{2}$ and $B$ is the set of blue hyperedges of $\mathcal{H}$ containing both $v$ and $u$. Note that for every $a\in A$ there are at least $\binom{t-4}{k-4} - \binom{k}{2}+1 \geq \binom{k}{2}$ blue hyperedges containing $a$, and again by Hall's theorem we have a blue~$BK_t$. \end{proof} Using Claim \ref{cl:k-uniform}, we show that $R^k(BK_t, BK_t) = t$ when $k\geq 5$ and $t$ is sufficiently large. We did not make an attempt to find the best possible constant. \begin{corollary} We have \begin{enumerate}[label=\rm{(\arabic*)}] \item $R^5(BK_t,BK_t) = t$ when $t\geq 23$. \item $R^6(BK_t,BK_t) = t$ when $t\geq 13$. \item $R^7(BK_t,BK_t) = t$ when $t\geq 12$. \item $R^{k}(BK_t,BK_t) = t$ when $k \in \{8,9,10\}$ and $t \geq k+4$. \item $R^k(BK_t,BK_t) = t$ when $k \geq 11$ and $t \geq k+3$.
\end{enumerate} \end{corollary} \begin{remark} Note that for $k\geq 11$, this result is sharp since for $t = k+2$ we have that $\binom{t}{k}\leq 2\binom{t}{2} -2.$ Hence $R^k(BK_t,BK_t) \geq k+3.$ \end{remark} \subsection{Superlinear lower bounds for sufficiently many colors} \label{superlinear} In this subsection we show that for all uniformities and for sufficiently many colors, the Ramsey number for a Berge $t$-clique is superlinear. We start with the case $r=3$. \begin{claim}\label{cl:sup-base} For any $\epsilon<1$ we have $R^{3}_3(BK_t,BK_t,BK_t)\ge (t-1)t^{\epsilon}$ for $t$ sufficiently large. \end{claim} \begin{proof} Let $\epsilon<1$. Take a vertex set consisting of $t-1$ disjoint sets of vertices $V_1, V_2, \ldots, V_{t-1}$, each of size $t^{\epsilon}$. If a hyperedge contains vertices from three different $V_i$, then color it green. By the well-known lower bound on the diagonal Ramsey number $R(K_{t^{1-\epsilon}},K_{t^{1-\epsilon}}) =\Omega(2^{t^{1-\epsilon}/2})$, we can find a coloring of $K_{t-1}$ containing no monochromatic clique of size $t^{1-\epsilon}$ when $t$ is sufficiently large. Given such a red-blue coloring on the complete graph with vertex set $\{1,2,\dots,t-1\}$, we color the hyperedges consisting of two vertices from $V_i$ and one from $V_j$ by the color of $\{i,j\}$ in the graph. We color every hyperedge completely contained in some $V_i$ red. Observe that the core of any red or blue $BK_t$ can contain vertices from fewer than $t^{1-\epsilon}$ different classes, and so has a total of fewer than $t$ vertices. \end{proof} \begin{remark} This proof can give a slightly better bound, of order $\frac{t^2}{\log(t)}$, but we write the bound in terms of $\epsilon$ for a simpler presentation. \end{remark} \begin{theorem} \label{colors} For any uniformity $r \geq 4$, and sufficiently large $c$ and $t$, we have \begin{displaymath} R^{r}_c(BK_t,BK_t,\dots,BK_t) > t^{1+ \left(\frac{r-3}{r-2}\right)^{r-3}-\left(\frac{r-3}{r-2}\right)^{r-2}}.
\end{displaymath} \end{theorem} Theorem \ref{colors} will follow from the following claim, which we will prove by induction on $r$, by choosing the optimal $\epsilon = \frac{1}{r-2}$ (which maximizes $(1-\epsilon)^{r-3}-(1-\epsilon)^{r-2}$). \begin{claim} \label{rec} For any uniformity $r \geq 3$ and any $\epsilon<1$, for sufficiently large $c$ and $t$, we have \begin{displaymath} R^{r}_c(BK_t,BK_t,\dots,BK_t) > t^{1+(1-\epsilon)^{r-3}-(1-\epsilon)^{r-2}}. \end{displaymath} \end{claim} \begin{proof} The base case follows from Claim~\ref{cl:sup-base}. Now assume that $r\geq 4$. Let $\epsilon< 1$. Let $c_s$ be the number of colors required for Claim~\ref{rec} to hold for an $s$-uniform hypergraph for $2 \le s \le r-1$. Let $M$ be the lower bound we obtain by induction for the function $R^{r-1}_{c_{r-1}}(BK_{t^{1-\epsilon}},BK_{t^{1-\epsilon}},\dots,BK_{t^{1-\epsilon}}).$ We will show that \begin{displaymath} R^{r}_{c_{r}}(BK_t,BK_t,\dots,BK_t) > M \cdot t^\epsilon \end{displaymath} for some constant $c_r$ depending on $r$. Take the complete $r$-uniform hypergraph $\mathcal{H}$ on $N = M \cdot t^\epsilon$ vertices. Partition the vertex set into sets $V_1,V_2,\dots,V_M$, each consisting of $t^\epsilon$ vertices. We consider $s$-uniform complete hypergraphs $\mathcal{H}_s$ defined on the vertex set $\{1,2,\dots,M\}$ for $2 \le s \le r-1$. Since the lower bounds in Claim \ref{rec} are decreasing (in $r$), we have for $c_s$ colors a coloring of $\mathcal{H}_s$ with no Berge clique of size $t^{1-\epsilon}$ provided $t$ is sufficiently large. Assume, indeed, that $t$ is at least the maximum required for any $s$. Now, given the colorings of $\mathcal{H}_i$ with $c_i$ colors, we define a coloring on $\mathcal{H}$ with $c_r = \sum_{s=2}^{r-1} c_s+2$ colors and no monochromatic $BK_t$. For $2 \le s \le r-1$ we color all hyperedges containing elements of the vertex sets $V_{i_1},V_{i_2},\dots,V_{i_s}$ with the same color as $\{i_1,i_2,\dots,i_s\}$ in the coloring of $\mathcal{H}_s$.
Observe that the core of a monochromatic $BK_t$ in $\mathcal{H}$ can contain vertices from fewer than $t^{1-\epsilon}$ classes. Since $\mathcal{H}_s$ has no monochromatic $BK_{t^{1-\epsilon}}$, and each class has $t^\epsilon$ vertices, it follows that $\mathcal{H}$ has no monochromatic $BK_t$ using hyperedges containing vertices from between 2 and $r-1$ classes. Finally, we may color the hyperedges contained in each $V_i$ with any color used so far and the hyperedges containing vertices from $r$ classes with a new color. It remains to verify that $M \cdot t^\epsilon$ yields the required bound. Indeed, \begin{displaymath} M\cdot t^\epsilon = t^{(1-\epsilon)\left(1+(1-\epsilon)^{r-4}-(1-\epsilon)^{r-3}\right)}\cdot t^\epsilon = t^{1+(1-\epsilon)^{r-3}-(1-\epsilon)^{r-2}}. \qedhere \end{displaymath} \end{proof} We now discuss briefly the case of forbidding Berge cliques of higher uniformity. First we collect some basic lemmas about the Ramsey number for Berge cliques in different uniformities. \begin{lemma} \label{differentorders} For any $r,c,a,b$ with $a<b$, and for $t$ sufficiently large, we have \begin{displaymath} R_c^r(BK_t^{(b)},BK_t^{(b)},\dots,BK_t^{(b)}) \ge R_c^r(BK_t^{(a)},BK_t^{(a)},\dots,BK_t^{(a)}). \end{displaymath} \end{lemma} \begin{proof} It is sufficient to prove that for sufficiently large $t$, there is an injection from $\binom{[t]}{a}$ to $\binom{[t]}{b}$ mapping sets to one of their supersets. Let $S \subset \binom{[t]}{a}$ and let $\phi(S)$ be the set of elements of $\binom{[t]}{b}$ which contain some element from $S$. We have $\abs{S}\binom{t-a}{b-a} \le \abs{\phi(S)} \binom{b}{a}$ by double-counting the containments between the two levels. Then $\abs{\phi(S)} \ge \abs{S}$ follows for sufficiently large $t$, since then $\binom{t-a}{b-a} \ge \binom{b}{a}$, and we have the desired injection by Hall's theorem.
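For a concrete illustration of this double count, take $a=2$ and $b=3$: each pair lies in $\binom{t-2}{1}=t-2$ triples, and each triple contains $\binom{3}{2}=3$ pairs, so
\begin{displaymath}
\abs{\phi(S)} \ge \frac{t-2}{3}\,\abs{S} \ge \abs{S} \qquad\textrm{once } t\ge 5.
\end{displaymath}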
\end{proof} \begin{corollary} \label{higherorder} For any uniformity $r$, $a<r$, and sufficiently large $c$ and $t$, we have \begin{displaymath} R^{r}_c(BK_t^{(a)},BK_t^{(a)},\dots,BK_t^{(a)}) \ge t^{1 + \left(\frac{r-3}{r-2}\right)^{r-3}-\left(\frac{r-3}{r-2}\right)^{r-2}}. \end{displaymath} \end{corollary} \begin{proof} The result is immediate from Lemma \ref{differentorders} and Theorem \ref{colors}. \end{proof} \section{Ramsey numbers of 2-shadow graphs and proof of Theorem \ref{thm:2-shadow}} \label{2shadow} In this short section, we discuss some results on the Ramsey number $R^r(\partial K_t, \partial K_s)$. On the one hand, we have $R^r(\partial K_t, \partial K_s) \le R^r(BK_t, BK_s)$. Most of the constructions from Section \ref{sc:Berge} are also constructions for $R^r(\partial K_t, \partial K_s)$; however, there are some exceptions. \begin{proposition} For $s,t \ge 3$, we have $R^3(\partial K_t, \partial K_s) = t+s-3$. For $s \ge 3$, $R^3(\partial K_2, \partial K_s) = s$ and $R^3(\partial K_2, \partial K_2) =3$. \end{proposition} \begin{proof} It is easy to see that $R^3(\partial K_2, \partial K_2) =3$ and $R^3(\partial K_2, \partial K_s) = s$ for $s\geq 3$. We will now show $R^3(\partial K_t, \partial K_s) \leq t+s-3$ for $s,t\ge 3$ by induction on $s+t$. The cases when $s$ or $t$ is 3 are trivial. Assume the theorem holds for smaller $s+t$ and take a $2$-edge-colored complete $3$-uniform hypergraph $\mathcal{H}$ on a vertex set $V$ of size $s+t-3$, where $s,t\geq 4$. If for all $x,y \in V$ there exists $z$ such that $\{x,y,z\}$ is blue, then every pair of vertices is covered by a blue hyperedge, so the 2-shadow contains a blue $K_{s+t-3}$ and in particular a blue $\partial K_t$. Otherwise there is a pair of vertices $x,y$ such that $\{x,y,z\}$ is red for all $z \in V\setminus \{x,y\}$; consider then the subhypergraph of $\mathcal{H}$ induced by $V \setminus \{x\}$. By induction, there exists either a blue $\partial K_t$, in which case we are done, or a red $\partial K_{s-1}$ with $Y$ as its core.
Then we can extend it to a red $\partial K_s$ with $Y\cup \{x\}$ as its core by adding the red hyperedges $\{x,y,z\}$ where $z \in Y$. The lower bound construction is to take a set $A$ of $t-2$ vertices and a set $B$ of $s-2$ vertices and color a hyperedge red if and only if it intersects $A$ in at most 1 vertex. \end{proof} \begin{proposition} For $r \ge 4$ and $s,t \ge 2$, we have $R^r(\partial K_t, \partial K_s) = \max\{s,t,r\}$. \end{proposition} \begin{proof} Consider a $2$-edge-colored complete $r$-uniform hypergraph on $N= \max\{s,t,r\}$ vertices. Suppose first that for all $x,y \in V$ there exist $z_1,z_2,\dots,z_{r-2}$ such that $\{x,y,z_1,z_2,\dots,z_{r-2}\}$ is blue; then there is a blue $K_N$ in the shadow. On the other hand, if there are $x,y \in V$ such that for all $z_1,z_2,\dots,z_{r-2}$, $\{x,y,z_1,z_2,\dots,z_{r-2}\}$ is red, then it is easy to see that there is a red $K_N$ in the 2-shadow. Thus, $R^r(\partial K_t, \partial K_s) \le \max\{s,t,r\}$. On the other hand, taking a clique of the appropriate color on $\max\{s,t,r\}-1$ vertices yields a construction for the lower bound. \end{proof} \begin{remark} The superlinear lower bounds constructed in Subsection \ref{superlinear} are in fact constructions for hypergraphs without monochromatic cliques in the 2-shadow. Thus, the same lower bounds hold. \end{remark} \section{Ramsey numbers of trace-cliques}\label{sc:trace} Throughout this section, we assume that $a,b$ are positive integers. \begin{lemma} \label{weak} $R^{a+b+1}(T K^{(a+1)}_{t},T K^{(b+1)}_{s}) \leq R^{a+b+1}(T K^{(a+1)}_{t-1},T K^{(b+1)}_{s})+s-b$, for $t\geq a+1, s \geq b+1$. \end{lemma} \begin{proof} Let $N = R^{a+b+1}(T K^{(a+1)}_{t-1},T K^{(b+1)}_{s})+s-b$, and let $\mathcal{H}$ be a $2$-edge-colored (blue and red) complete $(a+b+1)$-uniform hypergraph on $N$ vertices.
Let $\mathcal{H}'$ be an induced subhypergraph of $\mathcal{H}$ on $R^{a+b+1}(T K^{(a+1)}_{t-1},T K^{(b+1)}_{s})= N-(s-b)$ vertices, obtained by removing a set $Y$ of $s-b$ vertices. Then $\mathcal{H}'$ contains either a blue $T K^{(a+1)}_{t-1}$ or a red $T K^{(b+1)}_{s}$. In the second case we are done, so let us assume that $\mathcal{H}'$ contains a blue $T K^{(a+1)}_{t-1}$ with core $X$. Let $Z$ be a set of $b$ vertices of $\mathcal{H}'$ which does not intersect $X$ (there is such a set since $v(\mathcal{H}') \geq v(T K^{(a+1)}_{t-1}) \geq t-1+b$) and put $S = Y \cup Z.$ Consider the edges of the form $A\cup B$ where $A\subseteq X$, $|A| = a$ and $B \subseteq S$, $|B| = b+1$. If for some fixed $B$, $A \cup B$ is blue for every subset $A$ of $X$ of size $a$, then pick $v \in B \cap Y$ (this intersection is nonempty since $\abs{B} = b+1 > b = \abs{Z}$), and together with these edges and the edges defining the blue $T K^{(a+1)}_{t-1}$, $X \cup \{v\}$ is the core of a blue $T K^{(a+1)}_{t}$. If this is not the case, then for any $B \subset S$ of size $b+1$, there exists $A_B \subseteq X$ of size $a$ such that $A_B \cup B$ is red, and therefore, $S$ together with these edges is the core of a red $T K^{(b+1)}_{s}$. \end{proof} \begin{theorem} Let $t \geq a+1, s\geq b+1$. Then $R^{a+b+1}(T K^{(a+1)}_{t},T K^{(b+1)}_{s}) \leq (t-a)(s-b) + a + b.$ \end{theorem} \begin{proof} We are going to prove this result by induction on $t$. The base case is $t = a+1$, where we have $R^{a+b+1}(T K^{(a+1)}_{a+1},T K^{(b+1)}_{s}) = s+a = (s-b) + b + a$, so the result follows. Now assume that for some $t-1 \geq a+1$ the result is true. Then by Lemma~\ref{weak} we have that \begin{align*} R^{a+b+1}(T K^{(a+1)}_{t},T K^{(b+1)}_{s}) &\leq R^{a+b+1}(T K^{(a+1)}_{t-1},T K^{(b+1)}_{s})+s-b\\ &\leq (t-1-a)(s-b) + a + b + (s-b) = (t-a)(s-b) + a + b.
\qedhere \end{align*} \end{proof} \begin{proposition} \label{tracesus} Suppose that $t\geq a+1 \geq 3$ and $s\geq 2.$ Then \begin{displaymath}R^{a+1}(S K^{(a)}_t,T K_s) \leq t + \max\{R^{a+1}(S K^{(a)}_{t-1},T K_s),R^{a+1}(S K^{(a)}_t,T K_{s-1})\}. \end{displaymath} \end{proposition} \begin{proof} Let $\mathcal{H}$ be an $(a+1)$-uniform hypergraph with vertex set $V$ of size \begin{displaymath} N = t + \max\{R^{a+1}(S K^{(a)}_{t-1},T K_s),R^{a+1}(S K^{(a)}_t,T K_{s-1})\}. \end{displaymath} Since $N > R^{a+1}(S K^{(a)}_{t-1},T K_s)$, it follows that we can find either a blue $S K^{(a)}_{t-1}$ or a red $T K_s$. In the latter case, we are done, so assume there is a blue $S K^{(a)}_{t-1}$ with defining vertices $X$ and suspension vertex $u$. Now, if for some $v \in V\backslash (X\cup\{u\})$ it holds that for every set $A \subseteq X$ of size $a-1$ we have that $A\cup\{v,u\}$ is blue, then we can add $v$ to $X$ and obtain a blue $S K^{(a)}_t$. Otherwise suppose that for every $v$ we can find a set $A_v$ such that $A_v \cup \{v,u\}$ is red. Let $V' = V\setminus(X \cup \{u\})$. Note that $\abs{V'} \geq R^{a+1}(S K^{(a)}_t,T K_{s-1})$. It follows that we can find either a blue $S K^{(a)}_t$ or a red $T K_{s-1}$ in $\mathcal{H}[V']$. If we find a blue $S K^{(a)}_t$, we are done. Otherwise suppose we can find a red $T K_{s-1}$ defined on the set $Y$. Then we can extend $Y$ to a red $T K_s$ by adding to $Y$ the vertex $u$ together with the edges $A_v\cup\{v,u\}$ for every $v \in Y$, since $A_v$ does not intersect $V'$. \end{proof} \begin{corollary} Suppose that $t\geq a \geq 2$ and $s \geq 2$. Then \begin{displaymath} R^{a+1}(S K^{(a)}_t,T K_s) \leq \binom{t}{2}+(s-1)t. \end{displaymath} \end{corollary} \begin{proof} This bound follows by induction on $s+t$ from Proposition \ref{tracesus}. The cases $s=2$ and $t=a$ are trivial.
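Both branches of the maximum in the inductive step can be verified directly from the identity $\binom{t}{2}=\binom{t-1}{2}+(t-1)$:
\begin{displaymath}
t+\binom{t-1}{2}+(s-1)(t-1) = \binom{t}{2}+(s-1)t-(s-2) \le \binom{t}{2}+(s-1)t
\end{displaymath}
for $s\ge 2$, while $t+\binom{t}{2}+(s-2)t = \binom{t}{2}+(s-1)t$ exactly.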
Assume we had the bound for smaller values of $s+t$ and observe that Proposition \ref{tracesus} and induction imply that \begin{displaymath} R^{a+1}(S K^{(a)}_t,T K_s) \le t + \max \left(\binom{t-1}{2} + (s-1)(t-1), \binom{t}{2}+(s-2)t\right) \le \binom{t}{2}+(s-1)t, \end{displaymath} as required. \end{proof} \begin{proposition} Suppose that $t\geq a+1$ and $s\geq 2$. Then \begin{displaymath} R^{a+1}(S K^{(a)}_t,\partial K_s) \ge (s-1)\floor{\frac{t}{a}}+1. \end{displaymath} \end{proposition} \begin{proof} Take a vertex set of size $(s-1)\floor{\frac{t}{a}}$ and divide it into $s-1$ classes $V_1,V_2,\dots,V_{s-1}$, each of size $\floor{\frac{t}{a}}$. Color every hyperedge which intersects each $V_i$ in at most one vertex red, and color every other hyperedge blue. Clearly this construction has no red $\partial K_s$. We will now show it has no blue $S K^{(a)}_t$. Indeed, suppose that $X$ is the core of the blue suspension and $v$ is the suspension vertex. Let $V_{i_1},\dots,V_{i_k}$ denote the classes which have nonempty intersection with $X \cup \{v\}$; then $t+1 = \abs{X\cup \{v\}} = \sum_{j = 1}^k \abs{(X \cup \{v\})\cap V_{i_j}} \leq k\floor{\frac{t}{a}} \leq \frac{kt}{a}$. It follows that $k > a$. Suppose, without loss of generality, that $v \in V_{i_{a+1}}$. Then we may take $x_j \in X\cap V_{i_j}$ for $j = 1,\dots,a$, so that the edge $\{x_1,\dots,x_a,v\}$ is red, and thus not a member of a blue suspension, a contradiction. \end{proof} Thus, we have the following corollaries. \begin{corollary} Suppose that $t\geq a+1$ and $s\geq 2$. Then \begin{displaymath} R^{a+1}(S K^{(a)}_t,T K_s) \ge (s-1)\floor{\frac{t}{a}}+1. \end{displaymath} \end{corollary} \begin{corollary} $R^{a+1}(S K^{(a)}_t,T K_t) = \Theta_{a}(t^2).$ \end{corollary} \begin{proposition} Suppose that $t\geq a+2$ and $s\geq b+2$.
Then \begin{displaymath} R^{a+b+1}(HK^{(a+1)}_t,TK^{(b+1)}_s) \leq M + t+b\binom{t}{a+1} - b,\end{displaymath} where $M = \max \left(R^{a+b+1}(HK^{(a+1)}_{t-1},TK^{(b+1)}_s),R^{a+b+1}(HK^{(a+1)}_t,TK^{(b+1)}_{s-1})\right).$ \end{proposition} \begin{proof} Let $\mathcal{H}$ be an $(a+b+1)$-uniform hypergraph with vertex set $V$ of size \begin{displaymath} N = M + t+b\binom{t}{a+1} - b. \end{displaymath} Since $N > M$, we can find either a blue $HK^{(a+1)}_{t-1}$ or a red $TK^{(b+1)}_s$. If the latter case occurs we are done, so assume there is a blue $H K^{(a+1)}_{t-1}$ with core $X$ of size $t-1$ and set of expansion vertices $X'$ of size $\binom{t-1}{a+1}b$. Now let $v$ be a vertex not in $X \cup X'$. We will try to extend $X$ together with $v$. Let $A_1,A_2,\dots,A_{\binom{t-1}{a}}$ be an ordering of the subsets of $X$ of size $a$. Let $V_1 = V\backslash(X \cup X' \cup \{v\})$ and set $X_1 = X'$. For each $i = 1, 2,\dots,\binom{t-1}{a}$, if there is a set $B_i$ of size $b$ in $V_i$ such that $B_i \cup A_i \cup \{v\}$ is blue, then set $V_{i+1} = V_i \backslash B_i$ and $X_{i+1} = X_i \cup B_i$; otherwise we stop. If we can do this for every $i$, then the set $X\cup\{v\}$ defines a blue $HK^{(a+1)}_t$ with expansion set $X_{\binom{t-1}{a}}$. If not, then there is an index $i$ at which we have to stop. This means that for every set $B$ of size $b$ in $V_i$ we have that $A_i \cup B \cup \{v\}$ is red. Now the size of $V_i$ is $N-(t-1) - \binom{t-1}{a+1}b - (i-1)b -1 \geq N-t - \binom{t-1}{a+1}b - (\binom{t-1}{a}-1)b = M$, where the last equality uses Pascal's identity $\binom{t}{a+1} = \binom{t-1}{a+1}+\binom{t-1}{a}$. So by the definition of $M$, we can find either a blue $HK^{(a+1)}_t$ using $V_i$ or a red $TK^{(b+1)}_{s-1}$. In the first case we are done, so suppose we have a red $TK^{(b+1)}_{s-1}$ with defining vertices $Y$. Now we can extend $Y$ together with $v$ to a red $TK^{(b+1)}_s$, since for every $B \subseteq Y$ of size $b$ we have that the edge $B\cup A_i \cup \{v\}$ is red. \end{proof} \begin{corollary} Suppose that $t\geq a+1$ and $s\geq b+1$.
Then \begin{displaymath} R^{a+b+1}(HK^{(a+1)}_t,TK^{(b+1)}_s) \le b \binom{t+1}{a+2}+\binom{t+1}{2} -tb+s\left(b\binom{t}{a+1}+t-b\right). \end{displaymath} \end{corollary} \section{Ramsey number of expansion and suspension hypergraphs} \label{sc:exp-susp} \subsection{Expansion hypergraphs and Proof of Theorem \ref{th:expansion-upper}} In this section, we give an upper bound on $R^3(HK_{t}, HK_s)$. Recall that $H^r(K_t)$ is the $r$-graph obtained from the complete graph $K_t$ by enlarging each edge by a set of $(r-2)$ distinct new vertices. Moreover, $R^r(H^r(K_t), H^r(K_t))$ is the smallest integer $n$ such that every $2$-edge-coloring of the complete $r$-uniform hypergraph $\mathcal{H}$ on $n$ vertices contains a monochromatic $H^r(K_t)$. For ease of reference, we will use $R^r(HK_t, HK_t)$ to denote $R^r(H^r(K_t), H^r(K_t))$. We first prove the following lemma. \begin{lemma}\label{th:ex:induct} For $s,t \geq 2$, we have that \begin{displaymath} R^3(HK_{t+1}, HK_{s+1}) \leq \max\{R^3(HK_{t+1},HK_s), R^3(HK_{t},HK_{s+1})\} + 2st. \end{displaymath} \end{lemma} \begin{proof} Without loss of generality, we assume that $t\leq s$. Let \begin{displaymath} N = \max\{R^3(HK_{t+1},HK_s), R^3(HK_{t},HK_{s+1})\} + 2st \end{displaymath} and let $\mathcal{H}_N$ be a $2$-edge-colored complete $3$-uniform hypergraph on $N$ vertices. Let \begin{displaymath} W = \{v_1, v_2, \ldots, v_{2st}\} \subset V(\mathcal{H}_N) \end{displaymath} and $\mathcal{H}' = \mathcal{H}_N[V(\mathcal{H}_N) \backslash W]$. Note that $v(\mathcal{H}') \geq R^3(HK_{t}, HK_{s+1})$. Thus by the definition of the Ramsey number, there exists either a blue expansion of $K_{t}$ or a red expansion of $K_{s+1}$. If the latter happens, we are done. Thus assume that we have a blue expansion $\mathcal{H}_b$ of $K_t$. Note that $\mathcal{H}_b$ has $\binom{t}{2} + t$ vertices. Let $\{u_1,\ldots, u_t\}$ be the core of $\mathcal{H}_b$. Let $F= V(\mathcal{H}_N)\backslash V(\mathcal{H}_b)$.
\begin{claim}\label{cl:ex:induct} Suppose that $\mathcal{H}_N$ does not have a blue expansion of $K_{t+1}$. Then for every $v \in W$, there exists some $u$ in the core of $\mathcal{H}_b$ such that $\{v,u,w\}$ is colored red for all $w$ except at most $(t-1)$ elements from $F\backslash\{v\}$. \end{claim} \begin{proof} Fix a vertex $v \in W$. Construct a bipartite graph $G= A \cup B$ where $A = \{u_1, \ldots, u_t\}$ and $B =F\backslash\{v\}$. For $u_i\in A$, $w \in B$, $u_i$ is adjacent to $w$ in $G$ if and only if $\{v,u_i,w\}$ is a blue edge in $\mathcal{H}_N$. Note that for every $w \in B$, $d_G(w) \leq t$. Therefore, if $d_G(u_i) \geq t$ for every $u_i \in A$, then there exists a matching of $A$ in $G$ by Hall's theorem, which implies that we can extend $\mathcal{H}_b$ to a blue expansion of $K_{t+1}$ by adding $v$ into the core of $\mathcal{H}_b$. This contradicts our assumption that $\mathcal{H}_N$ does not have a blue expansion of $K_{t+1}$. Hence it follows that there exists a vertex $v' \in A$ such that $\{v,v',w\}$ is colored red for all except at most $t-1$ elements of $F\backslash\{v\}$. This finishes the proof of Claim~\ref{cl:ex:induct}. \end{proof} Now since $\abs{W} = 2st$, by the pigeonhole principle there exists some $u$ in the core of $\mathcal{H}_b$ for which there exists $W_u = \{w_1, w_2, \ldots, w_s\} \subseteq W$ such that for any $w \in W_u$, the hyperedge $\{w,u, w'\}$ is red for all $w'$ except at most $(t-1)$ elements of $F\backslash\{w\}$. Let $M(w_i)$ be the set of elements $w'$ of $F$ such that $\{u,w_i, w'\}$ is blue. Now let $W' = W_u \cup V(\mathcal{H}_b) \cup \bigcup_{i=1}^s M(w_i)$ and $\mathcal{H}'' = \mathcal{H}_N[V(\mathcal{H}_N) \backslash W']$. Note that $v(\mathcal{H}'') \geq R^3(HK_{t+1}, HK_s)$ since $2st \geq st + \binom{t}{2}+t$ (indeed, $s \ge t$ gives $st \ge t^2 \ge \binom{t}{2}+t$). Hence there either exists a blue expansion of $K_{t+1}$ or there exists a red expansion of $K_s$. If the former happens, we are done. Hence assume we have a red expansion $\mathcal{H}_r$ of $K_s$.
Suppose $\{v_1, v_2, \ldots, v_s\}$ is the core of $\mathcal{H}_r$. Now we can extend $\mathcal{H}_r$ to a red expansion of $K_{s+1}$ by adding $u$ into the core of $\mathcal{H}_r$ together with the red edges in $\{\{u,w_i, v_i\}: i\in [s]\}$. This completes the proof of the lemma. \end{proof} Now we are ready to show that $R^3(HK_t, HK_s) \leq 2(s+t) st$. The proof is by induction on $s + t$. We first show that $R^3(HK_2, HK_s) \leq 4s^2 + 8s$. This is clearly true since any blue edge in a 3-uniform hypergraph is a blue expansion of $K_2$. Hence given any $2$-edge-colored complete 3-uniform hypergraph $\mathcal{H}$ with $4s^2 + 8s$ vertices, if there is no blue edge, then all edges are red, which implies that we have a red expansion of $K_s$, since $4s^2 + 8s \geq \binom{s}{2}+s$. Similarly, $R^3(HK_t, HK_2) \leq 4t^2 + 8t$. Now assume the theorem holds for $HK_{t'}, HK_{s'}$ such that $t'+s' < t+s$. Without loss of generality, assume that $t\leq s$. Then by Lemma~\ref{th:ex:induct}, \begin{align*} R^3(HK_{t}, HK_{s}) &\leq \max\{R^3(HK_{t},HK_{s-1}), R^3(HK_{t-1},HK_{s})\} + 2(s-1)(t-1)\\ &\leq 2(s+t-1)t(s-1) + 2(s-1)(t-1)\\ &\leq 2st(s+t). \end{align*} Hence we are done by induction. \subsection{Ramsey number of suspension hypergraphs} Recall that the $r$-suspension $SK_t$ of the complete graph $K_t$ is the $r$-uniform hypergraph formed by adding a single fixed set of $r-2$ distinct new vertices to every edge in $K_t$. Clearly, $R^{r}(SK_t, SK_t) \leq R^2(K_t,K_t)+(r-2)$. The proof is simple: let $\mathcal{H}$ be a $2$-edge-colored $K^{(r)}_{R^2(K_t, K_t)+(r-2)}$. Fix a set of $(r-2)$ vertices $S$ and consider the complete graph $G$ on the remaining $R^2(K_t, K_t)$ vertices, where the color of an edge $e$ in $G$ is the same as the color of the hyperedge $e\cup S$ in $\mathcal{H}$. By the definition of the Ramsey number, there exists a monochromatic clique in $G$, which gives us the core of a monochromatic $SK_t$ in $\mathcal{H}$. 
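As a quick sanity check on the induction for the expansion bound above, the recurrence of Lemma~\ref{th:ex:induct} can be iterated numerically. The following Python sketch (illustrative only, not part of the proof) combines the recurrence with the stated base cases and confirms that the recursion never exceeds the closed form $2st(s+t)$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ramsey_upper(t, s):
    """Upper bound on R^3(HK_t, HK_s) obtained by iterating the recurrence."""
    # Base cases from the text: R^3(HK_2, HK_s) <= 4s^2 + 8s, and symmetrically.
    if t == 2:
        return 4 * s * s + 8 * s
    if s == 2:
        return 4 * t * t + 8 * t
    # Lemma: R^3(HK_t, HK_s) <= max{R^3(HK_t, HK_{s-1}), R^3(HK_{t-1}, HK_s)}
    #                            + 2(s-1)(t-1).
    return max(ramsey_upper(t, s - 1), ramsey_upper(t - 1, s)) + 2 * (s - 1) * (t - 1)

# The recursion stays within the closed-form bound 2st(s+t):
for t in range(2, 25):
    for s in range(2, 25):
        assert ramsey_upper(t, s) <= 2 * s * t * (s + t)
```

At the base case $t=2$ the two expressions coincide ($4s^2+8s = 2\cdot 2\cdot s\cdot(s+2)$), which matches the induction's starting point.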
Before we prove the lower bound, let us recall the symmetric version of the Lov\'{a}sz local lemma~\cite{Alon-Spencer}: \begin{quotation}\it Let ${\mathcal {A}}=\{A_{1},\ldots ,A_{q}\}$ be a finite set of events in the probability space $\Omega$. Suppose that each event $A_i$ is mutually independent of a set of all but at most $d$ of the other events $A_j$, and that $\Pr(A_i) \leq p$ for all $1\leq i\leq q$. If $$ep(d+1)<1,$$ then $$\Pr \left ( \displaystyle\bigwedge_{i=1}^q \overline{A_i}\right ) > 0.$$ \end{quotation} Now we can show a lower bound on $R^{r}(SK_t, SK_t)$ with the local lemma. \begin{proposition} Fix $t\geq r \geq 3$. If $$e \left ( 1 + \binom{t}{2} \binom{r}{2} \binom{n}{t-2}\right ) 2^{1-\binom{t}{2}} <1,$$ then $R^r(SK_t, SK_t) > n$. \end{proposition} \begin{proof} Let $\mathcal{H}$ be a complete $r$-uniform hypergraph on $n$ vertices. Color each hyperedge blue or red randomly and independently with probability $\frac{1}{2}$. For a set of $r-2$ vertices $S$ and another set of $t$ vertices $T$ disjoint from $S$, let $A_{S,T}$ be the event that the suspension hypergraph $\mathcal{H}_{S,T}$ with $T$ as the core and $S$ as the suspension vertex set is monochromatic. Note that for each fixed $S,T$, $$\Pr(A_{S,T}) = 2^{1-\binom{t}{2}} = p.$$ Note that $A_{S,T}$ is mutually independent of all other events $A_{S',T'}$ satisfying $E(\mathcal{H}_{S,T}) \cap E(\mathcal{H}_{S',T'}) = \emptyset$. Let us give an upper bound on the number of events $A_{S',T'}$ on which $A_{S,T}$ may depend. There are $\binom{t}{2}$ choices for a shared edge of $\mathcal{H}_{S,T}$, which contains $r$ vertices. Among these $r$ vertices, $r-2$ of them must be the suspension vertices of $\mathcal{H}_{S',T'}$. There are $\binom{r}{r-2}$ ways to choose the suspension vertices $S'$. There are then at most $\binom{n}{t-2}$ ways to choose the remaining $t-2$ vertices of $T'$. 
Hence it follows that $$d \leq \binom{t}{2} \binom{r}{2} \binom{n}{t-2}.$$ By the Lov\'{a}sz local lemma, it then follows that if $ep(d+1) <1$, we have that $$\Pr\left ( \displaystyle\bigwedge_{S,T} \overline{A_{S,T}} \right ) > 0.$$ Hence there exists a coloring of $\mathcal{H}$ without any monochromatic $SK_t$. \end{proof} \begin{remark} For any fixed $r$, this gives asymptotically the same lower bound as the Ramsey number $R^2(K_t,K_t)$, i.e. $R^r(SK_t, SK_t) > (1+o(1))\frac{\sqrt{2}}{e} t \sqrt{2}^t$. \end{remark} \section{Acknowledgements} We would like to thank J\'er\^ome Argot and Xueping Zhao for their generous help in providing the computing resources. The third author would like to thank Gyula O.H.~Katona for his hospitality and guidance during the third author's visit in Budapest. We would also like to thank the anonymous referee for their useful suggestions. The research of the first and second authors was partially supported by the National Research, Development and Innovation Office, NKFIH, grant K116769. The research of the first author was supported in part by NSF grant DMS-1600811. The research of the second author was supported in part by IBS-R029-C1.
\subsection{Fuzzification} Mass-to-charge peaks that affect the expert classification are converted to a fuzzy logic level based on their relative abundance. The truth level for a particular ion in a specific classification can be made Boolean by setting a threshold: when the abundance lies on the appropriate side of the threshold, the truth level is high. It is convenient to allow for a {\it gray} area with a piece-wise linear function mapping the abundance into a $[0,1]$ range, so that an abundance just outside the threshold can have a graduated effect on the classification. A given mineral type will require that several ions be present or absent in a range of relative abundances. The requirement for a specific ion is encoded in a fuzzy membership function $\mu_{\gamma,\chi}(A)$ where $\gamma$ is the mineral classification, $\chi$ is a specific ion denoted by a chemical symbol or the m/z of the ion, and $A$ is the mass spectrum of the sample being classified. The spectrum, $A$, can be represented as the relative abundance as a function of m/z, $a(\phi)$, where $\phi$ is the m/z. The first step in finding $\mu_{\gamma,\chi}(A)$ is locating the maximum abundance, $p$, within the error bound, $\epsilon$, of $\chi$: \begin{equation} p(A,\chi) = \max_{\phi = \chi - \epsilon}^{\chi + \epsilon}(a(\phi)). \end{equation} The error bound is required because of uncertainty due not only to drift in magnetic field strength between calibrations but also to an electric space-charge effect depending on the number of ions desorbed \cite{Marshall} (pages 244-245). { \renewcommand{\baselinestretch}{1.0} \begin{figure} \centerline{ \includegraphics[width=3.5in]{Peak.eps} } \caption{Illustration of maximum abundance within $\epsilon$ of $\chi$.\vspace{.25in} } \label{fig_peak} \end{figure} } The membership value then becomes a function mapping $p$ onto a fuzzy logic level in $[0,1]$. 
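The window maximum $p(A,\chi)$ above is straightforward to implement. The sketch below is a minimal Python illustration, assuming the spectrum is stored as a dictionary mapping m/z values to relative abundances (a hypothetical representation, not the instrument's native format):

```python
def peak_abundance(spectrum, chi, eps):
    """Maximum relative abundance a(phi) within +/- eps of the target m/z chi.

    spectrum: dict mapping m/z (float) -> relative abundance (float).
    Returns 0.0 when no peak falls inside the tolerance window.
    """
    window = [a for phi, a in spectrum.items() if chi - eps <= phi <= chi + eps]
    return max(window, default=0.0)
```

For example, an iron peak that has drifted to 55.93 m/z is still picked up when searching near 56 with a tolerance of 0.2.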
The linguistic expression for whether an ion's abundance is appropriate for a particular mineral composition can take a form similar to: ``the relative abundance of iron should be small'' or ``the relative abundance of calcium (Ca) should be high.'' The level at which the logic level is $100\%$ true or false is a judgement call, which is better handled by allowing interpolation. Experts could conceivably compromise on levels of abundance that constitute an absolute true or false and allow a function to interpolate between those levels. For lack of apparent need for a more complex interpolation, we chose to implement a piece-wise linear function. In general we could have medium relative abundance functions; but, in practice, we have only found need to define functions for relatively high or low abundances. The low (not) abundance function can be formed as the negation of a high abundance function. So, $\mu$ takes one of the two forms, shown in Fig. \ref{fig_membership}: \begin{equation} \mu_{\gamma,\chi}(p) = \left\{ \begin{array}{ll} 0 & p<l \\ \frac{ p-l }{ h-l } & l \leq p < h \\ 1 & p \geq h \end{array} \right. \end{equation} for required high abundance, or for low abundance \begin{eqnarray} \mu_{\gamma,\sim\chi}(p) & = & 1 - \mu_{\gamma,\chi}(p) \nonumber \\ & = & \left\{ \begin{array}{ll} 1 & p<l \\ \frac{ h-p }{ h-l } & l \leq p < h \\ 0 & p \geq h \end{array} \right., \end{eqnarray} where $l$ is the low logic threshold and $h$ is the high logic threshold, $l < h$, and the symbol `$\sim$' indicates negation (i.e. NOT $\chi$). The primary method for choosing $l$ and $h$ has an expert operator look at many spectra taken from various locations on a homogeneous ``known.'' From these observations, thresholds can be set and the size of the ``gray'' area chosen. 
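The two piece-wise linear forms translate directly into code. A minimal Python sketch follows; the threshold values used in the example are illustrative placeholders, not values from the actual rulebase:

```python
def mu_high(p, l, h):
    """'High abundance' membership: 0 below l, 1 at or above h, linear between."""
    if p < l:
        return 0.0
    if p >= h:
        return 1.0
    return (p - l) / (h - l)

def mu_low(p, l, h):
    """'Low abundance' membership, the negation of mu_high."""
    return 1.0 - mu_high(p, l, h)
```

With $l=0.2$ and $h=0.8$, an abundance of 0.5 falls in the gray area and yields a membership of 0.5 for the high-abundance requirement.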
{ \renewcommand{\baselinestretch}{1.0} \begin{figure} \centerline{ \includegraphics[width=3.5in]{fuzzy_membership.eps} } \caption{Examples of membership function for high and low abundance requirements.\vspace{.25in} } \label{fig_membership} \end{figure} } \subsection{Inference Rules} Inference rules are based on logical expressions of requirements. The fuzzy membership functions determine the truth value of each term in the expression. Logical operations are defined using the logic levels from the fuzzy membership values of an ion with respect to the mineral class. The ``and'' ($\wedge$) function is implemented with the product operation ($A \wedge B \rightarrow \mu_A \mu_B$), the ``or'' ($\vee$) function with the algebraic sum ($A \vee B \rightarrow \mu_A + \mu_B - \mu_A \mu_B$), and ``not'' ($\sim$) as negation, $1-\mu$. As an example, our rule for augite (AGT) has requirements on the ions of iron (Fe), titanium (Ti) and Ca. The expression for membership in the class $\mbox{AGT}$ is the product of the membership functions $\mu_{{AGT},Fe}$, $\mu_{{AGT},\sim Ti}$, and $\mu_{{AGT},Ca}$ and can be written compactly as: \begin{equation} \mu_{AGT}(A) = \bigwedge_{\chi \in \{Fe,\sim Ti,Ca\}} \mu_{{AGT},\chi}(A). \end{equation} The benefit of using the product operation over minimums and maximums is the additive effect in the ``or'' terms. In particular, ilmenite is defined as having one or more of the metals Fe, magnesium (Mg), or manganese (Mn) above a certain cumulative abundance. This allows the membership function to be constructed such that an appropriate combination of the metals will give a sufficiently high membership value. The product implementation for ``and'' terms also discounts membership of multiple terms below full membership, providing a more conservative approach (e.g. two terms of membership value $0.9$ ``and''-ed together discount the resulting membership value to $0.81$ instead of the minimum value $0.9$). 
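The logical operations, and an augite-like rule built from them, can be sketched in Python as follows. The membership values in the example are placeholders, not numbers from the actual rulebase:

```python
def f_and(*mus):
    """Fuzzy 'and' via the product operation."""
    out = 1.0
    for m in mus:
        out *= m
    return out

def f_or(a, b):
    """Fuzzy 'or' via the algebraic sum: mu_A + mu_B - mu_A * mu_B."""
    return a + b - a * b

def f_not(m):
    """Fuzzy negation."""
    return 1.0 - m

# Example: combine Fe, (not) Ti, and Ca memberships, as in the augite rule.
mu_agt = f_and(0.9, f_not(0.1), 0.9)
```

Note that `f_and(0.9, 0.9)` gives 0.81, reproducing the discounting behavior described above, while the minimum operator would return 0.9.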
\subsection{Hard Classification} Although leaving the result of the classification as the value of the fuzzy membership in each of the mineral classes is useful in some applications, our current application requires discrete classifications. Defuzzification is accomplished by finding the mineral class with the maximum membership value. A threshold, $\nu$, is set at a minimum required membership. If a spectrum does not have a maximum membership in one of the classes greater than $\nu$, then it is classified as unknown, UNK. The hard classification, $\Gamma$, is: \begin{equation}\label{hard_class} \Gamma(A) = \left\{ \begin{array}{ll} (\gamma | \mu_\gamma = \displaystyle\max_{\chi}\mu_\chi) & \mbox{for } \displaystyle\max_{\chi}\mu_\chi \geq \nu\\ \mbox{UNK} & \mbox{for } \displaystyle\max_{\chi}\mu_\chi < \nu \end{array} \right. \end{equation} As a side note, the membership value in the set UNK can be defined as the negation of the maximum membership value: $\mu_{\mbox{UNK}} = 1 - \max_{\chi}\mu_\chi$. Unknown spectra are left for the instrument operator to classify or remain as `unknown.' An unknown spectrum may occur when the desorption laser spot covers more than one class of mineral or there is a problem with the desorption process or acquired signal. An automated method for assigning a value based on the membership values and the neighboring spots in the sample is presented in Section \ref{map}. Of note, some basalt samples from the INL site have an unusually high potassium abundance not noted in the mineral literature \cite{MineralBook}. This has been reported in other mass spectrometry literature for analysis of INL basalt \cite{Ingram99}. To adjust for the high potassium abundance, spectra used by the inference engine are preprocessed to re-scale based on the highest peak excluding potassium. The software implementing the system allows particular ions to be rescaled in this way when required. 
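The defuzzification step reduces to a thresholded arg-max over the class memberships. A minimal Python sketch, with class labels and values chosen only for illustration:

```python
def hard_classify(memberships, nu):
    """Return the mineral class with the maximum membership value, or 'UNK'
    when the maximum falls below the threshold nu."""
    gamma = max(memberships, key=memberships.get)
    return gamma if memberships[gamma] >= nu else "UNK"
```

For instance, `hard_classify({"AGT": 0.72, "ILM": 0.15}, 0.5)` returns `"AGT"`, while the same spectrum with all memberships below 0.5 would be labeled `"UNK"` and left for the operator.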
\section{Introduction} \label{intro_sec} \input{intro.tex} \section{Classification of Spectra with Fuzzy Thresholds} \label{fuzzy_thresh} \input{Fuzzyclass.tex} \section{Ensemble Statistics for Generating and Refining Rulebases} \label{PeakTool} \input{Peaktool.tex} \section{Spatially Based Classification of Indeterminate Samples} \label{map} \input{Map.tex} \section{Results and Discussion} \label{results} \input{results.tex} \section{Conclusion and Future Directions} \label{conclusion} \input{new_conclussion.tex} \section{Acknowledgements} We thank Paul L. Tremblay for editorial suggestions to this paper and for being instrumental in the design and construction of the imaging FTMS. This work was supported by the U.S. Department of Energy under DOE/NE Idaho Operations Office Contract DE-AC07-05ID14517. \section{List of Symbols} \label{symbols} \input{symbol_list.tex} \newpage
\section{Introduction} In light unstable nuclei, various exotic structures, such as the breaking of magic numbers, new cluster structures, and the neutron halo structure, have been discovered. In a series of Be isotopes, it has been revealed that the structure changes rapidly with the increase of the neutron number $N$ and that the cluster structure develops in neutron-rich Be isotopes, as discussed in many theoretical and experimental studies \cite{Oertzen-rev,AMDsupp,KanadaEn'yo:2012bj,SEYA,OERTZENa,OERTZENb,KanadaEnyo:1995tb,ARAI,Dote:1997zz,ENYObe10,ITAGAKIa,ITAGAKIb,OGAWA,Descouvemont02,ENYObe11,KanadaEn'yo:2003ue,Ito:2003px,Ito:2005yy,FREER,SAITO04,Curtis:2004wr,Millin05,Freer:2006zz,Bohlen:2007qx,Curtis:2009zz,Yang14}. The cluster structure in the ground states of $^{11}$Be and $^{12}$Be is considered to play an important role in the vanishing of the neutron magic number $N=8$. For $^{11}$Be, the breaking of the $N=8$ shell has been known experimentally from the abnormal spin-parity $1/2^+$, and for $^{12}$Be, it has been suggested by slow $\beta$ decay \cite{Suzuki:1997zza} and more directly evidenced by the intruder configuration observed in 1$n$-knockout reactions \cite{Navin:2000zz,Pain:2005xw} as well as other experiments \cite{iwasaki00,iwasaki00b,shimoura03}. These nuclei have largely deformed ground states with intruder neutron configurations, showing more remarkable cluster structure than the neighboring isotope $^{10}$Be. Also in neutron-rich B isotopes, the enhancement of cluster structures has been theoretically predicted \cite{KanadaEnyo:1995ir}, whereas in neutron-rich C isotopes, no cluster structure is predicted to develop, at least in the ground states \cite{AMDsupp,thiamova2004,KanadaEn'yo:2004bi}. These facts indicate that the development of cluster structure strongly depends on the proton and neutron numbers of the system. 
A problem to be solved is how one can experimentally observe the structure change along the isotope chain, i.e., the enhancement and weakening of the cluster structure with the increase of the neutron number $N$. Since the enhanced cluster structure in neutron-rich nuclei enlarges the deformation and spatial extent of the proton density, the change of the cluster structure may affect such observables as electric quadrupole moments and charge radii. The former does not necessarily give direct information on the proton structure because it is sensitive not only to the proton distribution but also to the neutron configuration through the angular momentum coupling. Moreover, it gives no information for the $J^\pi=0^+$ ground states of even-even nuclei, in which the quadrupole moment is trivially zero. The latter, the charge radius, is usually not sensitive to the neutron configuration and reflects more directly the proton density, at least for the radial extent; therefore, the $N$ dependence of the charge radius can be a probe to clarify the change of the cluster structure. Recently, root mean square (rms) charge radii of neutron-rich Be isotopes have been precisely measured by means of isotope shift. In the systematics of charge radii of Be isotopes, the recently measured large charge radii of $^{11}$Be and $^{12}$Be can be understood by the remarkable cluster structure in their deformed ground states \cite{Nortershauser:2008vp,Krieger:2012jx}. For neutron-rich B and C isotopes, charge radii have yet to be measured, except for $^{14}$C near the stability line. Instead of isotope shift measurements, recently, a new experimental approach to determine rms radii of the point-proton density (proton radii) from the charge changing interaction cross section has been proposed and applied to B and C isotopes \cite{Yamaguchi:2011zz,estrade14}. Our aim here is to clarify how the structure change with the $N$ increase is reflected in proton radii. 
For this aim, we investigate the $N$ dependence of proton radii in the isotope chains of Be, B, and C and the influence of the change of cluster structures and intrinsic deformations on proton radii. We try to answer the question whether the $N$ dependence of proton radii can be a probe for the cluster structure in neutron-rich nuclei. In this study, we calculate the ground states of Be, B, and C isotopes with the method of antisymmetrized molecular dynamics (AMD) \cite{AMDsupp}. The method has been proven to be a useful approach to describe structures, in particular cluster structures, in light neutron-rich nuclei. Systematic studies with the simple version of AMD have predicted that the structures of Be and B isotopes change rapidly with the increase of the neutron number \cite{AMDsupp,KanadaEnyo:1995tb,KanadaEnyo:1995ir}. Advanced studies with the variation after spin and parity projections (VAP) in the AMD framework have described the breaking of the $N=8$ magicity in neutron-rich Be \cite{ENYObe11,KanadaEn'yo:2003ue}. The latter method (the AMD+VAP) describes the details of structures in ground and excited states better than the former method (the simple AMD), in which the variation is performed before the spin projection. In the present study, we apply the AMD+VAP to Be, B, and C isotopes, and discuss the structure change focusing on the $N$ dependence of the proton radius in each series of isotopes. The paper is organized as follows. We describe the framework of the AMD+VAP in Section \ref{sec:formulation}, and show the results for Be, B, and C isotopes in Section \ref{sec:results}. Section \ref{sec:discussion} discusses the structure change with the $N$ increase and its influence on proton radii. The paper concludes with a summary in Section \ref{sec:summary}. \section{Formulation of AMD+VAP}\label{sec:formulation} We describe Be, B, and C isotopes with AMD wave functions by applying the VAP method. 
For the detailed formulation of the AMD+VAP, please refer to Refs.~\cite{ENYObe10,ENYObe11,KanadaEn'yo:2003ue}. The method is basically the same as that used in those previous studies. A difference in the present calculation from Refs.~\cite{ENYObe10,ENYObe11,KanadaEn'yo:2003ue} is that we do not adopt an artificial barrier potential, which was used in the previous studies to describe highly excited resonance states. \subsection{AMD wave functions} An AMD wave function is given by a Slater determinant of Gaussian wave packets: \begin{equation} \Phi_{\rm AMD}({\bf Z}) = \frac{1}{\sqrt{A!}} {\cal{A}} \{ \varphi_1,\varphi_2,...,\varphi_A \}, \end{equation} where ${\cal{A}}$ is the antisymmetrizer, and the $i$th single-particle wave function is written as a product of spatial ($\phi_i$), intrinsic-spin ($\chi_i$), and isospin ($\tau_i$) wave functions as, \begin{eqnarray} \varphi_i&=& \phi_{{\bf X}_i}\chi_i\tau_i,\\ \phi_{{\bf X}_i}({\bf r}_j) & = & \left(\frac{2\nu}{\pi}\right)^{3/4} \exp\bigl\{-\nu({\bf r}_j-\frac{{\bf X}_i}{\sqrt{\nu}})^2\bigr\}, \label{eq:spatial}\\ \chi_i &=& (\frac{1}{2}+\xi_i)\chi_{\uparrow} + (\frac{1}{2}-\xi_i)\chi_{\downarrow}. \end{eqnarray} $\phi_{{\bf X}_i}$ and $\chi_i$ are the spatial and spin functions, and $\tau_i$ is the isospin function fixed to be up (proton) or down (neutron). Accordingly, an AMD wave function is expressed by a set of variational parameters, ${\bf Z}\equiv \{{\bf X}_1,{\bf X}_2,\ldots, {\bf X}_A,\xi_1,\xi_2,\ldots,\xi_A \}$, indicating single-nucleon Gaussian centroids and spin orientations for all nucleons. These parameters are determined by the energy variation after spin-parity projection to obtain optimized AMD wave functions for $J^\pi$ states. 
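With the standard $(2\nu/\pi)^{3/4}$ prefactor of the three-dimensional Gaussian, each single-particle wave packet is unit-normalized. This can be checked numerically; the following standalone Python sketch (illustrative, not part of any AMD code) integrates one Cartesian factor of $|\phi_{{\bf X}}|^2$ by the midpoint rule:

```python
import math

def norm_1d(nu, x0=0.3, n=4000, half_width=10.0):
    """Midpoint-rule integral of the 1-D factor of |phi_X|^2 over [-L, L]."""
    dx = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * dx
        amp = (2.0 * nu / math.pi) ** 0.25 * math.exp(-nu * (x - x0) ** 2)
        total += amp * amp * dx
    return total

nu = 0.20  # fm^-2, the width adopted for Be isotopes in the text
# The 3-D packet factorizes into three such 1-D Gaussians, so the full
# norm is norm_1d(nu)**3; it is 1 for the (2nu/pi)^{3/4} prefactor.
assert abs(norm_1d(nu) ** 3 - 1.0) < 1e-6
```

The offset `x0` plays the role of the (real part of the) Gaussian centroid and, as expected, does not affect the norm.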
Namely, in the AMD+VAP method, the parameters ${\bf X}_i$ and $\xi_{i}$ ($i=1,\ldots,A$) for the lowest $J^\pi$ state are determined so as to minimize the energy expectation value of the Hamiltonian, $\langle \Phi|H|\Phi\rangle/\langle \Phi|\Phi\rangle$, with respect to the spin-parity eigen wave function projected from an AMD wave function, $\Phi=P^{J\pi}_{MK}\Phi_{\rm AMD}({\bf Z})$. Here, $P^{J\pi}_{MK}$ is the spin-parity projection operator. In the present calculation, we choose the width parameter $\nu$ for the single-nucleon Gaussian wave packets to minimize the energies of stable nuclei ($^9$Be, $^{11}$B, and $^{12}$C) and use the fixed $\nu$ value in each series of isotopes. The adopted values are $\nu=0.20$ fm$^{-2}$ for Be isotopes, and $\nu=0.19$ fm$^{-2}$ for B and C isotopes. Fixing the $\nu$ parameter may not be appropriate for describing details of the neutron distribution in very neutron-rich nuclei. However, since our main concern in the present study is the systematics of the proton distribution, we fix the parameter to remove a possible artifact in proton radii caused by the change of $\nu$. If the size of cluster cores in neutron-rich nuclei does not change from that in stable nuclei, fixing $\nu$ can be a reasonable assumption. For a more detailed study, different width parameters for protons and neutrons or independent widths for all nucleons should be adopted, as done in the method of fermionic molecular dynamics (FMD) \cite{Feldmeier:1994he,Neff:2002nu} and an extended version of AMD \cite{furutachi09}. In the AMD framework, the existence of clusters is not assumed {\it a priori}; rather, the Gaussian centroids of all single-nucleon wave packets are treated independently. Nevertheless, if the system favors a specific cluster structure, such a structure is automatically obtained by the energy variation because the AMD model space contains wave functions for various cluster structures. 
We comment here that, in the simple AMD used in Refs.~\cite{KanadaEnyo:1995tb,KanadaEnyo:1995ir}, the energy variation was performed not after but before the spin projection (variation before projection: VBP) for the AMD wave function with fixed single-nucleon intrinsic spins. In the present study, an advanced method, the AMD+VAP, in which the VAP is performed for the AMD wave function with flexible intrinsic spins, is adopted. The AMD+VAP method describes structures of the ground and excited states of light nuclei better than the simple AMD and is also more useful for investigating details of the structure change between shell-model-like states and cluster states. Note that the AMD wave function is similar to the wave function used in FMD calculations \cite{Neff:2002nu}, though some differences exist in the width parameter and variational procedure, as well as in the adopted effective interaction. \section{Results}\label{sec:results} \subsection{Effective interactions} In the present calculation of Be and B isotopes, we used the same effective nuclear interaction as that used for $^{11}$Be and $^{12}$Be in previous studies \cite{ENYObe11,KanadaEn'yo:2003ue}. It is the MV1 force \cite{MV1} for the central force, supplemented by a two-body spin-orbit force with the same two-range Gaussian form as in the G3RS force \cite{LS}. The Coulomb force is approximated using a seven-range Gaussian form. Namely, we use the interaction parameters $m=0.65$, $b=0$, and $h=0$ for the Majorana, Bartlett, and Heisenberg terms of the central force, and the strengths $u_{I}=-u_{II}=3700$ MeV of the spin-orbit force in the calculation of Be and B isotopes. The breaking of the $N=8$ magicity in $^{11}$Be and $^{12}$Be is successfully described with this set of interaction parameters, as discussed in the previous studies \cite{ENYObe11,KanadaEn'yo:2003ue}. 
For C isotopes, we use $m=0.62$ and $b=h=0$ for the central force, which is the same parametrization as that used for $^{12}$C in the previous AMD+VAP calculations \cite{KanadaEn'yo:1998rf,KanadaEn'yo:2006ze}. In the present calculation of C isotopes, we tune the spin-orbit force strength and use $u_{I}=-u_{II}=2600$ MeV so as to reproduce the experimental excitation energies of the $2^+_1$ states in C isotopes. \subsection{Experimental data of rms proton and matter radii} In the comparison of the calculated proton radii with the experimental data, we deduce the rms proton radii ($r_p$) from the rms charge radii ($r_c$) determined by isotope shift measurements as \begin{equation} r_p=\sqrt{r^2_c-R^2_p}, \end{equation} where $R_p=0.8$ fm is the rms charge radius of an isolated proton. The experimental data of the charge radii for Be isotopes, $^{11}$B, and $^{12,14}$C are taken from Refs.~\cite{Nortershauser:2008vp,Krieger:2012jx,angeli04}. In the experimental studies of the charge changing interaction cross section ($\sigma_{\rm cc}$), the proton radii have been deduced from a Glauber model analysis of the $\sigma_{\rm cc}$. We label the proton radii thus deduced as $r_{\rm cc;G}$ in the present paper. The $r_{\rm cc;G}$ of neutron-rich B isotopes have been deduced from the $\sigma_{\rm cc}$ at $\sim$900 MeV/u in Ref.~\cite{estrade14}, and those of neutron-rich C isotopes have been deduced from the $\sigma_{\rm cc}$ at $\sim$300 MeV/u in Ref.~\cite{Yamaguchi:2011zz}. We also perform a rough evaluation of the proton radii of B and C isotopes from the experimental data of the $\sigma_{\rm cc}$ at $\sim$900 MeV/u on the C target in Ref.~\cite{chulkov00} using the following simple ansatz, \begin{equation}\label{eq:evaluated-rp} \sigma_{\rm cc}=F \pi(r_p+r_{m,^{12}{\rm C}})^2, \end{equation} where $r_{m,^{12}{\rm C}}$ is the rms matter radius of the target nucleus, $^{12}$C, and $F$ is the normalization factor for this beam energy. 
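Both deductions are simple to script. The Python sketch below (illustrative only; the numerical values used in the usage note are placeholders, not the measured data) inverts the charge-radius relation and the ansatz of Eq.~(\ref{eq:evaluated-rp}), with $F$ calibrated from a $^{12}$C reference point as described in the text:

```python
import math

R_P = 0.8  # rms charge radius of an isolated proton (fm), value used in the text

def proton_radius_from_charge(r_c):
    """r_p = sqrt(r_c**2 - R_p**2): point-proton radius from a charge radius."""
    return math.sqrt(r_c ** 2 - R_P ** 2)

def normalization_from_c12(sigma_cc_c12, r_p_c12):
    """Calibrate F from the 12C point, taking r_m(12C) = r_p(12C)."""
    return sigma_cc_c12 / (math.pi * (2.0 * r_p_c12) ** 2)

def proton_radius_from_sigma_cc(sigma_cc, F, r_m_target):
    """Invert sigma_cc = F * pi * (r_p + r_m_target)**2 for r_p."""
    return math.sqrt(sigma_cc / (F * math.pi)) - r_m_target
```

A round-trip check with synthetic numbers (e.g. a placeholder $r_p(^{12}\mathrm{C})$ of 2.33 fm) recovers the input radius, confirming the inversion is consistent with the ansatz.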
We assume that $r_{m,^{12}{\rm C}}$ equals the proton radius $r_p$ of $^{12}$C, which is experimentally known from the charge radius, and determine the factor $F$ from the $\sigma_{\rm cc}$ for the $^{12}$C beam in the same experiment. Using the common factor $F$ determined by the inputs of $r_p$ and $\sigma_{\rm cc}$ for $^{12}$C, we evaluate the proton radii for B and C isotopes from the $\sigma_{\rm cc}$ in Ref.~\cite{chulkov00}. We call the proton radii thus evaluated with the simple ansatz of Eq.~(\ref{eq:evaluated-rp}) $r_{\rm cc;S}$. Since there are many available data of the $\sigma_{\rm cc}$ for various neutron-rich isotopes in Ref.~\cite{chulkov00}, this evaluation is helpful to see the $N$ dependence of proton radii up to $N=14$ in B and C isotopes. As for the rms matter radii, the radii $r_{\rm I}$ were deduced from the interaction cross section $\sigma_{\rm I}$ using the Glauber analysis \cite{ozawa2001}. The consistency of the matter radii determined by the Glauber analysis at various beam energies has been checked (see Ref.~\cite{ozawa2001} and references therein). \subsection{Be and B isotopes} We perform the AMD+VAP calculation for the ground states of Be and B isotopes. For $^{12}$Be, in which two $0^+$ states are nearly degenerate in the low-energy region, we also calculate the $0^+_2$ state by the VAP with respect to the component orthogonal to the $0^+_1$ state, and superpose the two obtained AMD wave functions for the $0^+_1$ and $0^+_2$ states to take into account the mixing of the configurations. Figure \ref{fig:be-ene} shows the binding energies of Be and B isotopes. The reproduction of the experimental binding energies in the present calculation is not perfect because of the limitation of the effective interaction. The reproduction can be improved by fine tuning of the interaction parameters or by introducing mass-dependent interaction parameters. 
However, in the present study, we use the same parameters as the previous studies, which can describe the breaking of the neutron magicity, to discuss the structure change along the isotope chains, focusing on the structure of protons. \begin{figure}[tb] \begin{center} \includegraphics[width=5.5cm]{be-ene-fig.eps} \end{center} \vspace{0.5cm} \caption{Binding energies of Be and B isotopes. The theoretical values are calculated with the AMD+VAP using the MV1($m=0.65$)+LS($u_{I}=-u_{II}=3700$ MeV) force. $\nu=0.20$ fm$^{-2}$ and 0.19 fm$^{-2}$ are used for Be and B isotopes, respectively. \label{fig:be-ene}} \end{figure} Figure \ref{fig:be-radii} shows the rms radii of the proton, neutron, and matter distributions of Be isotopes. For $^{12}$Be, we show radii calculated before and after the superposition of the two AMD wave functions for $0^+_{1,2}$ obtained by the VAP. The proton radius is relatively large in $^7$Be and also in $^9$Be because of the remarkable cluster structure. As the neutron number $N$ increases, the proton radius becomes smallest in $^{10}$Be at $N=6$; it increases in $^{11}$Be and $^{12}$Be, which are dominated by the intruder neutron configuration, and becomes larger in $^{14}$Be. The increase of the proton radii in the $N\ge 6$ region reflects the development of cluster structure. The $N$ dependence of the proton radius is consistent with the experimental data deduced from the charge radii determined by isotope shift measurements. The trend of the $N$ dependence in the present result is also similar to the FMD predictions in the $N\le 8$ region \cite{Krieger:2012jx}. For $^{14}$Be, the present calculation predicts an increase of the proton radius because of the further development of the cluster structure and deformation, whereas the FMD calculation does not show such an increase in $^{14}$Be. In the present result, the neutron radius grows more rapidly than the proton radius as $N$ increases in the $N\ge 6$ region. 
The $N$ dependence of the matter radius, which is mainly determined by that of the neutron radius, is consistent with the experimental matter radii $r_{\rm I}$ deduced from the interaction cross section \cite{ozawa2001}, except for a jump at $N=7$ in the experimental data. The extremely large matter radius in $^{11}$Be is caused by the neutron-halo structure, which is not described well in the present calculation because the wave function is limited to a Gaussian form, which is not sufficient to describe the long tail of the halo neutron in the AMD+VAP framework. \begin{figure}[tb] \begin{center} \includegraphics[width=5.5cm]{be-radii.eps} \end{center} \vspace{0.5cm} \caption{ \label{fig:be-radii} Proton radii, neutron radii, and matter radii calculated with the AMD+VAP. For $^{12}$Be, the radius calculated with the single AMD wave function for each of the $0^+_1$ and $0^+_2$ states before the superposition is also shown (AMD-single). The radii of AMD-single for the $0^+_1$ are almost equal to those for the ground state after the superposition. The experimental proton radii are those deduced from the experimental charge radii \cite{Nortershauser:2008vp,Krieger:2012jx,angeli04}. The experimental matter radii ($r_{\rm I}$) deduced from the interaction cross section \cite{ozawa2001} are also shown. } \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=5.5cm]{b-radii.eps} \end{center} \vspace{0.5cm} \caption{Proton radii, neutron radii, and matter radii calculated with the AMD+VAP. The experimental proton radius for $^{11}$B is deduced from the experimental charge radius \cite{angeli04}. The proton radii $r_{\rm cc;G}$ deduced from the charge changing interaction cross section $\sigma_{\rm cc}$ by the Glauber analysis in Ref.~\cite{estrade14}, and the proton radii $r_{\rm cc;S}$ evaluated from $\sigma_{\rm cc}$ in Ref.~\cite{chulkov00} using Eq.~(\ref{eq:evaluated-rp}) are also shown. 
The experimental matter radii ($r_{\rm I}$) are those deduced from the interaction cross section \cite{ozawa2001}. \label{fig:b-radii}} \end{figure} \begin{table}[ht] \caption{\label{tab:b-moment} Electric quadrupole moments and magnetic dipole moments of B isotopes. Theoretical values are calculated with the AMD+VAP. The experimental data are taken from Refs.~\cite{nucldata,Okuno1995,Ueno:1995hp,Izumi:1995mm,Ogawa:2003gp}. } \begin{center} \begin{tabular}{ccccc} \hline & \multicolumn{2}{c}{AMD} & \multicolumn{2}{c}{exp.} \\ & $\mu$($\mu_N$) & $Q$ (mb) & $\mu$($\mu_N$) & $Q$ (mb) \\ $^{9}$B &2.65 & 6.54 & & \\ $^{11}$B &2.79 & 3.90 & 2.689 &4.065(0.026)\\ $^{13}$B &2.97 & 3.65 & 3.178 &3.693(0.11)\\ $^{15}$B &2.64 & 4.15 & 2.650(0.013) & 3.80(0.10)\\ $^{17}$B &2.62 & 4.90 & 2.545(0.02) & 3.86(0.15) \\ $^{19}$B &2.75 & 3.79 & & \\ \hline \end{tabular} \end{center} \end{table} Figure \ref{fig:b-radii} shows the rms radii for B isotopes. As $N$ increases, the calculated proton radius becomes smallest at $N=6$ and increases in the $6\le N \le 12$ region in consequence of the developed cluster structure in the deformed neutron structure. The proton radius decreases from $N=12$ to $N=14$ because of the weakening of the cluster structure in $^{19}$B. Note that the weakening of the cluster structure in $^{19}$B has not been obtained in the previous study in Ref.~\cite{KanadaEnyo:1995ir}, in which the adopted spin-orbit force was too weak to describe the shape coexistence in $N=14$ isotones \cite{KanadaEn'yo:2004cv}. In B isotopes, the charge radius is experimentally known only for $^{11}$B. We show, in Fig.~\ref{fig:b-radii}, the experimental data of proton radii $r_{\rm cc;G}$ deduced from the charge changing interaction cross section $\sigma_{\rm cc}$ by the Glauber analysis reported in Ref.~\cite{estrade14}. We also show the proton radii $r_{\rm cc;S}$ evaluated from $\sigma_{\rm cc}$ in Ref.~\cite{chulkov00} using Eq.~\ref{eq:evaluated-rp}. 
The $N$ dependence of $r_{\rm cc;S}$ is consistent with that of $r_{\rm cc;G}$ for $^{11}$B, $^{13}$B, and $^{15}$B, but it differs at $N=12$ for $^{17}$B. The difference at $N=12$ comes, in principle, from the discrepancy of the $\sigma_{\rm cc}$ between the two experiments of Refs.~\cite{chulkov00} and \cite{estrade14}. The present calculation with the AMD+VAP shows the $N$ dependence consistent with $r_{\rm cc;G}$ deduced from $\sigma_{\rm cc}$ in Ref.~\cite{estrade14}. The neutron and matter radii show $N$ dependences similar to each other. They show a kink at $N=6$ and an increasing behavior in the $6\le N \le 12$ region. The experimental matter radii $r_{\rm I}$ deduced from the interaction cross section show a monotonic increase of matter radii in the $6\le N \le 14$ region and are consistent with the present result except for $^{19}$B. The present calculation probably underestimates the large neutron radius of $^{19}$B caused by a neutron halo structure. The present calculation predicts the kink at $N=6$ in the $N$ dependences of proton, neutron, and matter radii, which is consistent with the experimental proton and matter radii. It is interesting that the kink exists not at the $N=8$ magic number but at $N=6$ in B isotopes. In order to discuss the $N$ dependence of the proton radius around $N=12$ in more detail, we also investigate moments of the $J^\pi=3/2^-$ ground states of B isotopes. Table \ref{tab:b-moment} shows the calculated electric quadrupole moments ($Q$) and magnetic moments ($\mu$) together with the experimental data. It is found that the present calculation reasonably reproduces the $Q$ moments of $^{11}$B, $^{13}$B, and $^{15}$B, but it overestimates the $Q$ moment of $^{17}$B. Since the experimental $\mu$ moment is smallest in $^{17}$B, it is likely that the contribution of the proton orbital angular momentum to the total spin $3/2^-$ is somewhat quenched in the realistic ground state of $^{17}$B, which usually reduces the $Q$ moment.
Another possibility is the weakening of the cluster structure in $^{17}$B, which reduces both the $Q$ moment and $r_p$. In the present calculation, neither quenching of the proton orbital angular momentum contribution nor weakening of the cluster structure is obtained in $^{17}$B. A more precise measurement of the proton radius of $^{17}$B is required. \subsection{C isotopes} The $0^+_1$ and $2^+_1$ states of C isotopes are calculated with the AMD+VAP. Figure \ref{fig:c-ene-be2} shows the binding energy, the $2^+_1$ excitation energy, and $B(E2;2^+_1\rightarrow 0^+)$ of C isotopes. The present calculation reasonably reproduces the experimental data except for $E_x(2^+_1)$ in $^{20}$C and the $B(E2)$ value in $^{14}$C, which are overestimated by about a factor of two. Figure \ref{fig:c-radii} shows the rms proton, neutron, and matter radii of C isotopes. Even though the neutron and matter radii increase in the $N\ge 6$ region as $N$ increases, the proton radius is almost unchanged. The weak $N$ dependence of the proton radius indicates the insensitivity of the proton distribution to the neutron structure. This is in contrast to the cases of Be and B isotopes, which have a rather strong $N$ dependence of proton radii. The $N$ dependence of the matter radius in the present result is consistent with the experimental $r_{\rm I}$ deduced from the interaction cross section. The proton radii $r_{\rm cc;S}$ evaluated from the experimental data of the $\sigma_{\rm cc}$ at approximately 900 MeV/u show a weak $N$ dependence in the $8\le N \le 12$ region and seem to be consistent with the present prediction. The experimental $r_{\rm cc;G}$ for $^{16}$C, deduced from the $\sigma_{\rm cc}$ at approximately 300 MeV/u by the Glauber analysis~\cite{Yamaguchi:2011zz}, seems to deviate somewhat from the other data.
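The experimental "proton radii reduced from charge radii" referred to throughout involve unfolding the finite charge sizes of the nucleons from the measured rms charge radius. As an illustrative aside, a minimal Python sketch of one commonly used unfolding relation is given below; the nucleon mean-square radii and the Darwin-Foldy term are standard literature values quoted here as assumptions, not necessarily the ones used in the analyses cited above, and the input charge radius is hypothetical.

```python
import math

# Commonly quoted nucleon finite-size corrections, in fm^2 (assumed values):
# proton and neutron mean-square charge radii, and the relativistic
# Darwin-Foldy term 3*hbar^2/(4 m_p^2 c^2).
R2_PROTON = 0.709
R2_NEUTRON = -0.1155
DARWIN_FOLDY = 0.033

def point_proton_radius(r_charge, Z, N):
    """Rms point-proton radius (fm) unfolded from an rms charge radius (fm)."""
    r2 = r_charge**2 - R2_PROTON - (N / Z) * R2_NEUTRON - DARWIN_FOLDY
    return math.sqrt(r2)

# Hypothetical input: a charge radius of 2.50 fm for a Z=4, N=8 nucleus.
print(round(point_proton_radius(2.50, Z=4, N=8), 3))  # 2.396
```

Because the neutron mean-square charge radius is negative, a large neutron excess slightly raises the point-proton radius extracted from a given charge radius.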
\begin{figure}[tb] \begin{center} \includegraphics[width=5.5cm]{c-ene-be2.eps} \end{center} \vspace{0.5cm} \caption{Binding energy, $2^+$ excitation energy, and $E2$ transition strength of C isotopes. The theoretical values are calculated with the AMD+VAP ($\nu=0.19$ fm$^{-2}$) using the MV1($m=0.62$)+LS($u_{I}=-u_{II}=2600$ MeV) force. The experimental data are taken from Refs.~\cite{nucldata,McCutchan:2012tw,Wiedeking:2008zzb,Ong:2007jb,Voss:2012zz,Petri:2011zz}. Theoretical values for $B(E2)$ of the shell model calculation \cite{Sagawa:2004ut} are also shown. \label{fig:c-ene-be2}} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=5.5cm]{c-radii.eps} \end{center} \vspace{0.5cm} \caption{ Proton radii, neutron radii, and matter radii calculated with the AMD+VAP. The experimental proton radii for $^{12,14}$C are reduced from the experimental charge radius \cite{angeli04}. The proton radius $r_{\rm cc;G}$ of $^{16}$C deduced from the $\sigma_{\rm cc}$ by the Glauber analysis in Ref.~\cite{Yamaguchi:2011zz}, and the proton radii $r_{\rm cc;S}$ evaluated from the $\sigma_{\rm cc}$ in Ref.~\cite{chulkov00} using Eq.~\ref{eq:evaluated-rp} are also shown. The experimental matter radii ($r_{\rm I}$) are those deduced from the interaction cross section \cite{ozawa2001}. \label{fig:c-radii}} \end{figure} \section{Discussion}\label{sec:discussion} In this section, we describe the intrinsic structure change with the increase of the neutron number in each series of isotopes and discuss its effect on the $N$ dependence of proton radii. Figure \ref{fig:bebc-dense} shows the distributions of proton, neutron, and matter densities of Be, B, and C isotopes obtained by the AMD+VAP. The density distributions of intrinsic states before the spin and parity projections are displayed. In all series of Be, B, and C isotopes, the intrinsic neutron structures change rapidly with the increase of $N$.
\begin{figure}[tb] \begin{center} \includegraphics[width=15.0cm]{bebc-dense.eps} \end{center} \vspace{0.5cm} \caption{(Color online) Distributions of proton, neutron, and matter densities calculated with the AMD+VAP. The densities of intrinsic states are integrated with respect to the $z$ axis and plotted on the $x$-$y$ plane. Here, the axes of the intrinsic frame are chosen so as to be $\langle x^2\rangle \ge \langle y^2\rangle \ge \langle z^2\rangle$. \label{fig:bebc-dense}} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=5.5cm]{be-raa-fig.eps} \end{center} \vspace{0.5cm} \caption{$\alpha$-$\alpha$ distance in Be isotopes. \label{fig:be-raa}} \end{figure} In Be isotopes, the $2\alpha$ cluster core structure is formed as shown in the dumbbell shape in the proton density. Following the development of the prolate neutron deformation, the cluster structure in Be isotopes is enhanced in the $7\le N \le 10$ region, resulting in the increase of the proton radius in this region. Figure~\ref{fig:be-raa} shows the $\alpha$-$\alpha$ distance measured from the Gaussian centroids of the four protons as $|{\bf X}_1+{\bf X}_2-{\bf X}_3-{\bf X}_4|/(2\sqrt{\nu})$, which indicates the degree of the $2\alpha$ cluster development in Be isotopes. The $\alpha$-$\alpha$ distance well describes the $N$ dependence of the proton radius in Be isotopes. In B isotopes, the neutron density is most compact at $N=6$ for $^{11}$B because of the $p_{3/2}$ sub-shell closure feature. Also the proton structure in $^{11}$B is compact and shows no cluster structure, whereas, in neutron-rich B isotopes with $N\ge 8$, the two-center cluster structure develops as shown in the proton distribution. The development of the cluster structure is remarkable at $N=10$ and $N=12$ for $^{15}$B and $^{17}$B, resulting in the enhanced proton radii of these nuclei, whereas it slightly weakens in $^{19}$B.
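The $\alpha$-$\alpha$ distance measure used in Fig.~\ref{fig:be-raa}, i.e., the distance between the mean positions of the two proton pairs in units set by the width parameter $\nu$, can be sketched as follows (Python; the centroid values are hypothetical, chosen only to make the geometry transparent):

```python
import math

def alpha_alpha_distance(X, nu):
    """Distance (fm) between the mean positions of the two proton pairs,
    |X1 + X2 - X3 - X4| / (2 sqrt(nu)), computed from the Gaussian centroid
    parameters X1..X4 of the four protons; nu is in fm^-2."""
    X1, X2, X3, X4 = X
    d = [X1[i] + X2[i] - X3[i] - X4[i] for i in range(3)]
    return math.sqrt(sum(c * c for c in d)) / (2.0 * math.sqrt(nu))

# Hypothetical centroids: the two proton pairs placed symmetrically on the x axis.
X = ([1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [-1.0, 0.0, 0.0])
print(alpha_alpha_distance(X, nu=0.20))  # 2/sqrt(0.20), about 4.472 fm
```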
In C isotopes, the proton density always stays in a compact region in neutron-rich C with $N\ge 8$ even though the neutron structure rapidly changes with the increase of $N$. It indicates the robustness of the proton structure of the $Z=6$ system in neutron-rich C isotopes, in which protons are deeply bound. The stable proton structure is reflected in the weak $N$ dependence of the proton radius. As discussed above, in neutron-rich Be and B isotopes, which have two-center cluster structures, the proton structure changes sensitively following the neutron structure change. In contrast, in C isotopes, the proton structure is insensitive to the neutron structure and has the weak $N$ dependence. The sensitivity of the proton structure to the neutron structure is essential in the $N$ dependence of the proton radius. Development and weakening of the two-center cluster structures in Be and B isotopes play an important role in the change of proton radii with the $N$ increase. \begin{figure}[tb] \begin{center} \includegraphics[width=6cm]{be-defo-rp-fig.eps} \end{center} \vspace{0.5cm} \caption{Deformation parameter $\beta$ for proton, neutron, and matter densities, and proton radii of Be, B, and C isotopes calculated with the AMD+VAP. \label{fig:be-defo-rp}} \end{figure} To see how the neutron structure change affects the $N$ dependence of proton radii through the proton structure change, we show, in Fig.~\ref{fig:be-defo-rp}, the $N$ dependence of the deformation parameters $\beta_p$ and $\beta_n$ for proton and neutron densities, respectively, in the intrinsic wave functions, compared with the $N$ dependence of proton radii in Be, B, and C isotopes. Here, the definition of $\beta$ is that given in Ref.~\cite{KanadaEn'yo:1996hi}. In Be isotopes, the change of the proton deformation correlates with the neutron deformation except for $^{14}$Be at $N=10$.
In $^{14}$Be, the neutron deformation is not as large as that in $^{12}$Be, but the wide distribution of the neutron density stretches the two-center proton density, resulting in a larger proton deformation than that in $^{12}$Be. The proton deformation thus traces the $N$ dependence of the proton radius in Be isotopes. Also in B isotopes, the change of proton deformation strongly correlates with the neutron deformation. The $N$ dependence of the proton deformation is consistent with that of the proton radius in the neutron-rich $N\ge 8$ region, in which B isotopes have the two-center cluster structure as mentioned previously. However, in the region from $N=6$ to $N=8$, the $N$ dependence of the proton radius is opposite to that of the proton deformation. Namely, the proton radius slightly increases from $^{11}$B to $^{13}$B even though the deformation becomes small at the neutron magic number $N=8$. As discussed in the previous section, since $^{11}$B has the smallest neutron radius and no cluster structure, it has the smallest proton radius in B isotopes. In C isotopes, the change of $\beta_p$ is consistent with that of $\beta_n$. Note that the consistency between $\beta_p$ and $\beta_n$ does not necessarily mean that the proton and neutron density distributions have the same shape; in fact, the $\gamma$ parameters for the proton and neutron distributions differ from each other in some C isotopes. The $N$ dependences of proton and neutron deformations in C isotopes are weaker than those in Be and B isotopes. Moreover, the change of proton deformation produces only a small change in the proton radii. This situation of neutron-rich C isotopes having no cluster structure is different from the cases of neutron-rich Be and B isotopes having two-center cluster structures, in which the proton radius correlates with the proton deformation.
As a result, proton radii in C isotopes are insensitive to the neutron structure change and do not depend so much on the neutron number. The weak $N$ dependence of the proton radius in C isotopes is considered to originate in the stable oblate proton deformation and the non-cluster structure. In the systematic analysis of the structure change and its effect on proton radii in Be, B, and C isotopes, we can reach a more general picture that, in light nuclei, the strong $N$ dependence of proton radii is found in the isotopes that have prolate deformations in both proton and neutron densities. In neutron-rich Be and B isotopes, the prolate proton deformation is caused by the development of two-center cluster structure. Since the cluster structure can be easily stretched by the prolate neutron deformation, the central proton density becomes low and the proton radii can be enhanced. In other words, the decrease of the central proton density in the developed cluster structure in neutron-rich nuclei is important for the sensitivity of proton radii to the structure change. Consequently, the $N$ dependence of proton radii can be a probe to observe the development of cluster structure. \section{Summary}\label{sec:summary} We investigated the $N$ dependence of proton radii of Be, B, and C isotopes. In the result of the AMD+VAP calculation for Be and B isotopes, we found that the proton radius sensitively reflects the neutron structure change through the development of cluster structure, in particular, in neutron-rich nuclei. In contrast, the proton radius in C isotopes shows a weak $N$ dependence because of the stability of the proton structure in $Z=6$ nuclei. We compared the $N$ dependence of the calculated proton radii with that of the experimental radii reduced from the charge radii measured by means of isotope shifts and those deduced from the charge changing interaction cross section, and found that the present result is consistent with the existing experimental data.
In the analysis of the structure change and its effect on proton radii in Be, B, and C isotopes, we found that the $N$ dependence of proton radii can be a probe to clarify enhancement and weakening of cluster structures. In neutron-rich Be and B nuclei, the two-center cluster structure is enhanced in the prolately deformed neutron structure. The $N$ dependence of proton radii reflects the cluster structure change rather sensitively, because the central proton density becomes low as a consequence of the stretching of the cluster structure. Precise measurements of proton radii for B and C isotopes are required to confirm the cluster structure in neutron-rich B isotopes and the non-cluster structure in C isotopes. \section*{Acknowledgments} The author would like to thank Prof.~Tanihata and Prof.~Kanungo for fruitful discussions. She also thanks Prof.~Kimura for valuable comments. The computational calculations of this work were performed using the supercomputer at YITP. This work was supported by JSPS KAKENHI Grant Number 26400270.
\section{#1}} \setcounter{footnote}{1} \def\textmatrix#1&#2\\#3&#4\\{\bigl({#1 \atop #3}\ {#2 \atop #4}\bigr)} \newcommand{\bydef}{\stackrel{\rm def}{=}} \newcommand{\cle}{{\mathcal{E}}} \newcommand{\clh}{\mathcal{H}} \newcommand{\clk}{{\mathcal{K}}} \newcommand{\cll}{{\mathcal{L}}} \newcommand{\clm}{{\mathcal{M}}} \newcommand{\clN}{\mathcal{N}} \newcommand{\zbar}{{\overline{z}}} \newcommand{\wbar}{{\overline{w}}} \newcommand{\Dbar}{\overline{\mathbb D}} \newcommand{\Ebar}{\overline{E}} \def\C{\mathbb{C}} \def\N{\mathbb{N}} \def\Q{\mathbb{Q}} \def\R{\mathbb{R}} \def\CF{\mathcal{F}} \def\s{\psi^{-1}} \def\p{\varphi^{-1}} \def\v{\varphi} \def\bl{\boldsymbol} \def\o{\omega} \def\O{\Omega} \def\K{\mathcal K} \def\D{\mathbb D} \def\L{\Longrightarrow} \def\ov{\overline} \def\lo{\longrightarrow} \def\t{\tilde} \def\m{\mathcal} \def\mb{\mathbb} \def\mr{\mathrm} \def\w{\widehat} \def\a{\alpha} \def\b{\beta} \def\g{\gamma} \def\d{\displaystyle\sum} \def\j{\mathcal J} \def\wi{\widetilde} \def\e{equivalent} \def\eq{equivalence} \def\mo{multiplication operator~} \def\mos{multiplication operators~} \def\i{\prime} \def\n{normalised kernel~} \def\r{respectively} \def\u{unitarily equivalent} \def\h{ Hermitian holomorphic vector bundle} \def\ec{equivalence class} \def\ue{unitary equivalence} \def\rk{reproducing kernel~} \def\rks{reproducing kernels~} \def\ul{\underline} \def\W{With respect to} \def\wr{with respect to } \def\H{homogeneous} \def\on{orthonormal} \def\c{curvature~} \def\cs{curvatures~} \def\as{associated} \def\ho{\homogeneity} \def\Mob{\mbox{M\"{o}b~}} \def\mob{\mbox{M\"{o}b}} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{ex}[thm]{Example} \newtheorem{Notation}[thm]{Notation} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}}
\newcommand{\Bea}{\begin{eqnarray*}} \newcommand{\Eea}{\end{eqnarray*}} \newcommand{\inner}[2]{\langle #1,#2 \rangle} \newcommand{\bm}[1]{\mb{\boldmath ${#1}$}} \newcounter{cnt1} \newcounter{cnt2} \newcounter{cnt3} \newcommand{\blr}{\begin{list}{$($\roman{cnt1}$)$} {\usecounter{cnt1} \setlength{\topsep}{0pt} \setlength{\itemsep}{0pt}}} \newcommand{\bla}{\begin{list}{$($\alph{cnt2}$)$} {\usecounter{cnt2} \setlength{\topsep}{0pt} \setlength{\itemsep}{0pt}}} \newcommand{\bln}{\begin{list}{$($\arabic{cnt3}$)$} {\usecounter{cnt3} \setlength{\topsep}{0pt} \setlength{\itemsep}{0pt}}} \newcommand{\el}{\end{list}} \font\myfont=cmr12 at 16pt \usepackage{enumerate} \usepackage{color} \title[The Agler-Young class]{\myfont The Agler-Young class} \author[Bhattacharyya]{Tirthankar Bhattacharyya} \address[Bhattacharyya]{Department of Mathematics, Indian Institute of Science, Bangalore 560 012, India.} \email{tirtha@iisc.ac.in} \author[Shyam Roy]{Subrata Shyam Roy} \address[Shyam Roy]{Indian Institute of Science Education and Research Kolkata, Campus Road, Mohanpur, West Bengal 741 246, India. } \email{ssroy@iiserkol.ac.in} \author[Yadav]{Tapesh Yadav} \address[Yadav]{Department of Mathematics, Indian Institute of Science, Bangalore 560 012, India.} \email{tapeshyadav1@gmail.com} \begin{document} \thanks{Subject classification: Primary 47A13, 47A62, 47B35; Secondary 30H10.} \thanks{Key words and phrases: The Agler-Young class, Toeplitz operators, Dilation, Fundamental functions, Fundamental operators.} \thanks{This research is supported by University Grants Commission, India via CAS} \begin{abstract} \vspace*{5mm} This note introduces a special class of tuples of bounded operators on a Hilbert space. It is called the Agler-Young class. Major results about this class include a Wold decomposition and a dilation theorem. The structure of the dilation is completely spelt out.
A characterization of this class using the hereditary functional calculus of Agler is obtained and examples are discussed. Toeplitz operators play a major role in this note. An Agler-Young pair arising from a truncated Toeplitz operator is characterized. Thus, we extend results obtained in the case of commuting operators by several authors over many decades to the non-commutative situation. The results for the commuting case can be recovered as special cases. \end{abstract} \maketitle \section{Introduction} \subsection{Block Toeplitz operators} In Hilbert space operator theory, it is important to identify a special class of nice operators or operator tuples and to decode its structure in terms of simpler objects in the same class. An example of this kind of endeavour that has stood the test of time is the Wold decomposition of an isometry: given an isometry $A$ on a Hilbert space $\mathcal H$, there are two $A$-reducing subspaces $\mathcal H_1$ and $\mathcal H_2$ such that $\mathcal H = \mathcal H_1 \oplus \mathcal H_2$, $A |_{\mathcal H_1} $ is a unitary and $A |_{\mathcal H_2} $ is a unilateral shift (of some multiplicity, possibly infinite). This simple theorem has profound applications in different areas of mathematics and statistics, see \cite{BDF}, \cite{BKS}, \cite{CPS}, \cite{GG}, \cite{GS}, \cite{HL}, \cite{KMN}, \cite{KM} and \cite{KO}. It is clear, then, that for a Wold type decomposition to work for an operator (or a tuple of operators), a crucial ingredient is the unilateral shift (of multiplicity one or higher). This is the simplest example of a Toeplitz operator. Let us collect some preliminaries of {\em block Toeplitz operators} from the seminal paper of Rabindranathan \cite{Dadu}. Let $\mathcal E$ be a Hilbert space. Let $\mathcal O(\mathbb D, \mathcal E)$ be the class of all $\mathcal E$ valued holomorphic functions on the open unit disc $\mathbb D = \{ z \in \mathbb C : |z| < 1\}$.
Let $$H^2(\mathcal E) = \{ f = \sum_{k=0}^\infty a_k z^k \in \mathcal O(\mathbb D, \mathcal E) : \{a_k\}_{k\ge 0} \subset \mathcal E \mbox{ and } \| f \|^2 = \sum_{k=0}^\infty \| a_k \|^2 < \infty\}.$$ If $\mathcal E = \mathbb C$, we just write $H^2$. The space $H^2(\mathcal E)$ is isometrically isomorphic to $H^2 \otimes \mathcal E$ and sometimes this identification will be used without further mentioning it. \begin{defn} \label{BlockT} If $\v$ is a $\mathcal B(\mathcal E)$ valued function in $L^\infty(\mathcal B(\mathcal E))$ defined on the unit circle $\mathbb T = \{z\in \mathbb C: |z| = 1\}$, then the Toeplitz operator $T_\v$ on $H^2(\mathcal E)$ is defined as $$ T_\v g = P_+ M_\v g \mbox{ for } g \in H^2(\mathcal E)$$ where $M_\v$ is the multiplication operator on $L^2(\mathcal E)$ and $P_+$ is the projection from $L^2(\mathcal E)$ onto $H^2(\mathcal E)$. \end{defn} Here the $L^\infty$ and $L^2$ are with respect to the normalized Haar measure of the circle group and we consider the natural embedding of $H^2(\mathcal E)$ as a subspace of $L^2(\mathcal E)$. If $\dim \mathcal E > 1$, then the Toeplitz operator $T_\varphi$ in the definition above is popularly called the {\em block} Toeplitz operator. We shall not always use this terminology. It will always be clear from the context whether the Hardy space concerned consists of scalar valued functions or vector valued functions. \subsection{A canonical element} In search of the right class of operator tuples that will play the role that the unilateral shift so efficiently played in the Wold decomposition theorem, consider the following example. Every $L^2(\mathcal B(\mathcal E))$ function has a Fourier series expansion with respect to the natural basis $\{ e_n(\theta) = \exp({in\theta}), n \in \mathbb Z\}$ of $L^2$ (see, for example, Lemma 4.2.8 of \cite{SG}). An $L^\infty(\mathcal B(\mathcal E))$ function $\varphi$ is in $L^2(\mathcal B(\mathcal E))$. Let $$\varphi = \sum_{n \in \mathbb Z} A_n e_n$$ be its Fourier series expansion.
Here, the $A_n$ are from $\mathcal B(\mathcal E)$. The function $\varphi$ is said to be holomorphic if all the negative Fourier coefficients are $0$. Let $H^\infty\big(\m B (\cle)\big)$ denote the subalgebra of $L^\infty(\mathcal B(\mathcal E))$ that consists of all such holomorphic elements of $L^\infty(\mathcal B(\mathcal E))$. Consider an $(n-1)$-tuple $\bl f = (f_1, f_2, \ldots , f_{n-1})$ of functions from $H^\infty\big(\m B (\cle)\big)$. Set $$\v_i(z) = z{f_i}(z) + {f_{n-i}}(z)^* \mbox{ for } i=1,2, \ldots ,n-1.$$ This tuple of functions $(\v_1, \v_2, \ldots ,\v_{n-1})$ defined on $\mathbb T$ and taking values in $\mathcal B(\mathcal E)$ is called the {\em co-analytic extension} of $(f_1, f_2, \ldots ,f_{n-1})$. Let $S_i = T_{\v_i}$ for $i=1,2, \ldots , n-1$ and let $S_n$ be the pure isometry $M_z$ on $H^2(\cle)$. For $z$ on the unit circle, $$ \v_{n-i}(z)^* z = (\zbar f_{n-i}(z)^* + f_i(z)) z = f_{n-i}(z)^* + zf_i(z) = \v_i(z).$$ So, $\underline{S} = (S_1, S_2, \ldots,S_n)$ has the property that $S_n$ is a unilateral shift (of multiplicity equal to the dimension of the space $\cle = \mathcal D_{S_n^*}$) and the rest of the $S_i$ are Toeplitz operators that satisfy $$ S_i = S_{n-i}^*S_n.$$ \begin{defn} The tuple $\underline{S}$ described above is called the canonical Agler-Young isometry associated with the function tuple $\bl f$. The functions $f_1, f_2, \ldots , f_{n-1}$ are called the fundamental functions of the canonical Agler-Young isometry $\underline{S}$. \end{defn} \subsection{The Agler-Young class} We shall work towards a Wold decomposition of an Agler-Young isometry defined below. If $T$ is a contraction, let $D_{T} = (I - T^*T)^{1/2}$ and $\mathcal D_{T} = \overline{\rm Ran} D_{T}$. This notation goes back to Sz.-Nagy.
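The relation $S_i = S_{n-i}^*S_n$ satisfied by a canonical Agler-Young isometry can be illustrated numerically on finite Toeplitz truncations, where the identity holds in all but the last column (a boundary effect of truncating the shift). The Python sketch below treats the scalar case $n=3$ with hypothetical polynomial fundamental functions and uses the standard matrix representation $(T_\varphi)_{jl} = \hat{\varphi}(j-l)$.

```python
import numpy as np

def toeplitz_from_symbol(coeffs, N):
    """N x N truncation of the Toeplitz operator with Fourier coefficients
    given as a dict {k: phi_hat(k)}; entry (j, l) equals phi_hat(j - l)."""
    T = np.zeros((N, N), dtype=complex)
    for j in range(N):
        for l in range(N):
            T[j, l] = coeffs.get(j - l, 0.0)
    return T

N = 8
# Hypothetical fundamental functions: f1(z) = 1 + 0.5 z, f2(z) = 0.3 - 0.2 z.
# Their co-analytic extensions phi_i = z f_i + conj(f_{3-i}) have coefficients:
phi1 = {-1: -0.2, 0: 0.3, 1: 1.0, 2: 0.5}   # z f1 + conj(f2)
phi2 = {-1: 0.5, 0: 1.0, 1: 0.3, 2: -0.2}   # z f2 + conj(f1)

S1 = toeplitz_from_symbol(phi1, N)
S2 = toeplitz_from_symbol(phi2, N)
S3 = toeplitz_from_symbol({1: 1.0}, N)      # truncation of the shift T_z

# S1 = S2* S3 holds exactly for the infinite operators; on the truncation
# it holds in every column except the last one.
lhs = S2.conj().T @ S3
print(np.allclose(lhs[:, :N-1], S1[:, :N-1]))  # True
```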
\begin{defn} For a tuple of bounded operators $\underline{S} = (S_1, S_2, \ldots,S_n)$ on a Hilbert space $\mathcal H$ such that $S_n$ is a contraction, the $n-1$ operator equations \begin{equation} \label{AYdef} S_i - S_{n-i}^*S_n = D_{S_n} X_i D_{S_n} \text{ for } i=1,2, \ldots , n-1 \text{ and } X_{i} \in \mathcal B(\mathcal D_{S_n})\end{equation} are called its fundamental equations. The operator tuple $\underline{S}$ is said to be in the Agler-Young class $AY_n$ or called an Agler-Young tuple if $S_n$ is a contraction and $\underline{S}$ satisfies the fundamental equations. \end{defn} For an Agler-Young tuple $(S_1, S_2, \ldots,S_n)$, the solution tuple $(X_1, X_2, \ldots ,X_{n-1})$ is unique and will be called the {\em fundamental operator tuple} of $\underline{S}$. An Agler-Young tuple is called an {\em Agler-Young isometry} if $S_n$ is an isometry. An Agler-Young isometry is called a {\em pure} Agler-Young isometry if the isometry $S_n$ is pure, i.e., $(S_n^*)^k$ converges strongly to $0$ as $k \rightarrow \infty$, i.e., $S_n$ is a shift of some multiplicity. The canonical Agler-Young isometry defined above is a pure Agler-Young isometry. One still needs a concept similar to that of a unitary, i.e., the other ingredient in the Wold decomposition. \begin{defn} An Agler-Young isometry is called an Agler-Young unitary if $S_n$ is a unitary operator. \end{defn} An Agler-Young unitary $\underline{S} = (S_1, S_2, \ldots,S_n)$ has the property that $S_n$ is a unitary operator that commutes with each $S_i$ for $i=1,2, \ldots, n-1$. Indeed, $$S_nS_i = S_n S_{n-i}^*S_n = S_n(S_i^*S_n)^*S_n = S_nS_n^*S_iS_n = S_iS_n.$$ From the definition, it is clear that a tuple of bounded operators $\underline{S} = (S_1, S_2, \ldots,S_n)$ is an Agler-Young isometry if and only if $S_n$ is an isometry and $S_i = S_{n-i}^*S_n$ for $i=1,2, \ldots ,n-1$.
This is a characterization which can be immediately used to prove that the restriction of an Agler-Young isometry to an invariant subspace is again an Agler-Young isometry. Indeed, consider an Agler-Young isometry $\underline{S} = (S_1, S_2, \ldots,S_n)$ on a Hilbert space $\mathcal H$ and let $\clm$ be an invariant subspace. Let $S_i|_\clm = T_i$ for $i=1,2, \ldots ,n$. Then, with respect to the decomposition $\clh = \clm \oplus \clm^\perp$, the $S_i$ have the decomposition $S_i = \textmatrix T_i & A_i \\ 0 & B_i \\$ for suitable operators $A_i$ and $B_i$. Clearly, $T_n$ is an isometry. We use $S_i = S_{n-i}^*S_n$ to get $$\left( \begin{array}{cc} T_i & A_i \\ 0 & B_i \\ \end{array} \right) = \left( \begin{array}{cc} T_{n-i}^* & 0 \\ A_{n-i}^* & B_{n-i}^* \\ \end{array} \right) \left( \begin{array}{cc} T_n & A_n \\ 0 & B_n \\ \end{array} \right) = \left( \begin{array}{cc} T_{n-i}^*T_n & \star \\ \star & \star \\ \end{array} \right)$$ and hence the tuple $\underline{T} = (T_1, T_2, \ldots, T_n)$ satisfies the characterization mentioned above. Needless to say, the canonical Agler-Young isometry defined earlier is an Agler-Young isometry. \subsection{The Wold decomposition} The first main theorem of this work is a structure theorem for Agler-Young isometries in the style of Wold. It is stated below and proved in Section 2. \begin{theorem}[\textbf{The Wold decomposition theorem} for an Agler-Young isometry] \label{Wold} Let $\underline{S} = (S_1, S_2, \ldots,S_n)$ be an Agler-Young isometry on $\mathcal H$ with $\dim\mathcal D_{S_n^*} < \infty$. Then there is a unique orthogonal decomposition $\mathcal H = \mathcal H_1 \oplus \mathcal H_2$ of the Hilbert space $\mathcal H$ such that \begin{enumerate} \item [(a)]$\mathcal H_1$ and $\mathcal H_2$ are common reducing subspaces for the $S_i$, \item [(b)]$(S_1|_{\m H_1}, S_2|_{\m H_1}, \ldots ,S_n|_{\m H_1})$ is an Agler-Young unitary.
\item [(c)] $S_n|_{\mathcal H_2}$ is a pure isometry (a unilateral shift) $V$. There is a unitary operator $W:\m H_2\to H^2(\m D_{V^*})$ and a unique $(n-1)$-tuple $\bl f=(f_1, f_2, \ldots , f_{n-1})$ of $\mathcal B(\mathcal D_{V^*})$ valued bounded holomorphic functions such that the tuple $$ (WS_1|_{\m H_2}W^*, WS_2|_{\m H_2}W^*, \ldots ,WS_n|_{\m H_2}W^*)$$ is the canonical pure Agler-Young isometry associated with $\bl f$. Further, the following relation is satisfied for every $i =1, 2, \ldots , n-1:$ \begin{equation} \label{determinant} S_{n-i}^* - S_i S_n^* = 0_{\mathcal H_1} \oplus W^*\left( T_{f_i} (I - T_zT_z^*) \right)W.\end{equation} \end{enumerate} \end{theorem} The finite dimensionality condition is redundant in case $\underline{S}$ is a commuting Agler-Young isometry, see Corollary \ref{comAYiso}. \subsection{Dilation} One of the most well-known results in single variable operator theory is the Sz.-Nagy dilation of a contraction to an isometry. In the above, we have defined an Agler-Young tuple. Our next major result is about dilating such a tuple to an Agler-Young isometry. \begin{defn} Let $\mathcal{H} \subset \mathcal{K}$ be two Hilbert spaces. Suppose $\underline{S} = (S_1, S_2, \ldots,S_n)$ and $\underline{V} = (V_1, V_2, \ldots , V_n)$ are tuples of bounded operators acting on $\mathcal{H}$ and $\mathcal{K}$ respectively, that is, $S_i \in \mathcal{B}(\mathcal{H})$ and $V_i \in \mathcal{B}(\mathcal{K})$. The operator tuple $\underline{V}$ is called a $dilation$ of the operator tuple $\underline{S}$ if \begin{equation} S_{i_1} S_{i_2} \ldots S_{i_k} h = P_{\mathcal{H}} V_{i_1} V_{i_2} \ldots V_{i_k} h \mbox{ for } h \in \mathcal{H}, \;\; k\ge 1 \mbox{ and } 1 \le i_1,i_2, \ldots , i_k \le n. \label{dilation} \end{equation} If, moreover, $\clh$ is an invariant subspace for each $V_i^*$, then $\clh$ is called a co-invariant subspace of $\underline{V}$. 
The dilation is called minimal if $$\clk = \overline{\mathrm{span}} \{V_{i_1} V_{i_2} \ldots V_{i_k} h \mbox{ for } h \in \mathcal{H}, \;\; k\ge 1 \mbox{ and } 1 \le i_1,i_2, \ldots , i_k \le n\}.$$ \end{defn} The definition of minimality is natural because the dilation space $\clk$ has to contain all elements of the form $V_{i_1} V_{i_2} \ldots V_{i_k} h$ and hence could not be any smaller than what is described in the definition. \begin{rem} Note that if the $V_i^*$ do leave $\clh$ as an invariant subspace, then equation \eqref{dilation} is automatically satisfied. Indeed, in this case, the $V_i$ have the decomposition $ V_i = \textmatrix S_i & 0 \\ \star & \star \\$ with respect to the decomposition $\clk = \clh \oplus (\clk \ominus \clh)$ of the space $\clk$ which immediately implies that $$ V_{i_1} V_{i_2} \ldots V_{i_k} = \begin{pmatrix} S_{i_1} S_{i_2} \ldots S_{i_k} & 0 \\ \star & \star \end{pmatrix}$$ which in turn implies \eqref{dilation}. \end{rem} Dilation of an operator is a highly successful tool and was introduced by Sz.-Nagy in \cite{NagyDilation} where he proved that a contraction can be dilated to an isometry. Constructing an explicit dilation is always a challenge and has been done in only a few cases. \begin{enumerate} \item The isometric dilation (named after Sz.-Nagy because he proved its existence) for a contraction was constructed by Sch\"{a}ffer in \cite{Schaffer}. \item The commuting isometric dilation for a pair of commuting contractions was constructed by And\^{o} in \cite{Ando}. \item The dilation of a contractive tuple to a tuple of isometries with orthogonal ranges was constructed by Popescu in \cite{Popescu}. \item The $\Gamma$-isometric dilation for a $\Gamma$-contraction was constructed by Bhattacharyya, Pal and Shyam Roy in \cite{BhPSR}, although the existence had been shown by Agler and Young earlier in \cite{AYJoT}. \end{enumerate} We construct here an explicit dilation for an Agler-Young tuple.
The dilation is an Agler-Young isometry. The theorem is stated below and proved in Section 4. \begin{theorem}[\textbf{The dilation theorem} for an Agler-Young contraction] \label {schaeffer} Let $$\underline{S} = (S_1, S_2, \ldots,S_n)$$ be an Agler-Young contraction on a Hilbert space $\mathcal{H}$. Let $\bl X = (X_1, X_2, \ldots , X_{n-1})$ be the $(n-1)$-tuple of operators obtained from defining equation \eqref{AYdef} with $X_i\in\m B (\mathcal D_{S_n}),i=1,\ldots n-1.$ Let $$\m K_0=\m H\oplus\m D_{S_n}\oplus\m D_{S_n}\oplus\ldots=\m H\oplus\ell^2(\m D_{S_n}).$$ Consider the operator tuple $\underline{V}^{\bl X} = (V_1^{\bl X}, V_2^{\bl X}, \ldots ,V_{n-1}^{\bl X}, V_n)$ defined on $\mathcal{K}_0$ by $$ V_i^{\bl X}(h_0,h_1,h_2,\ldots)= (S_ih_0,X_{n-i}^*{D}_{S_n}h_0+X_ih_1, X_{n-i}^*h_1+X_ih_2, X_{n-i}^*h_2+X_ih_3,\ldots )$$ for $i=1,2, \ldots ,n-1$ and $$ V_n(h_0,h_1,h_2,\ldots) = (S_nh_0,{D}_{S_n}h_0,h_1, h_2,\ldots).$$ Consider $\clh$ as a subspace of $\clk_0$ by identifying $h$ of $\clh$ with the vector $h \oplus 0\oplus 0\oplus\ldots$ of $\clk_0,$ where $0\oplus 0\oplus\ldots$ is the identically zero sequence in $\ell^2(\mathcal{D}_{S_n}).$ Then \begin{enumerate} \item[(1)] $\clh$ is a co-invariant subspace of $\underline{V}^{\bl X}$ and $\underline{V}^{\bl X}$ is an Agler-Young isometric dilation of $\underline{S}$. \item[(2)] If $(W_1, W_2, \ldots ,W_{n-1}, V_n)$ is any Agler-Young isometric dilation for $\underline{S}$ on $\clk_0$ whose action is such that $\clh$ is a co-invariant subspace, then $W_i = V_i^{\bl X}$ for $i=1,2, \ldots ,n-1.$ \item[(3)] If $ (W_1, W_2, \ldots ,W_n)$ is an Agler-Young isometric dilation of $\underline{S}=(S_1,S_2\ldots,S_n),$ where $W_n$ is a minimal isometric dilation of $S_n,$ then $(W_1, W_2, \ldots ,W_n)$ is unitarily equivalent to $(V_1^{\bl X}, V_2^{\bl X}, \ldots ,V_{n-1}^{\bl X}, V_n).$ \end{enumerate} \end{theorem} We would like to emphasize two important points of the theorem above. 
\begin{enumerate} \item[(a)] By part (1), the dilation takes place on the minimal isometric dilation space of the contraction $S_n$ and it is automatically minimal because the dilation space could not be any smaller. \item[(b)] Parts (2) and (3) give a natural uniqueness. \end{enumerate} \subsection{Organization} A satisfactory characterization of the Agler-Young class is obtained using hereditary polynomials introduced by Agler in his landmark paper \cite{AglerFamily}, where he outlined an abstract approach to model theory. This characterization enables us to conclude that a $\Gamma_n$-contraction (see \cite{SS}) is in the class $AY_n$ and a tetrablock contraction (see \cite{BhTetra}) is a member of $AY_3.$ Therefore, the Agler-Young class provides a broader stage to study these classes of operators which have received considerable attention recently. Since it is impossible to describe all important results without going into all the details, further definitions and results are introduced in appropriate places in the paper. Section 2 proves the Wold decomposition of an Agler-Young isometry. Section 3 extends an Agler-Young isometry to an Agler-Young unitary and finds a complete set of invariants for a pure Agler-Young isometry. Section 4 proves that any Agler-Young contraction can be dilated to an Agler-Young isometry, the dilation is unique in a natural way and the dilation has a nice explicit structure as mentioned above. Section 5 gives an alternative description of the dilation for a pure Agler-Young tuple and a functional model. This section has an invariant subspace theorem following the classical work of Beurling, Lax and Halmos. Section 6 proves a von Neumann type inequality with respect to the hereditary functional calculus, relates the Agler-Young class to a certain family and shows that the Agler-Young isometries are the extremals of this family. 
Section 7 introduces the connection with truncated Toeplitz operators which, after being introduced by Sarason in \cite{Sarason}, have matured into a major theme of research. We characterize those Agler-Young pairs whose first component is a truncated Toeplitz operator. Section 8 deals with the commutative case, which is what has been studied so far in the literature; some of the existing results are obtained as special cases of the non-commutative theory developed in this article. \section{Proof of the Wold decomposition} The proof of the Wold decomposition theorem will involve several lemmas. We recall that for an Agler-Young isometry $\underline{S} = (S_1, S_2, \ldots,S_n)$, the condition \eqref{AYdef} is the same as \begin{equation} \label{ar} S_n^*S_i=S_{n-i}^* \mbox{ for } i=1,\ldots,n-1. \end{equation} \begin{lem}\label{conj} Suppose that $\{A_k\}$ and $\{B_k\}$ are sequences of bounded operators on a Hilbert space $\m H$ which converge to $A$ and $B$ respectively, in the strong operator topology of $\m B(\m H)$, and $F$ is a finite rank operator on $\m H.$ Then, the sequence $\{A_kFB_k^*\}$ converges to $AFB^*$ in the norm topology of $\m B(\m H).$ \end{lem} \begin{proof} It is enough to prove the result for a rank one operator. For $x, y\in \m H,$ consider the rank one operator on $\m H$ defined by $(x\otimes y)h=\langle h,y\rangle x$ for $h\in\m H.$ If $\{x_k\}$ and $\{y_k\}$ are sequences of vectors in $\m H$ converging to $x$ and $y$ in the norm of $\m H,$ respectively, then it is easy to see that $\{x_k\otimes y_k\}$ converges to $x\otimes y$ in the norm topology of $\m B(\m H).$ Consequently, by hypothesis, $\{A_kx\otimes B_ky\}$ converges to $Ax\otimes By$ in the norm topology of $\m B(\m H)$ for any $x,y\in\m H.$ Since $A(x\otimes y)B^*=Ax\otimes By$, we conclude that $\{A_k(x\otimes y)B_k^*\}$ converges to $A(x\otimes y)B^*$ in the norm topology of $\m B(\m H)$ for $x,y\in\m H.$ This completes the proof.
\end{proof} We shall use the following lemma whose proof is obvious. \begin{lem}\label{double} If $\{\zeta(k,l)\}$ is a bounded double sequence of real numbers, then there exists a convergent subsequence $\zeta(k_r,l_m)$ such that both the iterated limits \Bea \lim_{r\to\infty}\big(\lim_{m\to\infty}\zeta(k_r,l_m)\big)\mbox{~and~} \lim_{m\to\infty}\big(\lim_{r\to\infty}\zeta(k_r,l_m)\big) \Eea exist and both are equal to the double limit $\displaystyle\lim_{r,m\to\infty}\zeta(k_r,l_m).$ \end{lem} \begin{lem}\label{rev} If $T, U$ are bounded operators on a Hilbert space $\m H$ such that $U$ is a unitary and the sequence $\{T{U^*}^k\}$ converges to $0$ in the strong operator topology, then $T=0.$ \end{lem} \begin{proof} We prove this by showing that $TT^*h=0$ for every $h\in\m H.$ Let $\{P_l\}$ be a sequence of finite rank projections which converges in the strong operator topology to ${\rm Id}_{\m H}$ as $l \rightarrow \infty$. Then $\|AP_lBh-ABh\|\leq\|A\|\|P_lBh-Bh\|\to 0$ as $l\to\infty,$ for $A, B\in\m B(\m H)$ and $h\in\m H.$ Hence, we conclude that \bea\label{lim1} \lim_{l\to\infty}\|T{U^*}^kP_lU^kT^*h\|=\|TT^*h\| \mbox{~for every fixed~} k\in\mb N \mbox{~and~} h\in\m H. \eea Since $P_l$ is a finite rank operator for each $l\in \mb N,$ applying Lemma \ref{conj} with $A_k=B_k=T{U^*}^k,$ we obtain from the hypothesis that \Bea \lim_{k\to\infty}\|T{U^*}^kP_lU^kT^*\|=0 \mbox{~for every fixed~} l\in\mb N. \Eea In particular, we have \bea\label{lim2} \lim_{k\to\infty}\|T{U^*}^kP_lU^kT^*h\|=0 \mbox{~for every fixed~} l\in\mb N \mbox{~and~} h\in\m H. \eea For fixed $h\in\m H,$ define the double sequence $\zeta:\mb N\times\mb N\to\mb R$ by $\zeta(k,l)=\|T{U^*}^kP_lU^kT^*h\|.$ From Equation \eqref{lim1} and Equation \eqref{lim2}, we have \bea\label{sub} \lim_{k\to\infty}\big(\lim_{l\to\infty}\zeta(k,l)\big)=\|TT^*h\| \mbox{~and~} \lim_{l\to\infty}\big(\lim_{k\to\infty}\zeta(k,l)\big)=0, \eea respectively. 
Since $\{\zeta(k,l)\}$ is a bounded double sequence of real numbers, by Lemma \ref{double}, there exists a convergent subsequence $\zeta(k_r,l_m)$ such that both the iterated limits \Bea \lim_{r\to\infty}\big(\lim_{m\to\infty}\zeta(k_r,l_m)\big)\mbox{~and~} \lim_{m\to\infty}\big(\lim_{r\to\infty}\zeta(k_r,l_m)\big) \Eea exist and both are equal to the double limit $\displaystyle\lim_{r,m\to\infty}\zeta(k_r,l_m).$ Therefore, by Equation \eqref{sub} \Bea \|TT^*h\|=\lim_{r\to\infty}\big(\lim_{m\to\infty}\zeta(k_r,l_m)\big)=\lim_{m\to\infty}\big(\lim_{r\to\infty}\zeta(k_r,l_m)\big)=0. \Eea Hence $\|TT^*h\|=0$, so $TT^*h=0.$ This completes the proof. \end{proof} \begin{lem}\label{ayv} Let $U$ and $V$ be a unitary and a pure isometry on Hilbert spaces $\mathcal H_1, \mathcal H_2$ respectively, and let $T:\mathcal H_1\to\mathcal H_2$ be a bounded operator such that $ V^*TU=T.$ Then $T = 0$. \end{lem} \begin{proof} By iteration, we get from the hypothesis that ${V^*}^nTU^n=T$ for every positive integer $n$. Therefore, $ T{U^*}^n={V^*}^nT.$ Since $V$ is a pure isometry, the sequence $\{{V^*}^n\}$ converges to $0$ in the strong operator topology, and hence so does the sequence $\{T{U^*}^n\}$. So, the proof follows from Lemma \ref{rev}. \end{proof} \textbf{We are now ready to prove Theorem \ref{Wold}.} By the Wold decomposition of an isometry, we may write $S_n = U\oplus V$ on $\mathcal H = \mathcal H_1\oplus\mathcal H_2,$ where $\mathcal H_1,\mathcal H_2$ are reducing subspaces for $S_n$, the operator $S_n|_{\mathcal H_1} = U$ is unitary and the operator $S_n|_{\mathcal H_2} = V$ is a pure isometry. Let us write $$ S_i = \begin{bmatrix} S_{11}^{(i)} & S_{12}^{(i)}\\ S_{21}^{(i)} & S_{22}^{(i)} \end{bmatrix},\, i = 1,\ldots,n-1.
$$ with respect to this decomposition, where $S_{jk}^{(i)}$ is a bounded operator from $\mathcal H_k$ to $\mathcal H_j.$ Now, \Bea S_n^*S_iS_n &=& \begin{bmatrix} U^* & 0\\ 0 & V^* \end{bmatrix}\begin{bmatrix} S_{11}^{(i)} & S_{12}^{(i)}\\ S_{21}^{(i)} & S_{22}^{(i)} \end{bmatrix} \begin{bmatrix} U & 0\\ 0 & V \end{bmatrix}\\ &=& \begin{bmatrix}U^* S_{11}^{(i)}U & U^* S_{12}^{(i)}V\\V^* S_{21}^{(i)}U & V^* S_{22}^{(i)}V \end{bmatrix} ,\, i = 1,\ldots,n-1. \Eea Note that $$S_n^*S_i=S_{n-i}^*\iff S_n^*S_{n-i}=S_i^* \mbox{ (replacing $i$ by $n-i$) } \iff S_i=S^*_{n-i}S_n.$$ Putting $S_i=S^*_{n-i}S_n$ in $S_n^*S_i=S_{n-i}^*$, we obtain $S_n^*S_{n-i}^*S_n=S_{n-i}^*$ which, on taking adjoints, is the same as $S_n^*S_{n-i}S_n=S_{n-i}$. In other words, \begin{equation} \label{ar2} S_n^*S_i S_n=S_i \mbox{ for all } i=1,2, \ldots ,n-1. \end{equation} Using this, we have \begin{enumerate} \item[(i)] $U^* S_{12}^{(i)}V=S_{12}^{(i)},$ \item[(ii)] $V^* S_{21}^{(i)}U= S_{21}^{(i)}.$ \end{enumerate} Clearly, (i) is equivalent to $V^* {S_{12}^{(i)}}^*U={ S_{12}^{(i)}}^*,$ hence by Lemma \ref{ayv}, $ {S_{12}^{(i)}}^*=0,$ so $S_{12}^{(i)}=0.$ Applying Lemma \ref{ayv} directly to (ii) shows that $S_{21}^{(i)}=0.$ So, \Bea S_i= \begin{bmatrix} S_{11}^{(i)} & 0 \\ 0 & S_{22}^{(i)} \end{bmatrix},\, i = 1,\ldots,n-1. \Eea Since $S_n^*S_i=S_{n-i}^*,$ we have $U^*S_{11}^{(i)}={S^{(n-i)}_{11}}^*$ and $V^*S_{22}^{(i)}={S_{22}^{(n-i)}}^*$. The relation \eqref{ar2} remains true for both the reduced tuples $$(S_1|_{\mathcal H_1}, \ldots ,S_{n-1}|_{\mathcal H_1}, U) \mbox{ and } (S_1|_{\mathcal H_2}, \ldots ,S_{n-1}|_{\mathcal H_2}, V).$$ For the first one, the relation \eqref{ar2} means $U^* S_i|_{\mathcal H_1} U = S_i|_{\mathcal H_1}$ for all $i.$ Since $U$ is a unitary, it follows that each $S_i|_{\mathcal H_1}$ commutes with $U$. Now we prove part (c) of the theorem, i.e., the structure of the second tuple above. Since $V$ is a pure isometry, it is unitarily equivalent to the shift on $H^2(\mathcal D_{V^*})$.
This unitary equivalence is implemented by the unitary $W$ mentioned in the statement of the theorem. To avoid cumbersome notation, we put $S_{22}^{(i)} = V_i$ for $i=1,2, \ldots , n-1$. The relations \eqref{ar} and \eqref{ar2} give $$V_i = V_{n-i}^* V = V^*V_iV \mbox{ for } i=1,2, \ldots , n-1.$$ Since dim $\mathcal D_{V^*} =$ dim $\mathcal D_{S_n^*} < \infty$, the operators $WV_1W^*, WV_2W^*, \ldots , WV_{n-1}W^*$ are Toeplitz operators with symbols $\v_1, \v_2, \ldots , \v_{n-1}$ from $L^\infty\big(\m B (\m D_{V^*})\big)$. This is where finite dimensionality of $\mathcal D_{S_n^*} = \mathcal D_{V^*}$ is being used. The relation between the symbols that is forced by $V_i = V_{n-i}^* V$ is $\v_{n-i}(z) = \v_i(z)^*z$. Let $\v_{i}(z)=\sum_{k=-\infty}^\infty A_k^{(i)}z^k$ be the Fourier expansion of $\v_i$ for $\{A_k^{(i)}\}_{k=-\infty}^\infty \subseteq \m B(\m D_{V^*})$ and $\vert z\vert=1.$ Then, \Bea \sum_{k=-\infty}^\infty A_k^{(n-i)}z^k=\sum_{k=-\infty}^\infty (A_k^{(i)})^*z^{-k+1} =\sum_{k=-\infty}^\infty (A_{-k}^{(i)})^*z^{k+1}=\sum_{k=-\infty}^\infty (A_{-k+1}^{(i)})^*z^k,\,\, \vert z\vert=1. \Eea So, $A_k^{(n-i)}=(A^{(i)}_{-k+1})^*$ for all $k\in\mb Z.$ Define $f_{i}(z)=\sum_{k=1}^\infty A_k^{(i)}z^{k-1}$.
Then \begin{eqnarray*}\v_i(z) & = & zf_i(z) + \sum_{k=-\infty}^0 A_k^{(i)} z^k \\ & = & zf_i(z) + \sum_{k=1}^\infty A_{-k+1}^{(i)} z^{-k+1} = zf_i(z) + \sum_{k=1}^\infty (A_{k}^{(n-i)})^* z^{-k+1} = zf_i(z) + f_{n-i}(z)^*.\end{eqnarray*} To compute $ S_{n-i}^* - S_i S_n^*$, we note that \begin{eqnarray*} S_{n-i}^* - S_i S_n^* & = & \left( \begin{array}{cc} (S_{n-i}|_{\mathcal H_1})^* - S_i|_{\mathcal H_1} (S_n|_{\mathcal H_1})^* & 0 \\ 0 & (S_{n-i}|_{\mathcal H_2})^* - S_i|_{\mathcal H_2} (S_n|_{\mathcal H_2})^* \\ \end{array} \right) \\ & = & \left( \begin{array}{cc} (S_n|_{\mathcal H_1})^*S_i|_{\mathcal H_1} - S_i|_{\mathcal H_1} (S_n|_{\mathcal H_1})^* & 0 \\ 0 & V_{n-i}^* - V_i V^*\\ \end{array} \right).\end{eqnarray*} On the $\mathcal H_1$ part, we get $0$ because each $S_i|_{\mathcal H_1}$ commutes with the unitary $S_n|_{\mathcal H_1}$ and hence with its adjoint. For the $\mathcal H_2$ part, using the form of the $\v_i$, we get $$ V_{n-i}^* - V_i V^* = T_{\v_{n-i}}^* - T_{\v_i} T_z^* = T_{f_{n-i}}^* T_z^* + T_{f_{i}} - (T_z T_{f_{i}} + T_{f_{n-i}}^*) T_z^* = T_{f_i} (I - T_z T_z^*).$$ Hence \eqref{determinant} follows. Uniqueness of the tuple $(f_1, f_2, \ldots ,f_{n-1})$ follows from \eqref{determinant} by virtue of the fact that any $\m B (\m D_{V^*})$ valued bounded holomorphic function $f$ is uniquely determined by the action of $T_{f}$ on the space $\m D_{V^*}$, which is in fact the subspace of $H^2 (\m D_{V^*})$ consisting of $\m D_{V^*}$ valued $constant$ functions. The uniqueness of the decomposition follows from uniqueness in the Wold decomposition of the isometry $S_n$. This completes the proof of the theorem. \qed \section{Consequences of the Wold decomposition} As an immediate consequence of the Wold decomposition theorem, we get a structure theorem for a pure Agler-Young isometry. \begin{cor} \label{PureStructure} Let $\underline{S} = (S_1, S_2, \ldots , S_n)$ be a pure Agler-Young isometry with $\dim \mathcal D_{S_n^*} < \infty$.
Then there is a function tuple $\bl f = (f_1, f_2, \ldots , f_{n-1})$ from $H^\infty(\mathcal B (\mathcal D_{S_n^*}))$ such that $\underline{S}$ is unitarily equivalent (by a unitary $W,$ say) to the canonical Agler-Young isometry associated with $\bl f$. Moreover, \begin{equation} S_{n-i}^* - S_i S_n^* = W^*\left( T_{f_i} (I - T_zT_z^*) \right)W.\end{equation} \end{cor} \begin{proof} The proof follows from part (c) of Theorem \ref{Wold}. \end{proof} It is important to note the structure of a commuting Agler-Young isometry. See also Theorem 4.10 of \cite{SS}. We give a different proof here. For two bounded operators $T_1$ and $T_2$, the notation $[T_1, T_2]$ denotes the commutator $T_1T_2 - T_2T_1$. \begin{cor} Let $(S_1, S_2, \ldots,S_n)$ be a commuting Agler-Young isometry on $\mathcal H$. Then there is a unique orthogonal decomposition $\mathcal H = \mathcal H_1 \oplus \mathcal H_2$ of the Hilbert space $\mathcal H$ such that \begin{enumerate}[(a)] \item $\mathcal H_1$ and $\mathcal H_2$ are common reducing subspaces for the $S_i$, \item $(S_1|_{\m H_1}, S_2|_{\m H_1}, \ldots ,S_n|_{\m H_1})$ is a commuting Agler-Young unitary. \item $S_n|_{\mathcal H_2}$ is a pure isometry $V$. There is a unitary operator $W:\m H_2\to H^2(\m D_{V^*})$ and a unique $(n-1)$-tuple $(X_1, X_2, \ldots , X_{n-1})$ of operators in $\mathcal B(\mathcal D_{V^*})$ satisfying \begin{equation} \label{simple} [X_i, X_j] = 0 \mbox{ and } [X_j, X_{n-i}^*] = [X_i, X_{n-j}^*] \mbox{ for } 1 \le i,j \le n-1 \end{equation} such that $ WS_i|_{\m H_2}W^*$ is the multiplication on $H^2(\m D_{V^*})$ by $zX_i + X_{n-i}^*$. Further, the following relation is satisfied for every $i =1, 2, \ldots , n-1$: \begin{equation*} S_{n-i}^* - S_i S_n^* = 0_{\mathcal H_1} \oplus W^*\left( (I - T_zT_z^*)^{1/2} X_i (I - T_zT_z^*)^{1/2} \right)W.\end{equation*} \end{enumerate} \label{comAYiso} \end{cor} \begin{proof} We already know the decomposition from the Wold decomposition theorem.
Moreover, the finite dimensionality condition is not required because of commutativity. Recall that the finite dimensionality of $\mathcal D_{S_n^*}$ was used to infer that $S_1|_{\mathcal H_2}, S_2|_{\mathcal H_2}, \ldots ,S_{n-1}|_{\mathcal H_2}$ were Toeplitz operators. In the present context, this conclusion is immediate from commutativity. However, commutativity also brings in severe constraints. Since $ WS_i|_{\m H_2}W^*$ now commutes with a shift, it is an analytic Toeplitz operator. But its symbol is of the form $zf_i(z) + f_{n-i}(z)^*$. Hence, analyticity forces the $f_i$ to be constant, say, $X_i$. Thus, we get the symbol of $ WS_i|_{\m H_2}W^*$ to be $zX_i + X_{n-i}^*$. Now, we invoke commutativity of $ WS_i|_{\m H_2}W^*$ with $ WS_j|_{\m H_2}W^*$ for $i,j=1,2, \ldots, n-1$. So, $zX_i + X_{n-i}^*$ has to commute with $zX_j + X_{n-j}^*$. And this gives equation \eqref{simple}. The last assertion follows from \eqref{determinant} by noting that a constant multiplier leaves the range of the projection $(I - T_zT_z^*)$ invariant. \end{proof} \begin{cor} The restriction of an Agler-Young unitary to a common invariant subspace is an Agler-Young isometry. Conversely, if $\underline{S}$ is an Agler-Young isometry with $\dim\mathcal D_{S_n^*} < \infty$, then it is the restriction of an Agler-Young unitary $\underline{R}$ to a common invariant subspace. \end{cor} \begin{proof} If an Agler-Young unitary is restricted to a common invariant subspace, then the restriction is clearly an Agler-Young isometry. Conversely, given an Agler-Young isometry $\underline{S} = (S_1, S_2, \ldots,S_n)$ on $\mathcal H$ with $\dim\mathcal D_{S_n^*} < \infty$, we have the Wold decomposition $\mathcal H = \mathcal H_1 \oplus \mathcal H_2$ as in Theorem \ref{Wold}. On the reducing subspace $\mathcal H_1$, the restriction of $\underline{S}$ is an Agler-Young unitary. We know the structure of $\underline{S}$ restricted to the reducing subspace $\mathcal H_2$ from Theorem \ref{Wold}. 
Without loss of generality, take $\mathcal H_2$ to be $H^2 (\m D_{V^*})$ so that we can omit the unitary $W$ in the following discussion. Identify $H^2 (\m D_{V^*})$ as a subspace of $L^2 (\m D_{V^*})$, the Hilbert space of $\m D_{V^*}$ valued square integrable functions on the unit circle, define $\mathcal K$ to be $\mathcal H_1 \oplus L^2 (\m D_{V^*})$ and $$\underline{R} = (S_1|_{\mathcal H_1} \oplus M_{\v_1},S_{2}|_{\mathcal H_1} \oplus M_{\v_2}, \ldots ,S_{n-1}|_{\mathcal H_1} \oplus M_{\v_{n-1}}, U \oplus M_z)$$ where $\v_1, \v_2, \ldots , \v_{n-1}$ are as in Theorem \ref{Wold}. That completes the proof. \end{proof} \begin{rem} The restriction of a commuting Agler-Young unitary to a common invariant subspace is a commuting Agler-Young isometry. Conversely, if $\underline{S}$ is a commuting Agler-Young isometry, then it is the restriction of a commuting Agler-Young unitary. No finite dimensionality assumption is required. This is because of Corollary \ref{comAYiso} which provides just the right Wold decomposition that is required. Indeed, in the presence of commutativity, we take $\mathcal K = \mathcal H_1 \oplus L^2 (\m D_{V^*})$ and $\underline{R} = (S_1|_{\mathcal H_1} \oplus M_{\v_1},S_{2}|_{\mathcal H_1} \oplus M_{\v_2}, \ldots ,S_{n-1}|_{\mathcal H_1} \oplus M_{\v_{n-1}}, U \oplus M_z)$ where $\v_i$ now is the analytic function $zX_i + X_{n-i}^*$ obtained from Corollary \ref{comAYiso}. \end{rem} We proceed to give a set of complete invariants for Agler-Young isometries. Two operator tuples $\underline{A} = (A_1, A_2, \ldots , A_{n-1}, A_n)$ and $\underline{B} = (B_1, B_2, \ldots , B_{n-1}, B_n)$ acting on Hilbert spaces $H$ and $K$ respectively are called unitarily equivalent if there is a single unitary operator $U : H \rightarrow K$ such that $B_i = UA_iU^*$ for each $i=1,2, \ldots ,n.$ The following definition is in the same spirit. \begin{defn} Let $\mathcal E $ and $\mathcal E^\prime$ be Hilbert spaces.
Consider two sets of fundamental functions $f_1, f_2, \ldots ,f_{n-1}$ and $g_1, g_2, \ldots ,g_{n-1}$ from $H^\infty\big(\mathcal B(\mathcal E)\big)$ and $H^\infty\big(\mathcal B(\mathcal E^\prime)\big)$ respectively. They are called unitarily equivalent if there is a single unitary operator $U : \mathcal E \rightarrow \mathcal E^\prime$ such that $g_i(z) = Uf_i(z) U^*$ for every $z \in \mathbb D$ and each $i=1,2, \ldots ,n-1$. \end{defn} \begin{prop} If two sets of fundamental functions are unitarily equivalent, then their associated pure Agler-Young isometries are unitarily equivalent. Conversely, if $\underline{A}$ and $\underline{B}$ are two pure Agler-Young isometries with $\dim \mathcal D_{A_n^*} < \infty$ and $\dim \mathcal D_{B_n^*} < \infty$ and if $\underline{A}$ and $\underline{B}$ are unitarily equivalent, then their fundamental functions are unitarily equivalent. \end{prop} \begin{proof} Suppose we have two Hilbert spaces $\cle$ and $\cle^\prime$ and two sets of functions: $f_1, f_2, \ldots ,f_{n-1}$ from $H^\infty(\mathcal B(\mathcal E))$ and $g_1, g_2, \ldots ,g_{n-1}$ from $H^\infty(\mathcal B(\mathcal E^\prime))$ with the assumption that there is a unitary $U:\m E\to\m E^\i$ such that $Uf_i(z)U^*=g_i(z)$ for $z\in\mb T$ and $i=1,\ldots, n-1.$ Let $\v_i(z)=zf_i(z)+f_{n-i}(z)^*$ and $\psi_i(z)=zg_i(z)+g_{n-i}(z)^*$ for $i=1,\ldots, n-1$. These symbols are unitarily equivalent too because $$U\v_i(z)U^* = U(zf_i(z) + f_{n-i}(z)^*)U^* = zg_i(z) + g_{n-i}(z)^* = \psi_i(z).$$ Therefore, considering the Fourier expansions $\v_i(z)=\sum_{k=-\infty}^\infty \alpha_k^{(i)}z^k$ and $\psi_i(z)=\sum_{k=-\infty}^\infty \beta_k^{(i)}z^k,$ where $\{\alpha_k^{(i)}\}_{k=-\infty}^\infty\subseteq \m B(\m E)$ and $\{\beta_k^{(i)}\}_{k=-\infty}^\infty\subseteq \m B(\m E^\i),$ we have \begin{equation}\label{coeff} U\alpha_k^{(i)}U^*=\beta_k^{(i)}\mbox{~ for~} i=1,\ldots,n-1 \mbox{~and~} k\in\mb Z.
\end{equation} Let $h\in L^2(\m E)$ have the Fourier expansion $h(z)=\sum_{n=-\infty}^\infty h_nz^n$ for $\{h_n\}_{n=-\infty}^\infty\subseteq\m E.$ Define a unitary $\tilde U:L^2(\m E)\to L^2(\m E^\i)$ by $(\tilde Uh)(z)=\sum_{n=-\infty}^\infty (Uh_n)z^n$. If $P_+:L^2(\m E)\to H^2(\m E)$ and $P_+^\i:L^2(\m E^\i)\to H^2(\m E^\i)$ denote the canonical projections, then it is easy to verify that $\tilde UP_+=P^\i_+\tilde U.$ Consider $(T_{\v_1},T_{\v_2}, \ldots , T_{\v_{n-1}}, T_z)$ and $(T_{\psi_1},T_{\psi_2}, \ldots , T_{\psi_{n-1}}, T_z)$ acting on $H^2(\cle)$ and $H^2(\cle^\prime)$ respectively. It is also true that $\tilde UT_{\v_i}=T_{\psi_i}\tilde U$ for $i=1,\ldots,n-1.$ Indeed, $\tilde UP_+M_{\v_i}=P_+^\i M_{\psi_i}\tilde U$ because \begin{align} (\tilde UP_+M_{\v_i}h)(z) &= P_+^\i\tilde U(M_{\v_i}h)(z) \nonumber \\ &=P_+^\i\sum_{n=-\infty}^\infty U\big(\sum_{k=-\infty}^\infty \alpha^{(i)}_kh_{n-k}\big)z^n \nonumber \\ &= P_+^\i\sum_{n=-\infty}^\infty \big(\sum_{k=-\infty}^\infty U\alpha^{(i)}_kU^*Uh_{n-k}\big)z^n \nonumber \\ &=P_+^\i\sum_{n=-\infty}^\infty \big(\sum_{k=-\infty}^\infty \beta^{(i)}_kUh_{n-k}\big)z^n=P_+^\i(M_{\psi_i}\tilde Uh)(z), \nonumber \end{align} proving that the Agler-Young isometries are unitarily equivalent. Conversely, if two pure Agler-Young isometries $\underline{A}$ and $\underline{B}$ are unitarily equivalent with the finite dimensionality assumptions mentioned above, we know that $\underline{A}$ and $\underline{B}$ are unitarily equivalent to two canonical pure Agler-Young isometries. Let the Fourier coefficients of the corresponding $\v_i$ and $\psi_i$ be $\alpha_k^{(i)}$ and $\beta_k^{(i)}$ respectively. Then \eqref{coeff} has to hold, and hence the fundamental functions are unitarily equivalent. \end{proof} The following corollary, which is an immediate consequence of the proposition and Corollary \ref{PureStructure}, is a far-reaching generalization of \cite[Corollary 3.2]{S}.
In the commuting case, the assumption about finite dimensionality of the defect spaces is not required; see Corollary 5.2 of \cite{SS}. \begin{cor} Two pure Agler-Young isometries $\underline{A} = (A_1, A_2, \ldots ,A_n)$ and $\underline{B} = (B_1, B_2, \ldots ,B_n)$ with $\dim \mathcal D_{A_n^*} < \infty$ and $\dim \mathcal D_{B_n^*} < \infty$ are unitarily equivalent if and only if the two $(n-1)$-tuples $$(A_1^*-A_{n-1}A_n^*, A_2^*-A_{n-2}A_n^*, \ldots , A_{n-1}^* - A_1A_n^*)$$ and $$(B_1^*-B_{n-1}B_n^*, B_2^*-B_{n-2}B_n^*, \ldots , B_{n-1}^* - B_1B_n^*)$$ are unitarily equivalent. \end{cor} We shall end this section with a neat result which characterizes pure Agler-Young isometries with remarkable simplicity. \begin{prop}\label{TheCs} Let $S_n$ be a pure isometry and let $(C_1, C_2, \ldots ,C_{n-1})$ be a tuple of bounded operators such that each $C_i$ commutes with either $S_n$ or $S_n^*$. Let $S_i=C_iS_n+C_{n-i}^*$. Then $\underline{S} = (S_1, S_2, \ldots , S_n)$ is a pure Agler-Young isometry. Conversely, if $\underline{S} = (S_1, S_2, \ldots , S_n)$ is a pure Agler-Young isometry with $\dim \mathcal D_{S_n^*} < \infty$, then there exists a tuple $(C_1, C_2, \ldots ,C_{n-1})$ of bounded operators such that each $C_i$ commutes with either $S_n$ or $S_n^*$ and $S_i=C_iS_n+C_{n-i}^*$. \end{prop} \begin{proof} If $S_n$ is a pure isometry and $S_i=C_iS_n+C_{n-i}^*$ such that each $C_i$ commutes with either $S_n$ or $S_n^*$, then \begin{align} S_{n-i}^* S_n = (C_{n-i}S_n + C_i^*)^*S_n &= S_n^* C_{n-i}^*S_n + C_iS_n \\ &= \left\{ \begin{array}{c} C_{n-i}^* S_n^*S_n + C_iS_n \mbox{ if } C_{n-i} \mbox{ commutes with } S_n \\ S_n^*S_n C_{n-i}^* + C_iS_n \mbox{ if } C_{n-i} \mbox{ commutes with } S_n^* \end{array} \right. \end{align} Since $S_n^*S_n = I$, in either case we get $S_{n-i}^* S_n = C_{n-i}^* + C_iS_n = S_i$. Hence $\underline{S}$ is a pure Agler-Young isometry.
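For a concrete illustration of this computation (the example is purely illustrative), take $n = 3$, $\mathcal H = H^2$, $S_3 = T_z$, $C_1 = T_z$ and $C_2 = T_z^*$, so that $C_1$ commutes with $S_3$ and $C_2$ commutes with $S_3^*$. Then $$S_1 = C_1S_3 + C_2^* = T_{z^2} + T_z = T_{z+z^2} \mbox{ and } S_2 = C_2S_3 + C_1^* = T_z^*T_z + T_z^* = I + T_z^*,$$ and one checks directly that $S_3^*S_1 = I + T_z = S_2^*$ and $S_3^*S_2 = T_z^* + T_{z^2}^* = S_1^*$, so $(S_1, S_2, S_3)$ is a pure Agler-Young isometry; it is the canonical one associated with the fundamental functions $f_1(z) = 1 + z$ and $f_2 = 0$.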
Conversely, if $(S_1, S_2, \ldots , S_n)$ is a pure Agler-Young isometry with $\dim \mathcal D_{S_n^*}<\infty$, then by Corollary \ref{PureStructure}, $S_n$ is a pure isometry and there is a function tuple $\bl f = (f_1, f_2, \ldots , f_{n-1})$ from $H^\infty(\mathcal B (\mathcal D_{S_n^*}))$ and a unitary $W: \mathcal H \rightarrow H^2 (\mathcal D_{S_n^*})$ such that $(WS_1W^*, WS_2W^*, \ldots , WS_nW^*)$ is equal to the canonical Agler-Young isometry associated with $\bl f$. Let $C_i = W^*T_{f_i}W$ for $i=1,2, \ldots ,n-1$. Then the $C_i$ commute with $S_n$ and $$S_i = W^*(WS_iW^*)W = W^*(T_{f_i}T_z + T_{f_{n-i}}^*)W = C_iS_n + C_{n-i}^*.$$ \end{proof} \begin{rem} If $(S_1, S_2, \ldots , S_n)$ is a commuting pure Agler-Young isometry (with no finite dimensionality assumption), then also there exists a tuple $(C_1, C_2, \ldots ,C_{n-1})$ of bounded operators such that each $C_i$ commutes with either $S_n$ or $S_n^*$ and $S_i=C_iS_n+C_{n-i}^*$. In fact, $C_i = W^*T_{f_i}W$ for $i=1,2, \ldots ,n-1$ where the functions $f_i$ are the constants $X_i$ obtained in Corollary \ref{comAYiso}. \end{rem} We shall return to Agler-Young isometries in Section 6 when we show that their adjoints are the $extremals$ of the $family$ of adjoints of elements from the Agler-Young class. \section{Proof of the dilation theorem (Theorem 2)} In a nutshell, the content of this section is to prove the following. \vspace*{5mm} \begin{center} {\em $\underline{S} = (S_1, S_2, \ldots,S_n)$ is in the Agler-Young class} \\ {\em if and only if $\underline{S}$ has a dilation to an Agler-Young isometry.} \end{center} \vspace*{5mm} We start with a preliminary lemma. \begin{lem} If $\underline{S} = (S_1, S_2, \ldots,S_n)$ is an $n$-tuple of bounded operators on a Hilbert space $\clh$ having an Agler-Young isometric dilation, then it has a minimal Agler-Young isometric dilation.
\end{lem} \begin{proof} Let us start with an Agler-Young isometric dilation $\underline{W} = (W_1, W_2, \ldots,W_n)$ of $\underline{S}$ acting on $\clk \supset \clh$. Consider the subspace $$\clk_{\rm{min}} = \overline{\rm{span}}\{ W_n^mh : h \in \clh \mbox{ and } m=0,1, \ldots \}.$$ It is obviously invariant under $W_n$. It need not be invariant under the remaining $W_i$, but we can consider the compressions of $W_i$ to $\clk_{\rm{min}}$ for $i=1,2, \ldots , n-1$. The tuple $$\underline{R} = (R_1, R_2, \ldots ,R_n) = (P_{\clk_{\rm{min}}}W_1|_{\clk_{\rm{min}}}, P_{\clk_{\rm{min}}}W_2|_{\clk_{\rm{min}}}, \ldots, W_n|_{\clk_{\rm{min}}})$$ is an Agler-Young isometric dilation of $\underline{S}$. Indeed, the restriction of $W_n$ to $\clk_{\rm{min}}$ is an isometry and $$ (P_{\clk_{\rm{min}}}W_{n-i}|_{\clk_{\rm{min}}})^* W_n|_{\clk_{\rm{min}}} = P_{\clk_{\rm{min}}}W_{n-i}^* W_n|_{\clk_{\rm{min}}} = P_{\clk_{\rm{min}}} W_i|_{\clk_{\rm{min}}}$$ showing that $\underline{R}$ is an Agler-Young isometry. Moreover, $\clh \subset \clk_{\rm{min}}$. The Agler-Young isometry $\underline{R}$ does more than dilate $\underline{S}$: it has $\clh$ as a co-invariant subspace. To see that, first note that $$ P_\clh R_i W_n^mh = P_\clh W_i W_n^mh = S_i S_n^m h = S_i P_\clh W_n^mh \mbox{ for } h \in \clh \mbox{ and } m=0,1, \ldots .$$ This proves that $ P_\clh R_i = S_i P_\clh$. Now, for $h \in \clh$ and $k \in \clk_{\rm{min}}$, we have \begin{align*} \langle R_i^*h, k \rangle = \langle h, R_i k \rangle = \langle P_\clh h, R_i k \rangle &= \langle h, P_\clh R_i k \rangle \\ &= \langle h, S_i P_\clh k \rangle = \langle P_\clh S_i^* h, k \rangle = \langle S_i^* h, k \rangle.\end{align*} That completes the proof of co-invariance. Thus, we have an Agler-Young isometric dilation $\underline{R}$ of $\underline{S}$ which moreover enjoys minimality and co-invariance.
\end{proof} \begin{lem} Let $\underline{S} = (S_1, S_2, \ldots,S_n)$ be an $n$-tuple of bounded operators on a Hilbert space $\clh$ having an Agler-Young isometric dilation $\underline{W} = (W_1, W_2, \ldots,W_n)$ acting on $\clk \supset \clh$. Then $\underline{S}$ is in the Agler-Young class. \label{compression} \end{lem} \begin{proof} By virtue of the lemma above, we shall assume that $\clk = \clk_{\rm{min}} = \overline{\rm{span}}\{ W_n^mh : h \in \clh \mbox{ and } m=0,1, \ldots \}$ and that $\clh$ is a co-invariant subspace for $\underline{W}$. Now we have the advantage that $\clk$ is the space of the minimal isometric dilation of the contraction $S_n$. We know that a contraction has only one minimal isometric dilation up to unitary equivalence. Thus, there is a unitary $$U: \clk \rightarrow \m K_0=\m H\oplus\m D_{S_n}\oplus\m D_{S_n}\oplus\ldots=\m H\oplus\ell^2(\m D_{S_n})$$ such that $UW_nU^* = V_n$ where $V_n$ is the following version of the minimal isometric dilation of $S_n$ (Sch$\ddot{\mbox{a}}$ffer's construction): $$ V_n(h_0,h_1,h_2,\ldots) = (S_nh_0,{D}_{S_n}h_0,h_1, h_2,\ldots).$$ This $U$ fixes $\clh$ as well, and hence $\clh$ is a co-invariant subspace for each $UW_iU^*$. Thus, corresponding to the decomposition $\m K_0 = \clh \oplus (\m K_0 \ominus \clh)$, the operators $UW_iU^*$ have the block matrix representation: $$UW_nU^*=\begin{pmatrix} S_n&0\\D&E \end{pmatrix} \mbox{ and } UW_iU^* = \begin{pmatrix} S_i&0\\D_i&E_i \end{pmatrix} $$ where $D, D_i$ are from $ \mathcal H$ to $\ell^2(\mathcal{D}_{S_n})$ and $E, E_i$ are on $\ell^2(\mathcal{D}_{S_n}) $.
Moreover, $$D=\begin{pmatrix}{D}_{S_n} \\0\\0\\ \vdots \end{pmatrix} \mbox{ and } E=\begin{pmatrix} 0&0&0&\dots\\I&0&0&\dots\\0&I&0&\dots\\\dots&\dots&\dots&\dots \end{pmatrix} .$$ Since $(UW_1U^*, UW_2U^*, \ldots, UW_{n-1}U^*,{V_n})$ is an Agler-Young isometry, we have for $i=1,2, \ldots, n-1$, \begin{align} UW_iU^* & = UW_{n-i}^*U^* UW_nU^* \nonumber \\ \mbox{ or, } \begin{pmatrix} S_i&0\\D_i&E_i \end{pmatrix} & = \begin{pmatrix} S_{n-i}^*&D_{n-i}^*\\0&E_{n-i}^* \end{pmatrix} \begin{pmatrix} S_n&0\\D&E \end{pmatrix} \nonumber \\ \mbox{ or, } \begin{pmatrix} S_i&0\\D_i&E_i \end{pmatrix} & = \begin{pmatrix} S_{n-i}^*S_n + D_{n-i}^* D & D_{n-i}^* E \\ E_{n-i}^*D&E_{n-i}^*E\end{pmatrix} \nonumber \end{align} Out of the four equations that we can get from the above, we need three: the ones corresponding to the $(1,1)$, $(1,2)$ and $(2,1)$ entries. \begin{equation} \label{New11} S_{n-i}^*S_n + D_{n-i}^* D = S_i \mbox{ for } i=1,2, \ldots, n-1. \end{equation} \begin{equation} \label{New12} D_{n-i}^*E = 0. \end{equation} \begin{equation} \label{New21} E_{n-i}^*D = D_i \mbox{ for } i=1,2, \ldots, n-1. \end{equation} From equation \eqref{New12}, we get $E^*D_{n-i} = 0$ which, because of what $E$ is, implies that only the first component of $D_{n-i}$ can be non-zero. This component is an operator from $\clh$ to $\mathcal D_{S_n}$, say $Z_{n-i}$. Equation \eqref{New21} tells us that $Z_i = X_{n-i}^* D_{S_n}$ where $X_{n-i}$ is the $(1,1)$ entry of $E_{n-i}$ when written in its block matrix form as an operator on $\mathcal D_{S_n} \oplus \mathcal D_{S_n} \oplus \cdots $. Now the proof is complete in view of equation \eqref{New11}: since $D_{n-i}^*D = Z_{n-i}^*D_{S_n} = D_{S_n}X_iD_{S_n}$, equation \eqref{New11} reads $S_i = S_{n-i}^*S_n + D_{S_n}X_iD_{S_n}$, whose adjoint (with $i$ replaced by $n-i$) is the defining equation \eqref{AYdef}. \end{proof} We shall now prove the converse, viz., every Agler-Young contraction has an Agler-Young isometric dilation. \textbf{Proof of The Dilation Theorem} (1) It is evident from the definition that $V_n$ on $\m K_0$ is the minimal isometric dilation of $S_n$ (Sch$\ddot{\mbox{a}}$ffer's construction, see \cite{Schaffer}).
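Before carrying out the verification, it may help to see the construction in the simplest possible case (this toy example is only illustrative and is not needed for the proof). If $S_n = 0$, then $D_{S_n} = I_{\m H}$ and the defining equation \eqref{AYdef} forces $X_i = S_i$ for $i=1,\ldots,n-1$. In this case $V_n$ is the unilateral shift $(h_0,h_1,h_2,\ldots)\mapsto (0,h_0,h_1,\ldots)$ on $\m K_0 = \m H\oplus\ell^2(\m H)$ and $$V_i^{\bl X}(h_0,h_1,h_2,\ldots) = (S_ih_0,\, S_{n-i}^*h_0+S_ih_1,\, S_{n-i}^*h_1+S_ih_2,\ldots),$$ which is the block Toeplitz operator with the analytic symbol $S_i + zS_{n-i}^*$; the relation $V_n^*V_i^{\bl X} = (V_{n-i}^{\bl X})^*$ can be read off directly from these formulae.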
Let us compute the adjoints $(V^{\bl X}_i)^*$ and $V_n^*$. A straightforward computation shows that they are as follows. \Bea && (V^{\bl X}_i)^*(h_0,h_1,h_2,\ldots)=(S_i^*h_0+D_{S_n}X_{n-i}h_1, X_i^*h_1+X_{n-i}h_2,X_i^*h_2+X_{n-i}h_3,\ldots),\\ && V_n^*(h_0,h_1,h_2,\ldots)=(S_n^*h_0+D_{S_n}h_1,h_2,h_3,\ldots). \Eea The Hilbert space $\m H$, embedded in $\m K_0$ by the map $h\mapsto (h,0,0,\ldots)$, is jointly co-invariant under $V^{\bl X}_i$ and $V_n$ because $(V^{\bl X}_i)^*{|_\m H}=S_i^*$ and $ V_n^*{|_\m H}=S_n^*$ for $i=1,\ldots, n-1.$ Since $V_n$ is an isometry, in order to show that $(V_1^{\bl X},\ldots,V^{\bl X}_{n-1}, V_n)$ is an Agler-Young isometric dilation of $(S_1,\ldots,S_n)$ it is enough to verify that \bea V_n^*V_i^{\bl X}=(V_{n-i}^{\bl X})^* \mbox{~for~} i=1,\ldots, n-1. \eea For $ i=1,\ldots, n-1,$ note that \Bea && V_n^*V_i^{\bl X}(h_0,h_1,h_2,\ldots)\\ &=& V_n^*(S_ih_0,X_{n-i}^*{D}_{S_n}h_0+X_ih_1, X_{n-i}^*h_1+X_ih_2, X_{n-i}^*h_2+X_ih_3,\ldots )\\ &=& (S_n^*S_ih_0+D_{S_n}X^*_{n-i}D_{S_n}h_0+D_{S_n}X_ih_1,X_{n-i}^*h_1+X_ih_2, X_{n-i}^*h_2+X_ih_3,\ldots)\\ &=& (S^*_{n-i}h_0+D_{S_n}X_ih_1,X_{n-i}^*h_1+X_ih_2, X_{n-i}^*h_2+X_ih_3,\ldots)\\ &=&(V_{n-i}^{\bl X})^* (h_0,h_1,h_2,\ldots), \Eea where for the penultimate equality recall that $S^*_{n-i}=S_n^*S_i+D_{S_n}X_{n-i}^*D_{S_n}$ for $i=1,\ldots, n-1.$ (2) Let us start by writing the block operator matrices of $V_1^{\bl X},V_2^{\bl X},\ldots,V_{n-1}^{\bl X},V_n$.
It is evident from their defining formulae that $$V_n=\begin{pmatrix} S_n&0\\D&E \end{pmatrix} \mbox{ and } V_i^{\bl X} = \begin{pmatrix} S_i & 0 \\ C_i & Y_i \end{pmatrix}$$ with respect to the decomposition $\mathcal H\oplus \ell^2(\mathcal {D}_{S_n})$ of $\mathcal K_0$, where $D, C_i : \mathcal H \rightarrow \ell^2(\mathcal{D}_{S_n})$ are $$D=\begin{pmatrix}{D}_{S_n} \\0\\0\\ \vdots \end{pmatrix} \mbox{ and } C_i = \begin{pmatrix}X_{n-i}^*{D}_{S_n} \\0\\0\\ \vdots \end{pmatrix}$$ and $E, Y_i$ on $\ell^2(\mathcal{D}_{S_n}) $ are $$E=\begin{pmatrix} 0&0&0&\dots\\I&0&0&\dots\\0&I&0&\dots\\\dots&\dots&\dots&\dots \end{pmatrix} \mbox{ and } Y_i = \begin{pmatrix} X_i & 0 & 0 & 0 & \ldots \\ X_{n-i}^* & X_i & 0 & 0 & \ldots \\ 0 & X_{n-i}^* & X_i & 0 & \ldots \\ \vdots & \vdots & \vdots & \ldots & \ldots \end{pmatrix}.$$ Let $(W_1, W_2, \ldots ,W_{n-1}, V_n)$ be any Agler-Young isometric dilation for $\underline{S} = (S_1, S_2, \ldots ,S_n)$ on $\clk_0$ such that $\clh$ is a co-invariant subspace. Because of co-invariance of $\mathcal H$, we have the following matrix form: $$W_i= \begin{pmatrix} S_i&0\\D_i&E_i \end{pmatrix} \mbox{ for } i=1,\ldots n-1.$$ Since $(W_1, W_2, \ldots, W_{n-1},{V_n})$ is an Agler-Young isometry, we have for $i=1,2, \ldots, n-1$, \begin{align} W_i & = W_{n-i}^* V_n \nonumber \\ \mbox{ or, } \begin{pmatrix} S_i&0\\D_i&E_i \end{pmatrix} & = \begin{pmatrix} S_{n-i}^*&D_{n-i}^*\\0&E_{n-i}^* \end{pmatrix} \begin{pmatrix} S_n&0\\D&E \end{pmatrix} \nonumber \\ \mbox{ or, } \begin{pmatrix} S_i&0\\D_i&E_i \end{pmatrix} & = \begin{pmatrix} S_{n-i}^*S_n + D_{n-i}^* D & D_{n-i}^* E \nonumber \\ E_{n-i}^*D&E_{n-i}^*E\end{pmatrix} \nonumber \end{align} We get several equations from the above, which we list below \begin{equation} \label{11} S_{n-i}^*S_n + D_{n-i}^* D = S_i \mbox{ for } i=1,2, \ldots, n-1. \end{equation} \begin{equation} \label{12} D_{n-i}^*E = 0. \end{equation} \begin{equation} \label{21} E_{n-i}^*D = D_i \mbox{ for } i=1,2, \ldots, n-1. 
\end{equation} \begin{equation} \label{22} E_{n-i}^* E = E_i \mbox{ for } i=1,2, \ldots, n-1. \end{equation} From equation \eqref{12}, we get $E^*D_{n-i} = 0$. Recalling that $E$ is really a shift, this implies that only the first component of $D_{n-i}$ is non-zero. From equation \eqref{11}, it is clear that this first component is $X_i^*D_{S_n}$, so that the first component of $D_i$ is $X_{n-i}^*D_{S_n}$. Hence $D_i = C_i$ so that $$W_i = \begin{pmatrix} S_i&0\\C_i&E_i \end{pmatrix}.$$ We now have to show that $E_i = Y_i$ for $i=1,2, \ldots ,n-1$. Let $$ E_i = (( \; A_{ml}^{(i)} \; ))_{m,l=1}^\infty \mbox{ for } i=1,2, \ldots, n-1.$$ The equation \eqref{22} gives us for $i=1,2, \ldots, n-1$, \begin{align} E^*E_{n-i} &= E_i^* \nonumber \\ \mbox{ or, } \begin{pmatrix} 0 & I & 0 & 0 & \ldots \\ 0 & 0 & I & 0 & \ldots \\ 0 & 0 & 0 & I & \ldots \\ \vdots & \vdots & \vdots & \vdots & \ldots \end{pmatrix} \begin{pmatrix} A_{11}^{(n-i)} & A_{12}^{(n-i)} & A_{13}^{(n-i)} & \ldots \\ A_{21}^{(n-i)} & A_{22}^{(n-i)} & A_{23}^{(n-i)} & \ldots \\ A_{31}^{(n-i)} & A_{32}^{(n-i)} & A_{33}^{(n-i)} & \ldots \\ \vdots & \vdots & \vdots & \ldots \end{pmatrix} &= \begin{pmatrix} A_{11}^{(i)^*} & A_{21}^{(i)^*} & A_{31}^{(i)^*} & \ldots \\ A_{12}^{(i)^*} & A_{22}^{(i)^*} & A_{32}^{(i)^*} & \ldots \\ A_{13}^{(i)^*} & A_{23}^{(i)^*} & A_{33}^{(i)^*} & \ldots \\ \vdots & \vdots & \vdots & \ldots \end{pmatrix} \nonumber \\ \mbox{ or, } \begin{pmatrix} A_{21}^{(n-i)} & A_{22}^{(n-i)} & A_{23}^{(n-i)} & \ldots \\ A_{31}^{(n-i)} & A_{32}^{(n-i)} & A_{33}^{(n-i)} & \ldots \\ A_{41}^{(n-i)} & A_{42}^{(n-i)} & A_{43}^{(n-i)} & \ldots \\ \vdots & \vdots & \vdots & \ldots \end{pmatrix} &= \begin{pmatrix} A_{11}^{(i)^*} & A_{21}^{(i)^*} & A_{31}^{(i)^*} & \ldots \\ A_{12}^{(i)^*} & A_{22}^{(i)^*} & A_{32}^{(i)^*} & \ldots \\ A_{13}^{(i)^*} & A_{23}^{(i)^*} & A_{33}^{(i)^*} & \ldots \\ \vdots & \vdots & \vdots & \ldots \end{pmatrix} \nonumber \\ \mbox{ or, } A_{(l+1)m}^{(n-i)} &= A_{ml}^{(i)^*} \label{star} \end{align} Hence $A_{ml}^{(i)} = A_{(l+1)m}^{(n-i)^*} =
A_{(m+1)(l+1)}^{(i)}$. So each $E_i$ is a Toeplitz matrix. Let $$ E_i = \begin{pmatrix} e_0^{(i)} & e_{-1}^{(i)} & e_{-2}^{(i)} & \ldots \\ e_1^{(i)} & e_0^{(i)} & e_{-1}^{(i)} & \ldots \\ e_2^{(i)} & e_1^{(i)} & e_0^{(i)} & \ldots \\ \vdots & \vdots & \vdots & \ldots \end{pmatrix}.$$ From equation \eqref{21}, we get, for each $i$, $$(D_{S_n}, 0 , 0 , \ldots) \begin{pmatrix} e_0^{(i)} & e_{-1}^{(i)} & e_{-2}^{(i)} & \ldots \\ e_1^{(i)} & e_0^{(i)} & e_{-1}^{(i)} & \ldots \\ e_2^{(i)} & e_1^{(i)} & e_0^{(i)} & \ldots \\ \vdots & \vdots & \vdots & \ldots \end{pmatrix} = (D_{S_n}X_i, 0 , 0 , \ldots).$$ This leads to $D_{S_n} e_0^{(i)} = D_{S_n} X_i$ and $D_{S_n} e_{-k}^{(i)} = 0$ for $k \in \mathbb N$. These two equations mean that $e_0^{(i)} = X_i$ and $e_{-k}^{(i)} = 0$ for $k \in \mathbb N$. So $$ E_i = \begin{pmatrix} X_i & 0 & 0 & 0 & \ldots \\ e_1^{(i)} & X_i & 0 & 0 & \ldots \\ e_2^{(i)} & e_1^{(i)} & X_i & 0 & \ldots \\ \vdots & \vdots & \vdots & \ldots & \ldots \end{pmatrix}.$$ Also, equation \eqref{star} gives us that $$A_{21}^{(n-i)} = A_{11}^{(i)^*} = X_i^*, A_{31}^{(n-i)} = A_{12}^{(i)^*} = 0, A_{41}^{(n-i)} = A_{13}^{(i)^*} = 0, \ldots .$$ So, $$ E_i = \begin{pmatrix} X_i & 0 & 0 & 0 & \ldots \\ X_{n-i}^* & X_i & 0 & 0 & \ldots \\ 0 & X_{n-i}^* & X_i & 0 & \ldots \\ \vdots & \vdots & \vdots & \ldots & \ldots \end{pmatrix}.$$ Thus $E_i = Y_i$ and that finishes the proof. (3) The proof of assertion (3) simply consists of noting that $W_n$, by virtue of being a minimal isometric dilation of $S_n$, is unitarily equivalent to $V_n$. Let the unitary be $U$, i.e., $UW_nU^* = V_n$. Then $(UW_1U^*, UW_2U^*, \ldots , UW_nU^*)$ is an Agler-Young isometric dilation of $(S_1, S_2, \ldots ,S_n)$ with the last component of the dilation being $V_n$. By (2) above, this means that $UW_iU^* = V_i^{\bl X}$ and we are done.
\qed \section{Pure Agler-Young contractions} In case $S_n$ is a pure contraction, that is, $S_n^{*^m}$ converges strongly to the zero operator as $m \rightarrow \infty$, we have a simpler form of the dilation. An Agler-Young contraction whose last component is a pure contraction is called a pure Agler-Young contraction. The following lemma is a dilation result as well as a functional model. Note the specific structure of the Agler-Young isometry that serves as the dilation tuple. We need some background material for it. Let $\Theta_{A}$ be the celebrated Sz.-Nagy-Foias characteristic function of a contraction $A$. It is a $\mathcal B( \mathcal D_{A}, \mathcal D_{A^*})$ valued function on $\mathbb D$ defined as $$ \Theta_{A}(z) = [-A + z D_{A^*} (I - zA^*)^{-1} D_{A}]|_{\mathcal D_{A}}.$$ For a complete discussion of its properties and usefulness, see \cite{SNF}. The function $\Theta_A$ induces a multiplier $M_{\Theta_A}$ from $H^2 (\mathcal D_{A})$ into $H^2 ( \mathcal D_{A^*})$, i.e., $$ (M_{\Theta_A} f)(z) = \Theta_A(z) f(z) \mbox{ for } f \in H^2 (\mathcal D_{A}).$$ If $A$ is pure, then $M_{\Theta_A}$ is an isometry. Sz.-Nagy and Foias showed that every pure contraction, say $A$, defined on a Hilbert space $\mathcal H$ is unitarily equivalent to the operator $$\mathbb {A} =P_{\mathbb H_{A}}(T_z)|_{\mathbb H_{A}} \mbox{ on the Hilbert space } \mathbb H_{A}=H^2( \mathcal D_{A^*}) \ominus M_{\Theta_A}(H^2(\mathcal D_A)).$$ This is known as the Sz.-Nagy-Foias model for a pure contraction. We can use their result to produce the required model for a pure Agler-Young contraction. Let $\theta$ be the characteristic function of the pure contraction $S_n$, i.e., $\theta = \Theta_{S_n}$ in the notation of the above. Let us remember that $M_\theta$ is an isometry because $S_n$ is pure. \begin{lem} \label{W} Let $\underline{S} = (S_1, S_2, \ldots ,S_n)$ be a pure Agler-Young contraction on $\mathcal H$.
Suppose $S_n$ is not an isometry and $\dim \mathcal D_{S_n^*} < \infty$. Then there are $n-1$ bounded operators $Y_1, Y_2, \ldots , Y_{n-1}$ on $\mathcal D_{S_n^*}$ such that $\underline{S}$ is unitarily equivalent to the commuting tuple $\mathbb{S} = (\mathbb S_1, \mathbb S_2, \ldots \mathbb S_n)$ on the function space $ \mathbb H_{\underline{S}} = H^2(\mathcal D_{S_n^*}) \ominus M_{\theta} H^2(\mathcal D_{S_n})$ defined by $\mathbb S_i = P_{\mathbb H_{\underline{S}}} T_{Y_i + zY_{n-i}^*}|_{\mathbb H_{\underline{S}}}$ for $1 \le i \le n-1$ and $\mathbb S_n = P_{\mathbb H_{\underline{S}}} T_z|_{\mathbb H_{\underline{S}}}$. \end{lem} \begin{proof} For any $Y_1, Y_2, \ldots , Y_{n-1} \in \mathcal B(\mathcal D_{S_n^*})$, the tuple $$(T_{Y_1 + zY_{n-1}^*}, T_{Y_2 + zY_{n-2}^*}, \ldots , T_{Y_{n - 1}+ zY_1^*}, T_z)$$ is a canonical Agler-Young isometry. Indeed, the associated $f_1, f_2, \ldots ,f_{n-1}$ are the constant functions $f_i(z) = Y_i, i=1,2, \ldots ,n-1$. We shall show that $\underline{S}$ dilates to such an Agler-Young isometry by embedding $\mathcal H$ isometrically into $H^2(\mathcal D_{S_n^*})$ via an isometry $W$ as a proper co-invariant subspace for $\underline{T}$ and showing that \begin{equation} \label{identification} WS_i^*W^* = T_{Y_i + zY_{n-i}^*}^*|_{W\mathcal H}, \; i=1,2, \ldots, n-1 \mbox{ and } WS_n^*W^* = T_z^* |_{W\mathcal H}.\end{equation} Under the isometry, the space $\mathcal H$ is identified with the range of $W$ in $H^2(\mathcal D_{S_n^*})$ and an operator $A$ on $\mathcal H$ is identified with $WAW^*$ on the range of $W$. Hence equation \eqref{identification} will mean that $S_i^*$ is unitarily equivalent to $T_{Y_i + zY_{n-i}^*}^*|_{W\mathcal H}$ for $i=1,2, \ldots, n-1$ and $S_n^*$ is unitarily equivalent to $T_z^* |_{W\mathcal H}$. That will prove the statement of the lemma. The isometry $W$ is defined as $(Wh) (z) = D_{S_n^*} (I - zS_n^*)^{-1} h$.
If we expand the right hand side of the definition of $Wh$, we get the function $\sum_{k=0}^\infty (D_{S_n^*} (S_n^*)^k h) z^k$. Its norm in $H^2 ( \mathcal D_{S_n^*})$ is $$\sum_{k=0}^\infty \| D_{S_n^*} (S_n^*)^k h \|^2 = \sum_{k=0}^\infty \langle S_n^k D_{S_n^*}^2 (S_n^*)^k h , h\rangle .$$ This is a telescopic sum and equals $ \| h \|^2 - \lim_{k \rightarrow \infty} \| (S_n^*)^k h \|^2 = \| h \|^2$. Thus $W$ is an isometry. If $f(z) = \sum_{k=0}^\infty a_k z^k$ with $a_k \in \mathcal D_{S_n^*}$ is an arbitrary element of $H^2 ( \mathcal D_{S_n^*})$, then for $h \in \mathcal H$, \begin{align*} \langle W^*f, h \rangle &= \langle W^*(\sum_{k=0}^\infty a_k z^k) , h \rangle \\ &= \langle \sum_{k=0}^\infty a_k z^k , Wh \rangle \\ &= \langle \sum_{k=0}^\infty a_k z^k , \sum_{k=0}^\infty (D_{S_n^*} (S_n^*)^k h) z^k \rangle \\ &= \sum_{k=0}^\infty \langle a_k , D_{S_n^*} (S_n^*)^k h \rangle = \sum_{k=0}^\infty \langle S_n^k D_{S_n^*} a_k , h \rangle \end{align*} so that $W^*f = \sum_{k=0}^\infty S_n^k D_{S_n^*} a_k$. It immediately follows from this computation that $W^*T_z = S_nW^*$ because $(T_zf)(z) = \sum_{k=0}^\infty a_k z^{k+1} = \sum_{k=1}^\infty a_{k-1} z^{k}$ so that $$ W^*T_z f = \sum_{k=1}^\infty S_n^{k} D_{S_n^*} a_{k-1} = S_n \sum_{k=1}^\infty S_n^{k-1} D_{S_n^*} a_{k-1} = S_nW^*f.$$ Hence $W\mathcal H$ is a co-invariant subspace of $T_z$. Moreover, $WS_n^*W^* = T_{z}^*|_{W\mathcal H}$. Thus $T_z$ is the minimal isometric dilation of $WS_nW^*$. Consequently, by uniqueness of minimal isometric dilation of a contraction, there is a unitary $U: \mathcal K_0 \rightarrow H^2 ( \mathcal D_{S_n^*})$ such that $UV_nU^* = T_z$ where $\mathcal K_0$ and $V_n$ are as in the last section. This $U$ also fixes $\mathcal H$, i.e., the image under $U$ of the subspace $\mathcal H \oplus 0 \oplus 0 \oplus \ldots $ of $\mathcal K_0$ is $W\mathcal H$. Now, $(UV_1U^*, \ldots ,UV_{n-1}U^*, T_z)$ is an Agler-Young isometry that leaves $W\mathcal H$ co-invariant. 
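The telescoping computation showing that $W$ is isometric is easy to replicate numerically. The sketch below (an illustration of our own, using an arbitrary pure sample contraction) truncates the Fourier expansion $\sum_k (D_{S_n^*} S_n^{*k}h) z^k$ and checks that its $H^2$-norm reproduces $\|h\|$.

```python
import numpy as np

def defect_star(S):
    # D_{S*} = (I - S S*)^{1/2}
    M = np.eye(S.shape[0]) - S @ S.conj().T
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.conj().T

S = np.array([[0.0, 0.6], [0.0, 0.3]])   # pure: S*^k -> 0 strongly
Dstar = defect_star(S)
h = np.array([1.0, 2.0])
# accumulate the squared norms of the Fourier coefficients D_{S*} S*^k h of Wh
v, norm2 = h.copy(), 0.0
for _ in range(80):
    c = Dstar @ v
    norm2 += np.vdot(c, c).real
    v = S.conj().T @ v
# telescoping sum: ||Wh||^2 = ||h||^2 - lim_k ||S*^k h||^2 = ||h||^2 since S is pure
assert abs(norm2 - np.vdot(h, h).real) < 1e-10
```

Each partial sum equals $\|h\|^2 - \|S_n^{*K}h\|^2$, so pureness of $S_n$ is exactly what makes the embedding isometric.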
Since the last component of this Agler-Young isometry is $T_z$, we know from Corollary \ref{PureStructure} that it is a canonical Agler-Young isometry $\underline{T} = (T_{\v_1}, \ldots ,T_{\v_{n-1}}, T_z)$. It is an Agler-Young isometric dilation of the given $\underline{S}$ because $(V_1, V_2, \ldots ,V_n)$ is so. The range of $W$ is a proper subspace because otherwise $S_n$ would be a shift of some multiplicity, but by assumption it is not an isometry. To reach the special structure of the $\v_i$ as mentioned in the statement, we need to note that there is a relation between $W$ and $M_\theta$, viz., $$WW^* + M_\theta M_\theta^* = I.$$ We are not proving this here in detail. The proof can be done by applying $WW^* + M_\theta M_\theta^*$ on vectors of $H^2(\mathcal D_{S_n^*})$ of the form $\zeta/(1 - z\overline{w})$ (where $\zeta \in \mathcal D_{S_n^*}$) and can also be easily found in the literature, see Lemma 3.3 of \cite{BhP} for example. Since $W$ and $M_\theta$ are both isometries ($S_n$ being pure), $WW^*$ and $M_\theta M_\theta^*$ are complementary orthogonal projections. Now it follows from the computations we have done above involving $W$ that there is a unitary between $\mathcal H$ and the range of $W$, which is $\mathbb H_{S_n}$. Moreover, this unitary, which is just $W$ mapping $\mathcal H$ onto $\mathbb H_{S_n}$, also conjugates the operators correctly: $$WS_i^*W^* = T_{\v_i}^*|_{\mathbb H_{S_n}} \mbox{ and } WS_n^*W^* = T_z^*|_{\mathbb H_{S_n}}.$$ The range of $M_\theta$, which is automatically closed being the range of an isometry, and which equals $(W\mathcal H)^\perp = {\mathbb H_{S_n}}^\perp$, is an invariant subspace of the canonical Agler-Young isometry $(T_{\v_1}, \ldots ,T_{\v_{n-1}}, T_z)$. Thus we are led to the situation that we have a non-trivial invariant subspace of $T_z$ which is also invariant under the Toeplitz operators $T_{\v_1}, T_{\v_2}, \ldots , T_{\v_{n-1}}$.
The non-triviality of the invariant subspace is due to the fact that $S_n$ is not an isometry, i.e., not a shift. A Toeplitz operator and the operator $T_z$ can have a common non-trivial invariant subspace only if the Toeplitz operator has an analytic symbol, see \cite{Nakazi}. Since $\underline{T}$ is a canonical Agler-Young isometry, the $\v_i$ are co-analytic extensions of some $f_1, f_2, \ldots ,f_{n-1}$ which are in fact holomorphic. The only way $\v_i$ can be analytic is if the $f_i$ are constants, say, $Y_i$. Thus, $\v_i(z) = Y_i + zY_{n-i}^*$ for $i=1,2, \ldots ,n-1$. \end{proof} The theorem above is a far-reaching non-commutative generalization of Theorem 3.1 of \cite{BhP}. In general, an Agler-Young tuple does not enjoy the property that its adjoint tuple is an Agler-Young tuple; any canonical Agler-Young isometry with non-constant $\bl{f}$ is such an example. However, the theorem above allows us to conclude the following about the adjoint tuple. \begin{cor} Let $\underline{S} = (S_1, S_2, \ldots ,S_n)$ be a pure Agler-Young contraction on $\mathcal H$. Suppose $S_n$ is not an isometry and $\dim \mathcal D_{S_n^*} < \infty$. Then the adjoint tuple $\underline{S}^* = (S_1^*, S_2^*, \ldots ,S_n^*)$ is an Agler-Young contraction. \end{cor} \begin{proof} From the dilation result above for a pure Agler-Young contraction, we infer that there are $Y_1, Y_2, \ldots , Y_{n-1} \in \mathcal B ( \mathcal D_{S_n^*})$ such that equation \eqref{identification} holds where $W : \mathcal H \rightarrow H^2 ( \mathcal D_{S_n^*})$ is as defined before. These $Y_i$ will be shown to satisfy $$S_i^* - S_{n-i}S_n^* = D_{S_n^*} Y_i^* D_{S_n^*}.$$ For simpler computations, we shall use the identification of the Hilbert space $H^2 ( \mathcal D_{S_n^*})$ with the tensor product $H^2 \otimes \mathcal D_{S_n^*}$ where $H^2$ is the Hardy space of scalar-valued functions on $\mathbb D$.
In this picture, $T_z$ denotes multiplication by $z$ on $H^2$ and $$ Wh = \sum_{k=0}^{\infty} z^k \otimes D_{S_n^*} S_n^{*k}h, \quad W^*(T_z\otimes I) = S_nW^* \mbox{ and } W^*(I \otimes Y_i + T_z\otimes Y_{n-i}^*) = S_iW^*.$$ Hence $S_i^* - S_{n-i}S_n^*$ on $\mathcal H$ is identified on the range of $W$ with \begin{align*} & W (S_i^* - S_{n-i}S_n^*)W^* |_{\rm{Ran} W} \\ = & (I \otimes Y_i + T_z\otimes Y_{n-i}^*)^*WW^* - P_{\rm{Ran} W} (I \otimes Y_{n-i} + T_z\otimes Y_{i}^*) (T_z\otimes I)^* WW^*|_{\rm{Ran} W} \\ = & P_{\rm{Ran} W} (I \otimes Y_i^* + T_z^* \otimes Y_{n-i} - T_z^* \otimes Y_{n-i} - T_zT_z^*\otimes Y_{i}^*) WW^* |_{\rm{Ran} W}\\ = & P_{\rm{Ran} W} (P_{\mathbb C} \otimes Y_i^*)|_{\rm{Ran} W} \end{align*} where $P_{\mathbb C}$ is the projection in $H^2$ onto the one-dimensional subspace of constants. The penultimate line above is reached by co-invariance of the range of $W$ under all the Toeplitz operators $I \otimes Y_i + T_z\otimes Y_{n-i}^*$ and $T_z \otimes I$. Now, it is known that $P_{\rm{Ran} W} (P_{\mathbb C} \otimes Y_i^*)|_{\rm{Ran} W} = WD_{S_n^*} Y_i^* D_{S_n^*} W^*|_{\rm{Ran} W}$, see Theorem 5.1 in \cite{S}. Hence we are done. \end{proof} Since the discussion in this section so far has greatly depended on closed subspaces of $H^2(\mathcal E)$ that are invariant under $T_z$ as well as under all $T_{\v_i}$, we can ask the question of whether there is a description of such an invariant subspace. The following proposition answers that question. Recall that by the Beurling-Lax-Halmos theorem, for any closed subspace $\mathcal M$ of $H^2 \otimes \mathcal E$ that is invariant under $T_z$, there is an auxiliary space $\mathcal F$ and a $\mathcal B ( \mathcal F , \mathcal E)$ valued inner function $\theta$ on $\mathbb D$ (in fact, the $\theta$ could be taken to be the characteristic function of a certain pure contraction) such that $\mathcal M =$ Ran$ M_\theta$.
This $\theta$ is called the Beurling-Lax-Halmos function of the subspace $\mathcal M$. \begin{prop} Let $\mathcal E$ be a Hilbert space, let $f_1, f_2, \ldots ,f_{n-1}$ be functions from $H^\infty(\mathcal B (\mathcal E))$ and let $\v_1, \v_2, \ldots ,\v_{n-1}$ be their co-analytic extensions. Let $\mathcal M$ be a closed subspace of $H^2 \otimes \mathcal E$ that is invariant under $T_z$. Let $\theta$ be the Beurling-Lax-Halmos function of $\mathcal M$ with $\mathcal F$ being the auxiliary space. Then $\mathcal M$ is invariant under all the $T_{\v_i}$ if and only if there is a unique tuple $(g_1, g_2, \ldots , g_{n-1})$ from $H^\infty(\mathcal B (\mathcal F))$ such that its co-analytic extension tuple $(\psi_1, \psi_2, \ldots ,\psi_{n-1})$ satisfies $$ \v_i(z) \theta(z) = \theta(z) \psi_i(z) \mbox{ for all } z \in \mathbb D \mbox{ and for all } i=1,2, \ldots ,n-1.$$ \end{prop} \begin{proof} Clearly, the condition is sufficient for $\mathcal M$ to be simultaneously invariant under $T_{\v_1}, T_{\v_2}, \ldots , T_{\v_{n-1}}$. It is the necessity that we need to prove. To that end, we do the following computation involving operators on $H^2(\mathcal F)$. $$(M_\theta^* T_{\v_{n-i}} M_\theta)^* T_z = M_\theta^* T_{\v_{n-i}}^* M_\theta T_z = M_\theta^* T_{\v_{n-i}}^* T_z M_\theta = M_\theta^* T_{\v_i} M_\theta.$$ Consequently, since $M_\theta^* T_z M_\theta = T_z$, the tuple $(M_\theta^* T_{\v_1} M_\theta, \ldots ,M_\theta^* T_{\v_{n-1}} M_\theta, T_z)$ is a pure Agler-Young isometry. By Corollary \ref{PureStructure}, it is a canonical Agler-Young isometry $(T_{\psi_1}, \ldots ,T_{\psi_{n-1}}, T_z)$, say. Since $M_\theta^* T_{\v_i} M_\theta = T_{\psi_i}$, we have $M_\theta M_\theta^* T_{\v_i} M_\theta = M_\theta T_{\psi_i}$. By virtue of the fact that $\mathcal M$ is an invariant subspace for $T_{\v_i}$, the projection $M_\theta M_\theta^*$ in the last equation is redundant. Hence $T_{\v_i} M_\theta = M_\theta T_{\psi_i}$.
\end{proof} \section{A von Neumann type inequality} von Neumann proved that for any contraction $T$ and any polynomial $p$, one has \begin{equation} \label{vN} \| p(T) \| \le \| p \|_\infty \end{equation} where $\| p \|_\infty = \sup \{ |p(z)| : z \in \mathbb D\}$. This is a characterization of contractions that led to the study of spectral and complete spectral sets. The class that we are studying, viz., the Agler-Young class, is defined by a system of operator equations. Does a von Neumann type inequality as in Equation \eqref{vN} characterize the Agler-Young class? This is the question we shall answer in this section by falling back on an argument which originated in \cite{BhPSR} as a beautiful application of the operator version of the Fej\'{e}r-Riesz theorem. In the following, $w(n)$ denotes a constant depending on $n$. \begin{lem} \label{hered} The following are equivalent. \begin{enumerate} \item $\underline{S} = (S_1, S_2, \ldots ,S_n) \in AY_n$ with the numerical radius of each $X_i$ being not greater than $w(n)$, \item $ w(n)(I - S_n^* S_n) \ge \mathrm{Re} \big(e^{i\theta} (S_i - S_{n-i}^*S_n)\big) \mbox{ for all } \theta \in [0,2\pi) \mbox{ and } i=1,2, \ldots, n-1.$ \end{enumerate} \end{lem} \begin{proof} The proof uses the following lemma. \begin{lem}[Lemma 4.1 of \cite{BhPSR}] Let $\Sigma$ and $D$ be two bounded operators on $\clh$. Then $$DD^* \ge \mathrm{Re} \big(e^{i\theta} \Sigma\big) \mbox{ for all } \theta \in [0,2\pi)$$ if and only if there is an $F \in \mathcal B(\mathcal D_*)$ with numerical radius of $F$ not greater than one such that $\Sigma = DFD^*$, where $\mathcal D_* = \overline{\rm Ran}\, D^*$. \end{lem} For our purpose, let $\Sigma_i = S_i - S_{n-i}^*S_n$. Assuming (1) above, we know that $\Sigma_i = D_{S_n} X_i D_{S_n}$ for some $X_i$ with $w(X_i) \le w(n)$.
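The easy direction of the quoted lemma can be seen in a small numerical experiment (our own sketch, with an arbitrary sample $D$): if $w(F) \le 1$, then $DD^* - \mathrm{Re}\big(e^{i\theta} DFD^*\big) = D\big(I - \mathrm{Re}(e^{i\theta}F)\big)D^*$ is positive semidefinite for every $\theta$.

```python
import numpy as np

# F = [[0, 2], [0, 0]] has numerical radius exactly 1:
# Re(e^{it} F) = [[0, e^{it}], [e^{-it}, 0]] has eigenvalues +1 and -1 for every t
F = np.array([[0.0, 2.0], [0.0, 0.0]])
rng = np.random.default_rng(0)
D = rng.standard_normal((2, 2))
Sigma = D @ F @ D.T                      # Sigma = D F D*
for t in np.linspace(0.0, 2.0 * np.pi, 181):
    Re = (np.exp(1j * t) * Sigma + np.exp(-1j * t) * Sigma.conj().T) / 2
    # DD* - Re(e^{it} Sigma) = D (I - Re(e^{it} F)) D*  is PSD since w(F) <= 1
    assert np.linalg.eigvalsh(D @ D.T - Re).min() > -1e-9
```

The identity $\mathrm{Re}(e^{i\theta}DFD^*) = D\,\mathrm{Re}(e^{i\theta}F)\,D^*$ is what reduces the operator inequality to the numerical-radius bound on $F$.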
Hence by the lemma above, we have $$ w(n)(I - S_n^* S_n) \ge \mathrm{Re} \big(e^{i\theta} (S_i - S_{n-i}^*S_n)\big) \mbox{ for all } \theta \in [0,2\pi).$$ Conversely, if we assume (2) above and want to prove (1), we apply the lemma again which guarantees the existence of an $X_i \in \mathcal B (\mathcal D_{S_n})$ such that $\Sigma_i = D_{S_n} X_i D_{S_n}$ and $w(X_i) \le w(n)$. \end{proof} The characterization obtained in Lemma \ref{hered} allows us to link the Agler-Young class to one of Agler's landmark papers \cite{AglerFamily} where he outlined an abstract approach to model theory. \begin{defn} Let $\mathcal P$ be the ring of all polynomials over the complex field in the non-commuting variables $(\underline{z}, \underline{z}^* )= (z_1, z_2, \ldots ,z_n, z_1^*, z_2^*, \ldots ,z_n^*)$. The involution on the algebra $\mathcal P$ is: $$(z_i)^* = z_i^*, (z_i^*)^* = z_i \mbox{ and } (uv)^* = v^* u^*$$ for $i=1,2, \ldots ,n$ and for any words $u, v$ in the non-commuting variables. A polynomial is called hereditary if in each of its monomials, all the $z_i^*$ appear to the left of all the $z_j$. \end{defn} The hereditary polynomials have found many uses in operator theory ever since they were introduced by Agler in \cite{AglerFamily}; we mention here a relevant few. \begin{enumerate} \item A contraction $T$ is characterized by $h(T, T^*) \ge 0$ where $h(z, z^*) = 1 - z^* z$, \item A spherical contraction $\underline{T} = (T_1, T_2, \ldots T_n)$ is characterized by $h(\underline{T}, \underline{T}^*) \ge 0$ where $h(\underline{z}, \underline{z}^*) = 1 - z_1^* z_1 - \cdots - z_n^*z_n$ (see \cite{RS}), \item A $\Gamma_n$-contraction, that is, a commuting tuple of bounded operators $\underline{T} = (T_1, T_2, \ldots T_n)$ having the symmetrized polydisc as a spectral set, satisfies $h(\underline{T}, \underline{T}^*) \ge 0$ where $h(\underline{z}, \underline{z}^*) = \sum_{i,j=0}^n (-1)^{i+j} \{n- (i+j)\} z_i^*z_j$ \cite[Proposition 2.18]{SS}. \end{enumerate} Now, we can re-write Lemma \ref{hered} as follows.
\begin{theorem}[\textbf{Characterization in terms of hereditary polynomials}] \label{ChHered} For every $n \ge 2$, there is a set of hereditary polynomials $h_{\alpha, i}$ indexed by $ (\alpha, i) \in \mathbb T \times \{1,2, \ldots ,n-1\}$ such that $\underline{S} = (S_1, S_2, \ldots ,S_n) \in AY_n$ with $w(X_i) \le w(n)$ for each $i$ if and only if $h_{\alpha, i} (\underline{S}, \underline{S}^*) \ge 0$ for every such $(\alpha, i)$. \end{theorem} \begin{proof} Take $h_{\alpha, i}(\underline{z}, \underline{z}^*) = 2w(n)(1 - z_n^*z_n) - \alpha (z_i - z_{n-i}^*z_n) - \overline{\alpha} (z_i^* - z_n^*z_{n-i})$. \end{proof} It is known from Agler's work that a class characterized by hereditary polynomials must be a family. We can prove it directly for the Agler-Young class. \begin{defn} For $n \ge 1$, a family $\mathcal F$ is a collection of $n$-tuples $\underline{T} = (T_1, T_2, \ldots , T_n)$ of Hilbert space operators (acting on $\clh$ say), which is \begin{enumerate} \item[(1)] bounded, that is, $\| T_i \| \le c$ for some constant $c$ for all $i=1,2, \ldots ,n$, \item[(2)] closed under restriction to invariant subspaces, that is, if $\underline{T} \in \mathcal F$ and if $\clm \subset \clh$ is an invariant subspace for each $T_i$, then $(T_1|_\clm, T_2|_\clm, \ldots ,T_n|_\clm) \in \mathcal F$, \item[(3)] closed under direct sum, that is, if $\underline{T}^{(m)} \in \mathcal F$, then the tuple $$ \underline{T} \bydef \oplus_{m=1}^\infty \underline{T}^{(m)} = (\oplus_{m=1}^\infty T^{(m)}_1, \oplus_{m=1}^\infty T^{(m)}_2, \ldots , \oplus_{m=1}^\infty T^{(m)}_n)$$ is in $\mathcal F$. \item[(4)] closed under $*$-representation, that is, if $\pi$ is a unital $*$-representation and $\underline{T} \in \mathcal F$, then $\pi(\underline{T}) \in \mathcal F$. \end{enumerate} \end{defn} Agler defined it only for a single variable, although he mentioned that the concept generalizes effortlessly to several variables.
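As a concrete illustration of the hereditary calculus used above (a sketch of our own, with an arbitrary sample matrix): in $h(T, T^*)$ every monomial $z_i^* z_j$ is evaluated as $T_i^* T_j$, with the adjoint on the left, and positivity of the resulting operator is the membership criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))
T = 0.9 * T / np.linalg.svd(T, compute_uv=False)[0]   # scale to a strict contraction
# hereditary evaluation of h(z, z*) = 1 - z* z: the monomial z* z becomes T* T
h = np.eye(4) - T.T @ T
# T is a contraction iff h(T, T*) >= 0; here it is strictly positive definite
assert np.linalg.eigvalsh(h).min() > 0
```

The same recipe evaluates the polynomials $h_{\alpha, i}$ of Theorem \ref{ChHered}: $z_{n-i}^* z_n$ becomes $S_{n-i}^* S_n$, and positivity for all $\alpha \in \mathbb T$ recovers the real-part inequality of Lemma \ref{hered}.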
Since then it has found widespread use: we mention the works of Dritschel and McCullough who used the family of $\rho$-contractions in \cite{DM} and Richter and Sundberg who used the family of commuting spherical isometries in \cite{RS}. Our concern is with the following collection: \begin{eqnarray*} \mathcal F_n &=& \{ (S_1^*, S_2^*, \ldots ,S_n^*) : \underline{S} = (S_1, S_2, \ldots ,S_n) \in AY_n \mbox{ with each } \\ & & S_i \mbox{ being norm bounded by a constant } c(n) \} \\ &=& \{ (S_1^*, S_2^*, \ldots ,S_n^*) : \underline{S} = (S_1, S_2, \ldots ,S_n) \in AY_n, \| S_i \| \le c(n) \}. \end{eqnarray*} It is straightforward that the bound $c(n)$ on the norm of each $S_i$ actually places a restriction on the numerical radius of each $X_i$, i.e., there is a constant $w(n)$ such that $w(X_i) \le w(n)$ for each $i$. \begin{lem} $\mathcal F_n $ is a family. \end{lem} \begin{proof} Condition (1) is satisfied because of the constant $c(n)$. To see that condition (2) is satisfied, let $(S_1^*, S_2^*, \ldots ,S_n^*) \in \mathcal F_n$, let $\clm \subset \clh$ be an invariant subspace for each $S_i^*$, and let $\underline{R} = (R_1, R_2, \ldots , R_n)$ be defined by $R_i^* = S_i^*|_\clm$. Since $\underline{S} \in AY_n$, it has an Agler-Young isometric dilation $\underline{W} = (W_1, W_2, \ldots ,W_n)$ on $\clk$, say, by the Dilation Theorem. From the way it was constructed, we know that $\clh$ is a co-invariant subspace for each $W_i$. Thus, the situation is that $\clk \supset \clh \supset \clm$ and $R_i^* = S_i^*|_\clm = (W_i^*|_\clh)|_\clm = W_i^*|_\clm$. Thus, $\underline{R}$ is the compression of the Agler-Young isometry to a co-invariant subspace. By Lemma \ref{compression}, $\underline{R} \in AY_n$. For (3), a computation involving direct sums is needed. Since it is very straightforward, we omit the proof.
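The omitted direct-sum check can be sketched numerically (an illustration of our own for $n = 2$, built from truncated shifts, which are not part of the proof): the fundamental equation passes to direct sums because defect operators of a direct sum are the direct sums of the defect operators, so the factorization through the defect space survives with a block-diagonal $X$.

```python
import numpy as np

def pair(N, c):
    # sample Agler-Young pair for n = 2: (S1, S2) = (conj(c) I + c S, S) with S the
    # truncated shift on C^N; then S1 - S1* S2 = conj(c)(I - S* S), a rank-one
    # operator supported on the defect space of S2
    S = np.diag(np.ones(N - 1), -1)
    return np.conj(c) * np.eye(N) + c * S, S

def dsum(A, B):
    # block-diagonal direct sum of two operators
    Z1 = np.zeros((A.shape[0], B.shape[1]))
    Z2 = np.zeros((B.shape[0], A.shape[1]))
    return np.block([[A, Z1], [Z2, B]])

pairs = [pair(4, 0.5), pair(3, -0.2)]
S1 = dsum(pairs[0][0], pairs[1][0])
S2 = dsum(pairs[0][1], pairs[1][1])
for (T1, T2) in pairs + [(S1, S2)]:
    Sigma = T1 - T1.conj().T @ T2
    Q = np.eye(T2.shape[0]) - T2.conj().T @ T2   # D_{T2}^2, here a projection
    # Sigma factors through the defect of T2: Q Sigma Q = Sigma
    assert np.allclose(Q @ Sigma @ Q, Sigma)
```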
To prove (4), we note that unital $*$-homomorphisms preserve positivity and hence if we start with an $(S_1^*, S_2^*, \ldots , S_n^*)$ in $\mathcal F_n$, we have $$ w(n) (I - \pi(S_n)^* \pi(S_n)) \ge \mathrm{Re} \big(e^{i\theta} (\pi(S_i) - \pi(S_{n-i})^*\pi(S_n))\big) \mbox{ for all } \theta \in [0,2\pi),$$ which is the characterization from Lemma \ref{hered}. \end{proof} \begin{defn} A subset $\mathcal B$ of a family $\mathcal F$ is called a model for the family if \begin{enumerate} \item $\mathcal B$ is closed with respect to unital representations and direct sums, \item for every operator tuple $\underline{T}$ in $\mathcal F$ acting on $\clh$, there is an operator tuple $\underline{B} \in \mathcal B$ acting on a bigger space $\clk \supset \clh$ such that $\clh$ is invariant under $\underline{B}$ and $T_i = B_i|_\clh$ for every $i$. \end{enumerate} If a model $\mathcal B$ has the property that it is smallest, that is, $\mathcal B \subset \mathcal B^\prime$ for any model $\mathcal B^\prime$, then $\mathcal B$ is called a boundary. \end{defn} Agler showed in Theorem 5.3 of \cite{AglerFamily} that every family has a unique boundary. Consequently, it is natural to ask what the boundary is of the family consisting of adjoints of Agler-Young class operators. \begin{lem} For the family $\mathcal F_n$ defined above, the boundary $\mathcal B$ is given by \begin{align} \mathcal B = &\{ (S_1^*, S_2^*, \ldots ,S_n^*) : \underline{S} = (S_1, S_2, \ldots ,S_n) \mbox{ is an Agler-Young isometry } \nonumber \\ & \mbox{ with each } S_i \mbox{ being norm bounded by the constant } c(n) \}. \nonumber \end{align} \end{lem} \begin{proof} By the Dilation Theorem, $\mathcal B$ above is a model for $\mathcal F_n$. That it is actually the boundary will require a little more argument.
Agler defines an element $\underline{T}$ of a family to be \emph{extremal} if it can be the restriction of a member of the family to an invariant subspace only when the invariant subspace is actually a reducing subspace, and then shows that extremals of a family are always contained in a model. Indeed, if $\underline{T}$ is an extremal and $\mathcal C$ is a model, then there is an $\underline{R}$ in the model $\mathcal C$ and an invariant subspace $\mathcal N$ for $\underline{R}$ such that $\underline{T} = \underline{R} |_{\mathcal N}$. But, since $\underline{T}$ is extremal, $\mathcal N$ has to be reducing. Since $\mathcal C$ is a model, by the first criterion in the definition of a model, $\underline{T}$ is in the model. In our case, $\mathcal B$ above consists of extremals. To see it, let $\underline{T} \in \mathcal B$ and let $\underline{A}$ be an extension of it, i.e., $\underline{A}$ acts on $\clh$ and there is an invariant subspace $\mathcal N$ such that $\underline{T} = \underline{A} |_{\mathcal N}$. Since the last component of $\underline{T}$ is a co-isometry, clearly $\mathcal{N}$ reduces the last component $A_n$ (this is the reason co-isometries form the extremals in the family of contractions). Thus with respect to the decomposition $\clh = \mathcal N \oplus \mathcal N^\perp$, we have $$ A_i = \left( \begin{array}{cc} T_i & * \\ 0 & R_i \\ \end{array} \right) \mbox{ for } i=1,2, \ldots , n-1 \mbox{ and } A_n = \left( \begin{array}{cc} T_n & 0 \\ 0 & R_n \\ \end{array} \right).$$ Now, a computation of $A_i^* - A_{n-i}A_n^*$ shows that its range cannot be contained in the range of $I - A_nA_n^*$ (which it has to be because $(A_1^*, A_2^*, \ldots ,A_n^*)$ is in the Agler-Young class) unless the $(1,2)$ entries of all the $A_i$ are $0$. Thus $\mathcal B$ consists of extremals and hence is contained in every model. \end{proof} \section{The Agler-Young class and the truncated Toeplitz operators} We follow the notations of Sarason \cite{Sarason}.
For an inner function $u$, denote by $\mathcal{K}^2_u$ the orthocomplement $H^2 \ominus uH^2$ of the shift invariant subspace $uH^2$. In accordance with Sarason, $P$ will denote the projection from $L^2$ to $H^2$. We shall make frequent use of the fact that $\mathcal{K}_u^2$ is invariant under the backward shift operator. The forward shift operator will be denoted by $T_z$ and the backward shift operator by $T_z^*$. Let $P_u$ denote the projection from $H^2$ onto $\mathcal{K}_u^2$. The truncated Toeplitz operator with symbol $\varphi$ on $\mathcal{K}_u^2$ is defined as $$A_\varphi = P_uT_\varphi \mid_{\mathcal{K}_u^2} = P_uM_\varphi \mid_{\mathcal{K}_u^2}.$$ This first appeared in Sarason \cite[p.~492]{Sarason}. Let $P_c$ denote the projection operator from $H^2$ to the one-dimensional space of constant functions. We shall heavily use a tool called conjugation, denoted by $C$, which acts on $\mathcal{K}_u^2$ as $$C(g)(z) = u(z)\overline{zg(z)} , \;\; g \in \mathcal{K}_u^2.$$ It is additive, $C(f+g)=C(f)+C(g)$, conjugate linear with respect to scalar multiplication, $C(af)= \bar{a}C(f)$ for $a \in \mathbb{C}$, and satisfies the following properties: \begin{enumerate} \item $CC(f)=f$ (involution); \item $\langle Cf,Cg \rangle = \langle g,f \rangle$ (antiunitary); \item for truncated Toeplitz operators, $CA_\varphi C=A_\varphi^*$. \end{enumerate} Sometimes, $Cf$ will be denoted by $\tilde{f}$. Further details about $A_\varphi$ and $C$ can be found in Sarason's paper \cite[p.~495]{Sarason}. If $k_w$ denotes the reproducing kernel $k_w(z) = (1 - \bar{w}z)^{-1}$ on the Hardy space, then its projection $P_uk_w$ onto $\mathcal{K}_u^2$ is denoted by $k_w^u$. Both $k_0^u$ and $\tilde{k_0^u}$ will play significant roles for us. Before moving forward, it is worth noting that whenever $\mathcal{K}_u^2$ is non-trivial (i.e., is a proper non-zero subspace of $H^2$), then $k_0^u \neq 0$ and consequently $\tilde{k_0^u} \neq 0$.
Indeed, as $k_0^u(z) = 1- \overline{u(0)}u(z)$ (see \cite[p.~494]{Sarason}), if $k_0^u$ were the zero function, then $\overline{u(0)}u$ would be identically $1$, forcing $u$ to be a constant, which can never give rise to a non-trivial $\mathcal{K}_u^2$. So, throughout this paper, we will assume that $\mathcal{K}_u^2$ is non-trivial and thus $k_0^u \neq 0$. \begin{theorem} Let $\varphi$ be an $L^\infty$ function. Then $(A_\varphi, A_z)$ is in the Agler-Young class if and only if $\varphi$ is of the form $\varphi= \bar{c} + cz + g$ where $c$ is a scalar and $g \in uH^2 + \overline{uH^2}$. In this case, $A_\varphi$ and $A_z$ commute. \label{AY+tT} \end{theorem} \begin{proof} This proof depends heavily on results from Sarason's paper \cite{Sarason}. Recall from Theorem 3.1 in \cite{Sarason} that $A_\varphi = A_\psi$ if and only if $\varphi - \psi$ is in $uH^2 + \overline{uH^2}$. If $\varphi= \bar{c} + cz + g$ where $g \in uH^2 + \overline{uH^2}$, then $A_\varphi = A_{\bar c + cz}$. Thus, $A_\varphi$ is the compression of the analytic Toeplitz operator $T_{\bar c + cz}$ to the co-invariant subspace $\mathcal{K}_u^2$, so that $A_\varphi^* = T_{\bar c + cz}^*|_{\mathcal{K}_u^2}$. Hence, $$A_\varphi^* - A_\varphi A_z^* = P_u ( T_{\bar c + cz}^* - T_{\bar c + cz} T_z^*)|_{\mathcal{K}_u^2} = c (I - A_z A_z^*) = D_{A_z^*}^{1/2} c D_{A_z^*}^{1/2}.$$ So, $(A_\varphi^*, A_z^*)$ is in the Agler-Young class. Also, $D_{A_z^*} = {k_0^u} \otimes {k_0^u}$ from Lemma 2.4 of \cite{Sarason}. Thus, $A_\varphi^* - A_\varphi A_z^* = c k_0^u \otimes k_0^u$. Now, a computation shows that $$A_\varphi - A_\varphi^*A_z = C(A_\varphi^* - A_\varphi A_z^*)C = \bar c \tilde{k_0^u} \otimes \tilde{k_0^u} = D_{A_z}^{1/2} \bar c D_{A_z}^{1/2}.$$ Hence, $(A_\varphi, A_z)$ is in the Agler-Young class. The converse implication is more subtle. Let $(A_\varphi, A_z)$ be in the Agler-Young class. Write $\varphi$ as $\bar g + zh$ where $g$ and $h$ are from $H^2$. Decompose $g$ as the direct sum of two functions, one coming from $uH^2$ and the other from $\mathcal{K}_u^2$.
Do the same for $h$. Then, invoke Theorem 3.1 in \cite{Sarason} to conclude that $g$ and $h$ can be taken to be in $\mathcal{K}_u^2$ without loss of generality. Let $f = g - h$. Then $f$ belongs to $\mathcal{K}_u^2$. Now, putting $\varphi_1 = \bar h + zh$, we get $\varphi = \bar g + zh = (\bar h + zh) + \bar f$, so that \begin{align*}A_\varphi^* - A_\varphi A_z^* &= (A_{\varphi_1}^* - A_{\varphi_1} A_z^*) + (A_{\bar f}^* - A_{\bar f} A_z^*) \\ &= (A_{\varphi_1}^* - A_{\varphi_1} A_z^*) + (A_f - A_{\bar f}A_{\bar z}) = (A_{\varphi_1}^* - A_{\varphi_1} A_z^*) + A_{f - \overline{zf}}.\end{align*} This sum is easier to analyze because of the special form of $\varphi_1$: \begin{align*} A_{\varphi_1}^* - A_{\varphi_1} A_z^* &= A_{\bar h + zh}^* - A_{\bar h + zh} A_z^* \\ &= A_h + A_{z\bar h} - A_{z\bar h} - A_hA_zA_z^* \\ &= A_h(I - A_zA_z^*) = A_h(k_0^u \otimes k_0^u) = P_u(hk_0^u)\otimes k_0^u = h\otimes k_0^u\end{align*} because $k_0^u = 1 - \overline{u(0)}u$ and $h$ belongs to $\mathcal{K}_u^2$. Thus, $A_{\varphi}^* - A_{\varphi} A_z^*$ is the sum of a rank one operator and a truncated Toeplitz operator. On the other hand, it is also a rank one operator, as the following argument shows. The pair $(A_\varphi, A_z)$ satisfies the fundamental equation. Moreover, $D_{A_z}$ is the rank one operator $\tilde{k_0^u} \otimes \tilde{k_0^u}$ by Lemma 2.4 in \cite{Sarason}. Hence, $$A_\varphi - A_\varphi^*A_z = \bar{d_1} \tilde{k_0^u} \otimes \tilde{k_0^u}$$ for some scalar $d_1$. Conjugating both sides with $C$, we get $$A_\varphi^* - A_\varphi A_z^* = d_1 ({k_0^u} \otimes {k_0^u}).$$ As the arguments above show, the truncated Toeplitz operator $A_{f - \overline{zf}}$ is of rank one. But the only rank one truncated Toeplitz operator it could be is a scalar multiple of $\widetilde{k_0^u} \otimes k_0^u$, see Theorem 5.1 of \cite{Sarason}.
Hence, $$h \otimes k_0^u + d(\widetilde{k_0^u} \otimes k_0^u) = d_1(k_0^u \otimes k_0^u)$$ for some scalar $d$. In other words, \begin{equation} \label{h} h = d_1 k_0^u - d\widetilde{k_0^u}. \end{equation} We now need to rewrite the symbol of the rank one truncated Toeplitz operator $\widetilde{k_0^u} \otimes k_0^u$, viz., $u\bar z$ (see page 502 in \cite{Sarason}), in a certain way. We write the symbol as $$u \bar z = (u - u(0))\bar z + u(0) \bar z = \widetilde{k_0^u} + u(0) \bar z = \widetilde{k_0^u} + u(0)\overline{z(k_0^u + \overline{u(0)}u)} = \widetilde{k_0^u} + u(0) \overline{zk_0^u} + u(0)^2 \overline{zu}.$$ Of the three terms, the last one does not contribute to the truncated Toeplitz operator because it belongs to $\overline{uH^2}$. Thus, $$A_{f - \overline{zf}} = d(\widetilde{k_0^u} \otimes k_0^u) = A_{d\widetilde{k_0^u} + u(0)d\overline{zk_0^u}},$$ implying that the truncated Toeplitz operator for the symbol $(f - d\widetilde{k_0^u}) - \overline{z(f + \overline{u(0)d}k_0^u)}$ is $0$. Sarason characterized the symbols that produce the zero truncated Toeplitz operator. According to the Corollary on page 499 of \cite{Sarason}, we get \begin{equation} \label{twoequations} f - d\widetilde{k_0^u} = a k_0^u \text{ and } P_u (z(f + \overline{u(0)d}k_0^u)) = \bar a k_0^u \end{equation} for some scalar $a$. From the first of the equations above, we get that \begin{equation} \label{f} f = ak_0^u + d \widetilde{k_0^u}.\end{equation} Summing equations \eqref{h} and \eqref{f}, we get that \begin{equation} \label{g} g = f + h = ak_0^u + d_1 k_0^u. \end{equation} Since $\varphi = \bar g + zh$, we get from equations \eqref{g} and \eqref{h} that \begin{equation}\label{phiform} \varphi = \overline{ak_0^u + d_1 k_0^u} + z(d_1k_0^u - d\widetilde{k_0^u}) = (\overline{d_1 k_0^u} + z(d_1k_0^u)) + (\overline{ak_0^u} - dz\widetilde{k_0^u}).\end{equation} Let $\varphi_2 = \overline{ak_0^u} - dz\widetilde{k_0^u}$. We shall now show that this symbol gives the zero truncated Toeplitz operator.
To that end, we need to analyze the second equation in \eqref{twoequations}. Decompose the vector $f + \overline{u(0)d} k_0^u$ in $\mathcal{K}_u^2$ as the direct sum of a vector in the span of $\widetilde{k_0^u}$ and one that is orthogonal to $\widetilde{k_0^u}$, viz., $f + \overline{u(0)d} k_0^u = c \widetilde{k_0^u} + v_\perp$ where $c$ is a scalar and $v_\perp$ is orthogonal to $\widetilde{k_0^u}$. Then, $zv_\perp$ is in $\mathcal{K}_u^2$ (page 512 of \cite{Sarason}) and \begin{equation} \label{P-uzetc} P_u(z \widetilde{k_0^u}) = P_u(u - u(0)) = -u(0)P_u(1) = -u(0) k_0^u. \end{equation} Thus we get $$ \bar a k_0^u = P_u (z(f + \overline{u(0)d}k_0^u)) = P_u(cz\widetilde{k_0^u}) + P_u(zv_\perp) = -cu(0) k_0^u + zv_\perp, $$ implying that $zv_\perp = (\bar a + c u(0)) k_0^u$. But $zv_\perp$ vanishes at the origin, and since $k_0^u$ is the reproducing kernel of $\mathcal{K}_u^2$ at $0$, the vector $zv_\perp$ is orthogonal to $k_0^u$. Hence, \begin{equation} \label{aandc} \bar a + cu(0) = 0, \end{equation} and therefore $zv_\perp = 0$, i.e., $v_\perp = 0$. This shows that $f + \overline{u(0)d} k_0^u = c\widetilde{k_0^u}$. Recalling the value of $f$ from \eqref{f}, we get that \begin{align} ak_0^u + d \widetilde{k_0^u} + \overline{u(0)d} k_0^u &= c\widetilde{k_0^u} \nonumber \\ \text{Or, } (-\overline{cu(0)} + \overline{du(0)})k_0^u &= (c-d) \widetilde{k_0^u}. \label{ceqorneqd} \end{align} This brings us to two cases. \vspace*{5mm} \textsf{Case - I} In this case, $c=d$. By \eqref{aandc}, we have $a = - \overline{du(0)}$. With this value of $a$ and using \eqref{P-uzetc}, we get $$P_u(dz\widetilde{k_0^u}) = d P_u(z\widetilde{k_0^u}) = -du(0) k_0^u = \bar a k_0^u.$$ Thus, $$A_{\varphi_2} = A_{\overline{ak_0^u} - dz \widetilde{k_0^u}} = A_{\overline{ak_0^u} - P_u (dz \widetilde{k_0^u})} = A_{\overline{ak_0^u} - \overline{a} k_0^u} = 0.$$ Hence finally $$A_\varphi = A_{\overline{d_1k_0^u} + zd_1 k_0^u} = A_{\overline{d_1} + zd_1}$$ using the formula $k_0^u = 1 - \overline{u(0)} u$.
Thus $\varphi = \overline{d_1} + zd_1 + g$ for some $g$ in $uH^2 + \overline{uH^2}$. \vspace*{5mm} \textsf{Case - II} In this case, $c \neq d$. In view of \eqref{ceqorneqd}, this means that $\widetilde{k_0^u}$ is a scalar multiple of $k_0^u$, which can happen only if $\mathcal{K}_u^2$ is one-dimensional. In the one-dimensional case, the result is easily verified directly. \end{proof} \section{Commutativity} The agenda for this section is to point out that many well-studied commuting operator tuples arising from complex geometry are in the Agler-Young class. \subsection{$\Gamma_n$-contractions and Tetrablock contractions} The {\em{tetrablock}} is defined by $$ E = \{\underline{x}=(x_1,x_2,x_3)\in \mathbb{C}^3: 1-x_1z-x_2w+x_3zw \neq 0 \text{ whenever }|z| < 1\text{ and }|w| < 1 \}. $$ This is also a polynomially convex domain. A commuting triple of operators $(S_1,S_2,S_3)$ on a Hilbert space $\mathcal{H}$ is called a tetrablock contraction if $\bar{E}$ is a spectral set for it. These were introduced in \cite{BhTetra}. \begin{lem} A $\Gamma_n$-contraction is in $AY_n$ and a tetrablock contraction is in $AY_3$. \end{lem} \begin{proof} That a tetrablock contraction is in $AY_3$ was proved in \cite{BhTetra}, and the proof for a $\Gamma_n$-contraction is similar, with minor modifications to suit the needs. It boils down to choosing a particular family of holomorphic functions that leads to the hereditary polynomials $h_{\alpha,i}$ mentioned before. \end{proof} This circle of ideas has gained a lot of recent attention, see \cite{SS}, \cite{APal}, \cite{SPalAug2017}. In particular, the lemma above is mentioned in \cite{SPalAug2017} and is proved in \cite{APal}. \subsection{$\Gamma_n$-isometries and Tetrablock isometries} The distinguished boundary $b\Gamma_n$ of the symmetrized polydisc is the symmetrized torus, see Theorem 2.4 of \cite{SS}.
An $n$-tuple of commuting normal operators with joint spectrum contained in $b\Gamma_n$ is called a $\Gamma_n$-unitary, and the restriction of a $\Gamma_n$-unitary to a common invariant subspace is called a $\Gamma_n$-isometry. Appealing to Theorem 4.12 of \cite{SS}, we get the following. \begin{lem} Up to a unitary conjugation, a commuting tuple of bounded operators $\underline{S}=(S_1, S_2, \ldots ,S_n)$ is a $\Gamma_n$-isometry if and only if $\underline{S}$ is an Agler-Young isometry such that $$\left(\frac{n-1}{n}S_1, \frac{n-2}{n}S_2, \ldots ,\frac{1}{n}S_{n-1}\right)$$ is a $\Gamma_{n-1}$-contraction. \end{lem} The description of a tetrablock isometry is simpler. An element $\underline{x} = (x_1, x_2, x_3)$ of $\mathbb C^3$ is a member of the distinguished boundary $bE$ of the tetrablock $E$ if and only if $x_1 = \bar{x}_2 x_3$, $|x_3| = 1$ and $|x_2| \le 1$. A commuting triple $\underline{N}=(N_1,N_2,N_3)$ of normal operators is called a tetrablock unitary if its joint spectrum is contained in $bE$. A tetrablock isometry is the restriction of a tetrablock unitary to a common invariant subspace. Tetrablock isometries were characterized in Theorem 5.7 of \cite{BhTetra}. In the language of Agler-Young isometries, a commuting triple of bounded operators $(S_1, S_2, S_3)$ is a tetrablock isometry if and only if it is an Agler-Young isometry and all the $S_i$ are contractions. \subsection{Commuting dilation} It is well-known that a commuting Agler-Young tuple need not have a commuting Agler-Young dilation, see \cite{SPalCounterexample}. We end by noting that a commuting Agler-Young isometric dilation does exist under an additional condition. This \emph{constrained} dilation has been observed in \cite{BhTetra}, \cite{APal} and \cite{SPalAug2017}. \begin{lem} Let $\underline{S} = (S_1, S_2, \ldots ,S_n)$ be in the Agler-Young class. Suppose, moreover, that the $S_i$ commute with each other and the fundamental operator tuple $X_1, X_2, \ldots , X_{n-1}$ of $\underline{S}$ satisfies \eqref{simple}.
Then $\underline{S}$ has a commuting Agler-Young isometric dilation. \end{lem} \begin{proof} Suppose the fundamental operator tuple satisfies the given conditions. Then the Agler-Young isometric dilation $\underline{V}^{\bl X} = (V_1^{\bl X}, V_2^{\bl X}, \ldots ,V_{n-1}^{\bl X}, V_n)$ constructed in the Dilation Theorem is a commuting tuple. This is a consequence of two computations: one showing that $V_i^{\bl X}$ commutes with $V_j^{\bl X}$ for $i,j =1,2, \ldots ,n-1$, and another showing that $V_i^{\bl X}$ commutes with $V_n$ for $i=1,2, \ldots ,n-1$. Although the computations are not trivial, they are similar to ones already available in the literature on the commutative theory, see for example the proof of Theorem 6.1 in \cite{BhTetra}, and hence we do not repeat them here. \end{proof} This lemma immediately implies the known dilation theorems for $\Gamma$-contractions and tetrablock contractions.
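As a concrete illustration of the truncated Toeplitz computations in Theorem \ref{AY+tT}, one can carry out a numerical sanity check in the simplest finite-dimensional model space. The sketch below (in Python with NumPy, not part of the paper itself) takes $u(z) = z^N$, so that $\mathcal{K}_u^2$ is spanned by the monomials $1, z, \ldots, z^{N-1}$, the compressed shift $A_z$ is the nilpotent Jordan block, $k_0^u = 1$ and $\widetilde{k_0^u} = z^{N-1}$; these matrix identifications are our own assumptions for the illustration. For the symbol $\varphi = \bar c + cz$, it verifies both fundamental equations with the rank one defects $c\,(k_0^u \otimes k_0^u)$ and $\bar c\,(\widetilde{k_0^u} \otimes \widetilde{k_0^u})$, as well as the commutativity of $A_\varphi$ with $A_z$.

```python
import numpy as np

# Model space K_u^2 for u(z) = z^N, identified with C^N via the
# monomial basis {1, z, ..., z^{N-1}}.  The compressed shift A_z is
# then the N x N nilpotent Jordan block, k_0^u = e_0 and the
# conjugated kernel (k_0^u)~ = e_{N-1}.  (Assumed identifications.)
N = 6
S = np.zeros((N, N), dtype=complex)
for j in range(N - 1):
    S[j + 1, j] = 1.0               # A_z: e_j -> e_{j+1}, e_{N-1} -> 0

c = 0.7 - 0.3j                      # arbitrary scalar in the symbol
A = np.conj(c) * np.eye(N) + c * S  # A_phi for phi(z) = conj(c) + c z

e0 = np.zeros(N); e0[0] = 1.0       # k_0^u
eN = np.zeros(N); eN[-1] = 1.0      # (k_0^u)~

# First fundamental equation:  A_phi^* - A_phi A_z^* = c (k_0^u x k_0^u)
lhs1 = A.conj().T - A @ S.conj().T
rhs1 = c * np.outer(e0, e0)

# Second fundamental equation: A_phi - A_phi^* A_z = conj(c) ((k_0^u)~ x (k_0^u)~)
lhs2 = A - A.conj().T @ S
rhs2 = np.conj(c) * np.outer(eN, eN)

print(np.allclose(lhs1, rhs1))      # True
print(np.allclose(lhs2, rhs2))      # True
print(np.allclose(A @ S, S @ A))    # True: A_phi and A_z commute
```

The check agrees with the rank one defect formulas $D_{A_z^*} = k_0^u \otimes k_0^u$ and $D_{A_z} = \widetilde{k_0^u} \otimes \widetilde{k_0^u}$ used in the proof; for $N = 1$ it also covers the one-dimensional situation of Case II.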